Privatization is increasingly driving the uptake of generative artificial intelligence (AI) across various sectors. The drive for AI adoption, whether in the name of innovation or the economy, has dominated mainstream news. However, there is less public awareness of generative AI’s devastating impacts on labour and the environment. Whether through self-regulation or government regulation, Big Tech influences the direction of the governance of AI, which is increasingly evolving into governance by AI and the automation of jobs. “The future of work is already here,” states a 2025 report from Human Rights Watch. “Workers around the world are increasingly hired, compensated, disciplined, and fired by algorithms.”1
Introduction
Generative AI consists of AI systems such as large language models (LLMs),2 but most AI that has been in use for decades is not generative AI. AI of all kinds, from LLMs to machine learning and deep learning, embeds biases of race and gender.3 Extensive scholarly research has highlighted the politics of AI and algorithmic harm in race, gender and class.4, 5, 6 However, the hype created by AI companies and the push toward using generative AI is unprecedented. Generative AI tools represent a massive opportunity for companies: the data they gather serves as business intelligence to increase profit and the accuracy of surveillance.7 This extractive model does not stop at Big Tech companies such as Alphabet (Google), Amazon, Apple, Meta and Microsoft, but extends to third-party companies you may never have heard of.8
This paper argues that the harms of generative AI are exacerbated through privatization, partnerships with Big Tech companies, and the lack of adequate oversight by government and third-party regulators. The paper discusses three issues with generative AI and the realm of privatization: exploitation of workers and the environment; defence partnerships and human rights abuses; and concentration of power through foreign acquisition.
Governance by AI at the Expense of Workers and the Environment
The role of labour is essential in AI development,9, 10 but is largely ignored by Big Tech. The notion of algorithmic colonialism reveals that “while traditional colonialism is often spearheaded by political and government forces, digital colonialism is driven by corporate tech monopolies — both of which are in search of wealth accumulation,”11 creating a new form of infrastructural dependency. In the same vein, the notion of data colonialism explicates a new surveillance economy that “paves the way for a new stage of capitalism whose outlines we only glimpse: the capitalization of life without limit.” Consequently, this exacerbates labour exploitation in the global South,12 where the predatory behaviour of Big Tech is reinforced by hegemonic AI,13 founded on extractive relationships between the private sector and users and, more broadly, on data extractivism.14 Labour is outsourced to workers globally “through digital labour platforms (crowdsourcing) or business process outsourcing companies”15 that are controlled by the tech and AI industry.
Exploitative labour practices in the global South’s AI industry16 reveal the power relations of algorithmic colonialism.17 For instance, to improve the quality of the giant Silicon Valley datasets used in AI training models, “ghost workers”18 do content moderation work “behind the screens,”19 such as manually tagging images or deciding what text is appropriate for platform users to read.
Content moderators are contracted by tech intermediaries such as Sama, a self-proclaimed ethical AI company whose clients include Meta.20 In 2022, Daniel Motaung, a former Facebook (Meta) content moderator and whistleblower, filed a lawsuit against Meta and Sama in Kenya.21 Content moderation work is extremely harmful. Moderators are required to look at and sift through trauma-inducing content (both images and text) for hours at a time, for extremely low wages and with no well-being support. Even though content moderators are the reason AI improves in quality, they do not reap any of its benefits.
In addition to extensively underpaying content moderators, Sama also engaged in forced labour, human trafficking and union busting.22, 23 While labour exploitation in Africa is normalized through content moderation work, Motaung’s lawsuit is seminal because it is one of the first against Meta outside the West.24 Since 2022, multiple class action lawsuits have been filed against Meta, OpenAI and TikTok.25 In April 2025, the Global Trade Union Alliance of Content Moderators was launched26 to protect workers from algorithmic, wage and labour exploitation, practices that have also come under scrutiny in North America.27
More AI use often means more harmful human labour and, increasingly, fewer jobs for humans. There is substantial evidence of labour displacement at technology companies such as Meta, Tesla, X and Google, all of which have laid off thousands of employees as a result of automation and generative AI integration.28 Reviews and critiques of the development of LLMs have revealed the contribution of LLMs to racism through the exploitation of workers, communities and their environments,29 as well as the planetary costs of AI.30 A recent study shows that generating a single 100-word email with ChatGPT is equivalent to using 519 millilitres of bottled water.31 This is just one example of the shocking amount of fresh water consumed when using generative AI: cooling data centres requires large volumes of fresh water. Companies that host data centres ignore the rights of local residents even in drought conditions, as experienced in communities across Spain32 and the United States.33 Recently, scholars have begun to address the nexus between generative AI and environmental degradation, exploring the structural and ecological consequences of AI systems, especially in the context of resource extraction, energy consumption and environmental injustice.34
Currently, there is a private proposal to build a $70-billion data centre in the province of Alberta. For the Sturgeon Lake Cree Nation, however, the development is seen as an infringement on its treaty rights.35 This should cause concern, especially as new data centre deals are being formalized in Canada in spite of pushback from First Nations communities.
Thus, more generative AI often means more harmful jobs for humans and more devastating impacts on the environment. Paradoxically, and irrespective of the harms they cause, investment in these technologies continues to grow. The creation of ever larger and more powerful LLMs highlights Big Tech’s business model goal to “further shift power relations in their favour.”36
Governance by AI in the Name of Partnerships at Any Cost
Big Tech promotes governance by AI enabled by LLMs. Regulators have expressed a range of concerns regarding the potential consequences of generative AI applications, including dependency and social withdrawal, unhealthy attitudes to relationships, heightened risk of sexual abuse, compounded risk of bullying, and financial exploitation.37 Several LLMs developed by technology companies have come under scrutiny for producing harmful outputs, including facilitating inappropriate interactions with minors38 and encouraging self-harm and suicide, in some instances leading to death.39
Notably, many of these companies have increasingly sidelined the human-in-the-loop (HITL), thereby reducing opportunities for meaningful oversight and accountability. At the same time, major technology firms have deepened their involvement in defence contracts, raising serious human rights concerns. To remain competitive, emerging AI companies such as Anthropic and Cohere have adopted similar strategies.40 A particularly illustrative example is Cohere, whose partnerships highlight the broader risks associated with the alignment between emerging AI firms and the defence industry.
Cohere, widely promoted as Canada’s flagship AI innovator, specializes in producing LLMs. It has been positioned as the Canadian competitor to foreign AI companies. In December 2024, Innovation, Science and Economic Development Canada (ISED) (legally named the Department of Industry as per the Department of Industry Act)41 announced its CDN$240 million investment in Cohere.42 The government bolsters economic development by expanding technological innovation through start-up investments such as the one in Cohere. This type of public investment is similar to other investments made by ISED to support and develop Canada’s private sector.
Partnerships are more than new project collaborations and public relations opportunities. They not only signal newly formed loyalties but also create technical dependencies and open the door to the future privatization of public assets. In December 2024, Cohere quietly launched its partnership with the American defence and data analytics company Palantir Technologies Inc.43 The partnership revealed that Cohere’s AI models are accessible to Palantir’s customers through Palantir’s software, Foundry,44 marketed as the “ontology-powered operating system for the modern enterprise.”45
Palantir has been widely criticized by human rights and civil society organizations (CSOs) for its numerous human rights abuses. In the United States, the government’s Immigration and Customs Enforcement (ICE) agency has used Palantir systems to conduct mass raids, resulting in the separation of families and the prolonged detention and deportation of caregivers.46 These actions have caused significant harm to children and communities. Concerns intensified when the United Nations World Food Programme partnered with Palantir, potentially granting it access to sensitive refugee data.47 Privacy International questioned the compatibility of such a partnership with the humanitarian principle of “Do No Harm,” emphasizing the need for stronger safeguards.48 In the United Kingdom, Palantir has been lobbying for the privatization of healthcare for several years, with much pushback from healthcare providers and CSOs.49 In Gaza, there are reasonable grounds to conclude that Palantir’s AI platform facilitates “real-time battlefield data integration for automated decision-making”50 in the genocide of Palestinians.51 It is worrying that the Canadian government has not set parameters for the partnerships that government-supported companies may enter into.
Governance by AI to Concentrate Power via Familiar Acquisition Patterns
Not all partnerships with private firms need be inherently harmful. Harm prevention entails ethical safeguards and independent oversight, outside of self-regulatory policy mechanisms. When investing in tech start-ups such as Cohere, however, the federal government offers insufficient guardrails to protect Canadian sovereignty over intellectual property (IP) against foreign acquisition, especially by Silicon Valley companies. This is a valid concern we have seen play out over the years, especially in technology innovation. Some AI start-ups that previously benefitted from ISED investment were later acquired by American companies. For example, North Inc., a Canadian company, was acquired by Google.52 More recently, Element AI, a company heavily funded by the Canadian and Quebec governments, was acquired by ServiceNow, a developer of employee monitoring software, a category of AI that has come under scrutiny for its automation of hiring, firing and promotion decisions.53
These acquisitions closed at low prices, especially in the case of Element AI at CDN$230 million, as “founders saw value mostly wiped out.”54 The coverage was brief, the Canadian tech policy community moved on, and new start-ups were created. Element AI employees went to work for ServiceNow or Big Tech companies, or created new start-ups that partner with insurance companies to sell AI assurance to tech companies, essentially underwriting AI “hallucinations” (inaccurate or fabricated results).55
Although many AI technologies promise to relieve workers of mundane tasks, thereby enabling greater creativity in daily work, ironically it is the industries where creativity is essential — the arts, writing, film production, and so on — that are most at risk. They are especially vulnerable because many generative AI platforms, such as ChatGPT, violate the IP rights of creators. Often, the expansion of AI does not enhance human labour but rather displaces it, leading to reduced employment opportunities in fields historically driven by human creativity.
Privatization in AI often means abiding by the rules crafted by Big Tech and competing with business models that incentivize high-risk behaviour, such as illegal scraping of online content. These practices have triggered numerous lawsuits, placing corporate reputations and financial stability at risk. These lawsuits have not only targeted Big Tech companies such as Google and Microsoft, but have also extended to newer AI companies such as OpenAI, Anthropic56 and Canada’s Cohere. In Cohere’s case, the company is currently facing multiple lawsuits57 from national and international media outlets over alleged IP violations.58
Policy Recommendations
Cohere is the latest major start-up to receive significant government investment, following examples such as Element AI and North Inc. As I have argued elsewhere, these acquisitions illustrate recurring trends in shaping business models designed to attract major industry players.59 The Silicon Valley acquisition of these firms should serve as a cautionary tale for the future of AI governance. Crucially, it underscores the need to influence and direct the shift from governance of AI to governance by AI, a shift that risks accelerating the automation of public sector roles and diminishing democratic oversight.
This can be done by advocating for changes in current policy:
Require conditional investment in tech and AI companies.
Public investments and partnerships play a central role in the procurement, deployment, development and use of AI. However, time and time again, AI companies such as the US-based facial recognition company Clearview AI60 have been shown to exploit gaps in public sector technology procurement processes and the absence of regulatory mechanisms. Of particular concern is the widespread use of unregulated free software trials, which currently fall outside formal procurement accountability mechanisms. A policy should be created requiring the proactive disclosure of all AI-related software trials used by government, as well as of the companies providing them. This could take the form of a publicly accessible registry.
There is a pressing need to strengthen the criteria guiding government investments in technology companies to ensure that public authorities retain decision-making power over the allocation of financial support. Such measures will help deter disproportionate private sector influence over public governance processes. In particular, they will help safeguard against the encroachment of private interests into the development of government policy to regulate AI. In addition to establishing clear investment criteria, governments should prioritize clarity in investment programs for industry, as well as in the details of AI initiatives, such as “precise strategic goals, success measures, and performance targets for their initiatives”61 and their long-term impacts and outcomes.
Create a list of companies linked to human rights abuses.
The government should draw on its human rights policies to develop a list of companies directly linked to or complicit in human rights abuses. Such a list would serve as a critical tool for aligning AI regulation with the principles enshrined in the Canadian Charter of Rights and Freedoms.62 This mechanism would support the creation of more meaningful regulatory frameworks, ones that not only prevent future abuses but also enable access to redress and accountability for affected communities.
This also implies that the federal government should remove Palantir from its AI Source List.63 Officially known as the Pre-qualified AI Suppliers List, the AI Source List was created by Public Services and Procurement Canada and the Treasury Board of Canada Secretariat to fast-track companies that can be contracted by federal departments and agencies. Given Palantir’s documented involvement in human rights controversies, its continued inclusion undermines the ethical integrity of public procurement and contradicts Canada’s stated commitments to human rights–based governance. Demonstrating greater consistency and transparency would help improve public trust and chart a path toward accountability.64
Make algorithmic impact assessments (AIAs) mandatory for all companies operating in Canada.
The mandated use of AIAs was among the first steps the federal government took toward AI accountability. The AIA sits under Canada’s Directive on Automated Decision-Making, which should be formalized into law to signal to the public the government’s commitment to transparency and accountability. Transparency commitments could include offering publicly accessible information about the use and procurement of AI technologies and disseminating updates through official websites and social media channels. Accountability commitments could require that the implementation of AI systems be accompanied by specific and ongoing monitoring requirements, particularly in cases where a system’s use, scope or societal impact evolves over time.
Include independent oversight of the ethical frameworks companies produce.
It is essential to recognize that companies, regardless of how many ethical principles they endorse, should not be assumed trustworthy by default. As the director of Amnesty International’s Silicon Valley Initiative has aptly stated, “Palantir touts its ethical commitments, saying it will never work with regimes that abuse human rights abroad. This is deeply ironic, given the company’s willingness stateside to work directly with ICE, which has used its technology to execute harmful policies that target migrants and asylum-seekers.”65 This contradiction underscores the limits of self-regulation and highlights the urgent need for independent oversight and enforceable accountability mechanisms in the AI sector. Consequently, new international coordination mechanisms are paramount for addressing transnational data and labour exploitation. The International Labour Organization could incorporate the Global Trade Union Alliance of Content Moderators into its governance frameworks for fair work related to AI labour, including the AI ethics guidelines spearheaded by UNESCO.66 Equally important are sector-specific contributions such as the United Nations Environment Programme’s digital environmental governance agenda to advance climate justice.67 Combining global governance frameworks with national and sub-national regulations specific to AI can offer protection from and prevention of AI harms.
Encourage an open culture of discussing and learning from public investment failures.
It is necessary to critically examine the federal government’s financial support for companies like Cohere and their partnerships, instead of championing Cohere as Canada’s “good” AI company, set apart from its competitors.68, 69 Such narratives risk obscuring problematic corporate relationships and positioning these companies as exceptional or above scrutiny. A national discussion on these acquisitions is urgently needed, one that interrogates the implications of foreign acquisitions and partnerships before putting forth new models to protect small and medium-sized Canadian enterprises.70 Without this reckoning, policy frameworks may inadvertently reinforce the very dynamics they seek to challenge.
Governments, companies and citizens need to be less insular and less narrowly focused on Canada. Instead, they need to recognize and understand that the limitations and harms of AI vary across sectors and nations. While AI systems are often developed globally, their impacts are also deeply local. As such, there is a pressing need for concerted efforts to proactively safeguard employees and contractors in AI supply chains.71 Regulatory frameworks must embed workers’ rights as a core principle, both within Canada and internationally, including protections related to data handling, surveillance and labour conditions in the development and deployment of AI systems.72
Despite the claims made by investors, tech companies and governments, generative AI systems such as chatbots have not produced measurable economic growth. Anders Humlum and Emilie Vestergaard asserted that their findings “challenge narratives of imminent labour market transformation due to Generative AI.”73 In fact, so-called generative AI remains prone to hallucinations, frequently generating inaccurate or fabricated information. Paid-for generative AI tools may be even more likely to deliver confidently incorrect outputs.74
The choice for federal governments should be clear, according to Asmelash Teka Hadgu and Timnit Gebru: “We cannot afford to replace the critical tasks of federal workers with models that completely make stuff up. There is no substitute for the expertise of federal workers handling sensitive information and working on life-critical sectors ranging from health care to immigration.”75 It is essential to resist both the privatization of AI governance and the broader shift toward governance by AI, as these are developments that threaten to erode public sector integrity, accountability and democratic oversight.