Challenging Privatization in Governance by AI: A Caution for the Future of AI Governance

VOLUME 7

ISSUE 6

September 15, 2025

Privatization is increasingly driving the uptake of generative artificial intelligence (AI) across various sectors. The drive for AI adoption, whether in the name of innovation or the economy, has dominated mainstream news. However, there is less public awareness of generative AI’s devastating impacts on labour and the environment. Whether through self-regulation or government regulation, Big Tech shapes the direction of governance of AI, which is increasingly evolving into governance by AI and the automation of jobs. “The future of work is already here,” states a 2025 report from Human Rights Watch. “Workers around the world are increasingly hired, compensated, disciplined, and fired by algorithms.”1

Introduction

Generative AI consists of AI systems such as large language models (LLMs),2 but most of the AI in use for decades is not generative. AI systems across the board, from LLMs to machine learning and deep learning, encode racial and gender biases.3 Extensive scholarly research has highlighted the politics of AI and algorithmic harms along lines of race, gender and class.4, 5, 6 However, the hype created by AI companies and the push toward using generative AI are unprecedented. Generative AI tools represent a massive opportunity for companies, which harvest user data as business intelligence to increase profit and the accuracy of surveillance.7 This extractive model does not stop at Big Tech companies such as Alphabet (Google), Amazon, Apple, Meta and Microsoft, but extends to third-party companies you may never have heard of.8

This paper argues that the harms of generative AI are exacerbated through privatization, partnerships with Big Tech companies, and the lack of adequate oversight by government and third-party regulators. It discusses three issues at the intersection of generative AI and privatization: the exploitation of workers and the environment; defence partnerships and human rights abuses; and the concentration of power through foreign acquisition.

Governance by AI at the Expense of Workers and the Environment

The role of labour is essential in AI development,9, 10 but largely ignored by Big Tech. The notion of algorithmic colonialism reveals that “while traditional colonialism is often spearheaded by political and government forces, digital colonialism is driven by corporate tech monopolies — both of which are in search of wealth accumulation,”11 creating a new form of infrastructural dependency. In the same vein, the notion of data colonialism explicates a new surveillance economy that “paves the way for a new stage of capitalism whose outlines we only glimpse: the capitalization of life without limit.” This exacerbates labour exploitation in the Global South,12 where the predatory behaviour of Big Tech is reinforced by hegemonic AI,13 founded on extractive relationships between the private sector and users and, more broadly, on data extractivism.14 Labour is outsourced to workers globally “through digital labour platforms (crowdsourcing) or business process outsourcing companies”15 that are controlled by the tech and AI industry.

Exploitative labour practices in the Global South AI industry16 reveal the power relations of algorithmic colonialism.17 For instance, to improve the quality of the giant datasets Silicon Valley uses to train AI models, “ghost workers”18 do content moderation work “behind the screens,”19 such as manually tagging images or deciding what text is appropriate for platform users to read.

Content moderators are contracted by tech intermediaries such as Sama, a self-proclaimed ethical AI company whose clients include Meta.20 In 2022, Daniel Motaung, a former Facebook (Meta) content moderator and whistleblower, filed a lawsuit against Meta and Sama in Kenya.21 Content moderation work is extremely harmful: moderators are required to sift through trauma-inducing content (both images and text) for hours at a time, at extremely low wages and with no well-being support. Even though content moderators are the reason AI improves in quality, they reap none of its benefits.

In addition to severely underpaying content moderators, Sama engaged in forced labour, human trafficking and union busting.22, 23 While labour exploitation in Africa is normalized through content moderation work, Motaung’s lawsuit is seminal because it is one of the first against Meta outside the West.24 Since 2022, multiple class action lawsuits have been filed against Meta, OpenAI and TikTok.25 In April 2025, the Global Trade Union Alliance of Content Moderators was launched26 to protect workers from algorithmic, wage and labour exploitation, practices that have also come under scrutiny in North America.27

More AI use often means more harmful human labour and, increasingly, fewer jobs for humans. There is substantial evidence of labour displacement at technology companies such as Meta, Tesla, X and Google, all of which have laid off thousands of employees as a result of automation and generative AI integration.28 Reviews and critiques of the development of LLMs have revealed their contribution to racism through the exploitation of workers, communities and their environments,29 as well as the planetary costs of AI.30 A new study shows that generating a single 100-word email with ChatGPT consumes the equivalent of 519 millilitres of bottled water.31 This is just one example of the shocking amount of fresh water consumed by generative AI: cooling data centres requires large volumes of it. Companies that host data centres ignore the rights of locals even in drought conditions, as experienced in communities across Spain32 and the United States.33 Recently, scholars have begun to address the nexus between generative AI and environmental degradation, exploring the structural and ecological consequences of AI systems, especially in the context of resource extraction, energy consumption and environmental injustice.34
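
To make the scale of that figure concrete, consider a rough back-of-the-envelope calculation, assuming (as a simplification) that the per-email figure scales linearly:

\[
1{,}000{,}000 \text{ emails} \times 0.519\ \text{L} = 519{,}000\ \text{L} \approx 519\ \text{m}^3
\]

That is roughly one-fifth of an Olympic-sized swimming pool (about 2,500 m³) of fresh water for every million such emails.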

Currently, there is a private proposal to build a $70-billion data centre in the province of Alberta. For the Sturgeon Lake Cree Nation, however, the development is seen as an infringement on its treaty rights.35 This should cause concern, especially as new data centre deals are being formalized in Canada in spite of pushback from First Nations communities.

Thus, more generative AI often means more harmful jobs for humans and more devastating impacts on the environment. Paradoxically, and irrespective of the harms these technologies cause, investment in them continues to grow. The creation of ever larger and more powerful LLMs highlights the goal of Big Tech’s business model: to “further shift power relations in their favour.”36

Governance by AI in the Name of Partnerships at Any Cost

Big Tech promotes governance by AI enabled by LLMs. Regulators have expressed a range of concerns regarding the potential consequences of generative AI applications, including dependency and social withdrawal, unhealthy attitudes to relationships, heightened risk of sexual abuse, compounded risk of bullying, and financial exploitation.37 Several LLMs developed by technology companies have come under scrutiny for producing harmful outputs, including facilitating inappropriate interactions with minors38 and encouraging self-harm and suicide, in some instances leading to death.39

Notably, many of these companies have increasingly sidelined the human-in-the-loop (HITL), thereby reducing opportunities for meaningful oversight and accountability. At the same time, major technology firms have deepened their involvement in defence contracts, raising serious human rights concerns. To remain competitive, emerging AI companies such as Anthropic and Cohere have adopted similar strategies.40 A particularly illustrative example is Cohere, whose partnerships highlight the broader risks of alignment between emerging AI firms and the defence industry.

Cohere, widely promoted as Canada’s flagship AI innovator, specializes in producing LLMs and has been positioned as the homegrown competitor to foreign AI companies. In December 2024, Innovation, Science and Economic Development Canada (ISED) (legally named the Department of Industry as per the Department of Industry Act)41 announced a CDN$240 million investment in Cohere.42 The government bolsters economic development by expanding technological innovation through investments in start-ups such as Cohere; this public investment resembles others ISED has made to support and develop Canada’s private sector.

Partnering and partnerships are more than new collaborations on projects and public relations opportunities. These partnerships signal not only newly formed loyalties but also technical dependencies and the future privatization of public assets. In December 2024, Cohere quietly launched its partnership with the American defence and data analytics company Palantir Technologies Inc.43 The partnership revealed that Cohere’s AI models are accessible to Palantir’s customers through Palantir’s software, Foundry,44 marketed as the “ontology-powered operating system for the modern enterprise.”45

Palantir has been widely criticized by human rights and civil society organizations (CSOs) for its numerous human rights abuses. In the United States, the government’s Immigration and Customs Enforcement (ICE) agency has used Palantir systems to conduct mass raids, resulting in the separation of families and the prolonged detention and deportation of caregivers.46 These actions have caused significant harm to children and communities. Concerns intensified when the United Nations World Food Programme partnered with Palantir, potentially granting it access to sensitive refugee data.47 Privacy International questioned the compatibility of such a partnership with the humanitarian principle of “Do No Harm,” emphasizing the need for stronger safeguards.48 In the United Kingdom, Palantir has been lobbying for the privatization of healthcare for several years, facing much pushback from healthcare providers and CSOs.49 In Gaza, there are reasonable grounds to conclude that Palantir’s AI platform facilitates “real-time battlefield data integration for automated decision-making”50 in the genocide of Palestinians.51 It is worrying that the Canadian government has not set parameters for the partnerships that government-supported companies may enter into.

Governance by AI to Concentrate Power via Familiar Acquisition Patterns

Not all partnerships with private firms need be inherently harmful. Harm prevention entails ethical safeguards and independent oversight, outside of self-regulatory policy mechanisms. When investing in tech start-ups such as Cohere, however, the federal government offers insufficient guardrails to protect Canadian sovereignty over intellectual property (IP) from foreign acquisition, especially by Silicon Valley companies. This is a valid concern that has played out over the years, especially in technology innovation. Some AI start-ups that previously benefitted from ISED investment were later acquired by American companies. For example, North Inc., a Canadian company, was acquired by Google.52 More recently, Element AI, a company heavily funded by the Canadian and Quebec governments, was acquired by ServiceNow, a developer of employee monitoring software, a category of AI that has come under scrutiny for automating hiring, firing and promotion decisions.53

These acquisitions took place at low prices, most notably Element AI’s sale for CDN$230 million, in which “founders saw value mostly wiped out.”54 The coverage was brief, the Canadian tech policy community moved on, and new start-ups were created. Element AI employees went to work for ServiceNow or Big Tech companies, or created new start-ups that partner with insurance companies to sell AI assurance to tech companies, essentially underwriting AI “hallucinations” (inaccurate or fabricated results).55

Although many AI technologies promise to relieve workers of mundane tasks, thereby enabling greater creativity in daily work, it is, ironically, the industries where creativity is essential — the arts, writing, film production and so on — that are most at risk. They are especially vulnerable because many generative AI platforms, such as ChatGPT, violate the IP rights of creators. Often, the expansion of AI does not enhance human labour but rather displaces it, leading to reduced employment opportunities in fields historically driven by human creativity.

Privatization in AI often means abiding by the rules crafted by Big Tech and competing with business models that incentivize high-risk behaviour, such as illegal scraping of online content. These practices have triggered numerous lawsuits, placing corporate reputations and financial stability at risk. These lawsuits have not only targeted Big Tech companies such as Google and Microsoft, but have also extended to newer AI companies such as OpenAI, Anthropic56 and Canada’s Cohere. In Cohere’s case, the company is currently facing multiple lawsuits57 from national and international media outlets over alleged IP violations.58

Policy Recommendations

Cohere is the latest major start-up to receive significant government investment, following examples such as Element AI and North Inc. As I have argued elsewhere, these cases illustrate recurring trends in shaping business models designed to attract major industry players.59 The Silicon Valley acquisition of these firms should serve as a cautionary tale for the future of AI governance. Crucially, governments must confront and redirect the shift from governance of AI to governance by AI, a shift that risks accelerating the automation of public sector roles and diminishing democratic oversight.

This can be done by advocating for changes in current policy:

Require conditional investment in tech and AI companies.

Public investments and partnerships play a central role in the procurement, deployment, development and use of AI. However, time and time again, AI companies have been shown to exploit gaps in public sector technology procurement processes and the absence of regulatory mechanisms; the US-based facial recognition company Clearview AI is a case in point.60 Of particular concern is the widespread use of unregulated free software trials, which currently fall outside formal procurement accountability mechanisms. A policy should be created requiring the proactive disclosure of all AI-related software trials used by government, along with the companies behind them. This could take the form of a publicly accessible registry.
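
As a purely illustrative sketch, one entry in such a registry might record fields along these lines (the schema and field names below are assumptions for illustration, not an existing government standard):

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AITrialDisclosure:
    """Hypothetical entry in a public registry of AI software trials."""
    department: str                   # public body running the trial
    vendor: str                       # company supplying the trial software
    product: str                      # name of the AI system being trialled
    intended_use: str                 # plain-language description of the use case
    trial_start: date
    trial_end: Optional[date] = None  # None while the trial is ongoing
    data_categories: list[str] = field(default_factory=list)  # data the trial touches
    procurement_reference: Optional[str] = None  # None for free trials outside procurement

# Example: a free trial that would otherwise escape formal procurement records.
example = AITrialDisclosure(
    department="Example Department",
    vendor="Example AI Inc.",
    product="ExampleVision",
    intended_use="Pilot of image-matching software",
    trial_start=date(2025, 1, 15),
    data_categories=["publicly scraped images"],
)
```

Making entries of this kind proactively disclosed and publicly searchable would bring free trials into the same accountability frame as formal procurement.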

There is a pressing need to strengthen the criteria guiding government investments in technology companies to ensure that public authorities retain decision-making power over the allocation of financial support. Such measures will help deter disproportionate private sector influence over public governance processes. In particular, they will help safeguard against the encroachment of private interests into the development of government policy to regulate AI. In addition to establishing clear investment criteria, governments should prioritize clarity in investment programs for industry, as well as the details of AI initiatives, such as “precise strategic goals, success measures, and performance targets for their initiatives”61 and their long-term impacts and outcomes.

Create a list of companies linked to human rights abuses.

The government should draw on its human rights policies to develop a list of companies directly linked to or complicit in human rights abuses. Such a list would serve as a critical tool for aligning AI regulation with the principles enshrined in the Canadian Charter of Rights and Freedoms.62 This mechanism would support the creation of more meaningful regulatory frameworks, ones that not only prevent future abuses but also enable access to redress and accountability for affected communities.

This also implies that the federal government should remove Palantir from its AI Source List.63 Officially known as the Pre-qualified AI Suppliers List, the AI Source List was created by Public Services and Procurement Canada and the Treasury Board of Canada Secretariat to fast-track companies that can be contracted by federal departments and agencies. Given Palantir’s documented involvement in human rights controversies, its continued inclusion undermines the ethical integrity of public procurement and contradicts Canada’s stated commitments to human rights–based governance. Demonstrating greater consistency and transparency would help improve public trust and chart a path toward accountability.64

Make algorithmic impact assessments (AIAs) mandatory for all companies operating in Canada.

The mandated use of AIAs was among the first steps the federal government took toward AI accountability. The AIA sits under Canada’s Directive on Automated Decision-Making, which should be formalized into law to signal to the public the government’s commitment to transparency and accountability. Transparency commitments could include offering publicly accessible information about the use and procurement of AI technologies and disseminating updates through official websites and social media channels. Accountability commitments could require that the implementation of AI systems be accompanied by specific and ongoing monitoring requirements, particularly in cases where the system’s use, scope or societal impact evolves over time.
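
A minimal sketch of what such ongoing monitoring could look like in practice follows; the re-assessment triggers are illustrative assumptions rather than the Directive’s actual requirements (the AIA does, however, score systems into impact levels I to IV):

```python
from dataclasses import dataclass

@dataclass
class AIASnapshot:
    """Illustrative summary of an algorithmic impact assessment at one point in time."""
    system_name: str
    impact_level: int   # AIA impact levels I-IV, represented here as 1-4
    scope: str          # where, and on whom, the system is used
    last_assessed: str  # ISO date of the most recent assessment

def reassessment_required(previous: AIASnapshot, current: AIASnapshot) -> bool:
    """Flag a system for re-assessment when its use, scope or impact evolves.

    The trigger conditions below are assumptions for illustration only.
    """
    return (
        current.impact_level != previous.impact_level
        or current.scope != previous.scope
    )
```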

Include independent oversight of the ethical frameworks companies produce.

It is essential to recognize that companies, regardless of how many ethical principles they endorse, should not be assumed trustworthy by default. As the director of Amnesty International’s Silicon Valley Initiative has aptly stated, “Palantir touts its ethical commitments, saying it will never work with regimes that abuse human rights abroad. This is deeply ironic, given the company’s willingness stateside to work directly with ICE, which has used its technology to execute harmful policies that target migrants and asylum-seekers.”65 This contradiction underscores the limits of self-regulation and highlights the urgent need for independent oversight and enforceable accountability mechanisms in the AI sector. Consequently, new international coordination mechanisms are paramount for addressing transnational data and labour exploitation. The International Labour Organization could incorporate the Global Trade Union Alliance of Content Moderators into its governance frameworks for fair work related to AI labour, including the AI ethics guidelines spearheaded by UNESCO.66 Equally important are sector-specific contributions such as the United Nations Environment Programme’s digital environmental governance agenda to advance climate justice.67 Combining global governance frameworks with national and subnational regulations specific to AI can offer protection from, and prevention of, AI harms.

Encourage an open culture of discussing and learning from public investment failures.

It is necessary to critically examine the federal government’s financial support for companies like Cohere and their partnerships, instead of championing Cohere as Canada’s “good” AI company, set apart from its competitors.68, 69 Such narratives risk obscuring problematic corporate relationships and positioning these companies as exceptional or above scrutiny. A national discussion on these acquisitions is urgently needed, one that interrogates the implications of foreign acquisitions and partnerships before putting forth new models to protect small and medium-sized Canadian enterprises.70 Without this reckoning, policy frameworks may inadvertently reinforce the very dynamics they seek to challenge.

Governments, companies and citizens need to be less insular and less narrowly focused on Canada. Instead, they need to recognize that the limitations and harms of AI vary across sectors and nations. While AI systems are often developed globally, their impacts are deeply local. As such, there is a pressing need for concerted efforts to proactively safeguard employees and contractors in AI supply chains.71 Regulatory frameworks must embed workers’ rights as a core principle, both within Canada and internationally, including protections related to data handling, surveillance and labour conditions in the development and deployment of AI systems.72

Despite the claims made by investors, tech companies and governments, generative AI systems such as chatbots have not produced measurable economic growth. Anders Humlum and Emilie Vestergaard asserted that their findings “challenge narratives of imminent labour market transformation due to Generative AI.”73 In fact, so-called generative AI remains prone to hallucinations, frequently generating inaccurate or fabricated information. Paid-for generative AI tools may be even more likely to deliver confidently incorrect outputs.74

The choice for federal governments should be clear, according to Asmelash Teka Hadgu and Timnit Gebru: “We cannot afford to replace the critical tasks of federal workers with models that completely make stuff up. There is no substitute for the expertise of federal workers handling sensitive information and working on life-critical sectors ranging from health care to immigration.”75 It is essential to resist both the privatization of AI governance and the broader shift toward governance by AI, as these are developments that threaten to erode public sector integrity, accountability and democratic oversight.

Endnotes

1. Brian Stauffer, “The Gig Trap: Algorithmic, wage and labor exploitation in platform work in the US,” Human Rights Watch, May 12, 2025, https://www.hrw.org/report/2025/05/12/gig-trap/algorithmic-wage-and-labor-exploitation-platform-work-us.

2. LLMs are generative AI technologies that include OpenAI’s ChatGPT and Google’s Bard.

3. Joy Buolamwini and Timnit Gebru, “Gender shades: Intersectional accuracy disparities in commercial gender classification,” Proceedings of Machine Learning Research 81 (2018) https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf.

4. Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York University Press, 2018).

5. Ruha Benjamin, Race after Technology: Abolitionist Tools for the New Jim Code (Polity Press, 2019).

6. Virginia Eubanks, Automating Inequality: How High-tech Tools Profile, Police, and Punish the Poor (St. Martin’s Press, 2018).

7. Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (Public Affairs, 2019).

8. Sarah Lamdan, Data Cartels: The Companies that Control and Monopolize Our Information (Stanford University Press, 2022).

9. Abeba Birhane, “Algorithmic colonization of Africa,” SCRIPTed 17 (2020): 389.

10. Julian Posada, “From development to deployment: For a comprehensive approach to ethics of AI and labour,” Association of Internet Researchers Selected Papers of Internet Research (2020).

11. Birhane, “Algorithmic colonization.”

12. Posada, “From development to deployment.”

13. Paola Ricaurte, “Ethics for the majority world: AI and the question of violence at scale,” Media, Culture & Society 44, no. 4 (2022): 726–745.

14. Paola Ricaurte, “Data epistemologies, the coloniality of power, and resistance,” Television & New Media 20, no. 4 (2019): 350–365.

15. Milagros Miceli, Julian Posada and Tianling Yang, “Studying up machine learning data: Why talk about bias when we mean power?” Proceedings of the Association for Computing Machinery on Human-Computer Interaction 6, issue GROUP, article 34 (2022): 1–14.

16. The Global South AI industry also includes Eastern Europe, where countries such as Bulgaria are used to outsource content moderation work.

17. Birhane, “Algorithmic colonization.”

18. Mary L. Gray and Siddharth Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (Harper Business, 2019).

19. Sarah T. Roberts, Behind the Screen: Content Moderation in the Shadows (Yale University Press, 2019).

20. Raksha Vasudevan, “A lawsuit against Meta shows the emptiness of social enterprises,” Wired, July 28, 2022, https://www.wired.com/story/social-enterprise-technology-africa/.

21. Billy Perrigo, “Under fire, Facebook’s ‘ethical’ outsourcing partner quits content moderation work,” TIME, January 10, 2023, https://time.com/6246018/facebook-sama-quits-content-moderation/.

22. Mukanzi Musanga, “Meta’s attempt to dodge trial in Kenya thwarted by judge,” OpenDemocracy, February 7, 2023, https://www.opendemocracy.net/en/5050/meta-lawsuit-kenya-facebook-content-moderator-sued-daniel-motaung/.

23. Nanjira Sambuli, “Facebook lawsuit in Kenya could affect Big Tech accountability across Africa,” OpenDemocracy, August 12, 2022, https://www.opendemocracy.net/en/5050/facebook-meta-sama-daniel-motaung-court-kenya/.

24. Sambuli, “Facebook lawsuit.”

25. Foxglove Legal, “Big win in Kenya! 185 former Facebook content moderators to take their case against mass firing to trial after courts slap down Meta appeal,” September 23, 2024, https://www.foxglove.org.uk/2024/09/23/facebook-content-moderators-kenya-meta-appeal/.

26. Jess Weatherbed, “Content moderators are organizing against Big Tech,” The Verge, April 30, 2025, https://www.theverge.com/news/658566/content-moderator-union-alliance-meta-tiktok-google.

27. Stauffer, “The Gig Trap.”

28. Cody Corrall, Alyssa Stringer and Kate Park, “A comprehensive list of 2025 tech layoffs,” Tech Crunch, July 31, 2025, https://techcrunch.com/2025/07/31/tech-layoffs-2025-list/.

29. Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell, “On the dangers of stochastic parrots: Can language models be too big?,” Proceedings of the 2021 Association for Computing Machinery Conference on Fairness, Accountability, and Transparency (2021), 610–623.

30. Kate Crawford, The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press, 2021).

31. Pengfei Li, Jianyi Yang, Mohammad A. Islam and Shaolei Ren, “Making AI less ‘thirsty’,” Communications of the Association for Computing Machinery 68, no. 7 (2025): 54–61, https://dl.acm.org/doi/full/10.1145/3724499.

32. Clara Hernanz Lizarraga and Olivia Solon, “Thirsty data centers are making hot summers even scarier,” Bloomberg, July 26, 2023, https://www.bloomberg.com/news/articles/2023-07-26/extreme-heat-drought-drive-opposition-to-ai-data-centers.

33. Olivia Solon, “Drought-stricken communities push back against data centers,” NBC News, June 19, 2021, https://www.nbcnews.com/tech/internet/drought-stricken-communities-push-back-against-data-centers-n1271344.

34. Obasesam Okoi, “Artificial Intelligence, the Environment and Resource Conflict,” Balsillie Papers 7, no. 3 (2025), https://balsilliepapers.ca/bsia-paper/artificial-intelligence-the-environment-and-resource-conflict-emerging-challenges-in-global-governance/.

35. Emilie Rubayita, “Alberta First Nation voices ‘grave concern’ over Kevin O’Leary’s proposed $70B AI data centre,” CBC News, January 16, 2025, https://www.cbc.ca/news/canada/edmonton/alberta-first-nation-voices-grave-concern-over-kevin-o-leary-s-proposed-70b-ai-data-centre-1.7431550.

36. Dieuwertje Luitse and Wiebke Denkena, “The great transformer: Examining the role of large language models in the political economy of AI,” Big Data & Society 8, no. 2 (2021): 20539517211047734, 1.

37. Government of Australia, “AI chatbots and companions – risks to children and young people,” Office of the Commissioner of eSafety, February 18, 2025, https://www.esafety.gov.au/newsroom/blogs/ai-chatbots-and-companions-risks-to-children-and-young-people.

38. Jeff Horwitz, “Meta’s ‘digital companions’ will talk sex with users—even children,” Wall Street Journal, April 26, 2025, https://www.wsj.com/tech/ai/meta-ai-chatbots-sex-a25311bf.

39. Blake Montgomery, “Mother says AI chatbot led her son to kill himself in lawsuit against its maker,” The Guardian, October 23, 2024, https://www.theguardian.com/technology/2024/oct/23/character-ai-chatbot-sewell-setzer-death.

40. Rebecca Heilwell, “Palantir, Anthropic, Google deepen ties on government AI use with Claude partnership,” FedScoop, April 17, 2025, https://fedscoop.com/palantir-anthropic-google-government-ai-claude-partnership/.

41. Department of Industry Act, SC 1995, c 1.

42. Department of Finance Canada, “Deputy Prime Minister announces $240 million for Cohere to scale-up AI compute capacity,” December 6, 2024, https://www.canada.ca/en/department-finance/news/2024/12/deputy-prime-minister-announces-240-million-for-cohere-to-scale-up-ai-compute-capacity.html.

43. Charles Rollet, “Cohere is quietly working with Palantir to deploy its AI models,” TechCrunch, December 16, 2024, https://techcrunch.com/2024/12/16/cohere-is-quietly-working-with-palantir-to-deploy-its-ai-models/.

44. Rollet, “Cohere is quietly working.”

45. Palantir Technologies Inc., https://www.palantir.com/platforms/foundry/.

46. Amnesty International, “Failing to do right: The urgent need for Palantir to respect human rights” (2020), https://www.amnesty.org/en/documents/amr51/3124/2020/en/.

47. Responsible Data, “Open letter to WFP re: Palantir agreement,” February 8, 2019, https://responsibledata.io/2019/02/08/open-letter-to-wfp-re-palantir-agreement/.

48. Privacy International, “One of the UN’s largest aid programmes just signed a deal with the CIA-backed data monolith Palantir,” February 12, 2019, https://privacyinternational.org/news-analysis/2712/one-uns-largest-aid-programmes-just-signed-deal-cia-backed-data-monolith.

49. Rhiannon Mihranian Osborne, “NHS England must cancel its contract with Palantir,” BMJ 386 (2024), https://www.bmj.com/content/386/bmj.q1712.

50. United Nations Human Rights Office of the High Commissioner, “From economy of occupation to economy of genocide — Report of the Special Rapporteur on the situation of human rights in the Palestinian territories occupied since 1967,” A/HRC/59/23, June 16, 2025, https://www.ohchr.org/en/documents/country-reports/ahrc5923-economy-occupation-economy-genocide-report-special-rapporteur.

51. Amnesty International, “Amnesty International investigation concludes Israel is committing genocide against Palestinians in Gaza,” December 5, 2024, https://www.amnesty.org/en/latest/news/2024/12/amnesty-international-concludes-israel-is-committing-genocide-against-palestinians-in-gaza/.

52. Ana Brandusescu, “Artificial intelligence policy and funding in Canada: Public investments, private interests,” Centre for Interdisciplinary Research on Montreal, McGill University, March 2021, https://www.mcgill.ca/centre-montreal/files/centre-montreal/aipolicyandfunding_report_updated_mar5.pdf.

53. Patrick Thibodeau, “Lawmakers take aim at employee monitoring software,” Tech Target, September 11, 2024, https://www.techtarget.com/searchhrsoftware/news/366609943/Lawmakers-take-aim-at-employee-monitoring-software.

54. Sean Silcoff, “Element AI sold for $230-million as founders saw value mostly wiped out, document reveals,” The Globe and Mail, December 18, 2020, https://www.theglobeandmail.com/business/article-element-ai-sold-for-230-million-as-founders-saw-value-wiped-out/.

55. Lee Harris and Melissa Heikkilä, “Insurers launch cover for losses caused by AI chatbot errors,” Financial Times, May 11, 2025, https://www.ft.com/content/1d35759f-f2a9-46c4-904b-4a78ccc027df.

56. In its copyright lawsuit, Anthropic has been accused of using an AI-fabricated source: https://www.reuters.com/legal/litigation/anthropic-expert-accused-using-ai-fabricated-source-copyright-case-2025-05-13/.

57. Joe Castaldo, “Canadian AI company Cohere sued by major publishers for copyright violations,” The Globe and Mail, February 13, 2025, https://www.theglobeandmail.com/business/article-canadian-ai-company-cohere-sued-by-major-publishers-for-copyright/.

58. It is too early to tell the impact of IP violations on economic growth, although the UN’s World Intellectual Property Organization (WIPO) has begun these efforts: “The World Intangible Investment Highlights is co-published annually as a part of the WIPO-LBS Partnership on Intangible Assets in the Global Economy. By producing comprehensive and timely data on investment across a broad range of intangible assets—including those not included in official statistics—this partnership aims to bridge critical data gaps and support evidence-based policymaking.” https://www.wipo.int/pressroom/en/articles/2025/article_0005.html

59. Brandusescu, “Artificial intelligence policy and funding.”

60. House of Commons, Facial recognition technology and the growing power of artificial intelligence: Report of the Standing Committee on Access to Information, Privacy and Ethics (October 2022) (Chair: Pat Kelly), https://www.ourcommons.ca/Content/Committee/441/ETHI/Reports/RP11948475/ethirp06/ethirp06-e.pdf.

61. Blair Attard-Frost, Ana Brandusescu and Kelly Lyons, “The governance of artificial intelligence in Canada: Findings and opportunities from a review of 84 AI governance initiatives,” Government Information Quarterly 41, no. 2 (2024): 101929.

62. Canadian Charter of Rights and Freedoms, s 7, Part 1 of the Constitution Act, 1982, being Schedule B to the Canada Act 1982 (UK), 1982, c 11.

63. House of Commons, Facial recognition technology.

64. For examples of public trust in AI, see Jess Reia and Ana Brandusescu, “Artificial intelligence in the city: building civic engagement and public trust” (2022), https://libraopen.lib.virginia.edu/public_view/5t34sj769.

65. Amnesty International, “Palantir Technologies contracts raise human rights concerns before NYSE direct listing,” 2020, https://amnesty.org.nz/palantir-technologies-contracts-raise-human-rights-concerns-nyse-direct-listing.

66. UNESCO, “Ethics of artificial intelligence” (November 2021), https://www.unesco.org/en/artificial-intelligence/recommendation-ethics.

67. United Nations Environment Programme, “Environmental Governance Update,” Digital Environmental Governance Agenda, July 11, 2025, https://www.unep.org/resources/newsletter/environmental-governance-update.

68. Murad Hemmadi, “Cohere has a plan to win the AI race—without burning piles of money,” The Logic, February 18, 2025, https://thelogic.co/news/the-big-read/cohere-strategy-nick-frosst-interview/.

69. Rollet, “Cohere is quietly working.”

70. Witness testimony to the Standing Committee on Industry and Technology re: Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, December 7, 2023, https://www.ourcommons.ca/DocumentViewer/en/44-1/INDU/meeting-102/notice.

71. Blair Attard-Frost and David Gray Widder, “The ethics of AI value chains,” Big Data & Society 12, no. 2 (2025): 20539517251340603.

72. Ana Brandusescu and Renee Sieber, “Missed opportunities in AI regulation: Lessons from Canada’s AI and Data Act,” Data & Policy 7 (2025): e40, https://www.cambridge.org/core/journals/data-and-policy/article/missed-opportunities-in-ai-regulation-lessons-from-canadas-ai-and-data-act/5178DE82B270CD41FA3B7ECFC94BF810.

73. Anders Humlum and Emilie Vestergaard, “Large language models, small labor market effects,” University of Chicago, Becker Friedman Institute for Economics Working Paper 2025-56 (2025), https://www.nber.org/papers/w33777?trk=feed_main-feed-card_feed-article-content.

74. Benj Edwards, “AI search engines cite incorrect news sources at an alarming 60% rate, study says,” Ars Technica, March 13, 2025, https://arstechnica.com/ai/2025/03/ai-search-engines-give-incorrect-answers-at-an-alarming-60-rate-study-says/.

75. Asmelash Teka Hadgu and Timnit Gebru, “Replacing federal workers with chatbots would be a dystopian nightmare,” Scientific American, April 14, 2025, https://www.scientificamerican.com/article/replacing-federal-workers-with-chatbots-would-be-a-dystopian-nightmare/.

ISSN 2563-674X

doi:10.51644/bap76
