The Economics of the Data-driven Economy and the Demand for Antitrust

VOLUME 8

ISSUE 4

January 15, 2026

Antitrust traces its beginnings to the first Gilded Age, in the late nineteenth and early twentieth centuries, when increasing returns due to economies of scale in the heyday of industrialization enabled the rise of the superstar firms of the age, concentrating wealth in industrial centres and putting the top hats on the tycoons who ran the firms. After a hiatus during the war years, antitrust activity picked up in earnest in the 1960s. By then, however, the demand for antitrust had faded — as reflected in the rhetorical question posed by Richard Hofstadter, “Whatever happened to the antitrust movement?”1

What did happen? One factor was the exhaustion of economies of scale as the postwar economy scaled up. The maturing industrial economy was characterized by constant returns to scale and a constant share of national income flowing to labour (the so-called Kaldor facts).2 Moreover, due to successive rounds of multilateral trade liberalization under the GATT (General Agreement on Tariffs and Trade), and innovations in transport and telecommunications,3 markets had become global in scale, exhausting any remaining economies of scale at the national level. Indeed, by 1985, globalization was far enough advanced to inspire a book titled The Global Factory.4 By then, the transition from the industrial economy to the knowledge-based economy was already underway.5 In this new economy, economic rents (supernormal profits above the cost of capital) accrued not to scalable manufacturing (which now operated under fiercely competitive market conditions) but to protected intellectual property (IP): patents, copyrights and industrial designs at the product development stage, and branding at the marketing stage, as brought out by Stan Shih’s “smiling curve.”6

The gilding was off the industrial economy and domestic manufacturers were coming to the halls of government, hat now in hand, seeking protection from foreign competition. It was the age of another instrument that emerged in the first Gilded Age — antidumping.7 As demand for antidumping rose, demand for antitrust waned. The postwar rise in the annual number of antitrust cases peaked in the late 1970s and then reversed.

Antitrust is again in vogue; its long winter has ended. And it is not difficult to see why. The revival of demand for antitrust is coincident with the advent of a new Gilded Age — this time in the context of an economy built on intangible assets — IP and (later, increasingly) data. Now, as then, it is concern over the societal income disparities and the overweening influence of larger-than-life CEOs of superstar firms8 that drives political activism; technocratic concerns about consumer welfare and conditions of competition in product markets, while hardly irrelevant for economic policy, are secondary and largely overshadowed as drivers.

The societal and political drivers of the rebirth of the antitrust movement are perhaps best exemplified by the refusal of Mark Zuckerberg and Sheryl Sandberg to appear before an international committee of Parliamentarians.9 Jim Balsillie presciently (given the subsequent rise of the “broligarchy”10) argued that their refusal asserted a political status:

Technology is disrupting governance and if left unchecked could render liberal democracy obsolete. By displacing the print and broadcast media in influencing public opinion, technology is becoming the new Fourth Estate. In our system of checks and balances, this makes technology co-equal with the executive, the legislature, and the judiciary. When this new Fourth Estate declines to appear before this Committee — as Silicon Valley executives are currently doing — it is symbolically asserting this aspirational co-equal status. But it is asserting this status and claiming its privileges without the traditions, disciplines, legitimacy or transparency that checked the power of the traditional Fourth Estate.11

The concerns are not uniquely American. While then-US President Joe Biden’s appointment of Lina Khan to chair the Federal Trade Commission in 2021 was interpreted as a move to rein in big tech, the European Union’s Margrethe Vestager had already achieved star status as a big tech trust buster, and China had also moved to take big tech CEOs down a notch, most notably with the disappearance for some months of Jack Ma, the star CEO of Alibaba.

Maurice E. Stucke and Ariel Ezrachi describe four historical cycles for antitrust, the fourth being the period of decline from peak activism in the late 1970s to 2010, a period which coincides with the era of the knowledge-based economy.12 The fifth cycle in this parsing, one of rising attention to antitrust, then coincides with the rise of the data-driven economy in the leading technological economies. This leads us to look at the economic and technological conditions of this new economy.

The Economic and Technological Conditions of the New Gilded Age

The start of the data-driven economy can be dated to about 2010. Its emergence was enabled by a series of connected technological breakthroughs in the late 2000s, namely: the development of deep learning neural nets by Geoffrey Hinton in 2006; the release of the iPhone by Apple in 2007, ushering in the mobile era, which generated massive amounts of data streaming into the cloud; and the application of GPUs (graphics processing units) to power neural nets by Andrew Ng’s team at Stanford in 2009. In 2010, Eric Schmidt, then CEO of Google, speaking at a tech conference in Barcelona, conveyed the sense of a new economic age dawning. Referring to mobile computing and mobile data networks, he stated, “these networks are now so pervasive, we can literally know everything if we want to. What people are doing, what people care about, information that’s monitored, we can literally know it, if we want to and if people want us to know it.”13

These capabilities are exciting and highly profitable for firms such as Google, but they pose a problem for competition policy, since the data-driven economy has a number of characteristics that individually can lead to market failure and that collectively result in a “winner takes most” commercial environment, one that is hostile to competitive markets and concentrates economic power in the hands of the superstar firms that emerge as winners. These characteristics include:

  • steep economies of scale, which emerge because of the investment costs to capture, classify and curate data (see, for example, Google’s massive server farms) and to successfully monetize it;
  • powerful economies of scope due to the increasing value of data as it is cross-referenced through relational databases;
  • network externalities in many use cases, including two-sided markets that are prone to “tipping” and the emergence of superstar firms; and
  • irreducible information asymmetry, which can be thought of as an industrial-strength “sixth sense” with all the evolutionary advantages this implies for those who possess it.14
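The “tipping” dynamic noted in the third bullet can be illustrated with a toy adoption model (a hypothetical sketch invented for exposition, not an empirical claim): when each new user’s platform choice is reinforced superlinearly by existing market share, a small early lead snowballs into near-monopoly, whereas sublinear reinforcement keeps the market split.

```python
import random

def simulate_adoption(n_users=10_000, n_platforms=3, alpha=1.5, seed=7):
    """Toy model of platform tipping. Each arriving user picks a platform
    with probability proportional to (current user count) ** alpha.
    alpha > 1 means network effects reinforce the leader superlinearly;
    alpha < 1 dampens them. Returns final market shares, largest first."""
    rng = random.Random(seed)
    counts = [1] * n_platforms  # each platform seeded with one user
    for _ in range(n_users):
        weights = [c ** alpha for c in counts]
        winner = rng.choices(range(n_platforms), weights=weights)[0]
        counts[winner] += 1
    total = sum(counts)
    return sorted((c / total for c in counts), reverse=True)

tipped = simulate_adoption(alpha=1.5)  # superlinear: market tips to a leader
split = simulate_adoption(alpha=0.5)   # sublinear: market stays divided
```

With superlinear reinforcement the leading platform ends up with the overwhelming share of users; with sublinear reinforcement shares stay close to one-third each. Nothing in the model depends on platform quality: the asymmetry arises purely from the feedback loop.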

The combination of these factors makes the data-driven economy extraordinarily difficult to regulate, both in the technical sense of creating efficient strategies to address the plethora of negative externalities to which these factors can potentially give rise, and in the political economy sense, since even a small fraction of the enormous economic rents captured by big tech, deployed through “lobbynomics,”15 is sufficient to ensure regulatory capture and the promulgation of regulatory regimes favourable for these superstar firms.16

However, even if the political will could be mustered to grapple with said negative externalities, a number of daunting challenges must be overcome: the elusive nature of data as an economic resource; the business model of this economy, which is built on exploiting the characteristics enumerated above that drive market failure; and the geopolitics of the data-driven economy. Let’s review these in turn.

The Nature of Data

Previous productive assets that defined their era — land in the agrarian age, the machinery of mass production in the industrial age, and traditional IP in the knowledge-based economy — were reducible to basic units that could be bought and sold in markets with invoices and receipts that reflect and establish market value. Markets in that sense are reducible to transactions. This is not the case with data. For the most part, data is captured rather than acquired in market transactions.

Moreover, traditional productive assets came with property rights — i.e., there was ownership. But “ownership” becomes problematic with datafication. For example, Google’s Bard (now rebranded as Gemini) read nearly everything on the internet in creating a model of what language looks like.17 So did OpenAI’s ChatGPT. In that sense, the often-made analogy to oil breaks down: two oil companies cannot exploit the same barrel of oil, but two companies can exploit the same barrel of data.

At the same time, relational data cannot be owned, even in principle. There is an inherent duality to transactional data, in the sense that there are at least two sides to any transaction. In modern digital commercial contexts, the number of parties privy to transactions proliferates because numerous intermediaries participate in administering them, from the e-commerce companies and search engines that connect buyers and sellers, to the financial entities that record the financial particulars, to the telecommunications carriers that transmit the digital information. But, paradoxically, while such data is not a private good in the sense that exclusive ownership can be assigned, neither is it a public good, accessible by all. Neither the private good nor the public good framing works for data.

Further, the value of data is not intrinsic to the individual “datums” that comprise it. An individual’s data has minimal measurable value. When combined with massive numbers of other data points, the assembled data has enormous value — but the value lies in the patterns they reveal, and those patterns cannot be traced back to the individual data points to assign the latter a value.

To be sure, in the case of two-sided markets, one side of which involves a zero-price good, a value can be assigned to the data that is acquired based on the value of the “free” good or service that is provided in implicit exchange for the data. However, this approach fails to capture the value of the economic rents that data generate.

Similarly, some insights into the value of data can be obtained from secondary market transactions that involve curated databases or merger and acquisitions activity involving data-rich firms. However, the large pools of data that truly define the data-driven economy (i.e., those assembled by the superstar firms) are not traded. They are akin in this sense to the “dark pools” of capital in equity markets that allow private exchange without influencing market prices through transparent bids. This lack of transparency of the proprietary data assets of the major digital firms in turn makes it virtually impossible to regulate them effectively.

These are unprecedented conundrums for a productive asset that serves as the essential capital asset for a market economy. Simply put, data does not have characteristics around which market frameworks can be built. Ultimately, at the present moment, the value of data must be inferred indirectly — for example, on the basis of enterprise market valuation.18

An Economy Built on a Source of Market Failure

A second critical feature of big data is that the scale is large relative to the global economy, let alone individual economies or firms. Insofar as data is used to train artificial intelligence (AI), the technology is still at a stage where more data means better AI. For example, recent experience with the improvements of large language models (LLMs) suggests that the larger the data set, the better the trained AI is at understanding context, interpreting out-of-sample data, capturing outliers, and handling nuances and variations in language. Accordingly, there is a voracious appetite for more data.

  • Data is now measured in zettabytes per year at the global level and is growing at about 40 percent annually, doubling roughly every two years. A zettabyte is 10²¹ bytes, or a trillion gigabytes. Put another way, the amount of data assembled (in bytes) by 2025 was about 165 times the number of stars in the observable universe.19 The figures are truly astronomical.
  • Computer chips training AI systems now have as many as four trillion transistors.20
  • AI models now routinely feature one trillion or more parameters; specialized systems have been developed to handle models in the 100+ trillion range (so-called brain-scale AI systems).21
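The orders of magnitude in the first bullet are easy to verify with a back-of-the-envelope check (the 40 percent growth figure is taken from the text above, not derived here):

```python
import math

ZETTABYTE = 10 ** 21  # bytes
GIGABYTE = 10 ** 9    # bytes

# A zettabyte is a trillion gigabytes
print(ZETTABYTE // GIGABYTE)  # 1000000000000, i.e., 10^12

# At 40% annual growth, the data stock doubles when 1.4^t = 2,
# i.e., t = ln(2) / ln(1.4)
doubling_time = math.log(2) / math.log(1.4)
print(round(doubling_time, 2))  # about 2.06 years
```

At a 40 percent compound growth rate, then, the global data stock doubles in just over two years, which is what makes the astronomical comparisons possible.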

There is a direct connection between this property of data and the superstar firm phenomenon that is characteristic of the data-driven economy. In effect, there is a data Matthew Principle that drives runaway market concentration: the more data the market leader has, the better the AI models that can be trained, the better the quality of inferences extracted, the stronger the network effects in inducing users to come on board, the greater the data edge, and so on.22 This self-reinforcing loop continuously widens the information asymmetry between entities with extensive data and those without.
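The self-reinforcing loop just described can be caricatured in a few lines of deterministic code (an illustrative sketch; the functional forms and parameters are invented for exposition, with model quality simply proxied by data stock):

```python
def data_flywheel(data=(11.0, 10.0), rounds=30, gamma=2.0, users_per_round=100.0):
    """Two-firm caricature of the data Matthew Principle. Each round,
    new users join each firm in proportion to (data stock) ** gamma,
    and every new user contributes one unit of data. gamma > 1 encodes
    the compounding advantage: better data -> better models -> more
    users -> more data. Returns the leader's data share, round by round."""
    a, b = data  # firm A starts with a mere 10% head start in data
    shares = [a / (a + b)]
    for _ in range(rounds):
        wa, wb = a ** gamma, b ** gamma
        new_a = users_per_round * wa / (wa + wb)  # A's slice of new users
        a += new_a
        b += users_per_round - new_a
        shares.append(a / (a + b))
    return shares

shares = data_flywheel()
```

Starting from a 10 percent head start, the leader’s share of total data rises every round and never falls back: the feedback loop widens the gap rather than closing it, which is the information-asymmetry dynamic described above.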

The scale issue goes beyond data. The ecosystem that produces the cutting-edge devices that power the data-driven economy features an international fellowship of Leviathans, each of which has overwhelming dominance in its niche. These include TSMC (Taiwan Semiconductor Manufacturing Company), which dominates advanced chip production; the Dutch firm ASML, which is the only company in the world producing the most advanced EUV (extreme ultraviolet) lithography machines; the United Kingdom’s ARM, which dominates chip design; a handful of US firms that dominate the EDA (electronic design automation) tools industry; and Nvidia, which dominates production of the GPUs used for AI training. One ASML lithography machine is assembled from more than 100,000 components, 85 percent of which are made by ASML suppliers; shipping one machine is said to require 20 trucks, 40 freight containers, and three 747 jets.23 This is not an economy for Lilliputians, and that is a problem for a branch of economic policy that sets the competitive market as its ideal.

Beyond scale, there is a still deeper problem with the data-driven economy: information asymmetry. Exploitation of information asymmetry for commercial advantage is at the heart of the business model of the data-driven economy (recall the quotation from Eric Schmidt above). Yet information asymmetry is a source of market failure. This is the original sin of the data-driven economy: it is built on a market failure. In turn, this raises unprecedented conundrums since information asymmetry is something that regulation seeks to correct in order to have a level playing field. At the same time, information asymmetry underpins the enterprise value of data-driven companies, which of course no one wants to eliminate. Standard Oil could be broken up into 34 independent companies without destroying the value of a barrel of oil. That cannot be done with data companies. And, while markets have historically developed services to correct for information asymmetry,24 it is hard to see new companies forming to address the information asymmetry advantage of the incumbent superstar firms.

Geopolitics and Geoeconomics

The data-driven economy features economies of scale at the global level; accordingly, firms that dominate their sectors capture global economic rents, which in turn enriches their home countries. Moreover, dominance of the nexus of big data, machine learning and AI has become perhaps the most intensely contested area in the geopolitical rivalry between the United States and China. This makes the hyperscaler data firms — the only ones in which antitrust would be interested — prime candidates for government support rather than discipline. And, of course, the hyperscalers have hardly been loath to emphasize their role as national champions and the alignment of their interests with the national interest. For their part, politicians have equally not hesitated to come to the defence of the hyperscalers when the latter are threatened with foreign regulation or taxation.

The Demand for Antitrust Is There — But Is It the Right Tool?

The demand for antitrust activism has risen because of the concentration of wealth in the data-driven economy, and the influence that concentrated wealth exerts over markets, the media and the political system. These concerns are redolent of those in the first Gilded Age, and they are well-founded: such concentration of power is unhealthy for democracy.

To mitigate the potential harms that flow from this concentration of market power, numerous proposals have been generated, including: requiring interoperability of platforms to reduce network effects; lowering the threshold for scrutiny of proposed mergers that might have anti-competitive effects (e.g., acquisitions aimed at preventing a new firm from doing to a Meta/Facebook what Facebook did to MySpace); requiring greater transparency on proprietary data and algorithms to enable verification of compliance with standards set for consumer protection; establishing a new digital authority to strengthen capacity and expertise in dealing with the major digital firms; and various others.

However, it is telling that none of these suggestions would materially change the nature of the data-driven economy, even if adopted.25 As noted by the Stigler Committee on Digital Platforms, “The winner-take-all characteristics of many digital markets suggest that even if all the proposed policies are implemented, in some markets we would still find ourselves in a world of few companies (sometimes just one) with outsized market and political power.”26

The first Gilded Age eventually gave way to a highly competitive industrial economy, as global markets grew larger than the scale economies in production of most products. In other words, the size of global markets for most products came to substantially exceed the minimum efficient scale of production of those products, meaning that numerous firms could compete at the global level, given openness to trade. Economic and technological conditions changed and ushered in a new type of economy, that of the global factory and the “made in the world” production system.

In our second Gilded Age, the global economy is not likely to grow large enough, fast enough, to exhaust the economies of scale and scope in the data-driven economy. However, technological change may intervene in a positive way. In the last several years, the scaling up of AI systems has led to breakthroughs that suggest we have entered yet another new economic age — one in which generative AI and increasingly agentic AI become the essential capital assets and define the characteristics of the economy.

In this regard, it is useful to draw a distinction between the first phase of the data-driven economy — that of big data plus predictive AI, an era dominated by global platform firms — and a second phase of big data plus generative/agentic AI. The latter can be dated to breakthroughs in 2022, when ChatGPT and a number of text-to-image models were released. It is noteworthy that this emerging era of “machine knowledge capital” is giving rise to business models that enable firms at every scale to engage in AI development, alongside the major platform firms. Cloud-based “Software as a Service” infrastructure is available to scale-up startups, and Nvidia markets affordable desktop workstations pre-installed with AI development frameworks such as TensorFlow and PyTorch. This is reminiscent of the dawn of the knowledge-based economy, when IBM PCs loaded with CAD-CAM software enabled the new economy to develop on a distributed basis, showering wealth on small college towns around the world.

To summarize, market scale turned out to be the answer to the social and political issues raised by scale economies at the industrial plant level in the first Gilded Age. Technological change may yet be the answer to the social and political issues raised by the economies of scale and scope, network externalities and information asymmetry at the heart of the second Gilded Age. However, the early years of the new age of machine knowledge capital have not provided hopeful evidence to that effect. The proliferation of firms developing AI-enabled applications has not translated into a commensurate diffusion of economic or political power. Control over the foundational layers of machine knowledge capital — training data at scale, frontier models, compute infrastructure and algorithmic distribution channels — remains highly concentrated. At the same time, the political economy is evolving in a direction antithetical to antitrust enforcement, something perhaps best illustrated by the on-again, off-again alliance between US President Donald Trump and prospective tech trillionaire Elon Musk. Antitrust activism as a response to the excesses of the data-driven economy has been subordinated to geopolitics, and increasingly framed as an ideological intrusion (such as the Trump administration’s attacks on EU regulation of US hyperscaler firms) rather than a constitutional safeguard of market democracy. The concentration of economic power through rent capture has been compounded by the concentration of narrative authority, as starkly highlighted by Elon Musk’s acquisition of Twitter and the transformation of the latter into a platform to attack “wokeness.”27 In short, the gilding on the new Gilded Age is thickening, not eroding. Antitrust is an instrument of the state — and the very nature of the state is now an open question in the face of transformational technological change.28


Acknowledgement

An earlier version of this article appeared in Competition Policy International (CPI) Antitrust Chronicle, February 16, 2024, https://www.pymnts.com/cpi-posts/the-economics-of-the-data-driven-economy-and-the-demand-for-antitrust/. It has been updated to reflect subsequent developments.

Endnotes

1. Richard Hofstadter, “Whatever Happened to the Antitrust Movement?” in The Paranoid Style in American Politics and Other Essays (Harvard University Press, 1996), 188.

2. Nicholas Kaldor, “Capital Accumulation and Economic Growth,” in The Theory of Capital, eds. F.A. Lutz and D.C. Hague (International Economic Association Series, 1961), 177–222.

3. These include containerization, the rise of long-distance air freight for high-value cargo, steeply falling telecommunications costs, the development of information technology to coordinate distributed production systems, and the use of radio-frequency identification to track inventory. The economic literature on the multinational firm really begins in the 1970s, with seminal works such as Richard E. Caves, “International Corporations: The Industrial Economics of Foreign Investment,” Economica 38 (1971): 1–27, and John H. Dunning, “Trade, Location of Economic Activity and the MNE: A Search for an Eclectic Approach,” in The International Allocation of Economic Activity, eds. Bertil Ohlin, Per Ove Hesselborn and Per Magnus Wijkman (Macmillan, 1977).

4. Joseph Grunwald and Kenneth Flamm, The Global Factory: Foreign Assembly in International Trade (Brookings Institution Press, 1985).

5. The transition to the knowledge-based economy can be related to several key technological and policy developments at the beginning of the 1980s: awareness of the importance of innovation and IP for the US economy as signalled by the passage of the Bayh-Dole Act of 1980, in the last days of the Carter administration; and the release of the IBM personal computer (1981), which, when coupled with the release of CAD-CAM PC software by John Walker’s Autodesk (1982), enabled widespread application of computers in industrial design. These innovations enabled the industrialization of R&D, accelerating the pace of innovation, as evidenced by the steep upturn in the pace of US patenting, and patenting under the Patent Cooperation Treaty in the early 1980s.

6. For an illustration of the smiling curve in this context, see Dan Ciuriak, “Economic Rents and the Contours of Conflict in the Data-driven Economy,” CIGI Paper No. 245, July 27, 2020, https://www.cigionline.org/publications/economic-rents-and-contours-conflict-data-driven-economy. For a rigorous exposition of the concept, see Richard Baldwin and Tadashi Ito, “The Smile Curve: Evolving Sources of Value Added in Manufacturing,” Canadian Journal of Economics 54, no. 4 (2021): 1842–1880.

7. The first antidumping statute was enacted by Canada in 1904. For a discussion of the context, see Dan Ciuriak, “Anti-dumping at 100 Years and Counting: A Canadian Perspective,” World Economy 28, no. 5 (2005): 641–649. However, as with antitrust, there was a long period of limited activity — indeed, prior to 1980, there is virtually no economic literature on antidumping; subsequently it became a cottage industry. What changed after 1980 was not an increase in the number of complaints from US industry, but rather administrative and legislative changes that made it much more likely for a complaint to receive a full review rather than being dismissed at an early stage. In this regard, see Douglas Irwin, “The Rise of U.S. Antidumping Action in Historical Perspective,” NBER Working Paper No. 10582 (2004). In other words, industry demand for protection has been more or less a constant; political and social demand for protection, however, rose steeply after 1980.

8. On the rise of charismatic CEOs, see Nitin Nohria, “When Charismatic CEOs Are an Asset — and When They’re a Liability,” Harvard Business Review, December 1, 2023, https://hbr.org/2023/12/when-charismatic-ceos-are-an-asset-and-when-theyre-a-liability.

9. The picture of the empty chairs reserved for Zuckerberg and Sandberg at the committee hearing in Ottawa in May 2019 is symbolic. See Jesse Hirsh, “What You Need to Know about the Grand Committee on Big Data, Privacy and Democracy,” CIGI, May 29, 2019, https://www.cigionline.org/articles/what-you-need-know-about-grand-committee-big-data-privacy-and-democracy.

10. Jens Hillebrand Pohl, “America’s Hostile Takeover: How the Tech Broligarchy Created the First Post-State Superpower — and Why It No Longer Needs Its Allies,” March 23, 2025, https://www.linkedin.com/pulse/americas-hostile-takeover-how-tech-broligarchy-created-pohl-umtjf; Carole Cadwalladr, “A message to America: we are not your enemy,” How to Survive the Broligarchy (Substack), December 24, 2025, https://broligarchy.substack.com/p/a-message-to-america-we-are-not-your.

11. Jim Balsillie, prepared remarks before the International Grand Committee on Big Data, Privacy and Democracy, Ottawa, May 28, 2019, published as Jim Balsillie, “Data is not the new oil — it’s the new plutonium,” National Post, May 28, 2019.

12. Maurice E. Stucke and Ariel Ezrachi, “The Rise, Fall, and Rebirth of the U.S. Antitrust Movement,” Harvard Business Review, December 15, 2017. See also Ciuriak, “Economic Rents and the Contours of Conflict.”

13. Eric Schmidt speaking at the Mobile World Congress, Barcelona, 2010, https://www.youtube.com/watch?v=ClkQA2Lb_iE.

14. Dan Ciuriak, “The Economics of Data: Implications for the Data-driven Economy,” CIGI, March 5, 2018, https://www.cigionline.org/articles/economics-data-implications-data-driven-economy/.

15. Benjamin Hav Mitra-Kahn, “Copyright, Evidence and Lobbynomics: The World after the UK’s Hargreaves Review,” Review of Economic Research on Copyright Issues 8, no. 2 (2011): 65.

16. This plays out at the international level for the big tech firms. On big tech lobbying the Canadian government, see Murad Hemmadi, “Foreign tech giants have more than tripled their lobbying since Justin Trudeau became prime minister,” The Logic, July 8, 2019, https://thelogic.co/news/exclusive/foreign-tech-giants-have-more-than-tripled-their-lobbying-since-justin-trudeau-became-prime-minister/?gift=031edb6dd6bb8ecf9493d3aa10b2e697; on the deleterious consequences of this lobbying for Canadian policy in the knowledge-based and data-driven economy, see Jim Balsillie, “Canada is pushing its tech sector into a race to the bottom,” The Globe and Mail, September 21, 2019.

17. Scott Pelley, “Is artificial intelligence advancing too quickly? What AI leaders at Google say,” CBS News, April 16, 2023, https://www.cbsnews.com/news/google-artificial-intelligence-future-60-minutes-transcript-2023-04-16/.

18. Dan Ciuriak, “Enterprise Value and the Value of Data,” paper prepared for the IARIW-CIGI Conference on the Valuation of Data, Waterloo, Ontario, Canada, November 2–3, 2023. We are seeing the emergence of market pricing of data used for training AI on a per-token basis at the APIs (application programming interfaces) of LLMs. These prices provide a rental value for data that might support an inference as to the value of data assets. However, this will be a non-trivial exercise since the prices vary widely (see, e.g., OpenAI’s pricing schedule at https://openai.com/api/pricing/).

19. These are rough estimates, of course. For estimates of the volume of data assembled as of 2025, see Kevin Bartley, “Big data statistics: How much data is there in the world?” Rivery, May 28, 2025, https://rivery.io/blog/big-data-statistics-how-much-data-is-there-in-the-world/. For a comparison of the amount of data measured in bytes to the number of stars in the observable universe, see Appen, “Data Trends in the Zettabyte Era,” November 6, 2019, https://www.appen.com/blog/data-trends-in-the-zettabyte-era.

20. The Cerebras Wafer Scale Engine 3 (WSE 3) was placed on the market in March 2024. See https://www.cerebras.ai/press-release/cerebras-announces-third-generation-wafer-scale-engine.

21. See Chris Young, “A supercomputer in China ran a ‘brain-scale’ AI model with 174 trillion parameters: ‘Rivaling the number of synapses in the brain,’” Interesting Engineering, June 22, 2022, https://interestingengineering.com/innovation/supercomputer-brain-scale-174-trillion-parameters.

22. Data scarcity persists, notwithstanding breakthroughs such as DeepSeek’s in training approaches that radically reduce the cost of training LLMs. See Bertin Martens, “How DeepSeek has changed artificial intelligence and what it means for Europe,” Bruegel, Policy Brief 12/25, March 20, 2025, https://www.bruegel.org/policy-brief/how-deepseek-has-changed-artificial-intelligence-and-what-it-means-europe. See also: SAP, “What Happens When LLM’s Run Out Of Useful Data?” Forbes, June 25, 2025, https://www.forbes.com/sites/sap/2025/06/25/what-happens-when-llms-run-out-of-useful-data/.

23. ASML, “A backgrounder on Extreme Ultraviolet (EUV) lithography,” January 18, 2017, https://medium.com/@ASMLcompany/a-backgrounder-on-extreme-ultraviolet-euv-lithography-a5fccb8e99f4.

24. Historically, market mechanisms have emerged to address information asymmetries in the normal course of business, as in the market for lemons described by George Akerlof, “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism,” Quarterly Journal of Economics 84, no. 3 (1970): 488.

25. There is no evidence that adoption of these countermeasures is to be expected — viz. the resounding silence in the face of Nvidia acquiring rival Groq in a deal valued at around US$20 billion. Formally, the Nvidia-Groq transaction involved a non-exclusive licensing agreement; however, the price paid is many multiples of a typical licensing agreement and has been described as an “acqui-hire” under which Groq founder and CEO Jonathan Ross, President Sunny Madra and a significant portion of Groq’s engineering team join Nvidia, while GroqCloud remains independent. See Maja Popovska, “NVIDIA Strikes $20 Billion Deal with AI Chip Start Groq,” Blog, TestDevLab, December 30, 2025, https://www.testdevlab.com/blog/nvidia-groq-ai-chip-deal. While the deal may have been structured as it was to avoid scrutiny from the US antitrust authority, the Federal Trade Commission, the reality is it went ahead.

26. Stigler Committee on Digital Platforms, Final Report, Stigler Center for the Study of the Economy and the State, University of Chicago, September 16, 2019, 21.

27. Rose See, “‘X is the voice of the people’: How Elon Musk styles X as a newsroom,” Humanities and Social Sciences Communications 12 (2025), Article 1921, 1–11, https://doi.org/10.1057/s41599-025-06187-8.

28. Pohl, “America’s Hostile Takeover.”

ISSN 2563-674X

doi:10.51644/bap84