Opinion | Why Big Tech is afraid of AI
Big Tech’s call for regulation comes not from a place of altruism but from a need to safeguard its economic interests. Governments need to begin thinking more carefully about the need, objectives, processes, and implications of AI regulation before embarking on a regulatory endeavour.

On May 16, the United States Congress witnessed some unusual scenes. In a three-hour-long hearing, Sam Altman, CEO of OpenAI, the organisation behind ChatGPT, urged the Senate’s privacy, technology, and law subcommittee to regulate AI technologies, going as far as to ask for “licensing” in the field. This is incongruous for two reasons. First, Silicon Valley and the larger American tech industry have historically fought vociferously against any form of regulation or government intervention; innovation and geopolitics have been invoked ad nauseam to argue against meaningful government oversight. Second, the US government itself is unsure of how, if at all, it wants to regulate AI. In fact, during the hearing, some members of Congress seemed more circumspect about regulating AI than Altman himself. What, then, explains this sudden call for regulation?
The answer can be found in an anonymous internal Google memo leaked a few weeks ago. The memo argues that neither Google nor OpenAI has a “moat” when it comes to AI technologies. A “moat”, in business parlance and particularly in the tech industry, is a set of products or services that protects a company’s competitive advantage over its rivals: something that ensures the company remains economically competitive, even dominant. Google’s search engine, for example, is its moat; Amazon’s moat is both its Amazon Web Services business and its e-commerce platform. The lack of a similar moat in AI technologies, the memo goes on to argue, makes the space extremely competitive, with free open-source AI services competing effectively against the proprietary services offered by the likes of Google and OpenAI.
This lack of an economic moat stems from the fact that most AI technologies and services are fundamentally interchangeable. At their core, there is little that separates OpenAI’s ChatGPT from Google’s Bard, and there is unlikely to be any meaningful difference in use cases between Microsoft’s integration of ChatGPT into its search engine and Google’s integration of Bard into its own offerings. Most AI assistants currently available do not offer substantially different user experiences; one writing or travel-planning assistant is much the same as another. Further, with the wide availability of open-source datasets and AI models, along with newer training techniques, one no longer needs concentrated computing and economic power to build a market-ready AI product. While GPT-4 purportedly cost upwards of a hundred million dollars to develop, newer open-source models can potentially be built at a fraction of that cost. Even if free open-source models are nearly, but not entirely, as effective as proprietary ones, companies like OpenAI will find it difficult to convince everyday users to pay for access to their services. Altman himself has stated publicly that new developments in AI will most likely not come from the large, resource-intensive models currently being built by major technology companies.
In such a scenario, government intervention and regulation become the moat for these companies; the Google memo argues as much. If every AI platform, model, and service must adhere to common but strict regulatory standards, developers would need to invest significantly in internal compliance mechanisms to ensure they stay on the right side of the law, pushing up the monetary cost of being in the AI industry. Only a handful of organisations globally, mostly located in the United States and China, would be able to adhere to such regulations effectively, simply by virtue of being able to invest in the necessary compliance mechanisms, which are likely to cost more than the development of the AI models or services themselves. Such a second-order effect of regulation is not new. The European Union’s data protection regime, the GDPR, while widely acknowledged as the global standard for protecting individual citizens’ interests, has come under scrutiny of late for pushing the cost of compliance to a level that only the biggest technology companies can meaningfully bear, dampening innovation and competition in the European market.
This is not to say that such an outcome is necessarily bad. The global airline industry, for example, is a tightly regulated oligopoly. It serves the interests of governments, the flying public, and the industry itself to restrict the market to a few players who can maintain the highest possible standards of safety and efficiency. A free-for-all, with consequently minimal adherence to safety standards, would only result in disaster. AI, given the scope for its misuse and the considerable harm it could cause, may well be subject to some form of regulatory scrutiny in every major economy. However, whether or not the nascent AI industry can be compared to airlines, it is clear that Altman’s call for regulation comes not from a place of altruism but from a need to safeguard Big Tech’s economic interests. Governments therefore need to begin thinking more carefully about the need, objectives, processes, and implications of AI regulation, and to gauge the potential winners and losers, before embarking on a regulatory endeavour.
The writer is Managing Partner, Evam Law & Policy