May 1, 2023

Grappling with AI: How govts plan to deal with revolutionary tools like ChatGPT and Google’s Bard

The potential and consequences of generative artificial intelligence have drawn the attention of policymakers across jurisdictions, who have stepped up regulatory scrutiny of these tools. G7 has committed to 'risk-based' regulation; EU's AI Act is on the way.

Tech leaders Elon Musk, Apple co-founder Steve Wozniak, and over 15,000 others have called for a six-month pause in AI development, saying labs are in an “out-of-control race” to develop systems that no one can fully control. (File photos)

Ahead of Europe’s AI Act that could establish a benchmark for how national governments regulate artificial intelligence tools, the Group of Seven (G7) developed nations has said that a “risk-based” regulation of AI could be a first step towards creating a template to regulate emerging tools such as OpenAI’s ChatGPT and Google’s Bard.

In a joint statement released at the end of their two-day meeting in Japan on Sunday, G7 ministers said such regulation must “preserve an open and enabling environment” for the development of AI technologies while being based on democratic values.

Risk-based approach

G7’s “risk-based” approach could involve graded regulation, with a lighter compliance burden on developers or users of AI tools deployed in areas such as word processing or generating music, compared with the regulatory supervision on, say, a tool aiding doctors in medical diagnosis or one linked to a facial recognition device that matches people’s identities.


The ministerial statement issued in Tokyo said: “We plan to convene future G7 discussions on generative AI which could include topics such as governance, how to safeguard intellectual property rights including copyright, promote transparency, address disinformation”, including information manipulation by foreign forces.

The pact acknowledged that “policy instruments to achieve the common vision and goal of trustworthy AI may vary across G7 members”.

Policy responses

The success of tools like ChatGPT has drawn the attention of policymakers across jurisdictions, who have stepped up regulatory scrutiny of generative AI tools.

The EU has taken a predictably tough stance, with the proposed AI Act segregating artificial intelligence by use-case scenarios based broadly on the degree of invasiveness and risk. Italy has become the first major Western country to ban ChatGPT out of concerns over privacy. The 27-member EU had taken steps to regulate AI back in 2018, and the AI Act due next year is a keenly awaited document.


The UK is on the other end of the spectrum, with a decidedly ‘light-touch’ approach that aims to foster, and not stifle, innovation in this nascent field. Japan too has taken an accommodative approach to AI developers.

China has been developing its own regulatory regime — the country’s internet regulator, the Cyberspace Administration of China, in April put out a 20-point draft to regulate generative AI services, including mandates to ensure accuracy and privacy, prevent discrimination, and guarantee protection of intellectual property rights.

The draft, which is likely to be enforced later this year, requires AI providers to clearly label AI-generated content, establish a mechanism for handling user grievances, and undergo a security assessment before going public. AI-generated content must “reflect the core values of socialism” and not contain anything that could lead to an overthrow of the socialist system, according to the draft quoted by Forbes.

India has said that it is not considering any law to regulate the artificial intelligence sector. IT Minister Ashwini Vaishnaw has said that although AI “had ethical concerns and associated risks”, it had proven to be an enabler of the digital and innovation ecosystem.


Outlook in the US

On April 11, the US took its most decisive step in addressing the regulatory uncertainty in this space when the Department of Commerce asked the public to weigh in on how it could create rules and laws that would ensure AI systems operate as advertised.

The agency flagged the possibility of floating an auditing system to assess whether AI systems include harmful bias or distort communications to spread misinformation or disinformation.

New assessments and protocols may be needed to ensure AI systems work without negative consequences, much like financial audits confirm the accuracy of business statements, according to Alan Davidson, an assistant secretary in the US Department of Commerce.

White House Blueprint

Last month’s policy action in the US built on a 76-page Blueprint for an AI Bill of Rights that was published by the White House Office of Science and Technology Policy (OSTP) in October 2022, proposing a nonbinding roadmap for the responsible use of AI.


The Blueprint spelt out five core principles to govern the effective development of AI systems, with special attention to unintended consequences for civil and human rights. These related to:

* protecting users from unsafe or ineffective systems;

* protecting users against discrimination by algorithms;

* protecting users against abusive data practices via built-in protections, and giving them agency over the use of their data;

* ensuring users know that an automated system is being used, and comprehend how and why it contributes to outcomes that impact them; and

* allowing users to opt out, and to have access to a person who can quickly consider and remedy problems.


The Blueprint set out to “help guide the design, use, and deployment of automated systems to protect the American Public”. The principles are non-regulatory and non-binding — it is not an enforceable “Bill of Rights” with legislative protections.

The document included multiple examples of AI use cases that the White House OSTP considered “problematic”, and clarified that the Blueprint should apply only to automated systems that “have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services, generally excluding many industrial and/or operational applications of AI”.

The Blueprint expanded on examples for the use of AI in lending, human resources, surveillance and other areas, which would find a counterpart in the ‘high-risk’ use case framework of the proposed EU AI Act, according to a World Economic Forum synopsis of the document.

Some gaps remain

Nicol Turner Lee and Jack Malamud at Brookings have said that while the Blueprint identifies and seeks to mitigate the intended and unintended consequential risks of AI, how it would facilitate redress of such grievances remained undetermined.


Also, it is unknown “whether the non-binding document will prompt necessary congressional action to govern this unregulated space”, they said in a December paper titled ‘Opportunities and blind spots in the White House’s blueprint for an AI Bill of Rights’.

Calls to action

Tech leaders Elon Musk, Apple co-founder Steve Wozniak, and over 15,000 others have called for a six-month pause in AI development, saying labs are in an “out-of-control race” to develop systems that no one can fully control. They have also said labs and independent experts should work together to implement a set of shared safety protocols.

Repeated efforts have been made in the US to pass laws to limit the power of Big Tech, but they have made little headway given the political divisions in Congress.

Anil Sasi is National Business Editor with The Indian Express and writes on business and finance issues. He has worked with The Hindu Business Line and Business Standard and is an alumnus of Delhi University.
