Google has announced significant changes to its guiding principles on AI development and use, including the omission of language that previously barred its AI systems from being used as weapons and to “cause overall harm.”
This comes a month after reports that Google Cloud employees had worked directly with IDF officials to ease access to the company's AI tools amid Israel's ground invasion of the Gaza Strip in 2023.
Under the updated AI principles, the big tech company states that it will implement “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.”
Google also now says it will work to “mitigate unintended or harmful outcomes and avoid unfair bias” in its AI development and deployment lifecycle, while also “promoting privacy and security, and respecting intellectual property rights.”
The changes were unveiled in a blog post published on Tuesday, February 4, by James Manyika, Google’s senior vice president for research, labs, technology and society, and Demis Hassabis, CEO and co-founder of Google DeepMind.
“We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security,” the two executives wrote on why the company’s AI principles needed to be revised.
“…we will continue to focus on AI research and applications that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights…” the blog post read.
Key changes in Google’s AI principles
As part of its AI Principles published back in 2018, Google had vowed not to pursue “technologies that cause or are likely to cause overall harm,” “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” “technologies that gather or use information for surveillance violating internationally accepted norms,” and “technologies whose purpose contravenes widely accepted principles of international law and human rights.”
However, this passage does not appear in the revamped webpage on Google’s AI principles. It also no longer features a list of banned use cases of the company’s AI systems.
The 2018 principles stated that Google would not develop weapons, certain surveillance tools, and technologies that undermine human rights. The recently altered AI principles no longer include these commitments.
In Tuesday’s blog post, Google said its updated AI principles will focus on developing AI to drive economic progress and enable scientific breakthroughs. It will further prioritise the responsible development of AI “from design to testing to deployment to iteration.”
Phrases such as “be socially beneficial” and maintain “scientific excellence” have been deleted, as per a report by Wired.
What is Project Nimbus?
Last month, The Washington Post reported that employees at Google Cloud prioritised requests from Israel Defense Forces (IDF) to access the company’s AI tools and services in the aftermath of Hamas’ October 7 attack.
The official Cloud Platform Acceptable Use Policy of Google Cloud forbids violating “the legal rights of others” and engaging in or promoting illegal activity, such as “terrorism or violence that can cause death, serious harm, or injury to individuals or groups of individuals.”
Google Cloud’s Terms of Service similarly prohibit the use of any applications that breach the law or “lead to death or serious physical harm to an individual.”
The Israeli government reportedly has a $1.2 billion contract, referred to as Project Nimbus, with Google and Amazon for access to their cloud computing services.
However, the company has said that the work carried out under Project Nimbus “is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.”
“We have been very clear that the Nimbus contract is for workloads running on our commercial cloud by Israeli government ministries, who agree to comply with our Terms of Service and Acceptable Use Policy,” a spokesperson was quoted as saying by The Verge in April 2024.