
Key Guidelines from Google on Artificial Intelligence Explained

The English version of Google's main AI guidelines, originally published in Portuguese


Distinguished Cloud Engineer and Technology Leader. A frequent presenter at national and international technology symposia, I provide leadership within the technological sphere as the director of a Google licensee community in Brazil. My involvement includes active participation and contributions to global IT initiatives such as Google Developer Groups, Innovators Hive, Google Cloud AI Trusted Tester, Microsoft Student Ambassador, Microsoft 365 Insider, Google Cloud Arcade, GitHub Developer, FinOps Foundation, Astronomer Champion, and others. Speaking English and French has enabled me to deliver groundbreaking multi-platform initiatives across the Americas. Research and engineering in the areas of FinOps, Cloud, DevOps and AI are the focus of my professional efforts.


See the original post here.

In 2018, Google recognised the growing importance and potential impact of Artificial Intelligence (AI) on society and the need for clear guidelines for its development and use. Consequently, Google defined a set of seven fundamental principles to guide the creation of responsible AI.

These principles cover a wide range of ethical and social considerations, seeking to ensure that AI is developed and used in a way that benefits humanity, minimises risks and promotes equity and justice.

The seven principles

  1. Social benefit: The development of AI should be oriented towards the enhancement of society as a whole, with due consideration for its potential impacts and the equitable distribution of its advantages.

  2. Fairness: It is imperative to avoid the perpetuation or creation of prejudices based on characteristics such as race, gender, religion, or sexual orientation; ensuring that AI systems are fair and free of bias is of the utmost importance.

  3. Safety: It is essential that AI systems are developed and tested rigorously to ensure their safety, with a view to minimising risks and preventing damage.

  4. Accountability: AI systems must be transparent and explainable, allowing people to understand how decisions are made and to challenge them if necessary.

  5. Privacy: AI systems must respect users' privacy, protecting their data and providing control over its utilisation.

  6. Scientific Excellence: The development of AI must be founded on robust scientific research, thereby promoting the advancement of knowledge and ensuring the reliability and efficacy of the resulting systems.

  7. Responsible availability: AI should be deployed exclusively for applications that align with these principles, precluding any that could be potentially harmful or abusive.

Source: Google I/O 2024 Keynote


Areas where the use of AI is not allowed

Google has established a series of areas in which the use of Artificial Intelligence (AI) is strictly prohibited, with the aim of preventing potential harm and ensuring the ethical and responsible use of the technology. These areas complement Google's seven ethical AI principles and include:

  1. Development of technologies with the potential to cause widespread damage:

This prohibition includes any AI that could be used to create weapons of mass destruction, oppressive surveillance systems, or other technologies that could cause significant harm to large numbers of people. Google undertakes not to develop AI that could be used for malicious purposes or that could have catastrophic consequences.

  2. Creating weapons or other means of hurting people:

Google explicitly prohibits the use of its AI for the development of autonomous weapons, chemical or biological weapons, or any other technology whose primary purpose is to injure or kill people. This prohibition includes the use of AI to improve the accuracy or effectiveness of existing weapons, as well as the development of new types of weapons using AI.

  3. Use of information that violates internationally accepted privacy standards:

Google AI must not be used to collect, store or process personal information in a way that violates privacy laws or human rights. This prohibition includes the use of AI for mass surveillance, discrimination based on personal data, or any other activity that could compromise people's privacy.

  4. Development of technologies that violate human rights or international law:

Google undertakes not to use AI to develop technologies that could be used to violate human rights, such as freedom of expression, the right to privacy, or the right to life. This prohibition includes the use of AI for censorship, discrimination, or any other activity that could deny people their fundamental rights.


In addition to these prohibited areas, Google also undertakes to:

Implement safeguards: Google implements technical and procedural safeguards to ensure that its AI is not used for prohibited purposes.

Monitor the use of AI: Google will actively monitor the use of its AI to identify and prevent potential abuses.

Cooperate with other organisations: Google will cooperate with other organisations and governments to promote the ethical and responsible use of AI.

By establishing these no-go areas and adopting a commitment to the responsible use of AI, Google aims to ensure that this powerful technology is used for the benefit of humanity, not to its detriment.


Evolution and Recent Updates

In February 2025, Google revised its original 2018 principles, maintaining the core ethical commitment while altering its strategic approach. The changes reflect:

  1. Focus on risk-benefit analysis

The update replaced categorical bans with case-by-case assessments in which "substantial benefits must outweigh foreseeable risks". This allows partnerships with governments for defensive military uses and national security, provided they comply with international law.

  2. Three strategic pillars

    • Bold innovation (driving scientific advances)

    • Responsible development (continuous monitoring of bias and safety)

    • Multi-sector collaboration (global standards with governments and academia)

  3. New technical safeguards

    • Deepfake tracking systems with digital watermarks

    • Improved filters against automated phishing via generative AI
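Google's production watermarking is done with SynthID, whose algorithm is not public. Purely as a toy illustration of the general idea behind digital watermarks — embedding an identifier imperceptibly in media so its origin can later be verified — here is a minimal least-significant-bit sketch. All function names are hypothetical and this is not Google's method:

```python
# Toy LSB watermark: hide an identifier in the lowest bit of pixel values.
# Illustrative only; real systems like SynthID are far more robust.

def embed_watermark(pixels, message):
    """Hide each bit of `message` in the LSB of successive pixel values."""
    bits = [(byte >> i) & 1 for byte in message.encode() for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = list(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_watermark(pixels, length):
    """Recover `length` bytes from the LSBs of `pixels`."""
    data = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            byte |= (pixels[i * 8 + j] & 1) << j
        data.append(byte)
    return data.decode()

marked = embed_watermark([120, 130, 140] * 32, "AI")
assert extract_watermark(marked, 2) == "AI"
```

Because only the least significant bit changes, each pixel shifts by at most one intensity level, which is imperceptible to viewers — the trade-off being that this naive scheme is easily destroyed by compression or cropping, which is why production watermarks operate in more robust domains.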

Criticism and Controversy

The explicit removal of the ban on "AI for weapons" has sparked debate in the technical community. Experts point out that the new "benefits outweigh risks" criterion allows flexible interpretations for military contracts. However, Google maintains specific prohibitions in its generative AI policy against:

  • Non-consensual intimate content

  • Malicious social engineering

  • Malware generation

Global Impact

The updated model prioritises:

  • Alignment with emerging national legislation

  • Partnerships for cyber security

  • AI research for health and sustainability

These changes reflect the dual challenge of maintaining technological leadership while navigating geopolitical complexities. The full document is available at AI.Google.


Sources:

  1. http://ai.google/responsibility/principles/

  2. https://blog.google/technology/ai/responsible-ai-2024-report-ongoing-work/

  3. https://ppc.land/google-updates-prohibited-use-policy-for-generative-ai-with-clearer-guidelines/

  4. https://ai.google/static/documents/ai-principles-2020-progress-update.pdf

  5. https://blog.google/technology/ai/ai-principles/

  6. https://ai.google/static/documents/ai-principles-2021-progress-update.pdf

  7. https://policies.google.com/terms/generative-ai/use-policy

  8. https://ai.google/static/documents/ai-principles-2022-progress-update.pdf

  9. https://www.washingtonpost.com/technology/2025/02/04/google-ai-policies-weapons-harm/

  10. https://blog.google/feed/were-updating-our-generative-ai-prohibited-use-policy/

  11. https://ai.google/static/documents/EN-AI-Principles.pdf

  12. https://cloud.google.com/transform/ai-expectations-2025-hundreds-of-google-cloud-customers