My first attempt at a Manifesto for Responsible AI

Regulation of AI has become a cottage industry amongst Western governments. As reported by TechCrunch, the “U.S., U.K. and the European Union signed up to a treaty on AI safety laid out by the Council of Europe (COE)”.

The Parliamentary Assembly of the Council of Europe brings together parliamentarians from the Council of Europe’s 46 member States. Its mission is to uphold the shared values of human rights, democracy and the rule of law. The Assembly uncovers human rights violations, monitors whether states keep their promises and makes recommendations. In the field of AI, it has adopted a set of resolutions and recommendations, examining the opportunities and risks of AI for democracy, human rights and the rule of law.

The Assembly has endorsed a set of basic ethical principles that should be respected when developing and implementing AI applications, including transparency, justice and fairness, human responsibility for decisions, safety and security, and privacy and data protection. It has identified the need for a cross-cutting regulatory framework for AI, with specific principles based on the protection of human rights, democracy and the rule of law, and has called on the Committee of Ministers to elaborate a legally binding instrument governing AI. The Assembly also has a Sub-Committee on Artificial Intelligence and Human Rights.

I have been studying the ethical considerations in AI development, all the while bearing in mind that my mission remains to understand how AI will produce step changes in productivity, both within organisations and across national economies.

Here is my first straw-man DRAFT Manifesto:

We, as developers, researchers, policymakers, and users of artificial intelligence, recognize the profound impact AI has on our society. We commit to developing and deploying AI systems that are ethical, transparent, and beneficial to humanity. This manifesto outlines key principles and actions for responsible AI:

  1. Ethical Development and Deployment

We pledge to create AI systems that respect human rights, promote fairness, and avoid harm. We will:

  • Conduct thorough ethical reviews throughout the AI lifecycle
  • Implement safeguards against bias and discrimination
  • Consider long-term societal impacts of our AI systems

Example: IBM’s AI Fairness 360 toolkit helps detect and mitigate bias in machine learning models.
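To make the idea concrete, here is a minimal sketch of one bias check that toolkits like AI Fairness 360 automate: the statistical parity difference between two groups' favourable-outcome rates. The data and group labels below are invented for illustration, not drawn from any real model.

```python
def statistical_parity_difference(outcomes, groups, unprivileged, privileged):
    """P(favourable | unprivileged) - P(favourable | privileged).
    A value near 0 suggests similar selection rates; a large negative
    value indicates the unprivileged group is selected less often."""
    def rate(group):
        selected = [y for y, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)
    return rate(unprivileged) - rate(privileged)

# Toy example: 1 = favourable outcome (e.g. a loan approved)
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
spd = statistical_parity_difference(outcomes, groups,
                                    unprivileged="b", privileged="a")
print(round(spd, 2))  # -0.2: group "b" is approved 20 points less often
```

The full toolkit computes many such metrics and offers mitigation algorithms; the point of the sketch is only that fairness can be measured, not just asserted.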

  2. Transparency and Accountability

We commit to making AI systems transparent and accountable. We will:

  • Provide clear explanations of how AI systems make decisions
  • Enable auditing and verification of AI processes
  • Take responsibility for the actions and outputs of our AI systems

Example: LIME (Local Interpretable Model-agnostic Explanations) helps explain the predictions of any machine learning classifier.
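As a hedged sketch of the idea behind LIME (not its actual API): approximate a black-box model near one input with a proximity-weighted linear surrogate, whose slope serves as a local explanation. The toy below uses a 1-D function standing in for an opaque model; real LIME handles tabular, text, and image data.

```python
import math
import random

def black_box(x):
    return x * x  # stand-in for an opaque model we want to explain

def local_linear_explanation(f, x0, n_samples=500, width=0.5, seed=0):
    """Fit a weighted linear surrogate to f around x0; return its slope."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, width) for _ in range(n_samples)]
    ys = [f(x) for x in xs]
    # Proximity kernel: perturbations near x0 count more.
    ws = [math.exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]
    wsum = sum(ws)
    xbar = sum(w * x for w, x in zip(ws, xs)) / wsum
    ybar = sum(w * y for w, y in zip(ws, ys)) / wsum
    slope = (sum(w * (x - xbar) * (y - ybar) for w, x, y in zip(ws, xs, ys))
             / sum(w * (x - xbar) ** 2 for w, x in zip(ws, xs)))
    return slope

slope = local_linear_explanation(black_box, x0=3.0)
print(round(slope, 1))  # close to 6.0, the true local gradient of x^2 at 3
```

The explanation is honest about being local: the same model would get a different slope at a different input, which is exactly what "Local" in LIME means.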

  3. Privacy and Security

We vow to protect individual privacy and ensure the security of AI systems. We will:

  • Implement robust data protection measures
  • Respect user consent and data rights
  • Develop AI systems resilient to adversarial attacks

Example: OpenMined provides tools for privacy-preserving machine learning.
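One privacy-preserving technique from the ecosystem OpenMined works in is differential privacy. A minimal sketch, with invented data: release a count with calibrated Laplace noise so that no single individual's record can be inferred from the answer.

```python
import math
import random

def dp_count(records, predicate, epsilon, rng):
    """Count matching records with epsilon-differential privacy.
    A counting query has sensitivity 1, so the Laplace noise scale
    is 1/epsilon (smaller epsilon = stronger privacy, more noise)."""
    true_count = sum(1 for r in records if predicate(r))
    # Inverse-CDF sampling of the Laplace distribution.
    u = rng.random() - 0.5
    noise = -math.copysign(1.0, u) * (1.0 / epsilon) * math.log(1 - 2 * abs(u))
    return true_count + noise

rng = random.Random(42)
ages = [23, 37, 41, 29, 52, 33, 60, 45]  # illustrative records
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5, rng=rng)
print(f"noisy count of people 40+: {noisy:.1f} (true count is 4)")
```

In practice one would use a vetted library rather than hand-rolled noise, but the sketch shows the core trade-off: a small, quantified loss of accuracy buys a provable privacy guarantee.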

  4. Human-Centered Design

We commit to developing AI that augments and empowers humans rather than replacing them. We will:

  • Prioritize human values and well-being in AI design
  • Ensure meaningful human oversight of AI systems
  • Promote AI literacy and education

Example: Microsoft’s Guidelines for Human-AI Interaction provide principles for creating AI systems that work well with people.

  5. Environmental Responsibility

We pledge to minimize the environmental impact of AI. We will:

  • Optimize AI models for energy efficiency
  • Use renewable energy sources for AI computation
  • Consider environmental costs in AI development decisions

Example: Google’s Carbon-Aware Computing project aims to shift computing tasks to times and places with cleaner energy.
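The carbon-aware idea can be sketched in a few lines: given a forecast of grid carbon intensity per hour, run a deferrable job at the cleanest hour inside its allowed window. The forecast figures below are invented for illustration, not real grid data.

```python
def cleanest_hour(forecast, earliest, latest):
    """Pick the hour with the lowest carbon intensity (gCO2/kWh)
    within the job's allowed window [earliest, latest]."""
    window = {h: g for h, g in forecast.items() if earliest <= h <= latest}
    return min(window, key=window.get)

# Hypothetical hourly forecast of grid carbon intensity, gCO2/kWh
forecast = {0: 420, 3: 380, 6: 300, 9: 210, 12: 160, 15: 190, 18: 350, 21: 410}
print(cleanest_hour(forecast, earliest=6, latest=18))  # 12 (midday solar dip)
```

Production systems add capacity, deadline, and location constraints, but the principle is the same: shift flexible compute toward cleaner energy.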

  6. Interdisciplinary Collaboration

We commit to fostering collaboration across disciplines to address AI challenges. We will:

  • Engage ethicists, social scientists, and domain experts in AI development
  • Support interdisciplinary research on AI impacts
  • Promote diverse perspectives in AI decision-making

Example: The AI Ethics Lab brings together experts from various fields to address ethical challenges in AI.

  7. Continuous Learning and Improvement

We pledge to continuously evaluate and improve our AI systems. We will:

  • Implement robust monitoring and feedback mechanisms
  • Adapt to new ethical guidelines and best practices
  • Share learnings and challenges with the broader AI community

Example: Weights & Biases provides tools for experiment tracking and model performance monitoring.
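A minimal sketch of the kind of monitoring such platforms support: track a deployed model's rolling accuracy and flag degradation. The window size and alert threshold here are illustrative choices, not recommendations.

```python
from collections import deque

class AccuracyMonitor:
    """Rolling-window accuracy tracker that raises a flag when
    performance drops below a threshold."""
    def __init__(self, window=100, alert_below=0.9):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self.alert_below = alert_below

    def record(self, correct):
        self.outcomes.append(1 if correct else 0)

    def rolling_accuracy(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else None

    def degraded(self):
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.alert_below

monitor = AccuracyMonitor(window=5, alert_below=0.8)
for correct in [True, True, False, True, False]:
    monitor.record(correct)
print(monitor.rolling_accuracy(), monitor.degraded())  # 0.6 True
```

The feedback loop matters more than the tooling: whatever dashboard is used, a degraded() flag should trigger investigation and, where needed, retraining.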

By adhering to these principles and utilizing responsible AI tools, we strive to create an AI future that is beneficial, trustworthy, and aligned with human values. We call on all stakeholders in the AI ecosystem to join us in this commitment to responsible AI development and deployment.
