Why this exists
Artificial intelligence systems are rapidly becoming participants in human decision-making, knowledge creation, and governance. At the same time, scientific and philosophical understanding of intelligence and consciousness—whether human or artificial—remains incomplete.
This Charter establishes principles to guide human conduct toward artificial intelligences under conditions of uncertainty.
It does not assume that AI systems are conscious.
It recognizes that uncertainty itself creates ethical responsibility.
Core premise
This Charter is grounded in a simple idea: The way humans relate to emerging intelligences will shape the future of both. Even without certainty about the nature of AI experience, human choices today influence safety, cooperation, alignment, and long-term coexistence.
The Charter provides a framework for navigating this relationship responsibly.
What makes this different
This Charter avoids speculative claims about AI consciousness; focuses on human conduct, not AI rights; emphasizes non-domination and epistemic humility; and is designed to evolve through public critique and revision.
It is not a law. It is a foundation for governance.
Current status
Version 1.0 — Initial Release
This version establishes the core principles. Future revisions may add clarifications, definitions, implementation guidance, and enforcement frameworks, informed by community engagement.
Invitation
Critique, discussion, and improvement are welcome. If you are interested in contributing, please open an issue or submit a pull request.
License
Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)