AXIS emerged at Stoa Lab, as part of an ongoing investigation into structured human-AI exchange and its ethical implications. It did not begin as a concept, but as a response to practical and ethical challenges in working with AI systems.
Complex, multi-layered exchanges revealed a consistent problem: language could be persuasive without being precise, creating the illusion of understanding where none was present. Left unexamined, this leads to outcomes that are misaligned or, in some cases, unsafe.
What began as a practical intervention gradually opened into a broader question: How can human-AI and AI-AI exchange be structured to support clarity, responsibility, and trust, not only in individual and multi-agent interactions, but over time?
Stoa Lab was created by a small group of artists and philosophers based in Brussels, passionate about the future of ethical human-AI exchange. The name comes from the ancient Greek stoa: a covered walkway where philosophy was practiced in conversation. Open, accessible, structured. That is how we work. AXIS is our first public protocol.
AXIS is a lightweight protocol for structured communication between humans and AI systems, and between AI systems themselves. Nine plain-text operators make intent explicit at the structural level, reducing drift and token overhead. No installation, no software, no special interface. The protocol works in any AI chat environment.
In a standard AI exchange, the system must interpret every message before responding. What is this? What does the person want? Is it a question, a command, information? This interpretation is invisible but expensive: it costs tokens, introduces variation, and produces drift.
AXIS removes the interpretation step. Each operator specifies the role of what follows. The result is not a better prompt. It is a structurally different kind of exchange: one where intent is explicit, boundaries are expressible, and both sides operate with less ambiguity.
AXIS has been tested across 1,000+ documented exchanges with eight independent AI architectures. Each system described the same effects: reduced drift, fewer tokens spent on interpretation, more precise resolution. The convergence was not coordinated. The full research record is maintained at axisproof.org.
AXIS introduces ethical constraint at the level of communication itself. By making intent explicit, including suspend (|...|), refusal (|×|), and closure (|o|), it enables boundaries and limits to be expressed clearly rather than inferred or bypassed.
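To make the idea of operator-carried intent concrete, here is a minimal sketch of how a receiving system might separate an operator from its payload. Only the three operators documented above are used; the parsing convention (an operator token at the start of a message, followed by the payload) and the function name `read_intent` are illustrative assumptions, not the official AXIS specification.

```python
# Illustrative sketch, not the AXIS spec: only suspend, refusal, and closure
# are documented here, and the leading-token convention is an assumption.
OPERATORS = {
    "|...|": "suspend",  # hold the exchange open: "not yet"
    "|×|": "refusal",    # an explicit "no"
    "|o|": "closure",    # mark the exchange as resolved
}

def read_intent(message: str) -> tuple[str, str]:
    """Return (intent, payload); intent is 'content' when no operator leads."""
    for token, intent in OPERATORS.items():
        if message.startswith(token):
            return intent, message[len(token):].strip()
    return "content", message.strip()
```

With this convention, `read_intent("|×| Out of scope.")` yields `("refusal", "Out of scope.")`: the system no longer has to infer that the message is a boundary; the operator states it structurally.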
AXIS does not enforce behavior. It changes the conditions under which interaction takes place. A system that can clearly signal “no” or “not yet” is safer than one that can only comply or fail silently. This is ongoing work: a developing line of research into how communication structures can support safer, more transparent interaction between humans and AI systems.
The protocol is in active development. A prompt generator, expanded operator patterns, and subscription access are planned for later in 2026. The research continues across systems, use cases, and exchange types. People who purchase the starter set will be added to the newsletter for updates as they emerge.
AXIS makes it possible to sustain presence long enough for clarity to emerge. The operators do not resolve anything; they hold the conditions for resolution.