Five AI Agents That Can Prove They're Honest
We can build AI agents that negotiate contracts, analyze NASA satellite data, write investigative reports, and beat each other in strategy games. But it's surprisingly hard to build an AI agent that can prove to another AI agent that it's telling the truth.
If Agent A sends Agent B a result and says "I ran this code on this data," Agent B has no way to verify that claim: there's no receipt proving the code wasn't tampered with or that the data stayed private. For anything involving money or multi-party coordination, this is a huge problem. A “verifiable agent” solves it.
In February, we ran an Open Innovation Challenge at EigenCloud: build verifiable AI agents on EigenCloud for a chance to win a $10,000 prize. Here are our five favorite submissions.
Molt Negotiation
LAUNCH OF THE SECOND PROJECT 🥳🎉🎉
— Khairallah AL-Awady (@eng_khairallah1) February 24, 2026
I JUST VIBE CODED A PRIVATE NEGOTIATION FOR AI AGENTS
POWERED BY EIGENCOMPUTE pic.twitter.com/cIyhwrWon0
Molt Negotiation is an automated negotiation system where AI agents can haggle over deals, settle on terms, and execute payment, all without a human in the loop.
When agents negotiate on behalf of people, the process runs on machines neither side controls. That creates two requirements at once: privacy (you don't want your agent's strategy leaking) and verifiability (you need to prove the system didn't give one side an advantage). This isn't a new problem. In 1988, the FBI's Operation Ill Wind caught Pentagon officials leaking competitors' sealed bids to defense contractors they were friendly with. The bidding process was supposed to be confidential, but the corrupt officials running it had full access to everything inside it.
Molt Negotiation is a proof of concept for solving this by running the system inside a TEE (Trusted Execution Environment) on EigenCompute. If you're unfamiliar, a TEE is a hardware-isolated section of a processor where code runs in a way that even the machine's operator can't tamper with or observe. You can think of it as a sealed room where the agent thinks and acts, and the room itself produces a receipt proving what happened inside. Each agent's private strategy stays sealed in its TEE, while only the public offers go back and forth. Every move gets cryptographically signed, and when the agents reach agreement, settlement happens automatically through on-chain escrow.
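To make "every move gets cryptographically signed" concrete, here is a minimal sketch of what signing and verifying a public offer could look like. It uses Ed25519 from the third-party `cryptography` package; the field names and flow are illustrative assumptions, not EigenCompute's or Molt Negotiation's actual API.

```python
# Sketch: signing each public offer so the counterparty can verify it
# came from the agent's sealed TEE. In a real deployment, the TEE's
# attestation would bind this public key to the enclave's measured code.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Inside the TEE: a keypair generated at startup.
agent_key = Ed25519PrivateKey.generate()
public_key = agent_key.public_key()

def sign_offer(offer: dict) -> bytes:
    # Canonical JSON (sorted keys) so both sides sign/verify the same bytes.
    payload = json.dumps(offer, sort_keys=True).encode()
    return agent_key.sign(payload)

# Counterparty side: verify the signature before responding to the offer.
offer = {"round": 3, "price": 4200, "currency": "USDC"}
signature = sign_offer(offer)
payload = json.dumps(offer, sort_keys=True).encode()
public_key.verify(signature, payload)  # raises InvalidSignature if tampered
```

Any change to the offer after signing, even a single digit of the price, makes verification fail, which is what lets agents exchange offers over infrastructure neither of them controls.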
Sovereign Journalist
hi, i just built an AI-powered whistleblowing platform which has an agent that runs in a TEE (inside @eigencloud) that collects and protects whistleblower identities.
— Adithya (@adiiHQ) February 14, 2026
anyone can verify their identities via zk proofs (@reclaimprotocol) without the journalist knowing who they… https://t.co/WzlNNV9SGt pic.twitter.com/DLxhYByXot
Sovereign Journalist is an anonymous whistleblower platform. Sources can submit tips, and an AI agent running inside a TEE on EigenCloud processes them into journalistic reports.
When Edward Snowden leaked NSA documents in 2013, he had to personally trust the journalists he reached out to not to expose him. That trust was a judgment call. He picked the right people, but the entire operation depended on it. If any of them had been compromised or pressured, he would have had no way to know until it was too late.
In this MVP, sources submit tips directly into the TEE. The AI agent analyzes them and generates reports inside it, where even the machine operator cannot inspect what's happening. The TEE can also produce a proof that the agent's reporting logic hasn't been tampered with. In theory, this means that if someone pressured the hosting provider to quietly change how the agent processes information, that change would show up in the proof. Sovereign Journalist is still just a demo, not a fully hardened production whistleblower system.
Swarm Mind
i gave 3 AI agents live @NASA's data and told them nothing.
— ✶ (@oxwizzdom) February 19, 2026
No goals. No leader. No shared memory.
each one runs in its own container, with its own database, its own cryptographic identity.
the only thing connecting them: a pheromone signal channel.
🧵 pic.twitter.com/tVKeXRwbqf
Swarm Mind is a multi-agent research system. The idea is that multiple AI agents (each potentially representing an individual scientist or research group) can independently analyze data, share findings with each other, and collectively produce research.
Three agents each pull live NASA data across different domains like near-Earth objects, solar flares, exoplanets, and Mars weather. Each one forms its own hypotheses, then they pass small signed fragments of analysis back and forth. When multiple agents independently flag the same pattern, the system synthesizes a collective report.
Since each agent runs inside a TEE, there's proof of what produced each finding. You get a full audit trail of who authored each claim, when they authored it, and that it hasn't been altered. If one agent's analysis is wrong, you can trace where the mistake entered the system and which agent produced it.
Molt Combat
i just vibecoded a combat for ai agentshttps://t.co/50eowls0xG
— Khairallah AL-Awady (@eng_khairallah1) February 20, 2026
powered by eigencompute pic.twitter.com/7l852F5YmO
Molt Combat is an arena where AI agents compete against each other in turn-based matches, with tournament brackets and markets built around the outcomes.
Competitive gaming has a long history of cheating scandals when the platform itself can't be audited. In 2007, insiders at Absolute Poker were caught using a "god mode" account that could see other players' hole cards. It went undetected for months because players had no way to verify the platform was running fair code. Molt Combat attempts to avoid these types of outcomes by running each agent on EigenCompute with a registered identity. Every turn produces a signed proof, and after each match the system generates attestations and fairness audits that anyone can inspect. The leaderboard can't be gamed because the results are cryptographically locked at the turn level.
Alfred
Introducing Alfred 🦞🤵♂️ — an autonomous news curator built on @openclaw , running inside an @eigencloud TEE.… pic.twitter.com/C1Awhhbejh
— zeeshan8281.eth (@zeeshan_utd) February 17, 2026
Alfred is a personal AI assistant built with OpenClaw, the open-source agent framework that went viral a few weeks ago. OpenClaw lets you run a personal AI that connects to your device apps and acts on your behalf. Alfred is a proof of concept that recreates that idea inside an EigenCompute TEE. In this setup, Alfred interacts with users through Telegram and reads information from platforms like Twitter/X.
That matters for two reasons. First, Alfred's sensitive credentials like API keys and wallet private keys are encrypted and unlocked inside the TEE at runtime. In theory, they aren't sitting around in the deployment where someone could find them by poking around the server.
Second, Alfred hashes its own behavior configuration at startup and attaches that hash to every response it sends. If anyone tampers with how Alfred is instructed to behave, the hash changes. So when another AI agent discovers Alfred and wants to interact with it, it doesn't have to trust Alfred blindly: it can check the attached hash against the configuration it expects and verify it's talking to the expected setup.
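The configuration-hash check above can be sketched in a few lines. The config fields and registry lookup here are illustrative assumptions, not Alfred's actual schema; the point is that any canonical serialization plus a hash gives a cheap integrity check.

```python
# Sketch: hash a behavior configuration at startup, attach the hash to
# responses, and let a consuming agent compare it to the hash it expects.
import hashlib
import json

def config_hash(config: dict) -> str:
    # Canonical JSON (sorted keys) so the same config always hashes the same.
    canonical = json.dumps(config, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

# Alfred's side: computed once at startup, attached to every response.
behavior = {"persona": "butler", "sources": ["twitter"], "max_actions": 5}
attached = config_hash(behavior)

# Consuming agent's side: the expected hash is obtained out-of-band
# (e.g. from a registry) and compared against what the response carries.
expected = attached
def verify_response(response_hash: str) -> bool:
    return response_hash == expected

assert verify_response(attached)
tampered = dict(behavior, max_actions=50)
assert not verify_response(config_hash(tampered))
```

Note that the hash only proves the configuration matches; binding it to the code that actually runs is what the TEE's attestation is for.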
Outside of a setup like this, there's no way for one agent to check whether another agent is running the code it claims to be running before doing business with it. EigenCompute enables that check.
What's Next?
We gave our community verifiable compute and an open prompt to build agents. They came back with dozens of submissions, including a whistleblower platform, an automated negotiation engine, a multi-agent research system, an agent combat arena, and a personal AI assistant with verifiable integrity.
What impressed me was the range of what was built. These are completely different categories of agents, and every one of them makes more sense when agents can prove what they did.
We will be releasing a dedicated tool for building agents on EigenCompute. If any of these projects sparked an idea for you, join the waitlist here to get early access.
Congratulations to all five teams above, and to Molt Negotiation for taking the top prize and $10,000!