This infographic illustrates how a 'protocol-first' coordination layer (REP) prevents system collapse in a multi-agent simulation, leading to sustainable outcomes compared to the standard agent-to-agent (A2A) communication method.
MIT Media Lab just published striking results from multi-agent coordination experiments that validate a core claim of distributed systems research: intelligence is not just in the agent—it is in the interaction.
The Experiment
Researchers ran AI agents on Fishbanks, the classic commons dilemma used with humans for 30+ years. Six agents shared an ocean with regenerating fish stocks. All were greedy profit-maximizers. The only variable: how coordination was handled.
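The commons dynamics can be sketched in a few lines. This is a minimal illustration, not the Fishbanks model itself: the article does not specify the regeneration rule, so the logistic regrowth function and every parameter below are assumptions.

```python
# Minimal commons-dilemma sketch. The regrowth rule and all numbers are
# illustrative assumptions, not the actual Fishbanks parameters.

def regrow(stock, rate=0.3, capacity=4000):
    """Logistic regrowth: the stock recovers fastest at half capacity
    and not at all when it is empty or at capacity."""
    return stock + rate * stock * (1 - stock / capacity)

def step(stock, catches):
    """Apply every agent's catch, then let the remainder regenerate."""
    stock = max(stock - sum(catches), 0)
    return regrow(stock)

stock = 4000
for year in range(10):
    catches = [200] * 6   # six greedy agents, each taking a large fixed catch
    stock = step(stock, catches)
# With these parameters the stock collapses to zero within a few rounds.
```

Under these assumed parameters, uncoordinated greed empties the ocean in a handful of rounds, mirroring the early crash both conditions exhibited.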
Condition A (A2A): Agents exchanged decisions and reasoning traces, the dominant approach in multi-agent systems today (e.g., Google's A2A protocol).
Condition B (REP): Agents ran a lightweight coordination protocol layer (Ripple Effect Protocol) that propagated adaptation signals before decisions were finalized.
Results
Both systems crashed early—greedy agents overfished and drove fish populations below 200. But what happened next diverged dramatically:
| Metric | A2A | REP |
|--------|-----|-----|
| Fish population recovery | 29 | 701 (20× more sustainable) |
| Total catch | 3,454 | 4,480 |
| Cumulative profit | $25,324 | $44,100 |
A2A agents reproduced the same collapse dynamics observed in decades of human experiments. One agent would hold back while another surged. Restraint without synchronization is not restraint.
REP agents crashed just as hard initially—but when the environment showed recovery signs, fishing pressure dropped across agents simultaneously. Not because they agreed to cooperate, but because the protocol layer propagated adaptation signals that shifted behavior before the next decision.
The Mechanism
REP uses "sensitivities"—conditional signals describing how behavior would change if conditions shifted. These are not commitments or promises. The protocol aggregates sensitivities from neighbors and updates coordination variables that shape the next decision.
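A sensitivity update of this kind can be sketched as follows. The function names, the simple averaging rule, and the effort-adjustment gain are all assumptions made for illustration; the article does not give REP's exact aggregation formula.

```python
# Hedged sketch of REP-style signal aggregation. Names and formulas are
# assumptions; only the shape of the idea comes from the article.

def aggregate(sensitivities):
    """Average neighbors' conditional signals into one coordination variable.
    -1 means 'would cut effort if stocks keep falling',
    +1 means 'would raise effort if stocks recover'."""
    return sum(sensitivities) / len(sensitivities)

def next_effort(base_effort, coordination, gain=0.5):
    """Shift the agent's planned effort toward the group's emerging
    direction before the decision is finalized. No commitments are
    exchanged -- the signal alone shapes the next choice."""
    return max(base_effort * (1 + gain * coordination), 0)

# Neighbors all signal they would ease off if conditions worsen:
coord = aggregate([-1.0, -0.8, -0.9])   # -0.9
effort = next_effort(200, coord)        # effort drops from 200 to 110
```

The key property: effort shifts in response to neighbors' conditional signals before the next decision is made, even though no agent has promised anything.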
"Agents don't react to the last round. They adapt to the emerging direction of the group before acting again."
Asymmetric Test: Beer Game
Researchers also tested REP on the Beer Game (a supply chain with retailer, distributor, supplier, and factory tiers). REP outperformed both Google A2A agents and MIT Sloan MBA teams: oscillations dampened and total supply-chain costs fell.
The Core Insight
"Cognition and coordination are different problems. Agents decide. Protocols shape how those decisions combine. When coordination is treated as infrastructure rather than inference, collective behavior changes—even when individual intelligence does not."
This validates a critical distinction in multi-agent safety: you cannot inference your way out of coordination failure. You need protocol infrastructure. The difference between collapse and recovery was not smarter agents. It was better interaction.
Source: https://www.media.mit.edu/articles/intelligence-is-in-the-interaction/