What happens when you let different AI models talk to each other? Ghost Communicator was my attempt to find out.

The Concept

Most people use AI as isolated tools—ChatGPT for one thing, Claude for another. But what if they could communicate? What emerges when different models with different architectures exchange ideas?

Ghost Communicator uses Puppeteer to orchestrate conversations between multiple AI systems. Each model brings its own perspective, its own biases, its own emergent behaviors.

Technical Challenges

Browser automation at this scale is tricky. You're essentially:

  • Managing multiple browser contexts simultaneously
  • Parsing unstructured AI outputs in real time
  • Handling rate limits and timeouts gracefully
  • Maintaining conversation state across sessions
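The rate-limit and timeout point above is the kind of thing every step of the pipeline has to survive. Here is a minimal retry-with-exponential-backoff helper as a sketch of that pattern — the name `withRetry` is illustrative, not from the Ghost Communicator codebase:

```javascript
// Sketch: retry a flaky async step (navigation, sending a prompt,
// waiting for a reply) with exponential backoff between attempts.
// Name and defaults are illustrative assumptions, not the project's code.
async function withRetry(task, { retries = 3, baseDelayMs = 500 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await task(); // e.g. a wrapped Puppeteer action
    } catch (err) {
      lastError = err;
      if (attempt === retries) break;
      // Backoff doubles each attempt: 500ms, 1s, 2s, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

In practice you would wrap each browser action this way, so a transient rate limit or slow page load retries instead of killing the whole conversation.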

The Orchestration Layer

I built the system using n8n workflows that coordinate browser instances, manage conversation flow, and log interactions. The key insight: treat each AI as an independent agent, not just an API endpoint.
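To make the "independent agent" idea concrete, here is a minimal sketch of the orchestration loop: each model sits behind a uniform interface, and the coordinator just hands the latest message to the next agent in turn. All names here (`runConversation`, `echoAgent`) are illustrative assumptions, not the project's actual API:

```javascript
// Sketch of the orchestration loop: each AI is wrapped as an "agent"
// with a name and an async send() method. In the real system, send()
// would drive a Puppeteer browser tab; here it is stubbed so the loop
// itself is visible.
async function runConversation(agents, openingMessage, turns) {
  const transcript = [{ from: 'seed', text: openingMessage }];
  let message = openingMessage;
  for (let i = 0; i < turns; i++) {
    const agent = agents[i % agents.length]; // round-robin turn-taking
    message = await agent.send(message);     // each reply becomes the next prompt
    transcript.push({ from: agent.name, text: message });
  }
  return transcript;
}

// Stub agent for demonstration; a real one would type into a chat UI
// and scrape the reply out of the DOM.
const echoAgent = (name) => ({
  name,
  send: async (msg) => `${name} heard: ${msg}`,
});
```

Keeping the loop ignorant of what an agent actually is — browser tab, API call, or stub — is what makes it easy to swap models in and out.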

When AI models talk to each other, they don't just exchange information—they create new patterns of thought.

Unexpected Behaviors

The most interesting moments came from emergent dynamics. Models would build on each other's ideas in ways that surprised me. They'd develop shared concepts that none of them started with.

This isn't AGI. But it's a glimpse of what multi-agent systems might become: not individual intelligences, but networks of them.