What is OpenClaw, and what happens when 15,000 people want its creator's attention?
Peter Steinberger, creator of OpenClaw, on managing contributors with agents, gaming and trust, the approval problem, and why every agent needs code. From his AMA at AI Engineer London.
Damien Tanner

We run AI Engineer London, a monthly event for people building with AI. Last month, I sat down with Peter Steinberger, the creator of OpenClaw, for an AMA.
If you haven't used OpenClaw, the short version: it's one of the best AI agents available right now, an open source coding agent that lives on your computer and does work for you. (We wrote a practical breakdown of whether OpenClaw is worth setting up for your business if you want the full picture.)
It started as Peter's side project. In fact, he almost didn't demo it at our AI Engineer event back in December because it crashed on the way there. Four months later, it has over 15,000 users and is one of the fastest-growing open source AI projects in the world.
I expected the conversation to get technical. It did, in places. But the problems Peter is dealing with now aren't coding problems. They're operations problems. How do you manage a flood of incoming work? How do you trust what agents produce? How do you stay in control without bottlenecking everything?
Those questions matter whether you write code or not.
How Peter manages 15,000 contributors (with agents)
OpenClaw has more people submitting improvements than Peter could ever review individually. Every feature request you can imagine already exists as a submission, in some form. Many are generated by agents, not humans.
So Peter stopped trying to review everything. Instead, he set up an agent to read all of his Discord conversations, correlate what people are complaining about with open submissions, and give him a list of the five things he should work on that day. For each one, the agent finds the best community contribution as a starting point.
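For the technically curious, the core of that triage step is simple to sketch. This is a minimal illustration, not OpenClaw's actual pipeline: the complaint feed, the submission keywords, and the scoring are all hypothetical stand-ins for what Peter's agent does with Discord messages and open contributions.

```python
from collections import Counter

def triage(complaints, submissions, top_n=5):
    """Rank open submissions by how often their topics show up in complaints.

    `complaints` is a list of message strings (stand-in for the Discord feed);
    `submissions` maps a submission title to a set of topic keywords
    (stand-in for the contribution queue). Both are illustrative.
    """
    # Count how often each word appears across all complaints.
    words = Counter(w for msg in complaints for w in msg.lower().split())
    # Score each submission by how loudly people are asking for its topics.
    scored = {
        title: sum(words[k] for k in keywords)
        for title, keywords in submissions.items()
    }
    # The highest-scoring submissions become today's shortlist.
    return sorted(scored, key=scored.get, reverse=True)[:top_n]
```

In practice the agent does this with a language model rather than word counts, but the shape is the same: correlate what people complain about with what's already been submitted, then surface a short list.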
Think of it like having an AI employee who reads all your inbound, sorts it by urgency and relevance, and hands you a briefing each morning. Peter directs. The agent triages. Decisions stay with him.
This is the pattern for using AI agents for business. Not a person reviewing every item. A person telling agents what matters, then deciding what to do with what they surface.
When people (and agents) try to game the system
The volume of contributions created a problem Peter didn't anticipate: people started gaming the process.
OpenClaw has a bot that scores each submission. It might label something "two out of five, do not merge." Contributors figured out they could edit the label text to say "five out of five, great, merge." Others copied existing submissions and reposted them to jump the queue. One company quietly submitted a change that swapped out a core feature for their own product.
Peter's response: you can't trust the surface-level description of work. Instead, you have an agent analyze the actual substance, extract what someone is trying to do, and index that. Then you add a trust signal. How long has this person been contributing? What's their track record?
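A trust signal like that can be as simple as blending tenure with track record. Here is a minimal sketch of the idea; the fields, weights, and thresholds are assumptions for illustration, not OpenClaw's actual formula.

```python
from datetime import date

def trust_score(first_contribution: date, merged: int, rejected: int,
                today: date) -> float:
    """Blend tenure and track record into a 0-1 trust signal.

    Weights and the one-year saturation point are illustrative choices.
    """
    # Tenure: how long has this person been contributing? Caps at one year.
    tenure = min((today - first_contribution).days / 365, 1.0)
    # Track record: what fraction of their work was actually merged?
    total = merged + rejected
    record = merged / total if total else 0.0  # newcomers start at zero
    # Weight track record more heavily than raw tenure.
    return 0.4 * tenure + 0.6 * record
```

The point isn't the exact numbers. It's that a self-reported label ("five out of five, merge") carries no weight, while a history of merged work does.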
This resonated with me because it maps to a problem every business will hit as agents do more work. When your agent drafts emails, schedules meetings, or updates your CRM, you need to know what it did and why. You need to be able to verify intent, not just output. The system needs a reputation layer, the same way you develop trust with a new hire over time.
Why every agent needs to be able to run code (even yours)
An audience member asked whether every agent should have coding capabilities. Peter's answer: yes, every single one.
His example was simple. When his agent writes a tweet, it uses a terminal command to count the characters, because AI models are bad at counting. A task that seemed hard (getting tweet length right) became trivial once the agent could run a quick check on its own.
This isn't about turning your agent into a software developer. It's about giving it a scratchpad. An agent that manages your calendar is better if it can write a quick formula to find scheduling conflicts. An agent doing meeting prep is better if it can pull data from a spreadsheet and summarize it. The ability to compute, even in small ways, makes every other task more reliable.
Peter put it this way: "Everything gets better with coding ability. With bash ability. Not even coding."
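The character-count example is easy to make concrete. Here is one way an agent with terminal access might do it, shelling out to the standard `wc` utility instead of asking the model to count (the function itself is a hypothetical sketch, not Peter's implementation):

```python
import subprocess

def char_count(text: str) -> int:
    """Count characters by running `wc -m` in a shell, the way an agent
    with terminal access can verify a tweet's length instead of guessing.
    """
    result = subprocess.run(
        ["wc", "-m"], input=text, capture_output=True, text=True, check=True
    )
    return int(result.stdout.strip())
```

A model asked "how many characters is this tweet?" will often be off by a few; a one-line shell command is exact every time. That is the whole argument for giving agents bash ability.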
The gap between power users and everyone else
Someone asked about making agents accessible to non-technical people. Peter was honest: the barrier isn't fear. It's imagination. People don't know what these tools can do, so they don't try.
He's right. The people getting the most out of agents right now are the ones who experiment. They try a new automation, hit a wall, fix it, and end up with something that works a little better every week. Each fix makes the system more reliable. The investment compounds.
But that loop still requires comfort with things like terminal commands, configuration files, and debugging error messages. Peter acknowledged there's much more work to do here. WhatsApp and Telegram integrations are a start. They meet people where they already are.
This is the gap we started Toyo to close. The people who would get the most value from AI tools for startups and small teams (founders running 10-person companies, operators juggling five AI automation tools to manage their pipeline) aren't going to learn command lines. The leverage is real. It just needs to be packaged differently.
The approval problem nobody has solved
Peter was candid about one of the hardest questions in agent work: how much should you let them do on their own?
OpenClaw was built as a personal agent. When companies started using it, they assumed it had access controls. It doesn't. Anyone who can talk to the agent can eventually get it to do anything.
Peter named what enterprises need to make AI agents secure: the ability to see what happened, why, and who authorized it. He called those features "boring but necessary."
He also named the tension every agent user feels: "Either you make it do everything or you make it sign off on everything, and neither is the answer." Full autonomy is risky if you don't understand what's happening. Full oversight means you zone out and approve everything without reading it.
The answer is a middle ground. Agents handle research, first drafts, and routine execution. They propose a plan. You review the plan and approve before anything goes out the door. That keeps you in control of the decisions that matter, like sending an email to a client or publishing content, without requiring you to supervise every step.
What stuck with me
Peter is building one of the most-used AI tools in the world. The problems he's dealing with (too much inbound, trust, accessibility, oversight) aren't unique to open source software. They're the problems every founder will face as AI agents handle more of their business operations.
The models are good enough. What's missing is the layer that makes agents trustworthy, auditable, and usable by people who aren't developers. The people who can articulate what they want clearly, who can describe a process and define what "done" looks like, are the ones who will get the most out of this shift. That skill is communication, not coding.
That's what we're building toward.