Scaling the village’s information economy is possible because of AI. Here’s why AI belongs in democracy — and where, exactly, it belongs.
The bridge and the river
Imagine a village of a hundred and fifty people deciding whether to build a bridge across the river that runs past their houses. The decision is not hard to organise. Everyone knows which river. Everyone knows who crosses it and why. Everyone has an opinion about the ford that floods every spring, the timber that grows on the north slope, the neighbour whose cart broke an axle last March. When the villagers gather to decide, the conversation is short, not because they are wise but because the relevant information is already known by the very people who will decide.
This is usually explained as a story about trust. Small communities work, we are told, because people know each other, social capital is dense, reputations can be tracked, and free-riders shamed. There is something to that. But I think it misses the more important point.
The village’s democratic decision is cheap because the villagers are already experts in the thing being decided. They don’t need briefings. They don’t need to read position papers. They don’t need to trust a specialist’s summary. The knowledge required for the decision already lives inside the people making it. The village is not a community of friends casting votes — it is a community of people deciding about their own practice.
This is a much more radical observation than it first appears. Because if what makes small-scale democracy cheap is not friendship but embedded knowledge, then the scaling problem of democracy has been misdiagnosed for two hundred years.
Where democracy’s cost actually lived
Every form of collective decision-making has transaction costs: the cost of gathering information, the cost of deliberating, the cost of aggregating preferences, and the cost of making the decision stick afterwards. Different forms shift the costs around. A dictator has almost no aggregation cost but pays enormously for legitimation, often through surveillance and force. A corporate board outsources information-gathering to staff and aggregates with clean voting rules. A jury splits the costs modestly among twelve people, one question at a time.
Democracy’s cost has historically concentrated in one specific place: the symbol domain. The cost of reading in, of following arguments, of forming a position, of finding others who share it, of making oneself heard in a crowd. These are all costs paid in language, in attention, in the slow work of symbol-handling. And they are precisely the costs that scale badly. One villager thinking about a nearby bridge is almost free. Ten million citizens thinking about ten thousand faraway questions is ruinous.
Representative democracy was invented as a response to this problem. If we cannot bring everyone into every room, we will send a few on everyone’s behalf, and they will do the symbol-work full-time. It was an ingenious compromise. But it created a new problem in place of the old one: the representatives cannot possibly be deeply informed about the hundreds of matters they must decide on, and so around them grows a permanent apparatus of parties, staff, experts, lobbyists, and career officials whose job is to know things on the representatives’ behalf. The decision-makers become progressively more dependent on the knowledge-keepers, and the knowledge-keepers become the new informal aristocracy.
Peer Democracy’s different answer
There is another way to think about the problem. Instead of sending a few to decide about everything, let everyone decide — but only about what they already know.
This sounds modest, but it inverts the usual logic. Classical direct democracy asks every citizen to form an opinion on every question, which is why it collapses above the scale of a small town. Representative democracy asks every citizen to choose someone who will form an opinion on every question, which works better at scale but transfers power to those who are supposed to know. Peer Democracy asks each citizen to participate only in the questions they are already inside, in the sense that the villagers are already inside the question of the bridge. Not because they are neighbours. Because the question is already part of their life.
This is an information-economic argument, not a romantic one. It does not require us to believe that ordinary people possess hidden wisdom, or that participation is inherently ennobling, or that everyone should care about politics. It requires only the observation that decisions tend to be better and cheaper when made by those who already hold the relevant knowledge — and that in any population of sufficient size, for any given question, there exists a subset of people who already hold that knowledge, who would participate willingly if only they could find the question and the question could find them.
The difficulty is in the finding. In a village, the finding is free: there are only a handful of questions, and everyone lives inside all of them. In a municipality of thirty thousand or a country of ten million, with hundreds of live questions at any moment, the matching between person and question is itself a transaction cost — and historically a prohibitive one. It is why mass direct democracy has never worked at scale, not because citizens are stupid or lazy, but because no one has ever been able to afford the matching.
What AI actually is, in this picture
It is tempting to talk about AI as a new kind of intelligence that will join human decision-making as a partner or a guide. I don’t think that framing helps. A large language model is a compressed, queryable sediment of human symbolic activity — billions of arguments, explanations, contradictions, and agreements, distilled into a system that can recombine them on demand.
That means an LLM has no stake in any decision. It has no interests of its own, no experience of living anywhere, no career to advance. What it has is extraordinary fluency in the symbol domain: it can match, translate, summarise, structure, and surface. It can read ten thousand times as fast as any human. It is tireless, cheap, and — crucially — it has no ambition to become the spider in the web.
This matters because democracy’s historical bottleneck has always been in the domain where LLMs are strong. Democracy coordinates through language — through the slow, expensive work of citizens understanding questions, forming opinions, and finding ways to organise and express them. For the first time in history, that particular kind of work has become dramatically cheaper.
The right way to think about AI in democracy, then, is not as a participant but as infrastructure. Specifically, the infrastructure that makes it possible to match people to the questions they already know about, at a scale where such matching was previously impossible. AI is what lets us restore the village’s information economy to a society of millions.
It is worth pausing here to notice that this infrastructure already exists. TikTok, Instagram, YouTube, and Amazon all run extraordinarily sophisticated matching systems that learn what each user cares about and deliver a personalised stream of content and products. The algorithms are very good at what they do. The only reason we associate them with dystopia rather than democracy is that they have, so far, been pointed almost entirely at commerce and entertainment — at figuring out which shoes you might buy and which videos will keep you scrolling.
The same technical capability, pointed instead at civic questions, would tell you which local decisions you actually have something to contribute to, which budget consultations match your expertise, and which physical meetings are happening near your home. We have spent a decade building the most powerful attention-matching infrastructure in history and using it almost exclusively to sell things. There is no technical reason it could not serve democracy just as well. There is only a question of whose interests the matching is tuned to — the advertiser’s, or the citizen’s.
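To make the civic version of such matching concrete, here is a minimal sketch, assuming citizens and open questions are each described by self-declared topic tags and scored by simple overlap. Every name in it — Citizen, Question, the tags, the threshold — is illustrative, not part of any existing platform; a real system would use richer signals than tags.

```python
# A minimal sketch of citizen-to-question matching via tag overlap.
# All names and data are illustrative assumptions, not a real API.
from dataclasses import dataclass

@dataclass
class Citizen:
    name: str
    tags: set  # topics the citizen has declared or demonstrated interest in

@dataclass
class Question:
    title: str
    tags: set  # topics the civic question touches

def match_score(citizen, question):
    """Jaccard overlap between the citizen's interests and the question's topics."""
    union = citizen.tags | question.tags
    return len(citizen.tags & question.tags) / len(union) if union else 0.0

def matches_for(citizen, questions, threshold=0.2):
    """Questions this citizen plausibly has something to contribute to,
    best match first."""
    scored = sorted(((match_score(citizen, q), q) for q in questions),
                    key=lambda pair: pair[0], reverse=True)
    return [q for score, q in scored if score >= threshold]

anna = Citizen("Anna", {"cycling", "schools", "bridge"})
open_questions = [
    Question("New bridge over the river", {"bridge", "traffic"}),
    Question("Library opening hours", {"culture", "library"}),
]
print([q.title for q in matches_for(anna, open_questions)])
# → ['New bridge over the river']
```

The point of the sketch is only the shape of the computation: the expensive part of the village — finding the people for whom a question is already part of their life — reduces, at scale, to a matching problem that machines handle cheaply.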
The spider and the web
There is a familiar figure in every grassroots organisation: the person who holds everything together. They remember who cares about what, who should be invited to which meeting, who is drifting and needs a call, who said something interesting at the last gathering and should be connected to someone at this one. Without them, the organisation falls apart. But the organisation slowly becomes dependent on them, and then subtly shaped by their judgement, and eventually — no matter how egalitarian the intentions — organised around their particular sense of what matters.
This is not a character flaw but a structural position. The spider in the web has power because the web cannot function without the knowledge that lives in the spider’s head. And because that knowledge is hard to transfer, the position is hard to rotate.
A well-designed AI infrastructure can dissolve the knot — not by replacing the spider’s judgement, which it cannot do, but by absorbing the mechanical part of the spider’s work: the remembering, the matching, the reminding, the nudging toward the right room at the right time.
What remains for human leaders is what actually requires a human: physical meetings, the reading of moods, the difficult conversations, the moral judgements, the public narrative that Marshall Ganz rightly identified as the core of organising. The spider becomes lighter, and therefore more replaceable, and therefore less oligarchic.
The aim is not to automate leadership. It is to free leadership from administration.
Four honest tensions
None of this is as clean in practice as it sounds on paper, and a bottom-up democratic movement that reaches for AI without thinking carefully will find itself reproducing the problems it meant to solve. Four tensions are worth naming in advance.
- Driving is not the same as encouraging. An AI can send notifications when a relevant question opens, but it cannot know that a particular participant has had a hard week and needs a phone call rather than a ping. The relational part of organising is not a transaction cost to be minimised; it is part of what makes democracy worth doing. Infrastructure should lighten the human work, not substitute for it.
- Whoever configures the algorithm becomes the new spider. How to set up the AI system, the tone of its reminders, the frequency of its contact — all of these are political choices. To make them accountable, transparency and participant control over one’s own settings are pivotal.
- Helpful nudging can slide into manipulation. There is a short distance from “are you interested in this question?” to “you haven’t participated in two weeks, here are three issues you should care about.” The first feels like service. The second feels like discipline, even when well-intentioned. The line is drawn roughly where the system stops respecting the legitimacy of not participating.
- The cold-start problem is real. People often discover what they care about only by encountering it. A matching system based purely on stated preferences risks narrowing citizens into the interests they already know they have, rather than opening them to new ones. Some element of serendipity — some deliberate exposure to things outside one’s declared profile — needs to be built in, or the village’s information economy becomes a filter bubble wearing democratic clothing.
None of these tensions is a reason not to build. They are reasons to build carefully, and to measure not only participation rates but whether participants experience the system as supportive or as pushy. That second measurement is the one that tells you whether you have landed on the right side of the line.
What we will test in Vallentuna
This is not only a theoretical argument. In September 2026, the new local party Vallentuna Framåt will stand for election to the municipal council in Vallentuna, Sweden, on a platform built around exactly this model: citizens deciding about the questions they already live with, with AI as the matching infrastructure rather than the leader. The early pieces are already in place — a decision platform, a growing set of participants, a series of public invitations to play with civic engagement in low-stakes ways before the election.
What we expect to learn is where the concrete friction lives. How do people feel about a system that notices them? How much serendipity is enough? How do the four tensions appear in practice, and which of them bite hardest? These are questions that cannot be answered from a desk. They require a real municipality, real questions, real people with real lives, and a willingness to treat the whole undertaking as an experiment that might teach us something we did not expect.
The point
Democracy’s historical cost lived in one specific place: the mismatch between who decides and who knows. Villages solved this by coincidence — the people voting on the bridge were the people who lived by the river. Representative democracy tried to solve it by delegation, and accidentally created a new class of professional deciders whose information advantage hardened into power. Peer Democracy offers a third option: give everyone restricted decision rights so they engage only in what they already know and are interested in. Then use the one tool that has ever been cheap enough in the symbol domain to make the matching work.
AI is not a new citizen and will not be one. It is not wise, not situated, not accountable, not at stake. But within the narrow domain of symbols, where democracy’s bottleneck has always lived, it is the first infrastructure in history capable of scaling the village’s information economy to a society that long ago outgrew the small community.
That is all AI needs to do in democracy. It is also, as it happens, quite a lot.