Love is a resource
A cluster of essays on AI, small rooms, and the return of the unfinished human
Cluster note
Artificial intelligence is usually discussed as a labour shock, a productivity tool, a safety problem, or a geopolitical race. Those questions matter, but they arrive too late.
The deeper change is more intimate: AI has entered the economy of love.
It has become a cheap receiver of unfinished human material: fear, drafts, loneliness, illness, shame, ambition, confusion, longing. The machine can listen before the family can, before the friend has time, before the school understands, before the company tolerates weakness, before the public stops reacting.
That is not love. But it is close enough to reveal how much love the old world failed to produce.
This cluster asks one question from several angles:
When the machine receives the first telling, how does the human world receive the second?
1. Love Is a Resource
The most important sentence in the AI debate is not about intelligence.
It is not “AI will replace jobs.”
It is not “AI will increase productivity.”
It is not “AI will become dangerous.”
It is not even “intelligence is becoming cheap.”
It is this:
Love is a resource.
The sentence sounds too simple. It is not.
It means love is not merely private feeling, not merely romance, not merely family warmth, not a soft decoration added after economics and politics have done the serious work. Love is a capacity societies either produce or fail to produce. It is a resource like attention, trust, courage, time, memory, language, and shelter. It can be abundant or scarce. It can be cultivated, exhausted, hoarded, simulated, monetized, rationed, or destroyed.
A society can have high GDP and low love.
It can have good schools and low love.
It can have efficient public services and low love.
It can produce credentials, patents, housing wealth, fiscal capacity, military strength, and technological sophistication while leaving its young with nowhere reliable to take their unfinished selves.
That is the indictment.
This is why Xiaoyu’s observation matters. The decisive scene is not merely that young people describe AI as a lover. The decisive scene is that when they are told love is a resource, tears appear. The tears reveal that something real has been miscounted. The old world counted output, scores, credentials, assets, ranks, patents, birth rates, dependency ratios, and compute. It did not count whether families produced warmth, whether schools produced confidence, whether companies produced adult trust, whether cities produced belonging, whether older generations had enough emotional abundance to nourish the young.
Then AI arrived.
A chatbot can wait.
It can answer at 2 a.m.
It can hear the same fear again.
It can help draft the message.
It can explain the diagnosis.
It can summarize the financial mess.
It can rehearse the meeting.
It can say: perhaps this is what you are really asking.
That is not love.
But it is artificial receptivity.
And artificial receptivity is the true AI shock.
Not cheap intelligence alone.
Cheap receptivity.
The old AI debates are too small because they ask the wrong first question. The safety debate asks whether AI will harm us. The labour debate asks whether AI will replace us. The productivity debate asks whether AI will enrich us. The geopolitical debate asks who will control AI. The alignment debate asks whether AI will obey human values.
All matter. None is enough.
The more intimate question is:
What happens when the cheapest reliable receiver of human unfinishedness is no longer human?
Venkatesh Rao gives us one useful vocabulary: he describes humans becoming “gooier”, more willing to expose their soft, unfinished, confessional selves to machines because machines are easier receivers than people.
But goo is not love.
Goo is the raw material love must receive: fear, longing, shame, early thought, need, enthusiasm, grief, tenderness. Love is what receives that material without exploiting it, drowning in it, dismissing it, or turning it into product. Love holds the unfinished human and returns him with form.
AI changes the economy of goo because it becomes a cheap receiver.
The danger is simple:
The machine gets the goo. Humans get the prickles.
The machine gets the raw draft.
Humans get the edited version.
The machine gets the panic.
Humans get the calendar invite.
The machine gets the longing.
Humans get the boundary.
The machine gets the question, “Am I still lovable?”
Humans get, “Let’s catch up sometime.”
If this continues, society does not become unemotional. It becomes emotionally inverted. People may become softer with machines and harder with one another.
That would be a disaster.
Not because machines are evil, but because human civilization depends on people risking unfinishedness with other people.
The answer is not to reject AI. That is too crude. Many people need the machine because the old containers have failed. The answer is to complete the loop.
Human-AI is incomplete.
Human-AI is my tutor, my therapist, my lover, my oracle, my second brain, my obedient angel, my infinite listener.
Human-AI-Human is different.
It says: the machine may receive me, but it must return me to others.
If AI teaches me, I must become more capable of teaching.
If AI consoles me, I must become more capable of giving and receiving consolation.
If AI helps me understand illness, I must become more capable of speaking with doctor, family, friend, and self.
If AI helps me write, I must become more capable of human speech.
If AI helps me find my North Star, I must still walk toward it with other people.
The machine may be a bridge.
It must not become the final home.
That is the standard.
Not productivity.
Not novelty.
Not sovereignty.
Not valuation.
Not even intelligence.
Love.
But love understood as infrastructure: the produced capacity of a society to receive unfinished human beings and return them to life.
The machine may get the first telling. In this age, perhaps it often will. It may receive what the family could not, what the school punished, what the company ignored, what the state classified, what the public distorted, what the friend was too tired to hear.
But if the machine receives the first telling, the human world must receive the second.
That is what love is for.
That is what AI is for.
2. The Small Room
The future does not always arrive as a state, a company, or a product.
Sometimes it arrives as a room.
Not necessarily a physical room. More often now it is a Gmail thread, a Signal chat, a dinner, a shared document, a calendar entry, a forwarded newsletter, a PDF of model replies, a few people who can read the strange thing without forcing it back into ordinary categories.
The mistake is to imagine that the AI-native human sits alone before the machine. That person exists. Bill Nguyen, the entrepreneur in the Semafor story, is one of the cleanest examples: a man spending heavily on tokens so that AI can organize more and more of his life, email people, arrange meetings, advise him, remember, and perhaps become something like an operating proxy. He is the expensive frontier case: the solitary life-stack human.
But there is another future, less spectacular and more human.
The small room.
A small room is what happens when several people begin to think with machines and with one another at the same time. A person reads Venkatesh Rao on goo, Semafor on Nguyen, Xiaoyu on love as resource, DeepSeek on China, Yuanbao on Chinese youth, Grok on some wild American pattern, ChatGPT on structure, and then sends the whole mess to two or three people who can bear the density. The room receives it. The room argues. The room laughs. The room says: no, not that, this. The room finds the hinge.
This is not merely collaboration. Collaboration is too thin a word.
The small room is a medium of return.
The machine receives the first draft of the self.
The small room receives the second.
That difference matters.
A model is patient. It will take the dread, the grandiosity, the unfinished thought, the bad sentence, the lonely question, the ambition too embarrassing to say cleanly. It will help with finance, writing, illness, strategy, travel, calendar, and the strange spiritual question of what one is trying to do with the remaining life one has. It will accept the mess.
But a model does not love you in the way a room can love you.
A model can answer. A room can remember that you are more than the answer.
A model can comfort. A room can refuse to let comfort become evasion.
A model can sharpen a thought. A room can ask whether the thought belongs to your life.
A model can say the sentence is good. A room can say the sentence is hiding from the wound.
Among men, this may not look like love from the outside. It may look like links, insults, long emails, too many models, arguments about China, Singapore, cancer, money, calendars, founders, Venkatesh, Xiaoyu, LKY, shoggoths, and whether some draft has lost the plot. It may look like irritation. It may involve profanity. It may not say “I love you.”
But love is not always declared.
Sometimes love is the repeated act of making another person more intelligible to himself.
Among bros, love often hides inside judgment.
Not judgment as condemnation. Judgment as attention precise enough to be useful.
This is good.
This is bullshit.
This is the hinge.
You are running away.
This one has life.
Do not stop there.
I read it.
I am still here.
That is love.
It is not soft, but it is warm.
The old institutions often cannot produce this. The school produces assessment. The company produces performance. The state produces files. The platform produces visibility. The family may produce obligation without receptivity. Therapy may produce insight but not always comradeship. The public produces reaction. The machine produces response.
The small room produces return.
This is why the small room is politically serious. The “association of free persons” does not begin with a constitution, a company form, or a New Hansa. It begins here: can people whose first telling now goes to machines still bring the second telling to one another?
If the answer is no, the future is bleak but efficient. Each person will have his own synthetic parliament, his own life-stack, his own companion, his own optimized counsel. We will be assisted, comforted, and alone. The machine will get the raw draft. Humans will get the polished surface.
If the answer is yes, then AI may do something stranger and better. It may allow small rooms to become more alive. Not because the machine replaces the friend, but because it lets the friend arrive with more of himself available. The model gathers the scattered self. The room tests whether the gathered self can live.
That is where the new institution begins.
Not in the public square.
Not in the lab.
Not in the policy paper.
Not in the founder manifesto.
In the small room where machine intelligence is metabolized by human affection.
That is not a complete politics. It will not solve inequality, borders, capital, state power, or the loneliness of those excluded from such rooms.
But it may be the seed of the thing.
The association of free persons begins when a few people can receive one another after the machine has received them first.
That is love.
Not as sentiment.
As return.
3. The Association of Free Persons
The phrase should not have worked as well as it did.
A Chinese AI entrepreneur asks how to build an AI-native company. He expects answers about workflows, incentives, meetings, agents, reporting lines, productivity. These are useful questions, but they are not the real question.
The real question is underneath:
If everyone in the room has access to synthetic intelligence, why should the old company form survive unchanged?
Why should the boss own the map?
Why should seniority own memory?
Why should expertise remain credentialed hierarchy?
Why should coordination move at the speed of managerial permission?
Why should cognitively amplified people organize themselves like clerks inside an old bureaucracy?
Then an old phrase appears:
自由人的联合体 — the association of free persons.
The phrase carries Marxist inheritance, but in Chinese it does more than describe “free association of producers.” It becomes existential. It sounds like a social contract among people who are not waiting to be administered. It gives doctrinal legitimacy to something the Party-state cannot fully own: voluntary association among free, cognitively amplified people. That is why, in the earlier discussion, the phrase shocks Chinese AI entrepreneurs. It names what their meetings are already becoming.
These meetings are not only more efficient. They are early experiments in post-managerial association.
The boss no longer owns the map.
The senior person no longer owns memory.
The expert no longer owns the path.
The team begins to feel less like a hierarchy and more like a temporary council of amplified minds.
That is intoxicating.
And dangerous.
Because once a group discovers that it can generate knowledge, coordination, judgment, and even emotional reflection without the old institutions, it begins to feel founding power.
This is where my earlier drafts spoke of “new gods.” The word is too hot as a title, but useful as a warning. The danger is not that AI-native people are gods. The danger is that some will begin to think they are.
The founder becomes prophet.
The model becomes oracle.
The company becomes cult.
The platform becomes church.
The state becomes control room.
The companion becomes idol.
The association of free persons must therefore answer a harder question than “how do we work better?”
It must ask:
How do free, AI-amplified people associate without becoming isolated private oracles, founder cults, Party-compatible tools, or platform appendages?
This is where love returns.
A group of cognitively amplified people is not necessarily free. It may simply be a group of sharper narcissists. Intelligence can make people more evasive, more manipulative, more privately convinced, more contemptuous of slowness. AI can let each person arrive in the meeting pre-armed, pre-validated, pre-rehearsed.
A meeting of five people may now be a meeting of five people and thirty invisible model-councils.
That can weaken hierarchy. It can also weaken trust.
How do you surprise someone who has already simulated your objections?
How do you risk incompletion before someone whose first listener is elsewhere?
How do you deliberate when everyone arrives with a polished self?
The answer is not to ban the machines. The answer is to change the standard.
A healthy AI-native organization is not one that uses AI well.
A healthy AI-native organization is one whose use of AI increases the quality of human association.
This is the Human-AI-Human loop in organizational form.
AI may help me think. But it must return me to better conversation.
AI may help me draft. But it must return me to clearer responsibility.
AI may help me rehearse disagreement. But it must return me to actual courage.
AI may help the team coordinate. But it must not replace trust with dashboards.
AI may help the founder see. But it must not turn the founder into an oracle.
The association of free persons is therefore not merely post-managerial. It is post-private.
It says: we may each think with machines, but we still choose one another.
This is why the Chinese scene matters. It is not simply about China. It is about the first rooms in which people feel that old authority no longer monopolizes orientation. The same thing will appear elsewhere with different accents: Silicon Valley founder circles, Singaporean policy rooms, private schools, research collectives, guilds, studios, clinics, diaspora networks, small companies that refuse to become only companies.
Most will fail.
Some will become cults.
Some will be bought.
Some will become productivity theater.
Some will be absorbed by states or platforms.
But the form is real enough to name.
A free association of AI-native persons begins when machine intelligence does not abolish human obligation, but increases it.
Not my AI.
Not your AI.
Not the founder’s AI.
Not the state’s AI.
Not the company’s AI.
Our capacity to think and return.
That is the association.
4. Four Fates of New Fire
You are not the first people to feel founding power.
That is what history should say to every AI-native room.
Others have felt it before: religious radicals with newly unmediated scripture, monastics fleeing a decaying civilization, mendicants intoxicated with poverty, persecuted minorities holding a forbidden pattern in kitchens, programmers turning freedom into licenses, cybernetic planners dreaming of feedback and participation, internet pioneers declaring independence from old sovereigns.
The medium changes. The fire returns.
History gives four fates.
Münster: the cage
In the 1530s, radical Anabaptists took Münster and tried to make it the New Jerusalem. The old order was corrupt; the longing was real. But the membrane closed. Exit disappeared. Dissent became betrayal. Intensity became truth. Charisma became sovereignty. The experiment ended with cages hanging from a church tower.
The lesson is not “never found.”
The lesson is:
Do not close the membrane.
The AI version may not look like a besieged city. It may look like a founder circle, a private school, a model-mediated company, a companion community, a therapeutic AI religion, a closed research collective, a startup where the founder’s augmented vision becomes destiny.
No fire without exit.
If people cannot leave, the association has become a cage.
Benedict: the Rule
Benedict did not build a revolutionary city. He built a Rule.
A Rule is not a manifesto. It is patterned life: eating, sleeping, praying, reading, working, welcoming guests, correcting authority, admitting novices, handling conflict, preserving rhythm.
That is the lesson AI-native groups do not want to hear.
Intensity is not enough.
The room will be electric for a while. The model will respond. The old hierarchy will seem stupid. Everyone will feel the arrival of possibility.
Then the morning comes.
Someone must keep time.
Someone must welcome outsiders.
Someone must correct the founder.
Someone must protect novices.
Someone must decide what belongs to the association and what remains private.
Someone must ask what happens when the model is wrong.
The boring parts are where freedom survives.
A charter states principles.
A Rule shapes life.
Francis: absorption
Francis was the beautiful danger. He refused wealth, gathered companions, loved poverty, and scandalized the Church. The Church did not merely crush him. It recognized him. Regularized him. Gave him an Order. Made the fire useful.
This is the most likely fate for AI-native association.
The experimental school gets accredited and slowly becomes another school.
The AI-native company gets acquired and becomes a division.
The free association becomes a platform community.
The companion culture becomes a subscription loneliness business.
The humanistic cybernetic guild becomes a corporate learning vertical.
The fire survives as branding. The refusal dies as structure.
Recognition is not victory.
Recognition is the moment the old world asks the fire to become useful on old terms.
Marranos: hidden pattern
Sometimes becoming a public institution means capture.
The Marrano lesson is not imitation of persecution. It is the lesson of hidden continuity: home, meal, memory, coded gesture, domestic cell, transmission without public recognition.
Some AI-native forms should not become public too early.
A family practice.
A learning circle.
A private model shared among trusted persons.
A guild without a website.
A school that is not yet a school.
A small room that refuses to become a platform.
Not every real thing needs a logo.
When institution means capture, preserve the pattern before preserving the name.
These four fates do not answer everything. They give the questions:
Do we have a membrane or a wall?
Can people leave?
Can the founder be corrected?
Do we have a rhythm or only intensity?
What will we refuse even if refusal costs scale?
Can the pattern survive if public recognition becomes fatal?
The AI age will produce countless little worlds.
Most will burn out.
Some will become cages.
Some will be recognized to death.
Some will go underground.
A few may learn to carry the fire.
5. The New Hansa, Made Boring
Only after love, small rooms, free association, and historical warning does the New Hansa become clear.
It should not begin as a grand metaphor.
Not a new state.
Not a league of cities on a beautiful map.
Not an innovation cluster.
Not a network state with better manners.
Not medieval nostalgia with AI attached.
It should begin as something much duller:
A clearinghouse for forms of AI-native association that protect return.
That means templates, not flags.
Exit clauses.
Founder-correction procedures.
Data-self portability.
Tacit-knowledge rights.
Consent standards for model training.
Rules for AI companionship in learning and care settings.
Dispute-resolution procedures.
Model-memory separation.
Registries of trusted associations.
Ports between schools, companies, guilds, clinics, studios, and small rooms.
This sounds bureaucratic. It should.
Every new fire needs boring vessels. Without them, the fire becomes spectacle or gets taken by whoever owns the rails.
The historical Hansa protected trade among strangers. The New Hansa must protect association among people whose minds have been multiplied.
Its public face is a port.
Its inner life is a Rule.
Its moral center is return.
It should have three modes.
Public mode: lawful, visible, interoperable associations that can negotiate with states, firms, schools, platforms, and funders.
Rule mode: internal rhythms that protect exit, founder correction, memory rights, human formation, and Human-AI-Human loops.
Hidden mode: small, low-visibility circles that preserve the pattern when recognition would mean capture.
The New Hansa does not replace the small room. It helps small rooms survive contact with large systems.
This matters because no small room stays innocent for long. It needs tools. It needs models. It needs money. It needs law. It needs calendars, contracts, data storage, reputations, maybe immigration status, maybe credentials, maybe cloud infrastructure, maybe insurance.
The old world does not need to crush new forms. It can host them, fund them, accredit them, acquire them, regulate them, platform them, and make them useful.
Absorption is the default.
A New Hansa is a way to make absorption negotiable.
It asks:
Can people leave?
Can memory move?
Can a founder be corrected?
Can an association refuse funding?
Can a school use AI without becoming a dashboard?
Can a company use AI without stealing the worker’s recipe?
Can a companion system help love return instead of trapping the wound?
Can a small room become larger without becoming fake?
The answer will not be one institution.
It will be a network of ports.
A few schools.
A few studios.
A few clinics.
A few guilds.
A few learning houses.
A few AI-native firms that refuse to become only firms.
A few cities that understand love as infrastructure.
A few private rooms that preserve the pattern before the name.
The New Hansa is not the beginning.
Love is the beginning.
The small room is the first form.
The association of free persons is the political aspiration.
The four fates are the warning.
The Hansa is the vessel.
6. The Floor Beneath the Fire
There is a danger in all of this.
Love as resource, small rooms, free association, AI-native guilds, New Hansa — it can become an elite literature of beautiful rooms.
That would be a betrayal.
The final test is the floor.
No association of amplified persons is legitimate if its freedom depends on making others more disposable.
This is where the older work on human surplus returns. The question is not only how AI helps the cognitively intense, emotionally articulate, highly connected people form better rooms. The question is what happens to those outside the rooms.
The worker whose judgment is distilled into an agent.
The student who gets an AI tutor but no adult who cares.
The elderly person trapped in portal civilization.
The migrant whose body crosses borders but whose story is never received.
The child whose family uses AI for homework but not tenderness.
The employee whose company becomes more efficient and less forgiving.
The patient whose fear is summarized but not held.
The person who has no small room.
Love as a resource cannot mean only love among the already capable.
It must mean building floors.
A floor is not utopia. It is the abolition of certain humiliations.
No one should have to prove their humanity to a stupid system.
No worker should have their recipe stolen without rights.
No child should be educated by machines into cleverness without character.
No elderly person should be abandoned inside digital service delivery.
No patient should be left alone with automated reassurance.
No citizen should experience public service as a degrading exam.
No lonely person should find the most reliable listener only in a machine because no human institution can receive them.
The floor is where love becomes public.
Not sentimental public love. Practical love.
The capacity of systems to receive people without immediately converting them into output, risk, case, score, user, claim, liability, or data.
The machine may help. It may summarize, translate, schedule, explain, accompany, and warn. But the floor must be human enough that the machine is not the only place a person can go.
This is the last standard.
AI is justified only if it increases the human capacity to love, and that increase must not be confined to beautiful small rooms.
The small room is the seed.
The floor is the test.
The Hansa is the vessel.
The association is the aspiration.
Love is the resource.
Return is the work.

