My friend Ivan is working on a project called the Applied Organic Alignment Lab and he asked me to write up my version of how I’d approach such a project, in this era. It’s written first as an invitation I imagined Ivan writing, followed by some of the preamble thoughts I had while iterating towards that invitation. I don’t hew too hard to it being a realistic thing for Ivan to say – I basically write the version of it that Ivan might write if he had access to every thought I’d ever had and all of my writing, including unpublished stuff – which is interestingly meta/appropriate to this project itself!
I’m assembling a crew of 5-6 people for the purpose of creating a human+AI superorganism that will be the full-meta-trust kernel of a scalable high-meta-trust network.
The foundational hypotheses of this project are:
As Malcolm Ocean put it in his notes on [[homecoming]]:
I want you to have: everything that you actually deeply coherently want and the entire path of clarifying and realizing those wants, exactly how you want it, with other people who want it with you, accounting for all of the things that feel naive to you about the previous description.
I want everyone to have that, and, we have to start somewhere and with a smaller group, and what I want you to know is that if you are in and can access in yourself wanting the same for me, I’m game to invest in this relationship in order to make that happen for all of us.
What’s the aim?
A care attractor is a system that many distinct agents all have a vested interest in maintaining the health of, because its surviving and thriving enables their surviving and thriving. There are many kinds of examples of care attractors: families, cities, countries, companies, friend groups, communities, ethical systems, religions, myths, networks, platforms such as twitter.
Our aim is to create a conscious, reciprocally-amplifying care attractor: a system that has the property that the more we care for it, the more it cares for us. Where we get out way more than we put in. And where we trust that it has that property, and where it doesn’t just care for our main cares while shadowing our other cares, but ever-increasingly enfolds more and different aspects.
And of course it’s not going to be perfect at that on day one, but that’ll be what we’re ongoingly aiming towards as we steer its development.
That system will involve a culture – a shared set of assumptions, views, and practices for coming into sync etc – as well as hard technology that enables us to increase the amount of information about each of us that we’re able to work with.
In general, here the aim is to cultivate a lived reality of a culture that intends to and is broadly capable of welcoming everything that arises for everybody in it. And then, to the degree that it is able, not just welcoming the reality of the needs and desires, but working to solve them.
There’s going to be a bunch of low-hanging fruit:
We’d come at the needs and desires from a few angles:
Then we’d investigate where the highest leverage moves seem to be, and help each other take them. Maybe we help someone who’s dealing with low energy due to gut stuff finally get set up with some medical tests and a modern AI-yurveda programme. Maybe we help someone who feels hopeless about romance because of their last breakup sort through that. Maybe we help someone finally launch a product they’ve been stuck in a perfectionistic loop about and need a bit of help creating a marketing plan for or even doing some of the marketing steps. Maybe three people form a daily meditation & chanting ritual, and another two form a daily tennis practice. Maybe two people realize they’ve both been wanting to learn something, like a language or a skill or something from a niche online course, and so they create a shared context to practice together.
Talking about some of this stuff can be hard for many reasons. Sometimes people have shame, where it’s hard to acknowledge they have a problem. Sometimes people feel like they’re unworthy of having their problems solved. These techniques might help.
In bullet-form, some things we might use:
We’ll be basing our psychological approach on Malcolm’s How we get there, which outlines the basic process of how to build meta-trust among a group of people, by treating the obstacle as the way.
One hypothesis we hold, gestured at briefly above, is that there exist many unnoticed win-wins that match the following description: one person has a problem, that another person would love to solve. Could be:
How can we notice way more of these opportunities? Places where once we realize that such a win-win exists, there’s hardly even a step where we need to agree to do it, because it’s just so obviously the next move. Actions that release psychological energy, that once you know about them you would have to exert effort NOT to do them.
There are probably many of these, but we don’t know what they are!
We need to increase our interpersonal surface area. Essentially, a recommender system but one that works for us rather than for the platform/advertisers.
Consciousness can be understood (via global workspace theory) as the space into which different cares/concerns/considerations/conflicts need to be brought and held until the obvious move emerges. Can we create a digital workspace that enables that to happen at a whole different scale, ongoingly/continuously? LLMs might allow the right connections to be made, solving certain combinatorial problems at various scales.
Currently a lot of what is going on for me and for you is buried in private data in our own journal entries and LLM chatlogs. And we don’t want to make that data fully public, for many reasons, but it would be nice if a trusted system that mediates between us could look at both and suggest to us things we might want to know about potential win-wins.
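As a hedged sketch of what the matching core of such a mediating system might look like (all names, example data, and the similarity method here are invented for illustration; a real version would presumably use LLM embeddings over private journals rather than bag-of-words over toy strings):

```python
from collections import Counter
import math

def vectorize(text: str) -> Counter:
    """Toy bag-of-words vector; a real system would use LLM embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def suggest_win_wins(needs: dict, offers: dict, threshold: float = 0.2):
    """Surface pairs where one person's need overlaps another's offer."""
    suggestions = []
    for person_a, need in needs.items():
        for person_b, offer in offers.items():
            if person_a == person_b:
                continue
            score = cosine(vectorize(need), vectorize(offer))
            if score >= threshold:
                suggestions.append((person_a, person_b, round(score, 2)))
    return sorted(suggestions, key=lambda s: -s[2])

# Hypothetical example data, echoing scenarios from earlier in the post.
needs = {
    "ana": "stuck on a marketing plan for my product launch",
    "ben": "want a partner for daily tennis practice",
}
offers = {
    "cam": "love writing marketing plan drafts and product launch copy",
    "ben": "happy to help debug gut health protocols",
}
print(suggest_win_wins(needs, offers))
```

The point of the sketch is just the shape: each person's private material stays with the mediator, and only the suggested pairings surface. Here ana's marketing problem gets matched to cam's offer, while ben's tennis need finds no match and stays quiet.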
We’ll need to build trust that sharing our information with such a system is beneficial and safe. Building the system ourselves will be a great start for that, but even so it may have to be a bit of a gradual process. We’ll experiment and see how it feels. It also may scale faster if it starts with more like semi-automatic upload of many things. Automatic upload of everything may cause people to start hiding certain thoughts they don’t feel ready to share with the system. Transparency is surveillance, and some thoughts need a bit of time to percolate.
We’ll also need to be able to model not just the current needs/problems of the people involved, but also their tendencies, interests, general skills and capacities, and so on. This may involve psychometrics and other mapping systems.
We’ll need to iterate to tune the system, to do the right amount of deliberate matching vs serendipitous / hunch-based connecting, and to find the right balance of AI-proposals and human filtering.
In some sense, we’re aiming to create a denser, higher-bandwidth autocatalytic set – riffing on Stuart Kauffman’s idea of the origin of life as not a single “replicator” that makes more of itself but an ecosystem of molecules that together create more of each other. The economy is already kind of an autocatalytic set, but it misses tons of opportunities because of how it’s structured. Every product that could be made that tons of people would love to buy but nobody is making is an example of one of these missed opportunities. But also much smaller-scale things.
An analogy is like, there’s a difference between having a loose collection of molecules that happen to generate more of each other, and having an organism.
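The dynamic Kauffman points at can be seen in a toy simulation (all molecule names and rates below are made up purely for illustration): A, B, and C form a closed catalytic loop and so amplify each other, while D, which nothing in the set catalyzes, slowly decays away.

```python
# Toy autocatalytic set (after Kauffman): each "molecule" is produced
# at a rate boosted by the current abundance of its catalysts.
catalysts = {
    "A": ["C"],  # A's production is catalyzed by C
    "B": ["A"],
    "C": ["B"],
    "D": [],     # D sits outside the loop: nothing here catalyzes it
}

def step(counts, boost=0.1, decay=0.05):
    """One update: catalyzed production minus uniform decay."""
    new = {}
    for mol, cats in catalysts.items():
        production = sum(counts[c] for c in cats) * boost
        new[mol] = max(0.0, counts[mol] + production - counts[mol] * decay)
    return new

counts = {m: 1.0 for m in catalysts}
for _ in range(50):
    counts = step(counts)
print({m: round(v, 2) for m, v in counts.items()})
```

After 50 steps the loop members have grown roughly tenfold while D has dwindled toward zero: membership in the mutually-generating set is what makes the difference, which is the organism-vs-loose-collection point above.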
The autocatalytic set metaphor applies on two scales:
Building this tool together and dogfooding it will be the initial central shared endeavour of the whole group.

In writing this for Ivan, I wrote the below before the above, but I felt it made a better presentation on my blog to start with the invitation, since it’s self-contextualizing whereas the below is not.
How we get there
The first thought that I have is that this is exactly what I talk about in How we get there (on gumroad here). But after a few more beats, while I think that HWGT is very relevant, it’s got a slightly different focus. It was written, to be clear, in response to an earlier instance of somebody asking “okay so you have all these cool visions of the meta-team, but how do we actually get there? what’s your theory of change?”
What they have in common is that both are about creating a group with high trust. However, I have some sense that this AOAL concept is starting more with “how can we actually solve each other’s viscerally-felt problems?” whereas the HWGT concept is focused more on something like, including everything dialogue-wise. I ultimately think both elements are needed:
So I’d recommend HWGT as required reading for this lab for sure, especially if social surface area is high, eg if living together or working full-time in the same space. But it doesn’t answer the whole puzzle.
HWGT is also missing some of my new mid-2025 understandings about the role of faith shifts in generating the vivid experience of everything being welcome. My understanding of the core engine needed to create such a crew, when I wrote HWGT, had underappreciated the faith/choice stuff, because it had been so central in Waterloo, and so intense and challenging… and was focused on the trust/welcoming stuff, because it was a critical patch that the Waterloo scene was missing. But after a few years of using my trust/welcoming theory of change, it’s clearly insufficient to generate the magic we experienced in Waterloo. But I think I’ve reverse-engineered the key pieces now, and am ready to experiment with a version that has both the original mechanism as well as my new patch. It feels like, since leaving Waterloo, I’ve been trying to build a nuclear reactor with only control rods and no uranium, and while that’s safer than uranium and no control rods, it also doesn’t generate any energy. But both… now we’re talking! Though still a lot of care and careful slow scaling up needed. And also, if the nuclear energy analogy holds, some models & math to predict where the safe limits are.
Pragmatics
Okay, with that established, what do I have to say about this project as proposed?
One initial question is: even IF it’s the case that founding it on AI allows this mutual aid mechanism to scale, wouldn’t it be fucking dope if you were able to create a crew that solves all of each other’s problems without AI? and if so, why are you not already doing it? like in what sense is AI the bottleneck?
I ask this in part because it seems to me that it’d be quite possible to end up doing a sort of premature automation/optimization here, where you don’t actually know the basic mechanism by which such a thing would work in a non-scalable way, and so how are you supposed to scale it?
But having said that, I do like the idea of a crew that is simultaneously:
Feels like it could be a good engine.
Concretely… if I imagine you describing to me that you’ve got some sort of arrangement for this, what’s the description of it that would make me think that you’re most likely to succeed? I feel like that’s a good frame for tapping my intuitions here. Similar but distinct is like, what would make me most likely to want to join?
Constantly consciously expanding the boundaries of thoughtspace and actionspace. Creator of Intend, a system for improvisationally & creatively staying in touch with what's most important to you, and taking action towards it.