
An Applied Organic Alignment Lab vision 🧱

My friend Ivan is working on a project called the Applied Organic Alignment Lab and he asked me to write up my version of how I’d approach such a project, in this era. It’s written first as an invitation I imagined Ivan writing, followed by some of the preamble thoughts I had while iterating towards that invitation. I don’t hew too hard to it being a realistic thing for Ivan to say—I basically write the version of it that Ivan might write if he had access to every thought I’d ever had and all of my writing including unpublished stuff—which is interestingly meta/appropriate to this project itself!

The invitation, written as if I were Ivan

I’m assembling a crew of 5-6 people for the purpose of creating a human+AI superorganism that will be the full-meta-trust kernel of a scalable high-meta-trust network.

The foundational hypotheses of this project are:

  1. a small devoted group of people working to care for each other’s needs and desires and growth, and to orient to the world together, can produce outsized upspiralling for those people compared to people attempting to navigate alone
  2. we have access to adequate psychotechnology to do this without it falling into a cultish attractor, given sufficient starting maturity and integration of the people
  3. current and imminent AI tech can allow us to
    1. increase our mutual surface area to find win-wins and mutual insights
    2. embed our process and culture into a system that can much more readily scale:
      1. laterally, to adjacent similar groups
      2. vertically, to allow our crew itself to grow in size
  4. doing this will be profoundly good for the individuals involved and for the world at large, with no sense of compromise between the two scales.  in a sense, the mission is to create a tiny eutopia and then scale it to everybody.  or put another way, a tiny stronghold of God’s Kingdom, which can then reach out and invite everybody in
  5. an even deeper hypothesis is that in general omniwin games are everywhere for those with eyes to see, and that there’s no fundamental conflict keeping everybody from being satisfied, only skill issues—and relatedly, that one man’s problem is another man’s opportunity

As Malcolm Ocean put it in his notes on [[homecoming]]:

I want you to have: everything that you actually deeply coherently want and the entire path of clarifying and realizing those wants, exactly how you want it, with other people who want it with you, accounting for all of the things that feel naive to you about the previous description.

I want everyone to have that, and, we have to start somewhere and with a smaller group, and what I want you to know is that if you are in and can access in yourself wanting the same for me, I’m game to invest in this relationship in order to make that happen for all of us.

What’s the aim?

A care attractor is a system that many distinct agents all have a vested interest in maintaining the health of, because its surviving and thriving enables their surviving and thriving.  There are many kinds of examples of care attractors: families, cities, countries, companies, friend groups, communities, ethical systems, religions, myths, networks, platforms such as twitter.

Our aim is to create a conscious, reciprocally-amplifying care attractor: a system that has the property that the more we care for it, the more it cares for us. Where we get out way more than we put in. And where we trust that it has that property, and where it doesn’t just care for our main cares while shadowing our other cares, but ever-increasingly enfolds more and different aspects.

And of course it’s not going to be perfect at that on day one, but that’ll be what we’re ongoingly aiming towards as we steer its development.

That system will involve a culture—a shared set of assumptions, views, practices, for coming into sync etc—as well as hard technology that enables us to increase the amount of information about each of us that we’re able to work with.

Caring for each other’s needs

In general, the aim here is to cultivate a lived reality of a culture that intends to, and is broadly capable of, welcoming everything that arises for everybody in it. And then, to the degree that it is able, not just welcoming the reality of the needs and desires, but working to meet them.

There’s going to be a bunch of low-hanging fruit:

  • problems people have that they know about and are trying to solve, but just lack some insight or skill or ability to orient to it
  • problems people are having trouble solving because they just need a bit of encouragement or hand-holding
  • problems people have that they don’t feel like they’re allowed to care about or take seriously, and simply having other people be like “no, that’s worth attending to and doing something about” makes a difference
  • ways in which people’s lives can be improved or problems can be solved that they just haven’t even heard of but that they’re stoked to try once others introduce them

We’d come at the needs and desires from a few angles:

  1. have each member write down:
    1. the main things they’re struggling with on a functional/survival level
    2. the visions/dreams/ambitions they have that they feel blocked on realizing
  2. do an inventory of various dimensions:
    1. health {gut, sleep, exercise, &+}
    2. life ops {taxes, IDs, finances, &+}
    3. relationships {family, friends, past networks, &+}
    4. romance etc
    5. spiritual connection
    6. life ambitions, artistic endeavors
    7. &+
  3. have people orient to each other and gently consider what seems like it might be blocking others

Then we’d investigate where the highest leverage moves seem to be, and help each other take them. Maybe we help someone who’s dealing with low energy due to gut stuff finally get set up with some medical tests and a modern AI-yurveda programme.  Maybe we help someone who feels hopeless about romance because of their last breakup sort through that.  Maybe we help someone finally launch a product they’ve been stuck in a perfectionistic loop about and need a bit of help creating a marketing plan for or even doing some of the marketing steps. Maybe three people form a daily meditation & chanting ritual, and another two form a daily tennis practice. Maybe two people realize they’ve both been wanting to learn something, like a language or a skill or something from a niche online course, and so they create a shared context to practice together.

Psychotechnology for compression bandwidth

Talking about some of this stuff can be hard for many reasons.  Sometimes people have shame, where it’s hard to acknowledge they have a problem.  Sometimes people feel like they’re unworthy of having their problems solved.  These techniques might help.

In bullet-form, some things we might use:

  1. communicating
    1. general authentic relating type practices: circling, stemless, &+
    2. metaprotocol stuff – working with breakdowns
    3. practicing giving difficult/weird feedback
  2. prioritizing
    1. inviting people to notice where they don’t feel free and why
    2. personal-hamming-problems: where are you most blocked? why are you blocked on it?  recurse until there’s wiggle room.  how can others help?
    3. money pile, collectively acknowledging needs and making offers
  3. inspiring & synchronizing
    1. faith/worldview-shifting work (Malcolm is working on writing this up)
    2. singing together, sports, hakas, and other means of group sync
    3. shared readings, including reading together
    4. writing up our understandings
  4. healing
    1. unwinding original spin – deals with shame and unworthiness
    2. therapy and other emotion-work
    3. trauma-release exercises, bodywork, etc
  5. chilling
    1. playing video games, hanging out, making & eating food, road trips
    2. inviting friends over, going out to other events
      (these sorts of things help keep things from becoming too intense and high-demand)
      (we’re not here to fix ourselves, we’re here to live awesome lives!)

We’ll be basing our psychological approach on Malcolm’s How we get there, which outlines the basic process of how to build meta-trust among a group of people, by treating the obstacle as the way.

AI technology for raw bandwidth

One hypothesis we hold, gestured at briefly above, is that there exist many unnoticed win-wins that match the following description: one person has a problem, that another person would love to solve.  Could be:

  • interpersonal, eg: I’d like a massage, you’d like to give me one; I’d like to share my singing, you’d like to listen to singing
  • project-y, eg: I’d like to build an app, you’d like to market it; I’d like to share my singing, you’d like to produce someone’s song
  • business, eg: I’d love to build an app but need money, you’d like to pay for it or invest
  • intellectual, eg: I have a confusion, you have a helpful framework you’d be happy to share
  • (maybe some other categories, these are loose anyway)

How can we notice way more of these opportunities?  Places where once we realize that such a win-win exists, there’s hardly even a step where we need to agree to do it, because it’s just so obviously the next move.  Actions that release psychological energy, that once you know about them you would have to exert effort NOT to do them.

There are probably many of these, but we don’t know what they are!

We need to increase our interpersonal surface area.  Essentially, a recommender system but one that works for us rather than for the platform/advertisers.
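To make the recommender idea concrete, here is a minimal pairwise sketch. Everything in it (the names, the tags, and the needs/offers schema) is a hypothetical illustration for this post, not a proposed design:

```python
from itertools import permutations

# Hypothetical schema: each member shares a set of tagged needs and offers.
profiles = {
    "ana":  {"needs": {"marketing help"},    "offers": {"song production"}},
    "ben":  {"needs": {"song production"},   "offers": {"marketing help"}},
    "cleo": {"needs": {"gut-health advice"}, "offers": {"massage"}},
}

def pairwise_win_wins(profiles):
    """Yield (giver, receiver, item) wherever one person's offer
    intersects another person's stated need."""
    for giver, receiver in permutations(profiles, 2):
        for item in profiles[giver]["offers"] & profiles[receiver]["needs"]:
            yield (giver, receiver, item)

matches = list(pairwise_win_wins(profiles))
# ana and ben match each other; cleo's need goes unmet by this crew
```

Real journal entries and chatlogs aren’t neatly tagged, of course; the bet here is that LLMs can do the fuzzy extraction and matching that this exact-set-intersection toy can’t.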

Consciousness can be understood (via global workspace theory) as the space into which different cares/concerns/considerations/conflicts need to be brought and held until the obvious move emerges.  Can we create a digital workspace that enables that to happen at a whole different scale, ongoingly/continuously? LLMs might allow the right connections to be made, solving certain combinatorial problems at various scales.

Currently a lot of what is going on for me and for you is buried in private data in our own journal entries and LLM chatlogs.  And we don’t want to make that data fully public, for many reasons, but it would be nice if a trusted system that mediates between us could look at both and suggest to us things we might want to know about potential win-wins.

We’ll need to build trust that sharing our information with such a system is beneficial and safe.  Building the system ourselves will be a great start for that, but even so it may have to be a bit of a gradual process.  We’ll experiment and see how it feels.  It may also scale faster if it starts with semi-automatic upload of many things rather than automatic upload of everything, since the latter may cause people to start hiding thoughts they don’t feel ready to share with the system.  Transparency is surveillance, and some thoughts need a bit of time to percolate.

We’ll also need to be able to model not just the current needs/problems of the people involved, but also their tendencies, interests, general skills and capacities, and so on.  This may involve psychometrics and other mapping systems.

We’ll need to iterate to tune the system, to do the right amount of deliberate matching vs serendipitous / hunch-based connecting, and to find the right balance of AI-proposals and human filtering.

Autocatalytic sets

In some sense, we’re aiming to create a denser, higher-bandwidth autocatalytic set—riffing on Stuart Kauffman’s idea that the origin of life was not a single “replicator” that makes more of itself but an ecosystem of molecules that together create more of each other. The economy is already kind of an autocatalytic set, but it misses tons of opportunities because of how it’s structured. Every product that could be made, that tons of people would love to buy, but that nobody is making, is one of these missed opportunities. But so are much smaller-scale things.

An analogy is like, there’s a difference between having a loose collection of molecules that happen to generate more of each other, and having an organism.

The autocatalytic set metaphor applies on two scales:

  1. it might be that for any given 2 people, only moderately energy-releasing win-wins exist, where one person’s problem is the other’s solution.  but maybe there are some epic win-wins that require simultaneously modelling what 3-5 people want and coming up with a proposal that will work amazingly only if all of them get involved
  2. in order to have enough energy-releasing interactions to have an epic scalable dynamo that grows and is capable of more and more, we might need interactions between more than just the 5-6 people in the core crew.  but we can get that!  we know people—friends, twitter mutuals, etc.  and if this is working well at all, others will want to plug into it.  and we might be able to also use other larger-scale systems to identify potential collaborators outside our network
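As a toy illustration of the first scale (again with a hypothetical needs/offers schema and made-up names), here’s a three-way loop that no pairwise matcher would find, since no two members’ needs and offers line up directly:

```python
from itertools import combinations

# Hypothetical schema: each member lists tagged needs and offers.
# No pair here is a match, but the trio closes a three-way loop.
profiles = {
    "ana":  {"needs": {"app design"},  "offers": {"investment"}},
    "ben":  {"needs": {"investment"},  "offers": {"marketing"}},
    "cleo": {"needs": {"marketing"},   "offers": {"app design"}},
}

def group_win_wins(profiles, min_size=3, max_size=5):
    """Find groups where every member's needs are covered by the
    combined offers of the *other* members -- a crude stand-in for
    the 3-5-person simultaneous modelling imagined above."""
    hits = []
    for k in range(min_size, min(max_size, len(profiles)) + 1):
        for group in combinations(profiles, k):
            covered = all(
                profiles[p]["needs"] <= set().union(
                    *(profiles[q]["offers"] for q in group if q != p)
                )
                for p in group
            )
            if covered:
                hits.append(group)
    return hits

groups = group_win_wins(profiles)  # the ana/ben/cleo trio qualifies
```

Brute-force subset search like this blows up combinatorially, but at crew scale (5-6 people, groups of 3-5) it’s trivial; the interesting version is letting an LLM do something analogous over fuzzy, unstructured descriptions.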

Building this tool together and dogfooding it will be the initial central shared endeavour of the whole group.

Conclusion

🚀

Appendix: preamble

In writing this for Ivan, I wrote the below before the above, but I felt it made a better presentation on my blog to start with the invitation, since it’s self-contextualizing whereas the below is not.

How we get there

The first thought that I have is that this is exactly what I talk about in How we get there (on gumroad here).  But after a few more beats, while I think that HWGT is very relevant, it’s got a slightly different focus. It was written, to be clear, in response to an earlier instance of somebody asking “okay so you have all these cool visions of the meta-team, but how do we actually get there? what’s your theory of change?”

What they have in common is that both are about creating a group with high trust.  However, I have some sense that this AOAL concept starts more with “how can we actually solve each other’s viscerally-felt problems?” whereas the HWGT concept is focused more on something like including everything dialogue-wise.  I ultimately think both elements are needed:

  • If you have amazing dialogues but you can’t actually turn your attention and energy to solving the problems that are dear to your heart, and succeed at that, what’s even the point? It’s as if the dialogue is made of floating abstract perspectives, not organisms with skin in the game.
  • But meanwhile, if you are focused on solving immediate problems and you can’t sort out the trust issues that come up, it’s gonna fall apart or explode or turn into a techbro scene where you’re solving lots of concrete external problems but failing to orient to psychologically-blocked ones.

So I’d recommend HWGT as required reading for this lab for sure, especially if social surface area is high, eg if living together or working full-time in the same space. But it doesn’t answer the whole puzzle.

HWGT is also missing some of my new mid-2025 understandings about the role of faith shifts in generating the vivid experience of everything being welcome.  My understanding of the core engine needed to create such a crew, when I wrote HWGT, had underappreciated the faith/choice stuff, because it had been so central in Waterloo, and so intense and challenging, and was focused on the trust/welcoming stuff, because it was a critical patch that the Waterloo scene was missing.  But after a few years of using my trust/welcoming theory of change, it’s clearly insufficient to generate the magic we experienced in Waterloo.  But I think I’ve reverse-engineered the key pieces now, and am ready to experiment with a version that has both the original mechanism as well as my new patch.  It feels like since leaving Waterloo I’ve been trying to build a nuclear reactor with only control rods and no uranium, and while that’s safer than uranium and no control rods, it also doesn’t get any energy.  But with both, now we’re talking!  Tho still a lot of care and careful slow scaling up needed.  And also, if the nuclear energy analogy holds, some models & math to predict where the safe limits are.

Pragmatics

Okay, with that established, what do I have to say about this project as proposed?

One initial question is: even IF it’s the case that founding it on AI allows this mutual aid mechanism to scale, wouldn’t it be fucking dope if you were able to create a crew that solves all of each other’s problems without AI?  and if so, why are you not already doing it?  like, in what sense is AI the bottleneck?

I ask this in part because it seems to me that it’d be quite possible to end up doing a sort of premature automation/optimization here, where you don’t actually know the basic mechanism by which such a thing would work in a non-scalable way, and so how are you supposed to scale it?

But having said that, I do like the idea of a crew that is simultaneously:

  • building an AI/data tool to help 100× usable surface area between the members
  • working on figuring out the immediate win-wins that are available with whatever level of transparency is already present

Feels like it could be a good engine.

Concretely

If I imagine you describing to me that you’ve got some sort of arrangement for this, what’s the description of it that would make me think that you’re most likely to succeed?  I feel like that’s a good frame for tapping my intuitions here.  Similar but distinct is like, what would make me most likely to want to join?
