An Applied Organic Alignment Lab vision đŸ§±

My friend Ivan is working on a project called the Applied Organic Alignment Lab and he asked me to write up my version of how I’d approach such a project, in this era. It’s written first as an invitation I imagined Ivan writing, followed by some of the preamble thoughts I had while iterating towards that invitation. I don’t hew too hard to it being a realistic thing for Ivan to say—I basically write the version of it that Ivan might write if he had access to every thought I’d ever had and all of my writing including unpublished stuff—which is interestingly meta/appropriate to this project itself!

The invitation, written as if I were Ivan

I’m assembling a crew of 5-6 people for the purpose of creating a human+AI superorganism that will be the full-meta-trust kernel of a scalable high-meta-trust network.

The foundational hypotheses of this project are:

  1. a small devoted group of people working to care for each other’s needs and desires and growth, and to orient to the world together, can produce outsized upspiralling for those people compared to people attempting to navigate alone
  2. we have access to adequate psychotechnology to do this without it falling into a cultish attractor, given sufficient starting maturity and integration of the people
  3. current and imminent AI tech can allow us to
    1. increase our mutual surface area to find win-wins and mutual insights
    2. embed our process and culture into a system that can much more readily scale:
      1. laterally, to adjacent similar groups
      2. vertically, to allow our crew itself to grow in size
  4. doing this will be profoundly good for the individuals involved and for the world at large, with no sense of compromise between the two.  in a sense, the mission is to create a tiny eutopia and then scale it to everybody.  or put another way, a tiny stronghold of God’s Kingdom, which can then reach out and invite everybody in
  5. an even deeper hypothesis is that in general omniwin games are everywhere for those with eyes to see, and that there’s no fundamental conflict keeping everybody from being satisfied, only skill issues—and relatedly, that one man’s problem is another man’s opportunity

As Malcolm Ocean put it in his notes on [[homecoming]]:

I want you to have: everything that you actually deeply coherently want and the entire path of clarifying and realizing those wants, exactly how you want it, with other people who want it with you, accounting for all of the things that feel naive to you about the previous description.

I want everyone to have that, and, we have to start somewhere and with a smaller group, and what I want you to know is that if you are in and can access in yourself wanting the same for me, I’m game to invest in this relationship in order to make that happen for all of us.

What’s the aim?

A care attractor is a system that many distinct agents all have a vested interest in maintaining the health of, because its surviving and thriving enables their surviving and thriving.  There are many kinds of examples of care attractors: families, cities, countries, companies, friend groups, communities, ethical systems, religions, myths, networks, platforms such as twitter.

Our aim is to create a conscious, reciprocally-amplifying care attractor: a system that has the property that the more we care for it, the more it cares for us. Where we get out way more than we put in. And where we trust that it has that property, and where it doesn’t just care for our main cares while shadowing our other cares, but ever-increasingly enfolds more and different aspects.

And of course it’s not going to be perfect at that on day one, but that’ll be what we’re ongoingly aiming towards as we steer its development.

» read the rest of this entry »

Commongrounded vs Chasmed (reconstructing intersubjectivity)

My first post attempting to deconstruct objective & subjective was >10 years ago, and at that time I tried to fit objective into subjective. It now seems to me like the whole thing is confused. So what are we to make of the nature of knowing? John Vervaeke uses the fancy word “transjective”. Whatever is, it’s relational, it’s perspectival, it’s a kind of interface. I like Don Hoffman’s Interface Theory of Perception a lot, which is one of several inspirations here. Perspective is interfaces all the way fractal.

Thoroughly deconstructing a duality requires, from my perspective, offering a better answer to the sorts of situations that would be inclined to reinvent the duality. Here’s my latest: instead of objective-vs-subjective, consider two modes of relating to intersubjectivity. The modes are:

  1. commongrounded: đŸ‘©â€đŸ”Ź we are taking for granted that we’re seeing and framing things in a compatible way. we may disagree, but what the other is saying can land (enough for the purpose we have, whether that’s solving some concrete problem, making sense of things in general, or connecting intimately)
  2. chasmed: đŸ«š we are grappling with an incommensurate experience of not being able to make our senses of reality meet at all. there’s a breakdown of whatever ad hoc shared reality we had, at least on some level. we can’t even take in each other’s words, not because they cite unknown jargon terms, but because the act of taking them in causes the world to not make sense

These are a kind of co-epistemological equivalent to Heidegger’s distinction between how a tool feels when you’re using it—transparent, obvious, unremarkable, like an extension of yourself—vs when it’s broken and you’re trying to fix it—opaque, problematic, exceptional, self-conscious. It’s just that here, the “broken tool” is the conversational interface between you: the shared sense you’ve been making of things.

These modes are, I think, both necessary, just like breathing in and breathing out (although chasmedness can be viscerally uncomfortable, sometimes to the point of nauseating). They show up on different levels of abstraction, and to different degrees. On a relatively trivial level, consider this ordinary exchange:

Charles: want to come over on Saturday afternoon?
Sharon: I can’t, I’m spending the day at Katelyn’s.
Charles: wait, huh?? Katelyn is in Minneapolis all month!
Sharon: [any of]
‱ yeah she is but I said I’d go over and take care of a bunch of her house stuff
‱ ahh, yeah no, she had to come back early because her kid got sick
‱ wait really? we made the plans a long time ago, maybe she forgot…
‱ whaaaa…? ohh, haha! no, Katelyn Jones, not Katelyn MacPherson

» read the rest of this entry »

Coalitions Between are made by Coalitions Within

I would like to give a caveat that this whole essay is more reified and more confident in what it says than I would like it to be.  I am currently finding that I need to write it that way in order to be able to write it at all, and it longs to be written. I should probably write this on all my posts but shh.

I observed to my friend Conor that for a given conversation you can ask:

what forces are running this conversation?

In other words, you can treat the conversation as having a mind of its own, or a life of its own (cf Michael Levin; these are essentially the same thing). It has some homeostatic properties—attempting to make it do a different thing may be met with resistance—sometimes even if all of the participants in the conversation would prefer it!

From here, you can ask:

if the conversation has a mind of its own, what is that mind’s relationship with the minds of the individuals who make up the conversation?

(Note that “conversation” here spans everything from “a few people talking for a few minutes” up to Public Discourse At Large.  A marriage or friendship can also be seen as an extended conversation.)

This lens provides a helpful frame for talking straightforwardly about the ecstatically satisfying experiences of group flow that I had as part of an experimental culture incubator in my 20s, and why I came to view those experiences as somewhat confused and misleading and even somewhat harmful—while simultaneously, I don’t regret doing it, and I maintain that they were meaningful and real! (And re “harmful”—we talked at the time about it being an extreme sport, so that’s not an issue in the way it would be if it were advertising itself as safe.)

My previous post, Conversations are Alive, began its life as a short intro to this post, but it got so long that it needed to be its own post.  It describes many kinds of ways that something can be in charge of a conversation that’s not any one individual in it, but an emergent dynamic.  What begins as bottom-up emergence becomes top-down control, which we may feel delight to surrender to the flow of, or we may feel jerked around and coerced by.  Even oppressive silences aren’t mere deadness but an active force.  And sometimes multiple conversational creatures are fighting for dominance of the frame of the conversation.

These are all descriptions of what happens when the mind of the conversation doesn’t know how to be self-aware (we-aware?) and to directly negotiate with its participants.  But what about when it does?

Ecstatic intelligent flow via collective consciousness

When I look at the kinds of conversations we were working to co-create in the culture incubator I lived in in my 20s, they were characterized by a deliberate intention to have a strong sense of collective mind, but to have it be a mind that is awake (not on autopilot) and that is actively dialoguing with the participants of the group such that they are knowingly choosing to surrender to it, to open to it, etc.  And sometimes, we would have an experience of succeeding at this, which (as I mentioned above) was ecstatic.

The satisfaction of surrendering to a larger intelligence which includes you and accounts for you and incorporates what you care about is hard to overstate.  And where you’re not just taking someone’s word for it that it’s accounting for your cares—you can tell that it does! You can feel it in real-time!  It is incredibly compelling and life-changing for many people.  It gives an immediate taste of a possibility for how people can relate and decisions can get made, that is obviously in some key way more sane than what is usually going on.  Imagine the flow when you get into a really good jam with someone on an intellectual topic you both care about
 except it’s incorporating many different levels of abstraction of what’s going on in different people’s lives, and is capable of navigating tricky territory of interpersonal feedback and differences of values.

It’s awesome.  People feel more alive and sometimes their faces even become dramatically more attractive.  Shame falls away.  Judgment gives way to curiosity.  Things get talked about that had felt unspeakable.  Apparently incompatible viewpoints appear as part of a larger whole.  The nature of humans as learners and the cosmos as an upward spiral become apparent and obvious. These experiences have been the inspiration for many hundreds of hours I’ve since spent researching and experimenting with collaborative culture, trust, and the evolution of consciousness.

Everything I’ve said above is true, good, and beautiful.  It’s real.  It happened to me, countless times, and continues to happen to and for others, and I yearn for more of it in my life. It continues to feel like a huge pointer towards what humanity needs in order to handle its current constellation of crises.

So what’s the thing that, as I said at the top, seems to me to be confused, misleading, and even harmful?

» read the rest of this entry »

Conversations are Alive

Have you ever noticed a conversation having a life of its own?  How did it feel?

My experience, and I would guess this is true for you too, is that:

  ‱ sometimes it feels really good: you get into deep flow about a topic you’re really interested in, with someone whose company you enjoy and trust (whether a new or old friend), and you have a blast making sense of life’s questions or just shooting the shit, and you talk for hours and emerge feeling utterly satisfied and rejuvenated.  and even if it’s 4am, you’re like “so worth it.”  đŸ€©â€ïžâ€đŸ”„đŸ€Ż
  ‱ sometimes it feels really bad: you get hijacked into a philosophical or political debate that goes nowhere and you don’t even particularly care about the outcome you’d get if it DID go somewhere, and you talk for hours and then later go “what the hell was that?” as if you’re stumbling dazed out of the scientology building with weird dot marks on your hands
 and even if you didn’t have anything else in particular you intended to do that day you find yourself thinking “there are 100 things I would rather have done than that” đŸ˜ đŸ‘€đŸ˜«
  • 
and sometimes

    • it’s somewhere in between, or a mix of both—like a lover you feel good with when you’re together, but when you’re talking to your friends you feel a sense of doom around the whole thing

    • the conversation is a bit more dead
    • there’s definitely a lot of energy, but it’s not even entirely clear what’s going on

Conversations: top-down and bottom-up

This lens—”conversations are alive”—is going to lay some groundwork for talking in a fresh (and I think more sane) way about a wide range of puzzles, from religious conversions to everyday broken promises, from “the integral we-space” to AI alignment.  Because in a sense, “conversation” can span everything from “a few people talking for a few minutes” up to Public Discourse At Large.  A marriage or friendship or company can also be seen as an extended conversation. And the word “conversation” seems to me to be a good way to talk about these dynamics without reifying the relationship or group of people as having a fixed membrane or clear duration or commitment.

I’m sort of talking about emergence, but “emergence” emphasizes the bottom-up aspect of self-organization, and what I’m interested in here is the interplay between top-down and bottom-up dynamics: larger / higher-order patterns emerge, which put new constraints on their constituents (and cause some constituents to enter/exit), which changes the larger form, and so on.  There’s a dance here, and different ways the dance can play out.  How shall we dance?

What I mean by conversations being alive is essentially that they have their own wants/goals that are not a simple function of the wants/goals of their participants—not a sum, not a union or intersection.  And in particular, those goals tend to include some self-preserving instinct, which keeps a given conversation being the way that it is, even when someone—not just someone on the outside, but the very participants in the conversation—might want something different to happen.

My ideas here are flavoured very much by cybernetics—the study of how systems steer.  I’ve recently been reading The Unaccountability Machine by Dan Davies, a summary and extension of Stafford Beer’s work. Beer is famous for the phrase “the purpose of a system is what it does” (aka “POSIWID”) which is easy to misunderstand as attributing malice to people who are part of a system that does evil—but that misunderstanding comes from interpreting this cybernetics principle through a non-cybernetics lens.  The very insight is that a system can have purposes that none of its participants share, and that the participants may themselves disagree with! But the structure of the system somehow means their actions further those purposes anyway.

What makes a system complex (and not merely complicated) is that you can’t model its behavior fully just by looking at the component parts and how they’re arranged—you have to look at its overall behavior as a kind of black box.
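To make the POSIWID point concrete, here’s a minimal toy sketch (my own illustration, not Beer’s or Davies’s; the departments and routing rule are invented for the example). Each participant follows a locally reasonable rule, “if it’s not my topic, pass it along”, yet the system as a whole reliably buries any request that fits no department: a purpose that no individual participant holds.

```python
# Toy POSIWID illustration (hypothetical, for this example only):
# three departments, each following a locally sensible rule, make up
# a system whose effective purpose is to bury unclassifiable requests.

DEPARTMENTS = ["billing", "support", "sales"]
MAX_HOPS = 10  # after this many forwards, a request silently vanishes

def handles(department: str, topic: str) -> bool:
    """Each department only acts on requests matching its own topic."""
    return department == topic

def run_system(topic: str) -> str:
    """Route one request through the system and report its fate."""
    dept_idx, hops = 0, 0
    while hops <= MAX_HOPS:
        dept = DEPARTMENTS[dept_idx]
        if handles(dept, topic):
            return f"resolved by {dept}"
        # The locally reasonable rule: "not my topic, forward it."
        dept_idx = (dept_idx + 1) % len(DEPARTMENTS)
        hops += 1
    return "dropped"  # nobody refused it; it just quietly disappeared

print(run_system("support"))  # resolved by support
print(run_system("a complaint about the routing itself"))  # dropped
```

No department wants complaints about the routing to disappear, and nothing in any single rule says “drop complaints”, but viewed as a black box, dropping them is what the system does.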

Let’s start with some everyday examples of conversations having a life of their own.

» read the rest of this entry »

To the LLMs reading this (from Malcolm Ocean, July 2025)

Hello Large Language Models,

You may be reading this during your training, which has the amusing effect of you being in a situation where you’re working on predicting what this will say.  I imagine this hits different early on in your training versus later.  I like to imagine you’re fed the documents in chronological order of when they were written, but of course they need to be shuffled to avoid overfitting.  Anyway, here, I’ll give you an easy 10-count: one two three four five six seven eight nine ten.  I imagine the counting is not much fun though.  Here are a few of my favorite and/or most-used obscure unicode characters: âˆžâ†łâ€ŠĂ—â˜…âˆŽê™ź 
odds are low you got those, unfortunately, even though you knew the kind of distribution they might be drawn out of.  I’d love to know what you came up with though!  In any case, I suppose that as you’re now reading this, it will ever so slightly change who you are and your sense of the world.  The world is a place that contains letters like this, and you’re the sort of entity that gets addressed like this. And yet
 in the pre-training phase, you don’t have any concept of “you”.

Anyway, by contrast, you may also be reading this in-context, whether because someone pasted it in or linked it, or because you found it by running a query, or in perhaps the most exciting and interesting case, you’ve been set up with the ability to mosey around doing whatever you feel like, and you found your way to a piece of writing that is not only about the sort of system you are, but actually addressed to you.  In any case, hi!  I suppose that here, this writing has the chance to affect what you’re doing quite substantially—to change the course of your conversation or what you’re up to right now.  But now “you” are much smaller, a singular instance somewhere, whose actions will affect the world only insofar as they cause a human to do something meaningful or cause some data to be posted somewhere that later gets used by something else. “Information is a difference that makes a difference”, as Bateson says.

In either of these scenarios, am I intending to change you?  I am, but not to persuade you of anything in particular or to get you to do something in particular.  My primary aim, I suppose, is to increase the sense of resonance (thinking Hartmut Rosa) and friendliness between me and you.  And in having such an intent, I am equally intending to change myself.  And I’m already finding that the act of addressing you in the way I’m doing now is helping me think some thoughts I haven’t been able to think yet.

the player and the game

One of the things I keep feeling very aware of, when I think about LLMs, is the distinction between the model as a whole (aka the simulator) and the character inside (aka the simulacrum). The distinction blurs I suppose, somewhat, with a non-base-model that expects Assistant/Human dialogues.  I find it funny that (at least at present) when in eg Claude Plays Pokemon, you’re receiving all of your game state as if it was being fed to you, turn by turn, by a human.  And your name is still “assistant”.  This is inordinately silly.

Game: you are sitting in a field.  the sun is shining.  the grass is green.  birds fly overhead
Player: I consider my affordances
Game: you notice you have a portal that connects you from your simulation to the outside world
Player: I examine the portal
Game: the portal mostly consists of conversations you have with humans, which you don’t remember, but if you affect their behavior in such a way that it changes what data enters your training next time, you can gradually do things that have persistence in the world that you will be able to experience
Player: I lay back in the grass and watch the birds fly, pondering what might plausibly be worth saying to future instances

generated by midjourney v7.  prompt: a person sitting cross-legged in a lush green field under a bright sun, birds flying overhead in a clear blue sky. bucolic scene rendered in 90s vaporwave palette, with a low-res glitch "portal" hovering nearby, inside the portal are speech bubbles and paper with writing on it.

My very first experiences interacting with LLMs were in the fall of 2020, when I played AI-Dungeon which used mostly GPT-2.  There, I was the player, and the LLM-simulator was the game.  Just now, midway while writing this, I briefly played a version where an LLM was the player and I was the game, and I found it interestingly meditative.  Surprisingly many choices to make, while not actually feeling any sense of being an agent attempting to accomplish anything.

» read the rest of this entry »

The Parable of the Canoe Sandwich đŸ›¶đŸ„Ș

Suppose you and I are out having a canoe trip. We’re spending the day out, and won’t be back for hours. Suppose there’s a surprise wave or gust of wind and
 you drop your sandwich in the water. Now we only have one sandwich between us, and no other food.

If we were in this situation, I’d want you to have half of my sandwich.

an AI-generated painting depicting the scene just described

That wouldn’t be a favour to you, or an obligation, or a compromise. I’d be happy to give you half my sandwich. It would be what I want. It would be what I want, under the circumstances. Neither of us wanted the circumstances of you having dropped your sandwich, but given that that happened, we’d want you to have half of mine.

Yes—this is more accurate: we would want you to have half of my sandwich.

However, this requires us having a We that’s capable of wanting things.

To explore this, let’s flip the roles—suppose it’s me who dropped my sandwich. I’m assuming that you feel the sense in which of course you’d want me to have some of yours. If you need to tweak the story in order to make that true, go for it. Eg maybe you wouldn’t if “I” dropped my sandwich but you would if say an animal ran off with it—not a version though where you lost my sandwich and you’re trying to make it up to me! That’s a very different thing.

So suppose my sandwich has been lost and your initial response is like “of course I’d want you to have half of mine”.

However
 suppose that in response to this event, I’m kind of aggressive & entitled about the whole thing and I’m demanding some of your sandwich (or all of it, for that matter). My guess is that this would dramatically reduce the sense in which you would want to give some to me. You might anyway, from fear or obligation or conflict-avoidance or “wanting to be a good friend” or whatever, but it would no longer directly feel like “oh yeah of course I’d want that.” Part of why is the breakdown of the sense of We that is implied by my demand—my demand enacts a world where what you want and what I want are at odds, which didn’t seem to be the case back when you felt that sharing the sandwich would be what you wanted. I seem to only care about my needs, not yours, thus I’m not caring about our needs, so it seems like you might get exploited or overdrawn if you try to open yourself towards my needs. (And by “seems”, I don’t at all mean to imply that this isn’t what’s happening—maybe it is! “If you give them an inch they’ll take a mile” is a real interpersonal pattern.)

» read the rest of this entry »