
Foregrounding trust solves the intersubjective verification paradox

There’s a puzzle that shows up when talking about intersubjective verification: how can I ever really know what’s going on in your head?  What is it like to be you? What are your desires, goals, understandings?  If I have an insight, can I tell that you have the same insight?

It seems to me that, indeed, in some sense I can’t ever know what’s going on in your head—there’s a measurement problem.

But I can come to trust things about you, and what that means is that I know it’s good enough for my purposes. It is sufficient for all the purposes that I have for now and the foreseeable future that I can just treat this as how things are. I don’t even want to say “treat this as true”—to say that it’s true is to again enter into the objective lens, which is irrelevant. It’s how things are, as far as I’m concerned, as far as I can tell.

And that’s good enough—trust is, by definition, what’s good enough. I don’t need to make a further claim that it’s true.

I’ve talked about trust as “what truth feels like in first person”—this is the dimension of trust that’s less about safety or alignment and more just about the sense of how things are.  It’s your basic sense of things.

And trust is dynamic, of course.  I’m trusting a bridge until one step is rotten, and then oop!  Maybe I proceed with caution.  Maybe I turn back, relaxedly trusting the steps that I already walked on.  Maybe I observe that the ropes are clearly holding even if the beams aren’t, so I try walking with my feet towards the outside, holding onto the ropes.

To say I know something about you (or that something is true of you) is to say that others should agree.  But to say that I trust something about you is to say that I’ve done the checks that I need to do, given my needs and purposes.  You, who have different needs and purposes, will not in general trust what I trust.  You might trust something on the basis of my say so, but you might not.  It depends on, well, everything—your purposes, your needs, your sense of me and mine, your trust in my motives for speaking, and the quality of my assessments, etc.  And you don’t make those choices consciously, you just find out: when I say I trust something (or say why I trust it), does it result in you trusting it, or not?

Anyway! This is one of those funny things where everybody is doing this just fine all the time, but then philosophers come along with a framework that makes it seem impossible.  Wikipedia’s page on intersubjective verifiability says:

While specific internal experiences are not intersubjectively verifiable…

They aren’t if you have to force things to be objective—if you have to find the one standard for all time that you can apply. But if we’re allowed to intersubjectively verify things according to our own unverified personal gnosis (ie our trust), not an objective standard, then we can just do it, the way we always do it in order to form a common sense of things.

Examples of intersubjective verification via trust

The most obvious cases are practical social situations—being able to trust that a particular employee understands the assignment, or being able to trust that your spouse actually gets the thing that really bothered you about what they said this morning. Or developing a shared sense of why someone was being weird or whether they’re safe to invite to another party, by debriefing things. Sometimes things add up to an experience we trust… and other times they don’t add up, and we don’t trust them.

Then there’s intersubjective verification of understanding of eg physical or mathematical phenomena—the phenomenon might be objective, but the question of whether someone understands it is not! So getting a common sense that it’s understood by a group still involves this engagement with whether it feels like you can treat it as commonly known, or whether you feel that you need to keep hedging or treating it as debatable or unclear.

Then, consider intersubjective verification of buy-in—this is very relevant to game theory.  Suppose you’ve got a stag hunt (a game where there are two options—solo-hunting rabbit, which produces a small win for anyone who chooses it, and co-hunting stag, which produces a massive win for everybody if and only if everybody chooses it; otherwise those who choose it get nothing).  If everybody trusts that everybody else will choose stag, then everybody will want to choose stag, and thus will choose stag.  If we only somewhat trust that, then we might.  Even if it were true that everybody else would choose stag, the operative question for you is whether you trust that they would.  And so the matter of buy-in needs to deal with the question of trust—each person’s trust, which may need to be earned differently.  (And, as Duncan Sabien pointed out, in practice someone who can’t afford to risk getting a zero win this round is not likely to be able to choose stag, so trust will be best earned in an iterated game by having a few rounds where everybody agrees to stick to rabbit, to build up that surplus and to build up the experience of people doing what they said they would do even when it wasn’t risky.)  These same dynamics apply to much more complex situations of team buy-in.
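To make the “the operative question is whether you trust it” point concrete, here’s a minimal sketch of the stag-hunt decision in code. The payoff numbers and function names are illustrative assumptions (the text doesn’t give specific values): rabbit pays a guaranteed 1, stag pays 4 only if every other hunter also chooses stag. The key variable isn’t whether the others would in fact choose stag—it’s my trust that they would.

```python
# Illustrative stag-hunt payoffs (assumed numbers, not from the text):
# rabbit pays a small guaranteed win; stag pays big, but only if
# every other player also chooses stag, otherwise it pays nothing.
RABBIT_PAYOFF = 1
STAG_PAYOFF = 4

def expected_stag_payoff(trust: float, others: int) -> float:
    """Expected payoff of choosing stag, where `trust` is my probability
    that any given other hunter chooses stag (treated as independent)."""
    return STAG_PAYOFF * (trust ** others)

def best_choice(trust: float, others: int) -> str:
    """Choose stag only if my trust makes the risk of a zero worth it."""
    if expected_stag_payoff(trust, others) > RABBIT_PAYOFF:
        return "stag"
    return "rabbit"

# With 3 other hunters, near-certain trust makes stag worthwhile:
#   4 * 0.95**3 ≈ 3.43 > 1
print(best_choice(0.95, 3))  # stag
# But merely "probable" trust does not:
#   4 * 0.60**3 ≈ 0.86 < 1
print(best_choice(0.60, 3))  # rabbit
```

Note that `best_choice` never consults what the others will actually do—only `trust`, my credence about them. That’s the sense in which buy-in is a trust problem rather than a facts problem, and why Duncan Sabien’s warm-up rounds matter: they raise each person’s `trust` value rather than changing the payoffs.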

This also dissolves solipsism, in a sense.  Can I know that you’re really there, having experiences and dreams and so on?  Moot point—acting like you are works better than acting like you aren’t, so I trust that you are. The important point is that’s all I ever have—there never was certainty anyway. It was all always just made of trust.

Where this gets really interesting is in matters of subjective science and reflexivity—when the map changes the territory. Take some insight that is of the interior, not the exterior, such as Buddhist no-self, IFS Self, or the NNTD insight, or religious experiences… how can we know that each other has also experienced this? Well, once again it’s a matter of trust-building. We start simply not knowing, and we trust-dance in relation to it (basically allowing our interfaces to come into honest contact) until we develop trust that we’re experiencing something compatible enough for our purposes… or until we start to distrust that. Or we just don’t know how to proceed any further, and we still don’t know.

One open question or edge for me is that it seems pretty obvious to me that even in reflexive domains, where there are multiple stable possibilities, there can be something like objective facts about what the stable possibilities are. And eg the core NNTD insight (“you can’t trust what you can’t trust”) seems very obviously true to me, not merely one of many stable ways of viewing things. If someone said they disagreed, I’d say “we’re clearly not talking about the same thing”, the same way as someone would of a mathematical knowing. (This is less true of the whole NNTD framework that I’ve developed based on the insight—see the many meanings of NNTD—although even there I’m pretty sure most of it basically holds given some assumptions (some of which I may not be conscious of).)

So I have this sense that I can tell for myself that NNTD is true (ie not just that I trust it, but that anybody who investigated it thoroughly would also come to trust it) but the most obvious truth of it somehow routes through subjective experience. I can give reasons, and you can reason about those reasons, but ultimately the question is not “does that logically hold?” but “do you see it?”

And—just between us—the question is not “do you see it?” but “can I trust that you see it?”

About Malcolm

Constantly consciously expanding the boundaries of thoughtspace and actionspace. Creator of Intend, a system for improvisationally & creatively staying in touch with what's most important to you, and taking action towards it.


