To trust or not to trust is NOT the question

This post was adapted from a comment I made responding to a Facebook group post. This is what they said:

Trusting isn’t virtuous. Trusting should not be the default. Care to double crux me?

(I believe that this was itself implicitly responding to yet others claiming the opposite of it: that trust is virtuous and should be a default/norm.)

My perspective is that it’s not about virtue at all. It’s just about to what extent you can rely on a particular system (a single human, a group of humans, an animal, an ecosystem, a mechanical or software system, or whatever) to behave in a particular way. Some of these ways will make you inclined to interact with that system more; others less.

We are, of course, imperfect at making such discernments, but we can get better. However, people who claim it's virtuous to trust are probably undermining that skill-building, by undermining people's trust in whatever level of discernment they do have: is it wrong of me if I don't trust someone who is supposedly trustworthy? The Guru Papers illustrates how this happens in great detail. I would strongly recommend that book to anyone wanting to understand trust.

If I were to gesture at a default stance it would be neither “trust” nor “distrust” nor some compromise in between. It would be a stance of trust-building.

It would be an orientation towards building self-trust, including trust in your own discernment to the extent that it makes sense to trust your own discernment, as well as building trust in your experience of other people—whether “positive” or “negative”—in a way that can be consciously looked at by people who are involved, without that creating some sort of threat or problem. This requires not just individual change, but the creation of contexts that support such a way-of-being and the learning of it.

Describing how to effectively operate out of such a stance is emphatically not a straightforward task, nor is learning how to actually do it, but both of these are top priorities—for me personally and for the Upstart Collaboratory project in Waterloo that I keep mentioning. And, in a different sense of the word “priority”, it’s high priority for humanity as a whole as well to figure this out.

On robots, double-binds, and meta-communication

By “not about virtue”, I’m sort of gesturing at something very basic and utilitarian. If you have a bunch of little robot agents that are in an environment in which they interact and there are some finite resources, then naively-trusting or naively-distrusting robots will do way worse than trust-building robots—no morals involved.

They will also do worse than robots that have some compromise between naïve trust and naïve distrust, such as prisoner's dilemma strategies like "tit for tat" and "tit for two tats". These strategies aren't as silly as "always cooperate" or "always defect", but they're still fundamentally naïve in the sense that they lack trust-building. This naïveté is essentially inevitable in the prisoner's dilemma scenario, because (unlike real-life scenarios) no meta-communication is possible. In many real-life scenarios, meta-communication is forbidden (and punished), but the channel is still there even if it's closed.
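To make the comparison concrete, here's a minimal round-robin sketch of those strategies in an iterated prisoner's dilemma. The payoff numbers (T=5, R=3, P=1, S=0) are the standard textbook values, and the strategy implementations are my own illustrative versions, not anything specified in this post:

```python
# Iterated prisoner's dilemma: 'C' = cooperate, 'D' = defect.
# PAYOFF maps (my move, their move) -> my score, using the classic
# values T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker).
PAYOFF = {
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def always_cooperate(my_hist, their_hist):
    return 'C'

def always_defect(my_hist, their_hist):
    return 'D'

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy the opponent's previous move.
    return their_hist[-1] if their_hist else 'C'

def tit_for_two_tats(my_hist, their_hist):
    # Defect only after two consecutive defections by the opponent.
    if len(their_hist) >= 2 and their_hist[-1] == their_hist[-2] == 'D':
        return 'D'
    return 'C'

def play(strat_a, strat_b, rounds=100):
    """Play two strategies against each other; return their total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Round-robin over every unordered pair, including self-play.
strategies = [always_cooperate, always_defect, tit_for_tat, tit_for_two_tats]
totals = {s.__name__: 0 for s in strategies}
for i, a in enumerate(strategies):
    for b in strategies[i:]:
        sa, sb = play(a, b)
        totals[a.__name__] += sa
        totals[b.__name__] += sb

# The conditional strategies end up ahead of both naïve extremes,
# with "always defect" finishing last.
for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, total)
```

This is roughly the result of Axelrod's famous tournaments: conditional cooperation beats both naïve extremes. But notice that every strategy here is still blind in exactly the sense described above; none of them can step outside the game and talk about the game.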

The reason it is forbidden is that certain strategies of domination are themselves threatened by meta-communication. If you’re put in a damned-if-you-do, damned-if-you-don’t situation, it’s only a trap if something is preventing you from illuminating the impossibility of the situation and finding a way out. Note that most contexts in the world today are full of traps like this. (For examples, see Knots by R. D. Laing)

Advising people to trust-by-default can totally play into these traps. As I said above, it tends to undermine people’s self-trust and their process of leveling up at discernment.

But that doesn’t mean that any person saying “you should trust by default” necessarily themselves prefers those strategies of domination. They may be trapped in their own double binds and also unable to meta-communicate about their situation, and from the situation they’re in, that’s the best option they can see. Or they may be responding to fears of what would happen if people were doing distrust-by-default—and this is a legitimate concern!

Confused stances come in pairs, and “trust-by-default” and “distrust-by-default” are one such pair.

In addition to the absence of even the possibility of meta-communication, note that the fixed, negatively-correlated payoff structure of prisoner's dilemma style games can also be misleading with respect to real-life situations, in which one of the more useful leverage points can be to make the situation positive-sum, with positively-correlated outcomes, such that there aren't actually any good win-lose options. This is not a thing you can do when you're stuck inside the frame of a toy problem like the prisoner's dilemma.

(Further on double-binds and meta-communication)

Furthering the double-crux invitation, I welcome the comments of this blog post as a space for you to explore what beliefs or assumptions you have that underpin your sense of what sorts of social norms make sense here, whether trust-by-default or distrust-by-default or whatever-else-by-default.

Some partial cruxes of mine might include:

  • we can improve at discerning when and how to trust systems we interact with
  • effective meta-communication is possible and practically achievable among humans
  • contexts where people are doing this non-naïvely are rare
  • trust-by-default and distrust-by-default are mostly not even fully possible (although it is possible for people to believe that that is the norm of a group), and where they are possible, they aren't remotely sustainable

I’m experimenting, for January (or maybe the whole year if I like it), with publishing something short each day: just a minimum of 100 words, on any topic. I’ve been getting a bit perfectionistic with my blogging lately, and while I want to keep up quality and depth and intertwinglement and longformnosity, I think it’ll be good to pump my creative juices a bit more regularly, and ummm… bottle them.

Since the whole point is being more relaxed about publishing, I haven't really specified any particular format that this must take. So far (i.e. the last 5 days) I've just been casually posting things on my tumblr: I was going to post this one there, and then I felt it was sufficiently relevant and important, so I'm posting it here instead. And then I've been typing more stuff, and now it feels more like a full-on post anyway! Muahaha my strategy is working.

If you found this thought-provoking, I invite you to subscribe.
About Malcolm

Constantly consciously expanding the boundaries of thoughtspace and actionspace.

Creator of Complice, a system for achieving your important goals.

Personal Website

1 Comment

Luke Freeman » 8 Jan 2018 » Reply

I like the metaphor of robot agents 🙂
