One of the easiest times to change your personality (to become less shy, for instance) is when you move somewhere new. Personalities are interfaces, so those who are familiar with you will have expectations of how to interface with you: some of those expectations they may cherish, while others may be frustrating.
But at any rate, the ways that they interact with you will be designed to interface with the personality they know, which means that those interactions will tend to reinforce the older patterns in you, since those will be easiest and most comfortable. (There’s an additional element related to the logic of appropriateness, too.)
I recently found myself wanting to upgrade my personality, without an obvious context change like moving.
And, since I had been talking with my friend Brent about chaos magick, ritual-work, and my desire for behaviour change, he suggested creating a ritual for myself.
I liked the idea: a ritual would…
As I said above, if you want to have dramatic change, there usually has to be a moment when it happens. Otherwise you’re going to tend to assume that » read the rest of this entry »
Alternate title: “Use unique, non-obvious terms for nuanced concepts”
Naming things! Naming things is hard. It’s been claimed that it’s one of the hardest problems in computer science. Now, this might sound surprising, but one of my favorite pieces of naming is Kahneman’s System 1 and System 2.
I want you to pause for a few seconds and consider what comes to mind when you read just the phrase “System 1 and System 2” above.
Got it?
If you’re familiar with the concepts of S1 and S2, then you probably have a pretty rich sense of what I’m talking about. Or perhaps you have a partial notion: “I think it was about…” or something. If you’ve never been exposed to the concept, then you probably have no idea.
Now, Kahneman could reasonably have named these systems lots of other things, like “emotional cognition” & “rational cognition”… or “fast, automatic thinking” & “slow, deliberate thinking”. But now imagine that Kahneman had instead written about “emotional and rational cognition”, and consider the effect on the exercise in the earlier paragraph.
It would be about the same for those who had studied it in depth, but now those who had heard about it briefly (or maybe at one point knew about the concepts) would be reminded of that one particular contrast between S1 and S2 (emotion/reason) and be primed to think that was the main one, forgetting about all of the other parameters that that distinction seeks to describe. Those who had never heard of Kahneman’s research might assume that they basically knew what the terms were about, because they already have a sense of what emotion and reason are.
Update: I have revised my opinion on S1/S2 in particular. There may or may not be meaningful clusters being pointed at by Kahneman and others, but in this case the terms S1 & S2 were vague enough that a bunch of other things got projected onto them instead. See this LW post and my comment on it for more on this.
The more general point I’m trying to make in this post still stands, though.
This post is kind of from two years ago. I got thinking about it again last night when I was reading Wait But Why’s The Cook and the Chef, an article describing how Elon Musk does what he does, which is a lot. The author, Tim Urban, uses an analogy in which chefs are those who actually create something original and cooks are those who merely follow recipes. He remarks that most people think that most people are chefs, with some chefs simply better than others… but that a better model is that most people are cooks (some better than others), and the main difference between most people and Elon Musk isn’t quantitative (“he’s smarter”) but rather qualitative (“he does things differently”).
It’s like a bunch of typewriters looking at a computer and saying, “Man, that is one talented typewriter.”
Imagine a laptop.
What can you use it for?
That laptop can be used as a paperweight.
It is, in fact, better than some objects (such as a pen) at being a paperweight.
But that’s probably a waste of the laptop.
What else can you use it for?
It can also be used as a nightlight.
It has quite a lot of comparative advantage at being a nightlight—most objects don’t emit light, so a laptop works pretty well there.
However, it’s still a huge waste.
And, if you’re a human, not a computer, it feels terrible to be wasted: to not be used for your full range of capabilities.
» read the rest of this entry »
If somebody asks you why, there are often two markedly different kinds of explanations you could give.
Their differences are psychological & social in addition to being semantic.
“Everything is the way it is because it got that way”
— D’Arcy Thompson
I run a software company, and sometimes users will email me asking, “Why is feature X like this? It should be like that.”
My response, which I don’t necessarily write out: if you want to know “why feature X is like this”, well… I could tell you the long history of how Complice mutated its way to being what it is today, which would contain a causal explanation for why the feature is the way it is.
…however, if you’re looking not for a causal explanation but rather for a normative explanation, a justification of “why it makes sense for feature X to be like this”, then I don’t really have one. I basically agree with you. All I have to offer is that it would be work to change it, and that I probably will at some point, but it hasn’t been a priority yet.
We might say that causal explanations explain “why [proposition] is true” whereas normative explanations explain “why [[proposition] is true] is ‘reasonable,’ or ‘acceptable.'”
I think we want to be a little wary of the second kind of explanatory process. » read the rest of this entry »
When you think of “ultimatums”, what comes to mind?
Manipulativeness, maybe? Ultimatums are typically considered a negotiation tactic, and not a very pleasant one.
But there’s a different thing that can happen, where an ultimatum is made, but where articulating it isn’t a speech act but rather an observation. As in, the ultimatum wasn’t created by the act of stating it, but rather, it already existed in some sense.
I had a tense relationship conversation a few years ago. We’d planned to spend the day together in the park, and I was clearly angsty, so my partner asked me what was going on. I didn’t have a good handle on it, but I tried to explain what was uncomfortable for me about the relationship, and how I was confused about what I wanted. After maybe 10 minutes of this, she said, “Look, we’ve had this conversation before. I don’t want to have it again. If we’re going to do this relationship, I need you to promise we won’t have this conversation again.”
I thought about it. I spent a few moments simulating the next months of our relationship. I realized that I totally expected this to come up again, and again. Earlier on, when we’d had the conversation the first time, I hadn’t been sure. But it was now pretty clear that I’d have to suppress important parts of myself if I was to keep from having this conversation.
“…yeah, I can’t promise that,” I said.
“I guess that’s it then.”
“I guess so.”
I think a more self-aware version of me could have recognized, without her prompting, that my discomfort represented an irreconcilable part of the relationship, and that I basically already wanted to break up.
The rest of the day was a bit weird, but it was at least nice that we had resolved this. We’d realized that it was simply a fact about the world that there was no serious relationship we could have that we both wanted.
I sensed that when she posed the ultimatum, she wasn’t doing it to manipulate me. She was just » read the rest of this entry »
It can be tempting, when engaging in mindset-shifting, to dream of the day when your old mindset goes away forever. I think that that’s not the best target to aim for. It may happen eventually, but there’s often a long phase where both streams of thought coexist. Sometimes it’s even helpful to still have access to that old mindset, but in a kind of isolated way, where you can query it for its opinion but it doesn’t actually run your decisions. Knowing this is important, because otherwise you can think of old-mindset thoughts as failures.
What does this feel like on the inside? One model that my intentional community developed is the idea of there being multiple channels to your thought. So if you have a model of human experience that has steps something like this…
Stimulus → Perception → Interpretation → Feeling / Thought → Intention → Action
…then the channels model suggests that your brain generates multiple interpretations of a given perception in parallel, each of which can in turn generate distinct thoughts and feelings, which might tend you towards different kinds of action. Unless you’ve trained in this particular kind of mindfulness or phenomenological awareness, any particular experience will usually be primarily interpreted through one channel, yielding a dominant thought/feeling/intention/action that comes out of how that channel makes sense of things. I think the skill of pulling these apart is valuable.
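To make the shape of the channels model a bit more concrete, here’s a minimal toy sketch in Python. It’s my own illustration, not anything from my community’s actual writeup: the channel names and example interpretations are all invented, and the only point is the structure (one perception fanning out into parallel interpretations, each with its own feeling and intention).

```python
from dataclasses import dataclass

@dataclass
class ChannelOutput:
    interpretation: str
    feeling: str
    intention: str

def threat_channel(perception: str) -> ChannelOutput:
    # One channel reads the stimulus as criticism to defend against.
    return ChannelOutput(
        interpretation=f"'{perception}' means I'm being criticized",
        feeling="defensiveness",
        intention="justify myself",
    )

def care_channel(perception: str) -> ChannelOutput:
    # Another channel reads the very same stimulus as concern.
    return ChannelOutput(
        interpretation=f"'{perception}' means they care how I'm doing",
        feeling="warmth",
        intention="open up and share",
    )

CHANNELS = [threat_channel, care_channel]

def process(perception: str) -> list[ChannelOutput]:
    # The same perception fans out to every channel in parallel.
    # Without mindfulness training, usually only one (dominant)
    # channel's output reaches awareness and drives action.
    return [channel(perception) for channel in CHANNELS]

for output in process("Why are you doing it that way?"):
    print(output)
```

The skill of pulling the channels apart corresponds to reading the whole list here, rather than only the first element.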
It’s all too easy to let a false understanding of something replace your actual understanding. Sometimes this is an oversimplification, but it can also take the form of an overcomplication. I have an illuminating story:
Years ago, when I was young and foolish, I found myself in a particular romantic relationship that would later end for epistemic reasons, when I was slightly less young and slightly less foolish. Anyway, this particular girlfriend of mine was very into healthy eating: raw, organic, home-cooked, etc. During her visits my diet would change substantially for a few days. At one point, we got into a tiny fight about something, and in a not-actually-desperate attempt to placate her, I semi-jokingly offered: “I’ll go vegetarian!”
“I don’t care,” she said with a sneer.
…and she didn’t. She wasn’t a vegetarian. Duhhh… I knew that. We’d made some ground beef together the day before.
So what was I thinking? » read the rest of this entry »
You might not be as meta as you think you are.
There’s a famous scene in The Princess Bride in which, after winning a game of skill and a game of brawn, the Man in Black engages Vizzini in a “battle of wits”. The Man in Black prepares two cups and places one in front of himself and the other in front of his adversary. It’s pretty hilarious. Watch here, or read the transcript below. If you watch the video, make sure to read the last few lines of the transcript, which aren’t in this clip but are relevant for understanding the post!
Man in Black: All right. Where is the poison? The battle of wits has begun. It ends when you decide and we both drink, and find out who is right… and who is dead.
Vizzini: But it’s so simple. All I have to do is divine from what I know of you: are you the sort of man who would put the poison into his own goblet or his enemy’s? Now, a clever man would put the poison into his own goblet, because he would know that only a great fool would reach for what he was given. I am not a great fool, so I can clearly not choose the wine in front of you. But you must have known I was not a great fool, you would have counted on it, so I can clearly not choose the wine in front of me.
Man in Black: You’ve made your decision then?
There’s an obscure concept (from an obscure field called semantics) that I find really fun to think with: Dot Objects. This post is an attempt to pull it out of that technical field and into, well, the community of people who read my blog. I think that semantics tools are fundamental for rationality and quality thinking in general—Alfred Korzybski, coiner of the phrase “the map is not the territory” and founder of the field general semantics, would probably agree with me. Note that I extrapolate a ton here, so (disclaimer!) don’t take anything I say as being true to the technical study of the subject.
So. Consider the sentence: “The university needed renovations, so it emailed its alumni to raise funds.” The university that has the alumni isn’t the one that needs the repairs. One is an organization, the other is a physical structure.
Dot objects are
entities that subsist simultaneously in multiple semantic domains.[1]
The name “dot objects” (also sometimes “dot types”) comes from the notation used in academic papers on the subject, which is X • Y where X and Y are the two domains. So the above example might be Org • Phy.
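If it helps, here’s the idea as code. This is my own illustrative analogy, not anything from the semantics literature, and all the names are invented: a dot object becomes one value carrying a facet per domain, and each predicate selects the facet it actually applies to.

```python
from dataclasses import dataclass

@dataclass
class Organization:
    alumni: list[str]

    def email_alumni(self, message: str) -> None:
        for address in self.alumni:
            print(f"emailing {address}: {message}")

@dataclass
class Building:
    needs_renovation: bool

@dataclass
class University:
    # A dot object in the sense of Org • Phy: one entity that
    # subsists in two semantic domains at once.
    org: Organization  # the institutional facet
    phy: Building      # the physical facet

university = University(
    org=Organization(alumni=["alumna@example.com"]),
    phy=Building(needs_renovation=True),
)

# "The university needed renovations, so it emailed its alumni":
# each clause picks out a different facet of the same entity.
if university.phy.needs_renovation:
    university.org.email_alumni("Please donate to the renovation fund!")
```

The single variable `university` is doing the work that the single word “university” does in the example sentence: one name, two domains.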
My blog post titles have been getting weirder and weirder. This one’ll make sense by the end, I swear.
At the recommendation of Kenzi at the Center for Applied Rationality, I’ve been reading the book Eat That Frog by Brian Tracy, which is based around the following premises:
“You will never be caught up.”
“There is never enough time to do everything, but there is always enough time to do the most important thing.”
However, this only works if you actually have the focusing ability to turn [intending to do the most important thing] into not just doing it but finishing it. And you need to be able to trust yourself to do that. This post explores a particular kind of failure mode that occurs if you don’t have that kind of trust.
The titular frogs refer to » read the rest of this entry »