We interrupt your regularly scheduled metaprogramming to bring you a stream-of-consciousness musing on the nature of being, and related topics. This is more me playing with ideas than trying to make any case in particular.
Sometimes I forget that I exist in the physical realm. That I’m made of stuff. Less so, perhaps, than many of my mathier friends, but still fairly often.
In one sense, this is true: what “I” am is an identity, a sense of self, a pattern. That pattern currently happens to be expressed very physically: my computations may be virtual in a sense, but they’re tightly coupled to input from the physical world, including parts of the physical world that are also considered to be “me”: the parts of my body.
But of course they’re “me” only for convenience, because they’re an extension of my cognition. Immediately after my finger is cut off, it’s no longer “me”. I wonder if people who are paraplegic don’t feel like their legs are “them”. Does someone with phantom limb syndrome include their phantom limb in their notion of “me”, even though it doesn’t exist in the normal sense?
Relatedly, we often feel like the rogue agents in our brains aren’t us. Hell, sometimes I’ve even said/heard “my brain just generated a thought, which was…” So I guess a large fraction of my cognition also isn’t exactly “me”. Dis-identification from my thoughts, for better or for worse.
Seriously though, we’re made out of stuff. I just had a sip of water. Some of that water is going to become me. Pop quiz: what percentage of human body mass is water?
The answer is of course 70%.
You probably knew that. I’ve known it since back when I was so small that my entire body weighed less than the amount of my body that is currently made of water.
But I was staring at that fact today and boggling.
Imagine trying to build a robot that’s competitive at soccer, or chess, or basically anything, and 70% of the functioning mass of the robot has to be made of water. Where would you even start?
I looked it up. Turns out 70% of that 70% (i.e. about half of human mass) is within our cells, and isn’t exactly pure water. It’s something called cytosol, which is water with a bunch of other stuff suspended in it. Okay, that makes sense: humans are made mostly of water and almost entirely of cells, so naturally cells are mostly made of water.
But I look at my skin, at my muscles (other tissues, like fat and bone, contain much less water) and they don’t look like they’re 70% water. They seem pretty dry. My brain, on the other hand…
Apparently the human brain is 85% water. It’s basically already a brain in a vat, except the vat is conveniently attached to a body etc. Which really has nothing to do with the point of the term “brain in a vat”. Except that I think at least a little of what we find weird about that is the idea of the brain suspended in a liquid. Which—spoiler!—it already is.
And all of your cognition happens in the form of chemical and electrical signals whose charge carriers often aren’t electrons but sodium, potassium, and calcium ions.
Someone (Steven Pinker!) wrote a book called “The Stuff of Thought”. It’s about language. But it could have been about ions and water.
I’ve become more existentialist in the last while. It’s not altogether pleasant, to be honest.
I’ve been reading things like Beyond the Reach of God, and realizing that there aren’t really any guarantees of anything in particular. That this universe doesn’t have inherent meaning, or purpose. Or at least, it seems from our perspective to be pretty consistent with one that doesn’t.
I’ve been reading things like Julian Jaynes’ The Origin of Consciousness in the Breakdown of the Bicameral Mind, and getting a better sense of why humans so deeply crave certainty. By that model, back when the gods (≈ right hemispheres) spoke to everyone and directed their actions, there wasn’t an enduring feeling of subjective uncertainty about what to do. In each moment, people acted out of habit or in accordance with their “god’s” instructions.
What’s the difference between fate and determinism? One way to think about it is that fate is a psychological concept, while determinism is a mathematical concept.
Again following Jaynes’ model: after the breakdown of the bicameral mind, people could no longer access their intuitions so easily, so when they had to make tough decisions they did things like divining with dice. These people, in the first millennia BCE and CE, didn’t have the concepts of “chance” or “randomness”, so they interpreted the dice (or pig entrails) as being meaningful in human terms.
A shape like a mountain? Must mean something about an actual mountain. In many cases, probably, what they saw was determined by something that their intuition understood and wanted to make known, and so this process of making decisions worked better than, say, chance. But fate is the notion that these decisions came from the outside, rather than from an internal sense of authority.
The divining system would act as the guide for what to do, in a way that let the person trust in it. Of course, ultimately, the decision took place somewhere in their brain-made-of-85%-water, just maybe not in the part of them that carries around the sense of self.
Determinism is the idea that if you had the right information, you could compute the future state of the universe. This being true doesn’t imply that that future state is meaningfully derived in human terms though.
> meaningful in human terms
I was having a conversation with a friend the other day, and I found myself musing…
What are we computing right now? Like, if all of my neurons are collectively computing “Malcolm’s thoughts”, and there’s a structural similarity between thinking and the thing we’re doing (conversation), then the conversation we’re having can be considered to be computing something. Humanity’s collective conversation, as well.
Yeah, people have debated whether or not humanity is computing anything meaningful…
Well of course it’s not meaningful to us. The computations of neurons, viewed at our level of abstraction, are human thoughts, which are meaningful to humans but not to the neurons themselves. To be meaningful to a human is to be meaningful in human terms.
Although the computations themselves are of course made of human meaning. In the same sense that a serotonin molecule or an increased voltage differential is a meaningful signal to a neuron.
Now, strictly speaking, the claim I make in the first paragraph here isn’t quite true. Meaning is closely related to metaphor, in the sense that it’s found in the structural similarities between concepts. So probably someone could explain whatever humanity is computing in a way that is meaningful to humans…
…provided that there’s some sort of overarching structure to what humanity is computing, that we could analogize to human terms. Which was perhaps my friend’s point.
The idea that humanity is computing something large, but something that is meaningful in human terms seems pretty similar to my definition of fate, above. A master narrative.
If you liked this post, you might like The patterns that we’re made of. It’s similar and perhaps better.
“It was a pity thoughts always ran the easiest way, like water in old ditches.” ― Walter de la Mare, The Return
You’re probably more predictable than you think. This can be scary to realize, since it means you have less control than you might feel like you do, but it can also be a relief: feeling like you have control over something you don’t actually control can lead to self-blame, frustration, and confusion.
One way to play with this idea is to assume that future-you’s behaviour is entirely predictable, in much the same way that if you have a tilted surface, you can predict with a high degree of accuracy which way water will flow across it: downhill. Dig a trench, and the water will stay in it. Put up a wall, and the water will be stopped by it. Steepen the hill, and the water will flow faster.
So what’s downhill for you? What sorts of predictable future motions will you make?
…when to correct and when to riff…
Say you’re having a conversation with someone, and you’re trying to talk about a concept or make sense of an experience or something. And you say “so it’s sort of, you know, ABC…” and they nod and they say “ahh yeah, like XYZ”
…but XYZ isn’t quite what you had in mind.
There can be a tendency, in such a situation, to correct the person, and say “no, not XYZ”. Sometimes this makes sense; other times it’s better to have a different response. Let’s explore!
The short answer is that this sort of correction is important if it matters specifically what you meant. Otherwise (or if this is ambiguous) it can frustrate the conversation.
The most extreme example of where it feels like it matters is if you have a particular thing in mind that you’re trying to explain to the other person—like maybe someone is asking me to tell them about my app, Complice:
Me: “It’s a system where each day you put in what you’re doing towards your long-term goals, and track what you accomplish.”
Them: “Ohh, so like, you use it to plan out projects and keep track of all of the stuff you need to do… deadlines and so on…”
Me: “Ahh, no, it’s much more… agile than that. The idea is that long-term plans and long task lists end up becoming stale, so Complice is designed to not accrue stuff over time, and instead it’s just focused on making progress today and reflecting periodically.”
Where the shared goal is to home in on exactly how Complice works, it makes sense for me to correct what they put out.
We might contrast that with a hypothetical continuation of that conversation, in which we’re trying to brainstorm, or flesh out an idea: » read the rest of this entry »
Expectation is often used to refer to two totally distinct things: entitlement and anticipation. My basic opinion is that entitlement is a rather counterproductive mental stance to have, while anticipations are really helpful for improving your model of the world.
Here are some quick examples to whet your appetite…
1. Consider a parent who says to their teenager: “I expect you to be home by midnight.” The parent may or may not anticipate the teen being home on time (even after this remark). Instead, they’re staking out a right to be annoyed if they aren’t back on time.
Contrast this with someone telling the person they’re meeting for lunch, “I expect I’ll be there by 12:10”, as a way to let them know that they’re running a little late, so that the recipient of the message knows not to worry that maybe they’re in the wrong meeting spot, or that the other person has forgotten.
2. A slightly more involved example: I have a particular kind of chocolate bar that I buy every week at the grocery store. Or at least I used to, until a few weeks ago when they stopped stocking it. They still stock the Dark version, but not the Extra Dark version I’ve been buying for 3 years. So the last few weeks I’ve been disappointed when I go to look. (Eventually I’ll conclude that it’s gone forever, but for now I remain hopeful.)
There’s a temptation to feel indignant at the absence of this chocolate bar. I had an expectation that it would be there, and it wasn’t! How dare they not stock it? I’m a loyal customer, who shops there every week, and who even tells others about their points card program! I deserve to have my favorite chocolate bar in stock!
…says this voice. This is the voice of entitlement.
Hi! I’m Interface. You may remember me from A ritual to upgrade my Face. I’m the part of Malcolm that navigates most social situations, and represents the bulk of his personality. I also like self-expression in the form of blog posts. You’ll meet the rest of the cast on the Malcolm show in a bit. Although most of the characters aren’t that visible from the outside, usually.
The rest of this intro will just be in first person as Malcolm.
Brief context—feel free to skip ahead—as a result of the sci-fi novel Crystal Society (it’s fantastic, go read it) the CFAR alumni list got talking about modeling one’s society of mind. One alum linked to a blog by someone named Mory Buxner, in which he plays a game where he has 8 different parts, each of whom gets a particular kind of score for the kind of thing that they do, and they take turns being in charge of what Mory is up to.
I shared this post on Facebook, which prompted Brienne Yudkowsky to try breaking down her different Drives to Action in this way. She did so by having a dialogue between the different parts, in which they tried to map out who all of the parts were, by figuring out which parts were attracted to different activities: activities that she’d done while spending a week doing whatever she felt like doing. Her blog post.
And yesterday/today, what I felt like doing was following her lead and doing this myself. I didn’t spend a week doing whatever I felt like—this seems to not actually work very well for me. But I made a list, from intuition. Then I clustered it into groups (some of these will end up merged). Then I talked to myself a bunch and managed to create a decent list of motivations—including a part that had been kind of hidden until now!
Without further ado…
Cluster A: read fiction, watch movies (and occasionally TV shows), look at art / illusions / trippy videos.
Cluster B: play Dominion, MtG, and other games… cuddle, make out, scroll my facebook newsfeed, watch music videos, wikipedia adventures, random research.
I first tried polyphasic sleep almost 5 years ago, in summer 2011. About 6 days into my uberman adaptation, I gave up. Two years later, I tried adapting to everyman 3, which I persisted with for several months with some success, but ultimately it didn’t quite work for me. Towards the end of that summer (2013) I tried uberman again, because a bunch of people were trying it all at once and I still aspired to greatness.
The results of that experiment are pretty telling: out of a dozen people, not one successfully adapted to uberman or everyman. This despite doing nearly everything right, and living all together in a house where they could make sure each other stayed awake. Within a month or two, all had reverted, and I hear that there were some negative effects in the form of narcolepsy and one or two other issues.
So if you’re planning to adapt to one of these schedules, your odds of success are low.
That being said, I maintain that my polyphasic sleep experiments ended up having one of the most positive effects on my life. Why? I learned to nap and became biphasic, fixing a sleep issue I’d had for as long as I can remember.
When I was a kid, my parents used to insist that my lights be out by a certain time, but I was almost never able to actually sleep then, so I would sneakily read with a flashlight, or other times follow the letter of the law by doing things in my room with the lights out (mostly push-ups and sit-ups).
One of the easiest times to change your personality (to become less shy, for instance) is when you move somewhere new. Personalities are interfaces, so those who are familiar with you will have expectations of how to interface with you—some of which they may cherish; others may be frustrating.
But at any rate, the ways that they’ll interact with you will be designed to interface with the personality they know. Which means that it’ll tend to reinforce the older patterns in you, since those will be easiest and most comfortable. (There’s an additional element related to the logic of appropriateness, too.)
I recently found myself wanting to upgrade my personality, without an obvious context change like moving.
And, since I had been talking with my friend Brent about chaos magick, ritual-work and my behaviour change desires, he suggested creating a ritual for myself.
I liked the idea: a ritual would…
As I said above, if you want to have dramatic change, there usually has to be a moment when it happens. Otherwise you’re going to tend to assume that » read the rest of this entry »
About half a million people are injured each year from motor vehicle accidents involving a distracted driver. (This post isn’t actually about driving—we’re going to use driving as an analogy to understand something else.)
This article cites research to answer a bunch of FAQs about the dangers of talking on the phone while driving. One of these is:
Q: Is talking on the phone more distracting than talking to a passenger?
A: The cognitive workload for the driver is the same, according to Strayer. In his test, conversing with a passenger rated a 2.3 on the 1-to-5 scale; talking on a hand-held phone, a 2.4; and a hands-free phone, a 2.3. However, having another person in the car generally results in safer driving, because there’s often an extra set of eyes on the road. Also, passengers tend to stop talking when the demands of driving increase, Strayer says. “So passenger and cell conversations have different crash risks because the passenger helps out.”
There are a couple things going on here. One of them is that the passenger has more situational awareness: the phone-based conversational partner may not even know their counterpart is on the road, let alone any of the details. The passenger can observe not only the driver but also the state of the car and the surroundings. They may additionally be aware of the intended destination, and so on. The other main thing that’s going on is that the passenger is going the same place as the driver, in the same vehicle, so they have a natural built-in interest to help the drive go well. They have a shared intent, and aligned interests.
Now, if I’m on the phone with you while I’m driving, you (hopefully!) don’t want me to crash, but psychologically it’s very different from when you yourself are (a) at risk and (b) your hindbrain knows it.
So I think that there’s something important going on with both of these pieces: awareness and values.
If you get distracted while driving, you might get in an accident. If you get distracted while working, or otherwise pursuing some sort of goal, you might waste time and fail to achieve your aims. And as with driving, other people can totally be distracting.
Given that, what can we learn from the driving analogy, that might inform how and with whom we choose to relate?
» read the rest of this entry »
Naming things! Naming things is hard. It’s been claimed that it’s one of the hardest parts of computer science. Now, this might sound surprising, but one of my favoritely named concepts is Kahneman’s System 1 and System 2.
I want you to pause for a few seconds and consider what comes to mind when you read just the bolded phrase above.
If you’re familiar with the concepts of S1 and S2, then you probably have a pretty rich sense of what I’m talking about. Or perhaps you have a partial notion: “I think it was about…” or something. If you’ve never been exposed to the concept, then you probably have no idea.
Now, Kahneman could have reasonably named these systems lots of other things, like “emotional cognition” and “rational cognition”… or “fast, automatic thinking” and “slow, deliberate thinking”. But now imagine that it had been “emotional and rational cognition” that Kahneman had written about, and the effect on the earlier paragraph.
It would be about the same for those who had studied it in depth. But those who had heard about it only briefly (or who at one point knew about the concepts) would be reminded of that one particular contrast between S1 and S2 (emotion/reason) and be primed to think it was the main one, forgetting about all of the other parameters that the distinction seeks to describe. Those who had never heard of Kahneman’s research might assume that they basically knew what the terms were about, because they already have a sense of what emotion and reason are.
I spent last weekend mentoring at a CFAR workshop. One interesting pattern that I and another mentor identified was that sometimes less enthused participants would confront one of us about CFAR’s flaws. These conversations often seemed to have a thing in common:
Participant: “So it seems that CFAR has flaw X, and also flaw Y.”
Me: “Oh yeah, totally. Those are definitely issues that are keeping CFAR from really being as great as it could be.”
Participant: “So like, ugh, CFAR?”
Me: “But like… CFAR!”
Which is to say that the participant was taking flaws X and Y as implying that CFAR was doomed or something, whereas I was thinking that CFAR was pretty great, and would be even better once X and Y were fixed. And hey, great, now that we’ve identified that X and Y are the main flaws, that’s substantial progress towards fixing them.
Neither position is necessarily right. The implications are » read the rest of this entry »