As someone currently experiencing substantial amounts of collective intelligence on Twitter, here’s some of what I’m seeing as the emerging edge of new behaviors and culture, and one bottleneck on our capacity to think together and make sense of the world.
Some of us are pioneering a new experience of Twitter that’s amazing, and that wouldn’t be possible on any other platform that exists today.
Conversation is thinking together.
Collective intelligence is, at its core, good conversation.
Many people, on and off Twitter, think of it as a shouting fest, and parts of it are. And… at the same time, on the same app, with the same features but some different cultural assumptions, there are pockets where people are meeting the others, making scientific progress, falling in love, healing their trauma, starting businesses together, and sharing their learning processes with each other.
Those sorts of metrics—as hard to measure as they are—form a kind of north star for Twitter. This creature has the potential to be the best dating app (for some people) and a way better place for finding your dream job than LinkedIn (for many people). And so on.
Cities have increased creativity & innovation per capita, ie when you add more people each person becomes more, because more people & ideas can bump into each other. The internet is a giant city, and this is far more true on Twitter than any other platform, particularly because of how tightly it allows the interlinking of ideas with Quote Tweets.
Twitter is very much about “what’s happening [now]” but, as the world has been collectively realizing over the past decade, simply knowing “what’s happening” in some isolated way is meaningless and disorienting. Meaning comes from filtering & distilling & contextualizing what’s happening, and this is part of what Twitter is already so brilliant for, because everyone can talk to everyone and the ultra-short-form non-editable medium encourages you to tweet today’s thoughts today rather than drafting them today, editing them tomorrow, then scheduling them for next week’s newsletter.
When someone makes a quote-tweet, they’re essentially saying “I have some thoughts I’d like to share, that relate to the tweet here”. This might be a critique of the quoted tweet/thread, or it might be using the quoted material as a sort of footnote of supportive evidence or further reading or ironic contrast. This meta-commentary is very powerful, whether it’s used by someone reflecting “I think what I really meant to say here was” or someone framing a thread they just read as an answer to a particular question they and their followers might care about.
Currently, however, it’s impossible to QT two or more tweets at once. This means that in the natural ontology of Twitter, there is no way to properly compare or contrast or relate different thoughts.
This contributes, I think, to the fragmented & divergent quality of thinking on Twitter: the structure of the app makes it hard to express convergent thoughts. You can use screenshots… but then all context & interlinking & copy-pastability is destroyed. You can have a meta-thread that pulls a bunch of things together… but each tweet in that thread is still only referencing one other tweet, so there’s no single utterance that performs the act of relating other utterances.
The number of utterances that need to connect two other pre-existing utterances is huge. Thoughts shaped like:
Similarly to how the #hashtag & @-mentions evolved from user behavior, and the Retweet functionality evolved out of people copying others’ tweets and tweeting them out with “RT @username: ” at the start, and Quote Tweeting evolved out of people pasting a link to another tweet within their tweet… MultiQT is a natural evolution of the “screenshot of multiple tweets” and “linking tweets together as a train of thought using multiple QTs in a thread” behaviors.
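Structurally, the change MultiQT implies is small. Here’s a minimal sketch (the class and field names are hypothetical, not Twitter’s actual data model): a tweet’s quote reference becomes a list instead of a single optional ID, so that one utterance can finally relate several others.

```python
# Hypothetical sketch of the MultiQT data-model change: the quote
# reference becomes a list of tweet IDs rather than a single optional ID.
from dataclasses import dataclass, field

@dataclass
class Tweet:
    id: int
    text: str
    # Today this is effectively a single optional ID; MultiQT makes it a list.
    quoted_tweet_ids: list[int] = field(default_factory=list)

# One utterance that compares/contrasts two pre-existing utterances:
comparison = Tweet(id=3, text="These two takes resolve each other:",
                   quoted_tweet_ids=[1, 2])
```

With a list, a single tweet can perform the act of relating other tweets, rather than needing a whole thread of one-quote-at-a-time links.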
I didn’t even realize quite how much I’d want this until I started mocking up the screenshots below by messing with the html in the tweet composer and being so sad I couldn’t just hit “Send Tweet”. I can already tell that like @-mentions and RTs, once we’re used to this it’ll feel absurd to think we ever lived without it.» read the rest of this entry »
I’ve recently added a new page to my website called Work With Me. The page will evolve over time but I’m going to write a short blog post about the concept and share the initial snapshot of how it looks, for archival purposes.
There are a lot of stories I could tell here. I’ll tell a few slightly fictional versions before getting to the actual series of events that occurred.
One fictional version is that I was inspired by Derek Sivers’ /now page movement but I wanted something that created more affordances for people to connect with me, including regarding opportunities that I’m not actively pursuing now on my own. This is true in the sense that I was thinking about /now by the time I published the page, and in the sense that I would love to see Work With Me pages show up on others’ sites. You could be the first follower, who starts a movement!
Another fictional version is that I was thinking about my Collaborative Self-Energizing Meta-Team Vision and wondering how to make more surface area for people to get involved. I’m someone who thinks a lot about interfaces, not just between humans and products but also between humans and other humans, and it occurred to me that there wasn’t a good interface for people to find out how to plug in with me to work on self-energizing projects together. So I made this page! This was also on my mind, but it’s still not quite how it happened.» read the rest of this entry »
Some years ago, I invented a new productivity system called Complice. Complice is a productivity app, and it’s also a productivity philosophy, or even an entire paradigm.
Complice is a new approach to goal achievement, in the form of both a philosophy and a software system. Its aim is to create consistent, coherent processes for people to realize their goals, in two senses:
Virtually all to-do list software on the internet, whether it knows it or not, is based on the workflow and philosophy called GTD (David Allen’s “Getting Things Done”). Complice is different. It wasn’t created as a critique of GTD, but it’s easiest to describe it by contrasting it with this implicit default so many people are used to.
First, a one-sentence primer on the basic workflow in Complice:
There’s a lot more to it, but this is the basic structure. Perhaps less obvious is what’s not part of the workflow. We’ll talk about some of that below, but that’s still all on the level of behavior—the focus of this post is the paradigmatic differences of Complice, compared to GTD-based systems. These are:
Keep reading and we’ll explore each of them…» read the rest of this entry »
Originally written October 19th, 2020 as a few tweetstorms—slight edits here. My vision has evolved since then, but this remains a beautiful piece of it and I’ve been linking lots of people to it in google doc form so I figured I might as well post it to my blog.
Wanting to write about the larger meta-vision I have that inspired me to make this move (to Sam—first green section below). Initially wrote this in response to Andy Matuschak’s response “Y’all, this attitude is rad”, but wanted it to be a top-level thread because it’s important and stands on its own.
Hey @SamHBarton, I’m checking out lifewrite.today and it’s reminding me of my app complice.co (eg “Today Page”) and I had a brief moment of “oh no” before “wait, there’s so much space for other explorations!” and anyway what I want to say is:
How can I help?
Because I realized that the default scenario with something like this is that it doesn’t even really get off the ground, and that would be sad 😕
So like I’ve done with various other entrepreneurs (including Conor White-Sullivan!) would love to explore & help you realize your vision here 🚀
Also shoutout to Beeminder / Daniel Reeves for helping encourage this cooperative philosophy with eg the post Startups Not Eating Each Other Like Cannibalistic Dogs. They helped mentor me+Complice from the very outset, which evolved into mutual advising & mutually profitable app integrations.
Making this move, of saying “how can I help?” to a would-be competitor, is inspired for me in part by tapping into what for me is the answer to “what can I do that releases energy rather than requiring energy?” and finding the answer being something on the design/vision/strategy level that every company needs.» read the rest of this entry »
Another personal learning update, this time flavored around Complice and collaboration. I wasn’t expecting this when I set out to write the post, but what’s below ended up being very much a thematic continuation on the previous learning update post (which got a lot of positive response) so if you’re digging this post you may want to jump over to that one. It’s not a prerequisite though, so you’re also free to just keep reading.
I started out working on Complice nearly four years ago, in part because I didn’t want to have to get a job and work for someone else when I graduated from university. But I’ve since learned that, to an extent, it wasn’t just working for people that I found aversive, but merely working with people long-term. One of my growth areas over the course of the past year or so has been developing a way-of-being in working relationships that is enjoyable and effective.
I wrote last week about changing my relationship to internal conflict, which involved defusing some propensity for being self-critical. Structurally connected with that is getting better at not experiencing or expressing blame towards others either. In last week’s post I talked about how I knew I was yelling at myself but had somehow totally dissociated from the fact that that meant that I was being yelled at.
“It was a pity thoughts always ran the easiest way, like water in old ditches.” ― Walter de la Mare, The Return
You’re probably more predictable than you think. This can be scary to realize, since it implies you don’t have as much control as you might feel like you do, but it can also be a relief: feeling like you have control over something you don’t have control over can lead to self-blame, frustration and confusion.
One way to play with this idea is to assume that future-you’s behaviour is entirely predictable, in much the same way that if you have a tilted surface, you can predict with a high degree of accuracy which way water will flow across it: downhill. Dig a trench, and the water will stay in it. Put up a wall, and the water will be stopped by it. Steepen the hill, and the water will flow faster.
So what’s downhill for you? What sorts of predictable future motions will you make?
There are a lot of interfaces that irk me, not because they’re poorly designed in general, but because they don’t interface well with my brain. In particular, they don’t interface well with the speed of brains. The best interfaces become extensions of your body. You gain the same direct control over them that you have over your fingertips, your eyes, your tongue in forming words.
This essay comes in two parts: (1) why this is an issue and (2) advice on how to make the best of what we’ve got.
One thing that characterizes your control over your body is that it (usually) has very, very good feedback. Probably a bunch of kinds you don’t even realize exist. Consider that your muscles don’t actually know anything about location; they simply exert a pulling force. If all of the information you had were your senses of sight and touch-against-skin, and the ability to control those pulling forces, it would be really hard to control your body. But fortunately, you also have proprioception, the sense that lets you know where your body is, even if your eyes are shut and nothing is touching you. For example, close your eyes and try to bring your finger to about 2cm (an inch) from your nose. It’s trivially easy.
One more example that I love and then I’ll move on. Compensatory eye movements. Focus your gaze at something at least two feet away, then bobble your head around. Tried it? Your brain has sophisticated systems (approximating calculus that most engineering students would struggle with) that move your eyes exactly opposite to your head, so that whatever you’re looking at remains in the center of your gaze and really quite incredibly stable even while you flail your head. This blew my mind when I first realized it.
The result of all of these control systems is that our bodies kind of just do what we tell them to. As I type this, I don’t have to be constantly monitoring whether my arms are exerting enough force to stay levitated above my keyboard. I just will them to be there. It’s beyond easy―it’s effortless.
Now, try willing your phone to call your friend. You’re allowed to communicate your will using your voice, your hands, whatever. Why does it take so many steps, or so much waiting?
Growing up, you make decisions, but it’s kind of like a Choose Your Own Adventure Book.
Finally, you reach grade 12. It’s time to choose which university to attend after high school!
- To check out the prestigious university where your dad went, turn to page 15.
- To visit the small campus nearby that would be close enough to live at home, turn to page 82.
- To take a road trip with friends to the party college they want to go to, turn to page 40.
That’s a decent set of choices. And you know, there exist hypothetical future lives of yours that are really awesome, along all pathways. But there are so many more possibilities!
Both personal experience and principles like Analysis Paralysis agree that when you have tons of choices, it becomes harder to choose. Sure. But, to the extent that life is like the hypothetical Choose Your Own Adventure Book (hereafter CYOAB) above, I don’t think the issue is that there aren’t enough options. The issue lies in the second sentence, which contains a huge assumption: that in grade 12, it’s time to choose a university to attend. Sure, maybe later in the book is a page that says something about “deferring your offer” to take a “gap year”, but even that is presented as just an option among several others. And so it goes, beyond high school and post-secondary education and into adulthood.
What you don’t get to do, in a CYOAB, is strategize about what you want and how to get it. » read the rest of this entry »
A short reflection on two even shorter words.
The other day, I was reading the details of various phone services while logged into my carrier’s website. I came across a section that read:
Long distance charges apply if you don’t have an unlimited nationwide feature.
…so I’m like “Wait? Do I have an unlimited nationwide feature?” and it occurs to me that there was no reason for them to use the word “if” there. I’m logged in! Their system knows the answer to the if question and should simply provide the result instead of forcing me to figure out if I qualify.
In some cases, of course, it might be valuable to let the user know that the result hinges on the state of things, but there’s an alternative to “if”. It’s called “since”. So that page, instead of what it said, should have been something more like:
Long distance charges would apply, but they don’t since you have an unlimited nationwide feature.
Long distance charges apply since you don’t have an unlimited nationwide feature. Upgrade now
I was initially going to just talk about software, but this actually applies to any kind of service, including one made of flesh and smiles. The keystone of service is anticipation. A good system will anticipate what the user needs/wants and will provide it as available. This means not saying “if” when the if statement in question can be evaluated by the server (machine or human) instead.
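To make this concrete, here’s a minimal sketch (the function and field names are hypothetical, not the carrier’s actual system): since the server already knows whether the account has the unlimited nationwide feature, it can evaluate the “if” itself and render a “since” message for whichever case applies.

```python
# Hypothetical sketch: the server evaluates the condition it already
# knows the answer to, instead of making the user evaluate an "if".

def long_distance_notice(account: dict) -> str:
    """Render the long-distance message based on the account's actual state."""
    if account.get("has_unlimited_nationwide"):
        return ("Long distance charges would apply, but they don't "
                "since you have an unlimited nationwide feature.")
    return ("Long distance charges apply since you don't have an "
            "unlimited nationwide feature. Upgrade now.")

print(long_distance_notice({"has_unlimited_nationwide": True}))
```

The “if” still exists, of course—it has just moved to where it belongs: into the code, evaluated against data the server already holds, so the user only ever sees the branch that applies to them.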
Framing is important. There are many other examples of this (in fact, I’m in the process of compiling a list of helpful ways to reframe things) but here’s a simple one. It relates to the word “but”. Specifically, to the order of the two clauses attached to the “but”. The example that prompted me to jot this idea down was deciding which of the following to write in my journal:
As is readily apparent, the second part becomes the dominant or conclusive statement as it gets the final word against the first statement. In this case, I opted in the end to use the former option, because it affirms the value of reading the book rather than suggesting it’s not worth it in the long run. The book in question is a now-finished serial ebook called The Surprising Life and Death of Diggory Franklin, and the sentences above should give you an adequate warning/recommendation not to read it.
This bit about the buts is obvious in hindsight, but I found that laying it out explicitly like this helped me start noticing it a lot more and therefore reframing both my thoughts and my communication.
Say you want to express to a cook both your enjoyment of a meal and your surprise at its spiciness. There are several options:
…but, maybe the extra spiciness didn’t detract from the enjoyment. In that case, a better conjunction would be “and”. Again, like before, this sounds obvious, but once consciously aware of it I started catching myself saying “but” in places that didn’t adequately capture what I wanted to say or in some cases were rude. The chef remark above has the potential to be rude, for example.
If you want to add to the reframing list, comment below or shoot me an email at malcolm@[thisdomain].
I’ve been the owner of an Android phone (HTC Incredible S) for 9 months now, but today I sent it off to get serviced because the touchscreen has been acting up. I first noticed the touchscreen behaving strangely this fall, when horizontal bands of the screen would sometimes be unresponsive. On the right, in a mockup of a drawing app, you can see how poking the screen produced no dots on the band, and so on. This was even more annoying when trying to type, because the bottom band passed right through the home-row (if you can still call it that on a touchscreen) of the keyboard.
Anyway, at first it would just do this for a few minutes every day, but then it started to act up like this consistently. The problems got progressively worse until by mid-January I could never be certain at any given moment that I’d be able to use my phone at all. Furthermore, touch events started happening in the wrong places—I would try to select “Yes” and the screen would select “No”, or taps would become long presses. Sometimes the phone would seem to think I had touched somewhere on the screen when it was sitting a foot away on my desk, and would navigate interfaces on its own.
Emotional intelligence (hereafter EI, though often called EQ like IQ) is a term that is used to describe one’s ability to perceive others’ emotions and respond appropriately in social situations. As the image here from Psychology Today illustrates, EQ is ascribed a fair amount of importance. The same concept is also embodied in maxims like “It’s not what you know, but who you know”.
So if we were to map the concept of IQ to technology, it could refer to a number of things, but at the forefront is processing power and efficiency/effectiveness of algorithms. Of additional consideration is the ability to learn new things, which is likely where the concept of a smartphone comes from: in addition to built-in phone features like SMS and alarms, smartphones can play games, interact with social networks, and let us draw, to mention just some of the hundreds of thousands of apps out there.
How, then, would we map EI, or EQ? Emotional intelligence, for a computer or smartphone, or any piece of software, is its interface. A piece of software has good EI if it responds the way you expect it to, and even better EI if it anticipates your needs and makes it easy to accomplish your goals. When our technology does this, we adore it, and when it fails to do so, we abhor it.
However, this feeling of dislike can actually go further than just general annoyance or frustration at an inability to properly perform a task using some interface. I realized this rather profoundly with my defective smartphone when I had been trying in vain for probably five minutes to do something really simple like call someone. I was completely unable to navigate the interface because the screen would constantly press other places or simply refuse to push where I wanted. How do you think I felt? I’m actually going to give you space to guess. Think of an adjective that you would expect to most accurately describe my feelings at that point.
Did you say frustrated? Angry? Disgusted? Resentful? Those are all true, but that’s not exactly right. When I couldn’t use the interface, I felt hurt. It sounds odd, but I had an emotional response in my chest that I’ve recognized as the one I feel when someone is being cruel to me (I was bullied a bit when I was younger). It was probably the second time I had this response that I realized how strange that was. After all, at no point in this process had anyone set out to hurt me. Why did I feel like my phone was being mean to me?
After some reflection, I concluded that how I really felt was misunderstood. I was trying to communicate with my phone, via its touchscreen interface (which was designed for human fingers) and it seemed to be completely misunderstanding my instructions and ignoring them or vehemently disobeying them. I had unwittingly personified my phone to a huge extent, so it really hurt when I felt like it was ignoring me while I was going out of my way to communicate with it (eg. turning the phone to put UI elements in different places so I could access them). This would be like asking someone close to you (smartphones are companions) for help and having them plug their ears, sing, and then do something random that might be slightly related to what you were asking. They’d be taunting you.
I’m a designer, a hacker, and an engineering student, so I make things with interfaces. In fact, I’m quite passionate about user experience (UX) and interface design. While my phone’s flaky touchscreen was obviously not intentional, I believe that what I’ve learned here applies to conscious interface design as well. This kind of revelation is less of a “how” than a “why”. That is, prior to these experiences, I had had no idea that interfaces could cause such an emotional impact.
When my smartphone became stupid, it didn’t lose processing power. It simply lost the ability to communicate with me, and that felt far worse than I could have possibly expected. I’m going to remember this every time I design an interface.