School taught me to write banal garbage because people would thumbs-up it anyway. That approach has been interfering with my attempts to actually express my plans in writing, because my mind keeps simulating some imaginary prof who will look it over and go “ehh, good enough”.
Looking good enough isn’t actually good enough! I’m trying to build an actual model of the world and a plan that will actually work.
Granted, school isn’t necessarily all like this. In mathematics, you need to actually solve the problem. In engineering, you need to actually build something that works. But even in engineering reports, you can get away with a surprising amount of shoddy reasoning. A real example:
I feel like I need to add a disclaimer here or something: I’m a ringed engineer, and I care a lot about the ethics of design, and I don’t think any of my shoddy thinking has put any lives at risk. I also don’t believe that any of my shoddy thinking in design reports has violated academic integrity guidelines at my university (e.g. I haven’t made up facts or sources).
But a lot of it was still shoddy. Most students are familiar with the process of stating a position, googling for a citation, then citing some expert who happened to agree. It was shoddy because nothing in the school system incentivized me to make it otherwise, and because I reasoned it would have cost more to write only what I actually, deeply, and confidently believed, or to accurately and specifically present my best model of the subject at hand. I was trying to spend as little time and attention as possible on school things, to free up more time and attention for working on my business, the productivity app Complice.
What I didn’t realize was the cost of practising shoddy thinking.
Having finished the last of my school obligations, I’ve launched myself into some high-level roadmapping for Complice: what’s the state of things right now, and where am I headed? And I’ve discovered a whole bunch of bad thinking habits. It’s obnoxious.
I’m glad to be out.
(Aside: I wrote this entire post in April, when I had finished my last assignments & tests. I waited two months to publish it, so that by now I’ve safely graduated.)
I was already aware of a certain aversion I had to planning. So I decided to make things a bit easier with this roadmapping document, and base it on one my friend Oliver Habryka had written about his main project. He had created a 27-page outline in Google Docs, shared it with a bunch of people, and received some really great feedback and other comments. Oliver’s introduction includes the following paragraph, which I decided to quote verbatim in mine:
This document was written while continuously repeating the mantra “better wrong than vague” in my head. When I was uncertain of something, I tried to express my uncertainty as precisely as possible, and when I found myself unable to do that, I preferred making bold predictions to vague statements. If you find yourself disagreeing with part of this document, then that means I at least succeeded in being concrete enough to be disagreed with.
In an academic context, at least up to the undergrad level, students are usually incentivized to follow “better vague than wrong”. Because if you say something the slightest bit wrong, it’ll produce a little “−1” in red ink.
And if you and the person grading you disagree, a vague claim is more likely to be interpreted favorably. There’s a limit, of course: you can’t just say “some studies have shown that some people sometimes found X to help”. But still.
Nate Soares has written about the approach of whole-assed half-assing:
Your preferences are not “move rightward on the quality line.” Your preferences are to hit the quality target with minimum effort.
If you’re trying to pass the class, then pass it with minimum effort. Anything else is wasted motion.
If you’re trying to ace the class, then ace it with minimum effort. Anything else is wasted motion.
My last two yearly review blog posts have followed the structure of talking about my year on the object level (what I did), the process level (how I did it), and the meta level (my more abstract approach to things). I think it’s helpful to apply the same model here.
There are lots of things that humans often wish their neurology naturally optimized for. One thing it does optimize for, though, is minimum energy expenditure. This is a good thing! Brains are costly, and they’d function less well if they always ran at full power. But this has side effects. Here, the relevant side effect is that if you practise a certain process for a while, and it achieves the desired object-level results, you can lose awareness of the bigger-picture approach you’re trying to employ.
So in my case, I was practising passing my classes with minimum effort, not wasting motion, following the meta-level approach of whole-assed half-assing. But while the meta-level approach of “hitting the quality target with minimum effort” is a good one in all domains (some of which will have much, much higher quality targets), the process of doing the bare minimum to create something without any obvious glaring flaws is not a process you want to be employing in your business. Or in trying to understand anything deeply.
Which I am now learning to do. And, in the process, unlearning the shoddy thinking I’ve been practising for the last 5 years.
If you want to think about this more, check out the post Guessing the Teacher’s Password on Less Wrong.