• Prioritized Contradiction Resolution

    I know I said my next post was going to be my position on the existential risk of AI, but I love metaposts too much. I’m going to explain why I’m writing that post, through the lens of how I view self-consistency. Said lens may be obvious to the kinds of people who read my posts, but oh well.

    At a party several months ago, Alice, Bob, and I were talking about effective altruism. (Fake names, for privacy.) Alice self-identified as part of the movement, and Bob didn’t. At one point, Bob said he approved of EA, and agreed with its arguments, but didn’t actually want to donate. From his viewpoint, diminishing marginal utility of money meant the logical conclusion was to send all disposable income to Africa. He wasn’t okay with that.

    Alice replied like so.

    I agree that’s the conclusion, but that doesn’t mean you have to follow it. The advice I’d give is to give up on achieving perfect self-consistency. For EA in particular, you can try to live up to that conclusion by donating some money, which is better than none, and then by donating to effective charities, which is a lot better than donating to ineffective ones. That being said, you should only be donating if it’s something you really want to do. It’s more sustainable that way.

    If you’re like me, you want to have consistent beliefs that lead to consistent actions.

    If you’re like me, you don’t have consistent beliefs, and you don’t execute consistent actions.

    Instead you have a trainwreck of a brain that manages to squeak by every day.

    Let me vent for a bit. Brains are pretty awesome. It’s pretty amazing what we can do, if you stop to think about it. Unfortunately, in many ways our brains are also complete garbage. Our intuitive understanding of probability is good at making quick decisions, but awful at solving simple probability questions. We double check information that challenges our ideas, but don’t double check information that confirms them. And if someone else does try to challenge our world view, we may double down on our beliefs instead of moving away from them, flying in the face of any reasonable model of intelligence.

    What’s worse is when you get into stuff like Dunning-Kruger, where you aren’t even aware your brain is doing the wrong thing. There’s no way I actually remembered what Alice and Bob said at that party several months ago. Sure, that conversation was probably about EA, but it’s pretty likely I’m misremembering it in a way that validates one of my world views.

    As a kid, I once realized I knew how to add small numbers, but had no memory of ever learning addition. Having an existential crisis at the age of seven is a fun experience, I’ll say that much.

    Recently, someone I know pointed out that schizophrenics often don’t know they’re schizophrenic, and there was a small chance I was schizophrenic right now. The most surprising thing was that I wasn’t surprised.

    Somehow, I learned to never trust my ability to know the truth. I then internalized that belief into a deep-seated intuition that affects how I view cognition and perception. So, when I read a Dinosaur Comics strip like this one…

    Dinosaur Comics

    …I don’t trip that many balls, because they’ve mostly been tripped out.

    That went off on a bit of a tangent. The point is, we’re lying to ourselves on a subconscious level. We all are. Discovering we’re lying to ourselves on a conscious level shouldn’t be much worse.


    Let’s all take a moment to quietly mourn self-consistency, who died prematurely and never had a chance.

    Then, let’s grab shovels and be gravediggers, because it turns out self-consistency is achievable after all.

    Perfect self-consistency is never going to happen. It’s like a hydra. Chop down one contradiction and two grow in its place. Using an imperfect reasoning engine to reason about the world is a tragic comedy in itself. At the same time, it’s worth trying to be self-consistent, and our brains are the best tools we’re going to get.

    The trick to achieving self-consistency is to find a nice, quiet place, and sit down to think. You let the thoughts come, sometimes consciously, sometimes subconsciously, and wait until you understand the implicit thoughts well enough to say them out loud. You refine them over time, until you gain a better understanding of yourself, failings and all.

    There are a few ways you can do this. Writing is one of them. When you write, you’re forced to try words, again and again, until you hit upon words with perfect weight. Rinse, lather, repeat, until you understand why you act the way you do.

    The only problem is that this process can take forever, especially on complicated topics. When I try to do this, my brain has an annoying habit of coming to a different conclusion than I expect it to.

    I was never any good at writing essays in high school. Sometimes, I think it’s because I was never good at bullshitting an analysis of the text I didn’t believe. I’d write to the end of the essay, realize I was arguing a completely different point from my original thesis, and would feel obligated to rewrite the start of my essay. Which then led to a rewrite of the middle, and the end, by which point I’d sometimes diverged yet again…

    Because I’m not made of time, and contradictions are never-ending, I can only resolve so many of them if I want to do other things, like watch YouTube videos and play video games. Thus, after a long circuitous story, we’ve ended at what I call prioritized contradiction resolution. Here’s a summary of my reasoning.

    • I have contradictory thoughts. That’s fine.
    • I have finite time and motivation. That kinda sucks, but okay.
    • It costs time and motivation to resolve contradictions, and those are my limiting factors.
    • Therefore, I should focus on fixing the contradictions most important to me.

    It makes sense for me to think deeply about the AI safety debate. I do research in machine learning, so people are going to expect me to have a well-thought-out opinion on the topic. (I lost track of how many times people asked for my opinion at EA Global after hearing I worked in ML, but I think it was at least ten.)

    In contrast, although it’s been part of my life for over five years, I don’t need to have well-thought-out reasons to like My Little Pony. I’d like to take the time to tease apart why I like the fandom so much, but the value I’d gain from doing so doesn’t seem as large. Thus, resolving that is further down on my internal todo pile.

    So here we are. Looking back, I could have just started here, and skipped all that nonsense with Alice and Bob and the rants about cognitive biases.

    But somehow, I find myself a lot more at peace, for taking the time to resolve any lingering contradictions I have about the way I approach resolving contradictions. When I promise you a metapost, you’re damn well getting a metapost.

    Enough navel-gazing. Next post is about AI. I swear. Otherwise, I’m donating $50 to the Against Malaria Foundation.

  • One Year Later

    Over the past few days, I’ve set up the current site and ported over content from the old one. You should see the old blog posts if you scroll down. Out with the old, in with the new…


    As for the title - although I still don’t know what I’m doing with my life, one of my goals is to be an interesting, insightful person. Unfortunately, I’m not one yet. But if I try, maybe I can get there.

    (Hello World, Again)

    Sorta Insightful turns one year old today. To whatever meager readership I have: thanks. It means more than you’d think.

    It’s been six months since I wrote my last retrospective post. Let’s see what’s changed.


    First off, I owe people an apology. Six months ago, I said the following.

    I plan to start flushing my queue of non-technical post ideas. On the short list are

    • A post about Gunnerkrigg Court
    • A post on effort icebergs and criticism
    • A post about obsessive fandoms
    • A post detailing the hilariously incompetent history of the Dominion Online implementation

    I haven’t written any of those posts, and I feel awful about it. I feel especially bad about not writing the Gunnerkrigg Court post. Writing that post has been on my shortlist ever since I started this blog.

    Meanwhile, I’ve forgotten what the effort icebergs post was going to cover. Last I remember, I was planning to resolve the tension between constructive criticism of bad work and respect for the effort that went into it. And there was some infographic I really liked and wanted to use? Wow, it has been a while.

    The obsessive fandom post is officially turning into a post about the My Little Pony fandom in particular, but it hasn’t passed the idea stage yet. As for the Dominion post, I at least have some words written down, but I haven’t touched them since March.

    All these posts that want to be free, and they’re chained to the ground, not because I can’t write them, but because I’m too lazy to write them.

    Yes, I know that there aren’t really people who hate me because I haven’t written these posts, but this is one of those things where I won’t feel better until I make it up to myself.


    Next, the neutral news. My blog still gets basically no traffic. I didn’t expect it to. One year isn’t that long to build a readership, and I didn’t go into this expecting much popularity.

    Yes, in theory I could be promoting my blog more, but there are two things stopping me. One, my blog has no coherent theme. It’s exactly what I want to write about and nothing more. If it was a purely technical blog, I could submit everything to Hacker News. If it was all about machine learning, I could be reaping r/MachineLearning karma. But it’s not all technical, and it’s not all machine learning. It’s fractured, just like me.

    These blog posts aren’t just words on a page. They’re my words, on my page. Marketing something so personal feels wrong, on a level that’s hard to explain. Yes, I’d like the feeling of approval that comes with popularity, but I don’t need the validation to keep writing. Just having the thoughts out there is good enough.


    Enough bittersweet stuff. What’s gone well?

    Well, for one, I started working full time, and I still have time for blog posts. In the past six months, I wrote 38 blog posts, if you include this one. Now, to be fair, 31 of those posts are from the blogging gauntlet I did in May, which had a lot of garbage, but there are definitely a few pearls worth checking out. People seem to like Appearing Skilled and Being Skilled are Very Different, although my personal favorite is Fearful Symmetry. (I’d really like to revisit Fearful Symmetry, when I have time to.)

    I think I’m getting better at articulating my thoughts well the first time. I don’t have any way to measure this quantitatively, but I feel like I’ve been stumbling over my own words less often.

    I continue to be pleasantly surprised that people actually bother to read things I write. You could be anywhere on the Internet, and you chose to be here? Man, you’re nuts, and I love you for it.

    The Future of Sorta Insightful

    The future of Sorta Insightful is pretty simple: I’ll keep writing and see how it goes. Next post is me explaining my position on the existential risk of AI. I’ve had a bunch of conflicting feelings about the topic, but going to EA Global forced me to work through the bigger ones. I’m pretty excited to fully resolve my contradictions on the subject.

    I wrote the start of a really, really stupid short story about dinosaurs. In the right hands, this story could be pretty neat, but unfortunately my hands aren’t the right hands. At the same time, I’m not going to get better at storytelling unless I try, so I’ll keep at it. I hope that story sees the light of day, because the premise is so stupid that it might just wrap back around to brilliant.

    The post about Gunnerkrigg Court is going to happen. I swear it will. Just, not soon. After messing up the post on Doctor Whooves Adventures, I want to make sure I get this explanation right. I love this comic sooooooo much, and it deserves a good advertisement.

    As always, I promise no update schedule beyond “soon”. Bear with me on this.

    Here’s to another year! It’s going to be pretty great.

  • How Pokemon Go Can Teach Us About Social Equality

    I’ve been playing Pokemon Go. If you haven’t played the game, here’s all you need to know for this post.

    • You play as a Pokemon trainer. Your goal is to catch Pokemon. Doing so gives you resources you can spend to power up your Pokemon. Catching Pokemon also increases your trainer level, which increases the power of Pokemon you find in the wild.
    • At level 5, players declare themselves as Team Mystic, Team Valor, or Team Instinct, which are blue, red, and yellow respectively.
    • Your goal is to claim gyms for your team and defend them from enemy teams. Gyms are real world landmarks you can claim for your team if you’re within range of them. Once claimed, players can either train gyms their team owns to make them harder to defeat, or fight enemy gyms to claim them for their own team. Both are easier if your Pokemon have higher CP (Combat Power).

    In my region, Mystic has the most members, followed by Valor, followed by Instinct. I chose Team Instinct, because come on, Zapdos is awesome. Unfortunately, because Instinct has so few members, it’s the butt of many jokes.

    Animated Team Instinct meme

    There are a few gyms within walking distance of where I live, but they’re perpetually owned by Team Mystic. My strongest Pokemon is an 800 CP Arcanine, and when the gym is guarded by a 1700 CP Vaporeon, there’s no point even trying.

    However, on exactly one occasion, the gym near my house was taken over by Team Instinct. Other Instinct players trained it to hell and back. The weakest Pokemon protecting it had over 2000 CP, and the strongest was a 2700 CP Snorlax. Eventually, some Mystic players took it down, but for just over a day I had hope that Instinct players could make it after all.

    Okay. What does that story have to do with social equality?


    To explain myself, I’m going to jump back to a more serious turf war: World War 2.

    The United States was interested in minimizing planes lost during bombing runs. To that end, they hired several academics to study aircraft flown in previous missions. Initially, the study concluded that the regions that suffered the most damage should be reinforced with armor.

    Then, the statistician Abraham Wald had a moment of insight. The aircraft studied were aircraft that survived their missions. That meant the damaged areas were actually the least important parts of the aircraft, because the pilot was able to land the plane safely. He recommended adding armor to the regions with no damage instead.

    This is known as survivorship bias. It’s not enough to analyze just the population we observe. We also need to consider what happened to the population we aren’t able to observe.

    Applying this to Pokemon Go is easy enough. Let’s assume that team choice doesn’t affect the strength of the player. This is a safe assumption, because the fastest ways to level up are catching Pokemon, visiting Pokestops, and evolving Pokemon, all of which can be done no matter what team you’re on. Given this, gyms owned by Team Instinct face the most adversity, because they can be attacked by both Team Mystic and Team Valor, the two largest teams. So, given that I got to observe a Team Instinct gym, it must be stronger than a similar Mystic or Valor gym. Otherwise, it wouldn’t have lasted long enough for me to observe.
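    To make the argument concrete, here’s a toy Monte Carlo sketch. Nothing in it comes from the actual game; the survival model and the attack rates are invented purely for illustration. The point it demonstrates is that conditioning on survival raises the expected strength, and raises it more when the environment is harsher.

    ```python
    import random

    random.seed(0)

    def simulate(attack_rate, n_gyms=100_000, n_days=7):
        """Toy model: each gym has a random strength in [0, 1]. Each day it is
        attacked with probability attack_rate, and a successful attack (a uniform
        draw exceeding the gym's strength) destroys it. Return the mean strength
        of the gyms still standing at the end."""
        survivors = []
        for _ in range(n_gyms):
            strength = random.random()
            alive = True
            for _day in range(n_days):
                if random.random() < attack_rate and random.random() > strength:
                    alive = False
                    break
            if alive:
                survivors.append(strength)
        return sum(survivors) / len(survivors)

    # Instinct gyms can be attacked by both larger teams, so they face a higher
    # attack rate in this toy model; Mystic gyms face a lower one. The baseline
    # mean strength of all gyms is 0.5 before any filtering happens.
    print(f"surviving Mystic gyms, mean strength:   {simulate(0.2):.2f}")
    print(f"surviving Instinct gyms, mean strength: {simulate(0.8):.2f}")
    ```

    Both surviving populations beat the unfiltered average of 0.5, and the team under heavier attack beats it by more, which is the survivorship-bias effect in miniature.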

    Conclusion: Instinct gyms are badass.

    Team Instinct banner

    Applying vaguely anthropic arguments to Pokemon Go is fun and all, but survivorship bias has more important applications. Implicit discrimination in the workplace, for example. Suppose you’re Hispanic, and want to work in the software industry. A company releases a study showing their Hispanic employees outperform their colleagues. This is actually a bad sign. If a company is implicitly biased against Hispanics in their hiring process, the average Hispanic programmer they hire will outperform their other hires, because they have to survive a tougher hiring process.

    This isn’t a new idea. Paul Graham wrote an essay on the exact same point, noting this was a litmus test people could check at their own companies. What makes this argument nice is that it applies no matter what the job market looks like. If a company doesn’t have many female engineers, it can claim it’s a pipeline problem and there aren’t enough women in the programming workforce. But if its female engineers consistently outperform its other engineers, the company is likely biased in its hiring process. To quote Paul Graham’s essay,

    This is not just a heuristic for detecting bias. It’s what bias means.
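    The litmus test is easy to check with a toy simulation. All the numbers below are hypothetical, invented just to show the mechanism: two groups drawn from the same talent pool, where one group has to clear a higher bar to get hired.

    ```python
    import random

    random.seed(0)

    def mean_hired_skill(n_candidates, bar):
        """Draw candidates from the same skill distribution and hire only those
        whose skill clears the given bar. Return the mean skill of the hires."""
        hires = [s for s in (random.gauss(100, 15) for _ in range(n_candidates))
                 if s > bar]
        return sum(hires) / len(hires)

    # Identical talent pools; the only difference is the (hypothetical) bar
    # each group must clear, i.e. the implicit bias in the hiring process.
    majority = mean_hired_skill(100_000, bar=110)
    minority = mean_hired_skill(100_000, bar=120)
    print(f"average hired skill, majority group: {majority:.1f}")
    print(f"average hired skill, minority group: {minority:.1f}")
    ```

    The group facing the higher bar outperforms on the job even though the underlying populations are identical, which is exactly why a group’s hires outperforming their colleagues is evidence of bias, not against it.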

    It’s incredibly important to understand these arguments, because they fly in the face of intuition. If a person says some of the smartest people they know are black…well, for one, even saying something like that is a red flag. But also, it could mean that they’re only willing to be friends with those people because they’re so smart.


    To be fair, there are a ton of confounding factors that muddle these claims. In Pokemon Go, although Mystic gyms don’t need to be strong to survive, they often become strong anyway, because fewer people attack them and more people can train them. In the software industry, there are strong socioeconomic feedback loops that help perpetuate the current demographic imbalance. If a company notices that rich employees perform better than poor ones, it doesn’t necessarily mean they’re prejudiced against rich people.

    Survivorship bias is not an end-all argument. It’s simply one more filter to protect yourself from drawing the wrong conclusion. Otherwise, you don’t armor airplanes properly, or you don’t recognize discrimination when you should.

    As for Pokemon Go? Well, honestly, there are more important problems in the world than the troubles of Team Instinct. I’m sure we’ll find ways to deal with it.

    Instinct sucks lol who cares