Posts

  • The Blogging Gauntlet: May 8 - Shut Up And Write

    This is part of The Blogging Gauntlet of May 2016, where I try to write 500 words every day. See the May 1st post for full details.

When it’s 10:20 PM, I have to write a 500 word post, and I have no idea what I’m going to write about, you know I’m going to dredge the bottom of the self-reference barrel.

    So, I have 100 minutes to write a post. By itself, this wouldn’t be the trickiest task. Some of the posts on previous days were written in less time than that. However, on those days I knew I would be time-crunched, and deliberately thought about a topic I wanted to write about. That let me plan out an outline in advance. When you know what you want to say, it’s surprisingly easy to say it.

    Today, I didn’t think of a topic ahead of time, and here we are. Let’s see if my lightly edited stream of consciousness is remotely coherent.

    First off, I could always just give up. It’s $20 if I don’t write a post. Twenty dollars isn’t that much for me, which I’m sure says something about my socioeconomic status. But here I am, writing out words.

So, let’s assume I’m acting rationally, or at least acting in a way that could be interpreted as the actions of a rational person. Then, we can immediately deduce some interesting things about me. For instance, I value 100 minutes of my time at more than $20.

    That assumes I get nothing out of writing blog posts, or rising to meet one of my self-imposed challenges. I get a really big sense of accomplishment when I finish something, and that feeling is definitely worth more than $20 to me. In retrospect, the feeling of accomplishment is worth a LOT more than that - it might be my primary motivator. It was one of the biggest things that made deciding not to go to grad school so difficult. Should probably keep a closer eye on that part of myself.

Now, on the other hand…I have a final tomorrow. I have not studied for this final as much as I should have. Evidently, I value writing blog posts and going on road trips to LA more than studying for finals. Which I guess makes sense. Writing tons of garbage and doing random things in LA is so much more interesting than reading my scribbled notes from weeks long past. I can justify not studying by saying “I’m a second semester senior” or “I’m taking this class pass/no pass”, and from your perspective you might be wondering why I feel guilty at all. But I do. It’s probably tied to that feeling of accomplishment. The feeling I get when I fail to do something simply because I didn’t put in enough time is terrible. For me, there isn’t some line in the sand marking things I want to do and things I should do but don’t want to. It’s one big gradient of priorities and procrastination. Envisioning achievement motivates working towards that achievement, even if the path required to get there isn’t worth it in the end.

    you’ll never give up, even if there’s, uh… absolutely NO benefit to persevering whatsoever. if i can make that clear. no matter what, you’ll just keep going. not out of any desire for good or evil… but just because you think you can. and because you “can”… … you “have to”

    (Naming the source would be a spoiler.)

    Hey, will you look at that. I hit 500 words! Cool. Nice. See you all next time.

    (Also, I think I’ll be fine for the final. If I’m not, well, life is one big tapestry. In any piece of work that big, it’s expected that patches of it will be baby barf green.)

  • The Blogging Gauntlet: May 7 - Shaping Thought Processes

    This is part of The Blogging Gauntlet of May 2016, where I try to write 500 words every day. See the May 1st post for full details.

    About a week ago, a friend of mine made a Facebook post.

    That moment when you accidentally use bodywash instead of shampoo, and think “this should have been caught at compile time”

    Everybody laugh, cue snare drum, etc. But, it got me thinking. This friend of mine is someone who’s done functional programming, and thinks a lot about programming languages. It’s fitting for him to think about catching mistakes at compile time. To quote the Haskell wiki, “A lot of people experience a curious phenomenon when they start using Haskell: Once your code compiles it usually works.”

    In the 2nd-to-last week of classes, a student in the graduate level machine learning/stats course I’m taking said he had very high regret because he procrastinated making his poster.

Last weekend, I did a puzzle hunt with someone who does systems research in AMPLab. At one point, a few members left to do an on-campus puzzle. He said it made more sense for the rest of us to start a new puzzle instead of picking up the partial progress of the puzzle they abandoned. Or to be more exact, he said the people who left already had that puzzle loaded into their cache, so it wasn’t worth the context-switching time for the rest of us.

    In one memorable math session, someone decided the word “three” was a variable, leading to insane sentences like “If three is bigger than eight, then this property should hold.” But why not? “Three” is as reasonable a variable name as any other.

    (Okay, not really, but it’s still amusing.)

If it wasn’t already clear, I’m among these people who let their technical knowledge blend into their real world thoughts. I literally just wrote a post about online milk tea regret minimization. I’ve also described my decision making as a priority queue. I get offered a task with high priority. I can either do it like a productive person, or procrastinate, putting it back into the queue with lower priority. Perhaps that’s my algorithms side bleeding through.
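
If I pushed the analogy all the way into code, it might look something like this minimal Python sketch (the tasks and priority numbers are made up for illustration):

```python
import heapq

# Each entry is (priority, task). heapq pops the smallest number first,
# so a lower number means "do this sooner".
queue = [(1, "study for final"), (3, "write blog post")]
heapq.heapify(queue)

priority, task = heapq.heappop(queue)  # "study for final" comes up first
# Procrastinating: push the task back in with a worse (larger) priority.
heapq.heappush(queue, (priority + 2, task))
print(queue)  # "study for final" now has to compete all over again
```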

To me, it seems like the more you think about a subject, the more it shapes the way you think in general. I’m sure that leads to all sorts of stereotypes. Physicists round constants to orders of magnitude. Theoretical computer scientists throw out constants entirely. I don’t know of stereotypes for biologists or chemists, but I’d guess they focus more on process - how things change from one form into another. And then of course, math majors are both pedantic and willing to treat absurd scenarios with total seriousness, because math eventually turns into reasoning about abstract structures that have no real world analogue.

    I don’t think this is a bad thing. I also don’t think it’s a good thing. It’s simply a thing; a way that our brains work. I’m sure if you asked, every major could give you a way their major has shaped their thoughts to make them a better person. Math made me a better problem solver and a more detail-oriented person. Astrophysics helped me imagine scales of exponentially large numbers, and gave me more reason to care about Earth. Philosophy made me better at arguing my points. Public health made me more empathetic.

    Which field of study shapes thoughts in the best way? There isn’t one. Humanity probably needs all of these modes of thinking, and we’ll borrow the hats of others if we need to think in a way that’s unfamiliar to us.

  • The Blogging Gauntlet: May 6 - State Space Size is a Garbage Metric

    This is part of The Blogging Gauntlet of May 2016, where I try to write 500 words every day. See the May 1st post for full details.

    When talking about games, and especially when talking about game AIs, it’s very common for people to compare the size of the state space to other big numbers, like the number of atoms in the universe.

But as simple as the rules are, Go is a game of profound complexity. There are 1,000,000,[…],000,000,000 possible positions—that’s more than the number of atoms in the universe, and more than a googol times larger than chess. This complexity is what makes Go hard for computers to play, and therefore an irresistible challenge to artificial intelligence (AI) researchers…

    Source

    Google’s DeepMind artificial intelligence team likes to say that there are more possible Go boards than atoms in the known universe, but that vastly understates the computational problem. There are about \(10^{170}\) board positions in Go, and only \(10^{80}\) atoms in the universe. That means that if there were as many parallel universes as there are atoms in our universe (!), then the total number of atoms in all those universes combined would be close to the possibilities on a single Go board.

    Source

    The rules are relatively simple: the goal is to gain the most territory by placing and capturing black and white stones on a 19×19 grid. But the average 150-move game contains more possible board configurations — \(10^{170}\) — than there are atoms in the Universe, so it can’t be solved by algorithms that search exhaustively for the best move.

    Source

    Look, I get it. It’s nice to have analogies for exponentially large numbers because of scope insensitivity. My issue is the conflation of state space size with problem difficulty.

Chess has about \(10^{43}\) positions. Go on a 19 x 19 board has \(10^{170}\) states, as said above. So Go is a harder game to play, and look, AlphaGo took a lot longer to make than Deep Blue, so the hypothesis holds up! Checkers has about \(10^{18}\) to \(10^{20}\) states. Backgammon has about \(10^{20}\) states as well. Chinook beat the top Checkers player a few years before Deep Blue vs Kasparov, and TD-Gammon achieved performance close to top Backgammon players around the same time. Looking good for the “number of states = difficulty” hypothesis.

    Alright, here’s a trolly counterexample. In the subtraction game, there are \(N\) rocks in a pile. On each player’s turn, they either remove \(1\) rock or \(2\) rocks. The first player who runs out of moves loses.

    Subtraction game

The game can also be viewed this way, where each move follows an arrow to another state. Picture by Alan Chang

    This game has a very simple optimal strategy. If it’s your turn and the number of rocks is a multiple of \(3\), you lose. If you remove \(1\) rock, the other player removes \(2\). If you remove \(2\) rocks, the other player removes \(1\). No matter what you do, the other player will always be able to force you to start your turn with the number of rocks as a multiple of \(3\). If the pile started at \(9\), it’ll be \(6\) on your next turn, then \(3\), then \(0\)…and now you have no moves and lose.

    Subtraction game strategy

    From green states, remove \(1\). From orange states, remove \(2\). From yellow states, it’s a lost game assuming optimal play.

This optimal strategy is very easy to encode on a computer. Yet, we technically have infinitely many states. Every natural number is different from the rest (citation needed), so this game is infinitely difficult, but oh man AIs can solve it, it’s the end of the wooooooooorld.

So obviously it’s not the end of the world. (Yet.) In the subtraction game, we can group states into \(0\) mod \(3\), \(1\) mod \(3\), and \(2\) mod \(3\). (Or by their color in the diagram above, if mod math is scary.) There are really only \(3\) different states we care about - within each group, the action we perform is the same.
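
To make that concrete, here’s a minimal Python sketch of the three-state strategy (the function name is mine, just for illustration):

```python
def optimal_move(rocks):
    """Return how many rocks to take (1 or 2), or None if the position is lost.

    Green state (1 mod 3): take 1. Orange state (2 mod 3): take 2.
    Yellow state (0 mod 3): every move leaves the opponent a winning reply.
    """
    remainder = rocks % 3
    return remainder if remainder != 0 else None

# The strategy never looks at the pile size itself, only at its remainder:
print([optimal_move(n) for n in range(9)])
# [None, 1, 2, None, 1, 2, None, 1, 2]
```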

    What makes a game difficult for an AI is not the raw number of states. It is how well different states of the game generalize to each other, and whether there are nice heuristics that work well on several game states at once. Think of it this way - a computer can explore a billion states in reasonable time. However, \(10^9\) states is a drop in the bucket compared to \(10^{43}\) states. Actually, that metaphor isn’t good enough, so here’s a raw calculation. A liter of water has about \(10^{26}\) water molecules. Seeing \(10^9\) out of \(10^{43}\) states is like having a container of 100,000,000 liters of water, and looking at a single molecule of it.
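
For the skeptical, here’s that back-of-the-envelope calculation spelled out, using the round figure of \(10^{26}\) molecules per liter from above:

```python
explored = 10**9             # states a computer can visit in reasonable time
total = 10**43               # rough count of Chess positions
molecules_per_liter = 10**26

# If looking at one molecule stands in for seeing the explored fraction,
# the container needs total/explored molecules of water.
molecules_needed = total // explored            # 10^34 molecules
print(molecules_needed // molecules_per_liter)  # 100000000 liters
```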

    The only reason Chess is actually a game instead of a pointless exercise is that there is a deep generalization between different game states. That generalization is easy enough to make Chess worth learning to novices, and hard enough to make Chess worth learning to experts.

If Chess had \(10^{120}\) states instead, it wouldn’t really change the problem. The fraction of states you can ever observe is so combinatorially small that it just doesn’t matter.

What made Go harder than Chess wasn’t the number of states. It was that there were no clean heuristics for evaluating board state. You can get surprisingly far in Chess by counting how much material each side has, but Go doesn’t have anything so clear cut. Changing the placement of a single stone can radically alter the value of the board, and it’s very hard to teach a computer why that specific stone matters on this specific board.
