The Blogging Gauntlet: May 7 - Shaping Thought Processes
This is part of The Blogging Gauntlet of May 2016, where I try to write 500 words every day. See the May 1st post for full details.
About a week ago, a friend of mine made a Facebook post.
That moment when you accidentally use bodywash instead of shampoo, and think “this should have been caught at compile time”
Everybody laughs, cue snare drum, etc. But it got me thinking. This friend of mine is someone who’s done functional programming, and thinks a lot about programming languages. It’s fitting for him to think about catching mistakes at compile time. To quote the Haskell wiki, “A lot of people experience a curious phenomenon when they start using Haskell: Once your code compiles it usually works.”
In the second-to-last week of classes, a student in the graduate-level machine learning/stats course I’m taking said he had very high regret because he had procrastinated on making his poster.
Last weekend, I did a puzzle hunt with someone who does systems research in AMPLab. At one point, a few members left to do an on-campus puzzle. He said it made more sense for them to work on a new puzzle than to get caught up on the partial progress of the puzzle they had abandoned. Or to be more exact, he said they already had that puzzle loaded into their cache, so it wasn’t worth the context-switching time for the rest of us.
In one memorable math session, someone decided the word “three” was a variable, leading to insane sentences like “If three is bigger than eight, then this property should hold.” But why not? “Three” is as reasonable a variable name as any other.
(Okay, not really, but it’s still amusing.)
If it wasn’t already clear, I’m among these people who let their technical knowledge blend into their real-world thoughts. I literally just wrote a post about online milk tea regret minimization. I’ve also described my decision making as a priority queue. I get offered a task with high priority. I can either do it like a productive person, or procrastinate on it by placing it back into the queue with lower priority. Perhaps that’s my algorithms side bleeding out.
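The priority-queue metaphor is literal enough to write down in a few lines. The tasks and priority numbers here are made up for illustration:

```python
import heapq

# Procrastination as a literal priority queue. heapq is a min-heap,
# so lower numbers pop first (i.e. higher priority).
tasks = []
heapq.heappush(tasks, (1, "grade homework"))      # high priority
heapq.heappush(tasks, (3, "reply to that email"))

priority, task = heapq.heappop(tasks)  # the most urgent task comes out first
# Option A: actually do it. Option B: procrastinate, i.e. push it
# back into the queue with a worse priority.
heapq.heappush(tasks, (priority + 5, task))
```

Repeat option B enough times and the task sinks to the bottom of the heap, which matches my experience suspiciously well.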
To me, it seems like the more you think about a subject, the more it shapes the way you think in general. I’m sure that leads to all sorts of stereotypes. Physicists round constants to orders of magnitude. Theoretical computer scientists throw out constants entirely. I don’t know of stereotypes for biologists or chemists, but I’d guess they focus more on the process of things - how one thing changes into another. And then of course, math majors are both pedantic and willing to treat absurd scenarios with total seriousness, because math eventually turns into reasoning about abstract structures that have no real-world analogue.
I don’t think this is a bad thing. I also don’t think it’s a good thing. It’s simply a thing; a way that our brains work. I’m sure if you asked, every major could give you a way their major has shaped their thoughts to make them a better person. Math made me a better problem solver and a more detail-oriented person. Astrophysics helped me imagine scales of exponentially large numbers, and gave me more reason to care about Earth. Philosophy made me better at arguing my points. Public health made me more empathetic.
Which field of study shapes thoughts in the best way? There isn’t one. Humanity probably needs all of these modes of thinking, and we’ll borrow the hats of others if we need to think in a way that’s unfamiliar to us.
-
The Blogging Gauntlet: May 6 - State Space Size is a Garbage Metric
This is part of The Blogging Gauntlet of May 2016, where I try to write 500 words every day. See the May 1st post for full details.
When talking about games, and especially when talking about game AIs, it’s very common for people to compare the size of the state space to other big numbers, like the number of atoms in the universe.
But as simple as the rules are, Go is a game of profound complexity. There are 1,000,000,[…],000,000,000 possible positions—that’s more than the number of atoms in the universe, and more than a googol times larger than chess. This complexity is what makes Go hard for computers to play, and therefore an irresistible challenge to artificial intelligence (AI) researchers…
Google’s DeepMind artificial intelligence team likes to say that there are more possible Go boards than atoms in the known universe, but that vastly understates the computational problem. There are about \(10^{170}\) board positions in Go, and only \(10^{80}\) atoms in the universe. That means that if there were as many parallel universes as there are atoms in our universe (!), then the total number of atoms in all those universes combined would be close to the possibilities on a single Go board.
The rules are relatively simple: the goal is to gain the most territory by placing and capturing black and white stones on a 19×19 grid. But the average 150-move game contains more possible board configurations — \(10^{170}\) — than there are atoms in the Universe, so it can’t be solved by algorithms that search exhaustively for the best move.
Look, I get it. It’s nice to have analogies for exponentially large numbers because of scope insensitivity. My issue is the conflation of state space size with problem difficulty.
Chess has about \(10^{43}\) positions. Go on a 19×19 board has \(10^{170}\) states, as said above. So Go is a harder game to play, and look, AlphaGo took a lot longer to make than Deep Blue, it holds up! Checkers has about \(10^{18}\) to \(10^{20}\) states. Backgammon has about \(10^{20}\) states as well. Chinook beat the top Checkers player a few years before Deep Blue vs Kasparov, and TD-Gammon achieved performance close to top Backgammon players around the same time. Looking good for the “number of states = difficulty” hypothesis.
Alright, here’s a trolly counterexample. In the subtraction game, there are \(N\) rocks in a pile. On each player’s turn, they either remove \(1\) rock or \(2\) rocks. The first player who runs out of moves loses.
The game can also be viewed in this way, where each move is following an arrow to another state. Picture by Alan Chang
This game has a very simple optimal strategy. If it’s your turn and the number of rocks is a multiple of \(3\), you lose. If you remove \(1\) rock, the other player removes \(2\). If you remove \(2\) rocks, the other player removes \(1\). No matter what you do, the other player will always be able to force you to start your turn with the number of rocks as a multiple of \(3\). If the pile started at \(9\), it’ll be \(6\) on your next turn, then \(3\), then \(0\)…and now you have no moves and lose.
From green states, remove \(1\). From orange states, remove \(2\). From yellow states, it’s a lost game assuming optimal play.
This optimal strategy is one that’s very easy to encode by computer. Yet, we technically have infinitely many states. Every natural number is different from the rest (citation needed), so this game is infinitely difficult, but oh man AIs can solve it, it’s the end of the wooooooooorld.
So obviously it’s not the end of the world. (Yet.) In the subtraction game, we can group states into \(0\) mod \(3\), \(1\) mod \(3\), and \(2\) mod \(3\). (Or by their color in the diagram above, if mod math is scary.) There are really only \(3\) different states we care about - the actions we perform in each are the same.
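If you don’t want to trust the argument, a few lines of brute force confirm the mod \(3\) pattern. This is a sketch I’m adding for illustration, not something from the strategy write-up above:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_win(n):
    """True if the player to move can force a win with n rocks left.

    From n rocks you may remove 1 or 2 rocks. A player who runs out
    of moves (faces 0 rocks) loses.
    """
    if n == 0:
        return False  # no moves available: the player to move loses
    # You win if some move leaves the opponent in a losing state.
    return any(not is_win(n - k) for k in (1, 2) if n - k >= 0)

# The brute-force answer matches the mod-3 rule for every pile we check.
assert all(is_win(n) == (n % 3 != 0) for n in range(200))
```

The memoized solver visits every state, but the `assert` at the end is exactly the point of this post: all those states collapse into three equivalence classes.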
What makes a game difficult for an AI is not the raw number of states. It is how well different states of the game generalize to each other, and whether there are nice heuristics that work well on several game states at once. Think of it this way - a computer can explore a billion states in reasonable time. However, \(10^9\) states is a drop in the bucket compared to \(10^{43}\) states. Actually, that metaphor isn’t good enough, so here’s a raw calculation. A liter of water has about \(10^{26}\) water molecules. Seeing \(10^9\) out of \(10^{43}\) states is like having a container of 100,000,000 liters of water, and looking at a single molecule of it.
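That water analogy is short enough to verify directly (using the same round numbers as above):

```python
# Seeing 1e9 of 1e43 states is one molecule out of how much water?
states_total = 10**43
states_seen = 10**9
molecules_per_liter = 10**26  # round figure used above; ~3.3e25 is more exact

# Unseen-to-seen ratio, expressed in liters of water per molecule seen.
liters = states_total // states_seen // molecules_per_liter
# liters is 10**8, i.e. 100,000,000 liters per single molecule observed
```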
The only reason Chess is actually a game instead of a pointless exercise is that there is a deep generalization between different game states. That generalization is easy enough to make Chess worth learning to novices, and hard enough to make Chess worth learning to experts.
If Chess had \(10^{120}\) states instead, it wouldn’t really change the problem. The fraction of states you observe is so vanishingly small that it just doesn’t matter.
What made Go harder than Chess wasn’t the number of states. It was that there were no clean heuristics for evaluating board state. You can get surprisingly far in Chess by counting how much material each side has, but Go doesn’t have anything so clear cut. Changing the placement of a single stone can radically alter the value of the board, and it’s very hard to teach a computer why that specific stone matters on this specific board.
-
The Blogging Gauntlet: May 5 - Exploration-Exploitation Part 2: Milk Tea
This is part of The Blogging Gauntlet of May 2016, where I try to write 500 words every day. See the May 1st post for full details.
Yesterday I introduced multi-armed bandits. Today I’ll give a delicious real world example: deciding on the best place to get milk tea in Berkeley.
Before I get into the details, let’s set up some ground rules.
- Pearl milk tea is a tea drink made of tea, milk, and tapioca balls. The tapioca balls are also known as pearls.
- Many, many people call this drink “boba” or “bubble tea”. These people are wrong. Yes, I understand there are regional dialects that each have a different word for milk tea, but the correct one is PMT. “Boba” and “bubble” refer to only the tapioca inside the tea. Thus, both “boba” and “bubble tea” are incorrect, because they do not signify that the milk tea drink has milk in it.
- “Boba milk tea” or “bubble milk tea” are acceptable terms that I can tolerate.
Anyways, this post isn’t about milk tea linguistics, it’s about multi-armed bandits. In this setting, initially every arm is a milk tea restaurant. The arm’s payout is random, depending on what flavor of tea I get and how I request it be made. Over time, each arm further divides into more choices, because I need to choose what flavor of tea to get within a given milk tea cafe.
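To make the setup concrete, here’s a minimal epsilon-greedy sketch of the bandit problem. The cafe names come from this post, but the enjoyment scores and the Gaussian noise model are made-up stand-ins:

```python
import random

def epsilon_greedy(arms, pulls=1000, epsilon=0.1, seed=0):
    """Run an epsilon-greedy bandit over `arms`, a dict of arm -> true mean.

    With probability epsilon, explore a random arm; otherwise exploit
    the arm with the best estimated mean so far.
    """
    rng = random.Random(seed)
    counts = {a: 0 for a in arms}
    means = {a: 0.0 for a in arms}   # running estimate of each arm's payout
    for _ in range(pulls):
        if rng.random() < epsilon:
            arm = rng.choice(list(arms))     # explore: try a random cafe
        else:
            arm = max(means, key=means.get)  # exploit: best cafe so far
        reward = rng.gauss(arms[arm], 1.0)   # noisy enjoyment of one drink
        counts[arm] += 1
        # Incremental update of the running mean for this arm.
        means[arm] += (reward - means[arm]) / counts[arm]
    return means, counts

# Hypothetical average enjoyment per cafe (not real ratings).
cafes = {"Sweetheart's": 6.0, "Lotus House": 6.5, "ShareTea": 8.0}
means, counts = epsilon_greedy(cafes)
```

With enough pulls, the estimates converge and the algorithm spends most of its drinks at the best cafe - which is roughly the story the rest of this post tells, except I was the algorithm.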
Let’s pretend I’m a freshman again, and have no context about the milk tea scene. As a freshman, I know of three milk tea options: Quickly’s, Sweetheart’s, and Lotus House. I try them for a while, and conclude that Sweetheart’s has the best pearls, and Lotus House has the best tea. This continues to hold up as I try different flavors. Eventually, I converge to alternating between Sweetheart’s and Lotus House based on mood.
A few months later, someone mentions PurpleKow to me. This milk tea place is further from the dorms, but I check it out, and my god, it’s so much better. Here, I paid the price for not exploring enough. If I had been serious about milk tea (which I am now), I would have done a search of nearby milk tea places, and I would have found PurpleKow significantly sooner.
Now, fast forward to the next year. Near the south side of campus, two stores open.
- Sheng Kee Bakery, a chain of Asian bakeries that also sell milk tea.
- ShareTea.
Sheng Kee’s new, so it deserves a few tries. I quickly decide it’s okay, but not good enough. ShareTea’s also new, so I try it, and
oh
my
god.
ShareTea quickly becomes my go-to spot. At this point, I don’t bother going to PurpleKow or Sweetheart’s. My past observations give me high confidence in the quality of their milk tea, and it’s worse than ShareTea’s.
At this point, I have some regret. I could have gotten tea from PurpleKow significantly more than I did. However, it’s outweighed by my decision to explore ShareTea early. I missed PurpleKow for a few months, but I got to exploit ShareTea for years.
Finally, a few months after going to ShareTea several times a week, someone tells me I should really, really try Asha Tea House. I try it, and again I’m surprised. It’s very good, better than ShareTea, but it’s expensive enough to make it a trade-off between quality and quantity. I choose between the two based on which I favor that day.
In the present, it’s worth asking myself how I’ve done. Well, overall my milk tea regret is pretty low. There were a few months where I didn’t explore my options thoroughly enough, but I checked out ShareTea within days of its opening, and got to exploit it for years. There’s some regret from not visiting Asha earlier, but it’s close enough in quality that I don’t feel so bad about it.
The biggest takeaway I want to get across is that we usually don’t do enough exploration. People tend to regret what they didn’t do more than what they did, and although there may be some cognitive bias there, the potential reward from trying something amazing means it’s worth exploring more than you think you should. Spending a few months trying different things is cheap compared to years of doing the right thing.