Posts

  • An Overdue Post on AlphaStar, Part 1

    In late January, DeepMind broadcasted a demonstration of their StarCraft II agent AlphaStar. In Protoss v Protoss mirrors on a map used in pro play (Catalyst LE), it successfully beat two pro players from TeamLiquid, TLO (a Zerg player) and MaNa (a Protoss player).

    This made waves in both the StarCraft and machine learning communities. I’m mostly an ML person, but I played a lot of casual Brood War growing up and used to follow the Brood War and SC2 pro scene.

    As such, this is a two-part post. The first is a high-level overview of my reactions to the AlphaStar match and other people’s reactions to the match. The second part, linked here, is a more detailed discussion of what AlphaStar means for machine learning.

    In other words, if you’re interested in deep dives into AlphaStar’s StarCraft strategy, you may want to read something else. I recommend this analysis video by Artosis and this VOD of MaNa’s livestream about AlphaStar.

    The DeepMind blog post for AlphaStar is pretty extensive, and I’ll be assuming you’ve read that already, since I’ll be referring to sections of it throughout the post.

    The Initial Impact

    It was never a secret that DeepMind was working on StarCraft II. One of the first things mentioned after the AlphaGo vs Lee Sedol match was that DeepMind was planning to look at StarCraft. They’ve given talks at BlizzCon, helped develop the SC2 learning environment, and published a few papers about training agents in StarCraft II minigames. It was always a matter of time.

For this reason, it hasn’t made as big an impact on me as OpenAI’s 1v1 DotA 2 bot. The key difference wasn’t how impressive the results were; it was how surprising it was to hear about them. No one knew OpenAI was looking at DotA 2 until they announced they had beaten a top player in 1v1 (with conditions). Even for AlphaGo, DeepMind published a paper on Go evaluation over a year before the AlphaGo Nature paper (Maddison et al, ICLR 2015). It was on the horizon if you saw the right signs (see my post on AlphaGo if curious).

    StarCraft II has had a steady stream of progress reports, and that’s lessened the shock of the initial impact. When you know a team has been working on StarCraft for several years, and Demis Hassabis tweets that the SC2 demonstration will be worth watching…well, it’s hard not to expect something to happen.

    In his post-match livestream, MaNa relayed a story about his DeepMind visit. In retrospect, given how many hints there were, it’s funny to hear how far they went to conceal how strong AlphaStar was in the days up to the event.

    Me [MaNa] and TLO are going to be representing TeamLiquid, right? They wanted to make sure there wasn’t any kind of leak about the event, or what kind of show they were putting on. Around the office, we had to cover ourselves with DeepMind hoodies, because me and TLO are representing TeamLiquid, with the TeamLiquid hoodie and TeamLiquid T-shirt. We walk in day one and the project managers are like, “NOOOO, don’t do that, don’t spoil it, people will see! Here are some DeepMind hoodies, do you have a normal T-shirt?”, and me and TLO are walking in with TeamLiquid gear. We didn’t know they wanted to keep it that spoiler-free.

    (Starts at 1:13:19)

    To be fair, the question was never about whether DeepMind had positive results. It was about how strong their results were. On that front, they successfully hid their progress, and I was surprised at how strong the agent was.

    How Did AlphaStar Win?

    Here is an incredibly oversimplified explanation of StarCraft II.

    • Each player starts with some workers and a home base. Workers can collect resources, and the home base can spend resources to build more workers.
    • Workers can spend resources to build other buildings that produce stronger units, upgrade your existing units, or provide static defenses.
    • The goal is to destroy all your opponent’s buildings.

Within this is a large pool of potential strategy. For example, one thing workers can do is build new bases. This is called expanding, and it gives you more economy in the long run, but the earlier you expand, the more open you are to aggression.

    AlphaStar’s style, so to speak, seems to trend in these directions.

    • Never stop building workers, even when it delays building your first expansion.
    • Build lots of Stalkers and micro them to flank and harass the enemy army until it’s weak enough to lose to an all-in engagement. Stalkers are one of the first units you can build, and can hit both ground and air units from range. They also have a Blink ability that lets them quickly jump in and out of battle.
    • Support those Stalkers with a few other units.

    From the minimal research I’ve done, none of these strategies are entirely new, but AlphaStar pushed the limits of these strategies to new places. Players have massed workers in the past, but they’ll often stop before hitting peak mining capacity, due to marginal returns on workers. Building workers all the way to the mining cap delays your first expansion, but it also provides redundancy against worker harass, so it’s not an unreasonable strategy.

    Similarly, Stalkers have always been a core Protoss unit, but they eventually get countered by Immortals. AlphaStar seems to play around this counter by using exceptional Stalker micro to force early wins through a timing push.

    It’s a bit early to tell whether humans should be copying these strategies. The heavy Stalker build may only be viable with superhuman micro (more on this later). Still, it’s exciting that it’s debatable in the first place.

    Below is a diagram from the blog post, visualizing the number of each unit the learned agents create as a function of training time. We see that Stalkers and Zealots dominate the curve. This isn’t surprising, since Stalkers and Zealots are the first attacking units you can build, and even if you’re planning to use other units, you still need some Stalkers or Zealots for defense.

    Unit histograms

    I believe this is the first StarCraft II agent that learns unit compositions. The previous leading agent was one developed by Tencent (Sun et al, 2018), which followed human-designed unit compositions.

    The StarCraft AI Effect

    One of the running themes in machine learning is that whenever somebody gets an AI to do something new, others immediately find a reason to say it’s not a big deal. This is done either by claiming that the task solved doesn’t require intelligence, or by homing in on some inhuman aspect of how the AI works. For example, the first chess AIs won thanks to large game tree searches and lots of human-provided knowledge. So you can discount chess AIs by claiming that large tree searches don’t count as intelligence.

    The same thing has happened with AlphaStar. Thanks to the wonders of livestreaming and Reddit, I was able to see this live, and boy was that a sight to behold. It reminded me of the routine “Everything is Amazing, and Nobody’s Happy”. (I understand that Louis C.K. has a lot of baggage these days, but I haven’t found another clip that expresses the right sentiment, so I’m using it anyways.)

I do think some of the criticisms are fair. They revolve around two points: the global camera, and AlphaStar’s APM.

    I’m deferring details of AlphaStar’s architecture to part 2, but the short version is that AlphaStar is allowed to observe everything within vision of units it controls. By contrast, humans can only observe the minimap and the units on their screen, and must move the camera around to see other things.

    There’s one match where MaNa tried building Dark Templars, and the instant they walked into AlphaStar’s range, it immediately started building Observers to counter them. A human wouldn’t be able to react to Dark Templars that quickly. This is further complicated by AlphaStar receiving raw game state instead of the visual render. Getting raw game state likely makes it easier to precisely focus-fire units without overkill, and also heavily nerfs cloak in general. The way cloaking works in StarCraft is that cloaked units are untargetable, but you can spot faint shimmers wherever there’s a cloaked unit. With proper vigilance, you can spot cloaked units, but it’s easy to miss them with everything else you need to focus on. AlphaStar doesn’t have to spot the on-screen shimmer of cloak, since the raw game state simply says “Dark Templar, cloaked, at position (x,y).”

    The raw game state seems like an almost unfixable problem (unless you want to go down the computer vision rabbit hole), but it’s not that bad compared to the global camera. For what it’s worth, DeepMind trained a new agent without the global camera for the final showmatch, and I assume the global camera will not be used in any future pro matches.

The more significant controversy is around AlphaStar’s APM. On average, AlphaStar acts at 280 actions per minute, lower than typical pro play, but this isn’t the full picture. According to the Reddit AMA, the limitation is at most 600 APM every 5 seconds, 400 APM every 15 seconds, and 300 APM every 60 seconds. This was done to model both average pro APM and burst APM, since humans can often reach high peak APM in micro-intensive situations. During the match itself, viewers spotted that AlphaStar’s burst APM sometimes reached 900 or even 1500 APM, far above what we’ve seen from any human.
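As a rough sketch (DeepMind’s actual enforcement mechanism isn’t public, so the interface here is my guess), caps like these can be checked with a sliding window over recent action timestamps:

```python
from collections import deque

class APMLimiter:
    """Hypothetical sliding-window APM caps, e.g. 600 APM over 5s,
    400 APM over 15s, 300 APM over 60s (the numbers from the Reddit AMA)."""

    def __init__(self, limits=((5, 600), (15, 400), (60, 300))):
        # Convert each (window seconds, APM cap) into a max action count per window.
        self.limits = [(w, apm * w / 60.0) for w, apm in limits]
        self.max_window = max(w for w, _ in self.limits)
        self.history = deque()  # timestamps of accepted actions

    def allow(self, t):
        """Return True if issuing an action at time t (in seconds) stays under every cap."""
        while self.history and t - self.history[0] > self.max_window:
            self.history.popleft()
        for window, max_actions in self.limits:
            if sum(1 for ts in self.history if t - ts <= window) + 1 > max_actions:
                return False
        self.history.append(t)
        return True
```

Under this reading, 600 APM over 5 seconds means at most 50 actions in any 5-second window, which still leaves plenty of room for short superhuman bursts.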

These stats are backed up by the APM chart: AlphaStar’s average APM is lower than MaNa’s, but its distribution has a longer tail.

    APM Chart

    From DeepMind blog post

Note that TLO’s APM numbers are inflated because the key bindings he uses lead to lots of phantom actions that don’t do anything. MaNa’s numbers are more reflective of pro human APM.

    I mentioned earlier that AlphaStar really likes Stalkers. At times, it felt like AlphaStar was building Stalkers in pure defiance of common sense, and it worked anyways because it had such effective blink micro. This was most on display in game 4, where AlphaStar used Stalkers to whittle down MaNa’s Immortals, eventually destroying all of them in a game-ending victory. (Starts at 1:37:46.)

    I saw a bunch of people complaining about the superhuman micro of AlphaStar, and how it wasn’t fair. And yes, it isn’t. But it’s worth noting that before AlphaStar, it was still an open question whether bots could beat pro players at all, with no restrictions on APM. What, is the defeat of a pro player in any capacity at all not cool enough? Did Stalker blink micro stop being fun to watch? Are you not entertained? Why is this such a big deal?

    What’s Up With APM?

    After thinking about the question, I have a few theories for why people care about APM so much.

    First, StarCraft is notorious for its high APM at the professional level. This started back in Brood War, where people shared absurd demonstrations of how fast Korean pro players were with their execution.

It’s accepted wisdom that if you’re a StarCraft pro, you have to have high APM. This scares off many outsiders, who assume they need high APM to have any fun playing StarCraft at all. Without the APM to make your units do what you want them to do, you won’t have time to think about any of the strategy that makes StarCraft interesting.

    This is wrong, and the best argument against it is the one Day[9] gave on the eve of the release of StarCraft: Brood War Remastered (starts at 4:30).

    There is this illusion that in Brood War, you need to be excellent at your mechanics before you get to be able to do the strategy. There is this idea that if you practice for three months, you’ll have your mechanics down and then get to play the strategy portion. This is totally false. […] If you watch any pro play, stuff is going wrong all the time. They’re losing track of drop ships and missing macro back at home and they have a geyser with 1 dude in it and they forget to expand. Stuff’s going wrong all the time, because it’s hard to be a commander.

    This execution difficulty is an important human element of gameplay. You can only go so fast, and can’t do everything at once, so you have to choose where to focus your efforts.

    But a computer can do everything at once. I assume a lot of pros would find it unsatisfying if supreme micro was the only way computers could compete with pros at StarCraft.

    Second, micro is the flashiest and most visible StarCraft skill. Any StarCraft highlight reel will have a moment where one player’s ridiculous micro lets them barely win a fight they should have lost. For many people, micro is what makes StarCraft a good competitive game, because it’s a way for the better player to leverage their skill to win. And from a spectator perspective, these micro fights are the most exciting parts of the game.

    The fact that micro is so obvious matters for the third and final theory: DeepMind started by saying their agent acted within human parameters for APM, and then broke the implicit contract.

    Everything DeepMind said was true. AlphaStar’s average APM is under pro average APM. They did consult with pros to decide what APM limits to use. When this is all mentioned to the viewer, it comes with a bunch of implications. Among them is the assumption that the fight will be fair, and that AlphaStar will not do things that humans can’t do. AlphaStar will play in ways that look like a very good pro.

    Then, AlphaStar does something superhuman with its micro. Now, the fact that this is within APM limits that pros thought were reasonable doesn’t matter. What matters is that the implied contract was broken, and that’s where people got mad. And because micro is so obvious to the viewer, it’s very easy to see why people were mad. I claim that if AlphaStar had used thousands of APM at all times, people wouldn’t have been upset, because DeepMind never would have claimed AlphaStar’s APM was within human limits, and everyone would have accepted AlphaStar’s behavior as the way things were.

    We saw a similar thing play out in the OpenAI Five showcase. The DotA team said that OpenAI Five had 250ms reaction times, within human limits. One of the humans picked Axe, aiming for Blink-Call engages. OpenAI Five would insta-Hex Axe every time they blinked into range, completely negating that strategy. We would never expect humans to do this consistently, and questions about reaction time were among the first questions asked in the Q&A section.

    I feel people are missing the wider picture: we can now train ML models that can play StarCraft II at Grandmaster level. It is entirely natural to ask for more restrictions, now that we’ve seen what AlphaStar can do, but I’d ask people not to look down on what AlphaStar has already done. StarCraft II is a hard enough problem that any success should be celebrated, even if the end goal is to build an agent more human-like in its behavior.

APM does matter. Assuming all other skills are equal, the player with higher APM is going to win, because they can execute things with more speed and precision. But APM is nothing without a strategy behind it. This should be obvious if you look at existing StarCraft bots, which use thousands of APM and yet are nowhere near pro level. Turns out learning StarCraft strategy is hard!

If anything, I find it very impressive that AlphaStar is actually making good decisions with the APM it has. “Micro” involves a lot of rapid, small-scale decisions about whether to engage or disengage, based on context about what units are around, who has the better position and composition, and guesses on where the rest of your opponent’s army is. It’s hard.

    For this reason, I didn’t find AlphaStar’s micro that upsetting. The understanding displayed of when to advance and when to retreat was impressive enough to me, and watching AlphaStar micro three groups of Stalkers to simultaneously do hit-and-runs on MaNa’s army was incredibly entertaining.

    At the same time, I could see it getting old. When fighting micro of that caliber, it’s hard to see how MaNa has a chance.

    Still, it seems like an easy fix: tighten some of the APM bounds, maybe include limitations at smaller granularity (say 1 second) to limit burst APM, and see what happens. If Stalker micro really is a crutch that prevents it from learning stronger strategies, tighter limits should force AlphaStar to learn something new. (And if AlphaStar doesn’t have to do this, then that would be good to know too.)

    What’s Next?

    DeepMind is free to do what they want with AlphaStar. I suspect they’ll try to address the concerns people have brought up, and won’t stop until they’ve removed any doubt over ML’s ability to beat pro StarCraft II players with reasonable conditions.

    There are times where people in game communities worry that big companies are building game AIs purely as a PR stunt, and that they don’t appreciate the beauty in competitive play. I’ve found this is almost always false, and the same is true here.

    Let me put it this way: one of the faces of the project is Oriol Vinyals. Based on a 35 Under 35 segment in the MIT Technology Review, Oriol used to be the best Brood War player in Spain. Then, he worked on a StarCraft AI at UC Berkeley. Eventually, he joined DeepMind and started working on AlphaStar.

    So yeah, I don’t think the AlphaStar team is looking at StarCraft as just another game to conquer. I think they genuinely love the game and won’t stop until AlphaStar is both better than everyone and able to teach us something new about StarCraft II.

    Continue to Part 2

    Comments
  • MIT Mystery Hunt 2019

    Here is a lesson that should be obvious: if you are trying to hit a paper deadline, going to Mystery Hunt the weekend before that deadline is a bad idea. In my defense, I did not think we were submitting to ICML, but some last-minute results convinced us it was worth trying.

    Combined with the huge airport troubles getting out of Boston, I ended up landing in California at 11 AM the day before the deadline, with the horrible combination of a ruined sleep schedule and a lack of sleep in the first place. But, now I’ve recovered, and finally have time to talk about Hunt.

    I hunted with teammate this year. It’s an offshoot of ✈✈✈ Galactic Trendsetters ✈✈✈, leaning towards the CMU and Berkeley contingents of the team. Like Galactic, the team is pretty young, mostly made of people in their 20s. When deciding between Galactic and teammate, I chose teammate because I expected it to be smaller and more serious. We ended up similar in size to Galactic. No idea where all the people came from.

    Overall feelings on Hunt can be summed up as, “Those puzzles were great, but I wish we’d finished.” Historically, if multiple teams finish Mystery Hunt, Galactic is one of those teams, and since teammate was of similar size and quality, I expected teammate to finish as well. However, since Hunt was harder this year, only one team got to the final runaround. I was a bit disappointed, but oh well, that’s just how it goes.

    I did get the feeling that a lot of puzzles had more grunt work than last time Setec ran a Hunt, but I haven’t checked this. This is likely colored by hearing stories about First You Visit Burkina Faso, and spending an evening staring at Google Maps for Caressing and carefully cutting out dolls for American Icons. (I heard that when we finally solved First You Visit Burkina Faso, the person from AD HOC asked “Did you like your trip to Burkina Faso?”, and we replied “Fuck no!”)

What I think actually happened was that the puzzles were less backsolvable and the width of puzzle unlocks was smaller. Each puzzle unlocked a puzzle or a town, and each town started with 2 puzzles, so the width only increased when a new town was discovered. I liked this, but it meant we couldn’t skip puzzles that looked time-consuming; they had to be done, especially if we didn’t know how the meta structure worked.

    For what it’s worth, I think it’s good to have some puzzles with lots of IDing and data entry, because these form footholds that let everybody contribute to a puzzle. It just becomes overwhelming if there’s too much of it, so you have to be careful.

    Before Hunt

    Starting from here, there are more substantial puzzle spoilers.

    A few weeks before Hunt, someone had found Alex Rosenthal’s TED talk about Mystery Hunt.

We knew Alex Rosenthal was on Setec. We knew Setec had embedded puzzle data in a TED talk before. So when we got INNOVATED out of the TED talk, it quickly became a teammate team meme that we should submit INNOVATED right as Hunt opened, and that since “NOV” was a substring of INNOVATED, we should be on the lookout for a month-based meta where answers had other month abbreviations. Imagine our surprise when we learned the Hunt was holiday themed - month meta confirmed!

    The day before Hunt, I played several rounds of Two Rooms and a Boom with people from Galactic. It’s not puzzle related (at least, not yet), but the game’s cool enough that I’ll briefly explain. It’s a social deduction game. One person is the President, another person is the Bomber, and (almost) everyone’s win condition is getting the President and Bomber in the same or different rooms by the end of the game.

    Now, here’s the gimmick: by same room, I mean the literal same room. People are randomly divided across two rooms, and periodically, each room decides who to swap with the other room. People initially have secret roles, but there are ways to share your roles with other players, leading to a game of figuring out who to trust, who you can’t trust, and concealing who you do trust from people you don’t, as well as deciding how you want to ferry information across the two rooms. It’s neat.

    Right before kickoff, I learned about the Mystery Hunt betting pool one of my friends was running, thought it was fun, and chipped in, betting on Palindrome. While waiting for puzzles to officially release, we printed our official team Bingo board.

    Bingo start

    Puzzles

    Every Hunt, I tend to look mostly at metapuzzles, switching to regular puzzles when I get stuck on the metas. This Hunt was no different. It’s not the worst strategy, but I think I look at metas a bit too much, and make bad calls on whether I should solve regular puzzles instead.

I started Hunt by working on GIF of the Magi and Tough Enough, which were both solid puzzles. After both of those were done, we had enough answers to start looking at Christmas-Halloween. We got the decimal-octal joke pretty quickly, and the puzzle was easy to fill in with incomplete info, then gave massive backsolving fodder. Based on the answers shared during wrap-up, several other teams had a similar experience.

    Backsolving philosophy is different across teams, and teammate borrows a culture from Galactic - backsolve as much as you want, as long as you wait for your answer to be confirmed wrong before trying it for the next puzzle, and try to have only 1 pending answer per puzzle. This makes our solve accuracy relatively terrible since our frequent backsolve attempts drag down the average. For example, we guessed several random words for Moral Ambiguity, because we knew it thematically and mechanically had to be the Holi Day prank answer.

We got feedback to backsolve less aggressively and toned down our backsolve strategy by a lot. In fact, we completely forgot to backsolve Making a Difference. This was especially embarrassing because we knew the clue phrase started with “KING STORY ADAPTED AS A FILM”, and yet we completely forgot that “RITA HAYWORTH AND THE SHAWSHANK REDEMPTION” was still unclaimed.

    I did not work on Haunted but I know people had a good laugh when the cluephrase literally told them “NOT INNOVATED, SORT FIRST”. teammate uses Discord to coordinate things. Accordingly, our #innovated-shitposting channel was renamed to #ambiguous-shitposting.

I did not work on Common Flavors, and in fact, we backsolved that puzzle because we didn’t figure out they were Celestial Seasonings. At some point, the people working on Common Flavors brewed one of the teas, tried it, and thought it tasted terrible. It couldn’t possibly be a real tea blend!

    Oops.

    Eventually they cut open the tea bags and tried to identify ingredients by phonelight.

    Common Flavors

    We See Thee Rise was a fun puzzle. The realization of “oh my God, we’re making the Canadian flag!” was great. Even if our maple leaf doesn’t look that glorious in Google Sheets…

    We See Thee Rise

    I did not work on The Turducken Konundrum, but based on solve stats, we got the fastest solve. I heard that we solved it with zero backtracking, which was very impressive.

    For Your Wish is My Command, our first question was asking HQ if we needed the game ROMs of the games pictured, which would have been illegal. We were told to “not do illegal things.” I did…well, basically nothing, because by the time I finished downloading an NES emulator, everyone else had IDed the Game Genie codes, and someone else had loaded the ROM on an NES emulator they had installed before Hunt. That emulator had a view for what address was modified to what value, and we solved it quickly from there.

I want to call out the printer trickery done for 7 Little Dropquotes. The original puzzle uses color to mark what rows letters came from, but if you print it, the colors are replaced with Roman numerals, letting you print the page with a black-and-white printer. Our solve was pretty smooth; we printed out all the dropquotes and solved them in parallel. For future reference, Nutrimatic trivializes dropquotes, because the longer you make the word, the more likely it is that only one word fits the regex of valid letters. This means you can use Nutrimatic on all the long clues humans find hard, and use humans on all the short clues Nutrimatic finds hard. When solving this puzzle, I learned I am really bad at solving dropquotes, but really good at typing regexes into Nutrimatic and telling people “that long word is INTELLECTUAL”.
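To make that concrete, here’s a hypothetical helper for turning one long dropquote word (the letters still available in each column) into the kind of regex-style character-class query Nutrimatic accepts; for 10+ letter words, usually only one real word matches.

```python
def dropquote_query(columns):
    """Build a regex-style query for one word of a dropquote.

    `columns` has one entry per letter slot: either the known letter, or the
    letters still unplaced in that column. Paste the result into a
    pattern-matching word search such as Nutrimatic.
    """
    parts = []
    for col in columns:
        letters = sorted(set("".join(col).lower()))
        parts.append(letters[0] if len(letters) == 1 else "[" + "".join(letters) + "]")
    return "".join(parts)

# Made-up example: dropquote_query(["i", "nt", "te"]) -> "i[nt][et]"
```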

    Be Mine was an absurd puzzle. I didn’t have to do any of the element IDing, which was nice. The break-in was realizing that we should fill in “night” as “nite” and find minerals, at which point it became a silly game of “Is plumbopalladinite a mineral? It is!” and “Wait, cupromakopavonite is a thing?” I had a lot of fun finding mineral names and less fun extracting the puzzle answer. I was trying to find the chemical compounds word-search style. The person I was solving with was trying to find overlaps between a mineral name and the elements in the grid. I didn’t like this theory because some minerals didn’t extract any overlaps at all, but the word search wasn’t going great. It became clear that I was wrong and they were right.

    If you like puzzlehunt encodings, The Bill is the puzzle for you. It didn’t feel very thematic, but it’s a very dense pile of extractions if you’re into that.

Okay, there’s a story behind State Machine. At some point, we realized that it was cluing the connectivity graph of the continental United States. I worked on implementing the state machine in code while other people worked on IDing the states. Once that was IDed, I ran the state machine and reported the results. Three people then spent several minutes cleaning the input, each doing so in a separate copy of the spreadsheet, because they all thought the other people were doing it wrong or too slowly. This led to the answer KATE BAR THE DOOR, which was…wrong. After lots of confusion, we figured out that the clean data we converged on had assigned 0 to New Hampshire instead of 10. They had taken the final digit for every state and filtered out all the zeros, forgetting that indices could be bigger than 9. This was hilarious at 2 AM and I broke down laughing, but now it just seems stupid.

For Middle School of Mines, I didn’t work on the puzzle, but I made the drive-by observation that we were drawing a giant 0 in the mines discovered so far, in a rather literal case of missing the forest for the threes.

I had a lot of fun with Deeply Confused. Despite literally doing deep learning in my day job, I was embarrassed to learn that I didn’t have an on-hand way to call Inception-v3 from my laptop. We ended up using a web API anyways, because Keras was giving us the wrong results. Looking at the solution, we forgot to normalize the image array, which explains why we were getting the wrong adversarial class.
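For reference, here’s roughly what the correct pipeline looks like with today’s `tensorflow.keras` (a sketch, not the exact code we used; the filename is a placeholder). The `preprocess_input` call is the normalization step we skipped, which scales pixels to [-1, 1]:

```python
import numpy as np
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = InceptionV3(weights="imagenet")

# Inception-v3 expects a 299 x 299 RGB input.
img = image.load_img("puzzle_image.png", target_size=(299, 299))
x = image.img_to_array(img)[np.newaxis, ...]

# The step we forgot: scale pixel values into the range the network was trained on.
x = preprocess_input(x)

preds = model.predict(x)
print(decode_predictions(preds, top=5)[0])
```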

    Chris Chros was a fun puzzle. I never realized so many people with the name “Chris” were in Infinity War.

    He’s Out!! was a puzzle that went from “huh” to “wow, neat!” when we figured out that a punchout in baseball means a strikeout. I’ve never played Punch-Out!!, but I recognized it from speedruns. A good intro is the blindfolded run by Sinister1 at AGDQ 2014.

    Tree Ring Circus was neat. I felt very clever for extracting the ring sizes by looking up their size in the SVG source. I then felt stupid when I realized this was intentional, since it’s very important to have exact ring sizes for this puzzle.

Cubic was cute. It’s fun to have a puzzle where you go from “ah, Cubic means cubic polynomials” to “ohhhh, Cubic means cubic graphs”. I didn’t really do anything for this but it was fun to hear it get solved.

Somewhere around Sunday 2 AM, we got caught by the time unlock and started unlocking a new puzzle every 15 minutes. I’m not sure what exactly happened, but the entire team somehow went into beast mode and kept pace with the time unlock for several hours. We ended up paying for it later that morning, when everyone crashed until noon. One of the puzzles solved in that block was Divine the Rule of Kings, which we just…did. Like, it just happened. Really weird. There were some memes about “can someone pull up the US state connectivity graph, again?”. Turns out puzzle authors really like that graph.

    Of the regular puzzles I solved, I’d say my favorite was Getting Digits. We didn’t hit all the a-has at once, it was more like a staggered process. First “ON and OFF!”, then “Ohhhh it’s START”, then “ohhh it’s a phone number”. It’s a simple extraction but there’s still something cool about calling a phone number to solve a puzzle.

    Metapuzzles

    Despite looking at a lot of metas, I didn’t contribute to many of them. The one where I definitely helped was figuring out the turkey pardon connection for Thanksgiving-President’s Day. Otherwise, it was a bunch of observations that needed more answers to actually execute.

    This section really exists for two reasons. The first is the Halloween-Thanksgiving meta. We were stuck for quite a while, assuming that the meta worked by overlaying three Halloween town answers with three Thanksgiving answers with food substrings, and extracting using blood types in some way. This was a total red herring, since the food names were coming from turkey names. However, according to the people who solved it, the reason they finally tried ternary on the blood types was because we had a bingo square for “Puzzle uses ternary in extraction”, and they wanted to get that square. I’m officially claiming partial credit for that metapuzzle.

    The second reason is the Arbor-Pi meta. We had the core idea the whole time - do something based on digits after the Feynman point. The problem was that we horribly overcomplicated it. We decided to assign numbers to each answer, then substitute numbers based on answer length. So far, so good. We then decided that since there were two boxes, the answer had to be a two digit number, so we took everything mod 100. Then, instead of extracting the digit N places after the Feynman point, we thought we needed to find the digits “N” at some place after the Feynman point, noting how many digits we had to travel to find N. Somehow, of the 8 boxes we had, all of them gave even numbers, so it looked like something was happening. This was all wrong and eventually someone did the simple thing instead.

    Post-Hunt

    Since Palindrome didn’t win, I lost my Mystery Hunt bet. It turns out the betting pool only had four participants, we were all mutual friends, and we all guessed wrong. As per our agreement, all the money was donated to Mystery Hunt.

    Continuing the theme of not winning things, we didn’t get a Bingo, but we got very close. At least it’s symmetric.

    Bingo after

    After Hunt ended, we talked about how we didn’t get the Magic: the Gathering square, and how it was a shame that Pi-Holi wasn’t an MTG puzzle, since it could have been about the color pie. That led to talking about other games with colored wedges, and then we got the Trivial Pursuit a-ha. At this rate, I actually might print out the phrase list to refer to for extraction purposes.

I didn’t do too much after Hunt. It was mostly spent getting food with friends, complaining about the cold, and waiting in an airport for 12 hours. It turned out a bunch of Bay Area Hunters were waiting for the same flights, so it wasn’t as bad as it could have been. We talked about Hunt with other stranded passengers, and a few people from my team got dinner at Sbarro.

    Sbarro

    It felt obligatory.

    Comments
  • Quick Opinions on Go-Explore

    This was originally going to be a Tweetstorm, but then I decided it would be easier to write as a blog post. These opinions are quick, but also a lot slower than my OpenAI Five opinions, since I have more time.

Today, Uber AI Labs announced that they had solved Montezuma’s Revenge with a new algorithm called Go-Explore. Montezuma’s Revenge is the classic example of a hard Atari exploration problem. They also announced results on Pitfall, a more difficult Atari exploration problem. Pitfall is a less popular benchmark, and I suspect that’s because it’s so hard to get a positive score at all.

    These are eye-popping headlines, but there’s controversy around the results, and I have opinions here. For future note, I’m writing this before the official paper is released, so I’m making some guesses about the exact details.

    What is the Proposed Approach?

    This is a summary of the official release, which you should read for yourself.

One of the common approaches to solving exploration problems is intrinsic motivation. If you provide bonus rewards for novel states, you can encourage agents to explore more of the state space, even if they don’t receive any external reward from the environment. The details are in how you define novelty and how you scale the intrinsic reward.
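As one common (and deliberately simple) example of the idea, a count-based bonus pays the agent more for states it has seen less often; this is illustrative only, not specific to Go-Explore or any particular paper.

```python
from collections import defaultdict

class CountBonus:
    """Toy count-based intrinsic reward: bonus = beta / sqrt(visit count).

    `encode` maps a raw observation to a hashable code (for Atari, perhaps a
    downsampled frame); `beta` sets the scale relative to the real reward.
    """

    def __init__(self, encode, beta=0.1):
        self.encode = encode
        self.beta = beta
        self.counts = defaultdict(int)

    def bonus(self, obs):
        code = self.encode(obs)
        self.counts[code] += 1
        return self.beta / self.counts[code] ** 0.5
```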

    The authors theorize that one problem with these approaches is that they do a poor job at continuing to explore promising areas far away from the start state. They call this phenomenon detachment.

    Detachment diagram

    (Diagram from original post)

    In this toy example, the agent is right between two rich sources of novelty. By chance, it only explores part of the left spiral before exploring the right spiral. This locks off half of the left spiral, because it is bottlenecked by states that are no longer novel.

    The proposed solution is to maintain a memory of previously visited novel states. When learning, the agent first randomly samples a previously visited state, biased towards newer ones. It travels to that state, then explores from that state. By chance we will eventually resample a state near the boundary of visited states, and from there it is easy to discover unvisited novel states.

    In principle, you could combine this paradigm with an intrinsic motivation algorithm, but the authors found it was enough to explore randomly from the previously visited state.
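Here is a minimal sketch of that exploration phase as I understand it from the post. The `save_state` / `restore_state` interface, the uniform random exploration, and the visit-count sampling weights are all simplifications (the post says cell selection is biased toward newer cells, among other heuristics):

```python
import random

def go_explore_phase1(env, encode, n_iterations=10_000, explore_steps=100):
    """Toy exploration phase: archive interesting cells, return to one, explore from it.

    Assumes a deterministic env exposing save_state()/restore_state(), which is
    one of the "return to a state" mechanisms described in the post.
    """
    obs = env.reset()
    archive = {encode(obs): {"state": env.save_state(), "score": 0.0, "visits": 0}}

    for _ in range(n_iterations):
        # Pick a cell from the archive, favoring less-visited cells.
        codes = list(archive)
        weights = [1.0 / (archive[c]["visits"] + 1) for c in codes]
        cell = archive[random.choices(codes, weights=weights)[0]]
        cell["visits"] += 1

        # Return to that cell, then explore randomly from it.
        env.restore_state(cell["state"])
        score = cell["score"]
        for _ in range(explore_steps):
            obs, reward, done, _ = env.step(env.action_space.sample())
            score += reward
            code = encode(obs)
            if code not in archive or score > archive[code]["score"]:
                archive[code] = {"state": env.save_state(), "score": score, "visits": 0}
            if done:
                break
    return archive
```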

    Finally, there is an optional robustification step. In practice, the trajectories learned from this approach can be brittle to small action deviations. To make the learned behavior more robust, we can do self-imitation learning, where we take trajectories learned in a deterministic environment, and learn to reproduce them in randomized versions of that environment.

    Where is the Controversy?

    I think this motivation is sound, and I like the thought experiment. The controversy lies within the details of the approach. Specifically,

    1. How do you represent game states within your memory, in a way that groups similar states while separating dissimilar ones?
    2. How do you successfully return to previously visited states?
    3. How is the final evaluation performed?

Point 1 is very mild. They found that simply downsampling the frame to a smaller image (an 8 x 11 grayscale image with 8 pixel intensities) was sufficient to get a good enough code.

    Downscaling

    (Animation from original post)
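A sketch of that representation (the 8 x 11 and 8-intensity numbers come from the post; which dimension is width versus height, and the exact resize method, are my guesses):

```python
import cv2
import numpy as np

def frame_to_cell(frame, width=11, height=8, depth=8):
    """Downsample an RGB Atari frame into a coarse, hashable cell code.

    Frames that collapse to the same tiny grayscale image count as the
    "same" state for the purposes of the archive.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
    small = cv2.resize(gray, (width, height), interpolation=cv2.INTER_AREA)
    quantized = (small // (256 // depth)).astype(np.uint8)  # 8 intensity levels
    return quantized.tobytes()
```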

Downsampling alone is enough to achieve 35,000 points, about three times the previous state of the art, Random Network Distillation.

    However, the advertised result of 2,000,000 points on Montezuma’s Revenge includes a lot of domain knowledge, like:

    • The x-y position of the character.
    • The current room.
    • The current level.
    • The number of keys held.

This is a lot of domain-specific knowledge for Montezuma’s Revenge. But like I said, this isn’t a big deal; it’s just a matter of how the results are presented.

    The more significant controversy is in points 2 and 3.

    Let’s start with point 2. To return to previous states, three methods are proposed:

    • Reset the environment directly to that state.
    • Memorize and replay the sequence of actions that take you to that state. This requires a deterministic environment.
    • Learn a goal-conditioned policy. This works in any environment, but is the least sample efficient.

    The first two methods make strong assumptions about the environment, and also happen to be the only methods used in the reported results.

The other controversy is that the reported numbers use just the up-to-30 random no-op starting actions commonly used in Atari evaluation. This adds some randomness, but the SOTA they compare against also uses sticky actions, as proposed by Machado et al (2017). With probability \(\epsilon\), the previous action is executed instead of the one the agent requests. This adds some randomness to the dynamics, and is intended to break approaches that rely too much on a deterministic environment.
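For reference, sticky actions are a small wrapper around the environment. Here is a sketch of the Machado et al version around a Gym-style interface, with the commonly used \(\epsilon = 0.25\):

```python
import random

class StickyActions:
    """With probability epsilon, repeat the previously executed action instead
    of the requested one (Machado et al, 2017)."""

    def __init__(self, env, epsilon=0.25):
        self.env = env
        self.epsilon = epsilon
        self.last_action = 0

    def reset(self):
        self.last_action = 0
        return self.env.reset()

    def step(self, action):
        if random.random() < self.epsilon:
            action = self.last_action  # the requested action gets "dropped"
        self.last_action = action
        return self.env.step(action)
```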

    Okay, So What’s the Big Deal?

    Those are the facts. Now here are the opinions.

    First, it’s weird to have a blog post released without a corresponding paper. The blog post is written like a research paper, but the nature of these press release posts is that they present the flashy results, then hide the ugly details within the paper for people who are more determined to learn about the result. Was it really necessary to announce this result without a paper to check for details?

    Second, it’s bad that the SOTA they compare against uses sticky actions, and their numbers do not use sticky actions. However, this should be easy to resolve. Retrain the robustification step using a sticky actions environment, then report the new numbers.

    The controversy I care about much more is the one around the training setup. As stated, the discovery of novel states heavily relies on using either a fully deterministic environment, or an environment where we can initialize to arbitrary states. The robustification step also relies on a simulator, since the imitation learning algorithm used is the “backward” algorithm from Learning Montezuma’s Revenge from a Single Demonstration, which requires resetting to arbitrary states for curriculum learning reasons.

    Backward algorithm

    (Diagram from original post. The agent is initialized close to the key. When the agent shows it can reach the end state frequently enough, it is initialized further back in the demonstration.)
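A rough sketch of that backward curriculum, with a hypothetical `demo_states` (saved simulator states along the demonstration) and a hypothetical `trainer` that trains on episodes from a given start state and reports success:

```python
def backward_curriculum(trainer, demo_states, success_rate=0.2, batch=100):
    """Start the agent near the end of a demonstration; once it succeeds often
    enough from there, move the start point earlier. Heavily simplified from
    the "backward" algorithm in the Single Demonstration paper."""
    start = len(demo_states) - 1
    while start >= 0:
        # Keep training from the same start state until the agent is reliable there.
        successes = sum(trainer.run_episode(demo_states[start]) for _ in range(batch))
        if successes / batch >= success_rate:
            start -= 1  # good enough from here; push the start further back
    return trainer
```

The key point for this discussion is the first argument of `run_episode`: the whole scheme depends on being able to reset the simulator to arbitrary states along the demo.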

It’s hard to overstate how big of a deal these simulator initializations are. They can matter a lot! One of the results shown by (Rajeswaran & Lowrey et al, 2018) is that for the MuJoCo benchmarks, a wider state initialization gives you more gains than pretty much any change between RL algorithms and model architectures. If we think of simulator initialization as a human-designed state initialization, it should be clear why this places a large asterisk on the results when compared against a state-of-the-art method that never exploits this feature.

If I were arguing for Uber, I would say that this is part of the point.

    While most interesting problems are stochastic, a key insight behind Go-Explore is that we can first solve the problem, and then deal with making the solution more robust later (if necessary). In particular, in contrast with the usual view of determinism as a stumbling block to producing agents that are robust and high-performing, it can be made an ally by harnessing the fact that simulators can nearly all be made both deterministic and resettable (by saving and restoring the simulator state), and can later be made stochastic to create a more robust policy

    This is a quote from the press release. Yes, Go-Explore makes many assumptions about the training setup. But look at the numbers! It does very, very well on Montezuma’s Revenge and Pitfall, and the robustification step can be applied to extend to settings where these deterministic assumptions are less true.

    To this I would reply: sure, but is it right to claim you’ve solved Montezuma’s Revenge, and is robustification a plausible fix to the given limitations? Benchmarks can be a tricky subject, so let’s unpack this a bit.

    In my view, benchmarks fall along a spectrum. On one end, you have Chess, Go, and self-driving cars. These are grand challenges where we declare them solved when anybody can solve them to human-level performance by any means necessary, and where few people will complain if the final solution relies on assumptions about the benchmark.

    On the other end, you have the MountainCar, Pendulums, and LQRs of the world. Benchmarks where if you give even a hint about the environment, a model-based method will instantly solve it, and the fun is seeing whether your model-free RL method can solve it too.

    These days, most RL benchmarks are closer to the MountainCar end of the spectrum. We deliberately keep ourselves blind to some aspects of the problem, so that we can try our RL algorithms on a wider range of future environments.

In this respect, using simulator initialization or a deterministic environment is a deal breaker for several downstream environments. The blog post says that this approach could be used for simulated robotics tasks, and then combined with sim-to-real transfer to get real-world policies. On this front, I’m fairly pessimistic about how deterministic physics simulators are, how difficult sim-to-real transfer is, and whether this gives you gains over standard control theory. The paradigm of “deterministic, then randomize” seems to assume that your deterministic solution doesn’t exploit the determinism in a way that’s impossible to reproduce in the stochastic version of the environment.

A toy example here is something like an archery environment with two targets. One has a very tiny bullseye that gives +100 reward. The other has a wider bullseye that gives +50 reward. A policy in a deterministic environment will prefer the tiny bullseye. A policy in a noisy environment will prefer the wider bullseye. But this robustification paradigm could force the impossible problem of hitting a tiny bullseye when there are massive amounts of unpredictable wind.
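With some made-up hit probabilities, the expected values flip once you add noise, which is exactly the failure mode:

```python
# Hypothetical numbers for the two-target archery example.
tiny_reward, wide_reward = 100, 50

# Deterministic training: every shot lands, so the tiny bullseye looks strictly better.
print(1.0 * tiny_reward, 1.0 * wide_reward)        # 100.0 vs 50.0

# Add wind: the tiny bullseye is rarely hit, and the policy locked onto it loses out.
p_tiny, p_wide = 0.05, 0.8
print(p_tiny * tiny_reward, p_wide * wide_reward)  # 5.0 vs 40.0
```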

    This is the main reason I’m less happy than the press release wants me to be. Yes, there are contributions here. Yes, for some definition of the Montezuma’s Revenge benchmark, the benchmark is solved. But the details of the solution have left me disillusioned about its applicability, and the benchmark that I wanted to see solved is different from the one that was actually solved. It makes me feel like I got clickbaited.

    Is the benchmark the problem, or are my expectations the problem? I’m sure others were disappointed when Chess was beaten with carefully designed tree search algorithms, rather than “intelligence”. I’m going to claim the benchmark is the problem, because Montezuma’s Revenge should be a simple problem, and it has analogues whose solution should be a lot more interesting. I believe a solution that uses sticky actions at all points of the training process will produce qualitatively different algorithms, and a solution that combines this with no control over initial state will be worth paying attention to.

In many ways, Go-Explore reminds me of the post for the Hybrid Reward Architecture paper (van Seijen et al, 2017). This post also advertised a superhuman result on Atari: achieving 1 million points on Ms. Pac-Man, compared to a human high score of 266,330 points. It also did so in a way that relies on trajectory memorization. Whenever the agent completes a Ms. Pac-Man level, the trajectory it executed is saved, then replayed whenever it revisits that level. Due to determinism, this guarantees the RL algorithm only needs to solve each level once. As shown in their plots, the trick is the difference between 1 million points and 25,000 points.

    Ms. Pac-Man results

Like Go-Explore, this post had interesting ideas that I hadn’t seen before, which is everything you could want out of research. And like Go-Explore, I was sour on the results themselves, because they smelled too much like a result shaped by PR, warped in a way that valued flashy numbers too much and applicability too little.

    Comments