Posts
-
Our Generation's Chernobyl
This post has several spoilers for the HBO mini-series Chernobyl. In some ways, you can’t really spoil a show about a historical event, but you may want to skip this post if you haven’t seen it before.
Coronavirus is the new Chernobyl. Both are crises caused by something we can’t see with the naked eye. Both started in authoritarian countries, and both have made scientists the heroes and trusted officials of the day.
Mechanically, they are very different. Chernobyl is about an exploded reactor pouring out radioactive fallout, while COVID-19 is about a living virus. But the response has felt similar, and that's the part that disappoints me.
The Chernobyl mini-series on HBO is really well made, and although it is not always historically accurate, it’s accurate enough to paint some pretty stark parallels.
Everything is Normal
Craig Mazin, the writer and executive producer of Chernobyl, did a podcast about the series, where he talks about the show and the historical event. In episode 2, he mentions one streak of bad luck that didn’t make it to the show.
The Chernobyl accident happened on April 26. Five days later was May 1, International Workers’ Day. This was a big holiday for the Soviet Union, and in the days before May 1, party officials understood the scope of Chernobyl’s danger. They wanted to cancel holiday celebrations, to reduce exposure to radioactive dust. They failed, because the Kremlin told them that Everything Was Normal. Why would you take precautions when there’s nothing to worry about?
We’re talking about parades. In Kiev, in Minsk, there were party officials who, it honestly seems to me, begged, BEGGED, to cancel the parade. They were told that not only would they not cancel the parade, you’ll be walking in it too. And they did.
(starts at 22:10)
Everything’s fine - until it isn’t. Craig emphasizes that people in the Soviet bureaucracy weren’t monsters. They knew it was dangerous to be outside, but were overruled by those who cared about reputation more than safety.
Never Mind, This Isn’t Normal
Plenty of Soviet disasters got covered up and were only declassified long after they occurred. Chernobyl couldn’t be kept a secret, because the wind carried radioactive particles to Sweden and other European countries. Once foreign detectors picked up the fallout, the cover-up was over.
Radioactive fallout is not the same as a virus. A single case of the virus can multiply and turn into a million cases. Radiation does not replicate this way. Chernobyl is special because people and objects close to the reactor were contaminated with so much radioactive material that they became radiation hazards themselves. The firefighter uniforms from that day are still in the basement of the abandoned Pripyat hospital, too radioactive to approach without the right protective gear.
Nearby countries told their citizens to avoid eating wild game and vegetables, and to follow basic decontamination measures.
There’s radioactive dust, she said; close all the windows and plug all the cracks. Later, my anxiety grew when I saw her husband Andrei taking off his clothes and putting them in a plastic bag before entering his apartment.
As for the Soviet Union, to their credit, they did take drastic action once it was clear they had to. Thanks to an authoritarian government and a culture that pushed the collective over the individual, the evacuation and cleanup were fairly orderly.
So they finally evacuate Pripyat. One thing I was struck [by] was how orderly it was. I was like, oh you’re evacuating an entire town 50,000 people, and I could only think of what that would be like if they tried to do that to a similar town in America. People would be yelling, people would be complaining, people would be demanding they were allowed to bring that […] Everybody just said “alright” and climbed up onto the [evacuation] buses.
The citizenry, by all accounts except one, was incredibly orderly. Again, reflective of the society in which they lived and grew up. The police said, “You’re coming with us. You’re coming on the bus. You can take one suitcase and no pets, and you’ll be back in a few days.” And everybody just said, “Okay, I’ll get on the bus.” […] And they never, ever, ever came back.
(starts at 28:55)
Scientists Become Heroes
Chernobyl made everyone care a lot more about nuclear physicists. Coronavirus is making everyone care a lot more about epidemiologists. Valery Legasov is the protagonist of the Chernobyl mini-series, and in real life he was the chief of the commission investigating Chernobyl. For the West, he became one of the faces of the Soviet response, since he presented the Soviet report at a meeting of the International Atomic Energy Agency in Vienna, detailing what happened in Chernobyl. He was well-respected for acknowledging failures in the Soviet response, although his public testimony covered up flaws in the design of Soviet nuclear reactors.
Now we have Fauci, who is generally popular (56% trust rating as of May 2), and one of the figureheads of the Coronavirus Task Force. The CDC playbook for communication mentions the importance of having a single lead spokesperson who’s a scientist, not a politician. Having a single spokesperson increases trust because of familiarity, and making that person a scientist reduces risk of politicizing the disease. If half the country trusts the CDC less because of a culture war, it’s a disaster for public health. See this New Yorker article for more.
According to the YouGov study above, Fauci’s 56% trust rating is split 68% among Democrats and 48% among Republicans. It is already too late for the United States.
Reality Doesn’t Care About Politics
The Chernobyl mini-series is ostensibly about the events of Chernobyl, but it’s really more about how people responded to Chernobyl. That’s the aspect I was reminded of first, and the reason I started writing this post.
Chernobyl and COVID-19 aren’t really about people. Sure, people are part of both, but their fundamentals are grounded in physical reality: infectious diseases for COVID-19, and radiation for Chernobyl. It’s like the classic Philip K. Dick quote: “Reality is that which, when you stop believing in it, doesn’t go away.” A worryingly large number of decision makers aren’t respecting reality.
I’m not sure of the historical accuracy of this moment, but in Episode 3 of the mini-series, “Open Wide, O Earth”, Legasov and Scherbina are arguing over the size of an evacuation zone. Legasov learns the evacuation zone has been set to 30 km, and wants it to be much, much larger. He is overruled.
Legasov: “How did this happen? Who gave them this idea?”
Scherbina: “Are you suggesting I did?”
Legasov: “Well someone decided the evacuation zone should be 30 km, when we know– (points to map) Here! Cesium-137 in Gomel District. Two HUNDRED kilometers away!”
Scherbina: “It was decided.”
Legasov: “Based on WHAT?”
Scherbina: “I don’t know.”
Legasov: “Forgive me. Maybe I’ve spent too much time in my lab. Or maybe I’m stupid. But is this really how it all works? An uninformed, arbitrary decision that will cost who knows how many lives is made by some apparatchik? Some career Party man?”
This is worth emphasizing: no one is forced to do things that make sense. To be a politician, the only thing you have to understand is people. Who they are, how they think, what they believe, and how to convince them to support you. That’s certainly not easy; it requires you to be shrewd and to have a good read of people. But there’s no particular reason to expect politicians to be good at anything besides navigating structures of power. They don’t have to be well-informed about anything, unless it’s politically expedient to be well-informed. Unfortunately, that approach is exactly what doesn’t work for the coronavirus.
If the coronavirus was a people problem, maybe you could use charm and wit to defuse the situation. But you can’t. You can’t talk to the coronavirus to understand what it does and doesn’t want. You can’t work the room to get the disease to spread slower. You can’t cut a deal with coronavirus to make it kill fewer people. No, it’s there, it exists, and you have to deal with it - or not. This has been repeated elsewhere, but all political instincts are wrong at the start of a pandemic, and you pay a price if those instincts aren’t overruled.
You hope you get a politician who understands the problem, has the grit to take unpopular actions, and uses their experience in navigating deals to get what matters to the people who need it. By the end of the series, Scherbina is that man.
Scherbina: I’m an inconsequential man, Valery. I hoped one day I would matter, but I didn’t. I just stood next to people who did.
Legasov: There are other scientists like me. Any one of them could have done what I did. You– everything we asked for: men, material, lunar rovers. Who else could have done these things? They heard me, but they listened to you. […] Of all the ministers, and all the deputies, the entire congregation of obedient fools, they mistakenly sent the one good man. For God’s sake Boris, you were the one who mattered most.
There’s this phenomenon, where people in politics and PR will repeat something that isn’t true, and if they do so often enough, people will believe it. Those people will even start using motivated reasoning to create their own justification for what they’re being told. This works for subjects that are complicated enough to have ambiguity in their causality, but it doesn’t work very well for the narrow slice of reality that COVID-19 occupies.
In episode 4, “For the Happiness of All Mankind”, the Soviet Union explores using robots to clear debris off the Chernobyl reactor roof. Radiation damages electronics. The Soviet robots they had from the Space Race could withstand some radiation, but not the highest levels of radiation detected on the roof.
Through tense off-screen negotiations, they get a robot from West Germany which can withstand the reported numbers. They try it out, and the robot fails immediately. Scherbina soon learns why.
Scherbina: The official position of the State is that a global nuclear catastrophe is not possible in the Soviet Union. They told the Germans that the highest detected level of radiation was 2000 roentgen. They gave them the propaganda number. That robot was never going to work.
Think about this for a second. Someone high up in the government decided to give a propaganda number, instead of a real one. That filtered all the way down the bureaucracy, down to the people figuring out how to borrow a robot from the West. That one lie, from someone who cared more about reputation than accuracy, wasted the time of the negotiators, of the robot constructors from West Germany, of the technicians who operated it - all of it, gone.
I cannot think of a better argument for why you should care who your boss is, and who your elected officials are. If technology is a multiplier of both good and bad, then power is too. What does it say when both Republican and Democratic governors hid flight details of testing and PPE shipments from the federal government, to avoid confiscation? It’s just absurd.
The central theme of the Chernobyl mini-series is truth, and the lies surrounding it. This was why the showrunners tried to stay historically accurate. They thought it would be cheap to send a message about truth that was wrapped in artistic lies, and reality was dramatic enough. When the inevitable COVID-19 documentaries arrive, I hope they make the same decision.
-
A Reinforcement Learning Potpourri
I’ve fallen behind on RL literature from the past few months. So, I’ve decided to catch up with a bunch of recent papers.
First Return Then Explore
Let’s start with First Return Then Explore, by Ecoffet et al. This is a continuation and extension of the Go-Explore work from UberAI.
When Go-Explore first came out, I was very excited by its announced results, but got upset by how they were presented. I wrote a post attempting to explain that tension - that I really liked the paper’s ideas, and really disliked its media strategy. The media strategy for First Return Then Explore is comparatively muted. For one, this time they have a draft on arXiv. (Sorry, I’m never going to stop ribbing them for that.) They’ve also been more careful in their claims, and have improved their previous results.
Both First Return Then Explore and Go-Explore aim to first return to a state that has been visited before, then explore from that state. To make this more efficient, states are grouped into “cells” through some encoding. In the original Go-Explore paper, these cells are defined by downsampling by a fixed factor. First Return Then Explore changes this to tune the downsampling factor online, by doing a small search to maximize normalized entropy across a fixed budget of \(T\) cells. There are also more heuristics on choosing which cell to return to, instead of uniformly at random.
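To make this concrete, here's a rough sketch of what the cell encoding and the online search over downsampling factors might look like. The parameter names and candidate values below are placeholders of mine, not the paper's exact settings, and the actual method also tunes other discretization parameters.

```python
import numpy as np

def to_cell(frame, downsample, n_gray_levels=8):
    # Map a grayscale frame to a coarse "cell" by shrinking it and
    # quantizing pixel intensities, then hashing the result.
    small = frame[::downsample, ::downsample]
    quantized = (small.astype(np.int32) * n_gray_levels // 256).astype(np.uint8)
    return quantized.tobytes()  # hashable cell identifier

def normalized_entropy(cell_counts, target_num_cells):
    # Entropy of the distribution of recent frames over cells, normalized so
    # that spreading evenly over roughly target_num_cells cells scores highest.
    p = np.array(list(cell_counts.values()), dtype=np.float64)
    p /= p.sum()
    return -(p * np.log(p)).sum() / np.log(target_num_cells)

def tune_downsampling(recent_frames, candidate_factors=(4, 8, 16, 32), target_num_cells=50):
    # Small search: pick the downsampling factor whose cells maximize the
    # normalized entropy over a buffer of recently seen frames.
    scores = {}
    for factor in candidate_factors:
        counts = {}
        for frame in recent_frames:
            cell = to_cell(frame, factor)
            counts[cell] = counts.get(cell, 0) + 1
        scores[factor] = normalized_entropy(counts, target_num_cells)
    return max(scores, key=scores.get)
```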
Besides this change, the Atari experiments mostly operate the same way: they assume a simulator or deterministic environment, learn the policy by leveraging the determinism, then do a robustification step where they try to reproduce behavior in a stochastic version of the environment.
The part I care about is the part they call Policy-based Go-Explore. My main criticism of the original Go-Explore paper was that it required access to a deterministic analogue of your final environment. They proposed learning a goal-conditioned policy to return to previous states, instead of following a memorized trajectory, which lets you handle stochastic environments at training time. However, they left it as future work.
Well, now they have results. It worked, but it was only tested on Montezuma’s Revenge with domain-specific features. I view papers through a survivorship-bias lens: if there’s an experiment that’s natural in the paper’s context but isn’t in the paper, it probably didn’t work, because if it had worked, it’d be in the paper. So for now, I’m assuming it didn’t beat SOTA with domain-agnostic features.
My final verdict is that the updated paper improved its strengths, but only mildly improved its weaknesses. The paper is an even stronger case that good exploration can be reduced to learning to quickly return to states you’ve visited before, and exploration algorithms without this capability have failure modes that First Return Then Explore fixes. Learning that return policy, however, is still an open problem for general domains. The reduction is valuable, and I hope it encourages more work on efficiently learning goal-conditioned policies.
Data Augmentation
The new hotness in RL is data augmentation. Three papers came out on arXiv in the past week: Contrastive Unsupervised Representations for Reinforcement Learning (CURL), from Srinivas and Laskin et al, Image Augmentation is All You Need (DrQ) from Kostrikov and Yarats et al, and Reinforcement Learning with Augmented Data (RAD) from Laskin and Lee et al. It also made it to VentureBeat of all places.
These three papers all find that for image-based RL, data augmentation gives very large gains on several tasks. Now at this point, I should mention that CURL and RAD are from people I know from UC Berkeley, and DrQ is from people I know from Google, so I’m going to step very carefully…
CURL learns a representation by contrastive learning. Two randomly sampled data augmentations are applied to the same image, and their representations are encouraged to be close to one another through an InfoNCE loss. (See the SimCLR paper for an ablation showing this contrastive loss does better than other ones.)
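Sketched out, the objective looks roughly like this (PyTorch-style, with the bilinear similarity based on my reading of the paper; names and shapes are illustrative): each image's positive is its other augmentation, and the rest of the batch serves as negatives.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(query_features, key_features, W):
    # query_features, key_features: [batch, dim] encodings of two augmentations
    # of the same batch of images. W: learned [dim, dim] bilinear matrix.
    logits = query_features @ W @ key_features.t()            # [batch, batch] similarities
    logits = logits - logits.max(dim=1, keepdim=True).values  # numerical stability
    labels = torch.arange(logits.shape[0], device=logits.device)
    return F.cross_entropy(logits, labels)  # positive pairs sit on the diagonal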
RAD compares just using data augmentation, without any contrastive losses, and finds that it outperforms CURL on the DMControl Suite. The theory is that in these environments, RAD beats CURL because it only optimizes for the task reward we care about, while CURL has to balance RL and contrastive learning. An ablation of the data augmentations used finds that random cropping is by far the most important data augmentation.
DrQ also does data augmentation, using random shifts. This is the same as padding the image, then doing a random crop. In an actor-critic framework, they sample data augmentations to estimate \(Q(s,a)\), sample other data augmentations to estimate target Q-value \(Q(s', a')\), and do a critic update that’s now regularized by the data augmentation.
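A hedged sketch of both pieces is below; the padding size, number of augmentations, and the `target_critic(obs, action)` signature are placeholders of mine, not the papers' exact settings.

```python
import torch
import torch.nn.functional as F

def random_shift(images, pad=4):
    # Pad with edge replication, then randomly crop back to the original size.
    # images: [batch, channels, height, width]
    n, c, h, w = images.shape
    padded = F.pad(images, (pad, pad, pad, pad), mode='replicate')
    shifted = torch.empty_like(images)
    for i in range(n):
        top = torch.randint(0, 2 * pad + 1, (1,)).item()
        left = torch.randint(0, 2 * pad + 1, (1,)).item()
        shifted[i] = padded[i, :, top:top + h, left:left + w]
    return shifted

def augmented_q_target(target_critic, next_obs, next_action, num_augs=2):
    # DrQ-style regularization: average the target Q-value over several
    # independently augmented copies of the next observation.
    targets = [target_critic(random_shift(next_obs), next_action) for _ in range(num_augs)]
    return torch.stack(targets).mean(dim=0)
```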
Now, are these results surprising? Uh, kind of? It isn’t surprising because data augmentation isn’t new. Specifically doing random cropping isn’t new either - the QT-Opt paper I worked on 2 years ago used random cropping. Other groups have used data augmentation as well. The surprising part is the effect size. These papers are the first to carefully design an experimental setup that lets them isolate and measure the gains from data augmentation.
It’s the sort of paper that makes you feel dumb for not writing it yourself. I’ve run very similar data augmentation ablations before, with results consistent with theirs, but I never did it on standard RL benchmarks and I never dug into it more. If I had, I probably could have written this paper. Ah well, live and learn.
I’m very big on data augmentation. It just seems like the obvious thing to do. You can either view it as multiplying the size of your dataset by a constant factor, or you can view it as decreasing the probability your model learns a spurious correlation, but in either case it usually doesn’t hurt and it often really helps.
AI Economist
Salesforce put out a paper that uses reinforcement learning to design tax policy in a toy economic environment, and they argue their tax policies give better equality-productivity trade-offs, compared to the Saez framework.
I do not understand tax policy very well, but my first instinct is that the economy is really complicated, a model of the economy has to be too simplistic somewhere, and therefore the results should be taken with massive caveats. The authors are aware of this, and the ideas the paper plays with are interesting. I’ve found papers like this are best viewed as idea generators. Within a model, the AI discovers a new strategy, which could be useful in the more complex environment, but you will get better results by asking a human to consider whether the AI’s strategy makes sense, instead of applying the AI’s strategy directly.
Within the simulated economy, the agent preferred higher tax rates for the top brackets and lower tax rates for the middle class. So that’s interesting.
It’s very unlikely this makes it to actual tax policy anytime soon. The real economy is more complicated, the politics is a nightmare to navigate, and the people in charge of economic policy probably care more about the perception of a good economy than the reality of a good economy. Given the ethics questions surrounding economics experiments, perhaps that’s for the best.
Offline Reinforcement Learning
Some colleagues from Google Brain and UC Berkeley have put a tutorial for Offline Reinforcement Learning on arXiv.
By offline reinforcement learning, they mean reinforcement learning from a fixed dataset of episodes from an environment, without doing any additional online data collection during learning. This is to distinguish it from off-policy learning, which can happen in an offline setting, but is commonly used in settings with frequent online data collection.
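In code, the distinction is just that the training loop never calls the environment; a minimal sketch, where `dataset` and `agent` are placeholders rather than any particular library's API:

```python
def train_offline(agent, dataset, num_steps, batch_size=256):
    # Offline RL: learn purely from transitions collected ahead of time.
    for _ in range(num_steps):
        batch = dataset.sample(batch_size)  # (s, a, r, s', done) tuples from the fixed dataset
        agent.update(batch)                 # e.g. a Q-learning or actor-critic update
    return agent                            # note: no env.step() anywhere
```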
Offline RL is, in my opinion, a criminally understudied subject. It’s both very important and very difficult, and I’ve been talking about writing a blog post about it for over a year. Suffice it to say that I think this tutorial is worth reading. Even if you do not plan to research offline RL, I feel the arguments for why it’s important and why it’s hard are useful to understand, even if you disagree with them.
-
The Argument for Contact Tracing
A few days ago, Apple and Google announced a partnership to develop an opt-in iOS and Android contact tracing app. Apple’s announcement is here, and Google’s announcement is here.
I felt it was one of the biggest reasons for optimism about both ending stay-at-home orders and maintaining control over COVID-19, assuming that people opt into it.
I also quickly realized that a bunch of people weren’t going to opt into it. Here’s my attempt to fix that.
This post covers what contact tracing is, why I believe it’s critical to handling COVID-19, and how the proposed app implements it while maintaining privacy, ensuring that neither people, nor corporations, nor governments learn personal information they don’t need to know.
As a disclaimer, I do currently work at Google, but I have no connection to the people working on this, I’m speaking in a personal capacity, and I’ve deliberately avoided looking at anything besides the public press releases.
What Is Contact Tracing?
Contact tracing is how we figure out who an infected person has been in contact with. This is something that hospitals already do when a patient gets a positive test for COVID-19. The aim of contact tracing is to warn people who may be infected and asymptomatic to stay home. This cuts lines of disease transmission, slowing down the spread of the disease.
Much of this is done by hand, and would continue to be done by hand, even if contact tracing apps become widespread. Contact tracing apps are meant to help existing efforts, not replace them.
Why Is Contact Tracing So Necessary?
Stay-at-home orders are working. Curves for states that issued stay-at-home orders earlier are flatter. This is all great news.
However, the stay-at-home orders have also caused tons of economic damage. Now, to be clear, the economic damage without stay-at-home orders would have been worse. Corporate leaders and Republicans may have talked about lifting stay-at-home orders, but as relayed by Justin Wolfers, UMich Economics professor, a survey of over 40 leading economists found 0% of them agreed that lifting severe lockdowns early would decrease total economic damage.
Survey of leading economists:
"Abandoning severe lockdowns at a time when the likelihood of a resurgence in infections remains high will lead to greater total economic damage than sustaining the lockdowns to eliminate the resurgence risk."
0% disagree.
— Justin Wolfers (@JustinWolfers) March 29, 2020
Understand the incentives: CEO's and bankers are calling for workers to be recalled. Economists—whose models also account for what's in the workers' best interests—disagree. Epidemiologists—who understand how pandemics spread—also disagree.
— Justin Wolfers (@JustinWolfers) March 29, 2020
So, lockdowns are going to continue until there’s low risk of the disease resurging. As summarized by this Vox article, there are four endgames for this.
- Social distancing continues until cases literally go to 0, and the disease is eradicated.
- Social distancing continues until a vaccine is developed, widely distributed, and enough of the population gets it to give herd immunity.
- Social distancing continues until cases drop to a small number, and massive testing infrastructure is in place. Think millions of tests per day, enough to test literally the entire country, repeatedly, to track the course of the disease.
- Social distancing continues until cases drop to a small number, and widespread contact tracing, plus a large, less massive number of tests are in place.
Eradication is incredibly unlikely, since the disease broke containment. Vaccines aren’t going to be widely available for about a year, because of clinical trial timelines. For testing, scaling up production and logistics is underway right now, but reaching millions of tests per day sounds hard enough that I don’t think the US can do it.
That’s where contact tracing comes in. With good contact tracing, you need fewer tests to get a good picture of where the disease is. Additionally, digital solutions can exploit what made software take over the world: once it’s ready, an app can be easily distributed to millions of people in very little time.
Vaccine development, test production, and contact tracing apps will all be done in parallel, but given the United States already has testing shortfalls, I expect contact tracing to finish first, meaning it’s the best hope for restarting the economy.
What About Privacy?
Ever since the Patriot Act, people have been wary of governments using crises as an excuse to extend their powers, and ever since 2016, people have been wary of big tech companies. So it’s understandable that people are sounding alarm bells over a collaboration between Apple, Google, and the government.
However, if you actually read the proposal for the contact tracing app, you find that
- The privacy loss is fairly minimal.
- The attacks on privacy you can execute are potentially annoying, but not catastrophic.
When you contrast this with people literally dying, the privacy loss is negligible.
Let’s start with a privacy loss that isn’t okay, to clarify the line. In South Korea, the government published personal information for COVID-19 patients, including where they traveled, their gender, and their rough age. All of this information was broadcast to everyone in the area. See this piece from The Atlantic, or this article from Nature, for more information.
Exposing this level of personal detail is entirely unnecessary. There is no change in health outcome between knowing you were near an infected person, and knowing you were near an infected person of a certain age and gender. In either case, you should self-quarantine. The South Korea model makes people lose privacy for zero gain.
How does the Apple and Google collaboration differ? Here is the diagram from Google’s announcement.
This is similar to the DP-3T protocol, which is briefly explained in this comic by Nicky Case.
- Each phone continually generates random keys, which are broadcast over Bluetooth to all nearby devices. These keys change every 5-15 minutes.
- Each device records the random messages it has heard from nearby devices.
- Whenever someone tests positive, they can elect to upload all their messages to a database. This upload requires the consent of both the user and a public health official.
- Apple’s and Google’s servers store a list of all messages sent by COVID-19 patients. They will be stored for 14 days, the incubation period of the virus.
- Periodically, every device will download the current database. It will then, on-device, compare that list to a locally saved list of messages received from nearby phones.
- If there is enough overlap, the user gets a message saying they were recently in contact with a COVID-19 case.
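To make those steps concrete, here's a toy sketch of the matching logic as I understand it from the public description. It is not the actual Apple/Google implementation; details like key rotation, retention, and the exposure threshold are placeholders.

```python
import secrets

class Phone:
    def __init__(self):
        self.sent = []      # random identifiers this phone has broadcast
        self.heard = set()  # identifiers received from nearby phones

    def new_identifier(self):
        # Random and frequently rotated, so it reveals nothing about the owner.
        ident = secrets.token_bytes(16)
        self.sent.append(ident)
        return ident

    def receive(self, ident):
        self.heard.add(ident)

    def upload_if_positive(self, database):
        # In the real system this requires user consent plus a public health sign-off.
        database.extend(self.sent)

    def check_exposure(self, database, threshold=1):
        # Done entirely on-device: the server never learns who met whom.
        return len(self.heard.intersection(database)) >= threshold

# Toy usage: Alice's phone hears one of Bob's identifiers, and Bob later uploads.
alice, bob = Phone(), Phone()
alice.receive(bob.new_identifier())
published = []                          # stand-in for the shared database
bob.upload_if_positive(published)
assert alice.check_exposure(published)  # Alice gets an exposure notification
```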
What makes this secure? Since each phone’s message is random, and changes frequently, the messages on each phone don’t indicate anything about who those messages correspond to. Since the database is a pile of random messages, there’s no way to extract further information from the stored database, like age, gender, or street address. That protects people’s privacy from both users and the database’s owner.
The protocol minimizes privacy loss, but it does expose some information, since doing so is required to make contact tracing work. Suppose Alice only meets with Bob in a 14 day period. She later gets a notification that someone she interacted with tested positive for COVID-19. Given that Alice only met one person, it’s rather obvious that Alice can conclude Bob has COVID-19. However, in this scenario, Alice would be able to conclude this no matter how contact tracing is implemented. You can view this as a required information leak, and the aim of the protocol is to leak no more than the required amount. If Alice meets 10 people, then gets a notification, all she learns is that one of the 10 people she met is COVID-19 positive - which, again, is something that she could have concluded anyways.
If implemented as stated, neither the hospital, nor Apple, nor Google should learn who’s been meeting who, and the users getting the notification shouldn’t learn who transmitted the disease to them.
What If Apple and Google Do Something Sketchy?
First, the simpler, less technical answer. So far, Apple and Google have publicized and announced their protocol ahead of time. Their press releases include a Bluetooth specification, a cryptography specification, and the API that both iOS and Android will support. This is standard practice if you want to do security right, because it lets external people audit the security. It also acts as an implicit contract - if they deviate from the spec, the Internet will bring down a firestorm of angry messages and broken trust. If you can count on anyone to do due diligence, it’s the cryptography nerds.
In short, if this was a sneaky power grab, they’re sure making it hard to be sneaky by readily giving out so much information.
Maybe there’s a backdoor in the protocol. I think that’s very unlikely. Remember, it’s basically the DP-3T protocol, which was designed entirely by academic security researchers, not big tech companies. I haven’t had the time to verify they’re exactly identical, but on a skim they have the same security guarantees.
When people explain what could go wrong, they point out that although the app is opt-in, governments could keep people in lockdown unless they install the app, effectively making it mandatory. Do we really want big tech companies building such a wide-reaching system?
My answer is yes, absolutely, and if governments push for mandatory installs, then that’s fine too, as long as the app’s security isn’t compromised.
Look, you may be philosophically against large corporations accumulating power. I get it. Corporations have screwed over a lot of people. However, I don’t think contact tracing gives them much power they didn’t already have. And right now, the coronavirus is also screwing over a lot of people. It’s correct to temporarily suspend your principles, until the public health emergency is over. Contact tracing only works if it’s widespread. To make it widespread, you want the large reach of tech companies, because you need as many users as possible. (Similarly, you may hate Big Pharma, but Big Pharma is partnering with the CDC for COVID-19 test production, and at this time, they’re best equipped to produce the massive numbers of tests needed to detect COVID-19.)
NOVID is an existing contact tracing app, with similar privacy goals. It got a lot of traction in the math contest community, because it’s led by Po-Shen Loh. I thought NOVID was a great effort that got the ball rolling, but I never expected it to have any shot of reaching outside the math contest community. Its footprint is too small. Meanwhile, everyone knows who Apple and Google are. It’s much more likely they’ll get the adoption needed to make contact tracing effective. Both companies also have medical technology divisions, meaning they should have the knowledge to satisfy health regulations, and the connections to train public health authorities on how to use the app. These are all fixed costs, and the central lesson of fixed costs is that they’re easier to absorb when you have a large war chest.
Basically, if you want contact tracing to exist, but don’t want Apple or Google making it, then who do you want? The network effects and political leverage they have makes them most able to rapidly spread contact tracing. I’m not very optimistic about a decentralized solution, because (spoiler for next section) that opens you up to other issues. For a centralized solution, the only larger actor is the government, and if you don’t trust Apple or Google, you shouldn’t trust the government either.
Frankly, if you were worried about privacy, both companies have plenty of easier avenues to get personal information, and based on the Snowden leaks, the US government knows this. I do think there’s some risk that governments will pressure Apple and Google to compromise the security for surveillance reasons, but I believe big tech companies have enough sway to avoid compromising to governmental pressure, and will choose to do so if pushed.
What If Other Actors Do Something Sketchy on Top of Apple and Google’s Platform?
These are the most serious criticisms. I’ll defer to Moxie Marlinspike’s first reaction, because he created Signal, and has way more experience on how to break things.
These contact tracing apps all use Bluetooth, to enable nearby communication. A bunch of people who wouldn’t normally use Bluetooth are going to have it on. This opens them up to Bluetooth-based invasions of privacy. For example, a tracking company can place Bluetooth beacons in a hotspot of human activity. Each beacon registers the devices of people who walk past it. One beacon by itself doesn’t give much, but if you place enough of these beacons and aggregate their pings, you can start triangulating movements. In fact, if you do a search for “Bluetooth beacon”, one of the first results is a page from a marketing company explaining why advertisers should use Bluetooth beacons to run location-based ad campaigns. Meaning, these campaigns are happening already, and now they’ll be able to track a bunch more people.
Furthermore, it’s a pretty small leap to assume that advertisers will also install the contact tracing app on their devices. They’ll place those devices the same way they place existing Bluetooth beacons, and bam, now they also know the rough frequency of COVID-19 contacts in a given area.
My feeling is that like before, these attacks could be executed on any contact tracing app. For contact tracing to be widespread, it needs to be silent, automatic, and work on existing hardware. That pretty much leaves Bluetooth, as far as I know, which makes these coordinated Bluetooth attacks possible. And once again, compared to stopping the start of another exponential growth in loss of life, I think this is acceptable.
Moxie notes that he expects location data to be incorporated at some point. If the app works as described, each day, the device needs to download the entire database, whose size depends on the number of new cases that day. At large scale, this could become 100s of MBs of downloads each day. To decrease download size, you’d want each phone to only download messages uploaded from a limited set of devices that includes all nearby devices…which is basically just location data, with extra steps.
I disagree that you’d need to do that. People already download MBs worth of data for YouTube, Netflix, and updates for their existing apps. If each device only downloads data when it’s on Wi-Fi and plugged in, then it should be okay. I’d also think that people would be highly motivated to start those downloads - without them, they don’t learn if they were close to anyone with COVID-19!
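For a rough sense of scale, here's a back-of-envelope sketch under assumptions I'm inventing for illustration (16-byte identifiers, a new identifier every 10 minutes, 30,000 new cases per day); the real numbers depend on the final spec, especially on whether phones download every rotated identifier or just compact daily keys they can expand locally.

```python
BYTES_PER_IDENTIFIER = 16            # assumption
ROTATIONS_PER_DAY = 24 * 60 // 10    # assumption: a new identifier every 10 minutes
WINDOW_DAYS = 14                     # retention window from the announcement
NEW_CASES_PER_DAY = 30_000           # made-up figure for illustration

cases_in_window = NEW_CASES_PER_DAY * WINDOW_DAYS

# If every rotated identifier from every case had to be published individually:
naive_bytes = cases_in_window * WINDOW_DAYS * ROTATIONS_PER_DAY * BYTES_PER_IDENTIFIER
print(f"naive: {naive_bytes / 1e9:.1f} GB per 14-day window")   # ~13.5 GB

# If each case instead publishes one 16-byte key per day, and phones re-derive
# the rotated identifiers locally:
keyed_bytes = cases_in_window * WINDOW_DAYS * BYTES_PER_IDENTIFIER
print(f"keyed: {keyed_bytes / 1e6:.0f} MB per 14-day window")   # ~94 MB
```

Under these made-up numbers, even the pessimistic case is roughly a large app update spread over two weeks, which is why I don't think download size forces a switch to location-based filtering.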
If users upload massive amounts of keys, they could trigger DDOS attacks by forcing gigabytes of downloads to all users. If users declare they are COVID-19 positive when they aren’t, they could spread fake information through the contact tracing app. However, both of these should be unlikely, because uploads will require the sign-off of a doctor or public health authority.
This is why I’m not so optimistic about a decentralized alternative. To prevent abuse, you want a central authority. The natural choice for a central authority is the healthcare system. You’ll need hospitals to understand your contact tracing app, and that’s easiest if there’s a single app, rather than several…and now we’re back where we started.
Summary
Here are the takeaways.
Contact tracing is a key part to bringing things back to normal as fast as is safe, which is important for restarting the economy.
Of the existing contact tracing solutions, the collaboration between Apple and Google is currently the one I expect to get the largest adoption. They have the leverage, they have the network effects, and they have the brand name recognition to make it work.
For that solution, I expect that, while there will be some privacy loss, it’ll be close to the minimum amount of privacy loss required to make widespread contact tracing work - and that privacy loss is small, compared to what it prevents. And so far, they seem to be operating in good faith, with a public specification of what they plan to implement, which closely matches the academic consensus.
If contact tracing doesn’t happen - if it doesn’t get enough adoption, if people are too scared to use it, or something else, then given the current US response, I could see the worst forecasts of the Imperial College London report coming true: cycles of lockdown on, lockdown off, until a vaccine is ready. Their models are pessimistic, compared to other models, but it could happen. And I will be really, really, really mad if it does happen, because it will have been entirely preventable.
I originally posted this essay on Facebook, and got a lot of good feedback. Thanks to the six people who commented on points I missed.