This is the text version of "Build a Better Monster: Morality, Machine Learning, and Mass Surveillance", a talk I gave on April 18, 2017, at the Emerging Technologies for the Enterprise conference in Philadelphia.
I came to the United States as a six-year-old kid from Eastern Europe. One of my earliest memories of that time was the Safeway supermarket, an astonishing display of American abundance.
It was hard to understand how there could be so much wealth in the world.
There was an entire aisle devoted to breakfast cereals, a food that didn't exist in Poland. It was like walking through a canyon where the walls were cartoon characters telling me to eat sugar.
Every time we went to the supermarket, my mom would give me a quarter to play Pac Man. As a good socialist kid, I thought the goal of the game was to help Pac Man, who was stranded in a maze and needed to find his friends, who were looking for him.
My games didn't last very long.
The correct way to play Pac Man, of course, is to consume as much as possible while running from the ghosts that relentlessly pursue you. This was a valuable early lesson in what it means to be an American.
It also taught me that technology and ethics aren't so easy to separate, and that if you want to know how a system works, it helps to follow the money.
Today the technology that ran that arcade game permeates every aspect of our lives. We’re here at an emerging technology conference to celebrate it, and find out what exciting things will come next. But like the tail follows the dog, ethical concerns about how technology affects who we are as human beings, and how we live together in society, follow us into this golden future. No matter how fast we run, we can’t shake them.
This year especially there’s an uncomfortable feeling in the tech industry that we did something wrong, that in following our credo of “move fast and break things”, some of what we knocked down were the load-bearing walls of our democracy.
Worried CEOs are roving the landscape, peering into the churches and diners of red America. Steve Case, the AOL founder, roams the land trying to get people to found more startups. Mark Zuckerberg is traveling America having beautifully photographed conversations.
We’re all trying to understand why people can’t just get along. The emerging consensus in Silicon Valley is that polarization is a baffling phenomenon, but we can fight it with better fact-checking, with more empathy, and (at least in Facebook's case) with advanced algorithms to try and guide conversations between opposing camps in a more productive direction.
A question few are asking is whether the tools of mass surveillance and social control we spent the last decade building could have had anything to do with the debacle of the 2016 election, or whether destroying local journalism and making national journalism so dependent on our platforms was, in retrospect, a good idea.
We built the commercial internet by mastering techniques of persuasion and surveillance that we’ve extended to billions of people, including essentially the entire population of the Western democracies. But admitting that this tool of social control might be conducive to authoritarianism is not something we’re ready to face. After all, we're good people. We like freedom. How could we have built tools that subvert it?
As Upton Sinclair said, “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
I contend that there are structural reasons to worry about the role of the tech industry in American political life, and that we have only a brief window of time in which to fix this.
Surveillance Capitalism
The economic basis of the Internet is surveillance. Every interaction with a computing device leaves a data trail, and whole industries exist to consume this data. Unlike dystopian visions from the past, this surveillance is not just being conducted by governments or faceless corporations. Instead, it’s the work of a small number of sympathetic tech companies with likable founders, whose real dream is to build robots and Mars rockets and do cool things that make the world better. Surveillance just pays the bills.
It is a striking fact that mass surveillance has been driven almost entirely by private industry. While the Snowden revelations in 2013 made people anxious about government monitoring, that anxiety never seemed to carry over to the much more intrusive surveillance being conducted by the commercial Internet. Anyone who owns a smartphone carries a tracking device that knows (with great accuracy) where they’ve been and who they last spoke to and when, and that contains potentially decades-long archives of their private communications, a list of their closest contacts, their personal photos, and other very intimate information.
Internet providers collect (and can sell) your aggregated browsing data to anyone they want. A wave of connected devices for the home is competing to bring internet surveillance into the most private spaces. Enormous ingenuity goes into tracking people across multiple devices, and circumventing any attempts to hide from the tracking.
With the exception of China (which has its own ecology), the information these sites collect on users is stored permanently and with almost no legal controls by a small set of companies headquartered in the United States.
Two companies in particular dominate the world of online advertising and publishing, the economic engines of the surveillance economy.
Google, valued at $560 billion, is the world’s de facto email server, and occupies a dominant position in almost every area of online life. It’s unremarkable for a user to connect to the Internet on a Google phone using Google hardware, talking to Google servers via a Google browser, while blocking ads served over a Google ad network on sites that track visitors with Google analytics. This combination of search history, analytics and ad tracking gives the company unrivaled visibility into users’ browsing history. Through initiatives like AMP (Accelerated Mobile Pages), the company is attempting to extend its reach so that it becomes a proxy server for much of online publishing.
Facebook, valued at $400 billion, has close to two billion users and is aggressively seeking its next billion. It is the world’s largest photo storage service, and owns the world’s largest messaging service, WhatsApp. For many communities, Facebook is the tool of choice for political outreach and organizing, event planning, fundraising and communication. It is the primary source of news for a sizable fraction of Americans, and through its feed algorithm (which determines who sees what) has an unparalleled degree of editorial control over what that news looks like.
Together, these two companies control some 65% of the online ad market, which in 2015 was estimated at $60B. Of that, half went to Google and $8B to Facebook. Facebook, the smaller player, is more aggressive in the move to new ad and content formats, particularly video and virtual reality.
These companies exemplify the centralized, feudal Internet of 2017. While the protocols that comprise the Internet remain open and free, in practice a few large American companies dominate every aspect of online life. Google controls search and email, AWS controls cloud hosting, Apple and Google have a duopoly in mobile phone operating systems. Facebook is the one social network.
There is more competition and variety among telecommunications providers and gas stations than there is among the Internet giants.
Data Hunger
The one thing these companies share is an insatiable appetite for data. They want to know where their users are, what they’re viewing, where their eyes are on the page, who they’re with, what they’re discussing, their purchasing habits, major life events (like moving or pregnancy), and anything else they can discover.
There are two interlocking motives for this data hunger: to target online advertising, and to train machine learning algorithms.
Advertising
Everyone is familiar with online advertising. Ads are served indirectly, through real-time auctions that a maze of intermediaries conducts in the moment the page loads. This highly automated market is a magnet for fraud, so much of the complexity of modern ad technology consists of additional (and invasive) tracking.
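To make the mechanism concrete, here is a minimal sketch of how one such real-time auction might work: an exchange collects bids for a single impression, and the highest bidder wins, typically paying the second-highest price. The bidder names, prices, and data fields here are invented for illustration, not taken from any real ad exchange.

```python
# A toy second-price auction for a single ad impression. Everything here is
# illustrative: real exchanges involve far more intermediaries, fees, and
# data brokers than this sketch shows.
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional, Tuple

@dataclass
class Bidder:
    name: str
    bid: Callable[[Dict], float]   # inspects the bid request, names a price

def run_auction(bid_request: Dict, bidders: List[Bidder]) -> Tuple[Optional[str], float]:
    """Return (winning_bidder_name, clearing_price) for one ad impression."""
    bids = [(b.bid(bid_request), b.name) for b in bidders]
    bids = sorted((b for b in bids if b[0] > 0), reverse=True)
    if not bids:
        return None, 0.0            # nobody wanted this impression
    # Second-price rule: the highest bidder wins, but pays the runner-up's price.
    price = bids[1][0] if len(bids) > 1 else bids[0][0]
    return bids[0][1], price

# Example: two hypothetical ad networks bidding on a user the exchange has
# profiled as a "new parent", the kind of life event the surveillance economy prizes.
request = {"user_segment": "new_parent", "page": "parenting-blog.example"}
bidders = [
    Bidder("acme_ads", lambda req: 2.50 if req["user_segment"] == "new_parent" else 0.10),
    Bidder("bulk_ads", lambda req: 0.40),
]
print(run_auction(request, bidders))   # ('acme_ads', 0.4)
```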
Curiously, despite years of improvements in the technology and the growing amount of user data available to ad networks, online advertising isn’t targeted all that well. You can convince yourself of this by turning off your ad blocker for a week. In a recent example, JPMorgan Chase cut the number of websites its ads ran on from roughly 400,000 to 5,000 and saw no measurable difference in performance.
Many advertisers are simply not equipped to use the full panoply of surveillance options. More importantly, adversaries have become very good at gaming real-time ad marketplaces, which introduces noise into the system. An uncharitable but accurate description of online advertising in 2017 is “robots serving ads to robots”. A considerable fraction (only Google and Facebook have the numbers) of the money sloshing around goes to scammers.
In online advertising, surveillance giveth, and click fraud taketh away.
The relative ineffectiveness of targeted advertising creates pressure to collect more data. Ad networks are not just evaluated by their current ad revenue, but by expectations about what new ad formats will make possible in the future, in a dynamic I’ve called “investor storytime”. The more poorly current ads perform, the more room there is to tell convincing stories about future advertising technology, which of course will require new forms of surveillance.
This trick of constantly selling the next version of the ad economy works because new ad formats really do have better engagement. Advertising is like a disease: it takes people time to develop immunity and resistance. Even the first banner ad had a 70% click through rate.
While the drive for novelty is good for the ad networks, it hurts publishers and anyone else trying to earn a living from the actual ads. Consider Facebook Instant Articles, rolled out with great fanfare a few years ago and now a dead end for online publishers. Facebook promised that this much faster-loading format would drive engagement, but soon after launch turned its attention to video. Publishers retooling for video now will soon find Facebook distracted by augmented reality, or whatever the next alluring technology might be.
The real profits from online advertising go to the companies running the casino—Facebook and Google.
Machine Learning
The other factor driving data collection is machine learning, a set of mathematical tools that can rival human understanding in a wide set of domains. To work properly, machine learning requires enormous quantities of training data. Companies use machine learning to offer desirable product features like recommendation engines, image classifiers, and machine translation. They also use machine learning techniques to target their advertising.
Sometimes the machine learning algorithms work well enough that they can be packaged up into standalone products. Consider Google Home and Amazon Echo, the always-on home microphones, or the Nest thermostat, with its built-in motion sensor that can tell when people are nearby. These devices can only exist because of sophisticated machine learning algorithms. But once installed, they also generate a data stream of their own, closing the feedback loop.
Everything moves in the direction of greater surveillance.
In the past, we assumed that when machines reached near-human performance in tasks like image recognition, it would be thanks to fundamental breakthroughs into the nature of cognition. We would be able to lift the lid on the human mind and see all the little gears turning.
What’s happened instead is odd. We found a way to get terrific results by combining fairly simple math with enormous data sets. But this discovery did not advance our understanding. The mathematical techniques used in machine learning don’t have a complex, intelligible internal structure we can reason about. Like our brains, they are a wild, interconnected tangle.
Because machine learning tracks human performance so well in some domains (like machine translation or object recognition), there is a temptation to anthropomorphize it. We assume that the machine’s mistakes will be like human mistakes. But this is a dangerous fallacy.
As Zeynep Tufekci has argued, the algorithm is irreducibly alien, a creature of linear algebra. We can spot some of the ways it will make mistakes, because we’re attuned to them. But other kinds of mistakes we won’t notice, either because they are subtle, or because they don’t resemble human error patterns at all.
For example, you can fool an image classifier that gives human-equivalent (or better!) performance in object recognition by showing it an image that looks like pure static. Worse, you can take a picture of a school bus, and by superimposing the right kind of noise, convince an image classifier that it’s an ostrich, even though to human eyes it looks the same.
This is what I mean by alien failure modes. The mistakes classifiers make have no relationship to how human vision works, and because the image classifier is normally so good, it shocks us to see it fail in this way.
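For readers who want to see how little it takes, here is a minimal sketch of the “fast gradient sign method”, one standard way such adversarial noise is generated. It assumes some differentiable image classifier (written here with PyTorch); the model and image are stand-ins, not the actual school bus experiment.

```python
# A minimal sketch of the fast gradient sign method (FGSM): nudge every pixel
# a tiny amount in whatever direction most increases the classifier's error.
# The result looks unchanged to a person, but can flip the model's answer.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.007):
    """Return an adversarial copy of `image` for the given classifier."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)   # how wrong is the model now?
    loss.backward()                                    # gradient of the loss w.r.t. the pixels
    adversarial = image + epsilon * image.grad.sign()  # tiny step that maximizes the error
    return adversarial.clamp(0.0, 1.0).detach()        # keep pixel values in a valid range

# With a suitable pretrained classifier, model(adversarial).argmax() can come
# back "ostrich" for an image that still looks, to us, exactly like a school bus.
```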
These failure modes become important when we start using machine learning to manipulate human beings. The learning algorithms have no ethics or boundaries. There’s no slot in the algorithm that says “insert moral compass here”, or any way to tell them that certain inferences are forbidden because they would be wrong. In applying them to human beings, we leave ourselves open to unpleasant surprises.
The issue is not just intentional abuse (by trainers feeding skewed data into algorithms to affect the outcome), or unexamined bias that creeps into our training data, but the fundamental non-humanity of these algorithms.
The Political Machine
These, then, are the twin pillars of the online economy. We have an apparatus for harvesting tremendous quantities of data from people, and a set of effective but opaque learning algorithms we train on this data. The algorithms learn to show people the things they are most likely to ‘engage’ with—click, share, view, and react to. We make them very good at provoking these reactions from people. This is our sixty billion dollar industry.
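In code, the heart of such a system is almost embarrassingly simple. Here is a hedged sketch of the objective; the engagement predictor is a toy stand-in for the large learned models real platforms train on our data, but the goal it encodes is the same.

```python
# A minimal sketch of engagement-maximizing feed ranking. The prediction
# function is a placeholder for a large learned model; the objective is exactly this.
def rank_feed(posts, predict_engagement):
    """Order posts by the predicted chance the user will click, share, or react.

    Nothing in this function knows whether a post is true, kind, or extreme.
    It optimizes the only thing it can measure: the reaction.
    """
    return sorted(posts, key=predict_engagement, reverse=True)

# Example with a toy predictor shaped by past behavior: if users who click on
# outrage keep clicking on outrage, outrage floats to the top.
posts = ["local bake sale", "shocking conspiracy EXPOSED", "cute otter video"]
toy_predictor = {"local bake sale": 0.02,
                 "shocking conspiracy EXPOSED": 0.31,
                 "cute otter video": 0.12}.get
print(rank_feed(posts, toy_predictor))
```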
So what happens when these tools for maximizing clicks and engagement creep into the political sphere?
This is a delicate question! If you concede that they work just as well for politics as for commerce, you’re inviting government oversight. If you claim they don’t work well at all, you’re telling advertisers they’re wasting their money.
Facebook and Google have tied themselves into pretzels over this. The idea that these mechanisms of persuasion could be politically useful, and especially that they might be more useful to one side than the other, violates cherished beliefs about the “apolitical” tech industry.
Whatever bright line we imagine separating commerce from politics is not something the software that runs these sites can see. All the algorithms know is what they measure, which is the same in advertising as in politics: engagement, time on site, who shared what, who clicked what, and who is likely to come back for more.
The persuasion works, and it works the same way in politics as it does in commerce—by getting a rise out of people.
But political sales techniques that maximize “engagement” have troubling implications in a democracy.
One problem is that any system trying to maximize engagement will try to push users towards the fringes. You can prove this to yourself by opening YouTube in an incognito browser (so that you start with a blank slate), and clicking recommended links on any video with political content. When I tried this experiment last night, within five clicks I went from a news item about demonstrators clashing in Berkeley to a conspiracy site claiming Trump was planning WWIII with North Korea, and another exposing FEMA’s plans for genocide.
This pull to the fringes doesn’t happen if you click on a cute animal story. In that case, you just get more cute animals (an experiment I also recommend trying). But the algorithms have learned that users interested in politics respond more if they’re provoked more, so they provoke. Nobody programmed the behavior into the algorithm; it made a correct observation about human nature and acted on it.
Social dynamics on sites where people share links can compound this radicalizing force. The way to maximize engagement on Twitter, for example, is to say provocative things, or hoist an opponent’s tweets out of context in order to use them as a rhetorical bludgeon. Twitter rewards the captious.
On Facebook, social dynamics and the algorithms’ taste for drama reinforce each other. Facebook selects from stories that your friends have shared to find the links you’re most likely to click on. This is a potent mix, because what you read and post on Facebook is not just an expression of your interests, but part of a performative group identity.
So without explicitly coding for this behavior, we already have a dynamic where people are pulled to the extremes. Things get worse when third parties are allowed to use these algorithms to target a specific audience.
The feeds people are shown on these sites are highly personal. What you see in your feed is algorithmically tailored to your identity and your interaction history with the site. No one else gets the same view.
This has troubling implications for democracy, because it moves political communication that used to be public into a private space. Political speech that tries to fly below the radar has always existed, but in the past it was possible to catch it and call it out. When no two people see the same thing, it becomes difficult to trace orchestrated attempts to target people in political campaigns. These techniques of micro-targeted political advertising were used to great effect in both the Brexit vote and the US election.
This is an inversion in political life that we haven’t seen before. Conversations between people that used to be private, or semi-private, now take place on public forums where they are archived forever. Meanwhile, the kind of political messaging that used to take place in public view is now visible only to an audience of one.
Obviously, in this situation whoever controls the algorithms has great power. Decisions like what is promoted to the top of a news feed can swing elections. Small changes in UI can drive big changes in user behavior. There are no democratic checks or controls on this power, and the people who exercise it are trying to pretend it doesn’t exist.
Political and commercial advertising are interdependent financially in a way that further blurs boundaries. Politically engaged people spend more time online and click more ads. Alarmist and conspiracy-minded consumers also make good targets for certain kinds of advertising. Listen to talk radio or go to prepper websites and you will find pure hucksterism—supplements, gold coins, mutual funds—being pitched by the same people who deliver the apocalyptic theories.
Many of the sites peddling fake news during the election operated solely for profit, and field-tested articles on both sides of the political spectrum. This time around, they found the right to be more lucrative, so we got fake news targeted at Trump voters.
Conversely, many propaganda sites find that online advertising is a vital source of revenue, and depend on it to fund their operations.
A Toolkit For Authoritarians
Surveillance capitalism offers exceptionally subtle levers of social control. Apart from the obvious chilling effect on political expression when everything you say is permanently recorded, there is the chilling effect of your own peer group, and the lingering doubt that anything you say privately can ever truly stay private.
We have no way of safeguarding the large amounts of data we collect over the long term, so a real worry for anyone is that their private life will one day be made public. Throughout the election, private communications by low-level staffers were leaked and used as a political weapon. The message was very clear: stay out of politics.
Social media also proved useful at shifting attention away from journalists who were asking uncomfortable questions.
Orwell imagined a world in which the state could shamelessly rewrite the past. The Internet has taught us that people are happy to do this work themselves, provided they have their peer group with them, and a common enemy to unite against. They will happily construct alternative realities for themselves, and adjust them as necessary to fit the changing facts.
Finally, surveillance capitalism makes it harder to organize effective long-term dissent. In a setting where attention is convertible into money, social media will always reward drama, dissent, conflict, iconoclasm and strife. There will be no comparable rewards for cooperation, de-escalation, consensus-building, or compromise, qualities that are essential for the slow work of building a movement. People who should be looking past their differences will instead spend their time on purity tests and trying to outflank one another in a race to the fringes.
Can we fix it?
Institutions can be destroyed quickly; they take a long time to build.
A lot of what we call ‘disruption’ in the tech industry has just been killing flawed but established institutions, and mining them for parts. When we do this, we make a dangerous assumption about our ability to undo our own bad decisions, or the time span required to build institutions that match the needs of new realities.
Right now, a small caste of programmers is in charge of the surveillance economy, and has broad latitude to change it. But this situation will not last for long. The kinds of black-box machine learning that have been so successful in the age of mass surveillance are going to become commoditized and will no longer require skilled artisans to deploy.
Moreover, powerful people have noted and benefited from the special power of social media in the political arena. They will not sit by and let programmers dismantle useful tools for influence and social control. It doesn’t matter that the tech industry considers itself apolitical and rationalist. Powerful people did not get to be that way by voluntarily ceding power.
The window of time in which the tech industry can still act is brief: while tech workers retain relatively high influence in their companies, and before powerful political interests have put down roots in the tech industry.
I’ve divided the changes I think we need into two groups, based on how they affect existing business models. The short-term solutions mitigate some of the harm of the surveillance economy without requiring major reform. The long-term changes are equally necessary, but will threaten established business models.
We can compare this to defusing a bomb. The immediate task is to disconnect the wires and the ticking timer; the longer-term task is to remove the pile of explosives, or render them inert. Removing the timer is urgent and necessary, but if we leave the explosives in place, we have not addressed the problem.
Solutions
The key changes we can make in the short term (without requiring sites to relinquish their business models) are to teach social software to forget, to give it predictable security properties, and to sever the financial connection between online advertising and extremism.
Learning To Forget
Memory in online spaces works in an odd way. On a site like Facebook, everything is permanent. Once provided, data can never be truly deleted (unless you take the Draconian step of deleting your entire account). You can’t even remove your phone number from the site.

If you want an interesting surprise, go to your Google Maps history. Unless you’ve explicitly turned it off (or ‘paused’ it, in Google’s parlance), you’ll see your physical trail as reported by your mobile phone for the past several years (if you have an Android phone, or have Google Maps installed with the necessary permissions). Not only does Google keep this history indefinitely, but it’s available for the asking to anyone who gains access to your account.
This kind of eidetic memory is nothing like real life. Most of what we do and say is forgotten within a short time, and in the rare cases where we want a verbatim record, we have to make a special effort to create it. Things we do in one context are not likely to come back to haunt us years later in an unrelated setting. This forgetfulness gives us freedom, including the freedom to make mistakes, and lets us make decisions about what parts of our self we reveal where.
The online world forces individuals to make daily irrevocable decisions about their online footprint.
Consider the example of the Women’s March. The March was organized on Facebook, and 3-4 million people attended. The list of those who RSVP’d is now stored on Facebook servers and will be until the end of time, or until Facebook goes bankrupt, or gets hacked, or bought by a hedge fund, or some rogue sysadmin decides that list needs to be made public.
Any group that uses Facebook to organize comes up against this problem. But keeping this data around forever is not central to Facebook’s business model. The algorithms Facebook uses for targeting favor recency, and their output won’t change drastically if Facebook forgets what you were doing three months or three years ago.
We need the parts of these sites that are used heavily for organizing, like Google Groups or Facebook event pages, to become more ephemeral. There should be a user-configurable time horizon after which messages and membership lists in these places evaporate. These features are sometimes called ‘disappearing’, but there is nothing furtive about it. Rather, this is just getting our software to more faithfully reflect human life.
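To underline how modest a feature this is, here is a sketch of what such a time horizon could look like in practice. The storage interface and field names are hypothetical; the point is only that the logic fits in a dozen lines.

```python
# A minimal sketch of a user-configurable retention window for group data:
# a periodic job that deletes messages and RSVP lists older than each group's
# chosen horizon. The `store` interface and field names are invented here.
from datetime import datetime, timedelta, timezone

DEFAULT_RETENTION_DAYS = 90

def purge_expired_group_data(store, now=None):
    """Evaporate group content that has outlived its configured time horizon."""
    now = now or datetime.now(timezone.utc)
    for group in store.all_groups():
        days = group.retention_days or DEFAULT_RETENTION_DAYS
        horizon = now - timedelta(days=days)
        # Forgetting becomes the default; remembering forever would have to be
        # a deliberate choice, which is the reverse of how these sites work today.
        store.delete_messages(group_id=group.id, created_before=horizon)
        store.delete_rsvps(group_id=group.id, created_before=horizon)
```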
The obstacles to giving groups and events a fixed lifespan are organizational, not technical. Someone in management has to care.
In some cases, we don’t want the data to disappear entirely, but simply to be less accessible. Consider the case of mobile phones. People routinely lose phones or have them stolen, particularly when they travel. But right now, anyone carrying a smartphone also carries with them their entire email archive and social media history, including interactions from years ago with people that they don’t remember. All of these are available for inspection by anyone who can gain or compel access to the device. The only choice we’re given is binary—either carry your whole account history with you at all times, or delete your email or social networking accounts.
Here again we need digital reality to come into line with human expectations. You don’t carry all your valuables and private documents when you travel. Similarly, social sites should offer a trip mode where the view of your account is limited to recent contacts and messages.
Security
The second immediate step we can take is to make social media more secure. Security is important because it gives us predictability: if I know that my messages are end-to-end encrypted, I can say things I would not say in email. Given tools with reliable security properties, people can better partition their activity between private, public, and semi-public spaces and use them with confidence. Without proper security, they must always assume that even the most private conversation will be posted in public.

Much of the chilling effect in online discourse now comes from the fear that these privacy boundaries will be violated; that the rug will get pulled out from under them and statements made in one context held up for public view in another.
The good news is that we have the technology. The bad news is that the implementations are haphazard, and often user-hostile.
Let me be specific about what we need:
- Universal support for U2F security keys, including user education about how they work and industry pressure to make them the normal way to log into websites.
- Cryptographically secure end-to-end encryption on all forms of private messaging, including Twitter direct messages, iMessage, Google Hangouts, and Gmail.
- An Android phone that is as secure as an iPhone.
- Secure and usable group messaging. Right now, you can pick one or the other.
- One-click settings to enable the highest levels of security for users who need them.
- Better defaults and options for attachment handling.
- A trustworthy, privacy-preserving VPN.
- Some viable alternative for mailing lists that does not live inside Facebook.
Security is only achievable when the people in charge decide it matters.
Defunding
A third step we can take immediately is to disconnect blatant forms of online extremism from their money supply. Here the Google monopoly in online advertising works to our advantage.
In recent days, many companies have pulled ads for fear of having them appear on far-right sites. Done under the banner of “brand protection”, this is nonetheless a useful step. It’s also one area where we can usefully turn the tools of surveillance and machine learning to our own defense, by monitoring online advertising (along with traditional radio and TV ads) and contacting major brands whose ads appear on those sites.
Long-term solutions
Security, ephemerality, and defunding can help us stabilize the situation. But to have a recognizably human online world, we need to make structural changes.
Above all, people need to have control of their data, a way to carve out private and semi-private spaces, and a functional public arena for politics and civil discourse. They also need robust protection from manipulation by algorithms, well-intentioned or not. It’s not enough to have benevolent Stanford grads deciding how to reinvent society; there has to be accountability and oversight over those decisions.
Some of these changes can only come through regulation. Because companies will always find creative ways to collect data, the locus of regulation should be the data store. In the past, I’ve pushed for “Six Fixes” to the Internet. I’ll push for them again!
- The right to examine, download, and delete any data stored about you.
- A time horizon (weeks, not years) for how long companies are allowed to retain behavioral data (any data about yourself you didn’t explicitly provide).
- A prohibition on selling or transferring collections of behavioral data, whether outright, in an acquisition, or in bankruptcy.
- A ban on third-party advertising. Ad networks can still exist, but they can only serve ads targeted against page content, and they cannot retain information between ad requests.
- An off switch on Internet-connected devices, that physically cuts their access to the network. This switch should not prevent the device from functioning offline. You should be able to stop the malware on your refrigerator from posting racist rants on Twitter while still keeping your beer cold.
- A legal framework for offering certain privacy guarantees, with enforceable consequences. Think of this as a Creative Commons for privacy. If they can be sure data won’t be retained, users will be willing to experiment with many technologies that would pose too big a privacy risk in the current reality.
Break Up Facebook
As if dismantling the $60B online advertising industry were not a big enough task, we also need to break up Facebook.
Because of the potent way in which it combines social life with publishing, Facebook poses a unique challenge. At a minimum, we need to break up Facebook so that its social features are divorced from the news feed. Just like banks have a regulatory ‘Chinese wall’ between investment and brokerage, and newspapers have a wall between news and editorial, there must be a separation between social network features and news delivery.
Ideally, we can find a way to have decentralized social networks, just like we do in real life. Less ideally, a central database of people’s relationships continues to exist.
But social media cannot be the major publishing outlet. Facebook can remain a platform for connecting to friends, long-term photo storage, announcing life events big and small. But it cannot simultaneously be the platform for political organizing, political campaigns, and news delivery.
Code of ethics
We need a code of ethics for our industry, to guide our use of machine learning and define its acceptable use on human beings. Other professions have codes of ethics. Librarians are taught to protect patron privacy; doctors pledge to “first, do no harm”. Lawyers, for all the bad jokes about them, are officers of the court and hold themselves to high ethical standards.
Meanwhile, the closest we’ve come to a code of ethics is “move fast and break things”. And look how well that worked.
Young people coming into our industry should have a shared culture of what is and is not an acceptable use of computational tools. In particular, they should be taught that power can't be divorced from accountability.
Pay our share
Finally, we need to plow the huge profits of the tech industry back into the communities that sustain them. To be blunt, tech companies have to pay their taxes. A situation like we have today, where Apple keeps $240B sitting overseas, while California has the highest poverty rate in the United States, in part because the tech industry itself has driven up the cost of housing, is not tenable.
Every Apple device says “Designed in California”, but Apple does everything it can to avoid paying taxes in California. This is in no way unusual behavior. At the very least, it should be a source of opprobrium. Ideally, it should be illegal.
Paying our share means offering a living wage to all tech workers, not just a privileged caste of programmers. The warehouse pickers, drivers, cooks, cleaners, security guards, and other employees who keep the tech industry running have the right to a living wage. They should not have to work precariously as ‘independent contractors’, but have jobs with benefits and pensions.
Not only is treating these employees as full employees the right thing to do, but it’s another way to bring tech money back into our communities and cities.
How to get there
Companies may agree to the short-term reforms I outlined, but to make substantive structural reforms, we need leverage.
There are very few levers of power over the big tech companies. Because they are essentially monopolies, consumer boycotts don’t work. Opting out of a site like Google would mean opting out of much of online life. Some people could do it on principle, but it is not something we can mobilize a mass movement around.
Indirect pressure through their actual customers—the publishers and advertisers—can work for limited goals (for example, the current panic around “brand safety” that is helping defund sites like Breitbart). But if the goal is more fundamental reform, we’re stuck. We can’t apply pressure through a system we’re trying to abolish.
Shareholder pressure doesn’t work, because the large tech companies are structured to give founders absolute control no matter how many shares they own.
Regulation is tricky. The large tech companies have capable lobbyists and massive legal resources.
Press campaigns are unlikely to work because Facebook and Google control most online publishing. Moreover, what remains of the press has just endured a painful transition to online advertising, and is wholly dependent on that business model to survive.
The one effective lever we have against tech companies is employee pressure. Software engineers are difficult to hire, expensive to train, and take a long time to replace. Small teams in critical roles (like operations or security) have the power to shut down a tech company if they act in concert.
We’ve seen some small demonstrations of the power of employee pressure. The neveragain.tech pledge pushed companies that had been silent for months to publicly commit to not working on a Muslim registry. The employee walkout at Google in support of immigrants during the travel ban prompted the founders and CEOs to issue statements of support. Employee pressure at Uber forced Travis Kalanick off Trump’s advisory council, and one hopes similar pressure at Tesla will have the same effect on Elon Musk.
These victories were small, but real. They were achieved by individual employees organizing informally. What we have yet to see in the industry is true collective action, using the powerful tools of labor law. This is the one lever that could make change happen. Even an ops team threatening to work to rule, let alone go on strike, would create enormous pressure on a tech company.
Unfortunately, the enemy is complacency. Tech workers trust their founders, find labor organizing distasteful, and are happy to leave larger ethical questions to management. A workplace free of 'politics' is just one of the many perks the tech industry offers its pampered employees. So our one chance to enact meaningful change is slipping away.
Unless something happens to mobilize the tech workforce, or unless the advertising bubble finally bursts, we can expect the weird, topsy-turvy status quo of 2017 to solidify into the new reality.
But even though we're likely to fail, all we can do is try. Good intentions are not going to make these structural problems go away. Talking about them is not going to fix them.
We have to do something.
Special thanks to Zeynep Tufekci for contributing many of the ideas in this talk, and for helping me prepare it.