A special consultant to the Electronic Frontier Foundation (EFF), Cory Doctorow is also known for his compelling science fiction novels—including Homeland, Little Brother, and the recently released Walkaway. An activist and journalist, he co-edits the popular Boing Boing blog. Now based in London, Doctorow co-founded the UK Open Rights Group and is a formidable defender of Internet freedom.
Enter: We know we’re being manipulated by social media platforms, but not exactly how. How do we find out what Facebook or Google is doing to us when we use their platforms?
CD: Facebook’s position is that all the creepy, bad stuff that happens on Facebook is an inevitable consequence of all the stuff you love about Facebook: that you can’t have one without the other. There are a few ways we might test that statement, but many of them run up against legal blocks.
One way would be to create lots of different identities that are different in subtle ways. Like, say, two personas that appear to be the same, except one appears to be African American and one appears to be white. And then see: Do you get different financial products advertised to you? Are there other forms of more subtle discrimination you face? Now, doing this kind of test violates the terms of service of Facebook. Under Facebook’s radical interpretation of the Computer Fraud and Abuse Act, it’s a felony.
So, many of the ways that we have to find out what the platforms are doing, and hold them accountable for it--or counter them with new commercial and nonprofit offerings--all those things have gone away because of changes in the regulatory landscape.
I don’t know what a better Facebook would be, but I do know that the way we make a better Facebook is by enabling competitors to try stuff. For example, someone might then prove that Facebook has deliberately added the creepy stuff because it makes them richer, not because there was no other way to do it.
Enter: Are there ways we can detect when we’re being manipulated?
CD: Probably. But I think that anything that you did would stand a very good chance of putting you in conflict with laws like the Computer Fraud and Abuse Act or the Digital Millennium Copyright Act (DMCA). These ban you from tampering with the services, and even the products, that you buy. This causes some problems.
For example, in November 2017, an academic lab [ED: Yale University Privacy Lab] published a report on malware trackers in mobile apps. They looked at things like Tinder, and they found that not only was Tinder keeping track of you and what you did with the faces and biographies it showed you, it was also keeping track of a whole bunch of other things. And it was sharing that information with dozens and dozens of companies. And Tinder wasn’t the only app; they found 44 trackers in more than 300 apps.
They did this research on Android, because Apple’s platform has technical protection measures--which are positioned as anti-piracy measures--to control the kind of data manipulation you can do with the apps on your phone or your tablet. These controls are not present in Android. The researchers didn’t want to risk five-year prison sentences for a first offense (which is what the DMCA provides), so they stuck to Android. They could only infer that the same trackers were present in the iOS versions of those apps.
So again, I don’t know the best way to conduct research as to whether there’s bad stuff going on in a platform, but I do know that making it illegal to conduct that research means that no one will ever find out what the best way to research it is.
Enter: There’s been a lot of press about some of the major platforms like Google and Facebook complaining as loudly as anybody about this user manipulation. But are they doing anything to actually stop this self-reinforcing content spiral? Or are they pleased with, or even dependent on, the way things are structured now?
CD: I spend a lot of time lobbying these firms over things I favor or oppose. One thing I’ve learned is that their internal decision-making process is as complicated as that of any big, diverse company with a lot of conflicting goals.
I’ve often found--at Google, for example--that it’s not the business unit making the most money for the company that gets heard, it’s the one whose executives are literally the loudest people in the room--whoever has a very loud voice and isn’t afraid to shout in meetings. That’s their way. It’s dangerous to personify companies and what they want and don’t want. We can say, instead, that there are managers who have little empires that they’ve built, and want outcomes that align to their interests. Sometimes those managers have the ability to steer the company and sometimes they don’t. Outcomes are driven by those internal politics.
These companies, on the whole, would like to be able to do as much creepy stuff as makes them money--but not if it makes people so angry that it gets them in trouble. They don’t like it when people use their platforms in a way that tarnishes their reputation, especially if that invites a regulatory response. But if it brings in revenue, and doesn’t create that kind of backlash, they might even like it.
Enter: Most people are not going to set up an alternate identity to find out how they’re being played. Are there any easy ways to opt out of feeding the big data machine?
CD: I think there are! But I want to address the first part of your question. The question isn’t whether everyone can found Facebook. The question is whether the one person who might create a Facebook that isn’t creepy can actually go out and do it. The fact that it’s hard means that we need to make sure it’s legal, that it can take place out in the open--because things that are hard and illegal aren’t likely to get done. If you could write your own alternative Facebook crawler, you could probably have a good Facebook experience--and Facebook might never even catch you at it. If we could all do that individually, we could solve this problem tomorrow.
Enter: If you were to go into Facebook and change some things, what would you change?
CD: I am a Facebook vegan. I don’t use Instagram, I don’t use WhatsApp, and I don’t use Facebook. One of the most powerful ways to not feed the big data machine is to not have accounts on those platforms.
If I did have a Facebook presence, I would want to have a crawler that goes through Facebook and finds all the things that my friends have written and that they’re sharing, and the discussions that they’re having, and brings it all to my computer while impersonating me to Facebook. So as far as Facebook can see, I’ve just systematically gone through and grabbed every possible thing that it might have shown me. Then my computer decides what I should see and what I shouldn’t see, based on the feedback I’ve given to it--which it never, ever shares with Facebook.
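The local-filtering idea described above--grab everything the platform would show you, then rank it with a preference model that never leaves your machine--could be sketched like this. This is a hypothetical illustration, not any real tool; the post layout and the `rank_posts` function are invented for the example.

```python
# Hypothetical sketch of "fetch everything, filter locally":
# the client has already crawled every post it could see, and now
# ranks them using purely local feedback that is never shared.

def rank_posts(posts, liked_authors):
    """Order posts by local preference: liked authors first, then newest.

    posts: list of dicts with 'author', 'text', and 'timestamp' keys.
    liked_authors: set of authors the user has upvoted locally.
    """
    return sorted(
        posts,
        # False sorts before True, so liked authors come first;
        # -timestamp puts newer posts ahead within each group.
        key=lambda p: (p["author"] not in liked_authors, -p["timestamp"]),
    )

feed = [
    {"author": "alice", "text": "hello", "timestamp": 100},
    {"author": "bob", "text": "news", "timestamp": 200},
    {"author": "carol", "text": "update", "timestamp": 150},
]
ranked = rank_posts(feed, liked_authors={"carol"})
```

The point of the design is the data flow, not the ranking itself: the feedback signal (`liked_authors`) lives only on the user’s computer, so the platform sees an indiscriminate crawl rather than a profile of preferences.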
Sometimes, when I think about which platforms I’m going to use and how they exist in the world, I think about what their victory conditions are. For Apple to succeed, the whole wide Internet has to disappear inside its walled garden, but it doesn’t have to spy on you. For Google to succeed, it doesn’t matter if the web ever disappears inside a walled garden because it can spy on you wherever you are on the web. For Facebook to succeed, the whole web has to disappear inside its walled garden, and they have to spy on you all the time.
Enter: What about Instagram?
CD: I think it’s just a stalking harness for Facebook, appealing to a demographic that’s not interested in Facebook anymore.
Enter: Are there any tools, any apps, that can help people free their brains from their smart devices and their feeds?
CD: Yes. Let me give you three categories of things that you can do.
The first one is, you can upgrade your tools. There’s a project the Electronic Frontier Foundation has called the Surveillance Self-Defense Kit. You tell it what kind of person you are and what you’re worried about--like, “I’m a queer teenager who’s worried that my parents will find out and kick me out of the house.” We’ll tell you which tools to use, how to use them, and how to think about using them in order to minimize those risks. So that’s step one.
Step two is to think about how your devices are configured to try and grab your attention. Companies have started to optimize for this thing they call “engagement.” They really want to maximize the minutes you spend using their product. One good example: if you have an Android phone, by default they put the Google search bar on your home screen. And again, by default, when you tap on that search bar it shows you the most trending searches on the web. But when we search the web, we’re not trying to find out what other people are searching for. Let’s say I go to a restaurant, and there is an unfamiliar sauce on the menu. I want to know what’s in that sauce. But when I try to find out the answer to that question with my Android phone, it gives me seven top searches--and these days they’re all Trump Threatens Nuclear War, Your City is on Fire, Your Democracy is Crumbling, and You are About to Die. Right?
Engagement maximization exists solely to maximize your use of the product, without any regard for the pleasure you get from it. It makes the product more compelling, but less useful and enjoyable. Imagine a household cleaning product that was optimized to make you use it longer. It’s really remarkable when you think of it that way. When all you measure is engagement--instead of success at some task, or pleasure, or satisfaction--you get people building, effectively, the world’s heaviest airplane. How can we make this as long and tortuous as possible, right up to the moment you stop using it altogether? That is the line they want to walk. So go through your tools, find anything that suggests anything to you, and turn it off--even if the setting is buried 11 menus deep, because someone at Google gets paid based on how many minutes you spend using that bar.
I did that myself, and it made my life infinitely better. I would be running out the door with my daughter to walk her to school, and I would check to see whether she needed a jacket. But in order to look at the weather, I would also have to see if there was an imminent nuclear armageddon. And so our walk to school every morning went from a very contemplative, wonderful, intimate thing, where we talked about everything in the world, to me fretting about imminent nuclear war. So I had to go and find a third party weather app.
The third thing to recognize is that, although the Surveillance Self Defense Kit is a great set of tools, the situation that gave rise to these problems is societal. You can’t solve it individually. You can’t shop your way out of monopoly capitalism. You can’t recycle your way out of climate change. We need systemic policy responses to these problems.
There are organizations and institutions--some of them new and others very well established--that have deep expertise in these domains. I work for one of them, the Electronic Frontier Foundation.
So no matter what your priorities are, or where you come from politically, there is an organization or institution working to make the systemic changes that complement the individual good choices you’re making. Every year, I take 10% of all the money I earn and give it to charities, most of which work on these issues. They include the ACLU, the Free Software Foundation, Creative Commons, Public Knowledge… and others.
So that’s the third part. If you only do the first two parts, you are stuck in an arms race that you cannot win. But if you add in the third one, you make the first two a lot easier.
Enter: Speaking as a science-fiction writer, where are we going with all this?
CD: I think we live in a world with two technologies that are really difficult to get our heads around, and that are profoundly different from any technologies we’ve had before. These are the general purpose computer, and the general purpose network: the Internet. Prior to the maturing of these two technologies in recent years, if you wanted to compute something, you built one kind of device to do one kind of thing. We had cars and we had pacemakers and we had seismic dampers and we had airplanes. These days, because general purpose computers are so powerful, all of those things can be thought of as just fancy cases for a computer.
A Boeing 747 is a Solaris workstation in an aluminum case with jet engines. A pacemaker is a computer with a couple of wires attached to it. A skyscraper is like a giant computer that we put bankers in. This means that as time goes by, more and more things will simply become computers in funny boxes.
Meanwhile, the general purpose Internet is replacing the special purpose networks, where all we had were TV signals going on one channel, GPS signals going on another, and telephone conversations going on a third. Now we have one network, and it uses one protocol. On top of that we make little specialist application protocols that can do anything we can think of. Everything is being wired together with that network.
We treat the Internet variously as though it were like a Jihadi recruiting system or pornography distribution system or a video on demand system, instead of the nervous system of the 21st century. If we continue to screw this up, if we continue to treat this without the gravitas it is due, we will get into a really bad place indeed--because getting it wrong with computers that are in our bodies, or that we put our bodies into, puts us at risk of those computers failing catastrophically-- with us inside them, or them inside of us.
I don’t believe that I can predict the future. But I do think the future can be changed by what we do. I think that the future is contestable. If the future were predictable, then there would be no point in getting out of bed, because it would arrive no matter what we did. It’s much more hopeful, much more optimistic, to say that something we do will change the future. Hope only requires that you can see your way to doing one thing to improve your situation--and from there you might find another thing you can do to improve the situation further.
And I can think of things we can do to improve this situation. We could reform the Computer Fraud and Abuse Act or the DMCA. We could safeguard net neutrality. We could take a step up the hill towards the summit--and once we take that step, we might see another step, another action we can take.
I don’t think we can chart our course from A to Z: The first casualty of any battle is the plan of attack. But I do think that, from moment to moment, we can see which direction we should go next to get to the highest ground.