Every Slightly Atypical Decision: Talking to Michael Chertoff

    How might post-September-11th national-security strategies have anticipated contemporary commercially driven data-analytics? How might “homeland security” itself take on much more domestic connotations as digital devices collect, store, and share ever more of our personal information? When I want to ask such questions, I pose them to Michael Chertoff. This present conversation (transcribed by Christopher Raguz) focuses on Chertoff’s book Exploding Data: Reclaiming Our Cyber Security in the Digital Age. Chertoff was the second Secretary of Homeland Security, from 2005 to 2009. He has served as Assistant U.S. Attorney General, as a Judge on the United States Court of Appeals for the Third Circuit, and as United States Attorney for the District of New Jersey (where he was one of only two U.S. Attorneys who kept their positions when the Clinton administration took office in 1993). Chertoff, also the author of Homeland Security: Assessing the First Five Years, is Executive Chairman and co-founder of The Chertoff Group — a security consulting company.

    ¤

     ANDY FITCH: In Exploding Data’s account, just as the internet begins as a U.S. defense project (only later migrating to civilian use), big-data collection and analytics come into their own amid a post-September-11th context in which U.S. intelligence agencies shift their focus from, say, seeking to comprehend the internal deliberations of global heads of state, to tracking down the discreetly disguised operative or potential lone wolf amid a vast civilian population. And as U.S. Secretary of Homeland Security during the late 2000s, amid yet another mass migration of digital technologies from defense to consumer domains, your unique vantage leaves you “acutely aware of the power of data collection and analytics to benefit society,” as well as to “challenge America’s traditional norms and values in the areas of privacy and liberty.” So here could you first give a couple of detailed examples crystallizing how this “information-gathering revolution” accomplishes much more than just turning us all into internet users, harnessing and reshaping the global economy and an ordinary individual’s place, purpose, function within it — and how even the potential benefits this data-driven revolution brings to our everyday lives can carry serious (though for now sometimes subtle, obscure) challenges to our most basic cultural, political, legal values?

    MICHAEL CHERTOFF: Well one positive comes from how we can combine data with artificial intelligence, allowing the medical profession to come up with much more granular, individually tailored treatments. We’ll see amazing nuances in disease treatment, through complex pattern-recognition, that can be applied across a large population, and can just keep getting better. But for a more negative data-analytic stream, consider the social-credit system China has put in place, taking data from all kinds of everyday activities: from everything you buy, everything you search for online — eventually including every time you exercise or talk to a friend, and of course who your friends are. All of that goes into your social-credit score, and getting a high score (by doing what the government wants) will give you preferential treatment in a variety of ways, whereas getting deemed a less good citizen will handicap you. And that type of systematically watching, repressing, and controlling society seems scarier than anything George Orwell could have conjured up.

    Well with China’s social-credit system still in mind, I wonder if we could make this even more local, perhaps by looking at your introduction’s four “hypothetical but realistic scenarios.” Here could we start with scenario #2, to illustrate the classic internet adage that if the service is “free,” you are probably the product? Here we first see two parents buy their daughter a Talkie Terry doll, a “smart” yet surprisingly affordable device that (once these parents quickly click on a consent form) gets to “know” this young girl, apparently to provide nurturing companionship throughout her early years — and which simultaneously provides continuous data streams to the doll’s manufacturer and to countless unspecified third parties, with each assembling an ever more comprehensive profile not just of this young girl but of her whole family, whose every motion, every spoken word, every biological data-point eventually can get recorded, distributed, indefinitely stored in distant tech centers? And here could we even extend your book’s hypothetical scenario by picturing this girl, now as, say, a 13-year-old, stepping past the kitchen pantry at a particularly suggestible after-school hour for her, just as sensors catch her blood-sugar dropping, just as some TV-like device delivers a personalized marketing pitch from her most trusted childhood stuffed-animal character on why the two of them should share some corporate-branded snack food right now, maybe just as this snack’s scent wafts from a slightly heated pantry shelf?

    That’s pretty much how I see this idea going forward. That example shows how all of this information gets integrated. You walk past a retail establishment. Your locational data suggests you’re shopping on your lunch hour. You get a pop-up on your phone, specific to the bathing-suit bargains you had searched for online last week. Some people will say: “Wow. Great. How convenient.” But if you think about it, somebody (multiple firms, actually) has integrated a lot of your personal data to determine what will work best to influence you to buy certain products.

    And of course that personalized type of marketing can lead to more nefarious purposes. With Cambridge Analytica in the 2016 election, for example, we basically had political consultants using your personal data as a way to target you. Maybe they sent you short posts exaggerating criminal trends among immigrant groups. And soon, when you drive through a largely immigrant neighborhood, maybe your phone will get pinged with a text message saying “Look at all those immigrants over there, just hanging out doing nothing.” That would be at a minimum very creepy, and could actually spur some real misbehavior.

    So maybe readers already feel vigilant against an authoritarian China or a scheming Cambridge Analytica. But even for more seemingly benign cases like that snack scene I mentioned, your book also hints at the potentially overwhelming consequences across the next two decades of this young woman’s life, as her personal-worthiness score (quantified to many decimal places, measured in minute competition against so many others) will get assessed by health-insurance underwriters, college admissions boards, potential employers, dating databases — all based on snap decisions she has to make in these micro-targeted moments. And here of course I might have taken your own Talkie Terry scenario too far, but could you sketch how the “relentless” corporate innovation it showcases could take us even further: to your hypothetical scenario #4, set portentously in 2084, with a young man James’s basic capacity for human imagination eclipsed by continual data-mining happening all around him, all for him, but also on him, to him (all culminating in your line: “James wonders about just what is, no longer about what might be”)?

    You already see some health insurers saying that, to get their best rates, you’ll need to give them your Fitbit data. And they can pitch this as: “If you do these recommended things, that will help to lower your blood pressure.” So it sounds like they just want to help you. But of course the flip side is that if you skip your exercise routine today, or if you buy extra dessert at the store this week, that might lower your health score and lead to increased premiums. Again, some people will say: “Great. We should promote good behavior.” But I see two problems. First, these algorithms of course can’t fully explain why you bought that dessert. Maybe you brought it to a friend’s birthday party. And so, for a second problem, do you really want to carry around a device monitoring your every action? Do you really want to have to explain away every slightly atypical decision all day, and to have to ask permission? To me, that sounds an awful lot like an authoritarian regime on steroids.

    So for that 2084 scene in the book: first of all, you see self-consciousness taken to an entirely new level. Everything you do, even just where you choose to look, gets recorded and evaluated. And to be honest, I couldn’t write this book fast enough, because all of these new developments just keep coming. Imagine, for example, sitting in a classroom that allows the professor to monitor you using artificial intelligence. This system knows your body posture when you pay close attention, or when you get bored or distracted. And imagine that if you look bored or distracted, that will end up lowering your grade for the semester. At that point, you literally have lost your individual autonomy. You just focus on keeping the correct posture and apparent level of attentiveness. Imagine how much that actually constrains your ability to learn, and how much of your mental bandwidth gets absorbed by performing for this ubiquitous eye-in-the-sky surveillance.

    I’ll want to keep drawing out those near-future consequences. But, for right now, and again in terms of everyday consumers’ perhaps unwitting contribution to such dystopian scenarios: should we all approach these early-21st-century digital engagements (premised on our sharing of personal data) with basically the same degree of skepticism with which we should have approached the claims made by some late-20th-century advertisement?

    That’s exactly right. I actually often use the example of when television first came on the scene, and brought all of these new commercials. Nobody knew if audiences would just go out and buy everything they saw on TV. You know, parents would tell kids: “Oh, those commercials: don’t believe them. They’re all kind of phony. Go get a glass of water or something during the commercial.” And eventually people did learn to kind of tune out.

    But today when you do engage digitally, you should try to engage mindfully. You should ask yourself: “Do I really want this item? Do I really need this service? How much? What personal information do I have to give up to get it?” And then further questions like: “Could this smart refrigerator have an insecure connection? Could it become the entry portal for someone to hack into my home network and do real damage? And even if this connection does stay secure, do I want it recording and sharing data about everything I put into this refrigerator, or how long it takes me to finish a bottle of soda, or how much of which particular foods I eat most frequently? And do I know all of the various interested parties who will receive that data, and what they want it for?” And so finally you might get back to: “Why do I actually need this smart refrigerator? Can’t I just open the door, and see if I need milk or bread or something, and go buy it? When would I be better off just exerting a little personal effort, instead of trading in all my data?”

    And even as we sift through some of these concrete personal choices, I do still feel the need to address the complex psychological or emotional implications of the scenarios your book uses to get us there. For example, your 2084 scenario of omnipresent / internalized surveillance suggests some futuristic mode of Cultural Darwinism, in which those deemed “virtuous” receive constant micro-advantages over those who don’t comply, those who don’t conform (often simply those who don’t consent to giving up their data). So again Exploding Data gives ample reason to feel paranoid, defensive, resistant to such broader trends. But we typically won’t feel this single-mindedly vigilant in the real-life scenarios that could take us to 2084, right? Exploding Data points out that today Big Brother rarely needs to kick our door down. He gets invited in. And Big Nanny’s persistent nagging at least seems to have our own best interests in mind.

    Yeah that’s right. Again a lot of people will see this all as a positive development, in terms of promoting your health and well-being — just like a mother looks out for her child. Even your boss at work might claim just to be looking out for your well-being. But here we get to the more philosophical question of when do we, as human beings, want to have the freedom and the ease (obviously within reason, within the bounds of the law) to try new options, to sometimes make a bad judgment, to choose some suboptimal thing because we actually prefer it? When can we still give ourselves that extra helping of dessert because we want to celebrate a big day at work, or because we’re down in the dumps and want to cheer ourselves up? Of course you even could argue that, over time, this ability to disrupt and to change things up a bit actually leads to innovation — whereas a world in which everybody always does exactly what some wise nanny wants ultimately stifles innovation. Though even before that, I just think of this human element of not wanting to live like an automaton going through the paces somebody else has set.

    And when you mention a mother’s attitude towards her children, I can’t help assuming that data-mining interfaces increasingly will feel less like Big Nanny and more like Big Mommy — picking up on our most intimate internalized representations of love and devotion, bestowing upon us an affirmation of our distinct individual personalities with a degree of familiarity that even our own mothers eventually won’t be able to match. And here could we start pivoting to privacy — here by tracing how definitions of “privacy” themselves have kept evolving across the historical epochs you describe as Data 1.0 (with privacy conceived in relation to physical property, to one’s physical person, to the physical spaces in which one’s body resides), Data 2.0 (in which protections of one’s reproducible physical likeness and of one’s intimate personal details gradually consolidate to likewise protect conversational intimacies — and to prompt legal parsings of “reasonable expectations” while communicating through, say, furtive whispers or through telephonic transmissions sent across public-utility lines), and Data 3.0 (in which the “default setting” for our digital lives has been set to pseudo-consensual public exposure, and in which we don’t just invite Big Brother into our own homes, but actively play the part of Little Brother / Sister ourselves by constantly surveilling others)? Even if our present-day digital identities weren’t so prone to criminal and / or governmental hacking, why would we still have to reckon with Exploding Data’s stark assessment that “If privacy means the ability to hide or shield our actions and thoughts from prying eyes, that privacy ship has sailed”? And which alternate privacy vessels should we now consider boarding instead?

    Well, first of all, it’s very difficult to stay hyper-vigilant about protecting your data. Increasingly, just to participate, digital technologies require you to consent to providing your data. Of course almost nobody really reads or understands these consent forms. But you don’t really have a choice. So your data gets stored and shared and widely connected across the internet. And no matter how scrupulous you are, other people can hack and access your data, or upload it to the cloud. And again what most people don’t realize right now is how all of this data gets aggregated. No one single firm has to collect all of your data. One firm might provide a photo of you, and someone else might share valuable information saved on your personal computer. This data then will get stored indefinitely in these giant data centers, where someone with the right authority can search your information across all of these various sets of unencrypted data. So for me, that all means the privacy ship has sailed — because we just don’t have control right now over which personal details get publicly shared.

    “Privacy” used to basically mean property rights. Then later technology allowed someone to track you without invading your personal property or personal space, either through photographs or telephone-wiretap interceptions. So we shifted away from worrying about protecting property as such to protecting an expectation of privacy, which led to a kind of circular reasoning — since in an authoritarian society, for example, you can’t expect much privacy in the first place, so then you supposedly don’t have the right to that privacy. And now I’m saying that, given how little control you have over the vast amounts of personal data generated about you, confidentiality itself is no longer sufficient. We need to focus on what you yourself can do (and decide to have done) with this personal data. And that’s ultimately about your autonomy, your freedom.

    Exploding Data in fact offers compelling accounts of autonomy as this basic “core of freedom,” as “the ability to make our own personal choices, restricted only by transparent laws and…social norms affecting our reputations within our communities.” A crucial passage in your wide-ranging introduction likewise articulates the imperative: “To preserve space for lawful personal choice.” And here I’d also love to unpack further how your notion of proactive personal autonomy depends in part on the simultaneous preservation and / or creation of something like communal free space. Of course your consistent correlation of personal autonomy and personal-data control suggests a libertarian predilection towards fending off governmental or corporate overreach. But how do you see “transparent laws…and social norms” playing their own important ecosystemic roles in establishing a robust public common without which even the most self-motivated, self-protective autonomy cannot enact itself?

    To start with, think of having a day off and choosing how you want to spend it. You know, you might want to get some exercise, or go out for a nice meal, or take a nap, or read a book, or work in the garden, or catch up on chores. And normally you might have to account to your spouse on an average day off, but you don’t need to factor in how these activities might impact your employment prospects or insurance premiums — unless you break the law or do something really stupid. Within those basic parameters, you can decide however you want to spend the day. But if the whole world gets to monitor (or at least to share data on) your physical activity, on what you eat, what you read, how restful your relaxation period is, then inevitably you’ll start to make these decisions based on how they make you look and affect your life. Do they make your score go up or down? To me, that type of constant calculation would confirm your loss of any freedom to choose. Even with the most innocuous activities, somebody might observe you and say: “Well he was lazy. He must just not be an industrious person. I’d prefer to hire someone always making good use of his time.” So I think you end up stuck in this world of serious constraints and basically coercions.

    And again we here have to think through both the question about whether someone should have the capability to collect this data on you in the first place, and then whether they can store, share, and / or sell this data. I mean, should the people at a restaurant’s neighboring table have the right to turn on their phone’s recording function and tape your conversation and upload it and store it and share it? How might that constant vetting limit your own ability to enjoy a social occasion in this restaurant? And no one single intrusion of this sort will feel like the end of the world, but eventually you might get in the habit of just assuming that any kind of communal activity in any kind of public space opens you up to intense examination. Eventually you might decide it’s never worth it to just be yourself. You might have to stay constantly on guard, constantly paranoid.

    So again, this may be my political perspective and not yours, but it sounds like we have to wall off much more of our lives from corporate intrusion than we ever could have conceived of in the past.

    I don’t think there’s any doubt. We can’t just think about this as insulating ourselves from government overreach. And for a long time we didn’t realize the many ways and the extent to which companies collect and use much more data than the government (in its wildest dreams, at least in a democratic society) ever would imagine collecting. And because companies have pitched this as helping you organize your life, many people have just ignored the social consequences over time. But we need to keep in mind that even data collected with the most benign intentions, once uploaded, never gets expunged. And without proper restraints, this data can at any time get transferred and used for all sorts of purposes, including manipulative and even coercive campaigns. So with Cambridge Analytica for example, essentially people most resented the idea of this shadowy firm pushing your political buttons — based on accumulating and analyzing everything you’ve watched on TV, everything you’ve eaten, every sports team you’ve followed. That feels to many Americans like a direct invasion of personal space and personal choice.

    And even as we talk through these real-life conditions in which personal autonomy can’t flourish, I also wonder about all of the crucial qualitative (psychological, cultural, political) questions that we might set aside, overlook, or discard if data-analytics don’t prove especially competent at answering them. So again, amid your emphases on autonomy, what newly proactive capacities do you see us needing to develop, both as private citizens and as a society, for qualitative valuation, for an imaginative / affirmative personal and social vision that can help to balance out our newfound data-crunching propensities?

    We definitely do need to watch out for confusing data with knowledge, and knowledge with wisdom. We definitely will face the increasing tendency, especially with numerical data, to make what feel like simple straightforward comparisons, but which leave out a lot of desired qualities we can’t easily reduce to a data-point. So even when objective testing can help us to clarify a situation, we shouldn’t confuse that with cultivating a more personal sense of what we most value, and of what’s most important to measure.

    So in terms of that type of qualitative parsing, could you outline this book’s account of appropriate governmental engagements with exploding data — ensuring potential access to and perhaps collection of numerous data streams, while more explicitly restricting possibilities for the use, retention, and dissemination of this data?

    Well I really aim that part of this book at public conversations about government surveillance. We’ve had people worried about that. We’ve had Edward Snowden and everything, and I wanted to make the point that you can’t just always support or always oppose government surveillance. Government data-collection plays a very important role in protecting us. Of course it can also be abused. And again, in our contemporary world, the threats posed by this kind of surveillance don’t always show up immediately. And sometimes it’s only by connecting the dots among multiple data-points that this data becomes useful, or exposes us in uncomfortable ways. So I wouldn’t want to stop the government from having the capability to access data (assuming appropriate, lawful permission), or even from collecting and holding onto this data — with a clear understanding that it will not get searched or shared or used in any way unless further requirements for further investigation get met. If you don’t collect the data in the first place, then by the time you realize “Oh, we need to look at this,” you’ve probably lost your chance. Whereas if you can store the data, under clear constraints and conditions, you can go back and look. Of course we can and should impose higher legal standards for those types of investigations. That’s how I try to balance what I see as competing imperatives between security and privacy.

    Could you offer a comparable set of criteria for assessing corporate data-collection, with an even sharper distinction in how corporate data-analytics should parse and transparently announce the more open-ended facilitation of an app’s intrinsic functions, and the much more restricted pursuits of this app’s extrinsic functions?

    So I would say that if you know you’ve signed up for personal-data collection, because an app’s basic functionality requires it, I don’t think anybody can object to that. So if I sign in to Google Maps, and they want my location, of course I’ll say yes, because I want to make this map more personally useful. But Google then should have to ask me “Can we share your location with retail establishments, so that they can direct their marketing to you?” since that wasn’t really why I allowed for locational data-sharing on Google Maps, and I don’t feel like being somebody’s marketing guinea pig. So I might say no to that. And throughout this decision-making process, any data-collecting app should be required to stay transparent about the uses to which they want to put my data, and required to receive affirmative permission before they can use my data for non-intrinsic functions.

    Still in terms of sifting through possibilities for data access, collection, storage, and usage, but also returning now to your own lived experience as Homeland Security Secretary, let’s say we suffer (as, presumably, we inevitably will suffer) one or several attacks of greater physical and psychological impact than September 11th. How might Big Brother intrude and / or get invited into our increasingly digitized lives then? What preparatory legal, intellectual, civic steps should we be taking now to anticipate the threat that our most basic liberal-democratic values will face then — perhaps less from some band of rogue actors, than from our own defensive impulses, and then our own digitally instrumentalized / institutionalized responses to these newly felt vulnerabilities?

    First, we have learned over time that we absolutely do need to develop a variety of contingency plans before an emergency happens. And then, in terms of your specific question, maybe we should agree in advance on some limited period of time for a greater scope of government surveillance, at least for certain data-points. Of course we’ll need to tweak any advance agreement in order to meet whatever conditions we face, but it helps to set these kinds of limits ahead of time, before people just start reacting to a crisis moment. I wrote this book in part to help give us some time to think through these tough questions before something really bad happens, not while it is happening.

    And more generally Exploding Data no doubt argues for substantial reflection by our liberal-democratic state, in order to recalibrate the relative weight of potentially competing constitutional principles of liberty, security, freedom of expression and association. But will most of the remaining questions to sift through on exploding data get left to a handful of tech entrepreneurs, expert legal / regulatory institutions, and crafty pirate interveners? When I mentioned “civic” obligations earlier, I guess I also had in mind how the inner workings of democracy themselves keep getting transformed (potentially undermined) by external technological developments. So what decisive civic roles can the ordinary individuals who make up our democratic populace still play in these broader thought / life / social experiments with big data?

    I consider this a really serious question at the heart of so much that we see right now. Our tech companies have finally woken up to the fact of all this Russian interference. I mean, some tech companies woke up sooner than others, but now they’ve all woken up to this more affirmative responsibility that they have to manage the uses and address the abuses taking place online. They’ve finally started getting together with NGOs and civil society to talk about this.

    And governments also have responded unevenly. In Europe I think we’ve maybe seen some overreactions, in terms of specific requirements for addressing the use and abuse of information operations. And in the U.S., to be honest, Congress has lagged behind a bit on all of this, and has shown uneven levels of interest and attention. You’ll see some very interested and knowledgeable legislators, and then other legislators not particularly literate in digital technology, who don’t pay much attention to these urgent questions. And similarly, you might see DHS get involved in thinking through how to counteract problematic aspects of information operations, but other agencies might remain (for now at least) much less engaged. So I do think we need a multi-stakeholder approach, requiring various types of responses from government, from civil society, and from the business sector (which operates in a much broader global environment).

    And again, alongside all those groups you just mentioned, all those various expert classes, and since you yourself acknowledged not wanting to feel like a guinea pig: what about an everyday citizen / consumer who feels like a guinea pig right now? What else can that person do?

    Well I think you can take steps within reasonable limits to protect yourself and make intelligent choices about how you engage with the internet. You won’t eliminate every concern, because most data does get collected through some process of observation. But you still can mitigate the damage. You can keep getting ever more mindful about your cyber security, and about which types and amounts of information you share, and what you’ve actually signed up for. That won’t solve all of these problems but will help protect your autonomy, and your ability to shape your future (and all of ours).