• Their Understanding of Who We Are: Talking to Dipayan Ghosh

    Which components of Facebook’s business model get obscured when its CEO tells a Congressional panel: “Senator, we run ads”? Which less visible layers of technical, commercial, and legal infrastructure undergird today’s consumer internet (and now overlay our own everyday experience)? When I want to ask such questions, I pose them to Dipayan Ghosh. This present conversation focuses on Ghosh’s book Terms of Disservice: How Silicon Valley Is Destructive by Design. Ghosh co-directs the Digital Platforms & Democracy Project at the Harvard Kennedy School, where he researches digital privacy and internet economics. He previously worked on public-policy topics at Facebook (leading strategic efforts to address privacy and security concerns), and served as a technology and economic policy advisor in the Obama White House.

    ¤

    ANDY FITCH: Let’s say some CEO at a Congressional hearing describes the typical Silicon Valley business model as “targeted advertising.” First, in broadest terms, which foundational concerns of personal privacy, corporate transparency, and marketplace and civic functionality would that benign-sounding description obscure?

    DIPAYAN GHOSH: For the dominant internet firms (particularly Facebook and Google), it does not do the complex business model underlying them any justice to describe it merely as “targeted advertising.” There’s an elaborately layered infrastructure behind a company like Google. This starts with its physical infrastructures: server farms (racked with reams of personal and proprietary data), private links to telecom networks, a universe of distributed consumer devices preinstalled with Google software, wireless links in public spaces like Starbucks, and so on. This continues with its digital infrastructures: the operating systems and core internet functions we use every day, powered by machine-learning algorithms geared for content curation, behavioral profiling, and ad targeting. And it also extends to commercial and legal infrastructures of countless contractual arrangements with independent media entities and telecommunications firms.

    Google and Facebook use this apparatus to collect as much information about us as possible: in part by developing extraordinarily engaging ways to addict users to scrolling through their social-media and news feeds all day, and in part by processing our responses to derive behavioral insights on us. So a more explicit description of these firms’ business model should encompass their attempt to learn something about our behaviors, develop the optimal scheme to capture our attention, employ the ad space generated through this mass addiction to monetize their reach — and, as the very last step, sure, coordinate targeted advertising over their platforms.

    A Chanel or a Nike can then utilize Facebook’s ad system and say: “I want to target such-and-such types of consumers with this particular ad content.” Knowledge is power, and in this case it is knowledge about the user that gives the platform its power. Facebook can tap its universe of users, identify those who both match the advertiser’s contextual preferences and would maximize the impact of this ad campaign, and target them over its platforms and on third-party media properties (through its little-known but far-reaching Audience Network). That’s how Facebook makes its revenues. That’s how Google and Twitter and Amazon make their share of revenues resulting from related activities. And if we really want to address both the good and the bad of what this book calls the “consumer internet” sector, so that we can assess its true economic and social merits, then we need to factor in the full breadth of the technical, commercial, and legal infrastructure underlying this business model.

    Here Terms of Disservice follows the money itself in part by re-conceiving the “currency” of this marketplace — not just as cash transactions, but as constant, frictionless, ever-increasing payment sucked up by platform firms in the form of users’ engagement, attention, and data. Could you further flesh out these payments we make all the time to Google, Facebook, Amazon, and how these accumulating revenues reinforce those particular firms’ uncontestable dominance?

    Today’s economy has many different kinds of currency. We might still think of currency principally as money — as those physical bills in the wallet and digital bucks in the Venmo account. But we often exchange dollars for other kinds of currency. Companies might exchange dollars for material reserves like oil, for example.

    New kinds of currency emerge over time, but have especially done so over the past few decades. Around 15 years ago, with ongoing advances in computing power and data-storage capacity, it finally became cost-effective to develop this consumer-internet business model. Massive quantities of data helped create this new consumer-internet sector’s new currency, which we actually still have trouble describing. This currency’s first pillar comes in the form of our aggregate attention. And this currency’s second pillar comes in the form of our personal data, which I consider an extension of our individual personalities. This novel currency (the amassing of society’s aggregate attention, and the assembling of individual behavioral profiles) gets bundled and then segmented into various classes of users, and ultimately put up for sale in an open digital marketplace — to advertisers and others wishing to influence us, like Chanel, Nike, the NBA, the Trump campaign, and the Biden campaign.

    Facebook has developed an historically lucrative method of extracting this novel currency from us, and exchanging it for revenue. Google has a slightly different model for a slightly different industry and different part of our lives. But again this novel currency flows through all parts of the firm. This systemic amalgamating and marketing of society’s data and attention has made Google the most powerful company in the world. Only now have we begun to develop a rigorous way to analyze this currency at an economic level, to create a language and lens of analysis that can help us design a new policy regime responding to this exploitative business model at the heart of Big Tech — which has foisted a number of equally novel negative externalities (such as the disinformation problem) on us all.

    Nothing comes from nothing: tech executives might go on suggesting that their services are “free,” but these services actually deal in a novel currency that national regulators haven’t yet come to terms with. In time we will recognize these dominant internet platforms for the exploitative firms that they have become.

    Well I’d assume you frequently encounter questions such as: “Haven’t US commercial media always operated in these crass, distracting, dishonest ways? What thresholds need to get crossed before exuberant corporate cajoling and attention-capturing become abominable exploitation and domination?” So could you sketch your sense of the scale, the stakes, the urgency of the threat to public interests today?

    Across the consumer internet, within specific market segments, we see a frequent situation where all the economic power resides in just one particular product, from one single firm — with little hope for a redistribution of that power to corporate rivals or to government or to consumers. Take the internet-based text-messaging market in the United States, for example. Facebook absolutely dominates this market through Messenger and WhatsApp. This one conglomerate possesses the vast majority of market share. We see a similar story for market segments such as social media, and internet search.

    Of course, throughout history, capitalists have sought to create this kind of monopoly in new economic sectors. As with today’s digital economy, first movers have tried to establish their hold over vast new commercial territories. So again, over the last 15 or 25 years, we’ve seen various corporate thresholds crossed in email, search, social media, internet-based text messaging, and e-commerce. One or two firms have established dominance in each industry. And as we’ve seen in the past with railroads, or with electricity, when individual firms monopolize emerging fields, exploitation becomes inevitable. America needs to relearn these basic economic terms of the debate, and to renegotiate the societal balance of power.

    Regulating this particular sector gets a bit more complicated since, in the case of digital platforms, these companies, while American, have gone radically global. Their own commercial boundaries do not end at our national border. That introduces new challenges for designing an effective policy regime. But as a start, we need to understand these rapidly developed economic relationships in broader historical terms. We need to see dominant firms in the consumer-internet market for the monopolies that they are.

    Then to start connecting these more abstract antitrust concerns to concrete public discussions of the day, could we take Russia’s frighteningly effective disinformation campaign during the 2016 presidential election, and could you describe this dystopic result coming about through the convergent interests of “nefarious actors” and profit-oriented digital platforms — with both parties seeking to maximize consumer hits and audience absorption at all costs? Why should we distrust any account of well-intentioned, politically neutral platforms getting hijacked here? And in what ways did this manipulative electioneering represent not a worst-case scenario, so much as “a canary in the coalmine”?

    Right, we’ve seen these much broader negative externalities spread through Facebook, YouTube, Twitter, and Google. Certain of those problems have attracted tremendous public interest. Disinformation campaigns stand out — whether they concern bad actors attempting to make a quick buck (like the Macedonian fake-news network), or attempting to achieve political goals through insidious means (like Russia’s Internet Research Agency).

    Unfortunately, many social-media consumers have also been captivated by hateful conduct online. Twitter and Facebook have taken some action against people like Richard Spencer (and finally, in the case of Twitter, President Trump himself). But prominent voices posting falsehoods or conspiracy theories, or sometimes even calls for violence, continue to have great impact in digital spaces. Perhaps most notably, Brenton Tarrant, who committed the unthinkable shootings in New Zealand, shared a video stream of his attack on Facebook Live. And the list goes on. We can discuss these explicit harms all day long — as well as algorithmic biases, reinforced economic inequities, and a slate of additional dangers engendered and perpetuated by internet platforms. Whether we look for offensive content, or for terrible real-world consequences, we can find it all happening on and through these platforms.

    But what I attempt to do in this book is trace such harms directly back to that core consumer-internet business model mentioned earlier. We have all kinds of speakers online. We have journalists. We have companies promoting their brands to known and desired customers. We have private individuals sharing information about themselves. And of course, we have bad actors. Yet for all of these different voices, digital platforms have engineered their systems to yield maximal ad revenues through optimized engagement. At the end of the day, this results in a really, really noisy media environment — full of offending content, and much worse.

    We can try to suppress these negative externalities through superficial means like Facebook’s Oversight Board or its election war room, or through hiring armies of content moderators, or through the refinement of detection algorithms. But in doing so, we might just be playing an unending game of whack-a-mole. If the corporate model underlying Facebook, Twitter, and Google enables these companies to remain dominant global platforms, then those same problems seem likely to persist. We’ll have a situation in which one sophisticated digital machine (algorithmic profit maximization) fights another (algorithmic content moderation). Which will win? Well, I sense that these companies will prioritize maximizing immediate profits over protecting long-term public interests.

    And just to trace a bit further the extensive reach of Big Tech data harvesting (regardless of whether we as individuals take the Kremlin’s clickbait, or even have social-media accounts), could you sketch, for example, how Facebook’s pixel feature allows this firm to vacuum up data from a much broader swath of the consumer internet — and all while implicating self-described stewards of the public trust such as the New York Times, the Guardian, and WIRED?

    The Facebook pixel offers this perfect example of an internet technology designed purely for data harvesting — for corporate surveillance in the service of raking in exploitative profit. But the New York Times or even Consumer Reports can’t survive financially right now without using the Facebook pixel on their websites. Any organization that wants digital traffic (which means just about all of them) needs to drive engagement through Facebook, since almost everyone who uses the internet is on Facebook. The pixel helps steer this traffic to an organization, but that doesn’t come for free. The organization only gets this traffic if it agrees to share certain user information with Facebook.

    When you, as an individual reader, click on a New York Times article, the Times makes an automatic call to Facebook to render this article through Facebook’s pixel. Essentially, the Times contacts Facebook to say: “Hey Facebook, this user wants to visit our website to read this article. Can you send us the most updated version of the pixel?” Typically, Facebook then sends the pixel over to the New York Times. This tiny invisible object on your page contains a small but vicious block of code. At that point, Facebook has established a clear line of communication (and a direct line of data) from the user, to the New York Times page, to Facebook: tracking how you got to the article, where and when you read the article, the content within the article, your behavior on the page, how long you spent on it, and so on. All of that information goes back to Facebook, with most users never realizing it.
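
    To make that data flow concrete, here is a minimal, purely illustrative sketch of how a generic third-party tracking pixel can behave in the browser. The endpoint, site identifier, and event names below are hypothetical placeholders rather than Facebook’s actual pixel code; the point is only how much a page can report to a tracker simply by embedding the tracker’s script.

```typescript
// Illustrative only: a generic tracking pixel, not Facebook's actual code.
// Endpoint, site ID, and event names are hypothetical.
const TRACKER_ENDPOINT = "https://tracker.example.com/collect";
const SITE_ID = "example-publisher";

function firePixel(eventName: string): void {
  // All of this is available to any script the publisher chooses to embed.
  const payload = {
    siteId: SITE_ID,
    event: eventName,                  // e.g. "PageView", "TimeOnPage"
    url: window.location.href,         // which article you are reading
    referrer: document.referrer,       // how you arrived at the article
    timestamp: Date.now(),             // when you read it
    language: navigator.language,
    screen: `${window.screen.width}x${window.screen.height}`,
  };
  // A beacon (or a classic 1x1 image request) ships the data to the tracker
  // without interrupting the page.
  navigator.sendBeacon(TRACKER_ENDPOINT, JSON.stringify(payload));
}

firePixel("PageView");
window.addEventListener("beforeunload", () => firePixel("TimeOnPage"));
```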

    So Facebook gets all of that data. But what makes it worthwhile for the New York Times to serve as the conduit? Well, when the Times enters Facebook’s advertising system to start a new ad campaign (on or off Facebook), it can see exactly who visited its website. It can say: “Look, we want to reach one thousand individuals of such-and-such qualities, reading this feature piece from somewhere in Manhattan.” Facebook will then manage that ad campaign based on its understanding of the Times’ preferences, as well as its own users’ interests. Perhaps most diabolically, this campaign can expand well beyond the Custom Audience of known New York Times customers, to a larger Lookalike Audience of unknown customers who might share similar interests. This is the same system that the Russians and the Trump campaign used in 2016.
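
    As a rough, hypothetical illustration of that targeting workflow (the field names and similarity rule below are invented for clarity, not drawn from Facebook’s actual advertising tools), the campaign logic reduces to a seed audience built from pixel visitors plus an expansion step:

```typescript
// Hypothetical sketch of audience expansion; not a real advertising API.
interface UserProfile {
  id: string;
  interests: string[];             // inferred from behavioral data
  visitedAdvertiserSite: boolean;  // e.g. observed via a tracking pixel
}

// "Custom audience": known visitors, identified through pixel data.
function customAudience(users: UserProfile[]): UserProfile[] {
  return users.filter(u => u.visitedAdvertiserSite);
}

// "Lookalike audience": users who never visited, but whose inferred
// interests resemble the seed audience's.
function lookalikeAudience(users: UserProfile[], seed: UserProfile[]): UserProfile[] {
  const seedInterests = new Set(seed.flatMap(u => u.interests));
  return users.filter(u =>
    !u.visitedAdvertiserSite &&
    u.interests.filter(i => seedInterests.has(i)).length >= 2  // crude similarity cutoff
  );
}
```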

    So before readers simply get more paranoid about specific parties monitoring their every move, could you further articulate why you consider the most critical privacy concerns here to stem from that core business model: not from identity hackers, not from Putin, not even from hate groups targeting vulnerable communities? And by extension, even while guarding against those latter dangers, why should consumer-protection advocates keep in mind the strategic need not to get pinned down on the battlefield of content moderation — but again to take this fight to the regulation of industrial-scale, ruthlessly monetized, excessively concentrated digital networks?

    In the US, there are two broad areas of potential regulation under discussion right now. One focuses on content. Russia’s disinformation operations provide a clear example. We do want strong policy frameworks around such problematic content, to ensure we don’t see excessive disinformation leading up to the 2020 election. We also don’t want videos of terrorists killing people to get millions of hits. We don’t want militant nationalists encouraging violence against marginalized populations. We all broadly recognize the need to combat these kinds of harmful content through responses such as flags and takedowns.

    But we can’t forget the need for a second form of regulation, addressing the business model that enables this offending content to spread in the first place. Again, Facebook’s power comes from the set of layered networks it manages. A physical infrastructure of data servers collects and processes personal information. A commercial infrastructure arranged with third parties repackages and disseminates this data. A digital infrastructure sitting above that establishes a network of users across the world. And for all of these countless exchanges at any given moment, one single company exerts monopoly control over the whole social-media realm. So we can discuss content regulation all we want. But the economic machine underlying this network ultimately promotes that offending content — and should be the central target of our regulatory efforts.

    Unless we address this root cause, through regulation targeting these exploitative business practices (instead of merely reflecting on negative externalities after the fact, or sweeping them aside before the next wave of offensive and harmful content rolls in), we’ll never resolve those fundamental societal problems. Facebook itself understands this logic. News stories have now emerged about Facebook quietly establishing a pro-Big Tech advocacy group to battle federal regulations, called American Edge [Laughter]. From that title I’d assume they’ll argue, among other things: “Look, if you regulate us, you’ll end up handing over all the profits and the power to China.”

    In terms then of these layered digital ecosystems, and for framing what a Consumer Privacy Bill of Rights should look like today, could you parse PII and non-PII data, and explain why platform firms might find it in their interest to publicly present themselves as protecting the former — if largely to leave the latter less discussed, less regulated?

    In an American legal context, when we try to define PII (personally identifiable information), we end up with a somewhat arbitrary understanding. A few decades ago, American PII legislation focused on identity theft, and related financial concerns. Americans of course didn’t want their money stolen, especially through emerging electronic means. Both private companies and state governments found it necessary to impose greater industry protection of what we today categorize as PII: your name, your email address, your phone number, your mailing address. So we now have this legal infrastructure with some jurisprudence behind it, establishing PII as a kind of data that deserves protection through governmental regulation and industry self-compliance. But PII still has this very narrow definition, often relying on data-breach laws — while neglecting behavioral data, which is often more important to tech companies.

    Over time, the most powerful internet companies have developed a quite clever strategy of supporting legislation that perpetuates this very narrow definition of (and heightened emphasis on) PII — all while arguing that other forms of data collection and analysis, outside of PII, simply push American innovation forward in society-benefitting ways. They’ve found related means to escape the scrutiny of regulators: by creating proprietary corporation-specific identifiers that don’t technically qualify as PII, and by assigning our behavioral data to those proprietary IDs. They gather troves of data about us, use it to profit off our individual personalities, and escape regulatory scrutiny through the technicalities of American digital-privacy law.
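
    A small, hypothetical example may clarify the distinction being drawn here. Nothing in the record below is PII in the narrow legal sense (no name, email, phone number, or mailing address), yet it describes an individual in considerable detail; all field names and values are invented for illustration:

```typescript
// Hypothetical behavioral record keyed by an opaque, company-specific identifier.
// No field here qualifies as PII under the narrow legal definition.
const behavioralRecord = {
  internalId: "u_8f3c91d2",                        // proprietary ID, not a name or email
  inferredInterests: ["running shoes", "mortgage refinancing"],
  recentLocations: ["grocery store", "pharmacy"],  // derived from device signals
  politicalLeanScore: 0.72,                        // inferred, never volunteered
  adSegments: ["expecting-parent", "likely-mover"],
  engagementByTopic: { sports: 0.9, finance: 0.4, politics: 0.6 },
};

console.log(Object.keys(behavioralRecord));        // none of these keys is "name" or "email"
```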

    Legislators have finally started to recognize this gap in the law. So we now face the difficult task of redefining and reformulating the basic terms of this whole regulatory approach. But my concerns about digital privacy don’t stop there. They include the much broader monetizing of, say, data generated by our web-browsing on Chrome, or our walk through the local grocery or commercial district, or our purchase history with credit agencies. Much of this data likewise gets sold through open markets. Companies like Facebook draw on seemingly limitless categories of repeating (and often redundant) data streams, perpetually refining their understanding of who we are.

    So again for robust consumer protection in the digital sphere, what seems worth adopting or expanding upon from an EU General Data Protection Regulation (GDPR) framework prioritizing, for example: transparent corporate articulation of data-harvesting practices, realistic consent interfaces, non-punitive data opt-out scenarios, limitations to how cookies get used?

    From a broad theoretical perspective, I think the GDPR gets privacy regulation right. It gives consumers ultimate rights of ownership, deletion, access, and opt-in. It even provides users with the capacity to tell a firm not to process their data — which includes applying machine learning to make a set of inferences about a given user’s behavioral profile. So the GDPR has set an efficacious regulatory standard, taking power from the industry and placing it in consumers’ hands. Americans (and others around the world) should demand something similar.

    For limitations to the GDPR, I’d mostly point to practical snags with ensuring adequate regulatory enforcement. Hopefully those kinds of kinks can be ironed out in due time.

    Returning then to a US context, now in terms of some of this for-profit digital ecosystem’s most acutely personal harms, escalations of bias certainly stand out. Could you give a couple examples of how, say in lending and hiring, today’s decision-making algorithms might further magnify discriminatory impact, regardless of whether any discriminatory human intent plays a part — and, correspondingly, why these scenarios again provide little legal traction for those seeking to combat such injustice?

    Yeah, here again America’s legal system draws clear lines in terms of discrimination, including what kinds of decisions corporate actors and various organizations can make, and where they could face legal liability. Certain protected classes of citizens get defined through federal laws like the Americans with Disabilities Act. Individual states also draw their own lines of protection, sometimes more proactively than federal law. Yet even with these protections in place, damaging socioeconomic imbalances often are perpetuated, reinforcing the historical marginalization of certain groups. This also means that a broader range of people further down the socioeconomic curve never get the opportunities they deserve — as human beings.

    Federal and state and local governments helped create this entrenched marginalization and discrimination. Credit agencies have helped perpetuate it. Banks and other corporate entities have helped perpetuate it — as has our tech sector’s systematic harvesting of data, its extraction of sensitive information about the interests and preferences and beliefs of every person in the United States, its constant selling of our behavioral patterns. In fact, all of that data has only made these exploitative categorizations of us even more profitable.

    Today’s protests are about more than George Floyd’s horrific murder, and the terrors of police violence for many communities in America, and the centuries of inaction on these critical concerns. The protests are also about decades of increasingly corporatized discrimination, insidious processes that have further disadvantaged marginalized people in America. Problems of racism (and systemic discrimination along many more lines than just race) go so much deeper than the abusive acts of individual police officers, and today include the incessant data exploitation by which Silicon Valley profits so immensely.

    Facebook classifies us along lines of race, religion, ethnicity, precise location, and so on. Its open and fluid and intricate infrastructure allows for degrees of granularized targeting that we couldn’t even conceive of just a decade ago. This also hints at the dystopian reality of an ever more discriminating society: whereby educational, employment, housing, consumer, financial, and business opportunities are systematically analyzed by corporate machines, and injected into the media experiences of (only) certain classes of people.

    How does that all play out in concrete terms? Well, let’s say the New York Times publishes an article on a Democrat running for office in Westchester. Who will see this article? Well, it won’t necessarily reach the people who could benefit by receiving this information before they go to the Westchester polls. That would be in the democratic interest, but not in Facebook’s immediate profit interest. Instead, who sees the ad will depend more on maximizing opportunities for both the New York Times and Facebook to extract wealth out of a target audience — which might depend on your income and purchasing history and proximity at the moment to certain stores.

    Again, this raises difficult legal and regulatory questions. Which branches and levels of government might have the constitutional authority to say: “No, you can’t advertise so explicitly,” or “You can’t exclude a certain racial group — you need to make sure they see this housing opportunity”? And even once we clarify who has this authority (to the extent that any regulatory body does have it), companies like Facebook can easily avoid targeting people explicitly by race. They can argue that they’ve complied with anti-discrimination laws, even while focusing on related proxies.

    Zip codes or something.

    Exactly. By proxy, you can filter your way down to a very particular demographic. And since much of this algorithmic filtering gets shielded by Silicon Valley’s protection of its “proprietary” creation (that is, our data), civil-rights advocates and government regulators rarely can produce a clear and provable (or, at least, legally actionable) sense of precisely when and where companies have committed discriminatory acts.

    Then for a second standout form of platform-driven bias, what does the extreme (though apparently not uncommon) scenario of some benign YouTube search swiftly descending down the rabbit hole (often towards aggrieved populist disinformation) likewise reflect about how today’s algorithmic curations operate? And in what ways does the polarizing confirmation bias happening on anybody’s social-media newsfeed not in fact differ so much?

    This gets at the technological capacities that companies like Facebook and YouTube have at hand, and whose interests those capacities serve. Do Facebook and YouTube possess the capability to make their algorithms less prone to reinforcing bias? Without a doubt they do. That would play out slightly differently for social-media newsfeeds versus YouTube searches. In social media, you want to keep the user scrolling. To keep the user scrolling, you first collect some signals about his or her interests, preferences, beliefs, what he or she might have hovered on two minutes ago, etcetera. YouTube, by contrast, tracks our preferences mostly based on what we’ve seen in the recent past: what we’ve actively searched for, what we’ve deliberately watched (especially those videos we’ve engaged with), and what we’ve more passively let play.
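
    A toy sketch can show the kind of signal-weighting being described. The signals and weights below are invented for illustration, not taken from any platform’s actual ranking system; the point is that every term rewards predicted engagement, and nothing rewards accuracy, diversity, or civic value:

```typescript
// Hypothetical engagement-driven ranking; signals and weights are illustrative.
interface Candidate {
  id: string;
  predictedDwellSeconds: number;    // how long the user will likely watch or read
  predictedClickProb: number;       // probability of a click or tap
  predictedShareProb: number;       // probability of a share or comment
  similarityToRecentViews: number;  // 0..1, resemblance to recent consumption
}

function engagementScore(c: Candidate): number {
  return (
    0.5 * c.predictedDwellSeconds +
    20 * c.predictedClickProb +
    40 * c.predictedShareProb +
    10 * c.similarityToRecentViews
  );
}

// Rank the feed or the "up next" list purely by expected attention captured.
function rankFeed(candidates: Candidate[]): Candidate[] {
  return [...candidates].sort((a, b) => engagementScore(b) - engagementScore(a));
}
```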

    The relative weight of these various signals suggests some basic design differences for the recommendation algorithms implemented in social media versus YouTube. But ultimately, all of these recommendations again have one shared purpose of profiting from our attention and personal data. Each firm sees its main interest as capturing and holding the individual user’s attention, not as protecting the user and our society from disinformation, conspiracy, hatred, or discrimination. From the corporate view, if harmful biases are reinforced along the way, then so be it — so long as those biases are not technically illegal to perpetuate. Perhaps that’s only natural in a capitalistic economy. But we still can improve on this situation through federal policy-making.

    So if we accept your account of profit-driven algorithmic decision-making as “discriminatory by design,” what could a more proactive, more robust conception of fairness (beyond just non-lawbreaking) look like here? Do possibilities exist, say, for adding effective affirmative-action protocols to algorithmic decisions — perhaps preventing the amplified marginalization of certain demographic groups in some scenarios, perhaps simply prioritizing values beyond maximized profitability in others?

    This is such a difficult concept to even begin thinking about, and a good question. You’ve hit on a core desire of mine for corporate and public algorithms. For a start, I think that much of the bias forced upon us by digital platforms can be diminished through expanded privacy rights. This means giving users the right to control which behavioral inferences companies like Google can make about them, or to completely delete these profiles if they wish. Of course, applying such a protective standard across this whole sector might get difficult, but that is another matter — you can still require these rights by law quite easily. By enabling such radical privacy rights, we can effectively put algorithmic power in the individual user’s hands. In my view, social-media algorithms should exclusively serve the social and media wants of the consumer, not the profit interests of the firm.

    So where do you see today’s consumer internet symptomizing broader 21st-century failings in our outmoded antitrust apparatus, and where directly causing corrosive market concentration and corresponding democratic dysfunction? And what distinct challenges arise when applying a pro-competition regulatory framework to a digital-platform sector fueled by the monetization of (more or less monopolistic) network effects?

    First we should acknowledge that, in prevailing legal terms, monopolistic concentration exists, right now, throughout every market segment of this consumer internet. I don’t consider that a trivial assertion. And it points to other important analyses that academics and regulators must undertake. For instance, say we decide to treat “social media” as a market. Well, in that case, if we include Snapchat, Twitter, YouTube, TikTok, and any number of additional applications, then Facebook might not cross a 50 percent threshold, or officially be considered a monopoly.

    So we face these basic definitional questions as a practical matter, but anybody can see that YouTube and Facebook serve very different purposes in our media ecosystem. We go to Facebook for one specific purpose. We go to YouTube for entirely different purposes. They dominate distinct subsectors of the consumer internet, yet their CEOs argue that they compete with each other (and with equivalent internet firms) in one single industry. And the law hasn’t yet evolved to a point where we can easily distinguish the markets in which these two separate platforms participate. But let’s be clear: Facebook occupies the industry of social media. Instagram occupies the industry of image-sharing. YouTube: video-sharing. Gmail: email.

    Google: search. Amazon: e-commerce.

    Exactly. Once we clarify these different functions defining different tech-sector industries, we can begin to see that Facebook presently possesses more than 90 percent of social-media market share. As such, Facebook holds a monopoly. Social-media users have a very hard time avoiding Facebook. Facebook has no real competitors when it comes to this industry’s basic function of allowing us to connect with people, maintain those connections over years, reach out to particular friends, and simultaneously consume media content. We can debate whether new apps coming down the pike could eventually seize some of that market share. I would argue that they won’t.

    Or Facebook might just buy anybody who looks like a threat.

    Precisely. And the power to block market entry in this way often suggests monopolistic concentration — and indeed begets ever greater market concentration, strengthening the firm’s monopoly position.

    But even after regulators can establish the monopoly status of companies like Google and Facebook, the next challenge arises: we need to demonstrate that tangible harms come from this monopoly. These harms may appear, for example, in quality of service. Here you can take the terrible state of our contemporary information environment as a first signal of such failures in quality. Sure, we love certain parts of Facebook’s big blue app. But many of us would much prefer a social-media app less prone to poisonous conversations and privacy invasions. Perhaps we’d also like to see a social-media app with a lower energy footprint, or one not so reliant on sharing our personal information. Yet Facebook doesn’t have to worry much about rivals taking advantage of those potential openings, since it can hold them at bay. In the end, the average consumer (and citizen) pays the price.

    Facebook also poses a different kind of systemic harm through unchecked economic exploitation. Given the absence of any real competition breathing down its neck, Facebook can extract enormous quantities of currency (in the form of attention and data) from its entire user population. It can charge us a monopoly rent, collecting as much as it wants, and can make this extraction so frictionless that we barely notice at all. And firms like Facebook don’t stop there. They keep pushing their anti-competitive conduct. We’ve seen, for example, a stronger regulatory apparatus in Europe force these companies to acknowledge bad behavior and reform certain practices. We’ve seen settlements for Google Android and Google Shopping, and new inquiries by the European competition authorities into Facebook’s and Google’s uses of data.

    You’ve also brought up the tricky legal dynamics around network effects. For a service like YouTube or Facebook, the platform’s value rises steeply as more users join, which is why they are so focused on aggressive platform growth: stand still and you fall behind. Various experts have developed novel ways to monitor and regulate network effects.
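
    One stylized way to see why value compounds so quickly with user count (a textbook Metcalfe-style model, not a measurement of any particular firm) is to count the possible connections among users, which grow roughly with the square of the user base:

```typescript
// Stylized network-effect illustration (Metcalfe-style), not real platform data.
function possibleConnections(users: number): number {
  return (users * (users - 1)) / 2;  // every pair of users can connect
}

// Doubling the user base roughly quadruples the potential connections,
// which is one reason incumbency compounds so quickly.
console.log(possibleConnections(1_000));  // 499500
console.log(possibleConnections(2_000));  // 1999000
```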

    So along related lines, why, at this historical point, does it make sense to think of Google, Facebook, and Amazon as in fact natural monopolies within their respective user-facing domains (again of search, social media, e-commerce)? And why, as one crucial policy consequence, should we move towards regulating them quite extensively — as public utilities, akin to major railroad, electricity, and telecommunications companies?

    Once we’ve established that these firms play off the powerful network effect that is a central feature of internet platforms, we can then compare them to historical examples of monopolies that benefited from similarly strong network effects. When the US government has recognized these natural (basically unavoidable) monopolies, it hasn’t opposed them, so much as it has regulated them — heavily. Indeed, it makes sense for society only to have one electric grid. It doesn’t make sense to build two highways or two railroads running parallel to each other. Society should only invest in one network that we all can use.

    I see equivalently strong network effects leading to natural monopoly for various internet companies. It makes sense to have only one Facebook. Why rebuild this complex physical and digital infrastructure, just so that another service can provide the exact same functionalities of connecting us to friends? Why have our venture-capital dollars and engineering talent and carbon footprint directed towards that redundant kind of competition? Instead, our radically capitalistic economy has decided to invest in only one. But this type of situation can soon create undue power. At some point, the public typically must take that power back, through heavy regulation of the natural monopoly.

    Terms of Disservice makes a four-phased argument on this point. First, I argue that we clearly have industry concentration. Second, that this concentration produces harm. Third, that the nature of this concentration (here the presence of powerful network effects) suggests natural monopoly. Fourth, that the public must follow up on historical precedents for overcoming natural monopoly’s political power, by re-designing and regulating today’s digital sector.

    That doesn’t necessarily mean we need to break up these firms. It might mean we do break them up somewhere down the line. But most immediately, we have to acknowledge their market power and diminish it. Until we recognize that basic fact, we won’t get anywhere. With previous natural monopolies in railroads, electricity, and telecommunications, we eventually had to say: “Look, our society now understands how you work. Consumers need you. Businesses need you. And your marginal economic power increases with each incremental person who uses your service. You’ve become a natural monopoly. We’ll grant you this monopoly. But we also need to regulate you heavily, so that you don’t exploit our people.” Our notion of public utilities emerged from that line of thought.

    I don’t foresee targeted regulation around specific content (or around narrow questions of privacy, or of political-ad transparency) getting quite to the level that we need, useful as such measures would be. I see us needing a complete redistribution of power. We might not need a “tax rate” as high as we have on other public utilities. But the regulatory community must earnestly consider the digital sector’s broader economic merits — in the end demanding and enforcing a categorical change on this sector.

    What then can a broader “digital social contract” look like today? How might it foster corporate transparency, consumer privacy, social fairness, and market dynamism by instituting, for example: algorithmic accountability, redistribution of excessive profit-making, automatic stabilizers to regulate concentrated power and to promote ongoing innovation?

    If we can agree that this business model is premised on uninhibited data collection, the development of opaque algorithms (to enable content curation and ad targeting), and the maintenance of platform dominance (through practices that diminish market competition, including raising barriers to entry for potential rivals), then three basic components of possible intervention stand out. First, for data collection and processing, all the power currently lies within corporate entities. For now, Google can collect whatever information it desires. It can do whatever it wants with this data. It can share this information basically with whomever.

    Europe’s GDPR has begun to implement some better industry norms. But to truly resolve these problems, we’ll need to transfer more power away from private firms. Again, the power to determine which of your consumer data can be collected, and how your data can be used, should not belong to any firm. The individual user should be able to tell Facebook: “Yes, you can collect this particular stream of data for now. You can use it in these particular ways.” A baseline data-privacy bill could establish such rights as belonging to the individual, not the corporations.

    We also need more transparency. Basic awareness of how this whole sector works should not be treated as some contrived trade secret. Individual consumers should have the right to understand how these businesses work, and shouldn’t just get opted in by default through an incomprehensible terms-of-service contract. We likewise need much better transparency on how platform algorithms and data-processing schemes themselves work.

    And finally, we need to improve market competition. We need data-portability arrangements, interoperability agreements — and most importantly, a serious regulatory regime to contend realistically with monopolistic concentration.

    Of course the consumer-internet sector sees this hypothetical three-fold regulatory regime (and anyone stubborn enough to advocate for it) as a threat — as it should. While this new regime wouldn’t slash the business model entirely, it would certainly cut profit margins from the exploitative levels at which they currently stand, encouraging economic equity.

    Now for present-day regulatory dynamics, what looks most promising at the state level? And could you explain why, in the current political environment, we should suspect any national regulations that do happen to gain traction of preempting stronger state measures?

    Time will tell. I do see hope in the fact that many states have acted. Perhaps most significantly, California has once again stood up for consumer rights in the digital space, with the California Consumer Privacy Act. I’d assume other states will begin to introduce new privacy protections over the next year. Some will mirror what California has done (a form of GDPR-lite, going maybe one-third as far). And even for the CCPA, California had its own unique process, allowing legislators to push this bill forward. But I do think we’ll eventually see far more meaningful state activity, beyond just California’s.

    We’ll also see (especially if many states start passing bills, and if these bills have even small differences between them) internet companies seeking to evade the most stringent state’s requirements — and desperately hoping that the other 49 states don’t innovate off of this most stringent legislation. To avoid that possibility, and to clear up any inconsistent requirements, the tech industry will push hard for a single, preemptive, federal law. And as consumers, we would want a very protective federal bill that hands users meaningful rights. But we will not get that strong legislation unless we push back against this industry’s arguments, hard.

    Finally, with your own government service in mind, with your own despair at past Congressional bungling, could you describe what it would look like for Congressional hearings to fulfill their responsibility of providing the fulcrum for effective regulation? What specific questions coming from our elected officials would assure you that this critical function was being carried out properly in relation to Big Tech platforms?

    Congressional hearings are crucial in moving this ball forward, and our collective ignorance of how these firms operate has definitely slowed progress toward much-needed regulation. I certainly wouldn’t suggest Congress has been naive in approaching internet regulation. In fact, some proposed measures (by Mark Warner, Amy Klobuchar, Richard Blumenthal, John McCain, Josh Hawley, Anna Eshoo, David Cicilline, and many others) have been impressively inventive. But I would say that tech executives and policy experts have not been forced to testify quite enough, at a time when Congress needs to be unyielding (even intimidating) in its dealings with the dominant internet firms. Hauling tech executives, particularly chief executives, before Congress (and a national audience), and adroitly questioning them about how their business works, is absolutely critical.

    Think about it: Big Tech is still far more responsive to America’s government than to any other. No comparable forum can force meaningful answers out of this industry — regarding its interactions with consumers, or its engagements with marketers who use these platforms. And the only way to get a truthful answer concerning these companies’ business model (which can then allow you to assess how their business objectives impede or improve the broader public’s social or economic circumstances) is by bringing chief executives to Congress to testify under oath, not in some senator’s private office.

    There’s no need to be hostile or rude about it. We simply need to ask the right series of questions, to get at the heart of this business model and its impact on society, and to put all of that on the record. Too many questions come to mind for me to list them all here. But most generally, we need to ask about the nature of the business model for each testifying firm, and how each firm conceptualizes its place in society and its impact on the media environment. Rightly tailored, I believe that those questions can force the truth out of chief executives: that the modern media ecosystem has been wholly monopolized — and that Congress can now begin to peel the orange, dissect its pieces, and identify the seeds of exploitation.

     

    Portrait credit: Andrew Magnum/New York Times