When should we think of “free-of-charge” platform firms as con artists? When should our concerns about Big Tech data-surveilling business models and monopolistic corporate practices stretch far beyond that? When I want to ask such questions, I pose them to David Pozen. This conversation focuses on Pozen’s recent Harvard Law Review article (co-written with Lina Khan) “A Skeptical View of Information Fiduciaries.” Pozen, a professor at Columbia Law School, teaches and writes about constitutional law and information law, among other topics. From 2010 to 2012, Pozen served as special advisor to Harold Hongju Koh at the Department of State. Previously, Pozen was a law clerk for Justice John Paul Stevens on the US Supreme Court and for Judge Merrick Garland on the US Court of Appeals, and a special assistant to Senator Ted Kennedy on the Senate Judiciary Committee. In 2019, the American Law Institute named Pozen the recipient of its Early Career Scholars Medal, which is awarded every other year to “one or two outstanding early-career law professors whose work is relevant to public policy and has the potential to influence improvements in the law.”
ANDY FITCH: Since this paper describes itself as “an exercise in critique, not prescription,” could you first outline on a broader policy level the “grand bargain” that advocates for information fiduciaries see themselves proposing — and why you might not see such a bargain as so grand after all? Where might you agree with figures such as Jack Balkin and Jonathan Zittrain on the pressing need to protect consumer autonomy, to ensure data security, and to promote Big Tech corporate transparency? Where might you depart from Balkin and Zittrain, by considering such goals fundamentally incompatible with tech-sector business models premised upon ever-escalating (and exploitative) surveillance of users? And where might you see campaigns coaxing platform firms no longer to operate as “con artists” overlooking (or even obscuring and normalizing) much more consequential structural problems concerning market concentration economy-wide?
DAVID POZEN: The phrase “grand bargain” comes from a 2016 essay by Balkin and Zittrain in The Atlantic. They propose a new federal statute that would enable digital companies to elect to become information fiduciaries. These companies would promise, as fiduciaries, “not to leverage personal data to unfairly discriminate against or abuse the trust of end users.” The statute would then “preempt a wide range of state and local laws” on consumer privacy and data security. Basically, the companies agree not to act like “con artists” (as Balkin puts it), and in exchange the federal government relieves them of potential liability for privacy violations and related offenses under state law.
My co-author Lina Khan and I do agree that there is a pressing need right now to protect consumer autonomy, ensure data security, and promote Big Tech corporate transparency. The law is failing on all of those fronts. But the “grand bargain” strikes us as a raw deal for the public. State attorneys general have played an important role in developing privacy norms and in enforcing laws against unfair and deceptive trade acts and practices. California has enacted sweeping consumer-privacy legislation. If the federal government is going to kick the states out of this area, it had better have in place a robust legal alternative. And there is little reason to believe that an information-fiduciary scheme would be up to the task.
At least to date, advocates of the information-fiduciaries proposal have suggested that it would not require fundamental changes to the prevailing Big Tech business models. A large part of the proposal’s appeal is that it promises, in Zittrain’s words, to protect consumers “without the need for heavy-handed government intervention.” Everyone wins… Mark Zuckerberg included! Yet while getting companies to stop betraying people’s trust is a necessary goal, it is not a sufficient goal. Internet reformers also need to address problems of market dominance, pervasive surveillance, rampant discrimination and harassment, proliferating “fake news,” declining local news coverage, and much more besides.
The United States has numerous state and federal laws that prohibit unfair, deceptive, and fraudulent commercial practices. To suggest that the biggest regulatory problem we face right now with Big Tech is a lack of norms against “con artistry” is to ignore or discount these laws, rather than focus on invigorating their enforcement — and to give dominant platforms a pass for everything except a small set of especially outrageous behaviors.
Specifically for this application of a fiduciary model, could you describe why, both in legal contexts and everyday corporate operations, managing any conflict between, say, a Facebook or Google’s fiduciary obligations to shareholders and to users would mean that shareholders’ interests inevitably win out — again unless some categorical rebalancing of each party’s respective claims occurs, far beyond what information-fiduciary advocates tend to recommend? And then, on a more conceptual level, could you sketch why comparing, say, the informational asymmetry that comes about through a doctor or attorney’s professional expertise, or through their circumstantial need to acquire confidential information from clients, bears at best superficial resemblance to platform firms with a core strategy of eliciting and monetizing as much information (and as personal information) as possible from increasingly habituated and compromised users? And finally here, perhaps just in terms of the follies of mixing metaphors: how could some online platform’s marketplace of goods or ideas, supposedly steered by an impersonal and invisible hand, simultaneously wrap its arm around one’s shoulder and act (always act, in your account of fiduciary obligations) with one’s own personal best interests at heart?
For this idea that platforms such as Facebook and Google should owe fiduciary obligations to their users, I’d first note that the platforms already owe fiduciary obligations to their stockholders. They are publicly traded companies incorporated in Delaware. And as the Delaware Court of Chancery recently explained, the officers and directors of such companies “must, within the limits of [their] legal discretion, treat stockholder welfare as the only end.” So, unless you have a reform that gives a platform’s officers and directors no legal choice on a business matter, you will continue to see the interests of its stockholders (for whom online addiction, exposure, and discord are all potential cash cows) prioritized over the interests of its users. Yet that kind of reform sounds exactly like the “heavy-handed” intervention that information-fiduciary advocates seem to think they can avoid.
Traditional professional fiduciaries know more sensitive information about their customers than the customers know about the fiduciaries. By assigning the fiduciary (a doctor or a lawyer, say) duties of loyalty, care, and confidentiality to the beneficiary (a patient or client), fiduciary law allows the beneficiary to take advantage of the fiduciary’s superior knowledge and expertise without having to worry about being taken advantage of herself. In the digital realm, as Balkin observes, there are likewise asymmetries of information between users and platforms, whose “operations, algorithms, and collection practices are mostly kept secret.” This is part of what leads him to a fiduciary solution.
But not all information asymmetries are asymmetric in the same way. In the case of an online platform like Facebook, surveys show that most users are in the dark not only about technical details but also about the platform’s basic operations and revenue streams. Moreover, disclosure of sensitive personal data is often functionally necessary to obtain good legal advice or medical care. Social-media networks don’t need so much sensitive personal data to serve their users. They’ve just chosen to adopt business models that make the constant collection of such data highly lucrative. These companies go out of their way to create extreme asymmetries of information and power — asymmetries that Khan and I submit should be attacked directly by reformers, rather than tolerated and modestly mitigated after the fact.
On the mixed metaphor, I suppose a platform might contend that, by tracking your online activities and then steering you toward advertisements tied to your economic, demographic, and psychological profile, the platform is acting with your best interests at heart even as it monetizes those interests. Targeted advertising can be more or less predatory, and some Facebook users purport to like it (although the vast majority say the opposite). But it strains credulity to suggest that in showing you these ads, the platform is doing what every fiduciary must do: putting your welfare first. As long as companies make most of their revenue from personally targeted ads, they will be motivated to extract as much data from their users as they can, with predictably negative consequences for privacy, security, autonomy, and other values.
Could we pivot then towards this paper’s basic question not of how Big Tech firms should wield their dominant power within the industry (as well as in relation to individual users), but of whether firms should have the ability to acquire such dominance in the first place? How might, say, restricting any firm’s possibilities for market dominance help to protect individual participants within our data-driven economy, while simultaneously addressing a much wider range of systemic concerns? And since information-fiduciary advocates likewise might suggest the need to address such broader concerns, could you clarify why their approach in fact would not mesh with potentially more consequential policy initiatives incorporating “most antitrust and procompetition tools…public certification or safe harbor programs… requirements that firms pay people for their data… data portability and interoperability mandates… co-regulation schemes that incentivize businesses to continually produce and share compliance information… and any number of front-end limits or ‘taxes’ on private data collection”? Or similarly, even when information-fiduciary advocates then claim their approach’s strategic advantage, in terms of skirting today’s First Amendment protections, where might you see such advocates’ corporatized conception of speech protections again stymieing more robust calls for what proactive First Amendment regulation might look like (“from arguments that commercial speech and computer algorithms deserve only modest, if any, constitutional protection…to the contention that online service providers should be treated as public trustees or public utilities…to ‘systemic’ perspectives on free speech that read the First Amendment as permitting or even requiring the government to take affirmative measures”)?
Khan and I do not believe that restricting the market dominance of leading digital platforms would be any panacea. But depending on how they’re designed, such restrictions may help to protect individual users, as well as deliver other public benefits. For instance, antitrust actions can open up markets, reduce risks of political capture, and facilitate competition on privacy; data interoperability requirements can make it easier for users to exit unhealthy online environments; and bright-line prohibitions on certain modes of earning revenue can curb socially destructive business incentives and practices.
Could these sorts of pro-competition structural reforms be combined with an information-fiduciary reform? In theory, yes. But as a matter of practical political reality, we are dubious. One of the main selling points of a fiduciary approach has been its light-touch, open-ended character. Structural reforms, in contrast, don’t purport to please everyone, least of all the most powerful firms. They require regulators to confront hard tradeoffs and make substantive choices. They aim to disrupt the status quo, not merely soften its sharp edges. And they certainly don’t transmit the message that Big Tech companies ought to be seen as loyal, other-regarding stewards of our data.
Zuckerberg has expressed interest in the information-fiduciary idea. That alone should give pause to anyone who wants to keep options like breaking up Facebook on the table.
The First Amendment issues are complicated. The most intriguing aspect of Balkin’s proposal, in my view, is his contention that if tech companies were recognized as fiduciaries for their users, the government would have greater leeway to shape their behavior without running afoul of the First Amendment. The Supreme Court has indicated in other contexts that the speech of fiduciaries may be subject to special rules: for instance, laws limiting what doctors can disclose about their patients. Perhaps, then, the government would be allowed to regulate Facebook-as-professional-fiduciary in ways it wouldn’t be allowed to regulate Facebook-as-ordinary-company. The immediate flaw in this argument, however, is that the Court recently issued a decision in a case called National Institute of Family & Life Advocates v. Becerra, which denied that professional speech may be treated any differently from other sorts of speech for First Amendment purposes. And even the cases that offer the strongest support for Balkin’s argument emphasize the close personal nature of a fiduciary’s relationship with her beneficiary, which of course is not true of Facebook’s relationship with its billions of users.
In short, it seems unlikely that there would be any significant constitutional payoff from designating digital companies as fiduciaries for their users. It’s a First Amendment fantasy. And it’s not even a very appealing fantasy. There are many other theories of the First Amendment out there, alluded to in the passage you quote at the end of your question, that would permit many forms of social-media regulation. The Roberts Court may not like some of those theories either, but they at least focus attention on the companies’ role as increasingly essential platforms for mass communication, instead of trying to reason from precedents involving doctors and lawyers.
Finally then, while acknowledging the limits of any conceptual analogy for an emergent cluster of social/legal/economic questions, could you flesh out two potential alternatives to a fiduciary model here? First, what might it look like to conceive of online platforms as rough equivalents to public infrastructure, and to adopt a regulatory approach comparable to the Progressive Era’s establishment of public utilities — not necessarily breaking up the biggest firms, but enforcing strict prohibitions (through “nondiscrimination and common carrier regimes, limits on the lines of business in which firms could engage…corporate governance reforms”) on certain monopolistic practices, while simultaneously reshaping markets to promote greater competition and user autonomy (for instance, by providing public options)? And second, what might it look like to conceive of the collection, aggregation, and exploitation of users’ data as a negative economic externality equivalent to environmental pollution — again not just at the level of private personal damages, but of broader harms to public interests (and of course requiring clear legal enforcement and systemic economic disincentives, not just morally laden normative standards)?
Most of our paper is dedicated to critiquing the analogy between online platforms and traditional fiduciaries (doctors, lawyers, accountants), in terms of their relationships with customers. The analogy is inapposite on numerous levels. Its adoption threatens to mislead policy makers and mystify surveillance capitalism. At the end of the paper, we suggest that it would be more fruitful to analogize the large online platforms to “offline” providers of public infrastructure, and to analogize the data they accumulate and aggregate to environmental pollution.
These analogies are imperfect, too. But I think they better capture the nature of the online platforms and the regulatory challenges these platforms pose. The infrastructure idea points toward Progressive Era tools to restrain private control over key channels of communication and commerce — tools such as those you mentioned: nondiscrimination and common carrier regimes, limits on the lines of business in which one firm can engage, and public options. The pollution idea points toward environmental-law techniques, adapted to reduce the negative externalities of digital surveillance — techniques such as taxes on data collection and retention, or liability rules for data “spills.” Both ideas point away from relatively narrow frameworks focused on the bilateral relationship between any given company and any given consumer.
Translating these general principles into granular policy details is a major challenge. My expertise as a legal theorist (such as it is) runs out here. But the fight over platform regulation hasn’t reached that granular level yet. Americans are still at the stage of debating the basic concepts and categories with which we will think and talk about the dominant digital platforms. And whatever else these entities might be, Khan and I are here to tell you that they are not your fiduciaries.