American antitrust law is at a crossroads, characterized by calls from the Biden Administration and members of Congress to “break up” big technology companies. Traditional measures for conducting merger reviews and enforcement actions have been challenged, with suggestions that the evolving digital economy warrants new standards to promote competition. This Article examines the founding principles of antitrust law and reviews major media and technology cases brought against motion picture studios, IBM, and Microsoft, to help analyze the long-term impact of such cases. The author, a former technology executive and law professor, advocates new laws to protect and value data privacy and personal information, but warns against replacing established antitrust principles with new constructs that can shift with the political winds and cause economic uncertainty.
This article analyses the challenges of regulating the digital technology sector to support journalism in the era of platformization. It examines the interdependence between three categories of policy interventions proposed by regulators worldwide to rebalance the dynamics between journalism and online platforms: taxation and subsidies, copyright and licensing, and competition and antitrust. By examining the theory of change driving each intervention, the benefits to publishers, and the potential for government influence, this paper explores the risks of capture inherent in different approaches. It analyses the potential for media capture in each regulatory approach and the extent to which each further ties the future of journalism to the infrastructure provided by tech platforms. Capture through platformization is not well understood or considered by policymakers; many debates over regulation rightly focus on the potential for political influence but fail to consider the broader implications of specific policy interventions for infrastructure capture. This article argues that policymakers must establish a transparency framework to provide better data on, and understanding of, the relationship between online platforms and news media. Without it, interventions will be ineffective and dependency will be ensured. It concludes with a discussion of the importance of defining the objectives of new laws and crafting them in ways that minimize threats to media independence and sustainability. This article provides a theoretical contribution to the broader emerging discourse on platformization and media capture and offers practical recommendations for policymakers based on comparative analysis and an assessment of evidence and impact.
The open data revolution—a movement to liberate government-held information by publishing it online—holds enormous promise as a “force multiplier” for cash-strapped news organizations. Rather than consuming the resources of journalists and lawyers in fighting for access to government records, opening data voluntarily enables news organizations to devote their resources to adding value to government information through analysis and contextualization. But the reality of open data has yet to fully match its promise. This paper examines how and why, and recommends protocols to guide government agencies in selecting the highest-value datasets to publish. Access to government data and documents is the fuel that powers investigative reporting. But the freedom-of-information (FOI) laws that entitle the public and press to government-held information are notoriously frustrating to use; compliance is often incomplete, costly, and slow. Voluntarily opening high-value datasets to public inspection can complement FOI laws in ways useful both to agencies (sparing them repeat requests for the same information) and to requesters (sparing them the adversarial process of suing for access). The first generation of government open data portals, however, has failed to garner widespread public engagement. Researchers have suggested that part of the problem is agencies’ failure to prioritize publishing the data that users actually need. To illustrate the shortcomings of municipal open data sites, the authors chose 30 cities of varying sizes and checked their websites for one of the highest-priority datasets in contemporary America: instances in which police officers use force. Predictably, the review found that big cities, which have the capacity to hire well-qualified information officers, were likely to publish the data, while small towns invariably did not. Investing in open data is an investment in rebuilding frayed trust with a skeptical public.
Reliable government data can be an asset in combating dis- and misinformation. But making this investment will require changing both government spending priorities and government custodians’ widespread cultural predisposition toward secrecy. The authors recommend that, for open data portals to realize their civic potential, government agencies should prioritize the data they choose to publish according to three criteria: urgency, actionability, and verifiability. Although there is considerable controversy over whether news organizations should accept direct government subsidies, it would be uncontroversial for government agencies to support quality journalism indirectly, by lowering the barriers to obtaining useful information.
Social media markets present a number of market failures. Some result in users being underexposed to diverse content. There may be a variety of ways to address this challenge. One solution could be to require large platforms to unbundle hosting and content curation activities, while also obliging them to grant fair and non-discriminatory access to third-party players that offer content curation services to the platforms’ users. This paper explores the pros and cons of such a remedy, basing its analysis on a series of semi-structured interviews with the various stakeholders that would be directly affected should the remedy be put in place. The paper summarizes the principal input and feedback received from the interviews and makes suggestions for decision makers and for further research.
This paper discusses the differences in patent eligibility between the US and the EU, and how the resulting inconsistencies have specifically affected RNAi research and patents. Courts around the world have struggled to define clear distinctions between patentable and unpatentable biotechnological inventions. Gene-editing technology is unlike anything patent systems have seen before, and the field continues to evolve rapidly. The changing technology thus challenges both legislators and the judiciary as they attempt to keep patent rules balanced and current. In the landmark case Association for Molecular Pathology v. Myriad Genetics, the US Supreme Court found that naturally occurring DNA sequences were unpatentable. European courts, however, have not yet followed suit, with the European Directive on the Protection of Biotechnological Inventions providing that isolated DNA sequences are patentable subject matter. In fact, the corresponding European patents at issue in Myriad remained valid in the EU despite various oppositions. RNAi-based research and technology are still in the early stages of their evolution. Thus, the relative ease or difficulty of protecting RNAi IP is likely to have a large effect on the growth of this new class of therapeutics.