
As the DOJ’s antitrust case against Google begins, all eyes are on whether Google violated antitrust law by, among other things, entering into exclusionary agreements with device makers like Apple and Samsung and browser developers like Mozilla. Per the District Court’s Memorandum Opinion, released August 4, “These agreements make Google the default search engine on a range of products in exchange for a share of the advertising revenue generated by searches run on Google.” The DOJ alleges that Google unlawfully monopolizes the search advertising market.

Aside from matters of antitrust liability, an equally important question is what remedy, if any, would work to restore competition in search advertising in particular and in online advertising generally.

Developments in the UK might shed some light. The UK Treasury commissioned a report to make recommendations on changes to competition law and policy, which aimed to “help unlock the opportunities of the digital economy.” The report found that Big Tech’s monopolization of data and control over open web interoperability could undermine innovation and economic growth. The Big Tech platforms now hold the data, block interoperability with other sources, and, through their huge customer-facing operations, will capture still more of it. They can therefore be expected to dominate the data needed for the AI era, enabling them to hold back competition and economic growth.

The dominant digital platforms currently provide services to billions of end users. Each of us has either an Apple or an Android device in our pocket. These devices operate as part of integrated distribution platforms: anything a user wants to obtain from the web passes through the device, its browser (and, often, Google’s search engine), and the platform before reaching the Open Web, if it does not simply stay within an app from the platform’s app store, inside the walls of the garden.

Every interaction with every platform product generates data, refreshed billions of times a day across multiple touch points, providing insight into buying intent and the ability to predict people’s behavior and trends.

All this data is used to generate alphanumeric codes that match data contained in databases (known as “Match Keys”), which help computers interoperate and serve ads relevant to users’ interests. For many years the most widely used Match Keys derived from the widely distributed DoubleClick ID, which was shared across the web and served as the main source of data for competing publishers and advertisers. After Google bought DoubleClick and grew big enough to “tip” the market, however, Google withdrew access to its Match Keys for its own benefit.
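To make the mechanics concrete, here is a minimal, purely illustrative Python sketch of how a shared Match Key lets two otherwise separate systems, such as a publisher’s ad server and an advertiser’s database, join their records about the same user. The identifiers, field names, and hashing step are hypothetical and do not describe Google’s or DoubleClick’s actual implementation.

```python
import hashlib

# Illustrative only: a Match Key is a stable, opaque alphanumeric code that
# two parties both hold for the same user, letting their databases interoperate.

def match_key(raw_id: str) -> str:
    """Derive an opaque alphanumeric key from a raw identifier (hypothetical)."""
    return hashlib.sha256(raw_id.encode()).hexdigest()[:16]

# Publisher's records: which user saw which content.
publisher_data = {match_key("cookie-123"): {"context": "running shoes review"}}

# Advertiser's records: which user showed purchase intent.
advertiser_data = {match_key("cookie-123"): {"intent": "marathon gear"}}

# Because both sides share the same key, their data can be joined and a
# relevant ad served. Withdraw access to the shared key, and the join fails.
for key, page in publisher_data.items():
    if key in advertiser_data:
        print(f"Serve an ad for {advertiser_data[key]['intent']} on '{page['context']}'")
```

The point of the sketch is simply that whoever controls the shared key controls whether independent publishers and advertisers can interoperate at all.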

The interoperability that is a feature of the underlying internet architecture has gradually been eroded. Facebook collected its own data from users’ “Likes” and community groups and also withdrew independent publishers’ access to its Match Key data, and more recently Apple has restricted access to Match Key data useful for ads for all publishers, except for Google, which has a special deal covering search and search data. As revealed in U.S. v. Google, Apple is paid over $10 billion a year by Google so that Google can provide its search product to Apple users and gather all of their search history data, which Google can then use for advertising. The data generated by end users’ interactions with websites is now captured and kept within each Big Tech walled garden.

If the Match Keys were shared with rival publishers for use in their independent supply channels and in their own ad-funded businesses, interoperability would improve and effective competition with the tech platforms could emerge. It probably will not otherwise.

Both Google and Apple currently impose restrictions on access to data and interoperability. Cookie files also contain Match Keys that help maintain sessions and “state,” so that different computers can talk to each other, remember previous visits to websites, and enable e-commerce. Cookies do not themselves contain personal data and are much less valuable than the Match Keys that DoubleClick developed for advertisers, but they do provide independent publishers with something of a substitute source of data about users’ intent to purchase.

Google and Apple are in the process of blocking access to Match Keys in all forms to prevent competitors from obtaining relevant data about users’ needs and wants. They also prevent use of the Open Web and limit the interoperation of their app stores with Open Web products, such as progressive web apps.

The UK’s Treasury report refers to interoperability 8 times and to the need for open standards as a remedy 43 times; the resulting Bill also refers to interoperability, and further debate about the issue is expected as it passes through Parliament.

A Brief History of Computing and Communications

The solution to monopolization, or a lack of competition, is the generation of competition and more open markets. For that to happen in the digital world, access to data and interoperability are needed. Each previous period of monopolization was met with intervention to open up computer and communications interfaces, via antitrust cases and policies that opened markets and liberalized trade. We have learned that the authorities need to police standards for interoperability and open interfaces to ensure the playing field is level and innovation can take place unimpeded.

IBM’s conduct involved bundling computers and peripherals, and the case was eventually resolved by unbundling and opening the interfaces competitors needed to interoperate with other systems. Microsoft did the same, shutting out third parties by blocking access to interfaces with its operating system. Again, the case was resolved by opening up interfaces to promote interoperability and competition among products that could then be offered over the platform.

Tim Berners-Lee created the World Wide Web in the early 1990s, nearly ten years after the U.S. courts imposed a break-up of AT&T and after the liberalization of telecommunications data transmission markets in the United States and the European Union. That liberalization was enabled by open interfaces and published standards. To ensure that new entrants could provide services to business customers, a form of data portability was mandated, enabling numbers held in incumbent telecoms’ databases to be transferred for use by new telecom suppliers. The combination of interconnection and data portability neutralized the barrier to entry created by the network effect arising from monopoly control over number data.

The opening of telecoms and data markets in the early 1990s ushered in an explosion of innovation. To this day, any computer that implements the Hypertext Transfer Protocol (HTTP) can talk to any other. In the early 1990s, a level playing field was created for decentralized competition among millions of businesses.

These major waves of digital innovation perhaps all have a common cause. Because computing and communications both have high fixed costs and low variable or incremental costs, and messaging and other systems benefit from network effects, markets may “tip” to a single provider. Competition in computing and communications then depends on interoperability remedies. Open, publicly available interfaces in published standards allow computers and communications systems to interoperate; and open decentralized market structures mean that data can’t easily be monopolized. 

It’s All About the Match Keys

The dominant digital platforms currently capture data and prevent interoperability for commercial gain. The market is concentrated, with each platform building its own walled garden and restricting data sharing and communication across platforms. Try cross-posting among different platforms for an example of a current interoperability restriction. Think about why messaging is confined within each messaging app, rather than working across different systems as email does. Each platform restricts interoperability, preventing third-party businesses from offering their products to users captured in its walled garden.

For competition to operate in online advertising markets, a remedy similar to data portability in telecoms is needed. With respect to advertising, however, the data that needs to be accessible is Match Key data, not telephone numbers.

The history of anticompetitive abuse and remedies is a checkered one. Microsoft was prohibited from discriminating against rivals and had to put up a choice screen in the EU Microsoft case. It didn’t work out well. Google was similarly prohibited by the EU from (1) discriminating against rivals in its search engine results pages, in the Google Search (Shopping) case, and from (2) entering exclusive agreements with handset suppliers that discriminated against rivals and (3) showing only Google products straight out of the box, in the EU Android case. The remedies did not address the monopolization of data and its use in advertising. Little has changed, and competitors claim that the remedies are ineffective.

Many in the advertising, publishing, and ad tech markets recall that the market worked pretty well before Google acquired DoubleClick. Google uses multiple data sources as the basis for its Match Keys, and an access and interoperability remedy might be more effective, proportionate, and less disruptive.

Perhaps if the DOJ’s case examines why Google collects search data from its search engine, and how it uses search histories, browser histories, and data from every interaction with every product to build its Match Keys for advertising, the court will better appreciate the importance of data to competitors and how to remedy the position of advertising-funded online publishing.

Following Europe’s Lead

The EU position is developing. The EU’s Digital Markets Act (DMA), which now supplements EU antitrust law as applied in the Google Search and Android decisions, recognizes that people want to be able to offer products and services across different platforms, and to cross-post or communicate with people on each social network or messaging app. In response, the EU has imposed obligations on Big Tech platforms, in Articles 5(4) and 6(7), that provide for interoperability and require gatekeepers to allow open access to the web.

Similarly, Section 20(3)(e) of the UK’s Digital Markets, Competition and Consumers Bill (DMCC) refers to interoperability and may be the subject of further debate as the Bill passes through Parliament. Unlike U.S. jurisprudence, with its recent fixation on consumer welfare, the objective of the Competition and Markets Authority is imposed by statute: the obligation to “promote competition for the benefit of consumers” is contained in EA 2013 s 25(3). That objective can be expressly related to intervention that opens up access to the source of the current data monopolies: the Match Keys could be shared, meaning all publishers could get access to the IDs used for advertising (i.e., operating-system-generated IDs such as Apple’s IDFA or Google’s mobile advertising ID, or MAID).

In all jurisdictions it will be important for remedies to stimulate innovation and to ensure that competition is promoted among all products that can be sold online, rather than between integrated distribution systems. Moreover, data portability needs to apply to the use of open and interoperable Match Keys that can be used for advertising, thereby addressing the risk of data monopolization. As with the DMA, the DMCC should contain an obligation for gatekeepers to ensure fair, reasonable, and nondiscriminatory access, treating advertisers in much the way that interoperability and data portability addressed monopoly advantages in the earlier computer, telecoms, and messaging cases.

Tim Cowen is the Chair of the Antitrust Practice at the London-based law firm of Preiskel & Co LLP.

In July, a proposed $13 billion mega-merger between Sanford Health, the largest rural health system in the country, and Fairview Health Services, one of the largest systems in Minnesota’s Twin Cities metro, was called off. Abandonment of the merger came after concerted opposition from farmers, healthcare workers, and medical students, emboldened by passage of state legislation that creates much stronger oversight of healthcare mergers. The new law addresses several of the challenges the Federal Trade Commission (FTC) has encountered while trying to block hospital mergers and demonstrates the important role states can play in policing monopoly power.

Hospital consolidation has been rapid and relentless over the past two decades, with over 1,800 hospital mergers since 1998 leaving the United States with around 6,000 hospitals instead of 8,000. This consolidation has raised healthcare costs, reduced access to care, and lowered wages for healthcare workers. Although nearly half of all FTC merger challenges between 2000 and 2018 involved the healthcare industry, that effort still only amounted to challenging around one percent of hospital mergers.

While the FTC has made efforts to protect competition among hospitals and health systems over the years, it has faced key obstacles, including (1) limits on pre-merger notification, (2) a self-imposed limit to focus exclusively on challenging mergers of hospitals within a single geographic region, and (3) exemptions in the FTC’s antitrust authority over nonprofits. 

Parties to small healthcare mergers don’t have to notify the FTC before merging due to the limits on pre-merger notification under the Hart-Scott-Rodino Act. Thus, the FTC is unaware of many smaller healthcare mergers, and the agency is left trying to unwind those mergers after the fact.

The FTC’s election to refrain from challenging “cross-market mergers,” which involve hospitals operating in different geographic markets, has enabled multi-market systems to become the predominant form of health system nationwide. This hands-off approach persists despite mounting evidence that cross-market mergers give health systems even more power to raise prices. A study in the RAND Journal of Economics found that hospitals acquired by out-of-market systems increased prices by about 17 percent more than unacquired, stand-alone hospitals; these mergers were also found to drive up prices at nearby rivals.

While the FTC has broad authority to challenge hospital mergers, the agency’s authority to prevent anticompetitive conduct is more limited. The FTC Act gives the agency the authority to prohibit “unfair methods of competition” and “unfair or deceptive acts or practices,” but that authority does not extend to nonprofits, which account for 48.5 percent of hospitals nationwide. As a result, antitrust cases like the 2016 suit against Atrium Health, for entering into contracts with insurers that contained anti-steering and anti-tiering clauses, have had to be brought by the DOJ.

Minnesota Serves as a Testing Ground

Minnesota is no stranger to the hospital consolidation that has visited the rest of the country. Over two decades ago, 67 percent of Minnesota’s hospitals were independent; after a wave of consolidation that has bolstered the largest health systems, only 28 percent remain independent. Just six health systems control 66 of Minnesota’s 125 hospitals, up from 51 a decade prior. Just three health systems (Fairview, the Mayo Clinic, and Allina Health System) receive nearly half of all hospital operating revenue in Minnesota. Amid this consolidation, Minnesota has lost ten hospitals since 2010 and seen per capita spending on hospital care rise from six percent below the national average in 1997 to over eight percent above the national average in 2021, according to Personal Consumption Expenditures data from the Bureau of Economic Analysis.

The Sanford-Fairview hospital merger would have doubled down on these trends. The combination would have given Sanford control of a fifth of Minnesota’s hospitals, with a geographic footprint spanning several corners of the state. The merger also would have created the largest operator of primary care clinics. In addition to the sheer size of the merger, Fairview’s control of the University of Minnesota Medical Center, home to the teaching hospital that trains 70 percent of Minnesota’s doctors, generated labor concerns and provided an opening for passage of tougher regulations on healthcare transactions.

The initial legislative activity around the Sanford-Fairview merger built on the work Attorney General (AG) Keith Ellison began when the transaction was first announced. Ellison’s office held four community meetings across the state to gather input from Minnesotans on the deal, and legislators followed with their own informational hearings. Initial legislative concerns related specifically to granting an out-of-state entity control over a teaching hospital. Because of the work of Ellison’s office alongside organizations like the Minnesota Farmers Union (the author’s employer), the Minnesota Nurses Association, and SEIU-Healthcare Minnesota, legislative discussions turned more broadly to fixing the lack of safeguards Minnesota law provided against healthcare consolidation.

Sanford and Fairview initially failed to provide information Ellison’s office needed to properly investigate the merger, which left Ellison publicly pleading with the systems to delay their initial timeline. While the entities agreed to do so, the delay created uncertainty over whether Ellison’s office would be able to conduct a proper review before the transaction was finalized. 

The law that passed makes three critical changes that help address the obstacles the FTC has run into. First, the law created a robust pre-merger notification regime that will give the Minnesota AG access to a broader set of information than the FTC currently receives under the HSR Act. This requirement is also much broader than the minimal notice requirements that previously existed in state law, and should help avoid a repeat of a key issue during Ellison’s review of the merger. Healthcare entities will now be required to provide specific information to the AG’s Office at the outset. The law also makes the failure to provide this information a reason for blocking a proposed transaction. Health systems will be required to provide geographic information, details on any existing relationships between the merging systems, terms of the transaction, any plans for the new system to reduce workforce or eliminate services as a result of the transaction, any analysis completed by experts or consultants used to facilitate and evaluate the transaction, financial statements, and any federal filings pertaining to the merger including information filed pursuant to the Hart-Scott-Rodino Act. 

Second, the new law requires that health systems provide a financial and economic analysis of the proposed transaction, as well as an impact analysis of the merger’s effects on local communities and local labor. This broad set of information in some ways resembles the changes that the FTC recently proposed to HSR filings. These first two requirements apply to any transaction that involves a healthcare entity that has average annual revenues of $80 million or more or will result in the creation of an entity with annual revenues of $80 million or more. This is a lower revenue threshold than contained in the HSR Act.

Third, the new law establishes a public interest standard for evaluating healthcare transactions. The law spells out a wide range of factors the AG can consider when determining whether a proposed transaction is in the public’s interest. These broad factors include a transaction’s potential impact on the wages, working conditions, or collective bargaining agreements of healthcare workers; on public health; on access to care in affected communities and for underserved populations; on the quality of medical education, workforce training, or research; on access to health services, insurance, or healthcare workers; on costs for patients; and on broader healthcare cost trends.

This broad public interest standard helps ensure that the narrowness of current antitrust law and its mountains of bad case law do not restrict Minnesota’s ability to address the harms of hospital monopolies. Instead of having to fight with courts over technical definitions of healthcare markets, the AG can point to the many harms flowing from consolidation, regardless of whether the transaction is a cross-market merger. In addition to the public interest standard, the law explicitly prohibits any transaction that would substantially lessen competition or tend to create a monopoly or monopsony.

The New Law Soon Will Be Put to Practice

While Sanford-Fairview will no longer provide a potential test case of the new law, two mergers in northern Minnesota were proposed just last month. As policymakers were told throughout the legislative session, Sanford-Fairview was far from the last healthcare merger with which Minnesota would need to grapple. One proposal would combine Minnesota-based Essentia Health with Wisconsin-based Marshfield Clinic Health System into a four-state system stretching across northern North Dakota, Michigan, Minnesota, and Wisconsin. The other proposed merger would fold the small two-hospital St. Luke’s Duluth system into the 17-hospital Wisconsin-based Aspirus Healthcare.

Whether in healthcare or elsewhere in the economy, mergers are not inevitable, nor are they beyond the capacity of state governments to address. With Congressional gridlock and legislative capture posing a challenge to any federal antitrust reforms, states are a necessary battleground for anti-monopolists. Minnesota’s battle with Sanford and Fairview can serve as an instructive model for the rest of the country. Mobilizing state legislators and state AGs to pass bold antitrust reforms and challenge corporate power not only creates a laboratory for these reforms, but also plays an important part in dealing with monopolists in a world where federal enforcers face significant resource and legal constraints.

Justin Stofferahn is Antimonopoly Director for the Minnesota Farmers Union.

If I were to draft new Merger Guidelines, I’d begin with two questions: (1) What have been the biggest failures of merger enforcement since the 1982 revision to the Merger Guidelines? And (2) what can we do to prevent such failures going forward? The costs of under-enforcement have been large and well-documented, and include but are not limited to higher prices, less innovation, lower quality, greater inequality, and worker harms. It’s high time for a course correction. But do the new Merger Guidelines, promulgated by Biden’s Department of Justice (DOJ) and Federal Trade Commission (FTC), do the trick?

Two Recent Case Studies Reveal the Problem

Identifying specific errors in prior merger decisions can inform whether the new Guidelines will make a difference. Would the Guidelines have prevented such errors? I focus on two recent merger decisions, revealing three significant errors in each for a total of six errors.

The 2020 approval of the T-Mobile/Sprint merger—a four-to-three merger in a highly concentrated industry—was the nadir in the history of merger enforcement. Several competition economists, myself included, sensed something was broken. Observers who watched the proceedings and read the opinion could fairly ask: If this blatantly anticompetitive merger can’t be stopped under merger law and the existing Merger Guidelines, what kind of merger can be stopped? Only mergers to monopoly?

The district court hearing the States’ challenge to T-Mobile/Sprint committed at least three fundamental errors. (The States had to challenge the merger without Trump’s DOJ, which embraced the merger for dubious reasons beyond the scope of this essay.) First, the court gave undue weight to the self-serving testimony of John Legere, T-Mobile’s CEO, who claimed economies from combining spectrum with Sprint, and also claimed that it was not in T-Mobile’s nature to exploit newfound market power. For example, the opinion noted that “Legere testified that while T-Mobile will deploy 5G across its low-band spectrum, that could not compare to the ability to provide 5G service to more consumers nationwide at faster speeds across the mid-band spectrum as well.” (citing Transcript 930:23-931:14). The opinion also noted that:

T-Mobile has built its identity and business strategy on insulting, antagonizing, and otherwise challenging AT&T and Verizon to offer pro-consumer packages and lower pricing, and the Court finds it highly unlikely that New T-Mobile will simply rest satisfied with its increased market share after the intense regulatory and public scrutiny of this transaction. As Legere and other T-Mobile executives noted at trial, doing so would essentially repudiate T-Mobile’s entire public image. (emphasis added) (citing Transcript at 1019:18-1020:1)

In the court’s mind, the conflicting testimony of the opposing economists cancelled each other out—never mind that such “cancelling” happens quite frequently—leaving only the CEO’s self-serving testimony as critical evidence regarding the likely price effects. (The States’ economic experts were the esteemed Carl Shapiro and Fiona Scott Morton.) It bears noting that CEOs and other corporate executives stand to benefit handsomely from the consummation of a merger. For example, Activision Blizzard Inc. CEO Bobby Kotick reportedly stands to reap more than $500 million after Microsoft completes its purchase of the video game publishing giant.

Second, although the primary theory of harm in T-Mobile/Sprint was that the merger would reduce competition for price-sensitive customers of prepaid service, most of whom live in urban areas, the court improperly credited speculative commitments to “provide 5G service to 85 percent of the United States rural population within three years.” Such purported benefits to a different set of customers cannot serve as an offset to the harms to urban consumers who benefited from competition between the only two facilities-based carriers that catered to prepaid customers.

Third, the court improperly embraced T-Mobile’s proposed remedy to lease access to Dish at fixed rates—a form of synthetic competition—to restore the loss in facilities-based competition. Within months of the consummated merger, the cellular CPI ticked upward for the first time in a decade (save a brief blip in 2016), and T-Mobile abandoned its commitments to Dish.

The combination of T-Mobile and Sprint eliminated actual competition between two wireless providers. In contrast, Facebook’s acquisition of Within, maker of the most popular virtual reality (VR) fitness app on Facebook’s VR platform, eliminated potential competition, to the extent that Facebook would have entered the VR fitness space (“de novo entry”) absent the acquisition. In the interest of disclosure, I was the FTC’s economic expert. (I encourage everyone to read the critical review of the new Merger Guidelines by Dennis Carlton, Facebook’s expert, in ProMarket, as well as my thread in response.) The district court sided with the FTC on (1) the key legal question of whether potential competition was a dead letter (it is not), (2) market definition (VR fitness apps), and (3) market concentration (dominated by Within). Yet many observers strangely cite this case as an example of the FTC bringing the wrong cases.

Alas, the court did not side with the FTC on the key question of whether Facebook would have entered the market for VR fitness apps de novo absent the acquisition. To arrive at that decision, the court made three significant errors. First, as Professor Steve Salop has pointed out, the court applied the wrong evidentiary standard for assessing the probability of de novo entry, requiring the FTC to show a probability of de novo entry in excess of 50 percent. Per Salop, “This standard for potential entry substantially exceeds the usual Section 7 evidentiary burden for horizontal mergers, where ‘reasonable probability’ is normally treated as a probability lower than more-likely-than-not.” (emphasis in original)

Second, the court committed an error of statistical logic by crediting the lack of internal deliberations in the two months leading up to Facebook’s acquisition announcement in June 2021 as evidence that Facebook was not serious about de novo entry. Three months before the announcement, however, Facebook was seriously considering a partnership with Peloton—the plan was approved at the highest ranks within the firm. Facebook believed VR fitness was the key to expanding its user base beyond young males, and Facebook had entered several app categories on its VR platform in the past with considerable success. Because de novo entry and acquisition are two mutually exclusive entry paths, it stands to reason that, conditional on deciding to enter via acquisition, one would expect to see a cessation of internal deliberation on an alternative entry strategy. After all, an individual standing at a crossroads will consider alternative paths, but upon deciding which path to take and embarking upon it, the previous alternatives become irrelevant. Indeed, the opinion even quoted Rade Stojsavljevic, who manages Facebook’s in-house VR app developer studios, testifying that his enthusiasm for the Beat Saber–Peloton proposal had “slowed down” before Meta’s decision to acquire Within, indicating that the decision to pursue de novo entry was intertwined with the decision to enter via acquisition. In any event, the relevant probability for this potential competition case was the probability that Facebook would have entered de novo in the absence of the acquisition. And that relevant probability was extremely high.
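Put in probabilistic terms (my framing, not the court’s), the question posed by a potential competition case concerns the counterfactual world without the acquisition, not the world in which the acquisition path had already been chosen:

\[ \Pr(\text{de novo entry} \mid \text{no acquisition}) \;\neq\; \Pr(\text{continued de novo deliberations} \mid \text{acquisition chosen}). \]

Because the two entry paths are mutually exclusive substitutes, the second probability being near zero tells us little about the first, which is the one that matters.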

Third, like the court in T-Mobile/Sprint, the district court again credited the self-serving testimony of Facebook’s CEO, Mark Zuckerberg, who claimed that he never intended to enter VR fitness apps de novo. For example, the court cited Mr. Zuckerberg’s testimony that “Meta’s background and emphasis has been on communication and social VR apps,” as opposed to VR fitness apps. (citing Hearing Transcript at 1273:15–1274:22). The opinion also credited the testimony of Mr. Stojsavljevic for the proposition that “Meta has acquired other VR developers where the experience requires content creation from the developer, such as VR video games, as opposed to an app that hosts content created by others.” (citing Hearing Transcript at 87:5–88:2). Because this error overlaps with one of the three errors identified in the T-Mobile/Sprint merger, I have identified five distinct errors (six less one) needing correction by the new Merger Guidelines.

Although the court credited my opinion over Facebook’s experts on the question of market definition and market concentration, the opinion did not cite any economic testimony (mine or Facebook’s experts) on how to think about the probability of entry absent the acquisition.

The New Merger Guidelines

I raise these cases and their associated errors because I want to understand whether the new Merger Guidelines—thirteen guidelines to be precise—will offer the kind of guidance that would prevent a future court from repeating the same (or similar) errors. In particular, would either the T-Mobile/Sprint or Facebook/Within decision (or both) have been altered in any significant way? Let’s dig in!

The New Guidelines reestablish the importance of concentration in merger analysis. The 1982 Guidelines, by contrast, sought to shift the emphasis from concentration to price effects and other metrics of consumer welfare, reflecting the Chicago School’s assault on the structural presumption that undergirded antitrust law. For several decades prior to the 1980s, economists empirically studied the effect of concentration on prices. But as the consumer welfare standard became antitrust’s north star, such inquiries were suddenly considered off-limits, because concentration was deemed to be “endogenous” (or determined by the same factors that determine prices), and thus causal inferences of concentration’s effect on price were deemed impossible. This was all very convenient for merger parties.

Guideline One states that “Mergers Should Not Significantly Increase Concentration in Highly Concentrated Markets.” Guideline Four states that “Mergers Should Not Eliminate a Potential Entrant in a Concentrated Market,” and Guideline Eight states that “Mergers Should Not Further a Trend Toward Concentration.” By placing the word “concentration” in three of thirteen principles, the agencies make it clear that they are resuscitating the prior structural presumption. And that’s a good thing: It means that merger parties will have to overcome the presumption that a merger in a concentrated or concentrating industry is anticompetitive. Even Guideline Six, which concerns vertical mergers, implicates concentration, as “foreclosure shares,” which are bound from above by the merging firms’ market share, are deemed “a sufficient basis to conclude that the effect of the merger may be to substantially lessen competition, subject to any rebuttal evidence.” The new Guidelines restore the original threshold Herfindahl-Hirschman Index (HHI) of 1,800 and delta HHI of 100 to trigger the structural presumption; that threshold had been raised to an HHI of 2,500 and a change in HHI of 200 in the 2010 revision to the Guidelines.
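For readers who want the arithmetic behind these thresholds: the HHI is the sum of squared market shares (measured in percentage points), and the delta is the increase in HHI attributable to the merger. Below is a minimal sketch with purely hypothetical shares; none of the figures describe a real market.

```python
# Hypothetical illustration of the HHI screen discussed above.

def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percentage points)."""
    return sum(s ** 2 for s in shares)

pre_merger = [30, 30, 25, 15]      # four hypothetical firms
post_merger = [30, 30, 25 + 15]    # the two smallest firms combine

pre = hhi(pre_merger)              # 2,650
post = hhi(post_merger)            # 3,400
delta = post - pre                 # 750

print(pre, post, delta)
```

In this hypothetical, the post-merger HHI of 3,400 and delta of 750 would trigger the structural presumption under either the restored thresholds (HHI above 1,800 with a delta above 100) or the 2010 thresholds (2,500 and 200); the point of restoring the lower thresholds is that many moderately concentrated markets that the 2010 screen let through are swept back in.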

This resuscitation of the structural presumption is certainly helpful, but it’s not clear how it would prevent courts from (1) crediting self-serving CEO testimony, (2) embracing bogus efficiency defenses, (3) condoning prophylactic remedies, (4) committing errors in statistical logic, or (5) applying the wrong evidentiary standard for potential competition cases.

Regarding the proper weighting of self-serving employee testimony, error (1), Appendix 1 of the New Guidelines, titled “Sources of Evidence,” offers the following guidance to courts:

Across all of these categories, evidence created in the normal course of business is more probative than evidence created after the company began anticipating a merger review. Similarly, the Agencies give less weight to predictions by the parties or their employees, whether in the ordinary course of business or in anticipation of litigation, offered to allay competition concerns. Where the testimony of outcome-interested merging party employees contradicts ordinary course business records, the Agencies typically give greater weight to the business records. (emphasis added)

If heeded by judges, this advice should limit the type of error we observed in T-Mobile/Sprint and Facebook/Within, in which courts credited the self-serving testimony of CEOs and other high-ranking employees.

Regarding the embrace of out-of-market efficiencies, error (2), Part IV.3 of the New Guidelines, in a section titled “Procompetitive Efficiencies,” offers this guidance to courts:

Merging parties sometimes raise a rebuttal argument that, notwithstanding other evidence that competition may be lessened, evidence of procompetitive efficiencies shows that no substantial lessening of competition is in fact threatened by the merger. When assessing this argument, the Agencies will not credit vague or speculative claims, nor will they credit benefits outside the relevant market. (citing Miss. River Corp. v. FTC, 454 F.2d 1083, 1089 (8th Cir. 1972)) (emphasis added)

Had this advice been heeded, the court in T-Mobile/Sprint would have been foreclosed from crediting any purported merger-induced benefits to rural customers as an offset to the loss of competition in the sale of prepaid service to urban customers. 

Regarding the proper treatment of prophylactic remedies offered by merger parties, error (3), footnote 21 of the New Guidelines states that:

These Guidelines pertain only to the consideration of whether a merger or acquisition is illegal. The consideration of remedies appropriate for otherwise illegal mergers and acquisitions is beyond its scope. The Agencies review proposals to revise a merger in order to alleviate competitive concerns consistent with applicable law regarding remedies. (emphasis added)

While this approach is very principled, the agencies cannot hope to cure a current defect by sitting on the sidelines. I would advise saying something explicit about remedies, including mentioning the history of their failures to restore competition, as Professor John Kwoka documented so ably in his book Mergers, Merger Control, and Remedies (MIT Press 2016).

Finally, regarding courts’ committing errors in statistical logic or applying the wrong evidentiary standard for potential competition cases, errors (4) and (5), the New Merger Guidelines devote an entire guideline (Guideline Four) to potential competition. Guideline Four states that “the Agencies examine (1) whether one or both of the merging firms had a reasonable probability of entering the relevant market other than through an anticompetitive merger.” Unfortunately, there is no mention that reasonable probability can be satisfied at less than 50 percent, per Salop, and the agencies would be wise to add such language to the Merger Guidelines. In defining “reasonable probability,” the Guidelines state that evidence that “the firm has successfully expanded into other markets in the past or already participates in adjacent or related markets” constitutes “relevant objective evidence” of a reasonable probability. In making its probability assessment, the court in Facebook/Within did not credit Facebook’s prior de novo entry into other app categories on Facebook’s VR platform. The Guidelines also state that “Subjective evidence that the company considered organic entry as an alternative to merging generally suggests that, absent the merger, entry would be reasonably probable.” Had it heeded this advice, the court would not have held against the FTC, when assessing the probability of de novo entry absent the merger, the fact that Facebook did not mention the Peloton partnership in the two months prior to the announcement of its acquisition of Within.

A Much Needed Improvement

In summary, I conclude that the new Merger Guidelines offer precisely the kind of guidance that would have prevented the courts in T-Mobile/Sprint and in Facebook/Within from committing significant errors. The additional language suggested here—taking a firm stance on remedies and defining reasonable probability—is really fine-tuning. While this review is admittedly limited to these two recent cases, the same analysis could be undertaken with respect to a broader array of anticompetitive mergers that have been approved by courts since the structural presumption came under attack in 1982. The agencies should be commended for their good work to restore the enforcement of antitrust law.

This piece originally appeared in ProMarket but was subsequently retracted, with the following blurb (agreed-upon language between ProMarket’s Luigi Zingales and the authors):

“ProMarket published the article “The Antitrust Output Goal Cannot Measure Welfare.” The main claim of the article was that “a shift out in a production possibility frontier does not necessarily increase welfare, as assessed by a social welfare function.” The published version was unclear on whether the theorem contained in the article was a statement about an equilibrium outcome or a mere existence claim, regardless of the possibility that this outcome might occur in equilibrium. When we asked the authors to clarify, they stated that their claim regarded only the existence of such points, not their occurrence in equilibrium. After this clarification, ProMarket decided that the article was uninteresting and withdrew its publication.”

The source of the complaint that caused the retraction was, according to Zingales, a ProMarket Advisory Board member. The authors had no contact with that person, nor do we know who it is. We would have welcomed published scholarly debate versus retraction compelled by an anonymous Board Member.

We reproduce the piece in its entirety here. In addition, we provide our proposed revision to the piece, which we wrote to clear up the confusion the first piece was claimed to have created. We will let our readers be the judge of the piece’s interest. Of course, if you have any criticisms, we welcome professional scholarly debate.

(By the way, given that the piece never mentions supply or demand or prices, it is a mystery to us why any competent economist could have thought it was about “equilibrium.” But perhaps “equilibrium” was a pretext for removing the article for other reasons.)

The Antitrust Output Goal Cannot Measure Welfare (ORIGINAL POST)

Many antitrust scholars and practitioners use output to measure welfare. Darren Bush, Gabriel A. Lozada, and Mark Glick write that this association fails on theoretical grounds and that ideas of welfare require a much more sophisticated understanding.

By Darren Bush, Gabriel A. Lozada, and Mark Glick

The discourse on consumer welfare theory seems to have pivoted to the question of whether welfare can be indirectly measured by output. The tamest of these claims is not that output measures welfare, but that, generally, output increases are associated with increases in economic welfare.

This claim, even at its tamest, is false. For one, welfare depends on more than just output, and increasing output may detrimentally affect some of the other factors which welfare depends on. For example, increasing output may cause working conditions to deteriorate; may cause competing firms to close, resulting in increased unemployment, regional deindustrialization, and fewer avenues for small business formation; may increase pollution; may increase the political power of the growing firm, resulting in more public policy controversies and, yes, more lawsuits being decided in its interest; and may adversely affect suppliers. 

Even if we completely ignore those realities, it is still possible for an increase in output to reduce welfare. These two short proofs show that even in the complete absence of these other effects—that is, even if we assume that people obtain welfare exclusively by receiving commodities, which they always want more of—increasing output may reduce welfare. 

We will first prove that it is possible for an increase in output to reduce welfare under the assumption that welfare is assessed by a social planner. Then we will prove it assuming no social planner, so that welfare is assessed strictly via individuals’ utility levels.

The Social Planner Proof 

Here we show that a shift out in a production possibility frontier does not necessarily increase welfare, as assessed by a social welfare function.

Suppose in the figure below that the original production possibility frontier is PPF0 and the new production possibility frontier is PPF1. Let USWF be the original level of social welfare, so that the curve in the diagram labeled USWF is the social indifference curve when the technology is represented by PPF0. This implies that when the technology is at PPF0, society chooses the socially optimal point, I, on PPF0. Next, suppose there is an increase in potential output, to PPF1. If society moves to a point on PPF1 which is above and to the left of point A, or is below and to the right of point B, then society will be worse off on PPF1 than it was on PPF0. Even though output increased, depending on the social indifference curve and the composition of the new output, there can be lower social welfare.
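Stated compactly (in notation not used above, and assuming PPF1 lies strictly outside PPF0): let \( W \) be the social welfare function over the two outputs, let \( z_0 = I \) be the optimum chosen on PPF0, and let the social indifference curve USWF intersect PPF1 at points A and B. Then

\[ z_1 \in \text{PPF}_1,\ z_1 \ \text{above } A \ \text{or below } B \;\Longrightarrow\; W(z_1) < W(z_0), \]

even though every such \( z_1 \) lies outside the old feasible set and so represents more output than was previously attainable. This is an existence claim about possible outcomes, not a claim about which point society would in fact choose.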

The Individual Utility Proof

Next, we continue to assume that only consumption of commodities determines welfare, and we show that when output increases every individual can be worse off. Consider the figure below, which represents an initial Edgeworth Box having solid borders, and a new, expanded Edgeworth Box, with dashed borders. The expanded Edgeworth Box represents an increase in output for both apples and bananas, the two goods in this economy.

The original, smaller Edgeworth Box has an origin for Jones labeled J and an origin for Smith labeled S. In this smaller Edgeworth Box, suppose the initial position is at C. The indifference curve UJ0 represents Jones’s initial level of utility with the smaller Edgeworth Box, and the indifference curve US represents Smith’s initial level of utility with the smaller Box. In the larger Edgeworth Box, Jones’s origin shifts from J to J’, and his UJ0 indifference curve correspondingly shifts to UJ0′. Smith’s US indifference curve does not shift. The hatched areas in the graph are all the allocations in the bigger Edgeworth Box which are worse for both Smith and Jones compared to the original allocation in the smaller Edgeworth Box.

In other words, despite the fact that output has increased, if the new allocation is in the hatched area, then Smith and Jones both prefer the world where output is lower. We get this result because welfare is affected by allocation and distribution as well as by the sheer amount of output, and more output, if mis-allocated or poorly distributed, can decrease welfare.
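In the same spirit, the Edgeworth Box argument is an existence claim (again in notation not used above): with \( c = (c_J, c_S) \) the original allocation at C in the smaller box, there exist feasible allocations \( (a_J, a_S) \) in the larger box such that

\[ U_J(a_J) < U_J(c_J) \quad \text{and} \quad U_S(a_S) < U_S(c_S). \]

The hatched region is exactly this set: aggregate output of both goods is higher, yet both Jones and Smith are worse off, because the loss from a worse allocation between them outweighs the gain in total output.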

GDP Also Does Not Measure Aggregate Welfare

The argument that “output” alone measures welfare sometimes refers not to literal output, as in the two examples above, but to a reified notion of “output.” A good example is GDP.  GDP is the aggregated monetary value of all final goods and services, weighted using current prices. Welfare economists, beginning with Richard Easterlin, have understood that GDP does not accurately measure economic well-being. Since prices are used for the aggregation, GDP incorporates the effects of income distribution, but in a way which hides this dependence, making GDP seem value-free although it is not. In addition, using GDP as a measure of welfare deliberately ignores many important welfare effects while only taking into account output. As Amit Kapoor and Bibek Debroy put it:

GDP takes a positive count of the cars we produce but does not account for the emissions they generate; it adds the value of the sugar-laced beverages we sell but fails to subtract the health problems they cause; it includes the value of building new cities but does not discount for the vital forests they replace. As Robert Kennedy put it in his famous election speech in 1968, “it [GDP] measures everything in short, except that which makes life worthwhile.”

Any industry-specific measure of price-weighted “output” or firm-specific measure of price-weighted “output” is similarly flawed.
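The dependence on prices can be made explicit. Any price-weighted output measure, whether GDP or an industry- or firm-level index, takes the form

\[ Q \;=\; \sum_i p_i \, q_i, \]

where the quantities \( q_i \) are weighted by market prices \( p_i \) that themselves reflect the prevailing income distribution; the index is therefore not the value-free welfare measure it is sometimes taken to be.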

For these reasons, few, if any, welfare economists would today use GNP alone to assess a nation’s welfare, preferring instead to use a collection of “social indicators.”

Conclusion

Output should not be the sole criterion for antitrust policy. We can do a better job of using competition policy to increase human welfare without this dogma. In this article, we showed that we cannot be certain that output increases welfare even in a purely hypothetical world where welfare depends solely on the output of commodities. In the real world, where welfare depends on a multitude of factors besides output—many of which can be addressed by competition policy—the case against a unilateral output goal is much stronger.

Addendum

The Original Sling posting inadvertently left off the two proposed graphs that we drew as we sought to remedy the Anonymous Board Member’s confusion about “equilibrium.” We now add the graphs we proposed. The explanation of the graphs was similar, and the discussion of GNP was identical to the original version.

The Proof If There Is a Social Welfare Function (Revised Graph)


The Individual Utility Proof (Revised Graph)


The New Merger Guidelines (the “Guidelines”) provide a framework for analyzing when proposed mergers likely violate Section 7 of the Clayton Act that is more faithful to controlling law and Congressional intent than earlier Guidelines. The thirteen guidelines presented in the new Guidelines go quite a long way in pulling the Agencies back from an approach that placed undue burden on plaintiffs and ignored important factors, such as the trend toward market concentration and serial mergers, that were addressed by earlier Supreme Court precedent. The Guidelines also incorporate the modern, more objective economics of the post-Chicago school. For these reasons, and others, the Guidelines should be applauded.

Unfortunately, remnants of Judge Bork’s Consumer Welfare Standard remain. In several places the Guidelines describe a merger’s anticompetitive effects in terms of price, quantity (output), product quality or variety, and innovation. These are all effects that operate through demand curves or equilibrium positions in the output market, and thus through consumer surplus, the only goal recognized by the Consumer Welfare Standard.

To their credit, the Guidelines also mention input markets, referring to mergers that decrease wages, lower benefits or cause working conditions to deteriorate. Lower wages reduce labor surplus (rent), a consideration that would come within a Total Trading Partner Surplus approach. However, the traditional goals of antitrust as articulated by Congress and many Supreme Court opinions, including protecting democracy through dispersion of economic and political power, protection of small business, and preventing unequal income and wealth distribution, are conspicuously absent.

The basis for these traditional goals is well known. Prominent economist Stephen Martin has documented the judicial and congressional statements concerning the antitrust goal of dispersion of power. The historical support for the goal of preserving small business can be found in a recent paper by two of the authors of this piece. Lina Khan and Sandeep Vaheesan, and Robert Lande and Sandeep Vaheesan, have laid out the textual support for the antitrust inequality goal. Moreover, welfare economists have empirically demonstrated significant positive welfare effects from democracy, small business formation, and income equality.

Indeed, the Brown Shoe opinion, on which the Guidelines heavily rely, examined whether the lower court opinion was “consistent with the intent of the legislature” which drafted the 1950 Amendments, and the opinion itself refers to the goal of “protection of small businesses” in at least two places. The legislative history of the 1950 Amendment deemed important by the Brown Shoe Court evinced a clear concern that rising concentration will, according to Senator O’Mahoney, “result in a terrific drive toward a totalitarian government.”

The remnants of the Consumer Welfare Standard are most evident in the Guidelines’ rebuttal section on efficiencies. The Guidelines open the section with the recognition that controlling precedent is clear that efficiencies are not a defense to a merger that violates Section 7; accordingly, the section is offered as a rebuttal rather than a defense. In essence, if the merging parties can identify merger-specific and verifiable efficiencies, they can rebut a finding that the merger substantially lessened competition. The Guidelines do not define “efficiencies.” However, the context makes clear that the Guidelines mean to follow previous versions of the Merger Guidelines, which assume that “efficiencies” are primarily cost savings. A defendant can offer a rebuttal to a presumption that a merger may significantly harm competition if such cost savings are passed through to consumers in lower prices to a degree that offsets any potential post-merger price increase. There are at least six reasons why the Agencies should jettison this “efficiency” rebuttal.

First, lower prices resulting from cost savings are quite different from lower prices resulting from entry (rebuttal by entry). New entry reduces concentration, but cost savings at best will only lower the output price, and higher prices (or reduced output) are not the sole problem that results from high concentration, except under a strict Consumer Welfare Standard.

Second, to the extent the Guidelines equate efficiencies with cost savings (as in earlier merger guidelines), they have adopted the businessman’s definition of efficiencies. In contrast, economic theory suggests that some cost savings lower rather than raise social welfare. For example, cost savings from lower wages, greater unemployment, or redistribution between stakeholders can both lower welfare and reduce prices. An increase in consumer or producer surplus that comes at the expense of input supplier surplus can also lower welfare.

Third, only under the output-market half of a surplus theory of economic welfare, which is the original Consumer Welfare Standard, can one clearly link cost savings to economic welfare, because lower cost increases consumer and/or producer surplus. As we show elsewhere, this theory has been thoroughly discredited by welfare economists. In fact, for economists, “efficiency” means only Pareto efficiency. As discussed by Gregory Werden and in Mas-Colell et al.’s leading microeconomics textbook (Chapter 10), the assumptions necessary to ensure that maximizing surplus results in Pareto efficiency are extreme and unrealistic. These assumptions include quasilinear utility, perfectly competitive other markets, and lump-sum wealth redistributions that maximize social welfare. This discredits the surplus approach, which is the only way to reconcile cost savings (the definition of efficiencies implied in the Guidelines) with Pareto efficiency (what efficiencies mean in economic theory).
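For concreteness, the quasilinearity assumption referenced above requires each consumer’s preferences to take the form

\[ u_i(x_i, m_i) \;=\; v_i(x_i) + m_i, \]

where \( m_i \) is spending on everything else, so that an extra dollar of surplus is worth the same to every person and surplus can simply be summed across people. Once preferences depart from this special form, maximizing aggregate surplus no longer guarantees a Pareto-efficient, let alone welfare-maximizing, outcome.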

Fourth, the efficiency section is superfluous. As many economists have recognized, most recently Nancy Rose and Jonathan Sallet, the merging parties are already credited for efficiencies (cost savings) in the “standard efficiency credit” which undergirds Guideline One. After all, absent any efficiencies, why allow any merger that even weakly increases concentration? A concentration screen that allows some mergers and not others must be assuming that all mergers come with some socially beneficial cost savings. Why do we need another rebuttal section when cost savings have already been credited?

Fifth, there is no empirical research suggesting that mergers that increase concentration actually lower costs and pass on sufficient benefits to consumers to constitute a successful rebuttal. As one district court commented, “The Court is not aware of any case, and Defendants have cited none, where the merging parties have successfully rebutted the government’s prima facie case on the strength of the efficiencies.” We have identified nine studies measuring cost savings, productivity gains, or profitability from mergers, spanning the health insurance, banking, utility, manufacturing, beer, and concrete industries. Five of these studies find no evidence of productivity gains or cost reductions. The other four find productivity gains in the form of cost savings, but three of the four report a significant post-merger increase in prices to consumers, and the remaining study does not report price effects. In other words, we have not been able to find any empirical study showing post-merger pass-through of cost savings to consumers. These results are consistent with those of Professor Kwoka, who performs a comprehensive meta-analysis of the price effects of horizontal mergers and finds that post-merger prices at the product level increase by 7.2 percent on average, holding all other influences constant. More than 80 percent of product prices show increases, and those increases average 10.1 percent.

Sixth, even if there were cost savings from mergers, it is unlikely that they would be merger-specific and verifiable. Earlier versions of the Merger Guidelines expressed skepticism that economies of scale or scope could not be achieved by internal expansion (1968 Merger Guidelines) or that cost savings related to “procurement, management or capital costs” would be merger specific (1997 Merger Guidelines). In their article on merger efficiencies, Fisher and Lande write that “it would be extremely difficult for merging firms to prove that they could not attain the anticipated efficiencies or quality improvements through internal expansion.” Louis Kaplow has argued that the ability to use contracting to achieve claimed efficiencies is seriously underappreciated and understudied. Verification of future efficiencies is also inherently problematic. The 1997 Merger Guidelines state that efficiencies related to R&D are “less susceptible to verification.” This problem and other verification hurdles are discussed by Joe Brodley and John Kwoka.

In summary, the New Merger Guidelines could be improved by a footnote in Guideline One clarifying the multiple antitrust goals Congress sought to achieve by preventing concentrated markets through mergers. In addition, the Agencies should take seriously the holdings of at least three Supreme Court opinions, none of which have been overturned (Brown Shoe, Phila. Nat’l Bank, and Procter & Gamble Co.), that (as quoted in the Guidelines) “possible economies [from a merger] cannot be used as a defense to illegality.” There are good reasons to abandon an efficiencies rebuttal as well.

Mark Glick, Pavitra Govindan and Gabriel A. Lozada are professors in the economics department at the University of Utah. Darren Bush is a professor in the law school at the University of Houston.

In 2023, Columbia University announced that it would no longer be participating in US News’ college rankings. At the time, the conventional interpretation of Columbia’s withdrawal was that it signaled the incoming demise of US News’ college rankings. Yet to date, no other elite undergraduate university has followed Columbia in withdrawing from US News’ undergraduate rankings. Could the conventional interpretation about Columbia’s withdrawal be wrong? 

In 1988, Columbia was ranked eighteenth in US News’ college rankings. But in the years that followed, Columbia’s undergraduate rank kept improving. By 2021, Columbia had surged to an all-time high of second. Naturally, Columbia’s breathtaking climb in US News rankings raised questions. What had Columbia done so well? What should Columbia do more of? How could other universities learn from Columbia? Among the people asking these questions was Columbia’s very own math professor, Dr. Michael Thaddeus. Skeptical by nature, he started studying Columbia’s rankings surge. What he found sent shockwaves through higher education.  

In February 2022, Dr. Thaddeus released a 21-page report exposing widespread misrepresentation of data provided by Columbia University to US News’ college rankings. For example, Dr. Thaddeus demonstrated that Columbia’s reported spending per student was inflated by “a substantial portion” of the $1.2 billion spent by its hospital on patient care, a function of the university completely unrelated to education. Because US News’ rankings are calculated, in part, by how much an institution spends per student, this overstatement greatly improved Columbia University’s ranking, or at least that’s what Dr. Thaddeus alleged. 

At first, Columbia intimated that Dr. Thaddeus was mistaken. Eventually Columbia came clean. In September 2022, Mary Boyce, Columbia’s provost, said in a statement, “We deeply regret the deficiencies in our prior reporting and are committed to doing better.” While Columbia’s acknowledgement was a step in the right direction, it was also silent on a crucial question. What possessed Columbia to lie to US News in the first instance?

In the edition of the US News rankings that followed Columbia’s rankings scandal, Columbia was demoted to a rank of eighteenth. The drop from second to eighteenth was incredibly steep, and observers wondered how it was computed. Indeed, Columbia hadn’t submitted any new data to US News following Dr. Thaddeus’s report. Instead, US News seemingly arrived at the new ranking without accurate data from Columbia to correct the inaccurate data from the past. The speculative nature of the ranking was on display, but so too was something else. While some drop for Columbia was surely warranted, was a fall all the way to eighteenth justified? Or was US News making an example out of Columbia?

Finally, in June 2023, Columbia withdrew from US News’ rankings, implying that the US News ranking was reductive, flawed, and distortive. After Columbia’s deeply embarrassing rankings scandal, it’s perhaps not surprising that Columbia would leave the party loudly and in protest. But the more interesting question is, if Columbia felt this way about US News’ ranking, why did it stay at the party so long to begin with? Why did it keep lying to muscle its way into the front of the party? And now that Columbia is gone, why are others refusing to leave the party? What explains all these contradictory facts? 

One theory, the charitable theory, is that the elite college ecosystem is just naturally full of uncoordinated institutions, where each institution pursuing its own interpretation of society’s best interests somehow leads to dysfunction in the aggregate. Per this theory, US News tries its best to create a good ranking but falls short because it is impossible to create a truly objective ranking. Elite colleges are constantly looking to expand access, as evidenced by their commitment to affirmative action, but they are held back by constraints outside their control: resources, regulation, efficiency, and now the courts. Per this theory, the cost of elite higher education rises because of Baumol’s cost disease. And per this theory, Columbia and other elite colleges don’t purposely lie to US News. Instead, elite colleges get mixed up in vague definitions that lead to understandable mistakes in their submissions.

But the charitable theory is sometimes hard to swallow, in light of the facts. With each passing year, a different, more cynical theory feels increasingly plausible. Per this theory, elite colleges aren’t just independent, uncoordinated actors, but members of a commercially collusive cartel. It implies that US News is a vital hub for collusion among the elite colleges, helping elite colleges coordinate systemic scarcity of seats and raise each other’s costs. It means that elite colleges aren’t committed to access but its opposite. Per this alternative, the cost of education rises because of market structure, not natural economic laws; and it suggests that if elite colleges are merely doing what’s in their best interests, it’s in the context of a rigged system they designed and uphold. Such a cynical theory is inherently speculative, but is there an obvious reason to reject the elite college cartel theory outright? 

One obvious reason for objection might be that elite colleges are often in conflict with US News, not in cahoots. To take just one example, Columbia certainly wasn’t doing US News any favors, and US News may have retaliated against Columbia. So, how could US News be a hub for collusion, when it is clearly antagonistic to those that it ranks?   

Lessons from The Toy Cartel

A few decades ago, Toys “R” Us was the dominant toy retailer in America. But Toys “R” Us’ future dominance wasn’t assured. The retail toy market was rapidly changing. Disruptive entrants had created a new form of retail experience, the warehouse club.  By 1992, warehouse club chains like Costco, Sam’s Club, Pace, Price Club, and BJ’s were expanding quickly. 

The secret to the warehouse clubs’ success was that they were able to offer far cheaper prices because they slashed all sorts of operating costs. Club stores opened in places where real estate was cheaper, operated with less staff, and decorated themselves in a spartan way. As the President of Costco testified in the 1990s, “almost invariably our presence in the community is going to have a tendency to drive prices down.” 

For Toys “R” Us, there was reason to be nervous. In the early 90s, Toys “R” Us’ average margin on toys was above thirty percent. Costco’s margin on toys was nine percent. When Toys “R” Us’ chairman was asked whether the warehouse clubs could hurt his business, he responded, “Sure they could hurt us. Yeah.” When asked, “How so?,” he sharply replied, “By selling that product for a price that we couldn’t afford to sell it at. Simple economics.”

Competition was coming, but in the early 1990s, Toys “R” Us was still the toy manufacturers’ largest and most important customer, often buying 30 percent or more of the output of Hasbro, Mattel, and others. So, to prevent warehouse clubs from catching up, Toys “R” Us organized a cartel conspiracy with the toy manufacturers. Toys “R” Us offered the toy manufacturers a stronger relationship with itself, but only if they sold inferior products to the warehouse clubs like Costco. 

The conspiracy worked. As the FTC concluded, “By the end of 1993, all of the big, traditional toy companies were selling to the clubs only on discriminatory terms that did not apply to any other class of retailers.” When the toy manufacturers sold inferior toys to warehouse clubs, fewer consumers bought their toys there. For example, Mattel’s sales to warehouse clubs declined from over $23 million in 1991 to under $8 million in 1993. But it wasn’t pure sacrifice for the toy manufacturers. After all, the toy manufacturers were benefiting from Toys “R” Us’ big purchase orders even as Toys “R” Us was benefiting from suppressing warehouse clubs’ emergence as a threat in the retail toy market. 

Still, it wasn’t an easy cartel to operate. Some of the toy manufacturers wanted it all. They wanted to have a strong relationship with Toys “R” Us and they wanted to secretly increase their sales to the warehouse clubs too. As Toys “R” Us’ then-President Roger Goddu testified, “I would get phone calls all the time from Mattel saying Hasbro has this in the clubs or Fisher Price has that in the clubs…. So that occurred all the time.” Importantly, if one toy manufacturer cheated, the other manufacturers that stayed true lost out on market share. A cartel couldn’t run like that.  

In response, Toys “R” Us had to punish the cheating toy manufacturer for defecting from this toy cartel. Toys “R” Us would withhold its own orders from that cheating firm until it got back in line by pulling out of the warehouse clubs. Punishment from Toys “R” Us was key to making the whole system work. Indeed, the toy manufacturers acknowledged as much, often explaining to Toys “R” Us executives that they wouldn’t be a part of the toy cartel unless their competitors were too.      

The structure of the toy cartel was a variation on a traditional horizontal cartel. Instead of competitors colluding among themselves, a third-party ring leader helped them coordinate. Such a cartel has many names. Sometimes, it is called “hub-and-spoke.” Other times it is known as “rim-and-wheel.” I prefer “head-and-tentacles.” Whatever one calls it, it’s often illegal; and, Toys “R” Us and the toy manufacturers found that out in both Administrative and Appellate Court after the FTC sued them for antitrust violations in 1996. 

Why Rankings Matter

Returning to the elite college market, one major question implicit in the elite college cartel theory is whether the obvious tension between US News and the elite colleges it ranks is consistent with a cartel theory. As the toy cartel demonstrates, however, antagonisms are often a natural part of cartels; far from being inconsistent with collusion, some antagonism may even be evidence of it. Ultimately, wasn’t US News’ demotion of Columbia all the way down to eighteenth, after Columbia got caught cheating, eerily reminiscent of the punishment Toys “R” Us used to dole out to promote cartel compliance?

A different reason to scoff at the elite college cartel theory is that the mechanics of US News’ not-so-rigorous ranking surely couldn’t coordinate the policies of universities, each with billions of dollars, sometimes tens of billions of dollars, in their endowments. How could the satellite dictate the movements of the planet?  

One reason may be that rankings are an odd market. Credit rating agencies are often much smaller than the companies, states, or nations that they rate. Yet credit ratings often have huge effects on the valuations of those larger entities. For the unfamiliar, US News publishes the “definitive” college ranking. While other publications also publish rankings, US News has near monopoly market share, measured by views, in the college rankings market. Within the first few days of its annual releases, US News’ college ranking routinely captures tens of millions of views from anxious students and parents. One study finds that being on US News’ top 25 list can lead a school’s applications to go up between six and ten percent. A 2013 Harvard Business School study found that “a one-rank improvement leads to a 1-percentage-point increase in the number of applications to that college.” Empirically, when Cornell rocketed in US News’ rankings from fourteenth in the fall of 1997 to sixth in the fall of 1998, applications to Cornell rose by over ten percentage points the following cycle. To the extent that competition among elite colleges exists, Stanford Sociologist Mitchell Stevens describes US News’ rankings as “the machinery that organizes and governs this competition.” 

One reason rankings are so influential is that choosing a college is a complicated purchase. Young prospective students, armed with very little information about the colleges themselves, have no real ability to forecast differences between four-year experiences. Therefore, students and their parents often rely on proxy information, and the US News rankings are deeply influential in the admissions process. Plainly, a high US News ranking is a critical input that a college needs to compete in the elite college market. Therefore, because US News is a necessary upstream supplier for elite colleges, it is perfectly positioned to play the hub, consciously or unconsciously. Still, just because students rely on rankings doesn’t explain why students rely on US News’ ranking uniquely. How did US News become so dominant, and why can’t it be replaced?

US News’ path to dominance was paved by elite colleges. The incumbent elite universities of the 1980s implicitly agreed to lend US News an air of credibility, by filling out annual surveys, something that made US News popular with students. In parallel, US News helped the incumbent elite universities extend their incumbency into the future by creating a ranking that rated them highly not for their educational quality, but instead for their wealth and exclusivity. It’s unclear how conscious or unconscious this partnership between US News and elite colleges was. Maybe it was totally unconscious, with both sides merely pursuing their dominant business strategy.

Regardless of the mental state, the descriptive truth is that this basic relationship between elite colleges and US News is still operative today. Harvard helps US News dominate the college rankings market in the present, and US News helps Harvard extend its dominance in the elite college market in the future. Bob Morse, one of the architects of US News’ rankings, admitted as much in an interview in 2009 saying, “When the public sees that the schools are wanting to do better in our rankings, they say, well if the schools want to improve in these rankings, they must be worth looking at. So, in essence, the colleges themselves have been a key factor in giving us the credibility.” Credibility is the most important factor for a ranking to be successful. Students and parents can’t independently adjudicate the quality of a ranking for the same reason that they can’t independently adjudicate the quality of a college: it’s complicated and subjective. Therefore, the ranking with the most credibility wins out with students and parents. And a path of least resistance to such credibility was to get the incumbent elites to qualify US News as worthwhile.   

US News likely knows that it can’t afford to lose the support of the incumbent elite schools; without it, US News couldn’t offer a credible ranking. US News likely knows that it cannot afford the perception of a boycott from those incumbent elite colleges. This may be why US News publishes a ranking that weights selectivity at seven percent, financial resources per student at ten percent, class size at eight percent, student-faculty ratio at one percent, and colleges ranking each other at twenty percent. The US News ranking criteria follow a very simple logic. As Washington Monthly magazine observed in 2000, “the perfect school is rich, hard to get into, harder to flunk out of, and has an impressive name.”

There may well be evidence of hyper-elite colleges lobbying US News for changes to this or that criterion to better serve their needs. Yet even in the absence of such an explicit conspiracy, US News uses criteria that self-justify why incumbents already sit atop the prestige ladder. Presumably, both US News and the elite colleges know this. But the circular logic of US News’ rankings doesn’t just keep the elite colleges on top. It also incentivizes them to become more extreme versions of themselves.

US News incentivizes colleges to pull in more applications so they can reject them. US News incentivizes colleges to keep enrollment growth stagnant. US News incentivizes colleges to raise prices further and spend more money per student. Worse, US News incentivizes less-elite colleges to adopt all the worst parts of the elite colleges, such as out-of-control spending.

Plainly, incentivizing colleges to spend more money to compete in the elite market raises barriers to entry by shifting supply curves up for each participant. Plus, incentivizing colleges to be unduly rejective to attract students forces colleges to undersupply more than they otherwise would. In sum, US News’ rankings incentivize intense market dysfunctions like scarcity on the supply side.   

Per the elite college cartel theory, the role of Toys “R” Us is played by US News. US News allegedly coordinates collusion among all the different elite colleges. But the way US News allegedly coordinates its spokes is subtle and ingenious, as it facilitates seat scarcity coordination through its ranking formula instead of explicit communication. In this narrative, US News became dominant precisely because it chose to play the coordinating role that elite colleges may have wanted, and it played that role just as elite colleges may have wanted it to.

There are facts that support an inference of collusion. For one, links between US News and hyper-elite colleges are deep. In the 2000s and 2010s, Mortimer Zuckerman, the owner of US News, became a mega-donor to Ivy League colleges like Harvard and Columbia. He served on the Board of Trustees at Princeton. In that same period, US News’ monopoly consolidated. The hyper-elite colleges continued to support the regime. They didn’t boycott the rankings. They embraced them, continuing to give US News exclusive answers to surveys that no other ranking receives. When President Obama’s administration sought to roll out a public competitor to US News’ college rankings in 2013, elite college administrators rallied to kill the ranking effort. Instead, the public got a limp scorecard, and US News didn’t face a public, credible competitor. 

For another, elite colleges have horizontal links among themselves. For instance, there is large surface area for coordination in things such as lobbying for government policies, joint research, patent commercialization, and admissions. Moreover, elite colleges often have interlocking governance boards. Jurisprudentially, these types of interlocking links have supported inferences of conspiracy in many antitrust cases. 

Empirically, elite colleges have been pulled into court for allegedly collusive cartel behavior before. The Ivy League colleges were sued by the Department of Justice for fixing prices on financial aid in the 1990s. In 2022, a group of seventeen elite colleges was sued for price-fixing by a class of students on financial aid. The NCAA has been sued many times for cartel tactics that limit compensation for student-athletes. So, might seat scarcity, and exclusion of competitors, be another area where elite colleges collude?

Lastly, another factor supporting an inference of conspiracy is the market dynamics themselves. The demand for seats at elite institutions has proven to be remarkably inelastic. If the Varsity Blues scandal proves anything, it’s that people will go through a lot of trouble to capture a seat at an elite school. Importantly, inelastic demand, in other markets, has attracted cartel formation. For example, OPEC operates in the inelastic oil market. Big tobacco companies operate in the inelastic cigarette market. In those markets, cutting output by one percent has often raised prices by more than one percent, making scarcity a profitable strategy.
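To make that arithmetic concrete, here is a minimal sketch in Python, using hypothetical numbers and an assumed constant-elasticity demand curve rather than data from any actual market, of why cutting output pays when demand is inelastic:

# A minimal sketch with hypothetical numbers: when demand is inelastic
# (elasticity between 0 and -1), a small cut in output raises price more
# than proportionally, so total revenue rises.
def price_from_output(q, q0=100.0, p0=100.0, elasticity=-0.5):
    # Constant-elasticity demand: q/q0 = (p/p0) ** elasticity
    return p0 * (q / q0) ** (1.0 / elasticity)

q0, p0 = 100.0, 100.0
q1 = 0.99 * q0                      # cut output by 1 percent
p1 = price_from_output(q1)          # price rises by roughly 2 percent here
print(f"price change:   {100 * (p1 / p0 - 1):+.2f}%")
print(f"revenue change: {100 * (p1 * q1 / (p0 * q0) - 1):+.2f}%")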

In some markets with inelastic demand, a cartel isn’t needed to produce scarcity because only one or two companies control the entire market anyway. But the elite college market isn’t like that. Many different elite colleges exist. Without some machinery to coordinate scarcity, it likely would not be possible to produce systemic scarcity. 

In a system structured as ours, explicitly or accidentally, Penn’s acceptance rate necessarily drops from 47 percent in 1991 to less than 6 percent in 2023. As authors Carnevale, Schmidt, and Strohl quantify in their book, The Merit Myth, “There are 1.5 million high school seniors with better than an 80 percent chance of graduating from one of the top 193 colleges, but those colleges annually admit only 250,000 freshmen.” As economists Blair and Smetters quantified in 2021, if elite colleges ignored relative prestige and simply maintained student quality, since 1990 their total enrollments would have doubled or tripled. Instead, Harvard, Princeton, Yale, and Stanford only increased enrollment by seven percent between 1990 and 2015.

Let’s be clear: The artificial scarcity in slots, made possible by the ranking system, allows elite colleges to under-produce relative to a world in which such coordination was not possible; and by under-producing, the elite colleges are able to impose supra-competitive prices for admission.   

You can see the market failure all around you. Tuition prices keep rising well beyond the average inflation rate. Scarcity-induced admissions scandals like Varsity Blues continue to pop up. But perhaps the most obvious sign is that the demographics at formerly less exclusive colleges have also gone wacky. The rankings are warping the whole market into copycatting the worst parts of the hyper-elite colleges. Safety schools have morphed into reach schools. Reach schools have slipped out of sight. Seats at American universities have turned needlessly scarce, and the privileged few have largely outcompeted the aspiring many for those seats.  

Indeed, elite colleges might object to the entire conjecture of this article. They might say that there is no fusion between themselves and US News. They might claim that they are increasingly diverging from US News’ rankings. They might point to their boycott of the US News’ law school rankings. Harvard might point out that it pulled out of US News’ medical school rankings. 

Yet it’s not clear if these objections comprehensively rule out a cartel explanation. Even if elite colleges are pulling out of US News’ rankings at the graduate levels, they refuse to do so at the undergraduate level. There is absolutely no evidence that a boycott is forthcoming for the undergraduate rankings. Instead, we see only continuing participation. Importantly, undergraduate rankings matter far more than law school or medical school rankings. They matter more because an elite college’s undergraduate reputation is also often used as a proxy for its graduate programs. 

Pitiful Growth in Seats

Skeptics might also assert that elite colleges have always been exclusive, independently of US News. Elite colleges might argue that the whole reason that they’re elite is because they’re exclusive. This argument is more persuasive on first glance than on close examination. While it’s true that elite colleges must reject some students to maintain class quality, the question is one of degree. 

How exclusive does an elite college need to be? Do we need Ivy League colleges to reject 95 percent of applicants? Or will rejecting 80 percent, as they did in the 1980s, suffice? It is a false argument to assert that growth and quality are necessarily opposed. For example, before the rankings-era, Stanford increased its enrollment by over 250 percent from 1920 to 1970. It managed to stay very elite in that period of time. 

Nor is undue rejection necessary to maintain academic quality. A common quip on Harvard’s campus is that the hardest thing about Harvard is getting in. There are many students of diligent character and great intellect who are routinely rejected by the elite colleges, and those rejections have nothing to do with quality. Instead, those rejections may be the collateral damage that a hub-and-spoke cartel produces. Absent this concern for relative prestige, driven home by the rankings, the elite colleges would naturally admit more students.

Elite colleges might alternatively object that acceptance rates are a poor measure for increasing exclusion. They might argue that each student applies to far more schools than she once did. This is true. But the reason each student applies to more schools today is because she is dramatically more likely to get rejected at each one. If you leave behind acceptance rates, and merely look at raw numbers, the growth in seats at elite colleges has been pitiful. As economists Blair and Smetters explained in 2021, “While college enrollment has more-than doubled since 1970, elite colleges have barely increased supply, instead reducing admit rates.” For example, in the 2005–06 school year, Yale enrolled 1,321 undergrads, and in 2016–17, Yale enrolled a whopping 1,367 students.

A market fundamentalist might argue that the scarcity produced by elite colleges is opening space for formerly less elite colleges like Tulane and BU to fill the new market need by becoming more exclusive themselves. Fundamentalists might argue that the market is responding as it should, by creating more supply to meet the growing demand of qualified students eager for prestigious degrees. But, of course, such fundamentalists miss the crucial point. 

At what cost is Tulane filling in for Penn? The rankings that US News has set up require everyone to get more expensive. So, as more students were rejected by Penn, Tulane experienced an increase in demand for its slots, which justified higher prices. Even then it’s not a real substitute. As economists Blair and Smetters quantified in 2021, the consumer welfare loss of being rejected from Harvard, Yale, Stanford, or Princeton is estimated to be around 140 percent of the mean total tuition, an amount on the order of hundreds of thousands of dollars. This is despite the rise of the so-called substitutes.

In the end, whether the elite colleges have explicitly colluded to produce dysfunction or whether it is some freak accident is probably the least interesting thing about the elite college market to the vast majority of Americans. For the average student coming of age, it doesn’t matter if the elite college market is dysfunctional because of an explicit conspiracy or because of an unfortunate accident of market development. What matters to the applicant is that she may not be accepted to the college of her dreams because rankings incentivize each elite college to slow growth in enrollments. With each new college scandal, a simple fact becomes more and more clear. We need a serious conversation about how to restructure this market.

Sahaj Sharda is a student at Columbia Law School and author of the book The College Cartel.

Over the past two years, heterodox economic theory has burst into the public eye more than ever as conventional macroeconomic models have failed to explain the economy we’ve been living in since 2020. In particular, theories around consolidation and corporate power as factors in macroeconomic trends–from neo-Brandeisian antitrust policy to theories of profit seeking as a driver of inflation–have exploded onto the scene. While “heterodox economics” isn’t really a singular thing–it’s more a banner term for anything that breaks from the well established schools of thought–the ideas it represents challenge decades of consensus within macro- and financial economics. This development, of course, has left the proponents of the traditional models rather perturbed.

One of the heterodox ideas that has seen the most media attention is the idea of sellers’ inflation: the theory that inflation can, at least partially, be a result of companies using economic shocks as smokescreens to exercise their market power and raise the prices they charge. The name most associated with this theory is Isabella Weber, a professor of economics at the University of Massachusetts, but there are certainly other economists who support this theory (and many more who support elements of it but are holding out for more empirical evidence before jumping into the rather fraught public debate.)

Conventional economists have been bristling about sellers’ inflation being presented as an alternative to the more staid explanation of a wage-price spiral (we’ll come back to that), but in recent months there have been extremely aggressive (and often condescending, self-important, and factually incorrect) attacks on the idea and its proponents. Despite this, sellers’ inflation really is not that far from a lot of long standing economic theory, and the idea is grounded in key assumptions about firm behavior that are deeply held across most economic models.

My goal here is threefold: first, to explain what the sellers’ inflation and conventional models actually are; second, to break down the most common lines of attack against sellers’ inflation; third, to demonstrate that, whatever its shortcomings, sellers’ inflation is better supported than the traditional wage-price spiral. Many even seem to recognize this, shifting to an explanation of corporations just reacting to increased demand. As we’ll see, that explanation is even weaker.

What Is Sellers’ Inflation?

The Basic Story

As briefly mentioned above, sellers’ inflation is the idea that, in significantly concentrated sectors of the economy, coordinated price hikes can be a significant driver of inflation. While the concept’s opponents generally prefer to call it “greedflation,” largely as a way of making it seem less intellectually serious, the experts actually advancing the theory never use that term for a very simple reason: it doesn’t really have anything to do with variance in how greedy corporations are. It does rely on corporations being “greedy,” but so do all mainstream economic theories of corporate behavior. Economic models around firm behavior practically always assume companies to be profit maximizing, conduct which can easily be described as greedy. As we’ll see, this is just one of many points in which sellers’ inflation is actually very much aligned with prevailing economic theory.

Under the sellers’ inflation model, inflation begins with a series of shocks to the macroeconomy: a global pandemic causes an economic crash, governments respond with massive fiscal stimulus, and the economy experiences huge supply chain disruptions that are further worsened by the Russian invasion of Ukraine. All of these events caused inflation to increase either by decreasing supply or increasing demand. The stimulus checks increased demand by boosting consumers’ spending power–exactly what they were supposed to do. Both strained supply chains and the sanctions cutting Russia off from global trade restricted supply. Contrary to what some opponents of sellers’ inflation will say, the theory does not deny that the stimulus was inflationary (though some individual proponents might). Rather, sellers’ inflation is an explanation for the sustained inflation we saw over the past two years. Those shocks led to a mismatch between demand and supply for consumer goods, but something kept inflation high even after the effects of those shocks should have waned.

The culprit is corporate power. With such a whirlwind of economic shocks, consumers are less able to tell when prices are rising to offset increases in the cost of production versus when prices are being raised purely to boost profit. This, too, is not at odds with conventional macro wisdom. Every basic model of supply and demand tells us that when supply dwindles and demand soars, the price level will rise. Sellers’ inflation is an explanation of how and why prices rise and why prices will increase more in an economy with fewer firms and less competition. 

Sellers’ inflation is really just a specific application of the theory of rent-seeking, whose roots lie in the theory of economic rent developed by David Ricardo, a contemporary of the father of modern economics, Adam Smith. (Indeed, this point, which I raised nearly a year and a half ago in Common Dreams, was recently explored in a new paper from scholars at the University of London.) As anyone who has ever watched a crime show could tell you, when you want to solve a whodunnit, you need to look at motive, means, and opportunity. The greed (which, again, is at the same level it always is) is the motive. Corporations will always seek to charge as high a price as they can without being dangerously undercut by competitors. Sellers’ inflation doesn’t posit a massive increase in corporate greed, but a unique economic environment that allows firms to act upon the greed they have always possessed.

Concentration is the means; when the market is in the hands of only one or a few firms, it becomes easier to raise prices for a couple of reasons. First, large firms have price-setting power, meaning they control enough of the sector that they are able to at least partially set the going rate for what they sell. Second, when there are only a few firms in a sector, wink-wink-nudge-nudge pricing coordination is much easier. Just throw in some vague but loaded phrases in press releases or earnings calls that you know your competition will read and see if they take the same tack. For simplicity, imagine an industry dominated by two firms, A and B. At any given point, both are choosing between holding prices steady and raising them (assume lowering prices is off the table because it’s unprofitable; let’s keep it simple). This sets up the classic game-theoretical model of the prisoner’s dilemma:

                      A maintains price                     A raises price
B maintains price     A: no change / B: no change           A: profit falls / B: profit rises
B raises price        A: profit rises / B: profit falls     A: profit rises / B: profit rises

In the table above, each cell shows the change in A’s profit and the change in B’s profit. If both hold the price steady, nothing changes; we’re at an equilibrium. If one and only one firm raises prices, the price-hiker will lose money as price-conscious consumers switch to its competitor, who will now see higher profits. This makes the companies averse to raising prices on their own. But if both raise their prices, both will be able to increase their profits. That’s why collusion happens. But wait, isn’t that illegal? Yes, yes it is. But it is nigh on impossible to police implicit collusion, especially when there is a seemingly plausible alternative explanation for price hikes.
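The same logic can be written out as a tiny game and checked for stable outcomes. The payoff numbers in the Python sketch below are hypothetical placeholders, not estimates for any actual industry; the point is simply that both “everyone holds” and “everyone raises” are equilibria, and the second is the more profitable one:

# A minimal sketch of the two-firm pricing game described above.
# Strategies: 0 = maintain price, 1 = raise price.
# Hypothetical payoffs: 0 = no change, -1 = profit falls, +1/+2 = profit rises.
payoffs = {
    (0, 0): (0, 0),    # both maintain: nothing changes
    (1, 0): (-1, 1),   # only A raises: A loses customers, B gains them
    (0, 1): (1, -1),   # only B raises: B loses customers, A gains them
    (1, 1): (2, 2),    # both raise: both earn higher profits
}

def is_equilibrium(a, b):
    # Neither firm can do better by unilaterally switching its strategy.
    pa, pb = payoffs[(a, b)]
    return pa >= payoffs[(1 - a, b)][0] and pb >= payoffs[(a, 1 - b)][1]

for (a, b), (pa, pb) in payoffs.items():
    label = "equilibrium" if is_equilibrium(a, b) else ""
    print(f"A={a}, B={b}: payoffs ({pa:+d}, {pb:+d}) {label}")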

As James Galbraith wrote, in stable periods, firms prefer the safer equilibrium of holding prices relatively stable. As he explains:

In normal times, margins generally remain stable, because businesses value good customer relations and a predictable ratio of price to cost. But in disturbed and disrupted moments, increased margins are a hedge against cost uncertainties, and there develops a general climate of “get what you can, while you can.” The result is a dynamic of rising prices, rising costs, rising prices again — with wages always lagging behind.

And that gets us to opportunity, which is what the macroeconomic shocks provide. Firms probably did experience real increases in their production costs, which gives them good reason to raise their prices…to a point. But what has been documented by Groundwork Collaborative and separately by Isabella Weber and Evan Wasner is corporate executives openly discussing increasing returns using “pricing power,” which is code for charging more than is needed to offset their costs. This is them signaling that they see an opportunity to get to that second equilibrium in the table above, where everyone makes more money. And since that same information and rationale is likely to be present at all of the firms in an industry, they all have the incentive (or greed if you prefer) to do the same. This is easiest to conceptualize in a sector with two firms, but it holds for any sector that remains concentrated. At some point, though, you reach a critical mass where there are one or more firms who won’t go along with it. As the number of firms increases, it becomes more and more probable that at least one won’t just go along, which is why concentration facilitates coordination.
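A back-of-the-envelope way to see why this breaks down as markets become less concentrated (the per-firm probability below is an illustrative assumption, not an estimate): if each firm independently plays along with some fixed probability, the chance that every firm cooperates shrinks quickly as the number of firms grows.

# A minimal sketch: assume each firm independently goes along with a tacit
# price hike with probability p. The chance that all n firms cooperate is
# p ** n, which falls off quickly as n grows.
p = 0.9   # hypothetical per-firm probability of playing along
for n in (2, 4, 8, 20, 50):
    print(f"{n:>3} firms: P(all cooperate) = {p ** n:.3f}")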

And that’s it. In an economy with significant levels of concentration — more than 75 percent of industries in the American economy have become more concentrated since the 1990s — and the smokescreen of existing inflation, corporate pricing strategy can sustain rising prices due to the uncertainty. Now, if you ask twenty different supporters of sellers’ inflation, you’ll likely get twenty slightly different versions of the story. However, the main beats are mostly agreed upon: 1) firms are profit maximizing, 2) they always want to raise prices but usually won’t out of fear of either being undercut by the competition or being busted for illegal collusion, and 3) other inflationary pressures provide some level of plausible deniability, which lowers the potential downside of price increases.

What Evidence Is There?

The evidence available to support theories of sellers’ inflation is one of the main points of contention between its proponents and detractors. Despite that, there is strong theoretical and empirical evidence that backs the theory up.

First is a basic issue of accounting that nobody in the traditional macro camp seems to have a good answer for. Profits are always equal to the difference between revenues (all the money a company brings in) and costs (all the money a company sends out). 

Profits = Revenue – Costs

This is inviolable; that is simply the definition of profits. As I’ve written before, this means that the only two possible ways for a company to increase profits are by generating more revenue or by cutting costs (or a combination of the two, but let’s keep it simple). Costs can’t be the primary driver in our case because we know they’re increasing, not decreasing. Inflationary pressures should still have increased production costs like labor and any kind of input that is imported. Companies also have been adamant about the fact that they are facing rising costs; that’s their whole justification for price hikes. And mainstream economists would agree. They blame lingering inflation on a wage-price spiral, which says that workers demanding higher wages have driven cost increases that force companies to raise prices – resulting in higher inflation. As both sides agree that input costs are rising, the only possible explanation for increased profits is an increase in revenue. Revenue also has itself a handy little formula:

Revenue = Price * Units Sold

While the units sold may have increased, price was the bigger factor. We know this for at least two key reasons: because of evidence showing that output (the units sold) actually decreased and because of the evidence from earnings calls compiled by Groundwork. Executives said their strategy was to raise prices, not to sell more products. And there are two very good reasons to believe the execs: (1) they know their firms better than anyone, and (2) they are legally required to tell the truth on those calls. (That second reason is also evidence of sellers’ inflation on its own; if the theory’s opponents don’t buy the explanation given by the executives to investors, they must think executives are committing securities fraud.)
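As a purely arithmetic illustration of those two identities, here is a minimal sketch with made-up numbers for a single hypothetical firm, splitting a change in profits into its revenue and cost pieces and then splitting the revenue change into price and quantity effects:

# A minimal sketch with made-up numbers, using the identities
# Profits = Revenue - Costs and Revenue = Price * Units Sold.
p0, q0, c0 = 10.00, 1000, 8000.0   # last year: price, units sold, total costs
p1, q1, c1 = 11.50, 980, 8600.0    # this year: price up, output down, costs up

rev0, rev1 = p0 * q0, p1 * q1
profit0, profit1 = rev0 - c0, rev1 - c1

print(f"profit change:     {profit1 - profit0:+.0f}")   # +670
print(f"  from revenue:    {rev1 - rev0:+.0f}")         # +1270
print(f"  from costs:      {-(c1 - c0):+.0f}")          # -600
# Split the revenue change into a price effect and a quantity effect
# (price change valued at old quantity, quantity change valued at new price).
print(f"  price effect:    {(p1 - p0) * q0:+.0f}")      # +1500
print(f"  quantity effect: {(q1 - q0) * p1:+.0f}")      # -230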

In rebuttal to the accounting issue, Brian Albrecht, chief economist at the International Center for Law and Economics, has argued that using accounting identities is wrongheaded:

Just as we never reason from a price change, we need to never reason from an accounting identity. My income equals my savings plus my consumption: I = S + C. But we would never say that if I spend more money, that will cause my income to rise.

This, on its face, seems like a reasonable argument, except all it really shows is that Albrecht doesn’t understand basic math. Tracking just one part of the equation won’t automatically tell us what the others do…duh. But we can track what a variable is doing empirically and use that relationship to make sense of it. We would never say that someone spending more money on consumption causes their income to rise. But we certainly could say that if we observe an increase in personal consumption, then we can reason that either their income increased or their savings decreased. The mathematical definition holds; you just have to actually consider all of the variables. In fact, Albrecht agrees, but warns, “Yes, the accounting identity must hold, and we need to keep track of that, but it tells us nothing about causation.” No, it tells us correlation. Which, by the way, is what econometrics and quantitative analyses tell us about as well. 

The way you get to causation in economics is by tying theory and context to empirical correlations to explain those relationships. Albrecht’s case is just a very reductive view of the actual logic at play. He continues:

After all, any revenue PQ = Costs + Profits. So P = Costs/Q + Profits/Q. If inflation means that P goes up, it must be “caused” by costs or profits.

No, again. Stop it. This is like saying consumption causes income.

Once again, Albrecht is wrong here. This is like saying higher consumption will correspond to either higher income or lower savings. Additionally, there’s a key difference between the accounting identities for income and for profits: income is broken down into consumption and savings after you receive it, whereas costs and revenues must exist before profits. This makes causal inference in the latter much more reasonable; income is determined exogenously to that formula, but profits are endogenous to their accounting identity. 
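Worked through with the same made-up numbers as the earlier sketch, Albrecht’s own rearrangement can be used descriptively: the observed price increase splits, as a matter of accounting, into a unit-cost piece and a unit-profit piece. Nothing about the split asserts causation; it simply reports composition, broadly the kind of decomposition the empirical work cited in the next paragraph performs with real data.

# A minimal sketch (hypothetical numbers): since P = Costs/Q + Profits/Q, a
# change in the average price splits into the change in unit costs plus the
# change in unit profits. This is composition, not causation.
rev0, costs0, q0 = 10000.0, 8000.0, 1000
rev1, costs1, q1 = 11270.0, 8600.0, 980

p0, p1 = rev0 / q0, rev1 / q1           # average price per unit
uc0, uc1 = costs0 / q0, costs1 / q1     # unit costs
up0, up1 = p0 - uc0, p1 - uc1           # unit profits

dp = p1 - p0
print(f"price increase per unit:  {dp:+.2f}")
print(f"  from unit costs:        {uc1 - uc0:+.2f} ({100 * (uc1 - uc0) / dp:.0f}%)")
print(f"  from unit profits:      {up1 - up0:+.2f} ({100 * (up1 - up0) / dp:.0f}%)")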

In addition to these observations, though, there is also a body of economic research that supports the idea of sellers’ inflation. Some of the best empirical evidence comes from this report from the Federal Reserve Bank of Boston, this one from the Federal Reserve Bank of San Francisco, and this one from the International Monetary Fund.

Another key piece of evidence is a Bloomberg investigation that found that the biggest price increases came from the largest firms. If market power were not a factor, then prices should have been rising roughly proportionally across firms, regardless of their size. If anything, large firms’ economies of scale should have cut down on the need to hike prices, especially because basic economic theory tells us that when demand increases, companies want to expand supply, which should have resulted in more products (especially from larger firms with more resources) and a corresponding drop in price increases. And yet, what we actually saw was a drop in production from major companies like Pepsi, which opted instead to increase profits by maintaining a shortfall in supply.

That said, there’s plenty more, including this from the Kansas City Fed, this from Jacob Linger et al., this from French economists Malte Thie and Axelle Arquié, this from the European Central Bank, this one from the Roosevelt Institute, and more. The Bank of Canada has also endorsed the view. It seems unlikely that the Federal Reserve, European Central Bank, and the Bank of Canada have all become bastions of activist economists unmoored from evidence. Perhaps it’s time those denying sellers’ inflation are labeled the ideologues.

The Case Against Sellers’ Inflation

A Few Notes on Semantics

Before we get into the substance of critiques against sellers’ inflation as a theory, there are a few miscellaneous issues with the framing its opponents often use. There is a tendency for arguments against sellers’ inflation to use loaded words or skewed phrasing to implicitly undermine the legitimacy of people who are spearheading the push for greater scrutiny of corporations as a part of managing inflation.

For instance, Eric Levitz says the debate sees “many mainstream economists against heterodox progressives.” This phrasing suggests that the debate is between economists on the one hand and proponents of sellers’ inflation on the other. But that’s not true! There are both economists and non-economists on both sides of the issue. Weber is an economist, as are the researchers at the Boston and San Francisco Feds. And others, including James Galbraith, Paul Donovan, Hal Singer, and Groundwork’s Chris Becker and Rakeen Mabud, are on board. Notably, Lael Brainard, the director of President Biden’s National Economic Council (and former Federal Reserve Vice Chair), recently endorsed the view.

Or take how Kevin Bryan, a professor of management at the University of Toronto, described Isabella Weber as a “young researcher” who “has literally 0 pubs on inflation.” Weber is old enough to have two PhDs and tenure at UMass and–will you look at that–has written about inflation before! Presenting her as young sets the stage for making her seem inexperienced, which saying she has no publications doubles down on. But his claims are false. Weber wrote a paper with Evan Wasner specifically about sellers’ inflation. But even if we take Bryan’s point as true and ignore the very real work Weber has done on inflation and pricing, Weber still has significant experience with political economy, which helps to explain how institutional power is able to influence markets—exactly the type of thinking sellers’ inflation is based upon.

(And this is nothing compared to the abuse that Weber endured after an op-ed in The Guardian provoked a frenzy of insulting, condescending attacks from many professional economists. For more on that, see Zach Carter’s New Yorker profile of Weber and/or this Twitter thread that documents Noah Smith’s outlash at Weber.)

But even the semantics that don’t get into ad hominem territory are confusing. Kevin Bryan, for instance, raised a list of topline concerns in a Twitter thread; let’s run through them quickly:

  1. What does very online even mean? Sellers’ inflation has been embraced as at least a plausible concept by the President of the United States, the European Central Bank, at least two Federal Reserve Banks, and the International Monetary Fund. If that’s not enough legitimization, it’s hard to know what would be. This concern makes it sound like the proponents are random Reddit users, rather than the serious academics and policy makers they are.
  2. I don’t know why the presence of “virulent defenders” undermines the idea itself. Defenders of traditional economics are virulent as well; Larry Summers called the idea of relating antitrust policy to inflation “science denial.”
  3. Traditional monetary policy is often (but not always) associated with centrist, pro-business politics. Also, conventional Industrial Organization theory and even Borkian consumer welfare theories recognize a relationship between price and the structure of firms and markets, so the fundamental ideas are certainly not leftist.
  4. The complaint that proponents of sellers’ inflation cast their critics as gatekeepers shooting down new theories seems disingenuous. Everyone who supports sellers’ inflation would probably rather be discussing it on its merits. But when people like Bryan or Larry Summers refuse to even consider the idea as potentially legitimate, the only option left is to discuss it because of the iconoclasm. If there isn’t a story about changing academic opinions, then the story about challenges to conventional wisdom being shut out by the old guard will have to do.

All of this is to set up the next point in that Twitter thread, which is that “being an Iconoclast is not the same thing as being rigorous, or being right.” True, but dodging the debate by attacking the credibility of an idea’s advocates and taking issue with the method of dissemination are also not the same as being rigorous. Or as being right.

These are just a couple of examples, but opponents of this theory really lean into making it sound like its champions are inexperienced and don’t know what they’re talking about. Aside from being in bad faith, this also indicates a lack of confidence in comparing the contemporary story to that of sellers’ inflation.

The Theoretical Substance of the Opposition

With the semantics out of the way, it’s time to get into the meat of the case(s) against sellers’ inflation. There is no singular, unified case here, but rather a constellation of related ideas.

The first line of defense against theories of sellers’ inflation is asserting that traditional macroeconomics is good and has solved our inflation problem. For example, Chris Conlon of NYU has credited rate hikes with inflation cooling. Conlon says “I for one am glad Powell and Biden admin followed boring US textbook ideas.” But there’s a problem with that: the contemporary economic story does not actually explain how rate hikes can cool inflation without a corresponding rise in unemployment. 

The traditional story starts in the same place as the sellers’ inflation story: macroeconomic shocks create inflation. (Although the traditionalists prefer to emphasize fiscal stimulus as the primary shock, rather than supply chains. The evidence largely indicates that stimulus did have some inflationary effect, but not much. The global nature of inflation also undercuts the idea that American domestic fiscal policy could be the main explanation.) The shock(s) create a supply and demand mismatch, with too much money chasing too few available goods. After that, however, the traditional mechanism for explaining inflation remaining high is supposed to be a wage-price spiral. 

The story goes something like this: the stimulus boosted consumer demand, which overheated the economy, and created more jobs than could be filled, meaning job seekers negotiated higher pay when they took positions. They then spent that extra money which increased demand further, leading to even higher prices as supply couldn’t keep up with demand. Workers saw that their cost of living went up, so they took the opportunity to demand better pay. Companies were forced to give in because they knew in a hot labor market, their workers could leave and earn more elsewhere if employers didn’t meet workers’ demands. Once their wages went up, those workers had more spending power, which they used to buy more things, further increasing demand. That elevated prices more, as the supply-demand mismatch increased. Now workers see their cost of living rising again, so they ask for another raise. If this pattern has held for a few rounds of pay negotiations, maybe workers ask for more than they otherwise would, trying to get out ahead of their spending power shrinking again. Rinse and repeat.

But we know that this story doesn’t describe the inflation that we saw over the last couple of years. Wage growth lagged behind inflation, which indicates that something else had to be driving price increases. Plus the Phillips curve, which is meant to illustrate this relationship between higher employment and higher inflation, has been broken in the US for years. It simply does not show a meaningful positive relationship any more. 

It’s important that we understand this story as a whole. Levitz, in his piece, tries to separate the initial supply-demand mismatch from the wage-price spiral as a way of making the conventional model stack up better against sellers’ inflation. But that doesn’t actually hold because if you omit the wage-price spiral (which Levitz agrees seems dubious), the mainstream macro story has no mechanism for inflation staying high. If it were just a one-time stimulus, that would explain a one-time inflation spike, but once that money is all sent out (say by the end of 2021), there’s no source for further exacerbating the supply-demand mismatch (in say the end of 2022 or early 2023). (Remember, inflation is the rate of change of prices, so if prices spike and then stay the same afterwards, that plateau will reflect a higher price level but not sustained high inflation.) 
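That parenthetical is easy to see with numbers. Below is a minimal sketch using a made-up price index, not actual CPI data: a one-time jump raises the price level permanently, but measured inflation falls back toward zero once the level plateaus, so sustained inflation needs an ongoing mechanism.

# A minimal sketch with a hypothetical price index: a one-off spike in the
# price level shows up as one burst of measured inflation, not sustained
# inflation. Year-over-year inflation is the percent change in the index.
price_index = [100, 101, 109, 110, 110, 110]   # made-up series; jump in year 2

for year in range(1, len(price_index)):
    inflation = 100 * (price_index[year] / price_index[year - 1] - 1)
    print(f"year {year}: price level {price_index[year]:>3}, inflation {inflation:+.1f}%")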

Similarly, focusing on only the supply-side shocks provides no reason for why inflation remained elevated long after supply chain bottlenecks had cleared and shipping prices had fallen.

The incentive shift that occurs in concentrated markets is key to understanding this. In a competitive market, firms’ response to a surge in demand is to produce more. But, when the market is concentrated and some level of implicit coordination is possible, increased production is actually against a firm’s best interest; it would just put the industry back at that first equilibrium from earlier. Firms want to enjoy the high prices and hang out in the second equilibrium for as long as they can.

Sellers’ inflation, at least, has an internal mechanism that can explain how we got from one-off shocks to the economy to sustained inflation. Yet its opponents wrongly describe what that mechanism is. Remember the story from earlier: the motive of profit maximization, the means of market power in concentrated industries, and the opportunity of existing inflation. The most basic objection to this mechanism is to mischaracterize it as blaming sustained upward pressure on prices on an increase in the level of greed among corporations. That’s what economist Noah Smith did in a number of blogs that have aged quite poorly. But no one is seriously arguing companies are greedier, only that there is an innate level of greed, which conventional models also assume. 

The strawmanning continues when we get to the means, which is what this Business Insider piece by Trevon Logan of Ohio State does by pointing out how Kingsford charcoal tried and failed to rent-seek by raising prices, which just caused it to lose market share to retailers’ generic brands. Exactly! The competition in the charcoal market demonstrates why consolidation is a key ingredient in sellers’ inflation. If Kingsford had a product without so many generic substitutes, then consumers would not have had the chance to switch products. And that’s why a lot of the biggest price hikes occurred with goods like gas, meat, and eggs, all of which are controlled by cartel-esque oligopolies.

The opportunity component actually seems to be a point that there’s broad agreement on. For example, Conlon says that the “idea that firms might raise prices by more than their costs is neither surprising nor uncommon.” He goes on to suggest, however, that this is likely because firms expect costs to continue rising. There’s certainly an element of truth to that, but also consider the basic motivation of corporations: maximizing profits. As a result, if companies expect their costs to rise by, say, 5 percent over the next year and they’re going to adjust prices anyway, why not raise prices by 7 percent, more than enough to offset expected cost increases? 

The theoretical case against sellers’ inflation is, as Eric Levitz noted, “deeply confused;” he was just wrong about which side was getting stumped. 

The Empirical Case Against Sellers’ Inflation

The other side of the opposition to sellers’ inflation focuses on the empirics. To be fair, there’s certainly more work that needs to be done. But that’s about as far as the critique goes. The response is just “the data isn’t there.” I’ll refer you to Groundwork’s excellent work on executives saying that they are raising prices beyond costs, Weber’s paper, the Boston and San Francisco Fed papers, Bloomberg’s findings about larger firms charging higher prices, Linger et al.’s case study of concentration and price in rent increases, and the IMF working paper. 

Setting aside the very real empirical evidence in support of sellers’ inflation, the argument about a lack of empirics still gives no reason to default to the traditional model of inflation. Even if we accept a lack of data for sellers’ inflation, we have quite a lot of data that directly contradicts the mainstream story. Surely, something unproven is still preferable to something disproven.

Some economists like Olivier Blanchard have raised questions about methodology and the need for more work. Great! That’s what good discourse is all about; being skeptical of ideas is fine, as long as you don’t throw them out on gut instinct. Unfortunately, critics often simply reject the theory, rather than express skepticism. When they do, however, they often fall into the same methodological gaps they accuse “greedflation” proponents of falling into. For example, Chris Conlon egregiously conflates correlation and causation when crediting the Fed’s monetary policy, and Brian Albrecht takes issue with inductive logic while siding with a traditional story that makes up ever more convoluted, illusory concepts.

So Where Does That Leave Us?

The traditional model of inflation is broken. The Phillips curve is no longer a useful tool for understanding inflation, a wage-price spiral flies in the face of reality, and there’s no viable alternative mechanism for sustained inflation within the demand-side model. Enter sellers’ inflation.

From the same starting point, and drawing on several cornerstone pieces of economic theory, sellers’ inflation is able to provide a consistent vehicle for one-off shocks to create prolonged upward pressure on price levels as firms exercise their market power. The bedrock ideas of the theory are consistent with seminal economic thought from the likes of David Ricardo and even Adam Smith himself, and the theory has the support of a number of subject-matter experts. Is it a perfect theory? No, but to paraphrase President Biden, don’t compare it to the ideal, compare it to the alternative. More empirics would be preferable, but the case for sellers’ inflation remains much stronger than the case for a fiscal stimulus igniting a wage-price spiral, which is at odds with most of the evidence we do have.

One way or another, inflation is trending down and, by some measures, is closing in on the target rate again. Many have rushed to credit the Federal Reserve for following the textbook course, but they don’t have any internal story about how the Fed could have done that without increasing unemployment. As Nobel laureate Paul Krugman (who supported rate hikes and once bashed the theory of sellers’ inflation) asked, “Where’s the rise in economic slack?” The conventional story is missing its second chapter, and yet its advocates are eager to point to an ending they can’t explain as all the justification they need to avoid reconsidering their priors. One possibility Krugman notes, which Matthew Klein explicates here, is that inflation really was transitory the whole time. The sharp upward pressures were, indeed, caused by one-off shocks from the pandemic, supply chains, and Russian aggression, but the effects had unusually long tails. This theory aligns very well with sellers’ inflation; corporate price hikes could simply be the explanation for such long-lasting effects.

Additionally, as Hal Singer pointed out, the recent drop in inflation corresponds to a downturn in corporate profits. Some, including Noah Smith (in that tweet’s comments), disagree and argue that both lower profits and less inflation are caused by new slack in demand. But that doesn’t really match what we’re seeing across macroeconomic data. True, employment growth has slowed, as has the growth of personal consumption, but that still doesn’t match up with the type of disinflationary pressure we were supposed to need; Larry Summers was citing figures as high as 6 percent unemployment. Plus, the metrics that do show demand softening largely only show that employment and consumption are steadying, not decreasing. On top of that, the contraction in output that The Wall Street Journal identified makes the case for simple shifts in demand driving price levels dubious. Finally, if a wage-price spiral were at fault, leveling off employment growth would not be enough; the labor market would still be too tight (that is, inflationary), which is why we would supposedly need to increase unemployment.

Good economic theories always need more work to apply them to new situations and produce quality empirics. But pretending that sellers’ inflation is a wacky idea, and that the conventional macro story maps perfectly onto the economy of the past three years, means thumbing your nose at the most complete story available, significant empirical evidence, and centuries of economic theory.

Dylan Gyauch-Lewis is Senior Researcher at the Revolving Door Project.

The Federal Trade Commission’s scrutiny of Microsoft’s acquisition of game producer Activision-Blizzard did not end as planned. Judge Jacqueline Scott Corley, a Biden appointee, denied the FTC’s motion for preliminary injunction, ruling that the merger was in the public interest. At the time of this writing, the FTC has pursued an appeal of that decision to the Ninth Circuit, identifying numerous reversible legal errors that the Ninth Circuit will assess de novo.

But even critics of Judge Corley’s opinion might find agreement on one aspect: the relative lack of enforcement against anticompetitive vertical mergers in the past 40+ years. As Corley’s opinion correctly observes, United States v. AT&T Inc., 916 F.3d 1029 (D.C. Cir. 2019), is the only court of appeals decision addressing a vertical merger in decades. Absent evolution of the law to account for, among other recent phenomena, the unique nature of technology-enabled content platforms, the starting point for Corley’s opinion is misplaced faith in case law that casts vertical mergers as inherently pro-competitive.

As with horizontal mergers, the FTC and Department of Justice have historically promulgated vertical merger guidelines that outline analytical techniques and enforcement policies. In 2021, the Federal Trade Commission withdrew the 2020 Vertical Merger Guidelines, with the stated intent of avoiding industry and judicial reliance on “unsound economic theories.” In so doing, the FTC committed to working with the DOJ to provide guidance for vertical mergers that better reflects market realities, particularly as to various features of modern firms, including in digital markets.

The FTC’s challenge to Microsoft’s proposed $69 billion acquisition of Activision, the largest proposed acquisition in the Big Tech era, concerns a vertical merger in both existing and emerging digital markets. It involves differentiated inputs—namely, unique content for digital platforms that is inherently not replaceable. The FTC’s theories of harm, Judge Corley’s decision, and the now-pending appeal to the Ninth Circuit provide key insights into how the FTC and DOJ might update the Vertical Merger Guidelines to stem erosion of legal theories that are otherwise ripe for application to contemporary and emerging markets.

Beware of must-have inputs

In describing a vertical relationship, an “input” refers to goods that are created “upstream” of a distributor, retailer, or manufacturer of finished goods. Take for instance the production and sale of tennis shoes. In the vertical relationship between the shoe manufacturer and the shoe retailer, the input is the shoe itself. If the shoe manufacturer and shoe retailer merge, that’s called a vertical merger—and the input in this example, tennis shoes, is characteristic of a replaceable good that vertical merger scrutiny has conventionally addressed. If such a merger were to occur and the newly-merged firm sought to foreclose rival shoe retailers from selling its shoes, rival shoe retailers would likely seek an alternative source for tennis shoes, assuming the availability of such an alternative.

When it comes to assessing vertical mergers in digital content markets, not all inputs are created equal. To the contrary, online platforms, audio and video streaming platforms, and—in the case of Microsoft’s proposed acquisition of Activision—gaming platforms all rely on unique intellectual property that cannot simply be replicated if a platform’s access to that content is restricted. The ability to foreclose access to differentiated content that flows from the merger of a content creator and distributor creates a heightened concern of anticompetitive effects, because rivals cannot readily switch to alternatives to the foreclosed product. This is particularly true when the foreclosed content is extremely popular or “must-have,” and where the goal of the merged firm is to steer consumers toward the platform where it is exclusively available. (See also Steven Salop, “Invigorating Vertical Merger Enforcement,” 127 Yale L.J. 1962 (2018).)

The 2020 Vertical Merger Guidelines fall short in their analysis of mergers involving highly differentiated products. The guidelines emphasize that vertical mergers are pro-competitive when they eliminate “double marginalization,” the mark-ups that independent firms charge at successive levels of the distribution chain. For example, when console makers purchase content from game developers, they may decide to add a mark-up on that content before offering it to consumers. (In the real world of predatory pricing and cross-subsidization, the incentive to add such a mark-up is a more complex business calculation.) Theoretically, the elimination of those stacked markups creates an incentive to lower prices to the end consumer.
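
To see the textbook logic the guidelines lean on, here is a minimal sketch of double marginalization under stylized assumptions: linear demand, one upstream developer, one downstream console maker, and invented numbers chosen only for illustration. (As discussed next, this price effect is only half the picture.)

```python
# Stylized double-marginalization example (illustrative parameters, not market data).
# Consumer demand: P = a - b*Q; the upstream developer produces at marginal cost c.
a, b, c = 100.0, 1.0, 20.0

# Vertically integrated firm: chooses Q to maximize (a - b*Q - c) * Q.
q_integrated = (a - c) / (2 * b)     # 40 units sold
p_integrated = a - b * q_integrated  # retail price of 60

# Separate firms: the upstream developer picks a wholesale price w, knowing the
# downstream retailer will then choose Q to maximize (a - b*Q - w) * Q, i.e. Q = (a - w) / (2b).
w = (a + c) / 2                      # upstream's profit-maximizing wholesale price: 60
q_separate = (a - w) / (2 * b)       # 20 units sold
p_separate = a - b * q_separate      # retail price of 80

print(p_integrated, p_separate)      # 60.0 80.0 -> integration removes the second markup
```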

But this narrow focus on elimination of double marginalization—and theoretical downward price pressure for consumers—ignores how the reduction in competition among downstream retailers for access to those inputs can also degrade the quality of the input. Let’s take Microsoft-Activision as an example. As an independent firm, Activision creates games and downstream consoles engage in some form of competition to carry those games. When consoles compete on terms to carry Activision games, the result is greater investment in game development and higher-quality games. When Microsoft acquires Activision, that downstream competition for exclusive or first-run access to Activision’s games is diminished. Gone is the pro-competitive pressure created by rival consoles bidding for exclusivity, as is the incentive for Activision to innovate and demand greater third-party investment in higher-quality games.

Emphasizing the pro-competitive effects of eliminating double marginalization—even if that means lower prices to consumers—only provides half of the picture, because consumers will likely be paying for lower quality games. Previous iterations of the Vertical Merger Guidelines emphasize the consumer benefits of eliminating double marginalization, but they stop short of assessing the countervailing harms of mergers involving differentiated inputs. They should be updated accordingly.

Partial foreclosure will suffice

During the evidentiary hearings in the Northern District of California, the FTC repeatedly pushed back against the artificially high burden of having to prove that Microsoft had an incentive to fully foreclose access to Activision games. In the midst of an exchange during the FTC’s closing arguments, FTC’s counsel put it directly: “I don’t want to just give into the full foreclosure theory. That’s another artificially high burden that the Defendants have tried to put on the government.” And yet, in her decision, Judge Corley conflates the analysis for both full and partial foreclosure, writing, “If the FTC has not shown a financial incentive to engage in full foreclosure, then it has not shown a financial incentive to engage in partial foreclosure.”

Although agencies have acknowledged that the incentive to partially foreclose may exist even in the absence of total foreclosure (see, for instance, the FCC’s 2011 Order regarding the Comcast-NBCU vertical transaction), the Vertical Merger Guidelines do not make any such distinction. Again, that incomplete analysis hinges in part on the failure to distinguish between types of inputs. Take for instance a producer of oranges merging with a firm that makes orange juice. Theoretically, the merged firm might fully foreclose access to oranges to rival orange juice makers, who may then go in search of alternative sources of oranges. Or the merged firm might supply lower quality produce to rival firms, which may again send them in search of an alternative source.

But a merged firm’s ability and incentive to foreclose look different when foreclosure takes the subtler form of investing less in the functionality of game content with a gaming console, subtly degrading game features, or adding unique features to the merged firm’s platforms in ways that will eventually drive more astute gamers to the merged firm (even though the game in question is technically still available on rival consoles). Such eventualities are perhaps easier to imagine in the context of other content platforms—for example, if news content were less readable on one social media platform than another. When a merged firm has unilateral control over those subtle design and development decisions, this kind of anticompetitive partial foreclosure becomes all the more likely and predictable.

In finding that Microsoft would not have a financial incentive to fully foreclose access to Activision games, Judge Corley’s analysis hinges on a near-term assessment of Microsoft’s financial incentive to elicit game sales by keeping games on rival consoles. (Never mind that Microsoft is a $2.5 trillion corporation that can afford near-term losses in service of its longer-view monopoly ambitions.) Regardless, a theory of partial foreclosure does not mean that Microsoft must forgo independent sales on rival consoles to achieve its ambitions. To the contrary, partial foreclosure would still allow users to purchase and play games on rival consoles. But it also preserves Microsoft’s incentive to gradually steer consumers toward its own console or game subscription service with better gameplay and unique features.

Finally, Judge Corley’s analysis of Microsoft’s incentive to fully foreclose is irresponsibly deferential to statements made by Activision Blizzard CEO Bobby Kotick that the merging entities would suffer “irreparable reputational harm” if games were not made available on rival consoles. Again, by conflating the incentives for full and partial foreclosure, the court ignores Microsoft’s ability to mitigate that reputational harm—while continuing to drive consumers to its own platforms—if foreclosure is only partial.

Rejecting private behavioral remedies

In a particularly convoluted passage in the district court’s order, the Court appears to read an entirely new requirement into the FTC’s initial burden of demonstrating a likelihood of success on the merits—namely, that the FTC must assess the adequacy of Microsoft’s proposed side agreements with rival consoles and third-party platforms to not foreclose access to Call of Duty. Never mind that these side agreements lack any verifiable uniformity, are timebound, and cannot possibly account for incentives for partial foreclosure. Yet, the Court takes at face value the adequacy of those agreements, identifying them as the principal evidence of Microsoft’s lack of incentive to foreclose access to just one of Activision’s several AAA games.

In its appeal to the Ninth Circuit, the FTC seizes on this potential legal error as a basis for reversal. The FTC writes, “in crediting proposed efficiencies absent any analysis of their actual market impact, the district court failed to heed [the Ninth Circuit’s] observation ‘[t]he Supreme Court has never expressly approved an efficiencies defense to a Section 7 claim.’” The FTC argues that Microsoft’s proposed remedies should only have been considered after a finding of liability at the subsequent remedy stage of a merits proceeding, citing the Supreme Court’s decision in United States v. Greater Buffalo Press, Inc., 402 U.S. 549 (1971). Indeed, federal statute identifies the Commission as the expert body equipped to craft appropriate remedies in the event of a violation of the antitrust laws.

In its statement withdrawing the 2020 Vertical Merger Guidelines, the FTC announced it would work with the Department of Justice on updating the guidelines to address ineffective remedies. Presumably, the district court’s heavy reliance on Microsoft’s proposed behavioral remedies is catalyst enough to clarify that they should not qualify as cognizable efficiencies, at least at the initial stages of a case brought by the FTC or DOJ.

If this decision has taught us anything, it is that the agencies can’t come out with the new Merger Guidelines fast enough. In particular, those guidelines must address the competitive harms that flow from the vertical integration of differentiated content and digital media platforms. Even so, updating the guidelines may be insufficient to shift a judiciary so hostile to merger enforcement that it will turn a blind eye to brazen admissions of a merging firm’s monopoly ambitions. If that’s the case, we should look to Congress to reassert its anti-monopoly objectives.

Lee Hepner is Legal Counsel at the American Economic Liberties Project.

At some point soon, the Federal Trade Commission is very likely to sue Amazon over the many ways the e-commerce giant abuses its power over online retail, cloud computing and beyond. If and when it does, the agency would be wise to lean hard on the useful and powerful law at the core of its anti-monopoly authority. 

The agency’s animating statute, the Federal Trade Commission Act and its crucial Section 5, bans “unfair methods of competition,” a phrase Congress deliberately crafted, and the Supreme Court has interpreted, to give the agency broad powers beyond the traditional antitrust laws to punish and prevent the unfair, anticompetitive conduct of monopolists and those companies that seek to monopolize industries. 

Section 5 is what makes the FTC the FTC. Yet the agency hasn’t used its most powerful statute to its fullest capability for years. Today, with the world’s most powerful monopolist fully in the commission’s sights, the time for the FTC to re-embrace its core mission of ensuring fairness in the economy is now.

The FTC appears to agree. Last year, the agency issued fresh guidance for how and why it will enforce its core anti-monopoly law, and the 16-page document read like a promise to once again step up and enforce the law against corporate abuse just as Congress had intended. 

Why Section 5?

The history of Section 5—why Congress included it in the law and how lawmakers expected it to be enforced—is clear and has been spelled out in detail: Congress set out to create an expert antitrust agency that could go after bad actors and dangerous conduct that the traditional anti-monopoly law, the Sherman Act, could not necessarily reach. To do that, Congress crafted Section 5 so that the FTC could stop tactics that dominant corporations devise to sidestep competition on the merits and instead unfairly drive out their competitors. Congress gave the FTC the power to enforce the law on its own, to stop judges from hamstringing the law from the bench, as they have done to the Sherman Act.

As I’ve detailed, the Supreme Court has issued scores of rulings since the 1970s that have collectively gutted the ability of public enforcement agencies and private plaintiffs to sue monopolists for their abusive conduct and win. These cases have names—Trinko, American Express, Brooke Group, and so on—and, together, they dramatically reshaped the country’s decades-old anti-monopoly policy and allowed once-illegal corporate conduct to go unchecked. 

Many of these decisions are now decades old, but they continue to have outsized effects on our ability to police monopoly abuses. The Court’s 1984 Jefferson Parish decision, for example, made it far more difficult to successfully prosecute a tying case, in which a monopolist in one industry forces customers to buy a separate product or service. The circuit court in the government’s monopoly case against Microsoft relied heavily on Jefferson Parish in overturning the lower court’s order to break Microsoft up. More recently, courts deciding antitrust cases against Facebook, Qualcomm and Apple all relied on decades of pro-bigness court rulings to throw out credible monopoly claims against powerful defendants.

Indeed, the courts’ willingness to undermine Congress was a core concern for lawmakers when drafting and passing Section 5. Three years before Congress created the FTC, the U.S. Supreme Court handed down its verdict in the government’s monopoly case against Standard Oil, breaking up the oil trust but also establishing the so-called “rule of reason” standard for monopoly cases. That standard gave judges the power to decide if and when a monopoly violated the law, regardless of the language of, or democratic intent behind, the Sherman Act. Since then, the courts have marched the law away from its goal of constraining monopoly power, case by case, to the point that bringing most monopolization cases under the Sherman Act today is far more difficult than it should be, given the simple text of the law and Congress’ intent when it wrote, debated, and passed the act.

That’s the beauty and the importance of Section 5. Congress knew that the judicial constraints put on the Sherman Act meant it could not reach every monopolistic act in the economy. That’s now truer than ever. Section 5 can stop and prevent unfair, anticompetitive acts without having to rely on precedent built up around the Sherman Act. It’s a separate law, with a separate standard and a separate enforcement apparatus. What’s more, the case law around Section 5 has reinforced the agency’s purview. In at least a dozen decisions, the Supreme Court has made clear that Congress intended for the law to reach unfair conduct that falls outside of the reach of the Sherman Act.

So the law is on solid footing, and after decades of sidestepping the job Congress charged it to do, the FTC appears ready to once again take on abuses of corporate power. And not a moment too soon. After decades of inadequate antitrust enforcement, unfairness abounds, particularly when it comes to the most powerful companies in the economy. Amazon perches atop that list. 

A Recidivist Violator of Antitrust Laws

Investigators and Congress have repeatedly identified Amazon practices that appear to violate the spirit of the antitrust laws. The company has a long history of using predatory pricing as a tactic to undermine its competition, either as a means of forcing companies to accept its takeover offers, as it did with Zappos and Diapers.com, or simply as a way to weaken vendors or take market share from competing retailers, especially small, independent businesses. Lina Khan, the FTC’s chair, has called out Amazon’s predatory pricing, both in her seminal 2017 paper Amazon’s Antitrust Paradox, and when working for the House Judiciary Committee during its big tech monopoly investigation. 

Under the current interpretation of predatory pricing as a violation of the Sherman Act, a company that priced a product below cost to undercut a rival must successfully put that rival out of business and then hike up prices to the point that it can recoup the money it lost with its below-cost pricing. Yet with companies like Amazon—big, rich, with different income streams and sources of capital—it might never need to make up for its below-cost pricing by hiking up prices on any one specific product, let alone the below-cost product. Indeed, as Jeff Bezos’s vast fortune can attest, predatory pricing can generate lucrative returns simply by sending a company’s stock price soaring as it rapidly gains market share. 

If Amazon wants to sell products from popular books to private-label batteries at a loss, it can. Amazon makes enormous profits by taxing small businesses on its marketplace platform and from Amazon Web Services. It can sell stuff below cost forever if it wants to, a clearly unfair method of competing with any other single-product business, all while avoiding prosecution under the judicially weakened Sherman Act. Section 5 can and should step in to stop such conduct.

Amazon’s marketplace itself is another monopolization issue that the FTC could and should address with Section 5. The company’s monopoly online retail platform has become essential for many small businesses and others trying to reach customers. To wit, the company controls at least half of all online commerce, and even more for some products. As an online retail platform, Amazon is essential, suggesting it should be under some obligation to allow equal access to all users at minimal cost. Of course, that’s not what happens; as my organization has documented extensively, Amazon’s captured third-party sellers pay a litany of tolls and fees just to be visible to shoppers on the site. Amazon’s tolls can now account for more than half of the revenues from every sale a small business makes on the platform. 

The control Amazon displays over its sellers mirrors the railroad monopolies of yesteryear, which controlled commerce by deciding which goods could reach buyers and under what terms. Antitrust action under the Sherman Act and legislation helped break down the railroad trusts a century ago. But if enforcers were to declare Amazon’s marketplace an essential facility today, the path to prosecution under the Sherman Act would be difficult at best. 

Section 5’s broad prohibition of unfair business practices could prevent Amazon’s anticompetitive abuses. It could ban Amazon from discriminating against companies that sell products on its platform that compete with Amazon’s own in-house brands, or stop it from punishing sellers that refuse to buy Amazon’s own logistics and advertising services by burying their products in its search algorithm. The FTC could potentially challenge such conduct under the Sherman Act, as a tying case, or an essential facilities case. But again, the pathway to winning those cases is fraught, even though the conduct is clearly unfair and anticompetitive. If Amazon’s platform is the road to the market, then the rules of that road need to be fair for all. Section 5 could help pave the way. 

These are just a few of the ways we could see the FTC use its broad authority under Section 5 to take on some of Amazon’s most egregious conduct. If I had to guess, I imagine the FTC in a potential future Amazon lawsuit will likely charge the company under both the Sherman Act and the FTC Act’s Section 5 for some conduct it feels the traditional anti-monopoly statute can reach, and will rely solely on Section 5 for conduct that it believes is unfair and anticompetitive, but beyond the scope of the Sherman Act in its current, judicially constrained form. For example, while the FTC could potentially use the Sherman Act to address Amazon’s decision to tie success on its marketplace to its logistics and advertising services, the agency’s statement makes clear that Section 5 has been and can be used to address “loyalty rebates, tying, bundling, and exclusive dealing arrangements that have the tendency to ripen into violations of the antitrust laws by virtue of industry conditions and the respondent’s position within the industry.”

Might this describe Amazon’s conduct? Very possibly, but that will ultimately be up to the FTC to decide. Suing Amazon under both statutes would invite the court to make better choices around the Sherman Act that are more critical of monopoly abuses, and help develop the law so that the FTC can eagerly embark on its core mission under Section 5: to help ensure markets are fair for all.

Ron Knox is a Senior Researcher and Writer for ILSR’s Independent Business Initiative.

For those not steeped in antitrust law’s treatment of single-firm monopolization cases, under the rule-of-reason framework, a plaintiff must first demonstrate that the challenged conduct by the defendant is anticompetitive; if successful, the burden shifts to the defendant in the second or balancing stage to justify the restraints on efficiency grounds. According to research by Professor Michael Carrier, between 1999 and 2009, courts dismissed 97 percent of cases at the first stage, reaching the balancing stage in only two percent of cases.

There is a fierce debate in antitrust circles as to what constitutes a cognizable efficiency. In April, the Ninth Circuit upheld Judge Yvonne Gonzalez Rogers’ dismissal of Epic Games’ antitrust case against Apple on the flimsiest of efficiencies.

A brief recap of the case is in order, beginning with the challenged conduct. Epic alleged Apple forces certain app developers to pay monopoly rents and exclusively use its App Store, and in addition requires the use of Apple’s payment system for any in-app purchases. The use of Apple’s App Store, and the prohibition on a developer loading its own app store, as well as the required use of Apple’s payment system are set forth in several Apple contracts developers must execute to operate on Apple’s iOS. The Ninth Circuit found that Epic met its burden of demonstrating an unreasonable restraint of trade, but Epic’s case failed because Apple was able to proffer two procompetitive rationales that the Appellate Court held were non-pretextual and legally cognizable. One of those justifications was that Apple prohibited competitive app stores and required developers to only use Apple’s payment system because it was protecting its intellectual property (“IP”) rights.

Yet neither the District Court nor the Ninth Circuit ever tells us what IP Apple’s restraints are protecting. The District Court opinion states that “Apple’s R&D spending in FY 2020 was $18.8 billion,” and that Apple has created “thousands of developer tools.” But even Apple disputes in a recent submission to the European Commission that R&D has any relationship to the value of IP: “A patent’s value is traditionally measured by the value of the claimed technology, not the amount of effort expended by the patent holder in obtaining the patent, much less ‘failed investments’ that did not result in any valuable patented technology.”

Moreover, every tech platform must invest something to encourage participation by developers and users. Without the developers’ apps, however, there would be few if any device sales. If all that is required to justify exclusion of competitors, as well as tying and monopolization, is the existence of some unspecified IP rights, then exclusionary conduct by tech platforms for all practical purposes becomes per se legal. Plaintiffs challenging these tech platform practices on antitrust grounds are doomed from the start. Even though the plaintiff theoretically can proffer a less restrictive alternative for the tech platform owners to monetize their IP, this alternative per the Ninth Circuit must be “virtually as effective” and “without increased cost.” Again, the deck was already stacked against plaintiffs, and this decision risks making it even less likely for abusive monopolists to be held to account.

Ignoring the Economic Literature on IP

In addition to bestowing virtual antitrust immunity on tech platforms in rule-of-reason cases, there are important reasons why IP should never qualify as a procompetitive business justification for exclusionary conduct. Had the Ninth Circuit consulted the relevant economic literature, it would have learned that IP is fundamentally not procompetitive. Indeed, there is virtually no evidence that patents and copyrights, particularly in software, incentivize or create innovation. As Professors Michele Boldrin and David Levine conclude, “there is no empirical evidence that [patents] serve to increase innovation and productivity…” This same claim could be made for the impact of copyrights as well. Academic studies find little connection between patents, copyright, and innovation. Historical analysis similarly disputes the connection. Surveys of companies further find that the goals of patenting are not primarily to stimulate innovation but instead the “prevention of rivals from patenting related inventions.” Or, in other words, the creation of barriers to entry. Innovation within individual firms is motivated much more by gaining first-mover advantages, moving quickly down the learning curve or developing superior sales and marketing in competitive markets. As Boldrin and Levine explain:

In most industries, the first-mover advantage and the competitive rents it induces are substantial without patents. The smartphone industry—laden as it is with patent litigation—is a case in point. Apple derived enormous profits in this market before it faced any substantial competition.

Possibly even more decisive for innovation are higher labor costs that result from strong unions. Other factors have also been found to be important for innovation. The government is responsible for 57 percent of all basic research, research that has been the foundation of the internet, modern agriculture, drug development, biotech, communications and other areas. Strong research universities are the source of many more significant innovations than private firms. Professor Margaret O’Mara’s recent history of Silicon Valley demonstrates how military contracts and relationships with Stanford University were absolutely critical to the Silicon Valley success story. Her book reveals the irony of how the Silicon Valley leaders embraced libertarian ideologies while at the same time their companies were propelled forward by government contracts.

In an earlier period, the antitrust agencies obtained thousands of compulsory licensing decrees, which were estimated to have covered between 40,000 and 50,000 patents. Professor F.M. Scherer shows how these licenses did not lead to less innovation. Indeed, the availability of this technology led to significant economic advances in the United States. In his book, “Inventing the Electronic Century,” Professor Alfred Chandler documents how Justice Department consent decrees with RCA, AT&T and IBM, which made important patents available even to rivals, created enormous competition and innovation in data processing, consumer electronics, and telecommunications. The evidence suggests that limiting or abolishing patent protection is far more beneficial than preserving it, let alone allowing it to be used to justify anticompetitive exclusion.

Probably the weakest case for the economic value of patents exists in the software industry. Bill Gates, reflecting on patents in the software industry, said in 1991:

If people had understood how patents would be granted when most of today’s ideas were invented and had taken out patents, the industry would be at a complete standstill today…A future start-up with no patents of its own will be forced to pay whatever price the giants choose to impose.

The point is that there is very little support for antitrust courts to elevate IP to a justification for market exclusion. The case for procompetitive benefits from patents is nonexistent, while much evidence supports an exclusionary motive for obtaining IP by big tech firms.

As Professors James Bessen and Michael Meurer show, patents on software are particularly problematic because they have high rates of litigation, are of little value, and many appear to be trivial. In particular, Bessen and Meurer argue that many software patents are obvious and therefore invalid. Moreover, the claim boundaries are “fuzzy” and therefore infringement is expensive to resolve.

When asserted in a rule-of-reason case under the Apple precedent, software patents would seem to escape all scrutiny. The defendant would simply assert IP protection without any obligation to reveal with specificity the nature of the IP. The plaintiff then would have no way to challenge validity or infringement or to be able to demonstrate an ability to design around the defendant’s IP. Instead, the plaintiff must show, per the Ninth Circuit’s opinion, that there is a less restrictive way for the defendant to be paid for its IP that is “virtually as effective” and “without increased cost.” This makes no sense at all. It would make far more sense to force any tech platform that seeks to exclude competitors on the basis of IP to simply file a counterclaim to the antitrust complaint alleging patent or copyright infringement and seeking an injunction that excludes the plaintiff. In such a case, the platform’s IP can be tested for validity. The exclusion by the antitrust defendant can be compared to the patent grant, and patent misuse can be examined.

Ignoring Its Own Precedent

It is unfortunate that the Apple court did not take seriously the Circuit’s earlier analysis in Image Technical Services v. Eastman Kodak. There, Kodak defended its decision to tie its parts and service in the aftermarket by claiming that some of its parts were patented. The Court noted that “case law supports the proposition that a holder of a patent or copyright violates the antitrust laws by ‘concerted and contractual behavior that threatens competition.’” The Kodak Court’s example of such prohibited conduct was tying, a claim made by Epic. Because we know that there are numerous competing payment systems, and because nothing in the Ninth Circuit’s opinion addresses the specifics of Apple’s IP that must be protected, it is likely the case that Apple does not have blocking patents that preclude use of alternative payment systems. And if this is the case, Epic alleged the very situation where the Ninth Circuit earlier (citing Supreme Court precedent) found that a patent or copyright holder violates the antitrust laws. Moreover, the Ninth Circuit thought it was significant that Kodak refused to allow use of both patented or copyrighted products and non-protected products. This may also be true of Apple’s development license in the Epic case. The Court didn’t seem to think an inquiry into what IP was licensed by these agreements was significant.

In sum, use of IP as a procompetitive business justification has no place in rule-of-reason cases. There is no evidence IP is procompetitive, and use of IP as a business justification relieves the antitrust defendant of the burden to demonstrate validity and infringement required in IP cases. It further stacks the deck in rule-of-reason cases against plaintiffs, and unjustly favors exclusionary practices by dominant tech platforms.

Mark Glick is a professor in the economics department of the University of Utah.

The mid-sized town of Springfield maintained a speed limit of 25 miles per hour on a one-mile stretch of Main Street that was home to both an elementary school and a middle school. The speed limit had been in force for decades. Children as young as three walked on the sidewalks and sometimes unexpectedly darted across the street. By forcing drivers to slow down, the speed limit minimized the risk of serious injury and death. While collisions occurred occasionally on this busy road, no pedestrian, driver, or passenger had ever suffered a serious injury. For years, the 25-mph limit attracted little attention, positive or negative, and was accepted by residents as a fact of life in the town.

One day though, a group of prominent businesspeople and professionals petitioned for a change. These local notables called on the mayor to eliminate the speed limit because it contributed to congestion on the important road and delayed drivers from reaching their destination. In their petition, they contended that removal of the speed limit would allow people to spend less time on the road and more time being productive at their place of work and socializing with their near and dear. They commissioned an economic study that concluded that removing the speed limit would allow children visiting their grandparents to spend less time in the car and more time with their doting grandma or grandpa. Attempting to preempt concerns about road safety, they claimed the speed limit was not necessary, as drivers would naturally be concerned for the safety of kids. They argued that police could pull drivers over for reckless behavior or for driving unsafely. Further, drivers who negligently caused injuries or deaths would face serious consequences, including prison. That threat would deter dangerous driving.

Given the standing of opponents of the speed limit, the mayor soon after lifted speed restrictions on the road. He declared, “The 25-mph limit may have worked when we led more leisurely lives and could afford to spend an extra 10 or 15 minutes in traffic. But that is the past; we are all busy people now. The speed limit is an impediment to the smooth flow of traffic today.” He did not dismiss concerns about traffic safety and directed the town’s police force to pull over drivers who drive in an “unreasonably unsafe manner.”

The new system appeared to work fine at first. Vehicles proceeded past the schools much faster than they had previously. Congestion was a thing of the past. As proponents of the repeal predicted, the people of Springfield were getting to spend a little more time with their coworkers, friends, and families.

But the repeal of the speed limit was not an unalloyed benefit for the town. With a local bottleneck relieved, many people stopped using the town’s famous monorail and got into their cars, trucks, and vans instead. Many living near Main Street who had previously walked to nearby grocery stores and restaurants started driving. Although traffic congestion on Main Street had been addressed, it had a cost. Rescinding the speed limit encouraged more driving and increased air pollution.

Some drivers who had scrupulously followed the 25-mph speed limit began to drive more aggressively. Because there was no speed limit, some felt emboldened to drive past the school at 50 mph or faster, so long as they couldn’t spot any children in harm’s way. That speed was not illegal under the letter of the law unless an observing police officer deemed it to be “unreasonably unsafe.” No one knew quite what this meant. It was rumored that police officers considered the time of day, level of traffic, weather conditions, the proximity of children to the road, and the importance of the driver’s trip before passing judgment. When teachers at the elementary school complained that the sound of cars sometimes traveling at 70 mph scared the young children, the mayor said, “While we can’t quantify the subjective terror felt by kids, we can measure the shortened commutes for Springfielders.” To keep children safe, the elementary school ended recess and other outdoor activities for all children up through fourth grade.

Enforcement of the new “unreasonably unsafe” standard also drew concern. When a local executive was pulled over for driving 80 mph, the police officer, whose conversation was recorded on a bodycam, let him off with a friendly “warning,” obsequiously saying, “I get it, sir. You are a busy man. If we had kept the 25 mph as some wanted, you’d be spending time stuck here, instead of tending to your important work.”

But others were not so lucky. Black drivers, especially those driving late-model cars, were frequently pulled over for going 30 mph. That was only five miles per hour faster than the old speed limit, but many officers deemed it “unreasonably unsafe.” The discriminatory pattern of enforcement was impossible to ignore.

Proponents of the new approach dismissed growing criticisms. They said the improved flow of traffic trumped other considerations. They conceded fewer people were taking the monorail and walking for short trips, but insisted these were not “traffic-related” issues. The city should address these problems through other measures, they said. Moreover, discriminatory enforcement was not inherent to the new standard and could be resolved. The mayor pledged to improve police training and socialize officers “not to see color” in performing their duties.

But after one deadly incident, even the strongest proponents were at a loss for defenses. One afternoon, the 20-year-old scion of a local real estate magnate took his new red Ferrari out for a spin. He wanted to test its acceleration and went from zero to 60 mph on Main Street in four seconds. Focused on his immediate aim, he did not notice a 12-year-old schoolboy who had run into the street to retrieve an errant soccer ball, and struck him. The boy was killed instantly. The local prosecutor pledged to prosecute the driver and seek the maximum possible sentence. But whatever the result, no prison sentence would bring the young boy back to life or provide solace to his parents and siblings.

The tragic death of the child made clear to almost everyone that the new system was a failure. While its proponents rationalized or offered solutions for increased driving, forcing schoolchildren indoors, and discriminatory enforcement, they had no ready answers for the clearly avoidable fatality. The old 25-mph speed limit had created modest inconveniences, but it would have prevented the fatal accident. In addition to allowing schoolchildren to play safely outside, the old rule encouraged people to use public transit and to walk and reduced the potential for subjective and discriminatory law enforcement. It was an example of what the economist Gardiner Means called a good “canalizing rule.”

For the past 40 years, the federal judiciary has followed the model of Springfield and overturned or weakened bright-line antitrust rules for mergers and other business practices. For instance, the Supreme Court held that manufacturers dictating resale prices on their goods to retailers and wholesalers through contract—an example of a “vertical restraint” imposed on a firm at another level in the same supply chain—was no longer a categorically illegal practice. In place of such clear “speed limits,” it adopted the rule of reason as its default analytical framework—a standard that requires case-by-case assessment of “effects” and has practically legalized many formerly restricted business practices.

Much like Springfield’s decision gave license to residents to drive as they wish on Main Street, the courts have granted corporate executives broad discretion to compete and grow their enterprises as they wish. In theory, this case-by-case approach allows business leaders to engage in socially beneficial mergers and to use vertical restraints to protect against harmful free riding. But as the story of Springfield shows, legal rules are used not only to decide specific cases but also to structure individual and organizational behavior.

Congressional and regulatory enactment of bright-line rules on mergers and unfair practices would channel business strategy in different and better directions. Strong rules against mergers, such as a general prohibition on all acquisitions by firms with more than a 30% share in any market or $10 billion in total assets, might sacrifice the occasional beneficial consolidation (there are ample grounds to be skeptical of such losses to be sure). Yet these bright-line rules would channel business strategy toward internal expansion and development of new production methods. Similarly, a prohibition on non-compete clauses could prevent an employer from stopping an employee from departing for a rival after receiving valuable training on the job, but it would also encourage employers to retain workers through regular raises and promotions and fair treatment and to use more targeted tools for protecting their proprietary information. And bright-line rules for antitrust enforcement would limit governmental discretion and the ability of unscrupulous officials to reward friendly businesses and punish their perceived enemies. These rules would deprive the CEOs of the largest corporations of autonomy and surely make them unhappy. But for the rest of us, life would be better.

The announced PGA-DP World Tour-LIV Golf “partnership” (read: merger) has sent shockwaves across not only the golf industry but the sports world as well. ESPN reported players reacting with “complete and utter shock” at the announcement, and much of the sports media called the news “stunning.”

Let’s clear the air. None of this was surprising in the slightest.

The golf industry merger is but the latest example of, as the Propellerheads and the legendary Dame Shirley Bassey melodically put it, history repeating. Sports historians and economists have recounted the episodes of consolidation that have precipitated the modern-day U.S. professional sports leagues. These commercial joinders all share a common theme: a response by an entrenched, dominant entity faced with the threat of entry and the prospect of seeing its monopsony power diluted by the crucible of competition.

The real question with which golfers, as the primary interested party, should concern themselves is quo vadimus: where do we go from here? This article aims to shed some light on that question by first recounting what has befallen athletes in other leagues following similar consolidation, examining what similar or differing conditions characterize this golf industry consolidation, and then evaluating what path such conditions presage for current players. Finally, I address what steps players could take to protect their interests from any wage suppression that may result from the merger.

Wage suppression warrants concern here for the same reason it has in other sports cases (not to mention the broader labor market): the leverage of monopsony power. Monopsony power in a labor market reflects an entity’s ability to restrain wages below the levels that would prevail under competitive conditions. Such actions reflect worker exploitation, defined as the ability to reduce wages below the marginal revenue product (MRP) of labor (the marginal revenue generated by the next unit of labor). Under competitive conditions, a firm will hire until the wage equals MRP, just as a seller in a competitive output market will set its price equal to marginal cost; a monopsonist facing an upward-sloping labor supply curve instead restricts hiring and pays a wage below MRP.
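
To see that wedge concretely, here is a minimal sketch under textbook assumptions: a linear labor supply curve and a constant MRP, with all numbers invented purely for illustration (they are not drawn from any golf or sports data discussed in this piece).

```python
# Stylized monopsony example (illustrative parameters only, not real data).
# Labor supply: attracting L workers requires paying a wage w(L) = a + b*L.
# Each worker generates a constant marginal revenue product, MRP.
a, b, MRP = 20.0, 1.0, 100.0

# Competitive benchmark: wage-taking firms hire until the wage equals MRP.
L_comp = (MRP - a) / b          # 80 workers hired
w_comp = MRP                    # wage = 100, i.e., workers are paid their MRP

# Monopsonist: chooses L to maximize (MRP - w(L)) * L, which means hiring
# until MRP equals the marginal factor cost a + 2*b*L.
L_mono = (MRP - a) / (2 * b)    # 40 workers hired
w_mono = a + b * L_mono         # wage = 60, well below MRP

print(w_comp, w_mono)           # 100.0 60.0 -> wages suppressed below workers' MRP
```

The wage-share figures cited later for the NFL, NBA, MLB, and UFC are rough empirical counterparts of this gap between pay and the revenue players generate.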

As the FTC has observed, the exercise of monopsony power in input markets reflects the mirror image of monopoly power in output markets. Historically, the PGA Tour has behaved like a monopsonist: it unilaterally set Tour members’ pay below competitive levels, reduced the input of labor, and excluded competitors. (Those familiar with the NCAA antitrust litigation will immediately notice its similarities to the PGA.)

The PGA Tour’s actions mirror those of other entities that held the same power over workers. Faced with the possibility of increased competition for labor, a monopsonist will commonly seek to either 1) prevent such entry or 2) acquire the entrant, and thus reduce or eliminate workers’ ability to choose among alternatives. Prior to the announced merger, the PGA Tour sought to do just that, by threatening golfers who considered joining LIV.

On that note, let’s start with a quick trip down memory lane to acquaint ourselves with how much this latest merger resembles previous sports industry consolidation.

The AFL sued the NFL, alleging the latter had monopolized the market for professional football leagues; in 1962, the district court ruled against the AFL, finding that the NFL did not have monopoly power. The following year, the 4th Circuit Court of Appeals affirmed the lower court’s finding against the AFL, ending the litigation. In June of 1966, following a series of secret meetings, the two leagues announced the decision to merge. The September 1975 Congressional oversight hearings on the NFL labor-management dispute provide some details of the effects on labor. In his statement, Ed Garvey, Executive Director of the NFL Players’ Association at the time, explained that, between the birth of the AFL and 1966, little if any bargaining between labor and management in the NFL occurred, noting that “Because of competition for player services, salaries nearly tripled, and the NFL was anxious to institute some fringe benefits to attract players to the NFL. When merger plans were announced in the summer of 1966, efforts were mounting within the NFLPA to oppose the merger, but Congress exempted the merger before there could be any serious opposition mounted.” The exemption refers to Congress’ statutory enshrinement of monopoly and monopsony power for major sports leagues in the form of 15 USC Ch. 32, §1291, “Exemption from antitrust laws of agreements covering the telecasting of sports contests and the combining of professional football leagues.” The provision, passed in 1966, amended the Sports Broadcasting Act of 1961 to exempt merging sports leagues from antitrust laws as long as the merger “increases rather than decreases the number of football clubs.”

A similar scenario played out in the history of professional basketball in the United States, in which the rise of the American Basketball Association (ABA) militated against the exercise of monopsony power by owners of National Basketball Association (NBA) teams. As sports economist David Berri observed in the Antitrust Bulletin, while NBA players received a wage share of approximately 27 percent in 1970, “By 1972–1973, the NBA had to increase salaries to prevent players from joining a league that clearly was a legitimate competitor.” At the time, the leagues considered merging; however, the existence of a reserve clause that gave a team control over a player’s mobility represented an untenable situation for the athletes, who sued to block the merger in 1970 (Robertson v. National Basketball Association, 556 F.2d 682 (2d Cir. 1977)). The 1976 settlement culminated in the Oscar Robertson Rule and resulted in the elimination of the reserve clause (also known as the “option clause”) and the rise of the free agency rules that exist today.

National Hockey League (NHL) player wages also benefited from inter-league competition. Like other professional sports leagues, the NHL included a reserve clause in player contracts, bestowing exclusive negotiating rights on the original team with which a player signed. In the early 1970s, the World Hockey Association (WHA) entered the market, seeking to fill demand for teams in various major cities that represented relatively smaller markets than those having NHL teams.

Like LIV Golf, the WHA attracted top players by offering greater mobility and the potential for more lucrative compensation. As a result, it signed sixty-seven players, including the legendary Bobby Hull, in its inaugural 1972 season; for the following year, the WHA also signed another hockey superstar, Gordie Howe. While the NHL sued to block players from leaving for the WHA (just as the PGA threatened potential LIV signees with expulsion), a Massachusetts district court denied injunctive relief, allowing two stars, Gerry Cheevers and Derek Sanderson, to join the WHA. Concurrently, a federal court in Philadelphia enjoined the NHL from taking legal action to enforce its reserve clause. (In total, litigation between the WHA and NHL spanned four separate suits.) The following year, 1973, the NHL abolished the reserve clause, replacing it with a one-year option. The 1975 collective bargaining agreement between the NHL and its players’ association included additional modifications, including guaranteed contracts and enabling players to enter into contracts without an option year. Negotiations between the NHL and WHA culminated in the June 22, 1979 NHL expansion, in which four WHA teams, the Edmonton Oilers, New England (now Hartford) Whalers, Quebec Nordiques (now Colorado Avalanche), and Winnipeg Jets, received NHL expansion slots and the rest of the WHA folded.

At this point, the common thread between the NFL, NBA, NHL and Major League Baseball merits emphasis, as it highlights the dichotomy between the PGA Tour and other major professional sports leagues. While athletes in other leagues have the benefit of a union and a collective bargaining agreement between the players’ association and the league, PGA Tour athletes do not. (Somewhat ironically, the caddies themselves do: The Association of Professional Tour Caddies.) The presence of a union could have cautioned Tiger Woods and Rory McIlroy, who apparently turned down a combined $1.5 billion from LIV only to now find themselves in the same position but sans the lucrative offer.

In response to the threat of entry, the PGA adopted a tripartite strategy that was one part carrot, one part stick, and one part morality play. The latter, of course, referenced the source of LIV’s funding: a Saudi regime run by Crown Prince Mohammed bin Salman, whom the US government found approved the brutal murder of Washington Post journalist Jamal Khashoggi. Of course, the merger exposed the lack of sincerity in such appeals. The carrot refers to the fact that the PGA Tour previously announced more than $100 million in purse increases for 2022, an apparent response to the threat of competition that resembles similar action by the NBA and NFL discussed above. Indeed, the PGA Commissioner acknowledged the threat to the Tour’s margins that LIV represented. As CBS Sports’ Adam Silverstein reported, Monahan explained that “[I]f you just look at the environment we’re in, the PIF was controlling LIV, and we were competing against LIV. [I] felt good about the changes we’d made and the position we were in, but ultimately, to get the competitor off the board — to have them exist as a partner, not as an owner…” The PGA also leveraged its stick, banning golfers who participated in LIV tournaments.

The PGA Tour’s threats of lifetime bans against players who sought to avail themselves financially of LIV’s competition with the PGA Tour also bear close resemblance to the reserve clause that existed in baseball (and other major sports). The reserve clause in a contract bound a professional baseball player to a single team; prior to 1887, baseball contracts stipulated that the player agreed to “abide by the constitution and bylaws of organized baseball,” a phrase remarkably similar to the language used by PGA Tour Commissioner Jay Monahan in his letter to tour members reminding them that “You have made a different choice, which is to abide by the Tournament Regulations you agreed to when you accomplished the dream of earning a PGA TOUR card.”

The 1889 version of the baseball contract included a clause that permitted a team to “reserve” a player for the following season at a rate at least as high as the player’s current-year compensation. As economists James Quirk and Rodney Fort explained in their book “Pay Dirt: The Business of Professional Sports Teams,” interpretation of the clause effectively granted owners a perpetual option to retain a player’s services over his entire career, particularly given that owners agreed not to hire players from other teams’ reserve lists. Subsequently, following the Supreme Court’s decision in Federal Baseball Club v. National League, 259 U.S. 200 (1922), which established baseball’s antitrust exemption, the following clause was incorporated into every player contract from the 1920s into the 1950s:

[If] the player and the club have not agreed upon the terms of such contract [for the next playing season], then … the club shall have the right to renew this contract for the period of one year on the same terms, except that the amount payable to the player shall be such as the club shall fix in said notice…

The reserve clause, in its various incarnations, persisted for nearly a century, binding a player to a team and foreclosing his ability to seek better wages elsewhere. During the 1957 Congressional hearings on organized professional team sports, Major League Baseball reported its revenues for the previous five years. In a recent paper, sports economist David Berri analyzed these data, showing that, during this period, MLB paid players less than 25% of revenues, a clear indication of the wage-restraining effects of the reserve clause. The first major cracks in owners’ hegemony over labor came at the hands of Gold Glove winner Curt Flood in 1969. Upon being traded from St. Louis to the Philadelphia Phillies and told that he had no say in the matter, Flood filed suit, Flood v. Kuhn, 407 U.S. 258 (1972), telling Commissioner Bowie Kuhn that “I do not regard myself as a piece of property to be bought or sold.” While the Supreme Court ruled 5-3 against Flood in 1972, reaffirming baseball’s exemption from the Sherman Act, an agreement between Major League Baseball and the players’ union granted free agency to players with at least six years of tenure beginning after 1976. This agreement accorded greater bargaining power to players and permitted them to capture a higher wage share. The eponymous 1998 Curt Flood Act revoked baseball’s antitrust exemption as it related to the employment of players, allowing them to enjoy the fruits of competition for their services.

More recently, a group of approximately 1,200 fighters who sued the Ultimate Fighting Championship (UFC), the mixed martial arts (MMA) promotion company, won class certification. The Plaintiffs alleged that Zuffa, LLC (d/b/a UFC) sought to exclude competition from other promoters in an attempt to exert monopsony power and restrain fighters’ wages. Specifically, Plaintiffs claimed that 1) Defendants’ use of long-term exclusive contracts prevented fighters from competing elsewhere, 2) Defendants leveraged their power over the labor market to force fighters to re-sign contracts, effectively locking them in indefinitely, and 3) Defendants acquired and shut down rival MMA promoters. Documents revealed in the litigation indicated that fighters’ wage share was only approximately 17 percent. Critically, this figure represented a substantial drop from the approximately 29 percent wage share that existed prior to the UFC’s acquisition of Strikeforce.
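For readers who want the arithmetic behind a “wage share,” the calculation is simply athlete compensation divided by league (or promotion) revenue. The sketch below is illustrative only: the dollar figures are hypothetical, chosen solely to reproduce the roughly 29 percent and 17 percent shares cited above, since the underlying financials are not reported here.

```python
# Illustrative sketch of a wage-share calculation.
# The revenue and payroll figures below are HYPOTHETICAL; only the resulting
# percentages (~29% pre-acquisition, ~17% post-acquisition) come from the article.

def wage_share(athlete_compensation: float, total_revenue: float) -> float:
    """Fraction of revenue paid to athletes."""
    return athlete_compensation / total_revenue

pre_acquisition = wage_share(athlete_compensation=145e6, total_revenue=500e6)   # ~0.29
post_acquisition = wage_share(athlete_compensation=85e6, total_revenue=500e6)   # ~0.17

print(f"Pre-acquisition wage share:  {pre_acquisition:.0%}")
print(f"Post-acquisition wage share: {post_acquisition:.0%}")
print(f"Decline: {(pre_acquisition - post_acquisition) * 100:.0f} percentage points")
```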

So what does all this presage for golfers’ futures? Absent regulatory intervention, nothing positive.

Two primary external forces counteract the exercise of monopsony power: 1) competitive entry and 2) collective bargaining. Lacking the latter, golfers benefited from the former in the form of LIV, which exposed the PGA’s ability to restrain compensation below competitive levels, as evidenced by the purse increases announced once LIV became a viable competitor. Post-merger, golfers have neither and now find themselves facing an even stronger monopsonist. If golfers hope to maintain any semblance of a fair wage split with the newly formed industry leviathan, their primary recourse is to organize a Professional Golfers Union that can bargain collectively on their behalf. Otherwise, the outcome will mirror the UFC and the state of inequality in American society generally: a small cadre of patricians surrounded by a multitude of workers fighting over table scraps. Of course, the newly merged entity will fight tooth and nail against such organization; those with market power seldom, if ever, concede it willingly. Nonetheless, one would do well to remember that the NBA has crossed swords with its Players’ Association numerous times, yet both sides have prospered. And so has the game.
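The wage-suppression mechanism at work here is the standard textbook monopsony result, sketched below for reference rather than drawn from this article: a single buyer facing an upward-sloping labor supply curve profitably sets pay below the marginal revenue product of labor, with the markdown governed by the elasticity of labor supply.

```latex
% Standard monopsony markdown (textbook result; illustrative, not from this article).
% With labor supply L(w) of elasticity \varepsilon, the profit-maximizing monopsonist
% sets MRP_L = w\left(1 + \tfrac{1}{\varepsilon}\right), so the wage satisfies
\[
  w \;=\; \frac{\varepsilon}{1+\varepsilon}\, MRP_L \;<\; MRP_L .
\]
% Competitive entry (a rival league bidding for the same athletes) raises the effective
% elasticity \varepsilon facing any one buyer, pushing w toward MRP_L; collective
% bargaining counters the markdown directly by negotiating over the wage split.
```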

Ted Tatos teaches econometrics at the University of Utah and regularly consults on economic issues involving antitrust, intellectual property, labor, and other matters. He can be reached at ttatos@eaecon.com.