Alissa Cooper and Zander Arnao argue that a lack of competition in social media has allowed dominant platforms to design algorithms that maximize user engagement without concern for user experience, which may produce feelings of negativity and partisanship among users. The authors further contend that alternative algorithmic designs exist that optimize for both engagement and user experience, and that regulation may be necessary to promote these approaches.


When a few ad-funded firms dominate the social media landscape, the resulting concentration incentivizes product designs that fail to maximize consumer surplus, leading to inferior quality and poor user experience. Recent research has shown how social media platforms could be designed to deliver higher quality experiences to consumers, but such designs have not seen widespread adoption in the marketplace.

This problem manifests every day when consumers open their feeds and scroll through the queue of recommended content. While the pathologies of design differ across platforms, studies have documented how the algorithmic systems used by platforms to select and rank content—“recommender systems”—may stoke anger and division, expose consumers to abuse, and even diminish the quality of news diets.

These and other potential harms result from firms’ product design choices made in the absence of significant competitive threats. Firms are not incentivized to maximize product quality because returns to scale, network effects, and lock-in have concentrated social media markets. This article explores the nexus of platform product design and competition and how the resulting incentives inform the design of recommender systems.

Today the design of social media is fine-tuned to manipulate the behavior of consumers. Platforms track user behavior and experimentally iterate on designs to curate an environment that is maximally attention-grabbing and optimized for advertising. While traditional economic models assume that consumer behavior is rational, research demonstrates that consumers often behave impulsively and without reflection, leaving them susceptible to influence. Platforms understand this reality and design their products to encourage this automatic behavior. The resulting product at times leaves consumers feeling negative and with a diminished sense of self-control.

This problem is rooted in the barriers to competition between social media platforms. As a network industry, social media benefits from increasing returns to scale. The more users adopt a platform, the better its value proposition to advertisers and potential new users. As platforms gain in scale, they also collect and synthesize more data, which enables them to deliver more targeted advertising and to match consumers with more engaging content. These market dynamics create significant barriers to entry for new competitors.

Moreover, after a dominant platform becomes established, its position is very difficult to displace. Once a sufficient number of users have joined a platform, it can become so valuable to other users that they adopt it even if they prefer the design of an alternative, reflecting lock-in. This position enables platforms to enjoy widespread adoption without optimizing the user’s experience. Research has shown that what keeps some individuals on social media may be fear of missing out rather than any desirable aspect of the product itself.

Social media platforms are primarily funded by advertising. Because firms are unable to raise prices on the consumer side of the platform, they face strong incentives to maximize advertisers’ willingness to pay. If many viable substitutes were present in the market, platforms would face the opposite incentive: to orient design around what consumers want in order to attract and retain them. But when competition is weak, bias toward advertisers can incentivize excessive data collection and steer innovation toward raising advertiser demand. Existing social media platforms do not face significant entry threats from new competitors, and the ad-supported competitors they do have all operate under the same set of incentives.

This dynamic especially diminishes product quality in the design of recommender systems employed by popular social media platforms. A recommender system operates by mapping the universe of content a user could potentially be interested in and algorithmically identifying the content most advantageous to a platform’s goals. To do this, recommender systems process data about users, their behaviors, the content itself, and the context (such as time of day and location) to predict each user’s likelihood of engagement with an individual piece of content. “Engagement” refers to actions taken by users on recommended content, such as clicks, likes, comments, reposts, watch time, and many other actions.

This practice of maximizing the predicted probability of engagement is the dominant paradigm for how social media recommender systems are designed. Platforms are incentivized to design their recommender systems in this way because user actions are a reliable indicator of attention, and more attention means a greater ability to monetize through advertising. Each platform (and feed) optimizes for specific forms of predicted engagement that vary from platform to platform, but all follow the same high-level paradigm. For example:

  • Facebook and Instagram’s recommender system makes predictions about how likely users are to tap, watch, and otherwise respond to recommended content, although predictions differ across media types (e.g., different engagement predictions are used for ranking stories and posts).
  • X’s recommender system optimizes for behaviors such as likes, reposts, and replies, assigning ranking scores based on predictions about user behavior. X sources roughly half of its recommended posts from accounts the user follows and is likely to engage with, and the other half from posts engaged with by users who have similar interests.
  • TikTok’s recommender system has two key metrics—maximizing time spent on the app and continued use over time—and predicts how likely a user is to like, comment, and watch posts.
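
To make the paradigm concrete, here is a minimal sketch, in Python, of how an engagement-maximizing ranker might combine predicted probabilities of different user actions into a single score. The actions, weights, and structure are illustrative assumptions rather than any platform’s documented implementation.

```python
# Minimal sketch of engagement-weighted ranking (illustrative, not any platform's code).
# Upstream models supply per-action engagement predictions for each candidate item;
# the ranker combines them into one score and surfaces the highest-scoring items.

from dataclasses import dataclass

@dataclass
class Candidate:
    item_id: str
    p_click: float                 # predicted probability of a click (assumed model output)
    p_like: float                  # predicted probability of a like
    p_reshare: float               # predicted probability of a reshare/repost
    expected_watch_seconds: float  # predicted watch time

# Hypothetical business weights; real systems tune these per platform and per feed.
WEIGHTS = {"click": 1.0, "like": 2.0, "reshare": 4.0, "watch_seconds": 0.05}

def engagement_score(c: Candidate) -> float:
    """Combine predicted engagement signals into a single ranking score."""
    return (
        WEIGHTS["click"] * c.p_click
        + WEIGHTS["like"] * c.p_like
        + WEIGHTS["reshare"] * c.p_reshare
        + WEIGHTS["watch_seconds"] * c.expected_watch_seconds
    )

def rank_feed(candidates: list[Candidate], k: int = 10) -> list[Candidate]:
    """Return the top-k candidates by predicted engagement, highest first."""
    return sorted(candidates, key=engagement_score, reverse=True)[:k]
```

Whatever the specific actions and weights, the logic is the same: items predicted to elicit the most engagement rise to the top of the feed, regardless of whether users find them valuable.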

Yet maximizing predicted engagement is not the only way to design a recommender system. In theory, these systems could be designed to maximize a wide variety of other values, such as self-expression, informativeness, or safety. Researchers have implemented and tested designs for recommender systems intended to advance values other than engagement, and some of these have seen sparse deployment in limited contexts on social media platforms.

For example, so-called “bridging systems” are recommender systems designed to increase mutual trust and understanding across social divides online. Bridging systems do this by recommending content that receives approval from diverse audiences. The deployment of bridging systems on social media has thus far been confined to community notes that provide context about user-generated content and has yet to see use in platforms’ main feeds.
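
One simple way to operationalize “approval from diverse audiences” is to reward items whose positive signals are spread across viewpoint clusters rather than concentrated in one. The toy Python sketch below illustrates that idea, assuming users can be assigned to hypothetical viewpoint clusters; deployed bridging systems, such as the model behind community notes, are considerably more sophisticated.

```python
# Toy sketch of a bridging score: reward content approved by users from different
# viewpoint clusters, not just content with the most total approvals.
# Cluster labels and the scoring formula are illustrative assumptions.

from collections import Counter
from math import log

def bridging_score(approving_clusters: list[str]) -> float:
    """
    approving_clusters: one entry per approval, naming the viewpoint cluster of the
    approving user (e.g., inferred from past behavior). The score is high only when
    approvals are both numerous and spread evenly across clusters.
    """
    counts = Counter(approving_clusters)
    total = sum(counts.values())
    if total == 0 or len(counts) < 2:
        return 0.0  # approval from a single cluster is not "bridging"
    # Evenness: normalized entropy of the cluster distribution (1.0 = perfectly balanced).
    entropy = -sum((c / total) * log(c / total) for c in counts.values())
    evenness = entropy / log(len(counts))
    return evenness * log(1 + total)  # damp raw volume so virality alone cannot dominate

# Example: an item approved by two clusters outscores one approved by only one.
print(bridging_score(["a", "a", "b", "b"]))  # balanced approvals -> positive score
print(bridging_score(["a", "a", "a", "a"]))  # one-sided approvals -> 0.0
```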

Another design approach is to produce content recommendations based on user responses to surveys displayed in their feeds. For instance, surveys can ask users about recent negative experiences or about their feelings toward outgroups. To make up for the fact that survey data is typically sparse, platforms can combine survey data with other user data to predict user responses to these questions. This ranking strategy may help detect unwanted experiences and surface higher-quality content. Platforms use survey data today, but they put far more emphasis on predicted engagement because it generates significantly more data.
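
A minimal sketch of this approach, assuming a hypothetical “worth your time” survey question and a handful of behavioral features, might look like the following. The features, model, and blending weight are illustrative, not any platform’s documented design.

```python
# Illustrative sketch: predict a sparse survey signal from behavioral features, then
# blend that prediction with predicted engagement when ranking. The "worth your time"
# survey question, the features, and the weights are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Only a small minority of impressions carry a survey label (1 = user said the item
# was worth their time, 0 = not). Features: [p_click, p_reshare, watch_seconds].
X_surveyed = np.array([
    [0.9, 0.1, 30.0],
    [0.8, 0.4, 10.0],
    [0.2, 0.0, 90.0],
    [0.1, 0.0,  5.0],
])
y_surveyed = np.array([1, 1, 0, 0])

# Train a model to impute the survey response for the vast majority of unlabeled items.
quality_model = LogisticRegression().fit(X_surveyed, y_surveyed)

def blended_score(features: np.ndarray, engagement_score: float, alpha: float = 0.5) -> float:
    """Mix the predicted 'worth your time' probability with a predicted-engagement score."""
    p_quality = quality_model.predict_proba(features.reshape(1, -1))[0, 1]
    return alpha * p_quality + (1.0 - alpha) * engagement_score

# Example: with the same engagement score, an item predicted to be low quality ranks lower.
print(blended_score(np.array([0.9, 0.1, 25.0]), engagement_score=0.7))
print(blended_score(np.array([0.1, 0.0, 80.0]), engagement_score=0.7))
```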

Encouraging the adoption of alternative designs for recommender systems is important because maximizing predicted engagement may promote negative user experiences. Optimizing for predicted engagement assumes that what consumers do accurately reflects their preferences. However, ample research has demonstrated that there is often a gap between a user’s observed behavior and their stated preferences. As a result, although consumers may engage with content, this may reflect a loss of control that they later regret (e.g., staying up late just to scroll) rather than a satisfying, positive experience.

To illustrate how this occurs, consider an analogy: engaging with content on social media is like eating potato chips at a party. When someone attending a party eats a whole bowl of chips, the party’s host might assume that this is a sign to refill the bowl. But perhaps the guest is eating impulsively, even though their long-term goal is to eat healthier food. The guest’s impulsive behavior may not reflect their underlying preferences, yet the host straightforwardly interprets it as a signal of what the guest wants.

Designing recommender systems to maximize certain forms of predicted engagement makes the same mistake. Impulsively scrolling, liking, clicking, and dwelling on content (say, content that is risky in some way) does not necessarily align with the user’s forward-looking desires. Yet a platform that optimizes its algorithm based on engagement predictions from impulsive behaviors is akin to a host who keeps pushing potato chips on their guests.

This insight has important implications for the surplus consumers derive from using social media. Consumers’ aggregate willingness to pay for access to social media platforms (a direct measure of their quality) would likely be greater if recommender systems were designed to achieve goals more aligned with consumer welfare. Designs that continue to be optimized for maximizing certain forms of predicted engagement are lower in quality and produce less consumer surplus than the recommender systems that would emerge if competitive pressures were greater.

The ubiquity of engagement-based recommender systems also likely drives some consumers off social media platforms. The intensity of demand for access to platforms varies between consumers, and although some may prefer to use social media platforms, they may refrain from doing so because the few available options are all designed to maximize predicted engagement based on impulsive behaviors. This digital monoculture limits the array of experiences available to consumers on social media platforms, meaning that some users may choose to exit the market.

Some may argue that TikTok’s entry complicates the argument that the lack of competitive pressure degrades the quality of product design. Indeed, since 2020 the platform has exploded in popularity by designing a recommender system for its “For You” feed that personalizes recommendations of popular content from across the platform, rather than focusing on the user’s social graph. The rise of TikTok has catalyzed the creation of similar offerings from competitors like YouTube and Instagram.

While TikTok’s entry certainly spurred the development of new recommender system designs, those designs continue to be focused on maximizing predicted engagement. Finding more effective ways to maximize predicted engagement benefits advertisers substantially more than consumers, and this direction of innovation continues to neglect other advances in recommender system design that may raise consumer surplus. As long as social media markets remain difficult to enter and products remain designed to grab attention, this trend should be expected to continue. To consumers, the current social media landscape resembles a sea of potato chip vendors with hardly a salad bar to be found.

If concentration in social media undermines product quality, public policy offers a variety of options that could help better align platforms’ incentives with those of consumers. Antitrust enforcement, platform-specific competition regulation, policies supporting mandatory interoperability and social media middleware (gaining attention with the recent growth of Bluesky), and sectoral regulation of manipulative designs and excessive data collection are all potential options for addressing this apparent market failure. Two decades into the social media revolution, this problem seems unlikely to resolve on its own.

Authors’ Disclosures: Alissa Cooper and Zander Arnao work for the Knight-Georgetown Institute (KGI), Georgetown University. Cooper is a member of the board of The Tor Project, Inc. Neither she nor Arnao has any other engagements, affiliations, or conflicts of interest to disclose. You can read our disclosure policy here.

Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.