Modelling Media Misbehaviour: A First Step

A new paper helps us understand why media behave so badly so much of the time.

This is our little example of how media behaviour can attract attention. We've put up a picture of a big, ugly vulture to get everybody's attention. Hey, we made you look!
Media vultures may eventually consume their own livers. Metaphorically, of course. But we bet you looked at the picture...and you now have an example of what we discuss in this short article. (Image courtesy Andreas Hoja via Pixabay.)

I have a very smart, very eccentric friend who is fascinated with weird news. For years, he has cut out front pages and stories from tabloid papers to tape them up all over his office. When you visit him, you step into a Time Tunnel plastered with images of Elvises, Bigfoots (feets?), Kennedys, gray aliens, and Bat Boys. The more bizarre a story seems, the longer it stays up on his walls. He is creative, brilliant, and productive, but his walls are distracting, compelling, fascinating ... I can't find a place to put my eyes that allows me to concentrate on getting to the end of any sentence I might want to speak. As you can imagine, I choose to FaceTime him (voice only, of course) when I want to get any work done.

His walls are a growing testament to something we all know instinctively.

People in the media make stuff up. They do it for obvious reasons.

Day-to-day life in any town is sorta mundane. Work, school, then family and friends at night. Maybe a good book. Roasting hot dogs, mowing the lawn, walking the dog. It should all be quite calm. After all, big events are far apart. But when you go online, the world doesn't seem calm, because engagement—the attention of audiences—makes money.

Media—be it mainstream or social—is in an open, obvious, needy war for your attention, driven by competition for revenue from advertising and subscriptions.1 To win the war, to capture the revenue, people knowingly say things that are not true.

Quelle surprise, as the French say in their rare indulgences in sarcasm. Without much effort, most of us can come up with a whole sackful of aphorisms like "if it bleeds it leads," or "scandal sells," or "a lie can travel halfway around the world while the truth is putting on its pants."

If you know a war for your attention exists in social and traditional media, then you pretty much already understand the starting point of a paper by Arash Amini (and colleagues) in Science Advances in June 2025: How Media Competition Fuels the Spread of Misinformation.2

Amini et al. begin with what most of us already realize: in a world where attention is a salable commodity, bloggers, podcasters, newsletter writers, journalists, and media companies often compete with each other through speculation, exaggeration, sensationalism, rumours, and lies. But instead of doing a case study or two of a specific example of misinformation and how it influenced its readers or viewers, Amini et al. chose to create a mathematical model of the interaction between competing organizations (or authors) and their readers or viewers. They wanted to look at media falsehood as an overall system.

Other studies have done this, but they have used epidemiological frameworks ("media virus," right?) or social-network game theories. Amini and colleagues borrow from some of this work, but they also make a core contribution of their own and advance the thinking in important ways.

Like others, Amini et al. model media competition as a zero-sum environment: one organization gains subscribers or influence only at the expense of another. This lets the researchers explore, in novel and quantifiable ways, the relative advantages and disadvantages of facts and falsehoods. They use a fairly sophisticated suite of established techniques to model human decision-making as a limited, bounded, and erratic process. Their model allows them to examine how the credibility of a source and the dissemination of falsehoods can, over time, shape public opinion and, implicitly, change behaviour.
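If you like to see the gears turn, here is a deliberately crude sketch of that kind of setup, written in Python. To be clear, this is not the authors' model: the outlet names, the numbers, and the noisy "logit" choice rule are stand-ins we invented for the bounded-rationality machinery the paper handles far more carefully.

```python
# A toy, zero-sum attention market. NOT the Amini et al. model -- just an
# illustration of the ingredients their paper combines. Every name and
# number below is our own invented assumption.
import math
import random

N_AUDIENCE = 1000        # fixed audience: one outlet's gain is the other's loss
ROUNDS = 200
SENSATION_BOOST = 0.8    # extra short-term appeal of sensational content
CREDIBILITY_COST = 0.01  # credibility lost each round the outlet exaggerates
NOISE = 1.0              # "bounded rationality": higher noise = more erratic choices

credibility = {"sober": 1.0, "sensational": 1.0}

def appeal(outlet, skepticism):
    """Perceived appeal of an outlet to one audience member.
    Skeptical members weight credibility heavily; less skeptical members
    respond mostly to the sensational kick."""
    base = credibility[outlet] * skepticism
    kick = SENSATION_BOOST * (1 - skepticism) if outlet == "sensational" else 0.0
    return base + kick

def choose(skepticism):
    """Noisy (logit) choice between the outlets: a crude stand-in for the
    limited, erratic decision-making the paper models more carefully."""
    a_sober = appeal("sober", skepticism) / NOISE
    a_sens = appeal("sensational", skepticism) / NOISE
    p_sober = math.exp(a_sober) / (math.exp(a_sober) + math.exp(a_sens))
    return "sober" if random.random() < p_sober else "sensational"

# Each audience member gets a fixed skepticism level between 0 and 1.
audience = [random.random() for _ in range(N_AUDIENCE)]

for t in range(ROUNDS):
    picks = [choose(s) for s in audience]
    sensational_share = picks.count("sensational") / N_AUDIENCE  # zero-sum by construction
    credibility["sensational"] = max(0.0, credibility["sensational"] - CREDIBILITY_COST)
    if t % 50 == 0:
        print(t, round(sensational_share, 2), round(credibility["sensational"], 2))
```

Even this toy version captures the basic tension: the sensational outlet grabs more than half the audience early, then bleeds share as its credibility drains away. The real model does considerably more, of course.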

They are able to see in some detail how competition—either in an established media sphere or within social networks—creates incentives for sensational and outrageous content, because sensation, outrage, and falsehood provide an advantage.

No wonder some social networks use "outrage algorithms"—computer code that feeds readers a steady diet of outrageous content. The emotional kick of irregular provocation is very much like Skinner's intermittent reinforcement, yielding greater reader or viewer persistence, elevated anticipation, and increased compulsion.3

Such media are one-armed bandits where the payoff to you is in outrage and compulsion rather than in cash. The media companies collect the cash while they use your persistent, reinforced engagement to turn you into product. "They're guaranteed to show up!" they tell their advertisers.

Within the Amini model, outlets that misbehave enjoy financial success and exert enhanced cultural influence, but the model also shows that such advantages are not permanent, because misinformation poses a strategic risk.

Other studies have demonstrated that audiences can respond to the emotional and psychological pull of an outrage algorithm without actually preferring falsehood.4 That gap creates an exit opportunity for those in the audience who become weary of manipulation. The credibility of the outlet declines. The competitive advantage begins to vanish.

We can pretty easily imagine what happens in a credibility crisis. The media outlet must change something or lose out. One strategy is to double down on falsehoods, making more claims and making them more outrageous. This, of course, might work for a little while, but the competitive advantage still fades and the reputation of the outlet suffers yet more. This is a reasonable (if not complete) explanation for the death spiral we sometimes see as outlets get desperate to generate attention and retain or grow audiences. Polarized outlets are almost always less truthful.

Alternatively, an outlet can cede ground. We see this in political discourse, where some outlets leave the centre to hunt new ground, perhaps by finding new customers to the left or to the right. This, too, is a limited strategy: not only can an outlet move only so far, but engaging more-extreme adherents must, by necessity, erode credibility with more-moderate customers. The slide accelerates.

In any case, the media vultures are forced to "eat their own livers": that is, they can only survive by consuming or exhausting the base that has sustained them.

The Amini model can incorporate a range of audience responses. When an outlet has credibility, it can influence a broad range of people, but as that credibility fades, skeptical individuals (who generally tend to reflect upon claims) fall away quickly. People behave differently depending on whether they are cynical, gullible, biased, deluded, or discerning. Community and cultural contexts matter, too. As media outlet reputations shift, so do the responses of the people who follow them. In Amini's words, audience members can become "uninformed, confused, misinformed, or well-informed," shifting among those states as their situations change. Highly credible outlets can misinform even the most cynical or skeptical people simply because they are credible, but they can't do it too often, as you might expect.
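To make those four labels a little more concrete, here is one more toy sketch, again ours rather than the paper's. The believes() rule, the numeric thresholds, and the decision to file "heard the claim but rejected it" under "confused" are illustrative assumptions only; the paper's own definitions are more careful.

```python
# A rough, invented mapping from one claim to the four audience states the
# paper names. Nothing here comes from the paper's equations.
import random

def believes(outlet_credibility, gullibility):
    """Chance of accepting a claim rises with the outlet's credibility and
    with the listener's gullibility (1 minus skepticism)."""
    p_accept = outlet_credibility * (0.5 + 0.5 * gullibility)
    return random.random() < p_accept

def resulting_state(claim_is_true, heard_it, outlet_credibility, gullibility):
    """Label one audience member's state after one claim."""
    if not heard_it:
        return "uninformed"
    accepted = believes(outlet_credibility, gullibility)
    if accepted and claim_is_true:
        return "well-informed"
    if accepted and not claim_is_true:
        return "misinformed"
    return "confused"  # heard the claim, but rejected it or can't resolve it

# A highly credible outlet pushing a falsehood will, more often than not,
# misinform even a fairly skeptical listener (low gullibility).
print(resulting_state(claim_is_true=False, heard_it=True,
                      outlet_credibility=0.95, gullibility=0.2))
```

The sketch only restates the point above in code: credibility is the lever that lets a falsehood land even with careful listeners, which is exactly why it is such a valuable, and perishable, asset.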

Of course, the Amini paper has only recently been released, so we haven't yet seen the reactions of the academic community, the media, and thoughtful readers. And even though the paper discusses the importance of community susceptibility, source accountability, debunking, and outlets that concentrate on facts rather than editorial positions (hey! that's us!), the authors don't model those things in great detail.

Like most such models, the Amini model is an abstraction and a simplification, wrapped in a decision framework with certain binary characteristics.

Models are always artificial and limited. The authors frankly discuss many limitations of their work and suggest ways in which it can be improved and extended.

Even a casual examination of their paper reveals many assumptions about rationality and audience behaviour. Lots of room remains to explore vast subtleties and shades of grey.

Like us, you can easily make a wish list of expansions to a model like this one. It could explore how misinformation and cult-like behaviour interact; how media loyalty, platform power, and peer-group pressure can preserve audiences even when an outlet's reputation is dropping; and how differences in education, political affiliation, nutrition, loneliness, and more shape group and individual responses to media misbehaviour. The study doesn't differentiate among misinformation, malinformation, and disinformation (see our recent piece about these terms). It doesn't consider political manipulation, manipulation of funding sources, interaction with denialism, or the propagation of pseudoscience. Application of this model to the issues that concern us at Sweet Lightning—energy and the environment—remains largely undiscussed.

As with all such papers, How Media Competition Fuels the Spread of Misinformation may be received well or it may be received badly. Our article here isn't a review—academic or otherwise—of the paper and its assumptions and assertions. Peer reaction will do that, better and more thoroughly than we ever could.

Even so—even if the paper ultimately does not succeed—it still DOES undertake a systematized approach to understanding media misbehaviour. It provides a reasonable start at modelling nuances in the interplay between constituent behaviour, media accuracy, and credibility.

It's a first step in a dispassionate, quantified analysis of misinformation and media competition.


⚡️

Reading

  1. Giraldo-Luque, Santiago, Pedro Nicolás Aldana Afanador, and Cristina Fernández-Rovira. 2020. “The Struggle for Human Attention: Between the Abuse of Social Media and Digital Wellbeing.” Healthcare 8 (4): 497. https://doi.org/10.3390/healthcare8040497.
  2. Amini, Arash, Yigit Ege Bayiz, Eun-Ju Lee, Zeynep Somer-Topcu, Radu Marculescu, and Ufuk Topcu. 2025. “How Media Competition Fuels the Spread of Misinformation.” Science Advances 11 (25): eadu7743. https://doi.org/10.1126/sciadv.adu7743.
  3. Ferster, Charles B., and B. F. Skinner. 1957. Schedules of Reinforcement. New York: Appleton-Century-Crofts.
  4. Stewart, Alexander J., Antonio A. Arechar, David G. Rand, and Joshua B. Plotkin. 2024. “The Distorting Effects of Producer Strategies: Why Engagement Does Not Reveal Consumer Preferences for Misinformation.” Proceedings of the National Academy of Sciences 121 (10): e2315195121. https://doi.org/10.1073/pnas.2315195121.