What an all-bot social network tells us about social media

  • Thread starter: Chas Newkey-Burden, The Week UK
Why have social media platforms become so polarised? And can they ever be fixed? These two questions are at the heart of a novel experiment at the University of Amsterdam.

The researchers simulated a social media platform, populated it entirely with AI chatbots and then kept tweaking it to see what happened. Sadly, their findings offered little suggestion that the networks on which we spend so much time scrolling will become more pleasant anytime soon.

'Dysfunctional effects'

To see if they could prevent their simulated platform from "turning into a polarised hellscape", the experts tried "six specific intervention strategies", said Futurism. These included "switching to chronological news feeds, boosting diverse viewpoints, hiding social statistics like follower counts, and removing account bios".

But, disappointingly, only some of the six strategies "showed modest effects" and others actually "made the situation even worse", said Ars Technica. When they ordered the news feed chronologically, "attention inequality" was reduced, but it led to the "amplification of extreme content". Boosting the diversity of viewpoints to "broaden users' exposure to opposing political views" had no significant impact at all.
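The trade-off described above — chronological ordering spreads attention more evenly, while engagement-based ranking concentrates it on already-popular accounts — can be illustrated with a toy agent-based sketch. None of the study's actual code or parameters appears here; `simulate`, `gini`, and every number below are hypothetical, chosen only to show the rich-get-richer mechanism that engagement ranking tends to produce.

```python
import random

def gini(xs):
    # Gini coefficient of a distribution: 0 = perfectly equal attention,
    # values near 1 = attention concentrated on a few accounts.
    xs = sorted(xs)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n

def simulate(ranking, n_users=200, rounds=50, feed_len=10, seed=0):
    # Toy model: each user posts once per round; only posts at the top of
    # the feed get seen, and being seen adds to the author's attention.
    rng = random.Random(seed)
    attention = [0] * n_users
    for _ in range(rounds):
        posts = list(range(n_users))  # one post per user, keyed by author id
        if ranking == "engagement":
            # Engagement ranking: already-popular authors sort to the top,
            # so early winners keep winning (rich-get-richer feedback loop).
            posts.sort(key=lambda a: attention[a] + rng.random(), reverse=True)
        else:
            # Chronological feed: arrival order carries no popularity signal.
            rng.shuffle(posts)
        for author in posts[:feed_len]:
            attention[author] += 1
    return gini([a + 1 for a in attention])  # +1 keeps the sum nonzero

print("engagement   :", simulate("engagement"))
print("chronological:", simulate("chronological"))
```

In this sketch the engagement-ranked feed produces a much higher Gini coefficient than the chronological one — the same direction of effect the researchers report for "attention inequality" — though, as the article notes, the chronological feed brought its own problem of amplifying extreme content, which this toy model does not capture.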

The strategy of "bridging algorithms to elevate content that fosters mutual understanding rather than emotional provocation" significantly diminished the link between "partisanship and engagement" and slightly enhanced "viewpoint diversity", but also expanded "attention inequality".

Overall, the results were "far from encouraging" and none of the methods implemented was able to "fully disrupt the fundamental mechanisms producing the dysfunctional effects" of social media platforms.

'Evil things'

The researchers went into the project wondering whether the problems with social media are "the platforms doing evil things with algorithms" or users "choosing that we want a bad environment", one of the report's co-authors, Petter Törnberg, told Ars Technica.

But they found that the answer doesn't have to be either because "often the unintended outcomes" come from interactions "based on underlying rules". It’s "not necessarily because the platforms are evil" or because people "want to be in toxic, horrible environments", but more that the "mechanism producing these problematic outcomes is really robust and hard to resolve". It comes down to the basic structure of the platforms.

The findings "don't exactly speak well" of humans, said Gizmodo, considering the chatbots were meant to clone how we interact. So it seems social media may just be impossible for us to "navigate without reinforcing our worst instincts and behaviours".

It's "a fun house mirror for humanity" that "reflects us, but in the most distorted of ways". And it might just be that there are no lenses "strong enough" to "correct how we see each other online".
