The Badger smiled and then sighed when Meta and YouTube were recently found liable for harming a young woman through the addictive design of their products and their failure to warn users of the risks. The smile was because it’s good to see tech giants not getting their own way. The sigh was because it’s taken far too many years to get to this point. Sensible people have known for years that these apps are designed to keep users compulsively engaged for as long as possible, because the clever monetisation of that engagement underpins their business models.
The Badger recalls the early days of social media, when it simply helped people stay in touch, share milestones, and reconnect with old friends. In those days there was a clear divide between real and online life. Conversations ended on leaving a room or putting the phone down, photographs lived in physical albums, and social media was a harmless tool rather than something that shaped or dominated how we lived. Today things are quite different. Social media has grown in power, profitability, and influence to such an extent that the average person spends more time using it than is prudent. What’s changed since those early days is the design of the apps and platforms. Endless scrolling, algorithm-driven recommendations, push notifications, and short video loops aren’t accidental. They’re features engineered to keep people engaged for as long as possible. Indeed, the BBC was reporting way back in 2018 that social media apps were deliberately designed to be addictive. The Badger thinks all this has steadily eroded users’ real-world routines, relationships, and boundaries over recent decades.
In the Meta and YouTube case, the plaintiff’s lawyers won by cleverly focusing on how the platforms are designed rather than on what’s posted on them. The two giants plan to appeal, but it’s debatable whether the appeals will succeed. Social media is thus having to grapple with the fact that this could be a reckoning similar to the one experienced by the tobacco industry some decades ago. This ‘tobacco moment’ prompted the Badger to muse on whether AI will ultimately experience such a moment too. He concluded that it will. AI has the potential to harm institutions, elections, markets, information ecosystems, and critical infrastructure, so its reckoning could arrive faster and on a more global, structural scale. The possible triggers might relate to bias, misinformation, autonomy, and safety failures. Like social media’s ‘tobacco moment’, AI’s will not be about banning the technology, but about liability.
A ‘tobacco moment’ isn’t about a single lawsuit. It happens when society collectively decides that an industry has externalised too much harm, and the legal, regulatory, and cultural tides all turn at once. It seems foolhardy, therefore, to think that AI will be immune to a ‘tobacco moment’ of its own at some stage in the future…