AI – A Golden Age or a new Dark Age?

The Badger’s experimented with Microsoft’s Copilot for a while now, sometimes impressed, but often irritated when the tool ends its answer by asking for the user’s opinion on the underlying topic of the question. For example, the Badger asked Copilot ‘When will self-driving cars be the majority of vehicles in the UK?’ Copilot’s answer was sensible and distilled from quoted sources, but it ended with ‘What are your thoughts on self-driving cars? Do you think they’ll revolutionize transportation?’. The Badger wanted an answer to his question, not a conversation that will capture, store, and use his opinion for the tool’s own purposes. Responding with ‘None of your business’ gets the reply ‘Got it! If you have any other questions or need assistance with something else, feel free to ask. I’m here to help’. That last phrase should be supplemented with ‘and make money!’

Overall, his experimentation has made him wonder if AI is leading to a new Golden Age for humanity, or a new Dark Age. So, what’s the answer? A new Golden Age, or a worrying Dark Age? AI and Machine Intelligence advocates, giant businesses investing huge amounts of money in the technology, and even governments with a ‘fear of missing out’, are quick to say it’s the former. The Nobel Laureate Geoffrey Hinton, the godfather of AI, isn’t so sure. He articulates the risks well and has highlighted that it isn’t inconceivable that AI could eventually wipe out humanity. Listening to him interviewed recently on the Today programme, BBC Radio 4’s flagship news and current affairs programme, struck a chord. It made the Badger realise that such concerns are valid, and that a Dark Age is a possibility.

So where does the Badger stand on the Golden or Dark Age question? Well, the last 25 years have made us believe tech-driven change is a good thing, but that premise should be challenged. New technology may drive change, but it doesn’t necessarily drive progress because it’s politics that really determines whether change makes people better off overall. Politicians, however, have struggled woefully to deal with tech-driven change and the new problems it’s created for society so far this century. There’s little sign this is changing for AI. Humans are fallible and can make poor judgements, but if we become reliant on AI to make choices for us, then there’s a real danger that our confidence and capacity to make our own independent decisions will be lost.

The Badger’s answer is thus nuanced. A Golden Age will unfold in areas where AI is a tool providing a tangible benefit under direct human control, but if AI is allowed to become completely autonomous and more intelligent than humans, then a Dark Age is inevitable. Why? Because things with greater overall intelligence always control things of lower overall intelligence. Can you think of an example where the reverse is true?

Human Space travel to Mars? Just send Intelligent Machines…

‘Space, the final frontier. These are the voyages of the Starship Enterprise. Its five-year mission – to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before’.

In September 1966, these iconic words were heard for the first time when Star Trek arrived on television. They, in essence, remind us that the pursuit of knowledge and the exploration of the unknown are central to what it means to be human. They inspire us to dream big, embrace challenges, and continually seek to expand our understanding. The words were narrated before man landed on the moon, before the internet, before smartphones and laptops, and when the computing power available to astronauts was minuscule compared to that of a mid-range smartphone. Things have changed extraordinarily since 1966, but the opening words to Star Trek episodes are just as relevant to what it means to be human today.

Space travel is difficult, as US billionaires will attest (see here and here, for example). Today’s Space-race is different to that of the 1960s with, for example, the likes of India and China part of the fray. Putting humans back on the Moon is a key objective, and the USA’s Artemis programme intends to do just that within the next few years, if things go to plan. Putting human feet on Mars, as reaffirmed by the USA’s President Trump during his inauguration this week, is also an objective. The Badger, however, senses that it’s unlikely to happen for decades yet, if at all.

Why the scepticism? Well, two things. The first is that putting humans on Mars and bringing them back is much more challenging than returning to the Moon. The second is more fundamental. In the ~60 years since Star Trek’s iconic words were first heard, life and our knowledge of Space have been transformed through technological advances, especially in the sphere of capturing, processing, and using information digitally. Advances in digital technology continue apace, with AI and intelligent machines fast becoming a reality. Indeed, Mr Trump has announced huge investment in Stargate, an AI infrastructure venture. The automation of everything, with machines becoming as intelligent as humans, raises a question, namely ‘Is prolonged human travel in Space really viable and economically sensible?’

The evidence implies that humans are unsuited to prolonged Space travel (e.g. see here and here). So why send humans to Mars when intelligent machines are a better option? Perhaps a rethink of putting humans on Mars will happen as AI and intelligent machines become mainstream, perhaps it won’t. Meantime the Badger wholly subscribes to the pursuit of knowledge and exploration of the unknown, but he will enjoy Star Trek for what it is, just imaginative entertainment…

Today’s cutting edge is tomorrow’s obsolescence…

As the Badger sat in traffic, a news item on the car radio grabbed his attention. It was a report that there are now no new car models in the UK that come with a CD player. The built-in CD player is joining the cassette tape player in the great scrapyard in the sky! The Badger’s reaction on listening to the report? A little sadness, but no surprise given the speedy evolution of in-car digital infotainment over the last 15 years. The march of connected, integrated, digital technology and the advent of Apple CarPlay and Android Auto have rendered CDs in vehicles obsolete. The Badger glanced at the half-dozen music CDs, a mix of factory-pressed and self-burned, resting in the cubbyhole behind the handbrake and was hit by a wave of nostalgia.

Nostalgia is a natural and common human experience that can help in navigating the present by drawing comfort and strength from the past. The Badger has a kinship with his car CDs because they’ve often been played during notable journeys full of either happiness or great sadness. There’s something personally satisfying and engaging about physically selecting a CD, taking it from its case, putting it into the car’s player, adjusting the volume, and then doing the reverse when the last track’s played. Tapping a digital screen or giving voice commands to play your music in a vehicle is a different, less engaging experience. The Badger’s CDs will continue to be played in his car until it too is beckoned by the great scrapyard in the sky.

The demise of in-car CD players is just another illustration that obsolescence is an unavoidable aspect of the rapidly advancing digital age. In the 1980s, the CD put the in-car cassette tape on the path to oblivion with the fitment of cassette players as standard in new vehicles ending completely in the first decade of this century. Now digital systems sourcing music and other entertainment from the ether have essentially done the same thing for the CD player. This implies, of course, that what’s replacing the CD in vehicles today will itself become obsolete in due course, especially as obsolescence is happening faster and faster in the consumer electronics, software, media and entertainment, manufacturing, and automotive industries.

Things once acclaimed as cutting edge are always eventually relegated to the sidelines by something else, so what will in-car entertainment look like in a few decades’ time? Well, if mass adoption of truly self-driving cars becomes a reality, then occupants will absorb entertainment without the distraction of actually driving. In-car entertainment will be dominated by immersive technologies, AI, well-being/mood sensors, and so on, turning the driving experience into something akin to lazing about in a mobile digital living room. The thought makes the Badger shudder because it represents another step towards the potential obsolescence of the human race!

The Law of Unintended Consequences…

If you’ve a couple of minutes spare then read the item here. It was published in 2013 and what’s striking is that the exact same words could be used if it had been written today! A 2010 item, ‘Technology: The law of unintended consequences’, by the same author also stands the test of time. Reading both has caused the Badger to muse on unintended consequences, especially those that have emerged from the digital and online world over the last few decades.

The ‘Law of Unintended Consequences’ is real and is, in essence, quite simple. It declares that every action by a person, company, or government can have positive or negative consequences that are unforeseen. An amusing manifestation of the law in action happened in 2016 when a UK Government agency conducted an online poll for the public to name the agency’s latest polar research ship. The public’s choice, Boaty McBoatface, wasn’t the kind of name the agency anticipated!

One characteristic of unintended consequences is that they tend to emerge over a long period. The internet and social media illustrate this neatly. Both have changed the behaviour of people (especially the young), companies, and governments, and both have challenged safety, security, and privacy like never before. Indeed, the Australian government’s recent decision to ban those under 16 years old from social media demonstrates just how long it’s taken to address some of social media’s unintended consequences since its advent a couple of decades ago.

During his IT career, the Badger participated in delivering the many benefits of digital and online technology to society, but now, more mindful of unintended consequences, he wonders if a future dominated by virtuality, AI, and colossal tech corporations is a good thing for his grandson’s generation. After all, the online and digital world is not where real, biological, life takes place, and there’s more to life than being a slave to our devices.

The ‘Law of Unintended Consequences’ can never be ignored. Although a professional and disciplined approach to progress always reduces the scope for unintended consequences, the fact is that they will still happen. This means, for example, that there’ll be unintended consequences from the likes of AI, driverless vehicles, and robots at home, and that, in practice, it will take years for these consequences to emerge fully. But emerge they will!

Looking back over recent decades, it’s clear that digital and online technology has delivered benefits. It’s also clear that it’s brought complication, downsides, and unintended consequences to the lives of people in all age groups. The Badger’s concluded that we need a law that captures the relationship between progress, unintended consequences, and real life. So, here’s Badger’s Law: ‘Progress always produces unintended consequences that complicate and compromise the real life of people’. Gosh, it’s astonishing where articles penned over a decade ago can take your thoughts…

The digital world needs nuclear fusion!

When reading recently that the ITER experimental nuclear fusion reactor under construction in Cadarache, France, is delayed by at least a decade, the Badger sighed deeply. The delay to this huge endeavour, a collaboration involving 35 nations, inevitably puts back the holy grail of ‘limitless clean energy for the benefit of mankind’ from a tokamak reactor by many decades…again! Why the Badger’s sigh? Because his PhD and subsequent research, many years ago, related to the damage helium ions can cause in potential tokamak first-wall materials. At that time, the Joint European Torus (JET) was under construction at Culham in the UK and the ultimate goal of ‘limitless clean energy for the benefit of mankind’ via fusion seemed achievable within the Badger’s lifetime. Realistically, that’s no longer the case, hence the sigh. The holy grail of ‘limitless clean energy’ from fusion reactors is still far away even though the need for it has never been greater.

ITER, however, is getting a run for its money from private firms within the Fusion Industry Association (FIA). In July 2023, the FIA said that 4 private firms believed they could deliver fusion power into the grid by 2030 and 19 firms believed they could do so by 2035. The Badger’s sceptical. However, given the speedy technological advances of recent decades, these beliefs cannot be completely dismissed if recent technological momentum continues unabated. Wait a minute, you might say, if the word ‘nuclear’ always sends uncomfortable shudders down your spine: why do we need power produced by nuclear fusion at all? The answer’s quite simple. As this article points out, and this one reinforces, the world is 86% driven by fossil fuels and energy demand is forecast to rise by 50% from today’s level by 2050. Global energy demand is then expected to triple between 2050 and 2100! To get anywhere near meeting these forecasts, and have a decarbonised world, requires fusion to provide ‘limitless clean energy for the benefit of mankind’. Yes, wind, solar, and tidal power will play their part, but can they service the scale of this demand without blighting every picturesque part of our planet? That’s debatable.
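
For a sense of scale, here’s a minimal back-of-the-envelope sketch (in Python, purely illustrative) that takes the forecasts quoted above at face value, with today’s global energy demand normalised to 1:

```python
# Back-of-the-envelope compounding of the forecasts quoted above.
# Assumption (illustrative only): today's global energy demand is normalised to 1.0.

demand_today = 1.0
demand_2050 = demand_today * 1.5   # demand forecast to rise ~50% by 2050
demand_2100 = demand_2050 * 3.0    # then expected to triple between 2050 and 2100

print(f"2050: {demand_2050:.1f}x today's demand")   # 1.5x
print(f"2100: {demand_2100:.1f}x today's demand")   # 4.5x
```

Roughly four and a half times today’s demand by 2100 is the scale any decarbonised energy mix would have to reach, which is why fusion keeps coming up in the conversation.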

So, here’s the thing. Digital transformation of the world economy continues at pace. The amount of data created, captured, copied, and consumed will be nearly three times as much in 2025 as in 2020. AI, the Internet of Things, cryptocurrency, and the digital automation of everything come with a dramatic increase in electricity usage which cannot be met by non-nuclear renewables alone. When we use our computers, tablets and smartphones we are contributing to the rising demand for electricity, and we are also thus unconsciously making the case for why we need fusion reactors to provide ‘limitless clean energy for the benefit of mankind’. Let’s hope ITER isn’t the only game in town, because if it is then a digital future may not be quite what we currently envisage.
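
As a rough aside (a minimal sketch, assuming the ‘nearly three times’ figure above and a steady year-on-year growth rate), tripling the world’s data between 2020 and 2025 implies annual growth of roughly a quarter:

```python
# Implied compound annual growth rate (CAGR) if global data volume roughly
# triples between 2020 and 2025. The 3x multiple comes from the estimate
# quoted above; the steady-growth assumption is purely illustrative.

growth_multiple = 3.0   # ~3x more data in 2025 than in 2020
years = 5

cagr = growth_multiple ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")   # ~24.6% per year
```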

A walk in the woods, swarms of drones embodying AI, and fly spray…

A walk in the woods is good for body and soul, especially in the Spring when bluebells abound. Every walk is memorable in some way, as a couple of encounters reminded the Badger recently. The first encounter involved wildlife. A vixen with two cubs emerged from the undergrowth and sat in the middle of the path to stare at a stationary Badger drinking from his water bottle. They were ~3 metres away, unfazed by human presence, and nonchalantly disappeared back into the undergrowth after about a minute. The second encounter happened ten minutes later as the path bisected an open expanse of scrubland. It was with a police officer landing a drone which had been used in a successful search for someone who’d failed to return from walking their dog in the area. ‘That’s a useful bit of kit’, the Badger quipped to the officer. ‘Yep, but a drone swarm would be better’, the officer responded, adding that while people know individual drones are routine tools for many, swarms embodying AI warrant greater public awareness.

Drones vary in shape, size, function, and sophistication. Everyone has some awareness of them through their appearance in many movies (see here for example) over decades. The capabilities of drones imagined in such movies are today either a reality, or soon to be so. Drones are a growth area. Indeed, the UK Government has envisaged that 900,000 commercial drones could be operating in UK skies by 2030. Drones have long been tools in many commercial sectors (e.g. agriculture, energy supply, and property marketing), in media and broadcasting, and with hobbyists and the TikTok generation, and so this vision seems possible. Drones are also already key tools in law enforcement where they help in monitoring major incidents, events, suspects, crime scenes, and traffic, and in the search for missing persons. Military use of them is common for reconnaissance, intelligence gathering, and lethal force, as readily illustrated in the Ukraine and Middle East conflicts. It continues to expand (e.g. see here), and swarms of drones embodying AI will eventually transform military operations even more dramatically. It thus seems inevitable that drone swarms will eventually become a regular facet of civilian life too.

Personal security and safety advice for when you are away from your home has long centred on being aware of your environment and listening to and observing the behaviour of those around you. With drone swarms on the horizon, we should now be observing and listening to what’s in the sky too! Of course, someone will eventually produce a drone countermeasure for personal use by anyone in the general public. Now that’s an off-the-wall thought to end with, probably triggered by learning that fly spray and insect repellent are essential when walking through woods in the warm Spring sunshine…

History suggests that a future generation will face a ‘Digital Crisis’…

Spanish philosopher George Santayana is credited with saying ‘Those who cannot remember the past are condemned to repeat it’, and Karl Marx remarked that ‘History repeats itself first as a tragedy, and then as a farce’. These came to mind while quietly musing on a future which is in the hands of younger generations who’ve grown up with global communication, the internet, social media, and online services as a norm. It’s sobering to be reminded that in just a few decades, digital technology and IT have transformed life faster than at any time in human history. AI adds to the unabated momentum of tech-driven change. But here’s the thing. History shows that many things that have a transformational impact on society have serious consequences that only become fully apparent decades later, creating a crisis for society that a future generation is forced to address. History thus implies that a future generation will have to deal with a crisis caused by the digital revolution.

Bold thinking? Maybe, but consider this. History shows that motor vehicles revolutionised transportation. It’s only in recent decades, however, that society has realised, and started addressing, the true impact of motor vehicles on public health and the planet. History also shows that the use of fossil fuels (particularly coal) during the Industrial Revolution transformed the world. Our dependence on them since, however, has impacted the climate and sustainability of life forcing society into corrective action, but only in recent decades. Similarly, plastic – a material that’s made the modern world possible – has gone from being a wonder substance a century ago to being reviled as an environmental scourge today. It therefore seems perfectly feasible that history will repeat itself with regard to the digital revolution we are living through.

Falling happiness in younger generations (see here, for example) and a tense interview with Elon Musk, who remarked that ‘moderation is a propaganda word for censorship’, illustrate that history may well repeat itself regarding social media. Social media platforms have revolutionised information sharing over two decades, but they have amplified misinformation, disinformation, bullying, and mental health issues, and eroded personal privacy in the process. They are commercial enterprises bound by the law, but they set their own rules and guidelines for content and its moderation. When a US Surgeon General says allowing young people to use social media is like giving them medicine not proven to be safe, and that it’s insane that governments have failed to adequately regulate them, then society has a problem regardless of Mr Musk’s dislike of challenging scrutiny. History means that society today is having to face up to a ‘Climate Crisis’. Taking note of history is always wise, which is why it’s not outlandish to think that a future generation will face and need to address some kind of existential ‘Digital Crisis’…

Air Canada and the ‘hallucination’ of a chatbot…

An email arrived from the Badger’s car insurance provider recently. It advised that a renewal quote was in his online account. Logging in revealed a 25% increase in premium! A check using market comparison sites provided quotes for the same cover within a few pounds of his existing premium. The Badger thus used the provider’s chatbot within his account to signal his intent to take his business elsewhere. The chatbot dialogue, however, ultimately resulted in the Badger staying with his provider with the same cover at the price he currently pays!

This is a commonplace renewal dynamic, but the Badger found himself musing on his experience. Apart from being irritated by his provider’s attempted 25% price rise when they were obviously prepared to retain their customer for a much lower price, using the chatbot was easy, efficient, and quick. However, it wasn’t obvious at any stage in the chatbot dialogue whether the Badger had really conversed with a human in the provider’s organisation. This meant that both he and the provider were implicitly accepting the validity of the chatbot’s deal. A number of ‘what if’ scenarios regarding customer use of AI chatbots started bubbling in the Badger’s brain. And then he read, here and here, about Air Canada and its AI chatbot!

An AI chatbot on the Air Canada website advised a passenger that they could book a full-fare flight to attend their grandmother’s funeral and claim for a discounted bereavement fare thereafter. Guidance elsewhere on the website was different. The passenger did as the chatbot guided and subsequently claimed for the bereavement discount. Air Canada refused the claim, and the parties ended up at a Tribunal with the airline arguing that the chatbot was ‘responsible for its own actions’. The Tribunal ruled for the passenger, finding the airline liable for negligent misrepresentation. The case not only establishes the principle that companies are liable for what their AI chatbots say and do, but it also highlights – as noted here – broader risks for businesses when adopting AI tools.

The amount of money for the discount claim was small (<CAN$500) but the Tribunal’s findings will reverberate widely. The case also exposes something which is commonplace in many big companies, namely the dominance of a legalistic behavioural culture over common sense within an organisation. This was a bereaved customer complying with advice given by the company’s AI chatbot on the company’s own website, and yet rather than be empathetic, take responsibility, and apply common sense, the company chose a legal route and hid behind ‘the hallucination’ of its chatbot. So, bravo to the passenger for fighting their corner, bravo to the Tribunal for their common-sense judgement, and yes, bravo to Air Canada for making sure that we all now know that companies cannot shirk responsibility for the behaviour of their AI chatbots…

Future-gazing while eating fish in Riyadh…

The Badger visited Riyadh with some members of his London-based project team. The team was developing the SARIE Real Time Gross Settlement (RTGS) computer system for the Saudi Arabian Monetary Agency (SAMA). We stayed at the Riyadh Intercontinental Hotel, the normal base for short visits to meet client staff and Kingdom-based project staff. The work schedule for the visit was intense because the project was at a crucial stage in its delivery. On the penultimate night of the visit, the hotel had a ‘fish night’. The Badger and his companions duly booked a table for what turned out to be a memorable meal. It took place outside under a night sky full of twinkling stars in near 30C heat. Riyadh is in the desert 250 miles from the nearest seaport, and so it felt a little surreal seeing not only unfamiliar fresh fish on a mountain of crushed ice, but also choosing one to eat! This was more than 25 years ago.

Unsurprisingly for a group of relaxing IT professionals, we future-gazed while eating our fish and drinking alcohol-free fizzy apple juice – ‘Saudi Champagne’. Mobile phones at the time provided voice communication and SMS messaging. They were rudimentary compared with today’s smartphones, and we knew that the new one in our hand would be usurped by a newer model within weeks. Communication network technology, internet use, and IT were high growth areas, and the PCs and laptops of the time (see here, for example) had nowhere near the capabilities taken for granted today.

Three areas dominated our future-gazing during the meal. The first was MMS (Multimedia Messaging Service). Would it actually arrive, be useful, and prove popular? (It arrived in 2002). The second was off-the-shelf, reusable software products and kernels. Would they decimate bespoke software development and speed up systems development for clients? (They did. Software has become commoditised). The third was outsourcing. Would it change the IT industry and stifle innovation and technical creativity? (It has, although views on innovation and creativity vary). We debated affably as we ate. We did not foresee the tech and online world that has emerged to be the global critical infrastructure of personal, business, governmental, and military life today!

With the Middle East in the headlines and tech CEOs savaged while testifying at a US Senate hearing, the Badger wonders what discussion he and his companions would have during a ‘fish night’ in Riyadh today. One area would inevitably be AI and, given the history of the last 25 years of digital revolution, whether its dark side would eventually overwhelm its benefits. With general points from the Senate hearings like ‘Because for all the upside, the dark side is too great to live with’ (made by Senator Lindsey Graham) rattling in his head, the Badger thinks that the dark side of AI alone would dominate the discussion and make the conversation even livelier than it was 25 years ago!

Protecting your privacy…

The arrival of a scam email, a television programme on Banking Scams, scurrilous AI-generated images of Taylor Swift, news of a fake robocall using President Biden’s voice, and the UK’s National Cyber Security Centre’s warning that the global ransomware threat will rise with AI, made the Badger think about protecting privacy this week.

The following facts underpinned his musing. LinkedIn, Facebook, X (Twitter), Instagram, Snapchat, and TikTok were launched in 2003, 2004, 2006, 2010, 2011, and 2016, respectively. Amazon was founded in 1994, Netflix in 1997, Google in 1998, Spotify in 2006, and WhatsApp in 2009. The first smartphone with internet connectivity arrived in 2000 when life was very different, as neatly illustrated here. Over barely 30 years, tech and these companies have changed the dynamics of daily life, and what constitutes personal privacy, for everyone. These companies, fledglings 25 years ago but now more powerful than many countries, harvest, hold, and use vast swathes of our personal data. What constitutes privacy for an individual has thus inevitably changed, and, the Badger feels, not for the better compared with 25 years ago. What other conclusion could you draw when huge data breaches and scandals like Cambridge Analytica expose individuals to security threats and privacy risk like never before? And along comes AI making the risk to individuals much, much worse!

Everything done online today is tracked and used for some purpose. If you use an internet-connected personal device then the world’s plumbing knows where you are and what you’re doing. When it comes to privacy, therefore, the old saying ‘an Englishman’s home is his castle’ was much more relevant 30 years ago than it is today. With vast swathes of our personal data held online it’s hardly surprising that bad actors want to get their hands on it for nefarious purposes. As Channel 5’s ‘Banking Scams; Don’t get caught out’ programme recently highlighted, just a small amount of your personal data in the wrong hands can make your life a misery. AI just adds another dimension to the potential scale of that misery.

With online interactions a norm of modern life and AI manipulation of images, video, and speech becoming more widespread, the Badger wondered if there’s something other than good cyber security practices that anyone can do to bolster their personal privacy. Well, there is. Don’t post photos, videos, or voice recordings of yourself on social media platforms! Your face, your body, and your voice are part of your real identity, so why make them easy pickings for anyone of a wicked disposition? The Badger’s lost the plot, you may think, but his fundamental point is this. Think about your privacy the next time you post photos, video, or voice recordings on a social media platform. After all, the responsibility for protecting your privacy fundamentally rests with you…