Once upon a time there was the Strategic Defense Initiative (Star Wars)…

There comes a time when a room at home needs a decorative refresh. That time recently came in the Badger household, and so the Badger deployed his practical skills to refurbish the room himself. The project was planned, agreed with an important stakeholder (the wife), and fastidiously executed. The room’s now in the post-delivery phase, with the small list of defects pointed out at acceptance by that important stakeholder now corrected. Painting walls while listening to good music on the radio proved a more satisfying experience than expected. On finishing one wall, and while stepping back to admire his handiwork, the Badger found himself listening to the broadcaster’s regular news bulletin and sighing deeply on hearing that President Trump had unveiled plans for a ~$175 billion US ‘Golden Dome’ missile defence system. Memories of President Reagan’s 1983 Strategic Defense Initiative (SDI) came flooding back.

The goal of SDI was to develop a system that could intercept and destroy incoming nuclear missiles, effectively shielding the USA from a potential Soviet attack during the Cold War. Many dubbed it ‘Star Wars’ because of its proposed use of space-based technology. At the time, the Badger was working on the software design and development of a Relational Database Management System (RDBMS) product – pretty cutting edge back then. He remembers thinking that SDI would never come to fruition. Indeed, SDI itself was never fully realised, but its ideas have shaped military technology and policy in missile and space-based defence, cybersecurity strategy, and international collaboration ever since.

Rolling forward 40 years, the world is quite a different place geopolitically, technologically, economically, and militarily. Daily civilian and military life now depends on digital capabilities that didn’t exist in 1983, and continued rapid tech advances, innovation and AI are changing both domains at a rate never imagined just a few decades ago. Reagan’s SDI initiative and President Trump’s ‘Golden Dome’ share some similarities, but whilst the available tech in 1983 meant the former’s space-based missile defence was largely theoretical, President Trump’s plan benefits from modern, real, sophisticated satellite, space, sensor, and missile technologies. ‘Golden Dome’ revives elements of SDI, but it also suffers from some of the same challenges, particularly around cost, scepticism about its effectiveness, and concern that it dramatically escalates the global arms race. It’s certain, however, that just as happened when SDI was announced in 1983, military and tech sector commercial organisations will be relishing the prospect of picking up ‘Golden Dome’ contracts regardless of whether its stated ambitions will ever fully come to fruition.

But why did the Badger sigh so deeply on hearing about ‘Golden Dome’ on the radio? It was simply an instant reaction to the feeling that it’s another step on the road to creating the Terminator films’ Skynet system for real, and that our species seems intent on a path that could lead to eventual self-inflicted extinction.

AI and copyright…

Elton John recently had some sharp words to say about the UK government’s plans to exempt AI technology firms from copyright laws. Apparently, there’s currently a game of ping-pong underway between the House of Commons and the House of Lords regarding this plan. Many writers, musicians, and artists are furious about it, and Elton’s comments caused the Badger to scratch his head and ponder. Why? Because, like that of many individuals and bloggers, his website’s content could be plundered by AI without his knowledge or permission, regardless of the copyright statement on its home page. With AI models and tools increasingly mainstream, Elton’s words made the Badger realise that he, and probably many others around the globe, should have copyright more prominent in their thoughts.

Copyright law is complex and, as far as the Badger understands, ‘fair dealing’ or ‘fair use’ allows limited use of copyright material without permission from the copyright owner under specific circumstances. Fair dealing/use is not a blanket permission, and what qualifies depends on factors such as how much of the material is used, whether its use is justified, and whether it affects the copyright owner’s income. The Badger’s not a lawyer, but he senses that AI and copyright is a legal minefield that will keep experts with digital and legal qualifications in lucrative work for years to come.

As the Badger pondered, he scratched his head again and then asked Copilot if AI used material held on copyrighted websites. The short response was that it (and other AI) follows strict copyright guidelines and only generates brief summaries of copyrighted material, respecting fair use principles and pointing to official sources. To test the efficacy of the answer, the Badger asked Copilot for the lyrics of Elton John’s song ‘Candle in the Wind’. Copilot responded with ‘Can’t do that due to copyright’. Typing the same request into the Badger’s browser, however, readily produced the lyrics. Make of that what you will, but it does make you wonder why you would need to use AI like Copilot for this kind of interaction.

At the heart of Elton John’s point is the long-established principle that if a person or an enterprise wants to use copyrighted material in something that produces a commercial gain for themselves, then the copyright owner should give prior permission and be paid. AI is a disruptive technology, much of it controlled by the same giant US corporations that already dominate the tech world. AI cannot be ignored, but exempting tech firms from copyright law seems wrong on many different levels. The Badger’s concluded that he should improve his understanding of copyright law, and that AI tech firms must not be exempt from such laws. After all, to take a leaf out of President Trump’s playbook: if you want something, you need permission AND you must pay.

AI – A Golden Age or a new Dark Age?

The Badger’s experimented with Microsoft’s Copilot for a while now, sometimes impressed, but often irritated when the tool ends its answer to a question by asking the user’s opinion on the underlying topic. For example, the Badger asked Copilot ‘When will self-driving cars be the majority of vehicles in the UK?’ Copilot’s answer was sensible and distilled from quoted sources, but it ended with ‘What are your thoughts on self-driving cars? Do you think they’ll revolutionize transportation?’. The Badger wanted an answer to his question, not a conversation that will capture, store, and use his opinion for the tool’s own purposes. Responding with ‘None of your business’ gets the reply ‘Got it! If you have any other questions or need assistance with something else, feel free to ask. I’m here to help’. That last phrase should be supplemented with ‘and make money!’.

Overall, his experimentation has made him wonder whether AI is leading humanity to a new Golden Age or a new Dark Age. So, which is it? AI and Machine Intelligence advocates, giant businesses investing huge amounts of money in the technology, and even governments with a ‘fear of missing out’ are quick to say it’s the former. The Nobel Laureate Geoffrey Hinton, the godfather of AI, isn’t so sure. He articulates the risks well, and he’s highlighted that the possibility of AI eventually wiping out humanity isn’t inconceivable. Listening to him interviewed recently on the Today programme, BBC Radio 4’s flagship news and current affairs programme, struck a chord. It made the Badger realise that such concerns are valid, and that a Dark Age is a possibility.

So where does the Badger stand on the Golden or Dark Age question? Well, the last 25 years have made us believe tech-driven change is a good thing, but that premise should be challenged. New technology may drive change, but it doesn’t necessarily drive progress, because it’s politics that really determines whether change makes people better off overall. Politicians, however, have struggled woefully to deal with tech-driven change and the new problems it’s created for society so far this century. There’s little sign this is changing for AI. Humans are fallible and can make poor judgements, but if we become reliant on AI to make choices for us, then there’s a real danger that our confidence and capacity to make our own independent decisions will be lost.

The Badger’s answer is thus nuanced. A Golden Age will unfold in areas where AI is a tool providing a tangible benefit under direct human control, but if AI is allowed to become completely autonomous and more intelligent than humans, then a Dark Age is inevitable. Why? Because things with greater overall intelligence always control things of lower overall intelligence. Can you think of an example where the reverse is true?

Human Space travel to Mars? Just send Intelligent Machines…

‘Space, the final frontier. These are the voyages of the Starship Enterprise. Its five-year mission – to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before’.

In September 1966, these iconic words were heard for the first time when Star Trek arrived on television. They, in essence, remind us that the pursuit of knowledge and the exploration of the unknown are central to what it means to be human. They inspire us to dream big, embrace challenges, and continually seek to expand our understanding. The words were narrated before man landed on the Moon, before the internet, before smartphones and laptops, and when the computing power available to astronauts was minuscule compared to that of a mid-range smartphone. Things have changed extraordinarily since 1966, but the opening words to Star Trek episodes are just as relevant to what it means to be human today.

Space travel is difficult, as US billionaires will attest (see here and here, for example). Today’s space race is different to that of the 1960s with, for example, the likes of India and China part of the fray. Putting humans back on the Moon is a key objective, and the USA’s Artemis programme intends to do just that within the next few years, if things go to plan. Putting human feet on Mars, as reaffirmed by the USA’s President Trump during his inauguration this week, is also an objective. The Badger, however, senses that it’s unlikely to happen for decades yet, if at all.

Why the scepticism? Well, two things. The first is that putting humans on Mars and bringing them back is much more challenging than returning to the Moon. The second is more fundamental. In the ~60 years since Star Trek’s iconic words were first heard, life and our knowledge of Space have been transformed through technological advances, especially in the sphere of capturing, processing, and using information digitally. Advances in digital technology continue apace, with AI and intelligent machines fast becoming a reality. Indeed, Mr Trump has announced huge investment in Stargate, an AI infrastructure project. The automation of everything, with machines becoming as intelligent as humans, raises a question, namely ‘Is prolonged human travel in Space really viable and economically sensible?’

The evidence implies that humans are unsuited to prolonged Space travel (e.g. see here and here). So why send humans to Mars when intelligent machines are a better option? Perhaps a rethink of putting humans on Mars will happen as AI and intelligent machines become mainstream, perhaps it won’t. Meantime, the Badger wholly subscribes to the pursuit of knowledge and the exploration of the unknown, but he will enjoy Star Trek for what it is: just imaginative entertainment…