It’s not wrong to be rewarded for working hard…

Over the years, the Badger’s been an independent observer in numerous formal meetings dealing with an employee performance or disciplinary issue, or an employee complaint. There were robust procedures for these, and HR always ensured that a record was kept of what was said at the meeting. Many of those the Badger attended were memorable, not because of the particular issue, but because they provided an insight into the character and attitude of the employee concerned.

With elections in the UK imminent, the Badger recalls one employee complaint meeting which highlighted that people not only make different life choices but also have different reasons for working. The Badger was asked to be the company’s independent observer at the meeting, which involved HR, the complainant’s boss, the complainant, and a friend supporting them. The Badger didn’t know any of them; they were all from a different part of the company. The complaint seemed straightforward. The complainant had asserted that they were being unfairly treated because another colleague of the same age and length of service working on the same project had a higher salary. There’d been a previous meeting, but the issue was unresolved because the interactions between the individual and their boss had become antagonistic.

The Badger quickly tuned into the complainant’s attitude to work and life. They were intelligent, articulate, likeable, and passionate about their many costly interests and hobbies outside of work. They always arrived for work on time and always left on time. They never worked extended hours, even when incentivised financially to do so. It was obvious that their hobbies and interests outside of work were their priority and that work was simply the vehicle to fund them. Also, they had no interest in going the extra mile at work to earn a higher salary, because they believed that salary progression came primarily with length of service. Their project colleague with the higher salary was the opposite: motivated to do what needed to be done to build a career and accumulate the benefits that come from going the extra mile.

The meeting concluded with the HR person pointing out that the complainant and their higher-paid colleague had made different lifestyle choices, and that a complaint about someone else’s choices had no validity. They added ‘It’s not wrong for your colleague to be rewarded for going the extra mile. This country and this company were built by people who did just that’. The complaint was closed with no further action. For the Badger, it was memorable because it highlighted that people make different choices and have different motivations, attitudes, and views about working hard to build wealth. As the UK goes to the polls, the Badger senses that the HR person’s words capture a sentiment which the country needs to revive in order to be great again…  

AI in the dock?

Consider this scenario. Someone approaches an individual and asks them to provide answers to some questions. The individual performs some Google searches, consults books in a local library, and then pieces together the answers to the questions. These are then communicated to the requestor face to face, or by phone or video call. The requestor uses the answers to commit a wicked crime for which they are prosecuted. The person who provided the answers is deemed by law to have some culpability for the crime, and so they are prosecuted too. Now consider the same scenario but with the perpetrator directly asking ChatGPT (or similar) the same questions. The AI’s answers are used to commit the same wicked crime, for which the perpetrator is prosecuted. The AI, however, does not have the same legal culpability for the crime as the individual noted above.

Reports that Florida’s attorney general has opened a criminal investigation into whether ChatGPT provided advice to a gunman who committed murder last year, see here for example, made the Badger wonder about the following question: ‘Are people using AI professionally or personally really aware of where the boundaries of responsibility sit?’ Probably not, was the conclusion after musing in the spring sunshine. If a doctor follows a wrong diagnosis delivered by an AI, is the doctor responsible, or the hospital, the engineers who built the AI model, or some other organisation in the chain? Some who build and deploy AI models appear to think such responsibility questions can be sorted out later, when something goes awry and causes a crisis. That is never a sensible approach.

The more AI develops, the more it impacts important aspects of everyone’s life. However, it isn’t obvious, at least to the Badger, that professionals or the public understand much about how AI arrives at its answers. The Badger, who’s not a lawyer, thus spent a little time exploring how the law deals with the question of responsibility when someone takes action guided by AI’s output. It appears that you, the user – not the AI vendor nor the algorithm – are legally responsible. This means that anyone – organisations, professionals, or members of the general public – using AI is always responsible and liable for the actions taken on guidance from AI. Organisations and humans can be sued, but AI cannot. When AI makes a mistake, liability flows to the humans and organisations that deployed and used it.

That’s not really a surprise, but it’s a reminder for all users that they are more likely to find themselves in the dock than AI. It’s also a reminder that proper human consideration and diligence is imperative before acting on AI’s outputs. The Badger also thinks it’s a reminder that we must never allow AI to autonomously rule the world…

Digital backlash…

The Artemis mission around the far side of the moon, the sight of humanoid robots running a half marathon (here and here), Anthropic’s Claude Mythos AI model and comments by ex-PM Rishi Sunak, all illustrate the power and relentless advance of digital technology. With a decades-long career in the IT industry, the Badger has routinely dealt with perpetual change in digital technology. The rapidity of that change kept the Badger and his colleagues motivated, challenged, learning and eager for new skills, and greatly satisfied when systems were delivered to clients and put into operational use. With this background you might think the Badger is an ardent digital technophile today, but he’s not. He’s ‘neutral’, with no strong affinity for, or aversion to, digital technology. He’s not overly enthusiastic about digital technology’s constant impact on our lives, but not overly critical of it either. Why is that?

The answer lies in three points: there’s no putting digital advances back in the box once they exist, not all digital technology is good for society, and digital technology dominated by a handful of individuals, corporations, or countries does not lead to a focus on benefiting humanity as a whole. Regarding the first point, innovation is a human attribute that will always produce advances, and there’s nothing wrong with that. It’s the second and third points which have moved the Badger to ‘neutral’ over the last decade, because digital technology has taken over our lives by stealth, driven ostensibly by agendas set by giant US and Chinese corporations controlled by a handful of individuals. Regulators have been slow, and tech giants, protecting their own agendas, have resisted at every turn the introduction of sensible new laws that would benefit wider society. Digital advances have infiltrated society by default and diffusion without too much regard for the impact on the public. AI simply illustrates the point. Philosophical objectivity is thus at the heart of why the Badger’s become a neutral rather than an ardent technophile.

Everyone today is more aware than ever before of digital technology’s downsides. There’s a growing willingness among the public to push back on the digital world. The UK government backtracked on its Digital ID ambitions after a backlash, there’s a growing backlash against AI in the US (see here, here, and here), Swedish schools are cutting back on digital learning and returning to books, pen, and paper, numerous countries are moving to ban social media for under-16s, a ban on children using smartphones at school has just been announced in the UK, and big tech has just lost a landmark social media addiction case. Society’s increasingly pushing back against, and questioning, an unrestrained digital world, and this backlash seems likely to grow with time. Indeed, with the world as it is today, the Badger’s unlikely to move from a neutral affinity any time soon…

OpenAI pausing Stargate UK is hardly a surprise!

As widely reported (see here for example), OpenAI is pausing its multi-billion-dollar Stargate UK project. The project was first announced in September 2025 with the declared purpose of ensuring ‘OpenAI’s world leading AI models can run on local computing power in the UK, for the UK – particularly for specialist use cases where jurisdiction matters. This will help power the UK’s future economy, boost its global competitiveness, and deliver on the country’s national AI Opportunities Action Plan’. The UK government’s AI Opportunities Action Plan had been announced in January 2025 as a focus for ramping up AI adoption to boost economic growth, jobs, and improvements to people’s everyday lives. A year later, in January 2026, a seemingly positive progress update was published. The government’s thus likely to be wringing its hands about OpenAI’s pause. Why? Because it puts a dent in the country’s desire to be an ‘AI superpower’, especially when the company asserts that regulation and high energy costs are obstacles. The Stargate UK pause, however, is hardly a surprise given that the holistic situation faced by OpenAI today is really no different to when the project was announced last September.

OpenAI announced the project on the date President Trump started his state visit to the UK. With tariffs as a backdrop, the pressure on the UK government to make the visit a success was huge, and a centrepiece during the visit was the signing of a technology partnership involving new investment and cooperation on AI. Domestically, the government needed this to promote its growth agenda, but a ‘technology partnership’ and tangible realities are different things. Given the pressure for the visit to be a success, OpenAI’s Stargate UK announcement was part of an overall joint PR strategy – at least that’s what the Badger senses. At that time, the UK had some of the highest electricity costs in the world, and that’s still the case today! If there’s one thing an aspirant AI superpower needs, it’s economically competitive electricity, and so it can hardly be a surprise when a commercial company focused ‘on the business case and numbers’ decides to hold off further investment. Additionally, there’s uncertainty about changes to UK law to allow AI firms to train their systems using copyrighted works, ongoing investor anxiety about an AI bubble, the fact that OpenAI hasn’t yet delivered a profit and is forecast to make losses of ~$44 billion before becoming profitable in 2029, and massive competition from Google (and others) that is raising significant questions about OpenAI’s future. All of these points were material when Stargate UK was announced 7 months ago, and they remain so today.

A sceptic could thus be excused for thinking that the project was driven by geopolitical public relations necessity in the first place. For the Badger, whose instincts have been sharpened by experience, it’s hardly a surprise that Stargate UK is paused…

Required leadership qualities – Competence, Consistency, Clarity, Communication and Charisma…

Early in the Badger’s IT career, before the internet arrived, training for delivery people – project and team leaders, and technical staff – took place face to face in a group led by a senior delivery person and a professional trainer. Such training was often a one or two-day event conducted away from the hubbub of the workplace so that participants were not distracted by their normal work activities. At one course the Badger attended, participants were challenged to express the qualities that the members of project teams look for in their delivery leader. Participants, all team or project leaders with various levels of experience, had ten minutes to produce five words for discussion with the course leaders and the wider group.

Many found it more difficult than expected because they struggled to think about delivery leadership from the perspective of team members who were not, and never aspired to be, leaders. Nevertheless, at the end of the exercise and subsequent discussion, the group converged on the following five words as required leadership qualities: Competence, Consistency, Clarity, Communication and Charisma. These words became known as the 5Cs and provided the theme underpinning the rest of the course. Whilst their context related to what delivery people look for from their delivery leader, the Badger’s found over the years that they are a good reference point for what to look for in leaders more generally.

The Badger’s worked for, and with, many senior leaders of all kinds over the years. They all had different personalities, strengths, and weaknesses. Some were more competent than others, some were more consistent and clearer than others, and some were better and more inspiring communicators than others. None were extroverts, but they all had a charisma that you couldn’t quite put your finger on. Underpinned by the 5Cs, the Badger considered some to be much better leaders than others. There’s lots of broader and more detailed information available about the traits of good leaders, but the Badger’s routinely used the simple, qualitative 5Cs as his mental ‘initial leadership quality’ checklist over the years to shape an initial opinion – one which has sometimes subsequently changed. Sometimes, however, that initial opinion has not been very flattering and has not changed.

With the 5Cs concept in your psyche, you can’t help but use it to judge leaders who regularly appear on broadcast or social media, even though you’ve never met them. Inevitably that’s unfair, but rather than relying on instinct alone, the 5Cs provide some structure for forming an opinion about where that person sits on the POOR to GOOD leadership qualities spectrum. Ego, wealth, and a powerful position are not the same as good leadership qualities. For example, any leader who rants publicly and profanely on social media is unprofessional and sets a bad example for online behaviour. Someone with GOOD leadership qualities would never do this…

Will AI experience a ‘tobacco moment’?

The Badger smiled and then sighed when Meta and YouTube were recently found liable for harming a young woman through the addictive design of their products and their failure to warn users of the risks. The smile was because it’s good to see tech giants not getting their own way. The sigh was because it’s taken far too many years to get to this point. Sensible people have known for years that these apps are designed to keep users compulsively engaged for as long as possible because it’s the clever monetisation of this that underpins their business models.

The Badger recalls the early days of social media when it simply helped people stay in touch, share milestones, and reconnect with old friends. In those days there was a clear divide between real and online life. Conversations ended on leaving a room or putting the phone down, photographs lived in physical albums, and social media was used as a harmless tool and not something that shaped or dominated how we lived. Today things are quite different. Social media has grown in power, profitability, and influence, to such an extent that the average person spends more time online using it than is prudent. What’s changed since those early days is the design of the apps and platforms. Endless scrolling, algorithm-driven recommendations, push notifications, and short video loops aren’t accidental. They’re features engineered to keep people engaged for as long as possible. Indeed, the BBC was reporting way back in 2018 that social media apps were deliberately addictive to users. The Badger thinks all this has certainly eroded the real-world routines, relationships, and boundaries for users over the last decades.

In the Meta and YouTube case, the plaintiff’s lawyers cleverly focused on how the platforms are designed, rather than on what’s posted on them, to win. The two giants plan to appeal, but it’s debatable whether the appeals will succeed. Social media is thus having to grapple with the fact that this could be a reckoning similar to that experienced by the tobacco industry some decades ago. This ‘tobacco moment’ prompted the Badger to muse on whether AI will ultimately experience such a moment too. He concluded that it will. AI has the potential to harm institutions, elections, markets, information ecosystems, and critical infrastructure, and so its reckoning moment could happen faster and on a global, structural scale. The possible triggers might relate to bias, misinformation, autonomy, and safety failures. Like the ‘tobacco moment’ for social media, AI’s moment will not be about banning it, but about liability.

A ‘tobacco moment’ isn’t about a single lawsuit. It happens when society collectively decides that an industry has externalized too much harm and the legal, regulatory, and cultural tides all turn at once. It seems foolhardy, therefore, to think that AI will be immune to a ‘tobacco moment’ of its own at some stage in the future…

Rage against the screen…

The Badger’s 6-year-old grandson likes trains! Books about trains, Brio train sets, and Lego trains are favourite toys, but seeing and riding on real trains brings a special sparkle to his eyes. He loves to watch steam engines chuff along the Watercress Line, see historic locomotives in museums, ride miniature railways at visitor attractions, and travel on the regular trains that commuters use every day. He’s fascinated by how trains work, which is great, but his persistent questions about ‘how’ and ‘why’ can sometimes be wearing!

Last weekend the Badger and his grandson did something that didn’t relate to trains. They visited the Tangmere Military Aviation Museum, a small place with a number of static military jets as well as memorabilia from when Tangmere was a World War II RAF fighter base. The visit spawned an observation about 6-year-olds that the Badger had not anticipated. At each exhibit there’s a computer that can be used to engage with the exhibit’s story, pull up photographs, and watch film clips. At many exhibits it’s possible to sit in the cockpit, peer into the fuselage, and use a computerized simulator. The Badger’s grandson observed that planes are engineered and work differently to trains!

It was all fun, but the Badger noticed that his grandson preferred using the computers to engaging physically with the exhibits themselves. For example, the Canberra has part of its fuselage removed so visitors can easily lean in to see the environment around the pilot and crew. Adjacent to the jet is a computer showing images streamed from a camera mounted inside the fuselage. The camera can be panned through 360 degrees using a mouse, and the user can zoom in on any part of the pilot and crew area. This 6-year-old used the computer rather than physically looking inside the fuselage. The preference was clear with other exhibits too. Seeing that ‘the screen’ had a greater pull on the youngster than exploring the exhibit physically made the Badger uneasy. If youngsters in their early formative years prefer screens to engaging with the physical real world, then we should surely all be worried.

On the car radio driving home, the Badger listened to the CEO of Mumsnet being interviewed about Mumsnet’s ‘Rage Against the Screen’ campaign, which is calling on politicians to ban social media for under-16s, stop Big Tech using data to target children with addictive algorithms, and put children’s safety and wellbeing ahead of platform profits. The Badger found himself agreeing with the points made. In the UK, you must be 16 years or older to do many things (see here), so why not ban social media for under-16s? If the Badger’s grandson is already ‘virtual rather than physical world first’ at the age of 6, then ‘Rage Against the Screen’ is surely a campaign that needs to succeed…

AI and progress towards nuclear fusion for power generation…

When the radio alarm signals that it’s time to rise and prepare for the day ahead, it’s easy to doze for a few extra minutes without listening to the programme being broadcast. Sometimes, of course, there’s something in the babble which grabs your attention, sharpens alertness, and forces you to concentrate on whatever’s being said. That’s exactly what happened to the Badger earlier this week. The babble included an item of interest because it related to the Badger’s post-doctoral research many decades ago. That item was about the scientific and engineering drive to harness the power of nuclear fusion for the generation of limitless, sustainable, carbon-free electricity.

The item covered the UK government’s written statement on the UK’s Fusion Strategy, its investment in STEP – building a prototype fusion plant in Nottinghamshire by 2040 – and its investment in the world’s most powerful fusion-dedicated AI supercomputer to accelerate fusion design, modelling, and operations. It asserted that this is the most ambitious push yet to establish the UK’s complete energy independence and insulate it from foreign price shocks. Investing £45m in this supercomputer, part of a wider government effort in AI and supercomputing infrastructure that has already seen a separate £36m supercomputer investment at the University of Cambridge, is a step along the road. However, let’s face it, it’s a tiny step when the country spends >£60bn on Defence, >£300bn on Welfare, and >£90bn on Education.

Harnessing nuclear fusion is the holy grail of clean, limitless energy. It’s been that way for as long as the Badger can remember, and so has its reputation for always being ’50 years away’. The scientific and engineering challenges to be overcome in order to build and operate a commercially viable nuclear fusion reactor are enormous. However, there have been huge advances over the last 30 years or so, a timeline that in parallel has also seen huge advances in computers and information processing. The latter has already helped enormously in getting fusion to its current position, and there’s little doubt that further computing advances, particularly in AI and machine learning, will continue to accelerate progress towards achieving the holy grail of large-scale, carbon-free, sustainable energy on this planet.

But with the first experimental reactors currently forecast to start operating around 2040 and beyond, usable power from fusion still seems ’50 years away’ in practice. Generation Alpha and their children are thus likely to be the first generations to use power from viable fusion reactors. So, here’s a thought. Enormous amounts of money are being spent on AI across the globe. In comparison, a pittance is being spent on getting to power-generating fusion reactors that will hugely benefit our planet. Unless there’s a 1960s-style ‘let’s go to the moon’ moment for fusion, the Badger can’t help but feel that it will always be ’50 years away’ regardless of investments in dedicated AI supercomputers…

Nuclear Power for AI Data Centres…

According to the World Nuclear Industry Status Report, whose data can be explored visually here, there are 407 operational nuclear reactors currently generating electricity across the world. Of these, 94 are in the USA, 62 are in China, 57 are in France, and 34 are in Russia. The average age of the world’s operational reactors is 32.6 years, and they generate ~9% of global electricity. There are ~11,800 data centres worldwide, with a rapidly growing proportion incorporating AI-specific infrastructure. Whereas traditional data centres require 10–15 kW of electricity per rack, AI data centres need 40–250 kW per rack to support the heavy computational demand of AI models. So, where’s this extra electricity coming from? It’s a question brought into sharper focus by the conflict in the Middle East and its potential impact on the availability and price of gas, which is used to generate ~20% of electricity globally.
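As a back-of-the-envelope illustration, a short Python sketch using the per-rack figures quoted above (these are the article’s rough figures, not vendor specifications) shows just how much more power-hungry AI racks are:

```python
# Rough comparison of rack-level power demand.
# Figures quoted above: traditional racks ~10-15 kW, AI racks ~40-250 kW.
TRADITIONAL_KW = (10, 15)  # (low, high) kW per traditional rack
AI_KW = (40, 250)          # (low, high) kW per AI rack

def demand_multiple(baseline, ai):
    """Return the (low, high) multiple of AI rack demand over a traditional rack."""
    low = ai[0] / baseline[1]   # best case: lowest AI demand vs highest baseline
    high = ai[1] / baseline[0]  # worst case: highest AI demand vs lowest baseline
    return low, high

low, high = demand_multiple(TRADITIONAL_KW, AI_KW)
print(f"An AI rack draws roughly {low:.1f}x to {high:.0f}x the power of a traditional rack")
```

In other words, even in the most favourable comparison an AI rack draws several times the power of a traditional one, and in the worst case around twenty-five times as much, which is why the question of where the extra electricity comes from matters.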

All the major tech giants have been considering this question for some time. They want a reliable electricity supply and low emissions for their AI endeavours and are thus turning to nuclear power. For example, Microsoft wants to restart a Three Mile Island reactor mothballed in 2019, and Meta have signed a trio of nuclear deals securing enough electricity to power ~5 million homes for its AI data centres. It takes decades to build new, large-scale nuclear reactors like those currently connected to electricity grids, and the surge in power demand for AI data centres is surpassing the planned new generation and transmission capacity. Amazon and most of the tech giants are thus keen to harness Small Modular Reactors (SMRs) to sustain AI growth. SMRs are new, with just two currently operable worldwide. However, you’ll see from the World Nuclear Association’s SMR project tracker that we can expect many more to come on stream over the next decade.

Nuclear SMRs will thus be key providers of the power for the AI data centres needed to underpin this digital technology’s ever more rapid momentum. Is that a problem? No, provided there’s strict regulatory control before, during, and after SMRs are built and put into service, and that global institutions exist with real teeth to ensure that commercial organisations and nation states do not flout the necessary balance between AI self-interest, the greater good, and the proliferation of nuclear material. That may be a tall ask in a world which is full of conflict, extremism, and volatility, and is already embarked on a huge race for AI dominance. SMRs, however, are new, and things may not go to plan. If SMR delays happen, then we may see AI momentum slow over the coming decade. Electricity, after all, is the lifeblood of the digital world, and if there isn’t enough of it then things are bound to go awry…

Drone – The word of the decade…

Most people try to live the best life they can, and most want to live in a world where rules help their chances of doing so. Most don’t want to live in a world dominated by those who ignore or flout rules to suit their own purpose. The world order, however, is changing, the United Nations appears toothless, and disruptive geriatric leaders are making life hard for everyone. Conflicts around the world are making ordinary people increasingly worried, but anyone who wants to live their best life must focus on the things they can control and change rather than worry about the things they can’t. That’s sound guidance, but easier said than done.

The future is more uncertain today than it has been for many years, and so when an old IT colleague asked what the Badger’s word or phrase of the 2020–30 decade would likely be, they didn’t get the answer they expected. They anticipated phrases like ‘Artificial Intelligence’, ‘machine learning’, or ‘deep fake’, but the Badger’s answer was one word, namely ‘Drone’. There are still some years of the decade to go, but on the evidence so far, and with further rapid tech advances inevitable in the coming years, the Badger feels that he’s unlikely to change his mind about his choice of word.

‘Drone’ is a word that’s growing in importance for anyone who wants to live the best life they can. It’s a fascinating word with a range of biological, sonic, technological, and metaphorical uses. For example, drone is a function, a sound, a warning, and a weapon. It can describe the buzz of a bee, the whirr of a machine, a worker, some of humanity’s most advanced tools, and a shadow overhead to be feared by civilian and military personnel alike. Ten years ago, it was mainly used to refer to bees or the experimental technology of unmanned aerial vehicles (UAVs), but at the start of this decade it became used mostly as a descriptor for any autonomous or remotely controlled civilian or military flying object. Today it is a blanket term for any man-made, autonomous or remotely controlled flying object that can perform any civilian or military function. When someone uses the word today, it will mostly be in the context of weapons used in Ukraine and the Middle East, and not bees!

Declaring ‘the word of the decade’ halfway through a decade might be foolhardy, but the Badger’s sticking with it, because he feels that clever, man-made, affordable, flying objects for civilian and military purposes will continue to evolve rapidly and become a historically significant feature of this decade. Meanwhile, the bee population, essential pollinators in nature, is in decline. Somehow the word ‘drone’ highlights that humans have their priorities the wrong way round. If you want to live your best life, then change something – plant something in the garden to attract bees…