It’s not wrong to be rewarded for working hard…

Over the years, the Badger’s been an independent observer in numerous formal meetings dealing with an employee performance or disciplinary issue, or an employee complaint. There were robust procedures for these, and HR always ensured that a record was kept of what was said at the meeting. Many of those the Badger attended were memorable, not because of the particular issue, but because they provided an insight into the character and attitude of the employee concerned.

With elections in the UK imminent, the Badger recalls one employee complaint meeting which highlighted that people not only make different life choices, but also have different reasons for working. The Badger was asked to be the company’s independent observer at the meeting, which involved HR, the complainant’s boss, the complainant, and a friend supporting them. The Badger didn’t know any of them; they were all from a different part of the company. The complaint seemed straightforward. The complainant had asserted that they were being unfairly treated because another colleague of the same age and length of service working on the same project had a higher salary. There’d been a previous meeting, but the issue was unresolved because the interactions between the individual and their boss became antagonistic.

The Badger quickly tuned into the complainant’s attitude to work and life. They were intelligent, articulate, likeable, and passionate about their many costly interests and hobbies outside of work. They always arrived for work on time and always left on time. They never worked extended hours even when incentivised financially to do so. It was obvious that their hobbies and interests outside of work were their priority and that work was simply the vehicle to fund them. Also, they had no interest in going the extra mile at work to earn a higher salary because they believed that salary progression came primarily with length of service. Their project colleague with a higher salary was the opposite, motivated to do what needed to be done to build a career and accumulate the benefits that come from going the extra mile.

The meeting concluded with the HR person pointing out that the complainant and their higher-paid colleague had made different lifestyle choices, and that a complaint about someone else’s choices had no validity. They added ‘It’s not wrong for your colleague to be rewarded for going the extra mile. This country and this company were built by people who did just that’. The complaint was closed with no further action. For the Badger, it was memorable because it highlighted that people make different choices and have different motivations, attitudes, and views about working hard to build wealth. As the UK goes to the polls, the Badger senses that the HR person’s words capture a sentiment which the country needs to revive in order to be great again…  

AI in the dock?

Consider this scenario. Someone approaches an individual and asks them to provide answers to some questions. The individual performs some Google searches of the internet, consults books in a local library, and then pieces together the answers to the questions. These are then communicated to the requestor face to face, or by phone or video call. The requestor uses the answers to commit a wicked crime for which they are prosecuted. The person providing the answers to the requestor’s questions is deemed by law to have some culpability for the crime and so they are prosecuted too. Now consider the same scenario but with the perpetrator directly asking ChatGPT (or similar) the same questions. The AI’s answers are used to commit the same wicked crime for which the perpetrator is prosecuted. The AI, however, does not have the same legal culpability for the crime as the individual noted above.

Reports that Florida’s attorney general has opened a criminal investigation into whether ChatGPT provided advice to the gunman behind a murder last year, see here for example, made the Badger wonder about the following question: ‘Are people using AI professionally or personally really aware of where the boundaries of responsibility sit?’ Probably not, was the conclusion after musing in the Spring sunshine. If a doctor follows a wrong diagnosis delivered by an AI, is the doctor responsible, or the hospital, the engineers who built the AI model, or some other organisation in the chain? Some who build and deploy AI models appear to think such responsibility questions can be sorted out later, when something goes awry and causes a crisis. That is never a sensible approach.

The more AI develops, the more it impacts important aspects of everyone’s life. However, it isn’t obvious, at least to the Badger, that professionals or the public understand much about how AI arrives at its answers. The Badger, who’s not a lawyer, thus spent a little time exploring how the law deals with the question of responsibility when someone takes action guided by AI’s output. It appears that the user, not the AI vendor or the algorithm, is legally responsible. This means that anyone – organisations, professionals, or members of the general public – using AI is always responsible and liable for actions taken on guidance from AI. Organisations and humans can be sued, but AI cannot. When AI makes a mistake, liability flows to the humans and organisations that deployed and used it.

That’s not really a surprise, but it’s a reminder for all users that they are more likely to find themselves in the dock than the AI. It’s also a reminder that proper human consideration and diligence are imperative before acting on AI’s outputs. The Badger also thinks it’s a reminder that we must never allow AI to autonomously rule the world…

OpenAI pausing Stargate UK is hardly a surprise!

As widely reported (see here for example), OpenAI is pausing its multi-billion-dollar Stargate UK project. The project was first announced in September 2025 with the declared purpose of ensuring ‘OpenAI’s world leading AI models can run on local computing power in the UK, for the UK – particularly for specialist use cases where jurisdiction matters. This will help power the UK’s future economy, boost its global competitiveness, and deliver on the country’s national AI Opportunities Action Plan’. The UK government’s AI Opportunities Action Plan had been announced in January 2025 as a focus for ramping up AI adoption to boost economic growth, jobs, and improvements to people’s everyday lives. A year later, in January 2026, a seemingly positive progress update was published. The government’s thus likely to be wringing its hands about OpenAI’s pause. Why? Because it puts a dent in the country’s desire to be an ‘AI superpower’, especially when the company asserts that regulation and high energy costs are obstacles. The Stargate UK pause, however, is hardly a surprise given that the holistic situation faced by OpenAI today is really no different to when the project was announced last September.

OpenAI announced the project on the date President Trump started his state visit to the UK. With tariffs as a backdrop, the pressure on the UK government to make the visit a success was huge, and a centrepiece during the visit was the signing of a technology partnership involving new investment and cooperation on AI. Domestically, the government needed this to promote its growth agenda, but a ‘technology partnership’ and tangible realities are different things. Given the pressure for the visit to be a success, OpenAI’s Stargate UK announcement was part of an overall joint PR strategy – at least that’s what the Badger senses. At that time, the UK had some of the highest electricity costs in the world, and that’s still the case today! If there’s one thing an aspirant AI superpower needs, it’s economically competitive electricity, so it can hardly be a surprise when a commercial company focused ‘on the business case and numbers’ decides to hold off further investment. Additionally, there’s uncertainty about changes to UK law to allow AI firms to train their systems using copyrighted works, ongoing investor anxiety about an AI bubble, the fact that OpenAI hasn’t delivered a profit yet and is forecast to make losses of ~$44 billion before becoming profitable in 2029, and massive competition from Google (and others) that raises significant questions about OpenAI’s future. All of these points were material when Stargate UK was announced 7 months ago, and they remain so today.

A sceptic could thus be excused for thinking that the project was driven by a geopolitical public relations necessity in the first place. For the Badger, with his instincts rattling from experience, it’s thus hardly a surprise that Stargate UK is paused…   

Delhi, AI, and a rosy future for IT services companies?

‘For the times they are a-changin’’ sang Bob Dylan in the 1960s. This is particularly apt today given four recent matters which, in the broadest sense, have IT at their core.

First, Meta’s CEO has testified for the first time before a jury to defend against accusations that Meta’s social media platforms harm children’s mental health, and that their platforms are designed to prioritise keeping users scrolling to maximise profits. The trial’s outcome could prove seismic. Second, ‘Epstein data’ has triggered an inevitable media and political frenzy and repercussions for some individuals, but it has produced little so far that would stand up in a court of law. Nevertheless, ‘Epstein data’ is a reminder of the dangers of email, and that using services underpinned by IT always leaves a record somewhere. The third matter is the impact of AI-driven fears on the share prices of major IT services companies. Investors are anxious about the future demand for IT consulting/services. At the time of writing, the share prices of Accenture, Capgemini, CGI, Sopra Steria, Tata Consultancy Services, and Infosys have dropped by 42%, 36%, 36%, 32%, 28%, and 27%, respectively, over the last 12 months. The market is taking a sober look at the impact of AI.

And the fourth is the AI Impact Summit in Delhi, the largest ever global gathering of world leaders and tech bosses, which ended with 88 nations signing the ‘Delhi Declaration of AI Impact’. Some have called this the ‘Delhi Magna Carta’ to emphasise that it represents a milestone in global cooperation, and consensus about AI’s use for economic growth and social good. The Declaration, however, is not legally binding, so the Magna Carta label is a political metaphor rather than the mark of a formal treaty. The Declaration’s a political statement of principles which are far from certain to be embedded into national/international laws, standards, and institutions. A hint about why it may ultimately have little influence is captured by a comment from the USA reported in the item here, namely that the USA will not accept ‘global governance of AI’. Why? Because it and China are locked in a structural competition over computational power, microchips, AI-enabled defence systems, and the control of global standards. It’s existential for both, and the Declaration doesn’t change that.

Unsurprisingly, ‘For the times they are a-changin’’ is an even louder truth today, both geopolitically and for IT services companies and their employees. Dario Amodei, CEO of Anthropic, foresees AI eliminating the jobs of many software engineers. It’s always been important for IT and tech companies to be fleet of foot and for IT people to keep their skills current. The Delhi Declaration highlights that this is more important than ever. With AI-driven transformation gathering pace, the market is showing that a rosy future for IT services companies, and their employees, is not guaranteed…

AI – from ‘build, baby, build’ to ‘bust, baby, bust’?

Every Christmas/New Year period, the BBC’s Radio 4 Today programme invites well-known individuals to guest-edit the programme. Each guest focuses on a topic relevant to their interests, experience, and society. Two of the Christmas 2025 guests were inventor, engineer, and businessman Sir James Dyson and the AI pioneer and entrepreneur Mustafa Suleyman. The Badger was driving to visit relatives on the days they were guest-editing. He had the Today programme on the radio as background noise on both occasions. He turned the volume up when each man was interviewed because they were intelligent, impressive, and articulate individuals conveying enormous common sense and objectivity, characteristics which seem in short supply today.

Their words resonated with the Badger. Sir James Dyson, for example, likes ‘doers’ rather than ‘talkers’, and Mustafa Suleyman spoke eloquently about AI and that it must be ‘a tool in the hands of and under the control of humans if it’s to benefit all of humankind’. There are plenty of ‘talkers’ in the world, but it’s ‘doers’ like these two, with vision, objectivity, common sense, and a passion for humankind, rather than politicians, who have the greatest influence on the lives of most people. The Badger agrees that AI is a tool. There are plenty of ‘talkers’ concerned that humans will become subservient to AI, but if we let that happen then we only have ourselves to blame. There’s currently a huge ‘build, baby, build’ rush to construct new, giant, energy-hungry AI data centres and to amass and use the chips and devices they need to function. Enormous sums are being spent around the world, the technology continues to advance way ahead of any regulation, and AI company stock market valuations are stratospheric. Because the Badger worked in IT during the dot.com era, the words of these two men made him ponder more about the current AI ‘build, baby, build’ surge.

Four conclusions emerged. The first was that such surges often produce over-capacity and ‘bust, baby, bust’ outcomes (cf. China’s property crash), and the bigger the boom, the deeper and longer the bust! The second was that AI is here to stay, but some huge AI companies will not survive, even though the AI market bubble is not like the dot.com era when many companies with high valuations had no revenues. Inevitably, when investor appetite for speculative risk tightens for any reason, and it will, a painful correction will happen. The third was that eyebrows should be raised when tech companies arrange for the restart of shuttered nuclear facilities to provide electricity for their new data centres.

The Badger’s last conclusion was that we should question whether the world’s leaders, including those of hyperscale global tech corporations, are the right kind of ‘doers’. Do they have objectivity, common sense, and mankind’s well-being at heart, or are they just examples of Lord Acton’s 1887 line ‘Power tends to corrupt, and absolute power corrupts absolutely’? Whatever the answer, 2026 looks likely to be a troublesome year…

Social media: The same trajectory as tobacco?

A New Year is fast approaching. For many it’s a time of joy and optimism, but for others it can be a daunting, sad, and worrying prospect. Christmas and the New Year period for the Badger’s family is about getting together whatever the circumstances. When we do, there’s always a discussion about the future of the tech world and so the Badger’s been musing on the subject in preparation. One of his conclusions has been that foreseeing a future event isn’t as outrageous as it might seem if you look at history and compare it with present-day dynamics.

The Badger’s concluded, for example, that ‘social media will follow the same trajectory as other industries that have touched health, cognition and social order’. That’s not an outrageous conclusion when there are striking structural parallels between social media and, for example, the tobacco industry. The latter thrived for decades in a regulatory vacuum with products that were known to damage users’ health. Similarly, social media operates in an under-regulated space with products that keep users engaged to maximise profits regardless of the toll on public health. Whereas tobacco’s harm is biochemical and physiological, social media’s is cognitive, social, behavioural, and physical in a way that’s harder to see or measure. It hides its harm behind its convenience, utility, and benefits. Worrying about harmful content, and about its encouragement of habitual screentime that lowers physical activity, shortens attention spans, and erodes emotional adaptability, is not misplaced, because these are all bad for long-term physical and mental health.

The tobacco industry was built on the underlying motives of maximum user engagement, maximum revenue, product optimisation for addictive behaviour, and resistance to regulation. Social media seems the same. With tobacco, lawmakers eventually ‘woke up’ because – as history shows with industries that touch human health, cognition, and social order – once harms and their cost become undeniable in the public domain, society always pushes back! At some stage this seems likely to happen with social media, resulting in its radical transformation. Gradual reform rarely works when business models are not aligned with societal well-being, companies are financially and politically powerful, and consumers have become accustomed to their products and services. Any transformation of social media, given the slow speed of regulation, seems a long way off unless something radical happens.

What could that something be? Well, history shows that radical change tends to come from economic collapse rather than moral awakenings or gradual reform. If the social media giants were to start making huge financial losses that collapse their share price, then radical change would happen because such shocks always force restructuring, regulation, and cultural re-evaluation. Is this plausible? Well, never say never! The Badger will be adopting ‘never say never’ as his reference point for everything during 2026. In the current world and tech climate, it seems silly to do otherwise…

The world needs Australia to succeed with banning those under 16 from major social media platforms…

Australia’s legislation banning those under the age of sixteen from major social media platforms came into force today, 10th December. Its purpose is to protect children from harmful content, cyberbullying, and online predators. The major social media platforms are required to take reasonable steps to enforce age restrictions or face fines of up to AU$50 million. A neat item from Australia’s ABC on the topic can be found here. Some platforms began locking out existing under-sixteen accounts and blocking new ones a couple of weeks ago.

Australia is the first country in the world to impose such a ban, and its move could be the first domino in a global trend given that debates are underway in many other countries about following suit. Supporters of the ban see it as a necessary safeguard against online harms and a way to hold the giant tech companies accountable. Critics and the social media companies, however, argue that the ban is blunt, hard to enforce, risks isolating teenagers, and raises privacy/digital rights concerns. After absorbing a wide variety of views expressed in the media by affected teens, parents, and industry and government commentators, the Badger asked himself, ‘whose side are you on?’ He found the answer surprisingly easy.

From his own use of social media, the Badger thinks that society’s general moral decline is plain to see when misinformation and disinformation abound, and a lot of content amplifies unethical behaviour, distorts decent judgement, and attempts to reshape cultural values. Viral fame seems to reward scandals, outrage, and bad conduct, and constant exposure to divisive content fuels fear and outrage, undermining the traditional values that have held communities together for generations. Today’s under-sixteens are vulnerable because they often model their behaviour on what they see online rather than on traditional role models. The Badger thus admires and supports Australia’s action because the major platforms have been too powerful for far too long. They are fast to act to make more money from users’ content, but slow to act on anything dubious or perceived as limiting their power and interests. Will more countries eventually follow Australia’s lead? Probably.

The ban’s critics assert that under-sixteens will simply find alternative ways to access the major platforms. That’s a hollow argument because it’s always been true that teenagers find ways around legal barriers. For example, there are laws about underage consumption of alcohol and smoking cigarettes, and yet it happens! Similarly, in his youth the Badger and his friends found ways of watching movies rated as inappropriate for their age at the local cinema. As has always been the case, the law puts a firm stake in the ground for society, and long may that continue. The world thus needs Australia to succeed with its ban, so let’s hope it does…

Identifying the cleverest person in the room…

IT professionals have experienced rapid innovation, constant engineering process evolution, progressive professionalism and quality improvement, and the commoditisation of technology and services over the last five decades. As an IT professional, the Badger’s worked with many clever and intelligent leaders, managers, and technical people who thrived on this continual dynamic change. Clever and intelligent people have always been at the heart of IT, but clever people don’t always have the greatest intelligence, and vice versa!

While fixing a dysfunctional project decades ago, the Badger had to attend a meeting involving the company’s Managing Director (MD) and other senior company staff and their opposite numbers from the customer to decide the project’s future. It was the Badger’s first time attending such a senior-level meeting. During the pre-meeting briefing, the MD sensed the Badger’s nervousness and reassured him that others would be doing the talking. As they entered the room containing the customer’s team, the MD winked at the Badger and whispered, ‘Tell me afterwards, who’s the cleverest person in the room?’ The meeting was difficult, but it concluded with agreement on a way forward. Deciding on the cleverest person in the room was also difficult. After all, how do you tell who is cleverest in a room of clever and intelligent people?

After the meeting, the MD playfully repeated the question, and the Badger answered with what he thought the MD expected, namely that it was the MD! The MD chuckled, shook their head, said it was one of the customer’s team, and then went on to tell the Badger that cleverness and intelligence are different, but related, traits and that he should understand the difference to judge people and situations well. Cleverness is about speed of thought, ingenuity, emotional insight, adaptability, and creative problem-solving, while intelligence is about deep understanding and learning capacity. Clever people can think quickly, improvise, and solve problems in novel or unconventional ways, characteristics that are valuable in dynamic situations like debates, negotiations, or tricky interpersonal circumstances. Intelligent people, however, can acquire, understand, and apply knowledge in one or more domains, characteristics that are valuable in the likes of scientific research, planning, and the mastering of new disciplines. Clever people can be intelligent, and intelligent people can be clever, but the cleverest person in the room is always the person who has the best blend of both traits.

Learning more about the distinction between cleverness and intelligence over the years has been extremely useful. Since people are at the heart of the operations of any organisation, learning more about the difference not only arms you to pick out the cleverest person in the room, but also changes your perspective on those with impressive job titles who, the Badger’s learned from experience, are often unlikely to be the cleverest person in a room of other clever and intelligent people!

Cyber security – a ‘Holy Grail’?

King Arthur was a legendary medieval king of Britain. His association with the search for the ‘Holy Grail’, described in various traditions as a cup, dish, or stone with miraculous healing powers and, sometimes, providing eternal youth or infinite sustenance, stems from the 12th century. Since then, the search has become an essential part of Arthurian legend, so much so that Monty Python parodied it in their 1975 film. Indeed, it’s common for people today to refer to any goal that seems impossible to reach as a ‘Holy Grail’. It’s become a powerful metaphor for a desired, ultimate achievement that’s beyond reach.

Recently, bad cyber actors – a phrase used here to refer collectively to wicked individuals, gangs, and organisations, regardless of their location, ideology, ultimate sponsorship or specific motives – have caused a plethora of highly disruptive incidents in the UK. Incidents at the Co-op, Marks & Spencer, Harrods, JLR, and Kido have been high profile due to the nature and scale of the impact on the companies themselves, their supply chains, their customers, and also potentially the economy. Behind the scenes (see here, for example) questions are, no doubt, being asked not only of the relevant IT service providers, but also more generally about how vulnerable we are to cyber security threats.

While taking in the colours of Autumn visible through the window by his desk, the Badger found himself mulling over what these incidents imply in a modern world reliant on the internet, online services, automation and underlying IT systems. As the UK government’s ‘Cyber security breaches survey – 2025’ shows, the number of bad cyber actor incidents reported is high, with many more going unreported. AI, as the National Cyber Security Centre indicates, means that bad actors will inevitably become more effective in their intrusion operations, and so we can expect an increase in the frequency and intensity of cyber threats in the coming years. The musing Badger, therefore, concluded that organisations need to be relentlessly searching for a ‘Holy Grail’ to protect their operations from being vulnerable to serious cyber security breaches. As he watched a few golden leaves flutter to the ground, the Badger also concluded that in a world underpinned by complex IT, continuous digital evolution, and AI, this ‘Holy Grail’ will never be found. But that doesn’t mean organisations should stop searching for it!

These damaging incidents highlight again that cyber security cannot be taken for granted, especially when the tech revolution of recent decades has enabled anyone with a little knowledge and internet access to be a bad cyber actor. The UK government’s just announced the introduction of digital ID by 2029. Perhaps they have found a ‘Holy Grail’ that guarantees not only the security of personal data, but also that its IT programmes will deliver on time and to their original budget? Hmm, that’s very doubtful…

AI – Pop goes the weasel!

The Badger’s five-year-old grandson, full of energy, innocence, and inquisitiveness, has been staying for a few days. It’s been fun, tiring, and a reminder that grandparents can be important influencers for Generation Alpha! It was also a reminder that today’s childhood is vastly different to that of previous generations. The Badger’s grandson considers being on WhatsApp video calls, watching kids’ YouTube videos, and engaging with technology like phones, tablets, and laptops in classroom and home settings as routine. This wasn’t the case when the Badger was five, nor was it when the youngster’s Millennial parents were that age!

One evening, just before the lad’s bedtime, the Badger was on the sofa engrossed in the news feed on his smartphone. Reports of anxiety that AI is a stock market bubble about to pop had grabbed his attention. Some reports (like the one here), but certainly not all, derived from a report from MIT noting that most AI investments made by companies have so far provided zero returns. This fuelled concerns, existing in some quarters for a while, that AI is a stock market bubble soon to crash. Many of the reports drew parallels between AI and the dot.com crash of 25 years ago. As a professional in the IT sector at that time, the Badger experienced first-hand the dot.com era and its aftermath, and so he became absorbed in his own thoughts about the parallels. Until, that is, his grandson jumped on the sofa, prodded the Badger’s ribs, and asked to watch a ‘Pop goes the weasel’ cartoon. Struck by how apt ‘Pop goes the weasel’ was as a label for his AI thoughts, the Badger found a suitable YouTube cartoon and the two of them watched it on his smartphone. (A kids’ punk-music version of the rhyme didn’t seem suitable just before bedtime.)

Once the youngster was in bed, the Badger cogitated further on the dot.com era and AI. The late 1990s saw rapid tech advances with many investors expecting internet-based companies to succeed simply because the internet was an innovation. Companies launched on stock markets even though they had yet to generate meaningful revenue or profits and had no proprietary technology or finished products. Valuations boomed regardless of dodgy fundamentals, and the dot.com crash was thus, to those with objectivity, inevitable. To an extent, some of the same dynamics exist with AI today. It may be a transformative technology, with the likes of ChatGPT having impressive traction with people, but AI is really still in its infancy, striving to show a return on investment in a company setting. The Badger senses, therefore, that AI is likely in sizeable correction territory rather than dot.com crash territory. This should be no surprise, because the history of tech stock market valuations suggests, to quote the nursery rhyme, ‘that’s the way the money goes. Pop goes the weasel’…