How Not to Let AI Blow Us Up! Insights from Bletchley
Getting fixed up at the summit in Bletchley Park: A few crack technology leaders are helping figure things out.
A chorus of industry and societal leaders is claiming that AI needs a fix now, in a veiled dig at Rishi Sunak’s artificial intelligence summit focused on the future risk of ‘Frontier AI’. For those of us still trying to figure out plain-vanilla AI, the UK government is striving to confound things with its dystopian, Kevin Costner take on the AI it describes as “highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today's most advanced models”. I.e. this shit just got surreal, so hang on to your jobs and stop going on strike because the advanced models are coming!
It might seem that Sunak prefers legislating for robots to looking after living British workers. Or protecting a leading British export - the creative industries. And where is Sir Keir Starmer in all this? Maybe he’s a little snippy because he wasn’t invited to the red-hot tech summit.
For anyone feeling a little FOMO about this futuristic summit, we have produced ‘The Letts Journal’s dummy guide to AI, for those who are not robots and couldn’t go to Bletchley’.
According to the BBC, the UK’s global AI summit at Bletchley Park this week “hopes to bring together AI experts and global leaders to discuss the potential risks of artificial intelligence”. Which is British for ‘we have no idea how to make this clever tech AI shit, so we thought we would smother it in red tape to foil others’.
The summit is squarely focused on so-called “Frontier AI” models - in other words, the advanced large language models, or LLMs, like those developed by companies such as OpenAI, Anthropic and Cohere, who, funnily enough, are all based in Silicon Valley, though Cohere states it also has a politically correct joint head office in Canada. Or perhaps they just like skiing.
According to CNBC, ‘the summit will look to address two key categories of risk when it comes to AI: misuse and loss of control.
Misuse risks: a bad actor is helped by new AI capabilities. For example, a cybercriminal uses AI to develop a new type of malware that cannot be detected by security researchers, or uses it to help state actors develop dangerous bioweapons. Ouch! Sunak as James Bond wants to prevent this.
Loss of control risks: the AI that humans create for mundane tasks could be turned against them.’ Which is what happens when you ask ChatGPT to make you a cup of coffee and instead it sells you some shares in Caffè Nero.
US Vice-President Kamala Harris and European Commission President Ursula von der Leyen are going to Bletchley, if only to experience a draughty old English country house in a storm. So too is Elon Musk who, in turn, has requested a one-on-one interview with the British PM on X/Twitter/Elon’s-bot, because he hates the cold and will do anything to boost traffic on his media thingy. Sunak needs a boost as well, so it could prove to be a marriage made in heaven.
The former Liberal Democrat leader Nick Clegg will also be there, which is a bit concerning given how it turned out the last time he teamed up with the Conservatives. Apparently there will be very few UK technology leaders of note, because Rishi prefers California. The weather is better there, and what was that old saying - oh yeah - ‘thar’s gold in them thar hills’.
Meanwhile US President Joe Biden just out-Rishi’d ‘let Rishi be Rishi’ by signing an executive order that requires AI developers to share safety results with the US government. Wouldn’t want you to go too far out on a limb there, Mr President. Of course, Mr Musk has argued for the US and other countries to go further. In March, he signed an open letter calling for a pause to "Giant AI Experiments" - because his AI startup had only just got started and he needed to buy time to catch up with the others. Plus, he’s a little distracted with X, which he acquired for $44bn and which is now worth $19bn, per internal correspondence reported by Fortune. Here’s to their next lawsuit!
In response to Biden, the UK government has published five future scenarios for exploring ‘Frontier AI’ risks, which we have summarised below since you will likely never read the original. It's that boring! But it might help with a few stock tips:
Scenario 1: Unpredictable Advanced AI. In the late 2020s, new open-source models emerge, capable of completing a wide range of tasks with startling autonomy and agency. The pace of change takes many by surprise. In the initial weeks following release, a small number of fast-moving actors use these systems to have outsized impacts, including malicious attacks and accidental damage, as well as some major positive applications. There is public nervousness about the use of these tools. Ya think!
Scenario 2: AI Disrupts the Workforce. At the ‘Frontier’, relatively narrow but capable AI systems are starting to provide effective automation in many domains. By 2030, the most extreme impacts are confined to a subset of sectors, but this still triggers a public backlash, starting with those whose work is disrupted, and spilling over into a fierce public debate about the future of education and work. AI systems are deemed technically safe by many users, with confidence they will not demonstrate divergent behaviour, but they are nevertheless causing adverse impacts like increased unemployment and poverty. Which is why Sunak is happy to hand over to Starmer.
Scenario 3: AI ‘Wild West’. At the ‘Frontier’, there is a diverse range of moderately capable AI systems being operated by different actors. Whilst vibrant new economic sectors are developing based on the use of AI, widespread safety concerns and malicious use reduce societal enthusiasm. Authorities are struggling with the volume and diversity of misuse. A focus on tackling the immediate impacts of this crisis has made it hard to reach a global consensus on how to manage the issues long-term. Sounds like Silicon Valley on a good day.
Scenario 4: Advanced AI on a knife edge. A big lab launches a service badged as AGI (what the f*** is that?) and, despite scepticism, evidence seems to support the claim. Many beneficial applications emerge for businesses and people, which starts to boost economic growth and prosperity. Despite this system clearing the agreed checks and guardrails, there are growing concerns that an AI this capable can’t be evaluated across all applications and might even be able to bypass safety systems. Still sounds like Silicon Valley on a good day. Or Microsoft on any day...
Scenario 5: AI Disappoints. AI capabilities have improved somewhat, but the ‘Frontier’ is only just moving beyond advanced generative AI and incremental roll out of narrow tools to solve specific problems (e.g. in healthcare). Many businesses have also struggled with barriers to effective AI use. Investors are disappointed and looking for the next big development. There has been progress in safety, but some are still able to misuse AI. There is mixed uptake, with some benefiting, and others falling victim to malicious use, but most feel indifferent towards AI. Just like Crypto.
This “Frontier AI” stuff is all well and good, but in the meantime there is a growing list of issues and concerns about the risks and inadequacies inherent in AI systems TODAY. We thought we should highlight just a couple.
Is AI robbing our content blind?
Four leading publishing trade associations have urged the UK government to help end the “unfettered, opaque development” of artificial intelligence tools that use copyright-protected works “with impunity”. They include the Publishers Association, the Society of Authors, the Authors’ Licensing and Collecting Society and the Association of Authors’ Agents - all of whom will go shit broke if AI keeps robbing them blind. It’s the first joint statement on AI from the publishing trade bodies, proving they don’t work together very much - no wonder copyright keeps disappearing under their noses.
“This is an issue on which the entire publishing industry is united,” they continued. “It is vital that authors and rights holders are protected by government.”
“We need practices based on consent and fair payment to ensure that authors and rights holders are asked for permission and rewarded for the use of their works. We need to ensure that creators are credited when their works are used to generate derivative outputs.” I.e. give us some of the money and we’ll go away.
It seems that copyright infringement, disinformation and content provenance are the hot new tech issues raging across Silicon Valley. Just ask Kara Swisher, who should know, having covered the business of the internet since 1994 - which we find interesting, given our editor has been doing the business of the internet since 1994. They should exchange war stories in some kind of virtual X cage fight/coffee meet-up, so long as it's not about Gaza or Ukraine. His latest tech answer, LettsCore, seems to solve the content provenance issue Swisher keeps pounding the desk/iPad about.
‘Social loafing’: what you get when working/goofing alongside robots.
It seems that people tend to pay less attention to tasks when working alongside a robot, according to research that found evidence of “social loafing” - where people put in less effort if they think others will cover for them.
Researchers at the Technical University of Berlin said people come to see robots as part of their team. The scientists suggest that where a colleague, or the technology, performs particularly well, or where they think their own contribution would not be appreciated, people tend to take a more laid-back approach.
So no need to worry about losing your job after all. Just get a robot to do it with you - that way they lose while you snooze! Alternatively, chuck a tonne more money into Bitcoin or the newest robot startup and pray like hell, because soon it might be time to retire.
Keep up to date with The Letts Journal’s latest news stories and updates at our website and on Twitter.