The U.K. government has shelved £1.3 billion worth of funding that had been earmarked for AI and tech innovation. This includes £800 million for the creation of an exascale supercomputer at the University of Edinburgh and £500 million for the AI Research Resource (AIRR) — another supercomputing facility comprising Isambard at the University of Bristol and Dawn at the University of Cambridge.

The funding was originally announced by the then-Conservative government as part of November’s Autumn Statement. However, on Friday, a spokesperson for the Department for Science, Innovation and Technology told the BBC that the Labour government, which came to power in early July, was redistributing the funding.

The department claimed that the money was promised by the Conservative administration but was never allocated in its budget. In a statement, a spokesperson said, “The government is taking difficult and necessary spending decisions across all departments in the face of billions of pounds of unfunded commitments. This is essential to restore economic stability and deliver our national mission for growth.

“We have launched the AI Opportunities Action Plan which will identify how we can bolster our compute infrastructure to better suit our needs and consider how AI and other emerging technologies can best support our new Industrial Strategy.”

A £300 million grant for the AIRR has already been committed and will continue as planned. Part of this has already gone into the first phase of the Dawn supercomputer; however, the second phase, which would increase its speed tenfold, is now at risk, according to The Register. The BBC said that the University of Edinburgh had already spent £31 million on a facility to house its exascale project, which was considered a priority by the last government.

“We are absolutely committed to building technology infrastructure that delivers growth and opportunity for people across the U.K.,” the DSIT spokesperson added.

The AIRR and exascale supercomputers were intended to allow researchers to analyse advanced AI models for safety and drive breakthroughs in areas like drug discovery, climate modelling, and clean energy. According to The Guardian, the principal and vice-chancellor of the University of Edinburgh, Professor Sir Peter Mathieson, is urgently seeking a meeting with the tech secretary to discuss the future of exascale.

Scrapping the funding goes against commitments made in the government’s AI Action Plan

The shelving of the funds appears to contradict Secretary of State for Science, Innovation and Technology Peter Kyle’s statement of July 26, in which he said he was “putting AI at the heart of the government’s agenda to boost growth and improve our public services.”

He made the claim as part of the announcement of the new AI Action Plan, which, once developed, will lay out how best to build out the country’s AI sector.

Next month, Matt Clifford, one of the principal organisers of November’s AI Safety Summit, will publish his recommendations on how to accelerate the development and drive the adoption of useful AI products and services. An AI Opportunities Unit will also be established, consisting of experts who will implement the recommendations.

The government’s announcement deems infrastructure one of the Action Plan’s “key enablers.” Given the necessary funding, the exascale and AIRR supercomputers would provide the immense processing power required to handle complex AI models, speeding up AI research and application development.


AI Bill will have a narrow focus for continued innovation, despite funding changes

While the U.K.’s Labour government has pulled investment in supercomputers, it has taken some steps toward supporting AI innovation.

On July 31, Kyle told executives at Google, Microsoft, Apple, Meta, and other major tech players that the AI Bill will focus on the large ChatGPT-style foundation models created by just a handful of companies, according to the Financial Times.

He reassured the tech giants that it would not become a “Christmas tree bill,” with additional regulations added as it moves through the legislative process. Limiting AI innovation in the U.K. could have a significant economic impact: a Microsoft report found that adding five years to the time it takes to roll out AI could cost the country over £150 billion, while, according to the IMF, the AI Action Plan could deliver annual productivity gains of 1.5%.

According to the FT’s sources, Kyle confirmed that the AI Bill will focus on two things: making voluntary agreements between companies and the government legally binding and turning the AI Safety Institute into an arm’s length government body.

AI Bill focus 1: Making voluntary agreements between the government and Big Tech legally binding

During the AI Safety Summit, representatives from 28 countries signed the Bletchley Declaration, which committed them to jointly manage and mitigate risks from AI while ensuring safe and responsible development and deployment.

Eight companies involved in AI development, including ChatGPT creator OpenAI, voluntarily agreed to work with the signatories and allow them to evaluate their latest models before release so that the declaration can be upheld. These companies also voluntarily agreed to the Frontier AI Safety Commitments at May’s AI Seoul Summit, which include halting the development of AI systems that pose severe, unmitigated risks.

According to the FT, U.K. government officials want to make these agreements legally binding so that companies cannot back out if they lose commercial viability.

AI Bill focus 2: Turning the AI Safety Institute into an arm’s length government body

The U.K.’s AISI was launched at the AI Safety Summit with the three primary goals of evaluating existing AI systems for risks and vulnerabilities, performing foundational AI safety research, and sharing information with other national and international actors.

A government official said that making the AISI an arm’s length body would reassure companies that it does not have the government “breathing down its neck” while strengthening its position, according to the FT.

U.K. government’s stance on AI regulation vs. innovation remains unclear

The Labour government has shown evidence of both limiting and supporting the development of AI in the U.K.

Along with the redistribution of AI funds, it has suggested that it will take a firm hand in regulating AI developers. July’s King’s Speech announced that the government will “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.”

This supports Labour’s pre-election manifesto, which pledged to introduce “binding regulation on the handful of companies developing the most powerful AI models.” After the speech, Prime Minister Keir Starmer also told the House of Commons that his government “will harness the power of artificial intelligence as we look to strengthen safety frameworks.”

On the other hand, the government has promised tech companies that the AI Bill will not be overly restrictive, and it has seemingly held fire on its introduction. The bill had been expected to be among the named pieces of legislation announced as part of the King’s Speech.
