
AI news and recap: Drone ‘kills’ operator; DeepMind speeds up computers; wind turbine boost


A US Air Force Reaper drone

APFootage/Alamy

Reports of an AI drone “killing” its operator amounted to nothing

This month we heard about a fascinating AI experiment from a US Air Force colonel. An AI-controlled drone trained to carry out bombing missions autonomously had turned on its human operator when told not to attack targets; its programming prioritised completing missions, so it treated human intervention as an obstacle and decided to forcibly remove it.

The only problem with the story was that it was nonsense. Firstly, as the colonel told it, the test was a simulation. Secondly, a US Air Force statement was hastily issued to clarify that the colonel, speaking at a UK conference, had “mis-spoke” and that no such tests had been carried out.

New Scientist asked why people are so quick to believe AI horror stories, with one expert saying it was partly down to our innate attraction to “horror stories that we like to whisper around the campfire”.

The problem with this kind of misconstrued story is that it is so compelling. The “news” was published around the world before any facts could be checked, and few of those publications had any interest in later setting the record straight. AI presents genuine dangers to society in many ways, and we need informed debate to explore and prevent them, not sensationalism.

AI can optimise computer code

DeepMind

DeepMind AI speeds up algorithm that could have global impact on computer power

AI has brought surprise after surprise in recent years, showing itself capable of spitting out an essay on any given topic, creating photorealistic images from scratch and even writing functional source code. So you would be forgiven for not getting too excited about news of a DeepMind AI slightly improving a sorting algorithm.

But dig deeper and the work is interesting and has solid real-world applications. Sorting algorithms are run trillions of times around the world and are so commonly used in all kinds of software that they are written into libraries that coders can call on as and when needed to avoid having to reinvent the wheel. These filed-away algorithms had been refined and tweaked by humans for so long that they were considered complete and as efficient as possible.

This month, DeepMind’s AI found an improvement that can speed up sorting by as much as 70 per cent for short sequences. Any improvement that can be rolled out to every computer, smartphone or anything with a computer chip can bring huge savings in energy use and computation time. How many more commonly used algorithms can AI find efficiency gains in? Time will tell.
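The reported gains come from improving the small, fixed-length routines that sorting libraries fall back on for very short inputs, with the winning tweaks reportedly found at the level of individual processor instructions. As a purely illustrative sketch (in Python, not the low-level code the AI actually worked on), such a routine sorts a handful of values with a fixed pattern of compare-and-swap steps rather than a general-purpose loop:

```python
def sort3(a, b, c):
    """Sort three values with a fixed sequence of compare-and-swap steps.

    An illustrative sketch of the kind of short, predictable routine a
    sorting library dispatches to for tiny inputs; not DeepMind's code.
    """
    if a > b:
        a, b = b, a   # now a <= b
    if b > c:
        b, c = c, b   # now c holds the largest of the three
    if a > b:
        a, b = b, a   # re-check the first pair after the second swap
    return a, b, c


print(sort3(3, 1, 2))  # -> (1, 2, 3)
```

Because routines like this run astronomically often, shaving even a single step from them adds up.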

Wind power could be turbocharged by AI

Mimadeo/Shutterstock

AI could boost output of all wind turbines around the world

While DeepMind is searching for efficiency gains in source code, others are using AI to find them in machines. Wind turbines work best when directly facing oncoming wind, but the breeze obstinately keeps changing direction. Currently turbines use a variety of techniques to maintain efficiency, but it seems that AI may be able to do a slightly better job.

Researchers trained an AI on real-world data about wind direction and found that it could come up with a strategy that raised efficiency by keeping the turbine facing the right way more of the time. This involved more rotating, which used more energy, but even taking that into account they were able to squeeze 0.3 per cent more power from the turbines.
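The article doesn’t spell out how the trained system works, but the trade-off it has to manage is easy to sketch. The toy Python below weighs the power lost to a yaw misalignment (using the common cos³ approximation) against the energy the yaw motors would burn turning back into the wind; every number in it is a placeholder assumption for illustration, not a figure from the study.

```python
import math

# Placeholder figures for illustration only; not values from the study.
RATED_POWER_W = 2.5e6        # assumed turbine output when perfectly aligned
YAW_COST_J_PER_DEG = 1.5e4   # assumed energy the yaw motors use per degree turned

def net_gain_j(yaw_error_deg, expected_duration_s):
    """Estimate the net energy (in joules) gained by realigning with the wind.

    Power lost to misalignment is modelled with the common cos^3(error)
    approximation; the gain is what realignment recovers over the time the
    new wind direction is expected to hold, minus the cost of turning.
    """
    misaligned = RATED_POWER_W * math.cos(math.radians(yaw_error_deg)) ** 3
    recovered = (RATED_POWER_W - misaligned) * expected_duration_s
    spent = YAW_COST_J_PER_DEG * abs(yaw_error_deg)
    return recovered - spent

def should_yaw(yaw_error_deg, expected_duration_s):
    """Turn only when the expected recovery outweighs the cost of rotating."""
    return net_gain_j(yaw_error_deg, expected_duration_s) > 0

print(should_yaw(8, 600))  # persistent 8-degree shift: True, worth turning
print(should_yaw(2, 5))    # brief 2-degree wobble: False, not worth it
```

Presumably the trained model’s edge lies in judging how long a shift in wind direction will last, which is exactly where a hand-written rule like this falls short; getting those calls right more often is what yielded the extra 0.3 per cent.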

This figure may not make for a great headline, but it’s enough to boost electricity production by 5 terawatt-hours a year – about the same amount as is consumed annually by Albania, or 1.7 million average UK homes – if rolled out to every turbine around the world.
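That equivalence is easy to sanity-check. Assuming worldwide wind generation of roughly 1,800 terawatt-hours a year and average UK household electricity use of about 2,900 kilowatt-hours (ballpark assumptions of mine, not numbers from the article), a 0.3 per cent gain lands close to the quoted totals:

```python
# Rough sanity check of the 5 TWh figure. Both inputs are ballpark
# assumptions, not values taken from the article or the study.
GLOBAL_WIND_TWH_PER_YEAR = 1800   # assumed worldwide wind generation
UK_HOME_KWH_PER_YEAR = 2900       # assumed average UK household usage

extra_twh = GLOBAL_WIND_TWH_PER_YEAR * 0.003     # the reported 0.3 per cent gain
homes = extra_twh * 1e9 / UK_HOME_KWH_PER_YEAR   # 1 TWh = 1e9 kWh

print(f"{extra_twh:.1f} TWh per year extra")         # ~5.4 TWh
print(f"{homes / 1e6:.1f} million UK homes' worth")  # ~1.9 million
```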


A surprising way to defeat ChatGPT

Ievgen Chabanov/Alamy

Capital letter test is a foolproof way of sorting AIs from humans

The Turing test is a famous way of assessing the intelligence of a machine: can a human conversing through a text interface tell whether they are speaking to another human or an AI? Well, large language models like ChatGPT are now pretty adept at holding realistic conversations, so perhaps we need a new test.

In recent years we have seen a suite of 204 tests proposed as a kind of new Turing test, covering subjects such as mathematics, linguistics and chess. But a much simpler method has just been published in a paper: superfluous upper-case letters and words are added to otherwise sensible statements in an attempt to trip up AI.

Give a human a phrase such as “isCURIOSITY waterARCANE wetTURBULENT orILLUSION drySAUNA?” and they are likely to notice that the lower case letters alone form a logical sentence. But an AI reading the same input would be flummoxed, researchers showed. Five large language models, including OpenAI’s GPT-3 and ChatGPT, and Meta’s LLaMA, failed the test.
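The trick is trivial for a person to undo: strip out the injected capitals and the real question reappears. Here is a minimal sketch of that filtering step (my own illustration, not code from the paper):

```python
def strip_injected_capitals(text: str) -> str:
    """Drop the upper-case characters to recover the hidden lower-case question.

    A sketch of the filtering a human does at a glance; the paper's actual
    prompts and scoring may be constructed differently.
    """
    return "".join(ch for ch in text if not ch.isupper())

print(strip_injected_capitals("isCURIOSITY waterARCANE wetTURBULENT orILLUSION drySAUNA?"))
# -> "is water wet or dry?"
```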

But other experts point out that now the test exists, AI can be trained to understand it, and so will pass in the future. Distinguishing AI from humans could become a cat-and-mouse game with no end.

Could the European Union set the future course for AI?

iStockphoto

What is the future of AI? Google and the EU have very different ideas

Regulators and tech companies don’t seem to be pulling in the same direction on AI. While some industry players have called for a halt to research until the dangers are better understood, most legislators are pushing safeguarding rules so the technology can progress safely – and plenty of tech firms are ploughing ahead at full speed with commercial AI releases.

Politicians in the EU have agreed an updated version of the bloc’s AI Act, which has been years in the making – the president of the European Commission, Ursula von der Leyen, promised to urgently bring in AI legislation when she was elected in 2019. The laws will now require companies to disclose any copyrighted content used to train generative AI such as ChatGPT.

On the other hand, companies like Google and Microsoft are pressing on with rolling out AI across many of their products, worried about being left behind in a revolution that could rival the birth of the internet.

While technology has always outpaced legislation, leaving society struggling to ensure harms are minimised, AI really is moving at a surprising pace. And the results of its commercial roll-out could be costly: Google has already seen its AI give unreliable answers even in examples cherry-picked for its own advertising. The potential benefits of AI are undisputed, but the trick will be to make sure they outweigh the harms.
