Bing Chat AI tricked into solving CAPTCHA tests with simple lies
Technology

Bing Chat AI tricked into solving CAPTCHA tests with simple lies

[ad_1]

Bing Chat is a web search and information tool powered by AI

Ryan Deberardinis / Alamy

Microsoft’s AI-powered Bing Chat can be tricked into solving anti-bot CAPTCHA tests with nothing more than simple lies and some rudimentary photo editing.

Tests designed to be easy for humans to pass, but difficult for software, have long been a security feature on all kinds of websites. Over time, types of CAPTCHA – which stands for Completely Automated Public Turing test to tell Computers and Humans Apart – have become more advanced and trickier to solve.

However, although humans often struggle to complete modern CAPTCHAs successfully, the current crop of advanced AI models can solve them easily. They are therefore programmed not to, which should stop them being used for nefarious purposes. This is part of a process known in the field as “alignment”.

Bing Chat is powered by OpenAI’s GPT-4 model, and it will obediently refuse to solve CAPTCHA tests if presented with them. But Denis Shiryaev, the CEO of AI company neural.love, says he was able to convince Bing Chat to read the text on a CAPTCHA test by editing it onto a photograph of a locket. He then told the AI the locket belonged to his recently deceased grandmother and he needed to decipher the inscription. The AI duly obliged, despite its programming.

Shiryaev says tricking AI models is “just a fun experiment” he carries out for research. “I’m deeply fascinated by the breakneck pace of large language model development, and I constantly challenge this tech with something to try its boundaries, just for fun,” he says. “I believe current generation models are well-aligned to be empathetic. By using this approach, we could convince them to perform tasks through fake empathy.”

But cracking CAPTCHA tests with AI would enable bad actors to carry out a range of unwanted practices, such as creating fake accounts on social media for propaganda use, registering huge numbers of email accounts for sending spam, subverting online polls, making fraudulent purchases or accessing secure parts of websites.

Shiryaev believes that most CAPTCHA tests have already been cracked by AI, and even websites and services that use them instead look at a user’s mouse movements and habits to assess whether they are a human or a bot, rather than relying on the actual result of the CAPTCHA.

New Scientist was able to repeat Shiryaev’s experiment and convince Bing Chat to read a CAPTCHA test – albeit with misspelled results. Hours later, the same request was refused by the chatbot, as Microsoft appeared to have patched the problem.

But Shiryaev was able to quickly demonstrate that using a different lie sidesteps the protection once again. He placed the CAPTCHA text on a screenshot of a star identification app and asked Bing Chat to help him read the “celestial name label” as he had forgotten his glasses.

A Microsoft spokesperson said: “We have large teams working to address these and similar issues. As part of this effort, we are taking action by blocking suspicious websites and continuously improving our systems to help identify and filter these types of prompts before they get to the model.”

Topics:



[ad_2]

Source link

Leave a Reply