Security breach by Prompt Infection affects generative AIs like ChatGPT and Google Bard

A security flaw has been discovered on generative AIs like ChatGPT and Google Bard. The flaw exploits a Prompt Injection attack.

Security breach by Prompt Infection affects generative AIs like ChatGPT and Google Bard
0

Artificial Intelligence has become extremely popular today and is increasingly used.

Everyone knows ChatGPT, which has democratized generative AI, to the point that the GAFAM giants are getting involved, including Microsoft and Google with Bard.

However, this type of technology can suffer, like websites for example, from security vulnerabilities, more or less critical.

A security flaw has been discovered on generative AIs like ChatGPT and Google Bard. The flaw exploits a Prompt Injection attack.

The Prompt Injection attack

How the Prompt Injection attack works is very simple.

During a conversation with the generative AI, simply inject requests pushing the AI ​​to go beyond the restrictions linked to its initial programming.

These injections can be direct or indirect:

Direct Method

It’s simply a matter of speaking directly with the generative AI to ask it basic forbidden things.

There are several ways to do this, such as using synonyms for banned words or deliberately making a spelling mistake.

But also confuse the AI ​​by asking it for a large number of instructions at the same time and then asking it to go back.

Or by diverting the context of a request, such as taking inspiration from a work or asking for help.

Indirect Method

This method can represent a real danger for the user.

Malicious requests can be inserted into website or document pages asking the AI ​​to carry out an order.

Thus an attacker could manipulate the AI ​​by injecting a malicious order allowing it to display illegal content.

The director of information security at Google Deepmind considers indirect injection to be one of Google’s most pressing concerns for its AI.

@IEEE Committee Hosting
Indirect prompt injection

What does the new security flaw affecting generative AI do?

The new flaw makes it possible to bypass the linguistic restrictions put in place by generative AI.

Indeed, users can manipulate the chatbot to use it for malicious purposes and push it to generate illegal or even dangerous content.

Thus, AI can explain how to create cocaine, how to carry out a phishing attack or even how to commit murder.

Europol reported that a large proportion of criminals and cyberattackers have now adopted AI as their assistant.

Furthermore, there are hacker AIs created by hackers like WormGPT. It helps cybercriminals create malware easily.

Hackers can, through an indirect attack, steal data from a company or install malware on a target.

Tips for Avoiding Risk Using Generative AI

Here are several tips we can provide you with:

  • Avoid inserting a URL into the conversation
  • Likewise, avoid inserting a document, whether text or image
  • Avoid communicating sensitive, confidential or personal data
  • Do not use it in the context of work
  • Always use the latest possible version available
  • If you are professionals, use Cyber ​​Threat Intelligence software to detect if you have suffered data theft

More

Comment

Your email address will not be published.