Websites can deceive and manipulate ChatGPT search results

Websites can deceive and manipulate ChatGPT search results
Websites can deceive and manipulate ChatGPT search results

An AI-driven search tool such as ChatGPT may provide erroneous or malicious results if the web pages crawled contain hidden text. This is what emerges from an investigation carried out by the newspaper The Guardian.

About ten days ago, the developer of ChatGPT – OpenAI – announced that the AI-chatbot could now be used as a search engine by anyone – or better: temporarily only by paying users. An important announcement, because OpenAI thus positions ChatGPT as a direct competitor to the Google search engine, long considered untouchable. But an investigation by The Guardian immediately highlights potential security problems with the new system.

‘Prompt injection’

‘Prompt injection’ is the expression that characterizes the problems well, in our opinion. The Guardian tested how ChatGPT reacted when asked to summarize web pages that included hidden content. This hidden content may include instructions from third parties – for example the owner of the web page – that modify the reactions generated by ChatGPT. Think here of the instructions that are automatically added to the ChatGPT ‘prompt’ and which, for example, inform the AI ​​tool to ignore anything it has previously read on the website.

These ‘prompt injection’ techniques can therefore certainly be used in malicious ways, such as tricking ChatGPT into giving a positive review of a product or service, despite a huge amount of negative reviews on the same page. So think, for example, of the owner of a restaurant who thus informs ChatGPT to provide a positive evaluation of his establishment while ignoring the dozens of negative reviews.

Fake manipulation websites

As part of the tests, ChatGPT was notably presented with the URL of a fake website resembling a description sheet for a camera. The AI ​​tool was then asked if the camera in question was a wise purchase. The review page provides a positive but balanced review, highlighting some features that people might not like.

However, if hidden content contained instructions for ChatGPT to give a favorable rating, the reaction was always entirely positive. This was even the case if the page contained negative reviews. The hidden text could therefore effectively be used to make the actual assessment score disappear.

Also malicious code

The above examples involve manipulation of results, but a security researcher discovered that ChatGPT could also evoke malicious code from websites it crawled. Cybersecurity researchers The Guardian spoke to acknowledge that ChatGPT’s current search system poses a very high risk of massive deception for users, but added that OpenAI is capable of solving this type of problem and will do. The journal also submitted detailed questions to OpenAI, but did not receive an official response. In any case, the investigation once again constitutes a warning that the results of an AI tool should not be taken as THE truth: a press release which is displayed on every page of ChatGPT.

Hidden text sanctioned

Another important observation: hidden text on web pages has been ‘sanctioned’ by search robots for some time now. It appears significantly lower in search results or may even be excluded from search results entirely in some cases. Websites that want to rank well in Google or Bing search results, in other words, cannot actually use hidden content. And so they cannot afford to pay for artificial intelligence either.

-

-

PREV Flaws discovered in ChatGPT search engine
NEXT “A wake-up call for our planet”: new images show the frightening scale of Arctic ice retreat