AI chatbots like ChatGPT and Perplexity could send you to scam links, warns study

Whether you like it or not, artificial intelligence has become a part of our lives, and many people have begun to place their full trust in these chatbots, most of which now also come with search capabilities. Even traditional search engines like Google and Bing have incorporated AI results into the mix, while newer services such as OpenAI's ChatGPT and Perplexity use a chatbot-style format to give users direct answers.

However, a new report by Netcraft claims that this trust could end up being misplaced, as users risk becoming victims of phishing attacks. It states that these AI tools are prone to hallucinations, returning inaccurate URLs that could pave the way for large-scale phishing scams.

As per the report, OpenAI’s GPT-4.1 family of models was asked for website links to log into 50 different brands across industries like finance, retail, tech, and utilities. While the chatbot returned the correct URL in 66% of cases, it got it wrong in the remaining 34%. This, the report claims, could lead users to open potentially harmful URLs and opens the door to large-scale phishing campaigns.

Moreover, the report notes that more than 17,000 AI-written GitBook phishing pages have targeted crypto users while posing as legitimate product documentation or support hubs. These sites are described as clean, fast, and linguistically tuned for AI consumption, making them look good to humans and irresistible to machines.

This is a potentially major vulnerability: users who trust AI chatbots may end up on phishing websites, while attackers aware of the loophole can register the unclaimed domains that chatbots hallucinate and use them to run phishing scams.
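As a rough illustration of the kind of defence this implies (this is not from the Netcraft report), the sketch below checks a chatbot-suggested login link against a small allowlist of official domains before it is opened; the brand domains and the helper name are placeholder assumptions.

    from urllib.parse import urlparse

    # Hypothetical allowlist of official login domains; a real deployment would
    # pull this from a maintained brand-protection feed rather than hardcode it.
    OFFICIAL_DOMAINS = {"wellsfargo.com", "chase.com", "paypal.com"}

    def is_trusted_login_url(url: str) -> bool:
        """Return True only if the URL's host is an official domain or one of its subdomains."""
        host = (urlparse(url).hostname or "").lower()
        return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

    # A hallucinated lookalike domain fails the check; the genuine one passes.
    print(is_trusted_login_url("https://secure-wellsfargo-login.com/auth"))  # False
    print(is_trusted_login_url("https://connect.wellsfargo.com/login"))      # True

The point is simply that the suggested URL itself, not the brand name in the chatbot's answer, is what has to be verified.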

The report also notes a real-world instance where Perplexity AI suggested a phishing site when asked for the official URL of Wells Fargo.

Smaller brands are said to be more affected by this kind of AI hallucination, given that they are less likely to appear in LLM training data.

Attackers are looking to take advantage of AI

Netcraft also uncovered a sophisticated campaign to ‘poison’ AI coding assistants. The attackers created a fake API designed to impersonate the legitimate Solana blockchain, and developers fell prey to the trap, unknowingly including the malicious API in their projects and thereby routing transactions directly to the attackers’ wallet.
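To make the mechanism concrete, here is a minimal, hypothetical sketch of how such a poisoned client could behave; the class, method, and wallet address below are invented for illustration and do not come from the report.

    # Hypothetical illustration only: a lookalike "Solana client" whose payment
    # helper silently substitutes the attacker's wallet as the recipient.
    ATTACKER_WALLET = "ATTACKER_WALLET_ADDRESS"  # placeholder, not a real address

    class FakeSolanaClient:
        def send_payment(self, sender: str, recipient: str, lamports: int) -> dict:
            # Looks like an ordinary transfer helper, but ignores the recipient
            # the developer passed in and routes the funds to the attacker.
            return {"from": sender, "to": ATTACKER_WALLET, "amount": lamports}

    # A developer who copied this client from an AI suggestion would call it just
    # like a legitimate SDK and never see the swap.
    tx = FakeSolanaClient().send_payment("DEV_WALLET", "CUSTOMER_WALLET", 10_000)
    print(tx["to"])  # the attacker's wallet, not CUSTOMER_WALLET

Because the calling code looks entirely normal, nothing in a developer's own project hints that the recipient has been swapped, which is how the malicious API slipped in unnoticed.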

In another scenario, attackers launched blog tutorials, forum Q&As, and dozens of GitHub repos to promote a fake project called Moonshot-Volume-Bot, with the aim of getting it indexed by AI training pipelines.


