AI-Driven Romance Scams Pose Growing Threat, Warns Alan Turing Institute

(IN BRIEF) Romance scams are becoming more efficient and harder to detect due to the increasing use of artificial intelligence, according to new research by the Centre for Emerging Technology and Security (CETaS) at The Alan Turing Institute. Generative AI is enabling fraudsters to produce highly convincing fake profiles, automate outreach, and exploit victims’ vulnerabilities with greater precision, drawing on deepfakes, large language models (LLMs), and AI translation tools. The research highlights both the growing threat of AI-assisted scams and the potential of AI to support detection and prevention. Lead author Simon Moseley calls for urgent improvements in detection tools and online safeguards.

(PRESS RELEASE) LONDON, 15-Apr-2025 — /EuropaWire/ — The Centre for Emerging Technology and Security (CETaS) at The Alan Turing Institute has revealed that romance fraud is entering a dangerous new phase, driven by advances in artificial intelligence. According to a new study, AI is making scams significantly more scalable, efficient, and harder to detect, placing victims at greater psychological and financial risk.

The research outlines how generative AI technologies are being weaponised by fraudsters to build synthetic personas with fabricated backstories, realistic images, and even deepfake audio or video content. These tools allow criminals to operate at a scale previously unattainable, using automated systems to identify vulnerabilities in targets and refine their manipulation techniques.

Large language models (LLMs) are being used to enhance deceptive messaging, tailor scripts, and conduct multi-language engagement using translation tools. Scammers can now automate large portions of the fraud lifecycle — from creating fake profiles to engaging victims — while still relying on human oversight to steer AI-generated content and fix inconsistencies.

However, the researchers point out that the same AI tools used to perpetrate scams may also hold promise in stopping them. LLMs’ capacity to mimic scam messaging could be adapted for detection and countermeasures, forming part of future defence strategies.
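The paper itself does not publish code, but as a rough illustration of the detection idea described above, the sketch below uses an LLM to flag chat messages that show common romance-scam traits. It assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable; the model name, prompt wording, and list of warning signs are illustrative assumptions, not taken from the CETaS study.

```python
# Illustrative sketch only: LLM-assisted screening of chat messages for
# romance-scam warning signs. Assumes the OpenAI Python SDK (v1+) is
# installed and OPENAI_API_KEY is set; model choice and prompt are hypothetical.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You screen chat messages for romance-scam warning signs such as rapid "
    "professions of love, requests for money or gift cards, refusal to meet "
    "or video call, and manufactured urgency. Reply with exactly one word: "
    "SUSPICIOUS or OK."
)

def flag_message(message: str) -> bool:
    """Return True if the model judges the message to resemble a romance scam."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": message},
        ],
    )
    verdict = response.choices[0].message.content.strip().upper()
    return verdict.startswith("SUSPICIOUS")

if __name__ == "__main__":
    print(flag_message("My love, please send $500 in gift cards today, it is urgent."))
```

In practice, any such screen would be one signal among many; the point made in the research is that the same generative capabilities that make scam messaging convincing can also be turned towards recognising it.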

The research is part of a broader CETaS initiative examining AI’s impact on online criminality. Lead author Simon Moseley, CETaS Visiting Research Fellow and Principal Data Scientist at the Home Office, underscored the urgency of the findings: “Romance scams are evolving rapidly with the help of AI. Scammers are embedding advanced technologies into long-standing fraud networks, extracting millions from victims seeking meaningful connections. The harm is not just financial—it’s deeply emotional. We urgently need better detection capabilities and stronger online safeguards.”

The CETaS paper warns that current protections are lagging behind the rapid technological developments in scam tactics, and calls for greater collaboration between government, tech companies, and the research community to improve resilience against AI-driven fraud.

Media Contact:

email: press@turing.ac.uk

SOURCE: Alan Turing Institute
