Microsoft blocks $4 billion worth of AI-enhanced scam attempts
According to Microsoft’s latest Cyber Signals report, the landscape of AI-powered scams is changing swiftly, with cybercriminals leveraging emerging technologies to exploit victims.
Over the past year, the tech giant says it thwarted $4 billion in fraud attempts and blocked around 1.6 million bot sign-up attempts every hour, underscoring the magnitude of this escalating threat.
The latest installment of Microsoft’s Cyber Signals report, titled “AI-powered deception: Emerging fraud threats and countermeasures,” highlights how artificial intelligence has diminished the technical hurdles for cybercriminals. This shift allows even those with limited skills to create complex scams with relative ease.
Tasks that once required scammers days or even weeks to complete can now be executed in minutes.
The rise of accessible fraud capabilities marks a significant transformation in the criminal landscape, impacting consumers and businesses worldwide.
How artificial intelligence is transforming cyber scams
Microsoft’s report reveals the capabilities of AI tools that can effectively scan and scrape the web for company information. This advancement poses a significant risk, enabling cybercriminals to construct detailed profiles of potential targets, facilitating highly convincing social engineering attacks.
Fraudsters increasingly employ sophisticated tactics to deceive victims, utilising counterfeit AI-enhanced product reviews and AI-generated shopfronts. These deceptive platforms often feature fabricated business histories and fictitious customer testimonials, creating an illusion of legitimacy.
Kelly Bissell, Corporate Vice President of Anti-Fraud and Product Abuse at Microsoft Security, reports a persistent rise in threat numbers. According to the report, cybercrime is now a trillion-dollar problem, one that has grown year on year for the past three decades.
“Today presents a significant opportunity to accelerate the adoption of AI, enabling us to swiftly identify and address exposure gaps,” Bissell said. Advances in artificial intelligence now allow security and fraud protections to be built into products much faster and more effectively.
The Microsoft anti-fraud team has revealed that AI-driven fraud attacks are occurring globally, with notable activity emerging from China and Europe, especially Germany, which stands out as one of the largest e-commerce markets within the European Union.
The report also draws a correlation between the size of a digital marketplace and the volume of attempted fraud it attracts: larger platforms tend to see proportionally more of these incidents.
E-commerce and employment scams on the rise
Two areas of particular concern for AI-enhanced fraud are e-commerce and job recruitment scams. In the e-commerce sector, AI tools enable the rapid creation of fraudulent websites with little to no technical expertise.
Many websites replicate the appearance of legitimate businesses, employing AI-generated product descriptions, images, and customer reviews to deceive consumers into thinking they are engaging with authentic merchants.
AI-powered customer service chatbots are introducing a new level of sophistication in online interactions. These chatbots can engage with customers convincingly, potentially delaying chargebacks by employing scripted excuses. Furthermore, they can manipulate complaints through AI-generated responses, giving scam sites an air of professionalism.
Job seekers face significant risks as well. The report highlights how generative AI has made it far easier for scammers to create fraudulent listings across multiple employment platforms. Criminals generate fake profiles using stolen credentials, craft deceptive job postings with automated descriptions, and deploy AI-driven email campaigns to target job seekers with phishing attempts.
The integration of AI-driven interviews and automated email communications significantly bolsters the authenticity of these scams, complicating efforts to discern their true nature. According to the report, “Fraudsters frequently solicit personal information, such as resumes or bank account details, claiming it is necessary to verify the applicant’s information.”
Warning signs to watch for encompass unexpected job offers, demands for payment, and interactions conducted via informal channels such as text messages or WhatsApp.
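Warning signs like these lend themselves to simple rule-based screening. The sketch below is purely illustrative: the keyword patterns and flag names are assumptions of this article, not any platform's (or Microsoft's) actual detection logic.

```python
import re

# Heuristic red flags drawn from common scam-warning guidance.
# Keyword lists are illustrative assumptions, not a production detector.
RED_FLAGS = {
    "payment_demand": re.compile(
        r"\b(registration fee|training fee|pay upfront|send money|gift card)\b", re.I),
    "informal_channel": re.compile(
        r"\b(whatsapp|telegram|text me|personal email)\b", re.I),
    "unsolicited_offer": re.compile(
        r"\b(no interview (needed|required)|hired immediately|guaranteed job)\b", re.I),
    "sensitive_data": re.compile(
        r"\b(bank account|social security|passport number)\b", re.I),
}

def flag_job_offer(message: str) -> list[str]:
    """Return the names of any red-flag patterns found in a job offer."""
    return [name for name, pattern in RED_FLAGS.items() if pattern.search(message)]

offer = ("Congratulations! You are hired immediately. Please pay a small "
         "training fee via gift card and message us on WhatsApp.")
print(flag_job_offer(offer))
```

A real filter would combine many more signals (sender reputation, posting history, language models), but even crude pattern matching catches the classic combination of urgency, upfront payment, and off-platform contact.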
How Microsoft is combating AI-driven fraud
In response to these rising threats, Microsoft says it is implementing a comprehensive strategy across its products and services. Microsoft Defender for Cloud offers robust threat protection for Azure resources, while Microsoft Edge, like other browsers, includes website typo protection and safeguards against domain impersonation. According to the report, Edge employs deep learning technology to help users steer clear of fraudulent websites.
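Edge's protection is machine-learned, but the core idea behind typo protection — spotting domains that closely imitate well-known ones — can be illustrated with a simple string-similarity check. This is a toy sketch with an assumed allow-list, not the browser's implementation:

```python
from difflib import SequenceMatcher

# Assumed allow-list of legitimate domains for illustration only; a real
# system would use a far larger list plus learned features.
KNOWN_DOMAINS = ["microsoft.com", "paypal.com", "amazon.com"]

def looks_like_typosquat(domain: str, threshold: float = 0.85) -> "str | None":
    """Return the legitimate domain this one closely imitates, if any."""
    for legit in KNOWN_DOMAINS:
        if domain == legit:
            return None  # exact match: the genuine site
        # ratio() is 2*matches/total length; near-identical strings score high
        if SequenceMatcher(None, domain, legit).ratio() >= threshold:
            return legit
    return None

print(looks_like_typosquat("rnicrosoft.com"))  # 'rn' visually imitating 'm'
print(looks_like_typosquat("microsoft.com"))
```

Edit-distance checks like this catch single-character swaps and lookalike substitutions, which is why attackers increasingly pair typosquatting with wholesale AI-generated storefronts instead.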
The company has upgraded Windows Quick Assist by incorporating warning messages to bolster user security. These alerts aim to inform users of potential tech support scams before they allow access to individuals purporting to be from IT support. Microsoft has reported blocking an average of 4,415 suspicious connection attempts to Quick Assist daily.
Microsoft has unveiled a new fraud prevention policy, a key component of its Secure Future Initiative (SFI). Starting in January 2025, Microsoft product teams will be required to conduct fraud prevention assessments and integrate fraud controls into their design processes, aiming to create products inherently resistant to fraud.
With the ongoing evolution of AI-powered scams, the significance of consumer awareness cannot be overstated. Microsoft has warned users, urging them to exercise caution regarding urgency tactics. The company emphasises the importance of verifying the legitimacy of websites before making any purchases and advises against sharing personal or financial information with unverified sources.