Cyber Signals Issue 9 | AI-powered deception: Emerging fraud threats and countermeasures


Microsoft maintains a continuous effort to protect its platforms and customers from fraud and abuse. From blocking imposters on Microsoft Azure and adding anti-scam features to Microsoft Edge, to fighting tech support fraud with new features in Windows Quick Assist, this edition of Cyber Signals takes you inside the work underway and the milestones achieved in protecting customers.

We are all defenders. 


Between April 2024 and April 2025, Microsoft:

  • Thwarted $4 billion in fraud attempts.
  • Rejected 49,000 fraudulent partnership enrollments.
  • Blocked about 1.6 million bot signup attempts per hour.

The evolution of AI-enhanced cyber scams

AI has started to lower the technical bar for fraud and cybercrime actors looking for their own productivity tools, making it easier and cheaper to generate believable content for cyberattacks at an increasingly rapid rate. AI software used in fraud attempts runs the gamut, from legitimate apps misused for malicious purposes to more fraud-oriented tools used by bad actors in the cybercrime underground.

AI tools can scan and scrape the web for company information, helping cyberattackers build detailed profiles of employees or other targets to create highly convincing social engineering lures. In some cases, bad actors are luring victims into increasingly complex fraud schemes using fake AI-enhanced product reviews and AI-generated storefronts, where scammers create entire websites and e-commerce brands, complete with fake business histories and customer testimonials. By using deepfakes, voice cloning, phishing emails, and authentic-looking fake websites, threat actors seek to appear legitimate at wider scale.

According to the Microsoft Anti-Fraud Team, AI-powered fraud attacks are happening globally, with much of the activity originating in China and Europe, particularly Germany, due in part to Germany’s status as one of the largest e-commerce and online services markets in the European Union (EU). The larger a region’s digital marketplace, the more attempted fraud it is likely to attract.

E-commerce fraud


Fraudulent e-commerce websites can be set up in minutes using AI and other tools requiring minimal technical knowledge. Previously, it would take threat actors days or weeks to stand up convincing websites. These fraudulent websites often mimic legitimate sites, making it challenging for consumers to identify them as fake. 

Scammers use AI-generated product descriptions, images, and customer reviews to dupe customers into believing they are interacting with a genuine merchant, exploiting consumer trust in familiar brands.

AI-powered customer service chatbots add another layer of deception by convincingly interacting with customers. These bots can delay chargebacks by stalling customers with scripted excuses and manipulating complaints with AI-generated responses that make scam sites appear professional.

In a multipronged approach, Microsoft has implemented robust defenses across our products and services to protect customers from AI-powered fraud. Microsoft Defender for Cloud provides comprehensive threat protection for Azure resources, including vulnerability assessments and threat detection for virtual machines, container images, and endpoints.

Microsoft Edge features website typo protection and domain impersonation protection using deep learning technology to help users avoid fraudulent websites. Edge has also implemented a machine learning-based Scareware Blocker to identify and block potential scam pages and deceptive pop-up screens with alarming warnings claiming a computer has been compromised. These attacks try to frighten users into calling fraudulent support numbers or downloading harmful software.

Job and employment fraud


The rapid advancement of generative AI has made it easier for scammers to create fake listings on various job platforms. They generate fake profiles with stolen credentials, fake job postings with auto-generated descriptions, and AI-powered email campaigns to phish job seekers. AI-powered interviews and automated emails enhance the credibility of job scams, making it harder for job seekers to identify fraudulent offers.

To prevent this, job platforms should introduce multifactor authentication for employer accounts to make it harder for bad actors to take over legitimate hirers’ listings and use available fraud-detection technologies to catch suspicious content.

Fraudsters often ask for personal information, such as resumes or even bank account details, under the guise of verifying the applicant’s information. Unsolicited text and email messages offering employment opportunities that promise high pay for minimal qualifications are typical indicators of fraud.

Employment offers that include requests for payment, offers that seem too good to be true, unsolicited offers or interview requests over text message, and a lack of formal communication platforms can all be indicators of fraud.
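
To make these indicators concrete, here is a minimal sketch of a rule-based red-flag scorer a job platform or mail filter might run over a listing. The patterns, weights, and field names are illustrative assumptions for this sketch, not Microsoft’s or any platform’s actual detection logic.

```python
import re

# Illustrative red flags drawn from the indicators above; the patterns
# and weights are assumptions for this sketch, not production rules.
FREE_MAIL_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com", "hotmail.com"}
RED_FLAG_PATTERNS = {
    r"\b(registration|training|background check) fee\b": 3,  # requests for payment
    r"\bno experience (required|needed)\b": 1,               # too good to be true
    r"\b(whatsapp|telegram|text me)\b": 2,                   # informal channels
    r"\bguaranteed (income|salary)\b": 2,
}

def score_listing(text: str, contact_email: str) -> int:
    """Return a crude fraud-risk score for a job listing."""
    score = 0
    lowered = text.lower()
    for pattern, weight in RED_FLAG_PATTERNS.items():
        if re.search(pattern, lowered):
            score += weight
    domain = contact_email.rsplit("@", 1)[-1].lower()
    if domain in FREE_MAIL_DOMAINS:
        score += 2  # recruiter not using a company domain
    return score

listing = ("No experience needed! Guaranteed income. "
           "Text me on WhatsApp. Small training fee applies.")
print(score_listing(listing, "recruiter@gmail.com"))  # high score -> flag for review
```

A real platform would treat such a score only as a triage signal feeding human review or a learned model, since rigid rules are easy for fraudsters to evade.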

Tech support scams

Tech support scams are a type of fraud in which scammers trick victims into paying for unnecessary technical support services to fix device or software problems that don’t exist. The scammers may then gain remote access to a computer, which lets them access all information stored on it and on any network connected to it, or install malware that gives them access to the computer and sensitive data.

Tech support scams are a case where elevated fraud risks exist, even if AI does not play a role. For example, in mid-April 2024, Microsoft Threat Intelligence observed the financially motivated and ransomware-focused cybercriminal group Storm-1811 abusing Windows Quick Assist software by posing as IT support. Microsoft did not observe AI used in these attacks; Storm-1811 instead impersonated legitimate organizations through voice phishing (vishing) as a form of social engineering, convincing victims to grant them device access through Quick Assist. 

Quick Assist is a tool that enables users to share their Windows or macOS device with another person over a remote connection. Tech support scammers often pretend to be legitimate IT support from well-known companies and use social engineering tactics to gain the trust of their targets. They then attempt to employ tools like Quick Assist to connect to the target’s device. 

Quick Assist and Microsoft are not compromised in these cyberattack scenarios; however, the abuse of legitimate software presents a risk that Microsoft is focused on mitigating. Informed by Microsoft’s understanding of evolving cyberattack techniques, the company’s anti-fraud and product teams work closely together to improve transparency for users and enhance fraud detection techniques.

The Storm-1811 cyberattacks highlight the capability of social engineering to circumvent security defenses. Social engineering involves collecting relevant information about targeted victims and arranging it into credible lures delivered through phone, email, text, or other mediums. Various AI tools can quickly find, organize, and generate information, thus acting as productivity tools for cyberattackers. Although AI is a new development, enduring measures to counter social engineering attacks remain highly effective. These include increasing employee awareness of legitimate helpdesk contact and support procedures, and applying Zero Trust principles to enforce least privilege across employee accounts and devices, thereby limiting the impact of any compromised assets while they are being addressed. 

Microsoft has taken action to mitigate attacks by Storm-1811 and other groups by suspending identified accounts and tenants associated with inauthentic behavior. If you receive an unsolicited tech support offer, it is likely a scam. Always reach out to trusted sources for tech support. If scammers claim to be from Microsoft, we encourage you to report it directly to us at https://www.microsoft.com/reportascam

Building on the Secure Future Initiative (SFI), Microsoft is taking a proactive approach to ensuring our products and services are “Fraud-resistant by Design.” In January 2025, a new fraud prevention policy was introduced: Microsoft product teams must now perform fraud prevention assessments and implement fraud controls as part of their design process. 

Recommendations

  • Strengthen employer authentication: Fraudsters often hijack legitimate company profiles or create fake recruiters to deceive job seekers. To prevent this, job platforms should introduce multifactor authentication and Verified ID as part of Microsoft Entra ID for employer accounts, making it harder for unauthorized users to gain control.
  • Monitor for AI-based recruitment scams: Companies should deploy deepfake detection algorithms to identify AI-generated interviews where facial expressions and speech patterns may not align naturally.
  • Be cautious of websites and job listings that seem too good to be true: Verify the legitimacy of websites by checking for secure connections (https) and using tools like Microsoft Edge’s typo protection.
  • Avoid providing personal information or payment details to unverified sources: Look for red flags in job listings, such as requests for payment or communication through informal platforms like text messages, WhatsApp, nonbusiness Gmail accounts, or requests to contact someone on a personal device for more information.

Using Microsoft’s security signal to combat fraud

Microsoft is actively working to stop fraud attempts by evolving large-scale, AI-based detection models, such as machine learning systems, that play defense by learning from and mitigating fraud attempts. Machine learning helps a computer learn without direct instruction by using algorithms to discover patterns in large datasets. Those patterns are then used to build a comprehensive AI model that allows for predictions with high accuracy.
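
As a minimal illustration of that learning loop, the sketch below fits a toy classifier to a handful of labeled transaction features using scikit-learn. The features, data, and decision boundary are invented for the example; they do not reflect Microsoft’s production models.

```python
from sklearn.linear_model import LogisticRegression

# Toy feature vectors: [order_value_usd, account_age_days, orders_past_hour].
# The features and labels are invented for illustration only.
X = [
    [20,  400, 1], [35,  900, 1], [15,  120, 2],   # legitimate
    [950,   1, 9], [780,   2, 7], [640,   0, 12],  # fraudulent
]
y = [0, 0, 0, 1, 1, 1]  # 0 = legitimate, 1 = fraud

model = LogisticRegression()
model.fit(X, y)  # learn a pattern separating the two classes

# Score a new event; a high probability would trigger review or a block.
new_event = [[820, 1, 8]]
print(f"fraud probability: {model.predict_proba(new_event)[0][1]:.2f}")
```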

We have developed in-product safety controls that warn users about potential malicious activity and integrate rapid detection and prevention of new types of attacks.

Our fraud team has developed domain impersonation protection using deep-learning technology at the domain creation stage to help protect against fraudulent e-commerce websites and fake job listings. Microsoft Edge has incorporated website typo protection, and we have developed AI-powered fake job detection systems for LinkedIn.
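
Microsoft’s system applies deep learning at domain creation; as a simpler, hedged stand-in for the idea, the sketch below flags candidate domains whose names fall within a small edit distance of known brands, a common first-pass heuristic. The brand list and distance threshold are assumptions for illustration.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

# Hypothetical brand list; a real system would cover far more names
# and use a learned model rather than a fixed threshold.
KNOWN_BRANDS = ["microsoft", "linkedin", "paypal"]

def looks_like_impersonation(new_domain: str, max_distance: int = 2) -> bool:
    name = new_domain.split(".")[0].lower()
    return any(0 < edit_distance(name, brand) <= max_distance
               for brand in KNOWN_BRANDS)

print(looks_like_impersonation("micr0soft.com"))  # True: one character off
print(looks_like_impersonation("example.com"))    # False: no near match
```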

Microsoft Defender SmartScreen is a cloud-based security feature that helps guard against unsafe browsing by analyzing websites, files, and applications based on their reputation and behavior. It is integrated into Windows and the Edge browser to help protect users from phishing attacks, malicious websites, and potentially harmful downloads.

Furthermore, Microsoft’s Digital Crimes Unit (DCU) partners with others in the private and public sector to disrupt the malicious infrastructure used by criminals perpetrating cyber-enabled fraud. The team’s longstanding collaboration with law enforcement around the world to respond to tech support fraud has resulted in hundreds of arrests and increasingly severe prison sentences worldwide. The DCU is applying key learnings from past actions to disrupt those who seek to abuse generative AI technology for malicious or fraudulent purposes.

Quick Assist features and Remote Help combat tech support fraud

To help combat tech support fraud, we have added warning messages in Quick Assist that alert users to possible tech support scams before they grant access to anyone purporting to be an authorized IT department or other support resource.

Windows users must read and click the box to acknowledge the security risk of granting remote access to the device.


Microsoft has significantly enhanced Quick Assist protection for Windows users by leveraging its security signal. In response to tech support scams and other threats, Microsoft now blocks an average of 4,415 suspicious Quick Assist connection attempts daily, accounting for approximately 5.46% of global connection attempts. These blocks target connections exhibiting suspicious attributes, such as associations with malicious actors or unverified connections.

Microsoft’s continual focus on advancing Quick Assist safeguards seeks to counter adaptive cybercriminals who once targeted individuals opportunistically with fraudulent connection attempts but have more recently turned to organized cybercrime campaigns against enterprises, campaigns that Microsoft’s actions have helped disrupt.

Our Digital Fingerprinting capability, which leverages AI and machine learning, drives these safeguards by collecting fraud and risk signals used to detect fraudulent activity. If those risk signals indicate a possible scam, the Quick Assist session is automatically ended.
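
The signals Digital Fingerprinting actually collects are not public, so the sketch below only illustrates the general pattern: hypothetical per-session signals are aggregated into a risk score, and the session is ended when an illustrative threshold is crossed.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    """Hypothetical per-session signals; the real telemetry is not public."""
    connection_from_flagged_network: bool
    helper_account_age_days: int
    full_control_requested: bool
    prior_reports_against_helper: int

def risk_score(s: SessionSignals) -> int:
    score = 0
    if s.connection_from_flagged_network:
        score += 3  # association with known malicious infrastructure
    if s.helper_account_age_days < 7:
        score += 2  # freshly created accounts are riskier
    if s.full_control_requested:
        score += 1  # full control is more dangerous than view-only
    score += min(s.prior_reports_against_helper, 3)
    return score

RISK_THRESHOLD = 4  # illustrative cutoff

def should_end_session(s: SessionSignals) -> bool:
    return risk_score(s) >= RISK_THRESHOLD

session = SessionSignals(True, 2, True, 1)
print(should_end_session(session))  # True -> terminate automatically
```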

For enterprises combating tech support fraud, Remote Help is another valuable resource for employees. Remote Help is designed for internal use within an organization and includes features that make it ideal for enterprises.

By reducing scams and fraud, Microsoft aims to enhance the overall security of its products and protect its users from malicious activities.

Consumer protection tips

Fraudsters exploit psychological triggers such as urgency, scarcity, and trust in social proof. Consumers should be cautious of:

  • Impulse buying—Scammers create a sense of urgency with “limited-time” deals and countdown timers.
  • Trusting fake social proof—AI generates fake reviews, influencer endorsements, and testimonials to appear legitimate.
  • Clicking on ads without verification—Many scam sites spread through AI-optimized social media ads. Consumers should cross-check domain names and reviews before purchasing.
  • Ignoring payment security—Avoid direct bank transfers or cryptocurrency payments, which lack fraud protections.

Job seekers should verify employer legitimacy, be on the lookout for common job scam red flags, and avoid sharing personal or financial information with unverified employers.

  • Verify employer legitimacy—Cross-check company details on LinkedIn, Glassdoor, and official websites to verify legitimacy.
  • Notice common job scam red flags—If a job requires upfront payments for training materials, certifications, or background checks, it is likely a scam. Unrealistic salaries or no-experience-required remote positions should be approached with skepticism. Emails from free webmail domains (for example, a generic @gmail.com address instead of an address on the company’s own domain) are also typical indicators of fraudulent activity.
  • Be cautious of AI-generated interviews and communications—If a video interview seems unnatural, with lip-syncing delays, robotic speech, or odd facial expressions, it could be deepfake technology at work. Job seekers should always verify recruiter credentials through the company’s official website before engaging in any further discussions.
  • Avoid sharing personal or financial information—Under no circumstances should you provide a Social Security number, banking details, or passwords to an unverified employer.

Microsoft is also a member of the Global Anti-Scam Alliance (GASA), which aims to bring governments, law enforcement, consumer protection organizations, financial authorities and providers, brand protection agencies, social media, internet service providers, and cybersecurity companies together to share knowledge and protect consumers from getting scammed.

Recommendations

  • Remote Help: Microsoft recommends using Remote Help instead of Quick Assist for internal tech support. Remote Help is designed for internal use within an organization and incorporates several features designed to enhance security and minimize the risk of tech support hacks. It is engineered to be used only within an organization’s tenant, providing a safer alternative to Quick Assist.
  • Digital Fingerprinting: This identifies malicious behaviors and ties them back to specific individuals. This helps in monitoring and preventing unauthorized access.
  • Blocking full control requests: Quick Assist now includes warnings and requires users to check a box acknowledging the security implications of sharing their screen. This adds a layer of helpful “security friction” by prompting users who may be multitasking or preoccupied to pause to complete an authorization step.

Kelly Bissell: A cybersecurity pioneer combating fraud in the new era of AI

Kelly Bissell’s journey into cybersecurity began unexpectedly in 1990. Initially working in computer science, Kelly was involved in building software for healthcare patient accounting and operating systems at Medaphis and BellSouth, now AT&T.

His interest in cybersecurity was sparked when he noticed someone logged into a phone switch attempting to get free long-distance calls and traced the intruder back to Romania. This incident marked the beginning of Kelly’s career in cybersecurity.

“I stayed in cybersecurity hunting for bad actors, integrating security controls for hundreds of companies, and helping shape the NIST security frameworks and regulations such as FFIEC, PCI, NERC-CIP,” he explains.

Currently, Kelly is Corporate Vice President of Anti-Fraud and Product Abuse within Microsoft Security. Microsoft’s fraud team employs machine learning and AI to build better detection code and understand fraud operations. They use AI-powered solutions to detect and prevent cyberthreats, leveraging advanced fraud detection frameworks that continuously learn and evolve.

“Cybercrime is a trillion-dollar problem, and it’s been going up every year for the past 30 years. I think we have an opportunity today to adopt AI faster so we can detect and close the gap of exposure quickly. Now we have AI that can make a difference at scale and help us build security and fraud protections into our products much faster.”

Previously Kelly managed the Microsoft Detection and Response Team (DART) and created the Global Hunting, Oversight, and Strategic Triage (GHOST) team that detected and responded to attackers such as Storm-0558 and Midnight Blizzard.

Prior to Microsoft, during his time at Accenture and Deloitte, Kelly collaborated with companies and worked extensively with government agencies like the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Bureau of Investigation, where he helped build security systems inside their operations.

His time as Chief Information Security Officer (CISO) at a bank exposed him to addressing both cybersecurity and fraud, leading to his involvement in shaping regulatory guidelines to protect banks and eventually Microsoft.

Kelly has also played a significant role in shaping regulations around the National Institute of Standards and Technology (NIST) and Payment Card Industry (PCI) compliance, which helps ensure the security of businesses’ credit card transactions, among others.

Internationally, Kelly played a crucial role in helping establish agencies and improve cybersecurity measures. As a consultant in London, he helped stand up the United Kingdom’s National Cyber Security Centre (NCSC), part of the Government Communications Headquarters (GCHQ) and the United Kingdom’s equivalent of CISA. Kelly’s efforts in content moderation with several social media companies, including YouTube, were instrumental in removing harmful content.

That’s why he’s excited about Microsoft’s partnership with GASA. GASA brings together governments, law enforcement, consumer protection organizations, financial authorities, internet service providers, cybersecurity companies, and others to share knowledge and define joint actions to protect consumers from getting scammed.

“If I protect Microsoft, that’s good, but it’s not sufficient. In the same way, if Apple does their thing, and Google does their thing, but if we’re not working together, we’ve all missed the bigger opportunity. We must share cybercrime information with each other and educate the public. If we can have a three-pronged approach of tech companies building security and fraud protection into their products, public awareness, and sharing cybercrime and fraudster information with law enforcement, I think we can make a big difference,” he says.


Next steps with Microsoft Security

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


Methodology: Microsoft platforms and services, including Azure, Microsoft Defender for Office 365, Microsoft Threat Intelligence, and the Microsoft Digital Crimes Unit (DCU), provided anonymized data on threat actor activity and trends. Additionally, Microsoft Entra ID provided anonymized data on threat activity, such as malicious email accounts, phishing emails, and attacker movement within networks. Additional insights are from the daily security signals gained across Microsoft, including the cloud, endpoints, the intelligent edge, and telemetry from Microsoft platforms and services. The $4 billion figure represents an aggregated total of fraud and scam attempts against Microsoft and our customers in consumer and enterprise segments over 12 months.