Artificial intelligence (AI) is transforming how we work, communicate, and innovate, but it is also reshaping the landscape of security risks. The same tools used to boost productivity are being weaponized by bad actors. With hyper-personalized phishing emails, cloned voices, and deepfake videos, cybercriminals are using AI to launch faster, more convincing attacks.
In a May 2024 announcement from the FBI’s San Francisco field office, officials warned that AI is now central to many phishing, social engineering, and impersonation scams. FBI Special Agent in Charge Robert Tripp stated, “Attackers are leveraging AI to craft highly convincing voice or video messages and emails to enable fraud schemes against individuals and businesses alike.”
Let’s discuss what this means for businesses and how to stay protected.
How AI security risks are lowering the barrier for cybercriminals
In the past, launching a cyberattack required advanced skills and expensive software. Today, with generative AI tools and low-cost deepfake technology, anyone with basic tech knowledge can carry out sophisticated attacks.
Research from UC Berkeley’s Center for Long-Term Cybersecurity shows that AI is automating the early stages of many attacks, including:
- Scanning public profiles to tailor phishing content.
- Generating well-written emails that mimic tone and grammar.
- Cloning voices and faces with minimal video input.
- Creating fake Zoom calls and fraudulent money requests.
These tools allow criminals to personalize phishing emails at scale, making them more challenging to spot and much more likely to succeed.
Also read: Why Your Business Needs Cyber Liability Insurance
The rise of deepfakes and voice cloning
Deepfake audio and video (realistic impersonations of people’s voices and appearances) are a growing concern. AI presents many new opportunities for security risks, such as replicating a CEO’s speech, mimicking a loved one’s voice, or faking a live video call. These attacks often bypass traditional warning signs.
While cybersecurity experts at NYU point out that telltale signs still exist—awkward pauses, mismatched lip-syncing, or video compression glitches—they’re becoming increasingly subtle. The key is to verify any suspicious message, even if it looks or sounds real.
Businesses are at risk—but not powerless
The FBI stresses the importance of awareness and layered defenses. By combining smart technology with informed teams, organizations can significantly reduce their exposure. Four best practices stand out:
1. Require Multi-Factor Authentication (MFA)
Even if passwords are compromised, MFA can stop attackers from gaining access. A second verification step—like a text code or biometric scan—is now essential.
2. Train teams to detect AI-enhanced scams
Today’s phishing emails are well-written and convincing. Regular training and phishing simulations can help staff recognize subtle warning signs like mismatched URLs, unexpected urgency, or unfamiliar sender details.
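One of those warning signs, a mismatched URL, can even be checked mechanically. The sketch below is a simplified illustration (with hypothetical domain names) that scans an email's HTML for links whose visible text shows one domain while the underlying `href` points somewhere else:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Flag anchors whose visible text shows one domain but whose
    href actually points to another -- a classic phishing tell."""
    def __init__(self):
        super().__init__()
        self.href = None
        self.text = ""
        self.mismatches = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href", "")
            self.text = ""

    def handle_data(self, data):
        if self.href is not None:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a" and self.href is not None:
            shown = urlparse(self.text.strip()).netloc
            actual = urlparse(self.href).netloc
            if shown and actual and shown != actual:
                self.mismatches.append((self.text.strip(), self.href))
            self.href = None

auditor = LinkAuditor()
auditor.feed('<a href="http://evil.example.net/login">http://yourbank.example.com</a>')
print(auditor.mismatches)
# → [('http://yourbank.example.com', 'http://evil.example.net/login')]
```

Training teaches people to make the same comparison by hovering over a link before clicking it.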
3. Use AI for cyber defense
When used strategically, AI can help combat security risks. As Excelsior University notes, AI-powered tools can scan for anomalies, detect suspicious behavior, and monitor systems in real time—reducing human error and response time.
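As a toy illustration of the anomaly scanning described above (far simpler than what commercial AI platforms do, and not a production tool), the following sketch baselines daily login counts and flags any day that deviates sharply from the norm:

```python
import statistics

def flag_anomalies(daily_logins: list[int], threshold: float = 2.0) -> list[int]:
    """Return the indices of days whose login volume deviates from the
    mean by more than `threshold` standard deviations -- a toy version
    of the statistical baselining that AI-powered monitoring automates."""
    mean = statistics.mean(daily_logins)
    stdev = statistics.stdev(daily_logins)
    return [i for i, n in enumerate(daily_logins)
            if stdev and abs(n - mean) / stdev > threshold]

logins = [120, 118, 125, 119, 122, 121, 980]  # sudden spike on the last day
print(flag_anomalies(logins))  # → [6]
```

Real systems baseline many signals at once (logins, data transfers, process activity) and update continuously, which is what makes machine assistance valuable here.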
4. Build security into your systems
Security should be part of the design, not an afterthought. That means integrating encryption, access controls, and continuous monitoring from day one. UC Berkeley’s cybersecurity researchers also recommend aligning with guidance from the Cybersecurity and Infrastructure Security Agency (CISA) and collaborating with law enforcement to build long-term defenses.
Also read: The Basics of MFA Tools: What Your Business Needs to Know
Steps individuals can take to stay safe
AI-powered scams aren’t limited to companies—they’re also targeting individuals. You can reduce your personal risk by following these simple precautions:
- Use strong, unique passwords and a password manager.
- Enable MFA on all your accounts.
- Stay cautious of unexpected messages, even if they seem familiar.
- Slow down before acting on any urgent request, especially involving money or sensitive data.
- Report suspicious activity at IC3.gov, the FBI’s cybercrime reporting portal.
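For the first precaution on the list, a short sketch (assuming Python 3 is available) shows how a cryptographically secure random password can be generated; a password manager then handles remembering a unique one per account:

```python
import secrets
import string

def make_password(length: int = 16) -> str:
    """Generate a random password with the cryptographically secure
    `secrets` module from Python's standard library."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())  # a fresh 16-character password each call
```

The point is uniqueness: a password generated this way for one account reveals nothing about any other, so a single breach stays contained.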
Be prepared for the next generation of AI security risks
As AI-driven cyberattacks become more sophisticated, the security risks to small and mid-sized businesses grow—financially and operationally. It’s not just about defending networks anymore; it’s about protecting your people, your data, and your reputation from a new class of threat.
McGowan Program Administrators’ Cyber Liability Insurance provides strong, affordable protection tailored for small and mid-sized businesses. With up to $2MM in limits, 19 coverage parts, and same-day turnaround, this program addresses both first- and third-party risks, including social engineering scams and business interruption.
To take defense even further, McGowan partners with White Dragon AI to deliver ORCA, a powerful phishing simulation platform that trains employees to recognize and avoid real-world threats.
AI security risks are changing the cyber landscape, but the right coverage and preparation can keep you ahead of the curve.