

Fighting Fraud in the Age of AI
A broad view of a changing landscape
On a Wednesday morning in June, a large group of businesspeople gathered for “Fraud in the Age of AI.” The seminar brought together a panel of leaders in fraud protection to discuss ongoing scams and trends, and especially how artificial intelligence is changing the landscape of fraud—and what businesses can do to better protect themselves.
The setting
The audience — a mix of bankers, investors, development professionals, and manufacturers — received a warm welcome from First Financial Bank’s Commercial Banking Regional Market President of Western Ohio, Jon Waldo. Then they witnessed a live demonstration of deepfake technology. As the audience processed its believability, Waldo drove his point home: “I don’t have to fool every one of you. I just have to fool one of you.”
The panel was moderated by Melissa Donovan, First Financial Bank’s Director of Enterprise Fraud, and the panelists included:
- Lance Murray, Chief Information Security Officer, First Financial Bank
- Carly Devlin, Shareholder and Lead of IT Risk & Cybersecurity, Clark Schaefer Hackett
- A member of the Secret Service, who will remain nameless to protect his work
The conduit for cons
The major theme of the seminar was that AI is not a scam on its own; it is a sophisticated tool that helps scammers become more effective.
Think about the information you post on social media. AI can scour the web for details like birthdays, the names of former pets, and family members’ names, and then generate lists of likely passwords or answers to security questions.
The Secret Service agent explained another tactic: “Scam emails used to be so poorly written that you could identify them quickly due to typos and poor grammar. Not anymore.” He went on, “AI is highly involved with ransomware and with Business Email Compromise, which lets the bad guys get in from an employee clicking on an email.”
Many people assume that if they click a malicious link, they will immediately see the effects of malware. In reality, ransomware can operate in the background, take data (often confidential client information like bank account numbers or SSNs), and disappear with no obvious damage. The targeted company thinks it’s in the clear until it receives a threat to release the data on the dark web along with a demand for payment — hence the name ransomware. AI is now being used to help write the code for malware and ransomware, increasing the number of malicious programs a single bad actor can create. That malware or ransomware can then be sold as a service to other bad actors who lack coding ability, making it easier than ever to become a cyber con artist.
It’s not just the text of phishing emails, either. AI is also being used to help scam artists pose more convincingly as trusted personnel and gain access to confidential information. AI-generated tutorials can teach someone how to sound like an IT professional before they call a target, making it easier to believably pose as an internal IT representative with an issue that requires access to passwords and accounts.
The way of the future
The most important thing to know is that AI is not going away any time soon. Murray described the eagerness of many associates across the First Financial footprint to adopt AI, and the methodical approach his team is taking.
“We are building our AI strategy,” he said. “We are asking ourselves, ‘What will be the business case, the use case? Who is going to manage it? Where is the data going?’”
The Secret Service agent shared that 80% of breaches stem from the human factor. Devlin elaborated, describing how AI tools like Copilot can comb through a vast quantity of files on a system. If a file’s security settings are not strict enough, employees can surface information they didn’t even know they could access. Murray explained that this was why his team had blocked organizational device access to any open AI site while it conducted its analysis and built the bank’s AI strategy.
AI presents unparalleled risk, but also exciting opportunity, particularly for companies handling large amounts of confidential client data.
If you suspect you've been targeted by fraud, call the First Financial Bank Business Support Center (BSC) at 866.604.7964. Explore tips and information on how to prepare and protect yourself on our Commercial Fraud and Online Safety Resource Hub.