Cybercriminals don’t need to break down the front door anymore. Sometimes they just need to pass the interview. Jake Moore, Global Cybersecurity Advisor at ESET and former Digital Forensics and Cyber Crime Unit investigator, has spent his career looking at crime from the inside out. At MSP GLOBAL this October, he’ll show you just how far AI-powered deception has already come: deepfake job candidates, cloned voices, fake documents, false identities, and social engineering at industrial scale. Until now, job interview scams have mostly been seen as coming from the interviewer’s side. But with deepfake technology, the threat now runs both ways: employers can be deceived too.
In this Q&A, Jake explains how he used AI to land interviews under a fake identity, where the hiring process breaks down, and why MSPs need to treat recruitment, trust, and human verification as part of their security posture—not just HR admin. Because in the age of deepfakes, the next breach might not look like malware. It might look like the perfect candidate.

You managed to land a job interview at an MSP as “Jackie Morris.” What made the experiment work: the technology, the hiring process, or human trust?

It was possible because the proper verification processes weren't in place, but it also helped that I was able to manipulate trust. Hacking humans is still a key and vital part of the cybercriminal toolkit, and once that trust is taken advantage of, the rest of the hacking process becomes much easier.

At what point did the recruitment process first start to fail—identity checks, reference checks, video confidence, urgency to hire, or something else?

Reference checks weren’t carried out, and the identity checks were only looked at visually, which means they can be circumvented. There was an urgency to hire the candidate in both positions I applied for, but I was simply able to talk to them and make them believe what they were seeing was truthful—and why would you question it? Deepfakes were clearly not even on their radar, so it didn’t come up and no checks were made.

Deepfakes still sound futuristic to many business owners. How easy was it in practice to create a convincing enough candidate?

Deepfakes are extremely easy to produce offline, but they still take a little extra knowledge and technology to create in real time, so luckily we aren’t seeing this happen too much yet. As the technology improves, though, we may need to be more aware in the future.

For MSPs hiring technical staff remotely, what are the red flags that are easiest to miss when someone looks and sounds plausible on screen?

Red flags aren’t always clear on a video call, as the technology can convincingly trick people. That’s why it remains vital to meet someone in real life before sending out a laptop or sensitive information.

How should MSPs redesign remote hiring without turning it into a slow, suspicious, unpleasant process for genuine candidates?

Meeting the candidate in person is vital, but when that isn’t possible, we are starting to see trusted third parties meet with candidates before anything is handed over. This helps when candidates are in different countries. It’s also important to remind prospective candidates that they will have to meet people before they’re hired, which deters fraudulent applications. Technological verification techniques are being developed as well, so it’s worth keeping an eye on the latest authentication tech, which is forever changing.

Recruitment is usually seen as an HR function. In an age of deepfakes, should hiring become part of an MSP’s security posture?

Absolutely. Hiring processes are changing, and technology is being taken full advantage of. But the process is now more about awareness and education. Many people are still completely unaware that deepfakes are even possible, let alone this convincing, so it’s important to teach everyone involved in the hiring process to be alert.
[At MSP GLOBAL, we would add that this shift is already showing up in the HR world. Gartner’s HR practice has warned that candidate fraud is becoming harder to detect as applicants use AI throughout the hiring process, with Senior Research Director Jamie Kohn noting that “candidate fraud creates cybersecurity risks that can be far more serious than making a bad hire.” Gartner also predicts that by 2028, one in four candidate profiles worldwide will be fake. For MSPs, that means recruitment can no longer sit outside the security conversation: every new hire is also a potential access-control decision.]

What would a mistake like this cost a business in €/$?

It’s not worth putting a figure on this, as the cost could be catastrophic if a threat actor were to get hold of sensitive company information.
MSPs can’t afford to treat AI deception as tomorrow’s problem. At MSP GLOBAL 2026, Jake Moore will show how deepfakes, fake candidates and social engineering are already testing the limits of trust.
Work on Your Cybersecurity Posture
Come to Barcelona on October 21–22 to meet the cybersecurity vendors, service delivery partners and MSP leaders building the next layer of defense. Registration is now open, and it’s free!
