
FBI: Deepfake Fraudsters Applying for Remote Employment


Cybercrime, Fraud Management & Cybercrime, Governance & Risk Management

Paycheck Is a Path to Insider Access at Tech Companies

(@prajeetspeaks) • June 29, 2022

That candidate for a remote software coding position may not actually exist, at least not as presented, the FBI says in a warning for tech companies to be on the lookout for deepfake applicants.


Threat actors are using a combination of stolen personally identifiable information and deepfake imaging technology to deceive tech companies into hiring them for remote positions, states an advisory from the FBI's Internet Crime Complaint Center.

The goal is to gain insider access to customer and financial data, corporate databases and proprietary information.

Deepfakes use artificial intelligence to superimpose someone’s likeness or voice onto someone else, even in real time.

The technique isn't foolproof: The FBI says prospective employers have caught on to the fraud when the actions and lip movements of an interviewee didn't quite sync up with the audio.
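The lip-sync mismatch the FBI describes is also the intuition behind automated checks. A minimal, illustrative sketch (not the FBI's method): correlate a per-frame mouth-openness signal, such as one extracted with a facial-landmark library, against the audio's loudness envelope, and flag low correlation as possible desynchronization. Both signals below are synthetic stand-ins.

```python
import numpy as np

def sync_score(mouth_openness, audio_energy):
    """Pearson correlation between a mouth-openness time series and an
    audio-energy time series sampled at the same rate. A live speaker
    should score high; a dubbed or deepfaked feed tends to drift."""
    m = (mouth_openness - mouth_openness.mean()) / mouth_openness.std()
    a = (audio_energy - audio_energy.mean()) / audio_energy.std()
    return float(np.mean(m * a))

# Synthetic demo: smoothed noise stands in for a real loudness envelope.
rng = np.random.default_rng(0)
envelope = np.convolve(rng.random(2020), np.ones(21) / 21, mode="valid")

in_sync = sync_score(envelope, envelope + 0.05 * rng.random(2000))  # aligned, lightly noisy
out_of_sync = sync_score(envelope, np.roll(envelope, 200))          # shifted by 200 frames

print(f"in-sync score:     {in_sync:.2f}")
print(f"out-of-sync score: {out_of_sync:.2f}")
```

Real systems use far richer cues, but the in-sync pair scores markedly higher than the shifted pair, which is the kind of inconsistency interviewers noticed by eye.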

The FBI warning comes just weeks after another warning from the federal government advising employers to be on the lookout for North Korean information technology workers posing as legitimate teleworkers.

Malicious Use of Deepfakes

Combining stolen personally identifiable information with deepfakes is a new threat tactic, says Andrew Patel, senior researcher at the Artificial Intelligence Center of Excellence at cybersecurity firm WithSecure.

Don’t count on deepfakes always being easy to catch, he warns. As they mature, deepfake technologies “will eventually be much more difficult to spot. Ultimately, what we’re seeing here is identity theft being taken to a whole new level,” he says.

Cybercriminals have already used voice impersonation technologies to bypass voice authorization mechanisms and for voice phishing, or vishing, attacks. Threat actors can impersonate their targets and bypass security measures such as voice authentication mechanisms to authorize a fraudulent transaction or spoof the victims’ contacts to gather valuable intelligence, the Photon Research Team, the research arm of digital risk protection firm Digital Shadows, told ISMG (see: Deepfakes, Voice Impersonators Used in Vishing-as-a-Service).

APT-C-23, part of the Hamas-linked Molerats group, reportedly targeted Israeli soldiers on social media with fake personae of Israeli women, using voice-altering software to produce convincing audio messages in female voices. The messages reportedly encouraged the Israeli targets to download a mobile app that would install malware on their devices.

In July 2019, cybercriminals impersonated the chief executive of a U.K.-based energy company using a voice-cloning tool in a successful attempt to receive a fraudulent money transfer of $243,000, The Wall Street Journal reported.
