AI Deepfakes and ‘Boss Fraud’ Target Lithuanian Businesses
In a sophisticated shift from random phishing to surgical strikes, Lithuanian businesses are facing a new generation of cybercrime. Last year alone, the country recorded nearly 15,500 fraud cases targeting both companies and individuals, with attempted thefts reaching a staggering €58.8 million, according to data from the Lithuanian Centre of Excellence in Anti-Money Laundering. While the scale is alarming, it is the evolution of the tactics—specifically the integration of Artificial Intelligence (AI)—that has security experts on high alert.
Modern scammers are moving away from generic, poorly written emails. Instead, they are utilizing AI tools to mimic the voices and even the video appearances of high-level executives. Imagine a scenario where a company’s accountant receives a Microsoft Teams call from their CEO. The voice is familiar, the face is recognizable, and the request is urgent: a confidential payment must be made immediately to secure a major deal. In reality, the person on the screen is a deepfake, a digital marionette controlled by criminals.
The Anatomy of the Interception Trap
One of the most financially damaging schemes currently plaguing the Lithuanian market involves the silent interception of business correspondence. Scammers gain access to email threads between partners and wait for the precise moment a payment is discussed.
Žygeda Augonė, Head of Information Security at Swedbank, highlights a recent case involving a medical clinic. The clinic was in the process of ordering interior decor worth €30,000 from an international gallery. The transaction seemed routine until the gallery reported that the funds never arrived. Investigation revealed that scammers had intercepted the email chain and swapped the gallery’s bank details for their own. Because the tone and context of the emails remained consistent, the clinic had no reason to suspect they were communicating with an intruder.
This “man-in-the-middle” approach succeeds because it exploits existing trust. Scammers no longer look for random victims; they conduct deep research into a company’s operations, identifying key roles and established partnerships before launching a targeted attack.
The Psychology of Urgency and Authority
Beyond technical interception, "CEO fraud", also known as "boss fraud", relies heavily on psychological manipulation. These attacks typically create a sense of extreme urgency combined with a demand for confidentiality. By pressuring an employee to act quickly, scammers bypass the victim's critical thinking and discourage them from consulting colleagues.
AI has amplified this threat. Previously, a suspicious employee might have looked for grammatical errors or an unusual email address. Today, AI tools allow criminals to generate high-quality, error-free messages and even replicate the communication style and emotional nuances of a specific executive. When a request comes through a real-time video or voice platform, the perceived authority of the caller often overrides standard security protocols.
Establishing a ‘Human Firewall’ Through Verification
While the technology behind these scams is advancing, the most effective defense remains rooted in human behavior and rigid internal processes. Security experts emphasize that technology alone cannot solve the problem; a culture of “friendly suspicion” is required.
To mitigate the risk of AI-driven fraud, businesses are encouraged to implement several non-negotiable verification steps:
- Out-of-Band Verification: If a request for a payment or a change in bank details is received via email or a video call, it must be confirmed through a secondary, previously known channel, such as a direct phone call to a verified number.
- The ‘Safe Word’ Protocol: Some organizations are now adopting internal code phrases or passwords. If an executive makes an unusual or urgent financial request, they must provide a pre-agreed phrase that is only known to authorized personnel.
- Dual Authorization: No single employee should have the power to initiate and finalize large payments. Implementing a “four-eyes” principle, where at least two people must approve a transaction, significantly reduces the window of opportunity for scammers.
- Automated Payment Data: Where possible, companies should move away from manual entry of bank details, which is easily manipulated, toward automated systems that flag any changes to existing vendor records.
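To make the last two safeguards concrete, here is a minimal illustrative sketch in Python of how a payment workflow might combine the "four-eyes" rule with automatic flagging of changed vendor bank details. The vendor name, IBANs, and €10,000 threshold are hypothetical placeholders, not details from any real system described in the article.

```python
from dataclasses import dataclass, field

# Illustrative vendor master data: name -> IBAN on file (placeholder values).
VENDOR_IBANS = {"Gallery Ltd": "LT601010012345678901"}

@dataclass
class PaymentRequest:
    vendor: str
    iban: str
    amount_eur: float
    approvals: set = field(default_factory=set)

def flag_detail_change(req: PaymentRequest) -> bool:
    """Return True if the requested IBAN differs from the vendor record on file."""
    return VENDOR_IBANS.get(req.vendor) != req.iban

def approve(req: PaymentRequest, employee: str) -> None:
    """Record an approval; a set guarantees the approvers are distinct people."""
    req.approvals.add(employee)

def can_execute(req: PaymentRequest, threshold_eur: float = 10_000) -> bool:
    """Four-eyes principle: payments at or above the threshold need two
    distinct approvers, and any change to stored bank details blocks
    automatic execution until verified through a separate channel."""
    if flag_detail_change(req):
        return False  # bank details changed: verify out-of-band first
    if req.amount_eur >= threshold_eur:
        return len(req.approvals) >= 2
    return len(req.approvals) >= 1
```

In this sketch, the €30,000 decor payment from the clinic example would have been blocked twice: once because the swapped IBAN no longer matched the vendor record, and once because a single employee's approval is insufficient for a sum of that size.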
As AI continues to lower the barrier for high-tech impersonation, the burden of security falls on the consistency of internal discipline. In the digital age, a familiar voice is no longer proof of identity.
Source: BNS