Welcome to the age of deepfakes, a technology that has moved rapidly from the fringes of advanced Artificial Intelligence (AI) into the front line of financial fraud. What once seemed like science fiction has become a pressing reality for Financial Institutions (FIs), with scams now capable of bypassing both technology and human judgment.
Globally, generative AI-enabled financial fraud, including deepfake impersonations and synthetic identity fraud, is projected to exceed US$40 billion by 2027, according to Deloitte’s Center for Financial Services, marking a more than threefold increase from US$12.3 billion in 2023.
The Phantom Call: A True Story from Singapore
In late March 2025, Singapore’s Anti-Scam Centre (ASC) intervened in a highly sophisticated business impersonation scam targeting the finance director of a multinational firm. The victim, initially contacted via WhatsApp, was tricked into attending a Zoom conference call featuring deepfake impersonations of the company’s CEO and other executives.
The fraudsters executed a masterclass in digital deception: perfectly syncing facial movements, voice inflections, and background settings to create an uncanny illusion of authenticity. During the call, the director was instructed to urgently transfer US$499,000 to a local corporate account, purportedly to facilitate a confidential acquisition. When a second request for an additional US$1.4 million transfer surfaced, the victim grew suspicious and alerted the company’s bank.
By then, the initial US$499,000 had already been transferred to Hong Kong bank accounts. However, thanks to swift cross-border cooperation between Singapore and Hong Kong authorities, those funds were successfully recovered — a testament to evolving cross-border anti-fraud coordination.
The Global Emergence of Deepfake Financial Crime
This was not an isolated incident. Deepfake-enabled scams are no longer hypothetical risks; they are a global threat. With today’s tools, anyone with an internet connection can weaponise synthetic media to commit financial crime. These attacks are inflicting material losses, eroding trust in digital systems, and creating new compliance and operational challenges for FIs.
Across Asia, a report by Sumsub, published via PR Newswire, found that deepfake-related incidents surged by 1,530% between 2022 and 2023, with Singapore and Hong Kong emerging as key hotspots for sophisticated corporate impersonation scams. Channel News Asia similarly reported a 207% rise in identity fraud across APAC as criminals increasingly deploy AI-generated imagery and video in social-engineering and verification-bypass scams.
Why Deepfakes Matter Now
Deepfakes — highly realistic synthetic audio, video, and images generated through advanced AI — exploit vulnerabilities in both technology and human behaviour, escalating financial crime risks across multiple vectors as highlighted in the table below:

The Monetary Authority of Singapore (MAS), in collaboration with the Association of Banks in Singapore (ABS), underscored these risks in its September 2025 paper “Cyber Risks Associated with Deepfakes.”
The paper identifies three core areas where the threat is most acute:
- Defeating biometric authentication
- Enabling social engineering scams
- Fuelling misinformation and disinformation
The Risk Landscape
1. Defeating Biometric Authentication
Biometric systems using fingerprint, facial, or voice recognition have long been a pillar of digital security, but they are increasingly vulnerable to synthetic impersonation.
- Indonesia (2024): Fraudsters used AI-generated facial images with virtual camera software to bypass facial biometric KYC during fraudulent loan applications. [Source: Group IB, 2024]
- Vietnam & Thailand (2024): Malware harvested video and audio data from victims and repurposed them to bypass facial authentication in banking apps. [Source: Group IB, 2024]
- Hong Kong (2023): Doctored ID photos enabled criminals to obtain loans totalling US$25,000. [Source: South China Morning Post, 2023]
Implication for FIs: FIs that rely on non-face-to-face verification should adopt tamper-resistant measures. These include liveness detection technologies incorporating motion/thermal detection and 3D depth analysis, cancellable biometric templates, endpoint injection detection, and robust anomaly logging with heightened AML/KYC scrutiny for synthetic IDs.
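To illustrate the layering described above, the following is a minimal, hypothetical sketch of how independent liveness signals might be combined into a single accept/reject decision. The signal names, thresholds, and defence-in-depth rule (any single control failing vetoes the check) are illustrative assumptions, not drawn from any specific vendor SDK or regulatory standard.

```python
# Hypothetical sketch: combining independent liveness signals into an
# accept/reject decision. Names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class LivenessSignals:
    motion_score: float       # 0..1, plausibility of micro-movements
    depth_score: float        # 0..1, 3D depth-map plausibility
    injection_detected: bool  # True if a virtual-camera/injection attack is flagged

def assess_liveness(s: LivenessSignals,
                    motion_min: float = 0.6,
                    depth_min: float = 0.7) -> bool:
    """Defence in depth: reject if ANY single control fails (no averaging)."""
    if s.injection_detected:
        return False
    return s.motion_score >= motion_min and s.depth_score >= depth_min

# A replayed flat video may show plausible motion but no depth variation:
print(assess_liveness(LivenessSignals(0.9, 0.2, False)))   # False
print(assess_liveness(LivenessSignals(0.8, 0.85, False)))  # True
```

The key design choice sketched here is that signals veto rather than average: a high motion score cannot compensate for a failed depth or injection check, which is what makes replayed or virtually injected media fail even when one modality looks convincing.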
2. Enabling Social Engineering and Impersonation Scams
The prevalence of deepfakes dramatically increases the risk of social engineering fraud by delivering lifelike audio-visual stimuli.
- Singapore (2025): The aforementioned finance-director deepfake Zoom call almost caused US$1.9 million in fraudulent wire transfers. [Source: CNA, 2025]
- Hong Kong (2024): A 27-member gang used AI-generated personas to steal US $46 million through romance scams. [Source: CNN, 2024]
- Hong Kong (2024): A firm’s finance employee transferred US$25 million after a deepfake-fabricated video call with the CFO and other key executives. [Source: The Guardian, 2024]
Implication for FIs: Dual-control mechanisms should be mandated for high-value transactions, and verification should never rely solely on video or voice presence. Protocols must incorporate out-of-band checks, two-factor authentication, frequent executive impersonation drills, and continuous anomaly monitoring.
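The dual-control and out-of-band principles above can be sketched in code. This is a hypothetical workflow model, not any institution's actual system: the threshold, role names, and the rule that high-value transfers need two distinct approvers plus an out-of-band callback are illustrative assumptions.

```python
# Hypothetical sketch of dual-control release for high-value transfers.
# Threshold, names, and rules are illustrative policy assumptions.

HIGH_VALUE_THRESHOLD = 100_000  # USD; set by institutional policy

class TransferRequest:
    def __init__(self, amount: float, requester: str):
        self.amount = amount
        self.requester = requester
        self.approvers: set[str] = set()
        self.oob_confirmed = False  # e.g. callback to a pre-registered number

    def approve(self, approver: str) -> None:
        # Separation of duties: the requester can never self-approve.
        if approver == self.requester:
            raise ValueError("requester cannot approve own transfer")
        self.approvers.add(approver)

    def confirm_out_of_band(self) -> None:
        self.oob_confirmed = True

    def releasable(self) -> bool:
        if self.amount < HIGH_VALUE_THRESHOLD:
            return len(self.approvers) >= 1
        # High value: two distinct approvers AND out-of-band confirmation.
        return len(self.approvers) >= 2 and self.oob_confirmed

req = TransferRequest(499_000, requester="finance_director")
req.approve("treasury_head")
print(req.releasable())  # False: needs a second approver and a callback
```

Note that a deepfake video call defeats none of these gates: even a perfectly convincing "CEO" on screen cannot supply the second approver's credentials or answer the out-of-band callback.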
3. Fuelling Misinformation and Disinformation
Deepfakes are also reshaping information integrity, being used to spread false information that destabilises markets and erodes public trust.
- United States (2023): An AI-generated image of a Pentagon explosion caused panic selling and a temporary dip in the S&P 500 index. [Source: New York Post, 2023]
- Singapore (2025): Prime Minister Lawrence Wong publicly warned about deepfake scams using his likeness to promote illicit cryptocurrency investments. [Source: The Straits Times, 2025]
- Hong Kong (2024): Fraudulent cryptocurrency platforms circulated deepfake videos of Elon Musk aiming to mislead investors. [Source: SFC, 2024]
Implication for FIs: FIs should treat synthetic news as a potential market-abuse trigger and strengthen defences, including real-time brand and executive impersonation monitoring, rapid crisis-communication playbooks, and trading-surveillance alerts linked to news anomalies.
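Linking trading surveillance to news anomalies could look something like the following minimal sketch. The rule, the 2% move threshold, and the "unverified viral media" flag are illustrative assumptions; a production system would draw these signals from market-data and media-monitoring feeds.

```python
# Hypothetical sketch: escalate a surveillance alert when an abnormal
# price move coincides with unverified viral media about the issuer.
# Threshold and signal names are illustrative assumptions.
def surveillance_alert(price_move_pct: float,
                       viral_media_unverified: bool,
                       move_threshold: float = 2.0) -> bool:
    """Illustrative rule: large intraday move + unverified media => review."""
    return abs(price_move_pct) >= move_threshold and viral_media_unverified

print(surveillance_alert(-3.1, True))   # True: flag for human review
print(surveillance_alert(-3.1, False))  # False: move alone is not enough
```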
From boardrooms to markets, deepfake-enabled crimes are increasingly interconnected — spanning impersonation fraud, romance scams, and market-moving misinformation.

Regulatory Context
The Monetary Authority of Singapore (MAS) has reinforced the importance of addressing emerging deepfake and AI risks through multiple frameworks.
The Technology Risk Management (TRM) Guidelines require continuous monitoring, anomaly detection, and multi-layered authentication to counter evolving attack vectors (paras. 8.1.3 and 8.3.2). Additionally, MAS’ AML/CFT Notices mandate additional measures, which may include enhanced due diligence and suspicious transaction reporting for synthetic or impersonation-linked activity, particularly where such risks are more likely to occur (e.g. remote onboarding of customers).
Complementing this, FATF’s Guidance on Digital Identity (2020) highlights that remote onboarding processes relying on digital ID systems must account for vulnerabilities such as the use of synthetic or manipulated media. The guidance urges FIs to implement strong biometric verification and anti-spoofing liveness detection to mitigate these risks (Sections III, paras. 64–68, 80–83).
PSN01 and PSN02 (for payment services) and SFA04-N02 (for capital markets services), together with their supplementary guidelines, similarly mandate robust customer due diligence, independent verification, and immediate escalation when red-flag indicators are detected.
Recognising the Red Flags
Before implementing defence mechanisms, FIs must first recognise the early warning signs of deepfake-enabled financial crime. These indicators often emerge subtly through communication tone, visual inconsistencies, or unusual transaction behaviour, and can offer the first clues that a fake identity or impersonation attempt is in progress.
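One way such indicators are often operationalised is as a weighted triage score. The sketch below is purely illustrative: the indicator names, weights, and escalation threshold are assumptions for demonstration, not a validated detection model or any regulator's prescribed list.

```python
# Hypothetical sketch: aggregating deepfake red-flag indicators into a
# triage decision. Indicators, weights, and threshold are illustrative.
RED_FLAG_WEIGHTS = {
    "unusual_urgency": 2,          # pressure to act immediately or secretly
    "audio_visual_artifacts": 3,   # lip-sync drift, flat lighting, distortion
    "new_beneficiary_account": 2,  # first-time or recently added payee
    "off_channel_request": 1,      # e.g. WhatsApp instead of approved channels
}
ESCALATION_THRESHOLD = 4  # tune to the institution's risk appetite

def triage(observed: set[str]) -> str:
    score = sum(RED_FLAG_WEIGHTS.get(flag, 0) for flag in observed)
    return "escalate" if score >= ESCALATION_THRESHOLD else "monitor"

print(triage({"unusual_urgency", "audio_visual_artifacts"}))  # escalate
print(triage({"off_channel_request"}))                        # monitor
```

The point of the sketch is that individually weak signals (an off-channel message, a new payee) become actionable in combination, which mirrors how the Singapore case unfolded: urgency, an unfamiliar account, and a WhatsApp approach each looked survivable alone.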

Defensive Measures
Once potential warning signs are identified, financial institutions should strengthen their defences across technology, processes, people, and collaboration.
1. Implementation of Technology Controls
- Liveness detection with multi-modal biometric analysis (motion, thermal imaging, 3D depth analysis).
- Digital watermarking and media fingerprinting to identify synthetic media.
- Real-time AI-based deepfake detection in video or voice calls.
- Endpoint security safeguarding authentication hardware and software.
2. Policy and Process Enhancements
- Multi-factor authentication for high-risk transactions.
- Separation of duties and dual authorization for fund transfers.
- Implementation of dynamic verification protocols, possibly utilising government databases.
- Integrating deepfake attack scenarios in incident response playbooks.
3. Human Factors
- Comprehensive employee training to identify tell-tale signs (unnatural facial movements, audio distortions).
- Simulation programs for impersonation and phishing.
- Customer awareness initiatives on emerging scams.
4. Collaboration
- Active intelligence sharing between market participants, enforcement authorities, regulators and industry peers.
- Sector-wide monitoring of synthetic-media threats and coordinated response.
The following pillars illustrate the core layers of protection that help firms build resilience against deepfake-enabled threats.

Conclusion
Deepfakes represent a new class of cyber-enabled financial crime — one that exploits trust in both technology and human relationships. For FIs, the challenge is not only to defend against fraudulent transactions but also to preserve confidence in digital interactions.
Deepfakes are not a future threat; they are a present risk. Institutions that fail to adapt will find themselves vulnerable to the next incident.
Partnering for Compliance and Resilience
Curia Regis works with regulated financial institutions across the market to build and implement regulatory compliance frameworks focused on cyber-resilience, addressing AML/CFT concerns arising from such new technologies. Anchored in Singapore’s regulatory landscape, we combine deep regulatory and practical expertise to help firms stay ahead of evolving risks.
Our collaborative approach delivers:
- A clear policy basis and methodology to address regulatory requirements.
- Early identification and assessment of potential risks.
- Practical implementation of technology and process safeguards.
- Ongoing staff training and readiness simulations.
- Comprehensive incident response coordination aligned with regulatory guidelines.
By aligning operational practices with regulatory expectations, we enable FIs to protect their businesses, preserve client trust, and demonstrate resilience in the face of this new reality.
If you need guidance on translating your organisation’s exposure to the risks presented by emerging technology into practical safeguards that strengthen your compliance and cyber-resilience, contact us here or email us at admin@thecuriaregis.com.
