In a disturbing incident highlighting the risks associated with advanced artificial intelligence (AI) techniques, a man from northern China fell victim to a sophisticated scam that exploited deepfake technology. The incident has raised concerns about the potential for AI-driven financial crimes, prompting authorities and the public to remain vigilant.
The scam took place in Baotou, China, where the scammer used AI-powered face-swapping technology to convincingly impersonate the victim's close friend during a video call. The fraudster persuaded the unsuspecting victim to transfer a staggering 4.3 million yuan (over Rs 5 crore).
The victim, genuinely believing that his friend urgently needed funds for a deposit during a bidding process, complied promptly with the request. It was only when the real friend expressed complete unawareness of the situation that the victim realised he had fallen prey to a scam.
According to Reuters, local police announced that they had successfully recovered most of the stolen funds and were diligently working to trace the remaining amount.
China has recognised the escalating threat of AI-driven scams and has taken proactive measures to address the issue. The country has intensified its scrutiny of AI technology and applications, particularly in relation to voice and facial data manipulation. In January, new regulations were implemented to provide legal protection for victims of AI-driven fraud.
Given the recent surge in scams, including in India, individuals are urged to exercise caution and remain vigilant during their digital interactions.
Deepfakes are synthetic videos or images created with deep learning, a branch of AI. By analysing and learning patterns from vast amounts of data, the underlying algorithms generate highly convincing, authentic-looking output. In the case of deepfakes, this capability is used to manipulate videos or images so that they appear to show events or individuals that never actually happened or existed.
The process usually begins with gathering extensive visual material of the target individual, frequently from publicly accessible sources such as social media posts or public appearances. This material is then used to train a deep learning model to replicate the target's face, after which convincing deepfake content can be produced.
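To make the pipeline concrete, here is a heavily simplified, hypothetical sketch of the classic deepfake architecture: a single shared encoder paired with one decoder per identity. Real systems train deep convolutional networks on thousands of face crops; this toy version uses plain linear layers and random vectors as stand-ins for images, purely to illustrate where the "swap" happens.

```python
import numpy as np

# Toy stand-ins for face data: random vectors instead of real face crops.
# DIM is the flattened "image" size, LATENT the shared representation size.
rng = np.random.default_rng(0)
DIM, LATENT, N = 64, 16, 200

faces_a = rng.normal(size=(N, DIM))  # stand-in for the target's face crops
faces_b = rng.normal(size=(N, DIM))  # stand-in for the source actor's crops

enc = rng.normal(scale=0.1, size=(DIM, LATENT))    # shared encoder
dec_a = rng.normal(scale=0.1, size=(LATENT, DIM))  # decoder for identity A
dec_b = rng.normal(scale=0.1, size=(LATENT, DIM))  # decoder for identity B

lr = 1e-3
for _ in range(500):
    for faces, dec in ((faces_a, dec_a), (faces_b, dec_b)):
        z = faces @ enc          # encode with the shared encoder
        recon = z @ dec          # decode with the identity-specific head
        err = recon - faces      # reconstruction error
        # plain gradient descent on mean squared reconstruction error
        g_dec = (z.T @ err) / N
        g_enc = (faces.T @ (err @ dec.T)) / N
        dec -= lr * g_dec
        enc -= lr * g_enc

# The "swap": encode B's expressions, but decode with A's identity head,
# producing A's face performing B's movements.
swapped = (faces_b @ enc) @ dec_a
print(swapped.shape)  # (200, 64)
```

Because the encoder is shared while the decoders are identity-specific, whichever decoder is applied at inference time determines whose face appears in the output; that single design choice is what lets a scammer drive the target's face with someone else's video feed.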
As the threat of AI-driven scams persists, it is crucial for individuals to stay cautious and adopt preventive measures to protect themselves from falling victim to such fraudulent activities.