The increasing use of videoconferencing platforms and the various forms of remote work that have persisted after the COVID-19 emergency are making interpersonal collaboration increasingly virtual. This scenario should push organizations to prepare adequately to recognize impersonation attempts based on social engineering attacks, which are becoming ever more sophisticated thanks to the rapid advancement of deepfake technology.
What is deepfake technology?
The word deepfake, a combination of the terms “deep learning” and “fake,” refers to digital audio/video content created through artificial intelligence (AI) that can allow an attacker to impersonate an individual’s likeness and voice during a video conversation. This is achieved through deep learning methodologies such as the Generative Adversarial Network (GAN): a pair of neural network models, loosely inspired by the way the human brain processes information, in which a generator produces synthetic content and a discriminator tries to distinguish it from real data, until the fakes become difficult to tell apart.
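To make the adversarial idea concrete, the sketch below trains a toy GAN in PyTorch on a synthetic one-dimensional distribution. The network sizes, learning rates, and target distribution are illustrative assumptions for this minimal example, not the architecture of any real deepfake generator.

# Minimal GAN sketch (PyTorch) illustrating generator-vs-discriminator training
# on toy 1-D data; all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim = 8

# Generator: maps random noise to a synthetic "sample"
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: outputs the probability that a sample is real
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # Stand-in for genuine data (e.g. real voice/face features):
    # a Gaussian centred at 4.0
    return torch.randn(n, 1) * 0.5 + 4.0

for step in range(2000):
    # Train the discriminator to tell real samples from generated ones
    real = real_batch()
    fake = G(torch.randn(real.size(0), latent_dim)).detach()
    loss_D = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake), torch.zeros(fake.size(0), 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Train the generator to fool the discriminator
    fake = G(torch.randn(64, latent_dim))
    loss_G = bce(D(fake), torch.ones(64, 1))  # generator wants D to answer "real"
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

# After training, the generator's samples should cluster near 4.0,
# i.e. mimic the "real" data distribution.
print(G(torch.randn(5, latent_dim)).detach().squeeze())

The same adversarial loop, scaled up to faces or voices and far larger networks, is what allows deepfake tools to produce increasingly convincing impersonations.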
Deepfake and phishing
The accessibility and effectiveness of deepfake technology have led cybercriminals to use it for sophisticated social engineering attacks aimed at extortion, fraud, or reputational damage. Consider the impact of a voice phishing attack that replicates the voices of a company’s stakeholders to persuade employees to take actions that could harm security and privacy, or the effectiveness of a phone call with a simulated voice used to convince an employee to transfer funds to an offshore bank account.
Aggravating factors
Further aggravating the situation is the availability of deepfake tools offered as a service on underground web forums, which make it easier and cheaper for criminal actors with limited technical skills to set up these fraud schemes. Added to this is the large number of images and videos posted by users on social media platforms, which deep learning algorithms can process to generate deepfake content.
Mitigation
Although there is still no simple and reliable way to detect deepfakes, some best practices can be adopted:
Outlook
Although technology will continue to evolve and deepfakes will become increasingly difficult to spot, detection technologies will fortunately improve as well. To better protect themselves and their organizations from a variety of cyberattacks, insiders will need not only to keep abreast of evolving counter-techniques and implement them in a timely manner, but also, and most importantly, to raise awareness within their organizations by focusing on training employees of all ranks.
The human factor must always be considered the first bastion of defense, even and especially against the most sophisticated cyberattacks.
About the author: Salvatore Lombardo
An electronics engineer and Clusit member, he has for some time been writing for several online magazines on information security, espousing the principle of conscious education. He is also the author of the book “La Gestione della Cyber Security nella Pubblica Amministrazione.” “Education improves awareness” is his slogan.
Twitter @Slvlombardo