New attacks in the age of artificial intelligence. What to prepare for?
Modern technology places ever-greater demands on us. The advent of artificial intelligence is a huge challenge, and it also calls for greater caution and care. We are facing new types of attacks and scams that rely on familiar principles, but their technological execution is at a much higher level, and that level keeps improving. Without critical thinking and basic knowledge, it is increasingly difficult to distinguish reality from fiction or deception.
We must therefore prepare for fraud of ever-increasing quantity and quality, both live and pre-produced. A live example might be a fraudulent video call; a pre-produced one might be a fabricated photo, article, or video on social media.
To some extent, we have learned to be wary of phishing attacks carried out via email. But what if you get a call from a superior who needs quick access to certain sensitive data? You can see their contact details on your phone, hear their voice, and see their face. Moreover, your boss responds seamlessly to all your questions and suggestions. In short, everything looks normal. But is it really your superior? Should you be cautious?
Modern technology and artificial intelligence are making it increasingly difficult to detect a caller who is not who they claim to be. Caution is therefore, and always will be, in order, except perhaps in face-to-face conversations with a real person. Fortunately, physical avatars that clone real people do not exist. Yet.
What is the best defense against hackers in the age of artificial intelligence?
The age of artificial intelligence is undoubtedly revolutionary. For us users, however, there is no revolution in protecting ourselves from cyber-attacks - we can and must act evolutionarily. For example, the Zero Trust principle “Never Trust, Always Verify” has been a cornerstone of cybersecurity for a long time. If we do not want to be an easy target for attackers, this and other principles must be strictly followed. How? A fake AI-assisted video call can be exposed, for example, by asking a trivial question about a real fact. If we are in a call with someone who appears to be a person we know, we can ask about a fact that is not publicly known. If we have a strong suspicion, we can end the call and call the person back.
Non-personal communication can take place via email, chat, or phone call. Here the general rule is: unless we are 100% sure who is actually communicating with us, we never share any personal information, let alone sensitive data. A simple example is a text message from a shipping company with a parcel number and a tracking link. We do not click on the link; instead, we open the carrier’s official tracking page by searching for it online and enter the parcel number from the message. If the parcel is fictitious, nothing will be found under that number.
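As a hedged illustration of that rule, the following Python sketch extracts any link from a message and checks whether its domain belongs to the carrier at all. The allowlisted domains and the sample message are invented for the example; a real check should rely on the carrier’s officially published addresses.

```python
import re
from urllib.parse import urlparse

# Illustrative allowlist of official carrier domains (examples only).
OFFICIAL_DOMAINS = {"dhl.com", "ups.com", "fedex.com"}

def looks_suspicious(sms_text: str) -> bool:
    """Return True if the SMS contains a link whose host is not an
    official carrier domain or a subdomain of one."""
    for url in re.findall(r"https?://\S+", sms_text):
        host = (urlparse(url).hostname or "").lower()
        # Accept "dhl.com" and "track.dhl.com", reject "dhl.com.evil.xyz".
        if not any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS):
            return True
    return False

# Hypothetical smishing message with a look-alike domain:
print(looks_suspicious("Parcel 123456 held: http://dhl-tracking.xyz/p/123456"))  # True
```

Even when such a check passes, the safer habit remains the one described above: search for the carrier’s official site yourself and enter the parcel number there.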
Of course, defense against cyber threats never stands still either. Artificial intelligence can serve the defenders just as well: it is very effective at raising the success rate of detecting cyber-attacks and mitigating their risk.
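One minimal sketch of this defensive use, assuming synthetic data and the scikit-learn library: an unsupervised anomaly detector that flags logins deviating from normal behavior. The features, data, and threshold are illustrative, not a production design.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic "normal" logins: working hours (8-18), low failed-attempt counts.
normal = np.column_stack([rng.uniform(8, 18, 500), rng.poisson(0.2, 500)])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A 3 a.m. login after 7 failed attempts looks anomalous.
suspicious = np.array([[3.0, 7.0]])
print(model.predict(suspicious))  # -1 = anomaly, 1 = normal
```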
Other examples of misuse
Cyber fraud can be divided into two main categories:
• Interactive fraud, typically targeted at a specific person, where communication is conducted with the aim of persuading the victim to take an immediate, specific action, e.g. to hand over information.
• Influencing fraud, typically targeted at a group of people with the aim of shaping the group’s opinion, e.g. a selected group of voters before an election.
• General influencing of groups of people, with various goals:
- Deepfakes (is this a real photo/video?)
- Disinformation campaigns (text alone is already effective today, but a huge increase in AI-generated image content can be expected)
• What can a hacker (who today can be practically anyone) achieve?
- Influence elections and referendums
- Change people’s moods and preferences, leading to polarization in society - this is already happening intensively with social media, and AI takes it to a new level
• Replace or bypass biometry-based controls:
- Voice-based biometric checks
- Image-based biometric checks, e.g. FaceID
How do fake video calls work?
From a technology perspective, fake calls are based on generative adversarial networks (GANs) - a type of deep learning model composed of two parts, a generator and a discriminator. The generator’s task is to learn to replace the impostor’s face in the video with a face the victim knows well - typically a boss, a colleague, or a friend. The goal is for the discriminator to be unable to tell the real face from the fake one in the video. Although these models are advancing rapidly, especially with the development of generative artificial intelligence, they still have weaknesses that we can exploit when detecting fake video calls.
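To make the generator/discriminator game concrete, here is a deliberately tiny sketch in PyTorch on toy 2-D points instead of video frames. The architecture and training schedule are illustrative assumptions; real face-swap models are orders of magnitude larger.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 2) * 0.5 + 2.0  # stand-in for "real faces"
    fake = G(torch.randn(64, 8))

    # The discriminator learns to tell real from fake...
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # ...while the generator learns to fool it.
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The adversarial loop is the whole trick: each side improves only because the other does, which is why the resulting fakes can become hard for humans to spot.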
First, they require high computing power. Generating fake video in high quality and in real time requires dedicated workstations with powerful graphics cards, whose prices can run into millions of crowns depending on the graphics hardware. The alternative, renting these resources from cloud service providers, is relatively easy to trace - assuming, of course, that the fraudster has no specialized knowledge in this area. For these reasons, attackers tend to use video of significantly lower quality that is noticeably blurry at first glance.
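Blurriness can even be measured cheaply. A common heuristic, sketched below with OpenCV, is the variance of the Laplacian of a frame: sharp frames score high, blurred ones low. The file name and threshold are hypothetical and would need calibration on known-good calls.

```python
import cv2

BLUR_THRESHOLD = 100.0  # illustrative; calibrate on genuine video

def frame_sharpness(frame_bgr) -> float:
    """Variance of the Laplacian: a rough sharpness score."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

cap = cv2.VideoCapture("call_recording.mp4")  # hypothetical recording
ok, frame = cap.read()
if ok and frame_sharpness(frame) < BLUR_THRESHOLD:
    print("Frame is unusually blurry - treat the call with suspicion.")
cap.release()
```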
Second, creating a believable fake video requires a large number of high-quality photos or videos of the person being imitated. That is why many examples use the faces of celebrities and well-known personalities, for whom such material is publicly available. With the proliferation of social networks, however, it is becoming ever easier to obtain data about an “ordinary” person as well. Still, the less quality input a scammer has, the more effort they must put into making the fake video look real. Typical flaws of fake videos include “plastic” facial expressions, inconsistent features that should be symmetrical (the left ear differing significantly from the right), or jitter around the face.
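The jitter artifact in particular lends itself to a crude automated check: detect the face in each frame and watch how much its position jumps. The sketch below uses OpenCV’s bundled Haar cascade; the recording name and the pixel threshold are assumptions for illustration.

```python
import cv2
import numpy as np

# Frontal-face Haar cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_center(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return np.array([x + w / 2.0, y + h / 2.0])

cap = cv2.VideoCapture("call_recording.mp4")  # hypothetical recording
centers = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    c = face_center(frame)
    if c is not None:
        centers.append(c)
cap.release()

if len(centers) > 1:
    # Frame-to-frame displacement of the face center, in pixels.
    jumps = np.linalg.norm(np.diff(np.array(centers), axis=0), axis=1)
    if jumps.std() > 15.0:  # illustrative threshold
        print("High face-position jitter - possible sign of a generated face.")
```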
Like any technology, AI has both good and bad uses. It can significantly improve the quality of fraudulent videos, but it can also help protect against them. Since 2019, an initiative by leading technology companies and institutions, including Facebook, Microsoft, and several global universities, has focused on supporting the development of tools to detect fake content on the internet.