
Artificial Intelligence Training for Cyberwar

Artificial intelligence can provide targeted help in protecting IT infrastructures – if you understand how it works and extend its capabilities.

This was another busy year for businesses in the arms race against cybercriminals: according to the German Federal Office for Information Security (BSI), around 320,000 new malware variants entered circulation every single day between June 2018 and May 2019. This increases the pressure to establish an IT security infrastructure that can also keep pace with the spectrum of future threats.

AI-based programs already play a key role here. The complexity of the challenges in IT security leaves us no choice but to entrust many protection and monitoring functions to a "sentient" technology.

However: trust is good, transparency is better. If you hire a security service to monitor office space, factory buildings, or laboratories, you naturally check its references. The same applies when implementing AI in IT security: only systems that are transparent about their output make it through the door. And if the AI's functions and mechanisms are transparent, the right division of labor in the relationship follows naturally: the AI assists and suggests solutions, humans make the decisions.

Integrating Devices by Tagging

The AI-based tagging feature is a good example of this transparency. Our cognitix Threat Defender uses it to identify attacks on company networks. Be it a smartphone, a router, or a switch: once a device is recorded in a network via an IP or MAC address, it can be tagged and, if necessary, given tag-specific security policies. Devices with the same function can thus be grouped, making security policies easier to apply. Here the AI takes on the assisting role of continuously checking the behavior of all identically tagged devices for deviations from the group norm. For example, if one of five devices tagged #Printer does something that deviates from the behavior of the other four, the AI raises the alarm.
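
To illustrate, here is a minimal sketch of such a group-based deviation check in Python. The metric, device names, and threshold are invented for the example; the article does not describe the product's internal algorithm.

    from statistics import mean, pstdev

    # Hypothetical per-device metric, e.g. outbound connections in the last hour.
    observations = {
        "printer-01": 12, "printer-02": 14, "printer-03": 11,
        "printer-04": 13, "printer-05": 480,  # behaves unlike its peers
    }
    tags = {name: "#Printer" for name in observations}

    for device, value in observations.items():
        # Compare each device against the other devices carrying the same tag.
        peers = [v for d, v in observations.items()
                 if tags[d] == tags[device] and d != device]
        mu = mean(peers)
        sigma = pstdev(peers) or 1.0  # guard against zero spread in uniform groups
        if abs(value - mu) > 3 * sigma:
            print(f"ALERT: {device} deviates from its {tags[device]} group "
                  f"(value={value}, group mean={mu:.1f})")

Run as-is, only printer-05 triggers an alert, because the other four devices define what #Printer behavior looks like.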

Find the Cat!

But how does the AI assistant actually learn what is "normal"? This is often explained with the image of a neural network, which AI systems use to learn in a humanlike way: they receive input and independently link it to new information. In practice it is not that simple. An example of the difference: humans filter the "unusual" out of the mass of the "usual"; we quickly spot the one cat photo among 100 dog photos. An AI system works differently, because it only "sees" the patterns it knows, with varying degrees of certainty. If it has been trained to recognize dogs, it cannot identify the single cat photo among 100 dog photos as a cat, but only as "a dog, with 5 % certainty".
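
The underlying problem is that such a classifier lives in a closed world: its output layer only has entries for the classes it was trained on. A toy sketch in Python, with invented numbers, makes this visible:

    import numpy as np

    # Made-up raw scores a dog-breed classifier might produce for a cat photo.
    classes = ["beagle", "poodle", "terrier", "husky", "dachshund"]
    logits = np.array([0.4, 0.1, -0.2, 0.0, -0.3])  # all weak, none confident

    def softmax(z):
        z = z - z.max()  # shift for numerical stability
        e = np.exp(z)
        return e / e.sum()

    probs = softmax(logits)
    best = int(np.argmax(probs))
    print(f"prediction: {classes[best]} ({probs[best]:.0%} certainty)")
    # There is no "cat" output and no "none of the above" option: the cat photo
    # is forced onto the most similar known class, just with low certainty.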

An AI system cannot recognize that it is being shown other animals, and it cannot say anything about them. It first has to be retrained. This is an absolutely central challenge, particularly for AI in security, because AI systems constantly have to relearn that a cat, a cow, a giraffe, and so on – that is, new threats in the network – exist at all, and how to respond to them.
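
In code terms, the new class only exists once the model has been refit on labeled examples of it. A minimal sketch using a toy scikit-learn classifier and invented two-dimensional "features":

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # The model initially knows only two dog breeds.
    X_dogs = np.array([[1.0, 1.0], [1.0, 2.0], [5.0, 5.0], [5.0, 6.0]])
    y_dogs = ["beagle", "beagle", "husky", "husky"]
    clf = LogisticRegression().fit(X_dogs, y_dogs)

    cat = np.array([[9.0, 0.5]])
    print(clf.predict(cat))  # forced onto a dog class: "cat" does not exist yet

    # Only after retraining with labeled cat examples does the class exist at all.
    X_all = np.vstack([X_dogs, [[9.0, 0.0], [9.0, 1.0]]])
    y_all = y_dogs + ["cat", "cat"]
    clf = LogisticRegression().fit(X_all, y_all)
    print(clf.predict(cat))  # now "cat"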

To "know" what is currently considered "normal" and is therefore expected behavior, AI systems need data that is constantly maintained. By tagging similar devices as functional groups, AI systems can always determine the behavior of the group or the average behavior of the group's members, and can detect deviations from this in individual devices – and in so doing detect deviations from "normal behavior" even if this changes over time.