S 4.431 Selecting and processing relevant information for logging

Initiation responsibility: IT Security Officer, Head of IT

Implementation responsibility: Administrator

Logged data must contain meaningful information. This applies regardless of whether the data is captured locally or centrally, or whether it is provided to an IT early-warning system. Which events are to be logged depends, amongst other things, on the protection requirements of the respective IT systems and must be coordinated and specified within the organisation in advance (see S 2.new Logging). In particular, the following events must be considered:

An IT early-warning system can extract messages from all of these events, qualify them, and prepare them in a clear form. For this purpose, the logged data is filtered, normalised, aggregated, and categorised in advance.
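The order of these processing steps can be pictured as a simple pipeline. The following Python sketch only illustrates the sequence in which the stages are applied; the stage functions and the message format are assumptions made for the example and are not prescribed by this safeguard.

    # Minimal pipeline sketch: each stage takes and returns a list of log records.
    # The concrete stage implementations are placeholders for illustration only.
    def run_pipeline(raw_messages, stages):
        records = raw_messages
        for stage in stages:
            records = stage(records)
        return records

    if __name__ == "__main__":
        stages = [lambda r: r,  # filter
                  lambda r: r,  # normalise
                  lambda r: r,  # aggregate
                  lambda r: r]  # categorise and prioritise
        print(run_pipeline(["msg1", "msg2"], stages))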

Filtering

Filtering the collected logged data is intended to sort out unnecessary log messages. This is necessary because the volume of logged data generated is too large for all of the information to be processed. The filter settings can usually be adjusted on the centralised logging server and/or the IT early-warning system. The settings must be adapted to the circumstances of the information system and should only be configured by well-trained administrators. It is important not to sort out too many or too few log events. Moreover, the filter settings should be checked and updated at regular intervals, for example when new servers are added to the centralised logging server and/or the IT early-warning system or when old servers are decommissioned.
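As an illustration, the following Python sketch discards log messages below a configurable severity threshold. The severity scale and the record format are assumptions chosen for the example, not requirements of this safeguard.

    # Sketch: drop messages below a configured minimum severity.
    # Severity scale and record format are illustrative assumptions.
    SEVERITY = {"debug": 0, "info": 1, "warning": 2, "error": 3, "critical": 4}

    def filter_messages(records, min_severity="warning"):
        """Keep only records whose severity is at or above the threshold."""
        threshold = SEVERITY[min_severity]
        return [r for r in records
                if SEVERITY.get(r.get("severity", "info"), 1) >= threshold]

    # Example: only the error message passes the default threshold.
    records = [{"severity": "info", "msg": "user logged in"},
               {"severity": "error", "msg": "authentication failure"}]
    print(filter_messages(records))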

Normalisation

In order to be able to process the data further and, for example, store it in a database, all incoming log messages must be converted into a uniform data format. This process is called normalisation. It is necessary because the messages differ from manufacturer to manufacturer; there are also considerable differences between the logged data of operating systems and that of applications.
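The following sketch converts two differing, hypothetical source formats, a syslog-like text line and a JSON application log, into one uniform record structure. The field names and input layouts are assumptions made for illustration.

    import json

    # Sketch: normalise two hypothetical source formats into a uniform record
    # with the fields timestamp, host, source and message.
    def normalise_syslog_line(line):
        # Assumed layout: "<timestamp> <host> <process>: <message>"
        timestamp, host, rest = line.split(" ", 2)
        process, message = rest.split(": ", 1)
        return {"timestamp": timestamp, "host": host,
                "source": process, "message": message}

    def normalise_json_app_log(raw):
        data = json.loads(raw)
        return {"timestamp": data["time"], "host": data["hostname"],
                "source": data["app"], "message": data["event"]}

    print(normalise_syslog_line("2024-05-01T12:00:00Z fw01 kernel: packet dropped"))
    print(normalise_json_app_log('{"time": "2024-05-01T12:00:01Z", "hostname": "app01", '
                                 '"app": "webshop", "event": "login failed"}'))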

Aggregation

In order to process the data further, it is aggregated. Here, log messages with identical content are combined into a single record. Identical log messages are often generated several times by the same system, so that the subsequent messages have only a low information value. For this reason, only the first log message is processed further. However, it is important to supplement this first log message with the number of redundant events that occurred in order to be able to determine how frequently the identical log messages appeared.
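A minimal sketch of this step could look as follows: the first occurrence of each identical message is kept and supplemented with a count of how often it was seen. The record format is an assumption made for the example.

    # Sketch: collapse identical messages into one record plus an occurrence count.
    def aggregate(records):
        aggregated = {}
        order = []
        for r in records:
            key = (r["host"], r["message"])
            if key not in aggregated:
                # Keep the first occurrence and start counting.
                aggregated[key] = dict(r, count=1)
                order.append(key)
            else:
                aggregated[key]["count"] += 1
        return [aggregated[k] for k in order]

    records = [{"host": "fw01", "message": "packet dropped"},
               {"host": "fw01", "message": "packet dropped"},
               {"host": "app01", "message": "login failed"}]
    print(aggregate(records))
    # -> the "packet dropped" record appears once, with count=2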

Categorisation and prioritisation

After the data has been filtered, normalised, and aggregated, it should be categorised and prioritised. In this way, the information value of the messages is increased. For example, messages can be prioritised with the help of zone information such as DMZ or high-security area and given preference during analysis. In addition, the messages can be categorised by system type, such as security gateway, operating system, or application.
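For illustration, the following sketch assigns a category based on the source system type and a priority based on the network zone. The zone names, categories, and priority values are assumptions chosen for the example.

    # Sketch: categorise by system type and prioritise by network zone.
    # Category and priority tables are illustrative assumptions.
    CATEGORY_BY_SOURCE = {"fw": "security gateway", "ids": "intrusion detection",
                          "os": "operating system", "app": "application"}
    PRIORITY_BY_ZONE = {"high-security area": 1, "DMZ": 2, "internal LAN": 3}

    def categorise(record):
        record["category"] = CATEGORY_BY_SOURCE.get(record.get("source_type"), "other")
        record["priority"] = PRIORITY_BY_ZONE.get(record.get("zone"), 4)
        return record

    print(categorise({"source_type": "fw", "zone": "DMZ", "message": "packet dropped"}))
    # -> category "security gateway", priority 2 (a lower number is analysed first)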

Correlation of the data

An essential requirement for the logging server and/or the IT early-warning system is the correlation of the log messages from the logged data sources, in the course of which different events are linked to one another. Within an information system, the individual security components such as security gateways, intrusion detection/prevention systems, and anti-virus gateways each have only a limited view corresponding to their respective function. Therefore, the corresponding logged data should be correlated. An example of correlated log entries would be linking security gateway and router logged data to the accounting information of a compromised system.
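A very simple form of correlation can be sketched as follows: events from different sources are linked if they refer to the same IP address within a short time window. The field names and the size of the time window are assumptions made for the example.

    from datetime import datetime, timedelta

    # Sketch: link events from different sources that refer to the same IP
    # address within a short time window (field names are assumptions).
    def correlate(events, window=timedelta(minutes=5)):
        correlated = []
        for i, first in enumerate(events):
            for second in events[i + 1:]:
                same_ip = first["ip"] == second["ip"]
                close_in_time = abs(first["time"] - second["time"]) <= window
                if same_ip and close_in_time and first["source"] != second["source"]:
                    correlated.append((first, second))
        return correlated

    events = [
        {"source": "security gateway", "ip": "203.0.113.7",
         "time": datetime(2024, 5, 1, 12, 0), "message": "connection blocked"},
        {"source": "router", "ip": "203.0.113.7",
         "time": datetime(2024, 5, 1, 12, 3), "message": "unusual traffic volume"},
    ]
    print(correlate(events))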

Correlation can be performed on different levels:

Review questions: