Now more than ever, companies' digital security is regularly put to the test by cyber attacks. When an incident occurs, a good logging infrastructure helps to obtain information about the state of systems and processes as quickly and in as much detail as possible, through automatic logging.
The extent of the damage after a successful cyber attack can be reduced by preventive measures, for example a good incident-readiness strategy and corresponding hardening. However, not all attacks can be prevented. A good logging strategy is therefore important so that you are not left in the dark after an attack.
Including cloud services is currently of particular interest. It should be verified whether the required security functions are available at all, for example only as part of a subscription or a higher-tier license plan. Logging is one of these security functions.
What happens during logging?
During logging, system and process messages as well as user activities are recorded automatically. These logs make it possible to reconstruct what happened in the past. They should therefore be as detailed as possible and be retained long enough. The more complex an IT landscape, the more important it is to have information quickly and in detail so that errors can be recorded and analyzed quickly.
A central logging solution is recommended here: it stores the logs of different systems (e.g. firewalls, servers, and cloud services) in one place. Such solutions can often also evaluate the data directly and generate corresponding alerts and reports when errors or traces of attacks are detected.
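As a rough illustration, the following Python sketch forwards application events to a hypothetical central syslog collector using the standard library's SysLogHandler. The hostname, port, and log format are placeholders for this example, not a recommendation for a specific product.

```python
import logging
from logging.handlers import SysLogHandler

# Forward application logs to a central collector via UDP syslog.
# "logcollector.example.com" is a placeholder; a real setup would typically
# use a TLS-protected transport and the collector of your SIEM/log platform.
handler = SysLogHandler(address=("logcollector.example.com", 514))
handler.setFormatter(logging.Formatter(
    "%(asctime)s app=%(name)s level=%(levelname)s msg=%(message)s"
))

logger = logging.getLogger("billing-service")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# Example events that end up in the central log store
logger.info("user=jdoe action=login result=success src_ip=203.0.113.10")
logger.warning("user=jdoe action=export_records count=5000")
```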
Who processed which personal data, and in what way? Which systems or services are involved in an event? This is especially relevant with regard to possible external access. If employees, for example, always log in to the corporate network at certain times and from certain countries, it is very noticeable when access is suddenly logged at other times and/or from other countries.
Looking at as many log files from different systems as possible is important in order to derive sensible measures from the analysis results.
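A minimal sketch of such an evaluation, assuming sign-in events have already been normalized into simple records and that a per-user baseline of usual countries and working hours exists. Both the record format and the baseline are illustrative assumptions, not part of any specific product.

```python
from datetime import datetime

# Toy sign-in records; in practice these would come from the central log store.
events = [
    {"user": "jdoe", "country": "DE", "time": "2024-03-04T08:12:00"},
    {"user": "jdoe", "country": "DE", "time": "2024-03-04T17:45:00"},
    {"user": "jdoe", "country": "RO", "time": "2024-03-05T03:10:00"},
]

# Hypothetical baseline: expected countries and working hours per user.
baseline = {"jdoe": {"countries": {"DE"}, "hours": range(7, 20)}}

for event in events:
    profile = baseline.get(event["user"])
    if profile is None:
        continue
    ts = datetime.fromisoformat(event["time"])
    odd_country = event["country"] not in profile["countries"]
    odd_hour = ts.hour not in profile["hours"]
    if odd_country or odd_hour:
        print(f"ALERT: unusual sign-in by {event['user']} "
              f"from {event['country']} at {ts.isoformat()}")
```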
Legal basis
According to Art. 32 I lit. c GDPR, log data are an important part of the technical and organizational measures (TOM) within the meaning of Art. 24 I GDPR, with which GDPR-compliant processing can be demonstrated. In the event of a physical or technical incident, these measures serve to restore the availability of and access to personal data.
In automated processing systems, the log data must cover the following operations (a minimal record structure is sketched after this list):
- Collection,
- Alteration,
- Query,
- Disclosure (including transmission),
- Combination and
- Deletion.
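As an illustration, a log record could capture these operations in a structure like the following sketch. The field names and the Operation enum are assumptions made for this example, not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Operation(Enum):
    COLLECTION = "collection"
    ALTERATION = "alteration"
    QUERY = "query"
    DISCLOSURE = "disclosure"   # including transmission
    COMBINATION = "combination"
    DELETION = "deletion"

@dataclass
class AuditEvent:
    """One log entry describing who did what to which personal data record."""
    actor: str        # user or service account performing the operation
    operation: Operation
    record_id: str    # identifier of the affected data set, not its content
    system: str       # source system, e.g. "crm" or "hr-portal"
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example: a disclosure (transmission) of a customer record to another system
event = AuditEvent(actor="svc-export", operation=Operation.DISCLOSURE,
                   record_id="customer/4711", system="crm")
print(event)
```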
The GDPR principles for the processing of personal data (Art. 5) also apply to the log data themselves:
- Lawfulness: The data may only be collected for specified, explicit, and legitimate purposes and must not be processed in a manner that is incompatible with those purposes.
- Data minimization: Data must not be collected "just in case", but only if the data are really relevant.
- Storage limitation: The data must be kept in a form that permits identification of the data subjects for no longer than is necessary for the purposes for which they are processed (see the retention sketch after this list).
- Integrity and confidentiality: The data must be protected against unauthorized access, unlawful processing, accidental loss, damage, and destruction. Deliberate manipulation of the data must also be prevented.
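A minimal sketch of how storage limitation and data minimization could be applied to log entries, assuming a 90-day retention period and HMAC-based pseudonymization of user IDs. Both choices are illustrative and must be aligned with your own policies.

```python
import hashlib
import hmac
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)          # assumed retention period; align with your policy
PSEUDONYM_KEY = b"replace-with-secret"  # placeholder key; keep it in a secret manager

def pseudonymize(user_id: str) -> str:
    """Keyed hash so entries stay correlatable without storing the clear name."""
    return hmac.new(PSEUDONYM_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

def enforce_policy(entries: list[dict]) -> list[dict]:
    cutoff = datetime.now(timezone.utc) - RETENTION
    kept = []
    for entry in entries:
        # Storage limitation: drop entries older than the retention period.
        if datetime.fromisoformat(entry["time"]) < cutoff:
            continue
        kept.append({**entry, "user": pseudonymize(entry["user"])})
    return kept

logs = [{"user": "jdoe", "time": "2024-03-04T08:12:00+00:00", "action": "query"}]
print(enforce_policy(logs))  # expired entries are removed, recent ones pseudonymized
```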
The log files may not be used to monitor employees' behavior or performance (§ 31 BDSG).
Cloud services
With software-as-a-service (SaaS) products, you usually depend on your cloud service provider. The log data in Office 365 (the so-called Unified Audit Logs), for example, are easy to understand and provide information about user activities.
However, this only works if the logs are actually activated, which is not always the case by default. With them, it can be identified which meetings took place and when, who was logged in at what time, when which emails were sent or received, and which data were uploaded, viewed, or modified.
In addition, the sign-in and audit logs are available in Azure Active Directory. In larger environments, these often have to be retrieved via the API, since the web portal does not always export all the data. The retention period is also limited to a maximum of 30 days, depending on the license, which is not sufficient for forensic analysis.
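As a sketch, sign-in events can be exported via the Microsoft Graph endpoint auditLogs/signIns and archived in your own log store before the tenant-side retention expires. Token acquisition (e.g. via a client-credentials flow) and error handling are omitted here, and the required permissions depend on your tenant setup.

```python
import requests

# Placeholder: obtain an app token with the AuditLog.Read.All permission first.
ACCESS_TOKEN = "<bearer token>"

url = "https://graph.microsoft.com/v1.0/auditLogs/signIns"
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
sign_ins = []

# Follow @odata.nextLink until all pages are fetched, then store the result
# in your own central log system.
while url:
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    data = resp.json()
    sign_ins.extend(data.get("value", []))
    url = data.get("@odata.nextLink")

print(f"Exported {len(sign_ins)} sign-in events")
```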
It is therefore better either to feed this data into the central logging system mentioned above or, better still, to use cloud-native logging services such as Azure Monitor and Log Analytics. These are already integrated and only need to be activated and paid for.
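A sketch of such an evaluation with the azure-monitor-query Python SDK, assuming Azure AD sign-in logs have been routed to a Log Analytics workspace (SigninLogs table). The workspace ID and the KQL query are placeholders, and the exact SDK surface may differ between versions.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

credential = DefaultAzureCredential()
client = LogsQueryClient(credential)

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

# KQL: summarize failed sign-ins per user and location over the last 7 days
QUERY = """
SigninLogs
| where ResultType != 0
| summarize failures = count() by UserPrincipalName, Location
| order by failures desc
"""

response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=7))
for table in response.tables:
    print(table.columns)
    for row in table.rows:
        print(row)
```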
Recommendation
In cooperation with the specialist departments and the information security officer (ISB), logging should be securely planned, set up, and operated, and all necessary guidelines and requirements should be defined. What should be logged where and in what form? Have the guidelines been communicated to everyone responsible?
The logging infrastructure, as well as your specifications and guidelines, should also be reviewed regularly and adapted if necessary. The log files themselves should be evaluated regularly, both automatically and in occasional spot checks, and the evaluation should be documented.
Tips
- Develop a centralized logging infrastructure
- Integrate cloud services sensibly
- Take the risk situation into account in security guidelines
- Review the logging regularly and on a spot-check basis