Artificial intelligence and personal data – these two areas overlap considerably. Self-learning systems in particular usually have access to large amounts of data, and their ability to make automated decisions increases the risk to the rights and freedoms of data subjects.
The new European General Data Protection Regulation (GDPR) contains various requirements relating to the use of artificial intelligence. Data protection is therefore always a central aspect when using AI systems.
Artificial intelligence simply explained
First of all, it is necessary to understand what exactly is meant by the term AI. There is no generally applicable or unambiguous definition. The German Federal Government understands artificial intelligence to mean the design of technical systems that work on problems independently, adapt to changing conditions, and are able to learn from new data. In general, it can be said that AI includes machine learning, in which human learning and thinking behavior is transferred to the computer. A distinction is made between weak and strong AI: weak AI addresses concrete application problems, while strong AI would have human-like general intelligence and learning ability.
The use of AI systems is particularly popular in medicine, where AI-based systems can, for example, automatically recognize complex tumor structures. Systems with artificial intelligence are also already being used to evaluate application documents; in this context, the AI system autonomously assesses all applicants.
Artificial intelligence examples
AI can now be found almost everywhere, in many different forms. Its use is increasing and will probably be essential in the future. AI is already being used in the following areas and functionalities:
- Face recognition in smartphones
- Social media algorithms
- Search engines
- Smart home devices
- Digital voice assistants
- Shuttle services
- Banking transactions
- Autonomous driving
- Medicine and care
- Apps that recognize plants, for example
- VR glasses
Future of artificial intelligence
The German federal government has published an official strategy on artificial intelligence. With this strategy, it wants to bring Germany to the forefront of the development of artificial intelligence. In this context, the federal government makes it clear that it sees it as its duty to promote the responsible use of AI for the common good: “We observe ethical and legal principles based on our free democratic basic order throughout the entire process of developing and applying artificial intelligence. We will take up the recommendations of the Data Ethics Commission when implementing the strategy.”
According to the federal government’s strategy, developers and users of AI technologies should be made aware of the ethical and legal limits of the use of artificial intelligence.
Artificial intelligence and privacy
The use of AI endangers the right to informational self-determination, part of the general right of personality under Art. 2(1) GG in conjunction with Art. 1(1) GG. This is currently reflected in many discussions about Google Home, Amazon Echo, and similar intelligent voice assistants, which sometimes record conversations and situations even when users do not intend or want this. That is particularly questionable under data protection law. For example, a data protection impact assessment, where required, can hardly be prepared at all, since the algorithm makes its own decisions and the user cannot fully trace them. There is a tension between AI and data protection, the competitiveness of companies, and the security of citizens and their data.
To effectively protect the fundamental rights and freedoms of data subjects, the requirements of the GDPR must be observed in the development and use of AI systems that process personal data. In particular, the principles of data processing under Art. 5 GDPR must be taken into account (lawfulness, processing in good faith, transparency, purpose limitation, data minimization, accuracy, and storage limitation). Under Art. 83(5)(a) GDPR, a violation of these principles can result in a fine of up to EUR 20 million or, in the case of a company, up to 4 percent of its total global annual turnover of the previous financial year, whichever amount is higher.
Through early technology design under Art. 25 GDPR (data protection by design and by default), in the form of technical and organizational measures, controllers must ensure the implementation of the principles for the processing of personal data from Art. 5 GDPR.
Transparency in Artificial Intelligence
Transparency with artificial intelligence (AI) usually refers to the ability to have a complete view of a system, i.e. all technical aspects are visible and traceable. Three levels of transparency can be distinguished in AI systems:
- Implementation: At this level, the way the model acts on the input data to produce a prediction is known. The technical principles of the model (e.g. workflow, condition set, etc.) and the associated parameters (e.g. coefficients, weights, thresholds, etc.) are known. This is the default level of transparency for most open-source models available on the Internet. Such systems are often referred to as white-box models, as opposed to black-box models where the model is largely unknown.
- Specifications: This refers to all the information that led to the implementation, including details about the specifications of the model (e.g. task, goals, context, etc.), the training data set and the training procedure (e.g. hyperparameters, cost function, etc.), the performance, as well as any elements that allow the implementation to be reproduced from scratch. Research papers usually provide parts of this transparency.
- Interpretability: This corresponds to understanding the mechanisms underlying the model (e.g. the logical principles of data processing, the reason for an output, etc.). It also involves demonstrating that the algorithm follows specifications aligned with human values (e.g. in terms of fairness). This level of transparency is generally not achieved in current artificial intelligence systems.
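To make the implementation level concrete, the following is a minimal Python sketch of a "white-box" model in the sense described above: every parameter (weights, threshold) and the exact computation are visible, so each decision can be traced to per-feature contributions. The class, feature values, and parameter choices are purely illustrative assumptions, not part of any real system.

```python
# Minimal sketch of "implementation transparency": a white-box linear
# classifier whose parameters (weights, threshold) are fully visible.
# All names and numbers here are hypothetical illustrations.

class WhiteBoxClassifier:
    """A linear scorer whose internals can be inspected and traced."""

    def __init__(self, weights, threshold):
        self.weights = weights      # known coefficients, one per feature
        self.threshold = threshold  # known decision threshold

    def score(self, features):
        # The exact computation is visible: a weighted sum of the inputs.
        return sum(w * x for w, x in zip(self.weights, features))

    def predict(self, features):
        return self.score(features) >= self.threshold

    def explain(self, features):
        # Per-feature contributions make each individual decision traceable.
        return {i: w * x
                for i, (w, x) in enumerate(zip(self.weights, features))}

model = WhiteBoxClassifier(weights=[0.5, -0.25], threshold=0.3)
print(model.predict([1.0, 0.4]))   # weighted sum 0.4 meets the threshold
print(model.explain([1.0, 0.4]))   # contribution of each input feature
```

A black-box model, by contrast, would expose only `predict`, with no access to the weights or to the per-feature breakdown.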
Data protection in machine learning
Crowdsourcing has been linked to machine learning at Apple for several years: in the Maps function, traffic jams are recognized from geolocation data, speed, and movement, and displayed to other users. In the area of data protection, such a procedure could be useful, for example for the detection of data breaches by AI or for risk analysis. However, compliance with the transparency requirement and the principles of purpose limitation and data minimization is problematic under data protection law. As described above, the AI makes its own decisions and is therefore largely beyond the user’s control. In addition, the data is usually used for more than the stated purpose. Under Art. 6(4) GDPR, the lawful further processing of this data then requires that the new purpose be compatible with the original purpose within the meaning of the standard. Accordingly, further processing by the AI is only lawful under the conditions of Art. 6(4) GDPR.
Machine learning models are based on a large amount of data, extracting statistical patterns to solve a specific problem. The dataset used for training can still be sensitive, either because it contains personal information (medical records, emails, geolocation, etc.) or because the content is restricted (intellectual property, strategic systems, etc.).
For personal data, AI systems must fully comply with the legal provisions of the General Data Protection Regulation, and appropriate technical and organizational measures should be taken. A data protection officer should assess whether these measures provide a suitable level of protection.
Applying anonymization or pseudonymization to this data is a safeguard recommended by the GDPR, although its feasibility depends strongly on the context of the application. Such a measure can, however, further increase the complexity of the systems used and may affect the explainability of the AI system.
More generally, building an AI system based on personal data requires that all actors involved in the machine learning pipeline, from collecting and processing the data to training, maintaining, and using the model, handle this data in a trustworthy manner.
The quality and correctness of the training data are paramount to ensuring that AI systems based on machine learning techniques work properly. Together with the machine learning algorithm responsible for creating the model, the training data is part of the AI system and thus part of the security perimeter to be protected. It is therefore crucial that datasets are consistently secured in terms of their confidentiality, integrity, and availability, as well as their conformity with applicable data protection frameworks.
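One simple building block for the integrity part of this requirement can be sketched as follows: record a SHA-256 digest of the training data when it is created, and re-check it before the data enters the training pipeline. The file content and workflow shown are assumptions for illustration only.

```python
# Illustrative sketch: protecting the integrity of a training dataset
# by recording and re-checking a SHA-256 digest before training.
# The CSV content and workflow are hypothetical examples.
import hashlib

def dataset_digest(data: bytes) -> str:
    """Compute a fingerprint of the dataset's exact byte content."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected_digest: str) -> bool:
    """Return True only if the data is byte-for-byte unchanged."""
    return dataset_digest(data) == expected_digest

# At dataset creation time, the digest is stored alongside the data.
training_data = b"age,income,label\n34,52000,1\n29,31000,0\n"
expected = dataset_digest(training_data)

# Before training, verify the data has not been tampered with.
print(verify(training_data, expected))         # unchanged data -> True
print(verify(training_data + b"x", expected))  # tampered data  -> False
```

This only covers integrity; confidentiality and availability would require additional measures such as access control and backups.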
Human Dignity, AI, and Privacy Responsibilities
The inviolability of human dignity (Art. 1(1) GG, Art. 1 of the EU Charter of Fundamental Rights) guarantees that the use of AI (in the context of state action) does not turn the person into an object. In addition, fully automated decisions or profiling by AI systems are only permitted within narrow limits. Art. 22 GDPR stipulates specifically for the use of AI that decisions with legal effect or similarly significant impairment may not be left to the machine alone. Even where Art. 22 GDPR does not apply, the general principles of data processing under Art. 5 GDPR must be taken into account.
When AI systems are deployed, accountability must be clearly defined and communicated. In this context, all necessary mechanisms must also be set up so that data processing is lawful, the rights of those affected are observed and the security and controllability of the AI systems are guaranteed.
In particular, compliance with technical and organizational measures under Article 32 GDPR and the principles of data processing under Article 5 GDPR must be observed by the responsible body.
No unconstitutional use of AI and avoidance of discrimination
It should also be noted that artificial intelligence may only be used for constitutionally legitimate purposes. In this context, the principle of purpose limitation under Article 5 (1) (b) GDPR must be taken into account. If the purpose is to be changed, the strict restrictions from Art. 6 Para. 4 GDPR apply. Thus, in the case of AI systems, the compatibility of the extended processing purposes with the original purpose of the collection must always be observed.
AI systems, above all learning systems, depend on the data collected. When the underlying databases are inadequate, these systems can produce results that turn out to be discriminatory.
From a data protection perspective, such discrimination could breach the principles of fair processing, the adequacy of processing, and the linkage of processing to legitimate purposes. Before an AI system can be used, a comprehensive risk assessment must therefore be carried out to determine whether discrimination may occur and thus whether the rights and freedoms of the data subjects are at risk.
Transparency requirement for data processing by AI and the principle of data minimization
According to Art. 5(1)(a) and Art. 12 et seq. GDPR, personal data must be processed in a manner that is transparent to the data subject. In this context, fulfilling the transparency obligations is of great importance: information about the processing must be easily accessible and understandable, and the data subjects must also be informed about the logic involved. As regards proof of compliance with the transparency and information obligations, the controller’s accountability under Art. 5(2) GDPR applies.
AI systems usually draw on large databases. The principle of data minimization must always be taken into account here (Art. 5(1)(c) GDPR): the processing of personal data must be limited to what is necessary. For example, it may be that data may only be processed in completely anonymized form, and that this is sufficient for the purpose.
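Data minimization can be operationalized very simply as a purpose-bound whitelist applied before any further processing, as in the following sketch. The field names and the allowed set are hypothetical assumptions, chosen only to illustrate the principle.

```python
# Minimal sketch of data minimization (Art. 5(1)(c) GDPR): only fields
# actually needed for the stated purpose survive before processing.
# Field names and the whitelist are hypothetical.

ALLOWED_FIELDS = {"age_band", "postal_region"}  # purpose-bound whitelist

def minimize(record: dict) -> dict:
    """Strip every attribute not required for the processing purpose."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {"name": "Jane Doe", "age_band": "30-39",
       "postal_region": "10", "email": "jane@example.org"}
print(minimize(raw))  # only age_band and postal_region are retained
```

The design choice here is deliberate: a whitelist fails safe, because any new attribute added upstream is dropped by default until its necessity has been justified.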
Technical-organizational measures & control of artificial intelligence
To ensure that data processing by AI systems complies with the law, technical and organizational measures under Articles 24, 25, and 32 GDPR must be observed. This includes, for example, the pseudonymization of personal data. Businesses, science, and data protection authorities must develop best-practice examples of technical and organizational measures for the use of AI so that specific standards for AI systems can be established from a data protection perspective.
The responsible data protection supervisory authorities control and monitor processing processes relevant to data protection law. This applies in particular to the processing of personal data by artificial intelligence.
Since technologies connected with artificial intelligence are constantly being developed, and data processing is generally becoming more intensive and complex as a result, the data protection supervisory authorities, science, politics, and the users of artificial intelligence must accompany the ongoing development of AI-based systems and keep them under data protection control. The Hambach Declaration of the independent German data protection supervisory authorities provides further information on the data protection-compliant use of artificial intelligence.
In summary, legislative changes will probably be needed in the future concerning the use of AI. AI can be helpful in several ways, for example in detecting hacker attacks faster. How the use of AI will develop is very difficult to predict, since much in this area is still unclear and will probably only be clarified over time and with the development of technical possibilities. Although AI is already widely used, it remains a topic for the future whose developments in the coming years should be monitored closely.