More and more smart services are becoming a reality. For example, Staples is using AI technology to automate the ordering process and customer service as part of its switch to conversational commerce (the use of social-media chat apps). IBM has developed Watson Beat, an AI tool for creating music. Artificial intelligence is also being applied in the legal services industry. For instance, IPGraphy is an AI-based service that assists in the process of trademark clearance and trademark monitoring. Artificial intelligence now touches nearly every aspect of human activity, including human resources management.
Proper application of artificial intelligence in HR management can deliver value to organisations. In particular, HR leaders can use AI-based solutions to assist with recruitment, appointment scheduling, and employee management. HR management services are expected to benefit from AI technology in three respects. First, AI can eliminate human bias and uninformed choices. Second, by automating the data gathering and assessment processes, it can increase the efficiency of candidate selection. Third, it can reduce HR departments’ workload and thus allow HR personnel to focus on more challenging matters.
Regardless of how an organisation benefits from AI, the intersection of this technology and individuals’ data raises certain legal questions. Those concerning data protection seem to be the most fundamental.
AI and data protection
Successful implementation of AI technologies is a result of multiple factors, the most important of which are the programming skills of their creators and the quality of the input data. Since big data is fuelling advances in AI, and one of AI’s core capabilities is making autonomous decisions based on large collections of information, it is essential for HR departments to focus on the intersection of personal data protection and AI. This is even more vital as the General Data Protection Regulation (GDPR) comes into effect on 25 May 2018.
Even though employers are entitled to process certain personal data of their employees, under the GDPR, employees as data subjects will have greater rights, including the right to greater transparency and the right to data portability. Also, when discussing the intersection of AI and employees’ data protection, we must consider the big data context. The scope of the data processed by AI might be broader than the statutory authorisation, and as a consequence, a data protection impact assessment should be conducted prior to implementing AI-based technologies.
One of the key guarantees provided to every data subject – including employees and candidates – under Article 22(1) of the GDPR is the “right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” Nevertheless, automated decision-making is not absolutely prohibited. There are three exceptions to the prohibition, where an automated decision is either (a) authorised by law, (b) necessary for entering into, or performance of, a contract between a data controller and a data subject, or (c) based on the individual’s explicit consent. However, in the latter two instances, it remains obligatory for the data controller to implement proper measures to safeguard the individual’s rights and freedoms.
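The structure of this rule – a general prohibition with three exceptions, two of which carry an extra safeguard requirement – can be sketched as a simple compliance gate. The following is a minimal illustrative sketch only, not a legal compliance tool; the function and enum names are hypothetical, invented for this example:

```python
from enum import Enum, auto

class LawfulBasis(Enum):
    """The three Article 22(2) exceptions under which a solely
    automated decision may be made (hypothetical naming)."""
    AUTHORISED_BY_LAW = auto()
    CONTRACT_NECESSITY = auto()
    EXPLICIT_CONSENT = auto()

def may_decide_automatically(basis, safeguards_in_place):
    """Gate a solely automated decision on one of the three exceptions.

    For contract necessity and explicit consent, the controller must
    additionally have suitable safeguards in place (human intervention,
    the right to express a view, the right to contest the decision).
    """
    if basis is None:
        # No exception applies: the automated decision is prohibited.
        return False
    if basis is LawfulBasis.AUTHORISED_BY_LAW:
        return True
    # Contract necessity or explicit consent: safeguards are mandatory.
    return safeguards_in_place
```

For example, an HR screening tool relying on a candidate’s explicit consent but lacking a human-review channel would fail this gate, whereas the same tool with safeguards in place would pass.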
The minimum threshold under Article 22(3) of the GDPR is to have human intervention on the part of the controller, and for data subjects to have an opportunity to express their point of view and contest the decision. Since AI logic does not necessarily reflect human logic and is based on complex algorithms, meeting these requirements might not always be possible. In particular, where an individual decides to contest an automated decision, the data controller should be able to provide the individual with a justification for the decision. Therefore, the developers of AI-based solutions that interact with personal data should think ahead and equip their technologies with the ability to trace back their reasoning. The algorithms should be not only traceable but also auditable. In other words, algorithms should be transparent so that the factors influencing algorithmic decisions can be identified by auditing techniques. This can be achieved, among other means, by a combination of interactivity and visualisation.
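One simple way to make a scoring algorithm traceable is to record, alongside each outcome, the individual factors that produced it, so the decision can later be explained to – and contested by – the data subject. The sketch below illustrates the idea under assumed conditions: a linear weighted score and hypothetical names such as `score_candidate` and `DecisionRecord`, none of which come from any real product:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """An audit-trail entry for one automated decision: the outcome
    plus each factor's contribution to the final score."""
    subject_id: str
    outcome: str
    factors: dict  # factor name -> contribution to the total score
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def score_candidate(subject_id, features, weights, threshold=0.5):
    """Score a candidate and record which factors drove the outcome."""
    contributions = {name: features[name] * weights[name] for name in weights}
    total = sum(contributions.values())
    outcome = "shortlist" if total >= threshold else "reject"
    return outcome, DecisionRecord(subject_id, outcome, contributions)

outcome, record = score_candidate(
    "cand-001",
    features={"years_experience": 0.8, "skills_match": 0.6},
    weights={"years_experience": 0.4, "skills_match": 0.6},
)
# Serialising the record preserves the reasoning for later audits
# or for responding to a contested decision.
print(json.dumps(asdict(record), indent=2))
```

Real AI systems rarely reduce to a transparent linear score, which is precisely why the article stresses that achieving this level of traceability for complex models is an open challenge.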
For a data controller, it is essential to comply with its information obligations. As AI systems grow more capable, it might not always be possible to determine which personal data have been processed, or how. Since there is a risk that not all the data are covered by the data subject’s consent, and thus processed lawfully, it seems reasonable for HR departments assisted by AI technologies to review the content of individuals’ consents.
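At its core, such a consent review is a set comparison: which categories of data does the AI pipeline actually process that the recorded consent does not cover? A minimal sketch, with hypothetical category names chosen purely for illustration:

```python
def uncovered_categories(processed, consented):
    """Return the data categories an AI pipeline actually processes
    that the individual's recorded consent does not cover."""
    return sorted(set(processed) - set(consented))

gaps = uncovered_categories(
    processed={"cv_text", "interview_video", "social_media_profile"},
    consented={"cv_text", "interview_video"},
)
# A non-empty result flags a gap that should trigger a consent
# review before processing continues.
```

In practice the hard part is the inventory itself – knowing what the pipeline processes – rather than the comparison, which is why the review the article recommends starts with mapping the data flows.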
AI is becoming widespread across the private and public sectors. It involves the analysis of big data and carries certain legal implications for data protection. Due to the volume of data and the manner in which it is generated, applying data protection principles – in particular the principles of transparency and accountability – can be demanding. Even though implementing privacy by design, auditability, or data portability might seem challenging, it is also important to perceive these features as an advantage to the organisation. Individuals are increasingly concerned about the security of their data, so implementing proper data protection measures can enhance an organisation’s attractiveness.