(A version of this article was published in TLNT magazine)
Think about the data you share at work, in the most personal sense. You share with your employer, and sometimes with potential employers, many aspects of your life: details about your professional path, personal status, health care, socioeconomic situation, and legal and geographical background. You also agree to share information about what you do at different times and places, whom you meet, what information you consume, and so on. Moreover, you leave digital footprints on the web, on social networks, and in various apps, and this data reveals a lot about you to employers. Have you ever considered how data might affect you at work? How does your employer actually use data about you, and how does technology enable it? What is an employer allowed to do with your data, and what crosses a red line in terms of ethics and regulations?
AI (Artificial Intelligence) and ML (Machine Learning) are two buzzwords that dominate the HR tech world today. We don't yet know whether this field is a bubble or will have a lasting influence on management practices. Nevertheless, the common opinion among professionals is that managers will make better, more informed decisions about the workforce by using predictive algorithms that, for example, match candidates to jobs or flag employees at "flight risk." There are plenty of discussions about this subject, but mostly from the organization's point of view. What I'd like to do now, for a change, is take the employee's perspective.
Should employees worry?
If you have tried to land a job lately, perhaps you had a video interview (e.g., with HireVue) or were asked to play some mobile games (e.g., from Knack). These technologies, which probably offer you a pleasant candidate experience, actually enable organizations to predict your performance in certain roles, essentially by first studying the reactions of high and low performers in those exact roles. As a candidate, you will probably consent to participate in those practices, even though you don't know exactly what data these machines collect about you or what predictive model they use behind the scenes.
I'm not saying that predictive models are bad. On the contrary, I believe that in general, a machine that fits the right person to the right job, and does so better than a human whose perceptions may be biased, is actually positive, not only for organizations but also for employees, since they may have a better chance to thrive in the right roles. However, anyone with some general knowledge of ML can point to the confusion matrix and demonstrate that algorithms are not perfect, or more precisely, just how imperfect they are.
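To make the confusion matrix concrete, here is a minimal sketch in plain Python. The data and numbers are purely illustrative (not from any real hiring system): labels of 1 mark actual or predicted high performers, and the counts show how a model with a decent-looking accuracy can still reject good candidates and flag weak ones.

```python
def confusion_matrix(actual, predicted):
    """Count true/false positives and negatives for binary labels (1 = high performer)."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    return tp, fp, fn, tn

# Hypothetical outcomes for eight candidates: 1 = high performer, 0 = not.
actual    = [1, 1, 1, 0, 0, 0, 1, 0]
predicted = [1, 0, 1, 0, 1, 0, 1, 0]

tp, fp, fn, tn = confusion_matrix(actual, predicted)
accuracy  = (tp + tn) / len(actual)  # 0.75 -- looks decent at first glance...
precision = tp / (tp + fp)           # 0.75 -- but 1 in 4 predicted "hires" was wrong,
recall    = tp / (tp + fn)           # 0.75 -- and 1 in 4 strong candidates was rejected.
```

The point is not these particular numbers but that a single headline metric hides the false positives and false negatives, which is exactly where the human cost of an imperfect hiring algorithm lives.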
Why are predictive algorithms not perfect? There are many technical and statistical reasons, but the one that concerns me, in this context, is the possibility that human biases affect seemingly unbiased machines. The promise of ML and AI was that the more information we feed these sophisticated computer algorithms, the better they perform. Unfortunately, when the input data reflects the history of an unequal workplace, we are, in effect, asking a robot to learn our own biases. Garbage in, garbage out, right?
Such unfortunate effects can easily occur in the workplace. For instance, if an analyst examines the people who were promoted in the organization over the last decade and uses their data to predict high performance, the result might be a model that excludes minorities from predictions of high performance, simply because minorities were rarely promoted in the past due to social bias or discrimination. This example may be extreme, but it illustrates how many subtler versions of the same effect can occur.
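The mechanism can be sketched in a few lines. The toy data below is entirely fabricated: two groups with identical skill distributions, but group "B" was historically promoted less often. A naive "model" that learns straight from those past decisions reproduces the gap, even though skill is the same.

```python
# Fabricated history: (group, skill_score, was_promoted).
# Groups "A" and "B" have identical skill scores, but equally skilled
# people in group "B" were promoted less often in the past.
history = [
    ("A", 9, 1), ("A", 8, 1), ("A", 7, 1), ("A", 4, 0),
    ("B", 9, 0), ("B", 8, 0), ("B", 7, 1), ("B", 4, 0),
]

def promotion_rate(group):
    """A naive 'model': the promotion rate per group, learned straight from history."""
    outcomes = [promoted for g, _, promoted in history if g == group]
    return sum(outcomes) / len(outcomes)

# Two equally skilled candidate pools get very different scores:
score_a = promotion_rate("A")  # 0.75
score_b = promotion_rate("B")  # 0.25
```

A real predictive model is far more sophisticated than a per-group average, but if group membership (or any proxy for it) correlates with the biased labels, the same distortion can leak into its predictions.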
Who will defend employees?
Defense (and self-defense) starts with awareness. Indeed, awareness of data protection and privacy is increasing and influencing society in general, particularly through regulation. Employee rights are being broadened in the context of workforce data, although not evenly in every corner of the world. In the EU, a new privacy regulation, the General Data Protection Regulation (GDPR), was recently published (and becomes enforceable on May 25th, 2018). It has serious implications for any employer that processes employees' and potential employees' data, whether that data concerns the work environment or internet behavior. Among many other issues, the GDPR gives employees additional rights that reinforce control over their personal data, e.g., extended rights of access and rights to be informed about how their data is used and transferred and how long it is stored. The new regulation is being analyzed closely by legal experts, and anyone who analyzes employee data will soon start consulting legal departments about activities that did not require consultation in the past. In Europe, a new organizational stakeholder is emerging, the Data Protection Officer (DPO), who will be involved in analytics projects.
However, in my opinion, compliance with the GDPR is only a starting point. It will certainly raise HR analytics teams' awareness of privacy issues. But although it aims to protect privacy, I believe it will also influence employees' behavior, and HR analytics practitioners will have to respond: when people start exercising their rights and requesting access to their data, People Analysts will need to be ready with comprehensive information about how that data is used. When employees start asking to correct or erase their data, employers will demand more transparency and security from HR software providers. Organizations will ensure that they process only the personal data necessary for the specific purpose they wish to accomplish, which will require long-term planning and more serious consideration. This will move the field of People Analytics forward. The implication for employees and candidates: transparency! But not only that…
I believe that eventually, even if it takes a few years, the People Analyst role will include more components of procurement. Analysts will do less programming on their own and become experts in HR tech and analytics solutions. For the sake of regulation and ethics, they will learn to ask vendors hard questions and be more critical about model accuracy and data privacy, thereby contributing not only to a data-driven organizational culture but also to a work environment that is safe with respect to employee data. Employees and candidates, for their part, will judge employers not only on Employee Experience but also on their ethics in data management, and when they feel secure, they will be more receptive and enthusiastic about participating and cooperating with AI and ML to shape their career paths.
Kevin Markham, “Simple guide to confusion matrix terminology,” dataschool.io
Stephen Buranyi, “Rise of the racist robots – how AI is learning all our worst impulses,” theguardian.com
Arnold Birkhoff, “9 Ways the GDPR Will Impact HR Data & Analytics,” analyticsinhr.com