The fourth industrial revolution disrupts organizations, and HR departments are no exception. A lot has been written about the role of HR leaders in successful digital transformation. However, the discussion about a human-centered digital transformation remains crucial. I was privileged to participate in an expert panel at the Hacking HR 2021 global conference and discuss this topic with a group of professionals that included representatives from the public and private sectors. My contribution to the discussion focused on privacy, security, ethical considerations, and responsibilities.
The digital transformation accelerates
The Covid-19 crisis pushed a significant portion of the workforce into remote work, and we witnessed an acceleration of the digital transformation of work processes, and particularly of workforce measurement. Organizations started to record and leverage new workforce data sources to enhance productivity by understanding employee behavior and sentiment.
The volume of data available to understand and predict employees’ behavior will continue to grow exponentially. We will have more opportunities to manage through data, including employee listening tools, monitoring tools for safety and wellbeing, biometric data that people willingly share, and of course performance and productivity measures, along with other emerging tools under the umbrella of People Analytics.
But with the evolving practices of People Analytics, new ethical concerns emerge. Notably, employees worry about privacy and security when their employers introduce surveillance software. They may also worry that their data will be used in unexpected ways, or even turned against them.
New ethical concerns emerge
The temptation to use people’s data against them is real, for instance, when their measures indicate low productivity. But employees reciprocate distrust. I would not be surprised to hear about employees who do not feel trusted and find creative ways to evade the surveillance software.
Moreover, psychological experiments reveal that surveillance solutions might reduce productivity rather than improve it. Spying on home-working employees may be a bad idea because people value autonomy, and any signal of distrust might reduce their intrinsic motivation to perform well.
The debate about remote workforce surveillance should not focus only on privacy and the blurred boundaries between work and non-work. Organizations need to tackle ethics and be transparent in order to build and maintain employees’ trust in the use of their data. Business leaders must ensure that there is no conflict between the employer’s and employees’ interests.
People will go beyond exercising their rights
But someday, people will go beyond exercising their rights and request access to their data, not only to ensure that their employer processes only the personal data necessary for specific purposes, but also to leverage that data for their own wellbeing and career development.
Employees’ expectations will change. People will rate employers not only on employee experience but also on the ethical use of their data. People Analytics leaders will have to fit the right tools both to business questions and to values and culture.
For a few years now, I have been saying that eventually, People Analytics leaders will not be in charge of programming in projects. Instead, they will be in charge of procurement of HR-tech and analytics solutions. For the sake of regulations and ethics, they will learn to ask vendors hard questions and be more critical about model accuracy and data privacy. They will contribute not only to the culture of a data-driven organization but to a work environment where employee data is safe.
AI ethics: skills required
AI ethics is a new skill required in the future of work. HR professionals should educate themselves first. Unfortunately, most employees and candidates still lag in understanding the consequences of the increased use of data. Organizations, mainly through learning functions within HR departments, have a lot to do to educate the workforce to be informed participants in those practices.
Some organizations offer their personnel educational opportunities in the domain of ethics. However, most employees and managers lack a basic understanding of workforce AI tools. Unlike general ethical issues, which surface when policies and values conflict, AI ethical issues involving bias are less noticeable.
Learning about workforce AI ethics is still rare within organizations. Nevertheless, I decided to expand the discussion about AI ethics in the learning programs I offer to HR professionals and executives. 25% of my training programs are dedicated entirely to practices of procurement and ethics in People Analytics. Hopefully, my endeavor will result in know-how: the ability to interview AI solutions, just as organizations know how to interview their candidates and employees.
Balance stakeholders’ interests
Since decisions to deploy AI and ML are often made in departments other than HR, HR leaders must have a voice in ensuring ethical use of AI-generated talent data to prevent potential harm. HR must establish new practices in collaboration with the legal team to ensure the algorithms’ results are transparent, explainable, and bias-free.
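One concrete way an HR team can start interrogating an algorithm’s results is an adverse-impact check on selection rates across groups, for example against the “four-fifths rule” often cited in US employment-selection guidance. Here is a minimal sketch in Python; the group names and counts are purely illustrative, not real data:

```python
# Hypothetical adverse-impact check on a hiring algorithm's decisions.
# The "four-fifths rule": a group's selection rate below 80% of the
# most-favored group's rate is a signal worth reviewing, not a verdict.

def selection_rate(selected, applicants):
    """Fraction of applicants the algorithm selected."""
    return selected / applicants

def adverse_impact_ratio(rate_group, rate_reference):
    """Ratio of a group's selection rate to the most-favored group's rate."""
    return rate_group / rate_reference

# Illustrative counts: applicants and "advance to interview" decisions.
groups = {
    "group_a": {"applicants": 200, "selected": 60},  # rate 0.30
    "group_b": {"applicants": 150, "selected": 27},  # rate 0.18
}

rates = {name: selection_rate(g["selected"], g["applicants"])
         for name, g in groups.items()}
reference = max(rates.values())

for name, rate in rates.items():
    ratio = adverse_impact_ratio(rate, reference)
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths threshold
    print(f"{name}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```

A check like this does not make a model bias-free, but it gives HR and legal teams a shared, explainable number to bring to the vendor conversation.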
They must also start considering the balance between stakeholders in the organization, asking how technologies serve both employers and employees, beyond the obvious question of which technologies to use.
We may experience conflicts between business policies and our values. Fortunately, organizations have the means to bring those kinds of disputes into a discussion, for example, in specialized committees. These efforts should be expanded to include workforce AI ethics.
Responsibility for the ethical use of AI is still perceived as belonging to the vendor side. Eventually, this state of affairs will change, thanks to employee expectations, new roles and knowledge within HR, and procurement standards.