In Part 1 of this series, I discussed how ethics reshape People Analytics practices and what happens when People Analytics becomes workforce surveillance in times of the Covid-19 pandemic. In this part, I ask what happens when caring about employees becomes an intrusion into their personal lives, and I explore the future of People Analytics. This series is a follow-up inspired by Brainfood Live, in which I was honored to participate recently. Some quotes and stories in the series are based on my monthly review of workforce AI ethics resources.
What happens when caring about employees becomes an intrusion into personal lives?
As workforce data sources diversify to include indicators of attitudes, behavior, movements, and physical responses, drawing on many technologies, e.g., analyzing what people say or write or how they interact with digital tools, questions arise about the boundaries between work and non-work. Would anyone volunteer to offer their DNA in a recruitment process? As creepy as it sounds, organizations have started to use biometric data to understand work productivity. Even when participation is voluntary, the very nature of employer expectations raises questions, certainly in times of crisis when everyone strives to keep their jobs. For that reason, I believe we are about to see a new kind of social activity, for example, in unions. Here are two stories to demonstrate this.
One company harnessed AI and fitness-tracking wearables to understand how work and external stressors affect employees’ state of mind. During the COVID-19 crisis, companies promoted healthy working habits to ensure employees received the support they needed while working from home. What can a company offer beyond catch-ups on Zoom? One controversial approach was to combine machine learning with wearable devices to understand how lifestyle habits and external factors affect employees. Employees volunteered to wear fitness trackers that collect biometric data, which was connected to cognitive tests to help them manage stress better. Factors such as sleep, exercise, and workload influence employee performance, and balancing work and home life benefits mental health and wellbeing.
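To make the story concrete, here is a minimal sketch of the kind of analysis such a program implies: relating a wearable signal (nightly sleep hours) to a cognitive-test score. All numbers and variable names are invented for illustration; the company’s actual program, features, and models are not public.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented data: per-employee averages from a wearable and a cognitive test.
sleep_hours = [6.0, 7.5, 5.0, 8.0, 6.5]
test_scores = [62, 78, 55, 85, 70]

r = pearson(sleep_hours, test_scores)
print(f"sleep vs. cognitive score: r = {r:.2f}")
```

Even this toy example shows why the approach is sensitive: the analysis only works if highly personal data (sleep, heart rate) is linked to individual performance measures.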
Understanding performance and wellness is clearly in the interest of both employees and employers. However, it opens a discussion about the boundaries of organizational monitoring. Is it OK to collect employee biometric measures, e.g., pulse rate and sleeping patterns, and combine them with cognitive tests and deeper personality traits in the organizational arena? If so, how far is it OK to go with genetic information? And what if the employer also offers medical insurance as an employee benefit? Tracking mental and physical responses to understand work may be valuable. Still, employers can provide education and tools without being directly involved in data collection and maintenance. Even with voluntary participation, there is always self-selection bias among employees, so the beneficial results are not equally distributed.
Most employers track their workers in one way or another. The research firm Gartner reports that half of companies were already using “non-traditional” listening techniques, such as email scraping and workspace tracking, in 2018, and estimates the figure has risen to around 80% by now. Should employees worry? Should they act to protect themselves? One app, WeClock, enables employees to track their own data and share it with unions, so that their entire online existence doesn’t become their employers’ property. Data from employee surveillance used to boost productivity strengthens the power employers hold over employees, and regulation of individual data rights does not yet offer a sufficient remedy. There is a vast gap between what companies know about employees and what employees know about themselves. Digitization doesn’t mean that only employers should have control of and access to employee data. The app lets employees track, and share with their unions, things like how far they must travel to work, whether they take their breaks, and how long they work out of hours. It is an interesting attempt to provide a source of aggregate data about critical issues affecting employee wellbeing.
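The “how long they work out of hours” measurement can be sketched in a few lines. This is a hypothetical illustration in the spirit of self-tracking tools like WeClock, not its actual implementation: the event format and the 9:00–17:00 “core hours” are assumptions.

```python
from datetime import datetime

CORE_START, CORE_END = 9, 17  # assumed contractual working hours

def out_of_hours_minutes(events):
    """Count minutes of logged work activity outside core hours.

    `events` is a list of (start, end) datetime pairs, each assumed
    to fall within a single day.
    """
    total = 0
    for start, end in events:
        minutes = int((end - start).total_seconds() // 60)
        for m in range(minutes):
            # Hour of day for each elapsed minute of the event.
            hour = (start.hour * 60 + start.minute + m) // 60
            if hour < CORE_START or hour >= CORE_END:
                total += 1
    return total

events = [
    (datetime(2020, 11, 2, 8, 30), datetime(2020, 11, 2, 9, 0)),   # early start
    (datetime(2020, 11, 2, 17, 0), datetime(2020, 11, 2, 18, 15)), # evening work
]
print(out_of_hours_minutes(events))  # 30 + 75 = 105
```

The point is less the arithmetic than who runs it: here the log stays on the employee’s side, and only aggregates need to be shared with a union.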
What is the future of People Analytics? What can we do about it today?
Businesses must obey regulations, and managers must comply with organizational policies. But compliance doesn’t guarantee that we are always doing the right thing; we may experience conflicts between business policies and our own values. Fortunately, organizations have means to bring such conflicts into discussion, e.g., in specialized committees, and some offer their personnel educational opportunities in ethics. However, most managers lack a basic understanding of workforce AI tools. Unlike the general ethical issues that arise when policies and values conflict, AI ethical issues involving bias are less noticeable, and responsibility for the ethical use of AI is still perceived as belonging to the vendor. Eventually, this state of affairs will change, thanks to employee expectations, new roles within HR, and procurement standards.
Employee expectations will change. People will rate employers not only on employee experience but also on the ethical use of data. Therefore, People Analytics leaders will fit the right tools to their organization’s business questions according to its values and culture. AI ethics is a new skill set, and HR professionals should educate themselves first. When people feel secure, they will be receptive and even enthusiastic about cooperating with AI and data usage that influences their career path and work. Unfortunately, most employees and candidates still lag in understanding the consequences of the increased use of data. Organizations, mainly the learning functions within HR departments, have much to do to educate the workforce to become informed participants in these practices.
New roles will emerge within HR departments. As HR adopts new technologies that generate unprecedented amounts of data about employees and candidates, that data must be carefully assessed, used, and protected. Furthermore, since decisions to deploy AI and ML are often made outside HR, HR leaders must have a voice in ensuring the ethical use of AI-generated talent data to prevent potential harm. HR will establish new practices in collaboration with legal teams to ensure that algorithms’ results are transparent, explainable, and bias-free. Beyond the obvious discussion of which technologies to use, they will start weighing the balance between stakeholders by asking how those technologies serve both employers and employees.
A recent survey revealed a shift in the HR sector’s mindset and creatively described 21 HR jobs of the future. It depicts hypothetical future HR roles responsible for crucial issues such as individual and organizational resilience, organizational trust and safety, creativity and innovation, data literacy, and human-machine partnerships. As questions are raised about the potential for bias, inaccuracy, and lack of transparency in workforce AI solutions, more senior HR leaders understand the need to systematically ensure fairness, explainability, and accountability.
Procurement standards will change. Research institutions have already started to create frameworks that support the responsible development, deployment, and operation of machine learning systems. Volunteer domain experts have articulated “Responsible Machine Learning Principles” to guide technologists, setting templates that empower the practitioners who oversee procurement to raise the bar for AI safety, quality, and performance. Ethical frameworks are still hard to implement because few technical personnel can offer high-level guidance, but in the future we will start to see AI ethics principles reflected in organizations’ metrics, e.g., for fairness and privacy. I believe regulation will follow eventually. The European Union has decided to apply stricter rules to cyber-surveillance technologies, such as facial recognition and spyware. Maybe it is only a step toward transparency, or maybe it implies a future impact on organizations’ practices.