The global COVID-19 crisis pushed a significant portion of the workforce into remote work. According to Gartner, 88% of organizations worldwide made it mandatory, or encouraged, for their employees to work from home after COVID-19 was declared a pandemic. As a result, we witnessed an acceleration of the digital transformation of work processes and measurement. Many organizations started to record and leverage new workforce data sources to enhance productivity by understanding employee behavior and sentiment. However, with the evolving practices of People Analytics, new ethical concerns emerged.
I was honored to be a guest on Brainfood Live and discuss those concerns with my respected colleagues Andrew Marritt, CEO at Organization View, Adam Gordon, CEO at Candidate.ID, and Hung Lee, the show's curator and CEO at Workshape.io. Inspired by our conversation, I share some ideas about this topic in the next two blog posts. In Part 1, I discuss how ethics changes People Analytics practices and what happens when People Analytics becomes workforce surveillance. In Part 2, I ask what happens when caring about employees becomes an intrusion into their personal lives, and explore the future of People Analytics. Some of the stories I bring in these two blog posts are based on my monthly review of workforce AI ethics resources, which I offered for several months while exploring this topic.
Ethics changes People Analytics practices, but slowly.
For a few years now, I have been saying that People Analytics leaders won't be in charge of programming; instead, they will be in charge of procuring HR-tech and analytics solutions. Driven by regulation and ethics, they will learn to ask vendors hard questions and to be more critical about model accuracy and data privacy. They will contribute to a data-driven organizational culture and a work environment that is safe with regard to employee data.
People Analytics roles are changing as awareness of data ethics increases, i.e., knowing what is good or bad and practicing the role with a sense of moral obligation. There is a lot that we can do with data; however, it is not necessarily what we should do. Compliance with the GDPR and other regulations was only a starting point. It inevitably made People Analysts aware of privacy issues. But someday, people will start exercising their rights: requesting access to their data, asking to correct or erase it, and ensuring that their employer processes only the personal data necessary for specific purposes.
That kind of behavior is still rarely observed within the workforce. Nevertheless, I decided to expand the discussion of ethics in the introductory course I offer to HR professionals, called The People Analytics Journey. A quarter of the training program is dedicated to procurement and ethics practices in People Analytics.
My takeaway from training HR leaders was that their knowledge gap was too wide. I'm an applied researcher with a practical ML background, so naturally, I understand the context and terminology of AI. But the typical HR brain (and most managers' brains, to be fair) is wired for the descriptive and inferential statistics we all learned at some point in the past. Machine learning is entirely different, and a basic review is insufficient for understanding it well enough to deal with potential ethics risks. Yes, I wrote some guides and tried to explain themes that I think everyone should understand, e.g., What is AI, and what isn't it? How accurate is AI? Why is AI prone to bias? How do people react to AI? And how do legal frameworks deal with AI? However, none of them offers a systematic approach and a practical methodology for dealing with this evolving field.
And so, I decided to continue the journey by searching for, and hopefully articulating, such a solution. I want to help organizations evaluate AI with respect to ethics, or metaphorically, to help them learn how to interview AI, just as they know how to interview their candidates and employees. To do so, I started the comprehensive resource list mentioned above. I decided to include four categories: ethics in workforce strategic thinking, ethics in workforce AI practices, ethics in product reviews, and ethics in a social context. I hoped that such categorization would facilitate learning in the field.
What happens when People Analytics becomes surveillance?
The title of the Brainfood Live show implied some criticism of People Analytics solutions. Indeed, the volume of data available for understanding and predicting employees' behavior will continue to grow exponentially, enabling more opportunities for managing through data. The systematic attempt of People Analytics to make organizations more evidence-based will continue, using technologies such as employee listening tools, safety and wellbeing monitoring, biometric data that people willingly share, and, of course, performance and productivity measures.
During the pandemic, thousands of companies started to buy surveillance software that takes webcam pictures of their employees and monitors their screenshots, login times, and keystrokes. The dystopian descriptions go on and on. Some remote employees are photographed, along with their desktop screenshots, every few minutes. Others are tracked while browsing the web, making online calls, posting on social media, and sending private messages. As creepy as it may sound, phones, sensors, wearables, and IoT devices can detect and record our movements. Not surprisingly, workers are concerned about privacy and security. When such tools become mandatory, employees may also worry that their data will be used in unexpected ways or even turned against them.
The temptation to use people's data against them is real, for instance, when their metrics indicate low productivity. But employees reciprocate distrust. I would not be surprised to hear about employees who don't feel trusted and find creative ways to evade the surveillance software. Organizations need to tackle ethics and be transparent in order to build and maintain employee trust in the use of their data. Business leaders must ensure that there is no conflict between employer and employee interests. Therefore, HR departments must lead the conversation that addresses employee trust, corporate responsibilities, and the ethical implications of new technology.
Surveillance solutions may be perceived as giving employees incentives to maintain their productivity. However, psychological experiments reveal that they can have the opposite effect. Spying on home-working employees may be a bad idea, contrary to what economic theory would predict. People's motives are complex: alongside material payoffs, people value autonomy and reward employers who trust them. The debate about remote workforce surveillance should not focus only on privacy and the blurred boundaries between work and non-work, as these perspectives are not comprehensive enough to understand the employment relationship. Surveillance technologies may be the wrong solution for boosting productivity because they signal distrust and reduce the intrinsic motivation to perform well.
However, if a surveillance application is installed in your organization, there are things you can do to track your employees ethically, starting with these simple steps: accept that remote work is here to stay; engage the workforce to reach an agreement on which business activities require monitoring, and ensure that the benefits of doing so are understood; introduce sufficient safeguards to prevent abuse; be aware that discrimination can occur despite the precautions put in place; and rebuild the trust levels that existed in office settings. Another recommendation is to set goals and communicate expected outcomes while offering employees greater autonomy and collaboration tools.