It’s not far-fetched to imagine a company using recruiting technology to help it find potential candidates for specific jobs. Nor is it science fiction to imagine that the technology is based on machine learning, which works by identifying and repeating patterns. Could the software therefore “learn” that most candidates hired come from a specific school or town? Does this encourage bias, or even profiling?
The potential for bias is certainly there, but does it have greater influence than the biases inherent in people-driven decisions?
First, it is important to recognize that human judgment as it is used in recruiting today is flawed. Even the most careful recruiter is drawn to certain intangible qualities that are hard to name. And even setting aside the personal experience and bias of human recruiters, the battery of tests and selection methods available to HR teams today cannot be considered fact-based indicators of performance. Because so much of what enterprises mean when they talk about performance is hard to explain, they rely on two categories of measures to gauge how someone might contribute to the workplace:
- Performance indicators, which include sales, productivity and satisfaction metrics, as well as attrition and promotion rates
- Predictors, which include background/experience, test scores and interview performance
Advanced analytics can put these measures on firmer footing. A rigorous approach (sketched in code after this list) would:
- Identify and compare the characteristics of the best and worst outcomes, testing for statistical significance to focus on what is meaningful
- Segment results by unit or group (industry, field, business, geography, etc.) and by job, so that credentials and experience are weighed in their industry-specific and geographic context
- Ensure relevance and root out bias, using techniques such as regression analysis to identify and isolate particular characteristics or outside influences
- Incorporate a look-back and adjustment based on hire, attrition and promotion outcomes to bring the analytics full circle.
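To make the loop concrete, here is a minimal sketch in Python of what those steps might look like. The dataset is synthetic, and every column name (`school`, `years_experience`, `test_score`, and so on) is hypothetical; this illustrates the general technique, not any particular vendor’s methodology.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical hires dataset; in practice this would come from the HRIS.
rng = np.random.default_rng(0)
n = 500
hires = pd.DataFrame({
    "school": rng.choice(["StateU", "TechU", "CityCollege"], n),
    "years_experience": rng.integers(0, 15, n),
    "test_score": rng.normal(70, 10, n),
    "industry": rng.choice(["finance", "retail"], n),
    "region": rng.choice(["NA", "EU"], n),
    "job_family": rng.choice(["sales", "engineering"], n),
})
hires["performance_score"] = (
    2.0 * hires["years_experience"] + 0.5 * hires["test_score"]
    + rng.normal(0, 10, n)
)
hires["left_within_year"] = (rng.random(n) < 0.2).astype(int)

# 1. Compare characteristics of the best and worst outcomes, testing
#    for statistical significance to focus on what is meaningful.
top = hires[hires["performance_score"] >= hires["performance_score"].quantile(0.8)]
bottom = hires[hires["performance_score"] <= hires["performance_score"].quantile(0.2)]
t_stat, p_value = stats.ttest_ind(
    top["years_experience"], bottom["years_experience"], equal_var=False
)
print(f"Experience vs. performance: p = {p_value:.4f}")

# 2. Segment by unit/group and job so credentials are weighed in their
#    industry-specific and geographic context.
segment_means = hires.groupby(["industry", "region", "job_family"])[
    "performance_score"
].mean()
print(segment_means.head())

# 3. Regression to isolate characteristics: does 'school' still predict
#    attrition once job-relevant predictors are controlled for?
model = smf.logit(
    "left_within_year ~ C(school) + years_experience + test_score",
    data=hires,
).fit(disp=False)
print(model.pvalues.filter(like="school"))

# 4. Look back each cycle: re-fit on fresh hire, attrition and
#    promotion outcomes so the model adjusts rather than ossifies.
```

The point of the regression step is that a credential which still predicts outcomes after job-relevant factors are controlled for deserves scrutiny as a possible proxy for bias, not automatic use as a screening rule.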
As HR departments become increasingly reliant on advanced technologies and the numbers they produce, they will also need new skillsets to deploy and use them: a hard-to-find combination that bridges technology and ethics and can oversee the monitoring, auditing and enforcement of such automation.
Because machine learning and artificial intelligence programs apply their logic consistently, they may someday be able to identify and flag bias when and if it shows up, essentially overturning human decision-making. But when, and under what circumstances, should human judgment overturn an automated result? If the automation recommends hiring a group of recruits who then promptly leave the company, how will humans manage the fallout? And how do data privacy concerns affect the use of personally identifiable information (PII) in recruiting tools built on machine learning?
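As one concrete example of what such a flag might look like, the U.S. EEOC’s “four-fifths rule” for adverse impact can be automated: if any group’s selection rate falls below 80 percent of the highest group’s rate, the screening step is flagged for human review. The sketch below uses hypothetical group labels and counts.

```python
# A minimal sketch of an automated adverse-impact flag based on the
# EEOC four-fifths rule. Group labels and counts are hypothetical.
def adverse_impact_flags(selected: dict, applied: dict, threshold: float = 0.8):
    """Return a per-group flag: True if that group's selection rate is
    below `threshold` times the highest group's selection rate."""
    rates = {g: selected[g] / applied[g] for g in applied}
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

flags = adverse_impact_flags(
    selected={"group_a": 45, "group_b": 20},
    applied={"group_a": 100, "group_b": 90},
)
print(flags)  # {'group_a': False, 'group_b': True} -> escalate group_b
```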
We may be only at the beginning of the debate about tools that thrive on ever more data points to make what have historically been “gut” decisions. It would be wise to think through the implications of advanced analytics for recruiting and people management now. Contact us to discuss how ISG can help you.