
AI in Recruitment: Is it ethically viable?

A brief history

May 11th, 1997 marked one of the great pivot points in the advancement of AI. The victory of IBM’s Deep Blue over the reigning world chess champion Garry Kasparov was a spectacle that many believed impossible. Kasparov was the undisputed champion, having first held the title at the age of 22; Deep Blue was a fancy new computer.

Until that point there had been very few public demonstrations of machine intelligence; it was mostly confined to the laboratory. So when Deep Blue won, the capabilities of AI came as a proverbial shock to the world.

Since 1997 AI has advanced considerably; it is now applied in a myriad of industries, from recruitment and robotics to surveillance and manufacturing. It has become a ubiquitous commodity around the globe. Industry giants have begun to integrate AI and complex algorithms into their hiring procedures, and these intricate systems have streamlined industries so thoroughly that the question, “Can AI help my business?”, has become almost rhetorical.

We are now at a point in civilisation where we have created a medium to make our lives easier. AI has gradually made itself the cornerstone of this temple of ease and, in this sense, applied AI has become a precondition of modernity.

As a result, it has ushered in an ethical dilemma that remains at the forefront of many philosophers’, software engineers’, politicians’, and, of course, entrepreneurs’ minds to this day: how do we ensure an ethical standard?

Liminal anxiety & fear of the unknown: the potential of AI, good vs bad

This year I read a fascinating article by the anthropologist Beth Singler. She found that in non-academic circles, discourse about AI has a repetitive tendency to frame AI, wholly or partially, as a stepping stone to a ‘war against the machines’: essentially a reiteration of Terminator, though not necessarily with the robotic, terrestrial foot soldiers and time travel.

Singler contends that the fear of AI is not necessarily about a dystopian future but rather that its liminality (its position between the present and the future) provokes anxiety. I interpret this as a fear of the unknown: a fear that AI has the potential to do terrible things.

This provides a platform to begin a discussion of AI in recruitment. Before delving into ethical potentialities, there needs to be an understanding of why ethics is needed here. AI is a liminal phenomenon with significant potential for harm, meaning that issuing ethical standards is a fundamental requirement. But there is equally as much potential for good if it is channelled correctly.

If the world were full of autonomous vehicles there would be fewer crashes, not more; human error is the culprit in the majority of road accidents, and it is frequently implicated even when autonomous systems fail. There is a common bemoaning of AI’s implementation throughout the world for fear of its dangers, and rightfully so; but once it becomes a din of complaints, room must be made for an objective conversation.

Consequently, AI in recruitment is not something to be shrugged off as an impertinence, but neither should it be accepted without contention. A dialogue needs to take place, and that dialogue manifests itself through an ethical lens. This is evident in an exploration of artificial bias.

Recruitment and Artificial Bias

The impetus for integrating AI into recruitment is to mitigate bias and ensure that the best person gets the job. Unfortunately, this is not always the outcome: Artificial Bias has arisen from these systems and has proven to be a prevalent menace.

Recent scandals involving facial recognition software have helped to shed light on this. Discrimination against individuals because of protected characteristics (race, gender, and so on) can be seen in many systems; HireVue’s face-scanning technology, for instance. Such instances underscore the importance of data ethics to the future of our society.

While it would be splendid to be able to end Artificial Bias simply by inputting a set of algorithmic sequences, it is not that simple. Three main problems illustrate this: (1) lack of certainty in ethical principles; (2) continued advancement; (3) displacement of bias from human to machine.

A snag with the GDPR and other ethical principles or frameworks is that they define the ethical standard poorly. The Data Protection Act 2018, for example, requires that the data controller adhere to the data protection principles laid out by the government. A notable principle is to ensure fairness and, though this seems rather rudimentary, it is far more complex.

Unfortunately, fairness is not a concrete term; it is contorted to fit its social context. Consider the distinction between an Anti-Classification model of identifying bias (the model may not use protected characteristics at all) and an Outcome Error Parity model (protected groups must receive the same proportion of positive or negative errors as unprotected groups). These definitions are incompatible, so an ethical principle that merely mandates doing what is ‘fair’ is insufficient. Regrettably, this vagueness is not unfamiliar to other principles and frameworks.
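To make the incompatibility concrete, here is a minimal, hypothetical sketch of the two definitions. The function names, candidate fields, and prediction data are all invented for illustration; no real hiring system or vendor is being modelled.

```python
# Two incompatible notions of "fairness" applied to hiring predictions.
# All names and data below are invented for demonstration purposes.

def anti_classification(candidate_features):
    """Anti-Classification: fairness is defined by the *input*.
    The model simply never sees protected characteristics."""
    PROTECTED = {"race", "gender"}
    return {k: v for k, v in candidate_features.items() if k not in PROTECTED}

def outcome_error_parity(predictions):
    """Outcome Error Parity: fairness is defined by the *output*.
    Compare error rates between protected and unprotected groups."""
    def error_rate(group):
        rows = [p for p in predictions if p["group"] == group]
        errors = sum(1 for p in rows if p["predicted"] != p["actual"])
        return errors / len(rows)
    return error_rate("protected"), error_rate("unprotected")

# A system can satisfy one definition while violating the other:
candidate = {"experience": 5, "test_score": 82, "gender": "F", "race": "X"}
print(anti_classification(candidate))   # protected fields stripped from input

predictions = [
    {"group": "protected",   "predicted": 1, "actual": 0},  # error
    {"group": "protected",   "predicted": 1, "actual": 1},
    {"group": "unprotected", "predicted": 1, "actual": 1},
    {"group": "unprotected", "predicted": 0, "actual": 0},
]
print(outcome_error_parity(predictions))  # (0.5, 0.0): parity violated
```

Note that the first model above satisfies Anti-Classification by construction, yet its predictions can still fail Outcome Error Parity, which is exactly why a bare mandate to be ‘fair’ underdetermines what a system must do.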

The continued advancement of AI systems only compounds this. Moore’s Law observes that the number of transistors on a chip, and with it computing power, roughly doubles every two years. I am no mathematician, but persistent doubling tends to add up to a rather large sum.

When this is factored in, continual advancement becomes capable of outpacing the ethical principles in place. AI is a relatively new field and is constantly being updated; if ethical standards don’t keep pace, they will, quite simply, lose the race.

What ties this all together is that by transferring the process of recruitment away from humans to negate their bias, we create space for Artificial Bias to propagate. At this moment in time, we don’t wholly know how we make our own decisions. Accordingly, displacing those decisions to AI is difficult, but that does not make the attempt unworthy of effort.

This is the place for ethics: to weigh the potential in favour of benefit.

Closing remarks

AI in recruitment is an ethical minefield. From data protection principles to the pace of technological development, every aspect of it has the potential to go awry, but equally to reap immense benefits. The requirement for ethical standards throughout continued development is, therefore, a facet of AI recruitment that deserves profound stress.

As I mentioned, many insist on seeing the potential for catastrophe over grandeur. What they omit is that the place of ethics is to sway this potential in a positive direction. If economics lacked ethical standards, much labour would still be abused. If legislation lacked ethical standards… well, I don’t believe that requires much more evidence.

AI recruitment has extraordinary prospective benefits, but to achieve them, consistent auditing is necessary. An issue like Artificial Bias, in light of defined ethics and auditing, may cease to be so prominent.

In a situation such as ours, where discrimination is at the forefront of our minds, it is best that we strive towards the benefits, not away from them.