5 minute read

Computer says "No": the use and misuse of AI in hiring and performance evaluation contexts

A late 2023 survey conducted by IBM showed that 42% of companies were already utilising AI screening systems.

A further 40% were actively exploring the adoption of AI systems as part of their HR processes. HR, like any other business function, is not immune to technological development.

We examine how AI is already impacting some aspects of HR, the legal risks this may pose, and the steps you may wish to consider to mitigate those risks.

Automated discrimination

There have been a number of high-profile examples of businesses attempting (and failing) to use AI technologies and screening systems to automate their hiring processes.

Examples include:

  • Amazon’s internally developed AI system, which “taught itself that male candidates were preferable”, “penalised resumes that included the word ‘women’s’” and “downgraded graduates of two all-women’s colleges”.
  • China’s iTutorGroup’s agreement to pay $365,000 in compensation following claims that the company’s AI-powered online recruitment software had automatically screened out women aged 55 or older and men aged 60 or over.
  • A class-action lawsuit brought against Workday on the basis that its online hiring platform’s algorithms “disproportionately impact and disqualify Black, disabled, and older job applicants.”

Hidden AI inputs - an unknown risk

An AI system is capable of identifying, collating and reviewing data to provide outputs in response to a user prompt, including weighting or scoring information about employees or candidates. If businesses trust AI to inform key decisions across areas such as recruitment, performance and conduct, they need to be alive to the discrimination claims that may arise out of AI-influenced decision-making.

The main piece of legislation governing discrimination in the workplace in the UK is the Equality Act 2010. It sets out nine protected characteristics, including race, sex, age and disability, and makes it unlawful to treat employees less favourably because of a protected characteristic. Employers therefore need to be mindful of relying on outputs which may inadvertently produce discriminatory results.

Without properly understanding the unique requirements of the organisation, the role and the individual being assessed, the output produced by an AI system in response to a prompt may not be fit for purpose. Managers may rely on negative outputs that have failed to take proper account of an employee’s protected characteristics.

Take the example of an AI tool used to identify strengths and weaknesses for performance reviews. The system may report that an employee is frequently late to respond to emails received in the afternoon and count this as a “weakness”. However, this “lateness” may be explained by the fact that the employee works flexible hours to accommodate childcare responsibilities, or attends regular afternoon medical appointments for a disability. If this is not accounted for when prompting the system, the business risks using outputs that inadvertently discriminate against the employee.
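
To make this concrete, the short Python sketch below is entirely hypothetical: the data, function names and working pattern are our own assumptions rather than any real vendor's scoring logic. A naive responsiveness metric flags the employee as slow; the same data, scored with the employee's agreed non-working hours excluded, tells a very different story.

    # Hypothetical sketch: scoring email responsiveness with and without
    # context about an employee's agreed working pattern.

    # (hour the email was received, reply delay in hours)
    replies = [(9, 0.5), (10, 0.4), (14, 3.0), (15, 4.0), (16, 3.5)]

    def naive_score(replies):
        # Average delay across all emails, ignoring working patterns.
        return sum(delay for _, delay in replies) / len(replies)

    def context_aware_score(replies, non_working_hours):
        # Exclude emails received during agreed non-working time,
        # e.g. flexible hours for childcare or a regular medical appointment.
        in_hours = [d for hour, d in replies if hour not in non_working_hours]
        return sum(in_hours) / len(in_hours)

    # The employee is, by agreement, away from 14:00 to 17:00 each day.
    print(naive_score(replies))                        # 2.28 hours: looks like a "weakness"
    print(context_aware_score(replies, {14, 15, 16}))  # 0.45 hours: actually responsive

The "weakness" disappears once the system is told which hours the employee actually works; the risk lies in prompts and inputs that never carry that context.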

Further, unless explicitly notified, a lead reviewer may not even realise that AI-generated feedback has been fed into the evaluation process at all. As AI systems grow more sophisticated, the signs of AI involvement will become increasingly difficult to spot.

Where a decision-maker includes the AI’s assessment without fully considering whether the feedback is a true and accurate reflection of the subject’s performance, these inputs risk undermining the results of the evaluation process. This potentially places the business at risk of employees being:

  • Over-promoted above their genuine ability level, creating issues around performance management and/or excessive pay rises
  • Overlooked for promotion where one is deserved, leading to low satisfaction, increased employee turnover and potential legal claims if the decision making is discriminatory
  • Improperly evaluated as part of a redundancy exercise and incorrectly retained/dismissed
  • Engaged in a performance management process or, in extreme cases, dismissed without proper consideration of risk factors such as protected characteristics.

The above outcomes carry operational, financial, reputational and legal risks for a business. Whilst these risks may not be wholly avoidable when using AI systems, they can be mitigated.

How can you mitigate these risks?

  1. Consider whether your business would benefit from implementing a formal “Responsible use of AI” policy, outlining the circumstances in which employees are permitted to engage AI systems to assist them in performing their roles.
  2. Provide sufficiently detailed, use-specific prompts and avoid generic or generalised inputs, to ensure that any outputs generated by AI systems are reasonably fit for purpose.
  3. Be conscious of the potential for AI systems to drift towards bias when considering the outputs received. Disregard or amend outputs that look to be tainted by bias. Where AI systems appear to be generating potentially biased outcomes, notify your technology team or chief technology officer at the earliest opportunity so that systems can be properly calibrated (a simple illustrative check is sketched after this list).
  4. Only use AI-generated content as a starting point and avoid relying solely on AI outputs. Require that employees properly review and scrutinise content, then make any necessary corrections to ensure that content is accurate and reliable.
  5. Require employees to sign off and take individual responsibility for their roles in HR processes where AI is being used to support them. Implement effective safeguards (e.g. disabling direct copy/paste functionality into feedback request forms) and self-declaration certifications to minimise the risk of employees failing to engage thoughtfully with the process.
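
On point 3, the sketch below shows one simple check a technology team might run over AI screening outcomes: the "four-fifths rule" used in US adverse-impact analysis. It is a rough heuristic rather than a UK legal test under the Equality Act 2010, and the data and 0.8 threshold are assumptions for illustration only, but a flagged ratio is a sensible trigger for the kind of escalation described above.

    from collections import Counter

    # Illustrative adverse-impact check on hypothetical AI screening
    # outcomes, using the US "four-fifths rule" as a rough heuristic.

    def selection_rates(outcomes):
        # outcomes: list of (group, passed_screening) pairs.
        passed, total = Counter(), Counter()
        for group, ok in outcomes:
            total[group] += 1
            passed[group] += ok
        return {g: passed[g] / total[g] for g in total}

    def impact_ratios(rates):
        # Each group's selection rate relative to the highest-rated group;
        # a ratio below roughly 0.8 is a common trigger for closer review.
        best = max(rates.values())
        return {g: r / best for g, r in rates.items()}

    # Hypothetical screening results by age band.
    outcomes = ([("under 40", True)] * 60 + [("under 40", False)] * 40
                + [("40 and over", True)] * 35 + [("40 and over", False)] * 65)

    rates = selection_rates(outcomes)
    for group, ratio in impact_ratios(rates).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rates[group]:.0%}, ratio {ratio:.2f} -> {flag}")

Here the over-40 group's ratio of 0.58 falls well below the 0.8 threshold, which is exactly the kind of output that should prompt recalibration before the system's outputs are relied on.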

Written by Bradley Howe and Tom Hasoon

Contact

Bradley Howe

+44 161 235 5457
