FAIR: the Fair Artificial Intelligence in Recruiting Manifesto
In short
Recruiters and flexible-work platforms are adopting artificial intelligence to rank candidates and assign work. Unchecked, artificial intelligence will systematically discriminate against, omit, and erase billions of people. To avoid this, companies and their technology providers should: 1) disclose when they use AI, 2) make the factors transparent, 3) disclose rankings to candidates, 4) detect bias, and 5) reduce discrimination systematically.

In detail
Artificial intelligence is becoming a dominant force in recruiting. AI is not only comparing candidates and job opportunities but also matching candidates and teams. On flexible-work platforms, AI is already deciding who gets the job.

Primarily due to sociocultural conditioning, humans are prone to conscious and unconscious bias. Unconscious bias also affects the architects of AI. The individual contexts and upbringing of these architects shape their understanding of concepts like fairness and equality. In turn, this affects the algorithmic abstractions and models they use to represent the world.

By far, most people who apply for a job don't get hired. What's more, most candidates never find out why they weren't "good enough." For them, the recruiting and decision-making process is an impenetrable black box. Unchecked, artificial intelligence adds further complexity to that impenetrability. Discrimination and inequality are becoming automated. Architects of algorithms are starting to hide behind closed doors. Unfairness in favor of a few will prevail to an even greater degree. It's a dangerous, slippery slope.

Fortunately, we can also use technology to reduce bias. We can use behavioral design and transparency to promote equality (the UK used these tools to increase the share of women in boardrooms from 12% to 25% in just four years). We can open the recruiting black box: teach candidates how decisions are made, detect discrimination and reduce it, provide automated and data-driven job advice on a massive scale, educate leaders about the dangers of runaway AI discrimination, and much more. Doing it won't be easy, but the first step is committing to it.

Principles alone cannot guarantee ethical AI. Clearly defined, impactful pathways are needed. Similarly, AI ethics must be underwritten not only by individuals but by businesses and organizations as well. At Torre, we support the manifesto below. If you are a recruiter, a company executive, a flexible-work platform manager, a leader of an HR tech company or job board, or a policymaker, we invite you to support it, too.
Manifesto
Use of AI should be disclosed
When using computer algorithms to filter and rank candidates, recruiters and their technology vendors should inform candidates accordingly.
Principles and factors should be transparent
When evaluating or considering a job opportunity, candidates must have the right to know which factors the computer algorithms use to filter and rank them. This should include lists and definitions of the principles applied.
Ranking should be disclosed
Candidates have the right to know how they rank when compared to other candidates as well as the ranking criteria for each factor.
Bias should be reported
When computer algorithms are used to filter and rank candidates, recruiters and their technology vendors should proactively measure bias in the results and report it.
Bias should be reduced proactively
When bias is identified as being discriminatory†, active compensation models should be written into the algorithms to remove inequality on an ongoing basis.
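To make the last two principles concrete, here is a minimal, illustrative sketch (not part of the manifesto itself) of how a platform might monitor its shortlisting outcomes for disparate impact. The record fields, group labels, and the four-fifths threshold are assumptions chosen for the example only, not requirements of the manifesto.

# Illustrative only: monitor shortlisting outcomes for disparate impact.
# The fields ("group", "shortlisted") and the 0.8 threshold are assumptions
# made for this sketch.
from collections import defaultdict

def selection_rates(candidates):
    """Return the shortlisting rate per group."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for candidate in candidates:
        totals[candidate["group"]] += 1
        if candidate["shortlisted"]:
            selected[candidate["group"]] += 1
    return {group: selected[group] / totals[group] for group in totals}

def disparate_impact_report(candidates, threshold=0.8):
    """Flag any group whose selection rate is below `threshold` times
    the best-performing group's rate (the "four-fifths" rule of thumb)."""
    rates = selection_rates(candidates)
    best = max(rates.values())
    return {group: {"rate": round(rate, 2), "flagged": rate < threshold * best}
            for group, rate in rates.items()}

# Example: group "B" is shortlisted at half the rate of group "A" and is flagged.
sample = [
    {"group": "A", "shortlisted": True},
    {"group": "A", "shortlisted": True},
    {"group": "A", "shortlisted": False},
    {"group": "A", "shortlisted": False},
    {"group": "B", "shortlisted": True},
    {"group": "B", "shortlisted": False},
    {"group": "B", "shortlisted": False},
    {"group": "B", "shortlisted": False},
]
print(disparate_impact_report(sample))
# {'A': {'rate': 0.5, 'flagged': False}, 'B': {'rate': 0.25, 'flagged': True}}

Reporting such a result to candidates and compensating for it in the ranking is the kind of ongoing correction the manifesto calls for; the specific statistic and threshold a platform chooses would need to reflect its own legal and ethical standards.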
Hopefully, in the foreseeable future, everyone will be treated equally and will have equal access to opportunities. May this afford us the opportunity to look back and acknowledge how foolish we once were!
We hereby declare, unreservedly, that we will be instrumental in bringing about that change.
Alexander Torrenegra, CEO and Founder of Torre

† Discrimination happens on many different levels and in many different categories. This manifesto is not meant to determine which of them should be considered discriminatory, because different groups and nations have different standards. Instead, it is meant to help expose all biases, regardless of the category they belong to, and then let societies decide whether such biases ought to be addressed.