3 Comments

Amazon tried using AI/ML for recruitment and ran into the same issue about ten years back: even after the tool was edited to be gender-neutral, the project was abandoned, which tells me there was more behind the decision to abandon it than gender bias alone. More here (https://tinyurl.com/3ud85b75).

“By 2015, it was clear that the system was not rating candidates in a gender-neutral way because it was built on data accumulated from CVs submitted to the firm mostly from males, Reuters claimed.

The system started to penalise CVs which included the word "women". The program was edited to make it neutral to the term, but it became clear that the system could not be relied upon, Reuters was told.

The project was abandoned, although Reuters said that it was used for a period by recruiters who looked at the recommendations generated by the tool but never relied solely on it.”
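The mechanism Reuters describes can be sketched with a toy word-level scorer. This is purely illustrative — the CVs, words, and weights below are made up and have nothing to do with Amazon's actual system — but it shows how a model trained on historically skewed outcomes assigns a negative weight to a gendered token, and why deleting that one token does not help while correlated proxy words remain:

```python
# Toy sketch (not Amazon's system): a naive-Bayes-style scorer trained
# on historically skewed hiring data learns a negative weight for a
# gendered token simply because it co-occurs with rejections.
from collections import Counter
import math

# Hypothetical skewed history: hired CVs came mostly from men.
hired = ["chess club captain", "football team lead", "coding bootcamp"]
rejected = ["women's chess club captain", "women's coding society", "volunteer work"]

def word_weights(hired, rejected, smoothing=1.0):
    """Log-odds weight per word; positive favours hiring."""
    h = Counter(w for cv in hired for w in cv.split())
    r = Counter(w for cv in rejected for w in cv.split())
    vocab = set(h) | set(r)
    nh, nr = sum(h.values()), sum(r.values())
    return {w: math.log((h[w] + smoothing) / (nh + smoothing * len(vocab)))
             - math.log((r[w] + smoothing) / (nr + smoothing * len(vocab)))
            for w in vocab}

weights = word_weights(hired, rejected)
# "women's" ends up with a negative weight; removing it from the
# vocabulary leaves correlated proxy words (e.g. "society") penalised.
```

This also illustrates why making the model "neutral to the term" was not enough: the bias lives in the correlations of the training data, not in any single word.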

I have not read the study. Does the author think new models/algorithms have found a way to solve the missing part of the puzzle? I agree AI will be able to help if it is trained on better-quality data, but because it is a black box that cannot explain why someone or something was picked, it will tend to propagate whatever biases exist in that data, so it will still require a human in the loop. However, humans come to rely on AI systems even when they are wrong. Here is an article about it (https://hai.stanford.edu/news/ai-overreliance-problem-are-explanations-solution):

“In theory, a human collaborating with an AI system should make better decisions than either working alone. But humans often accept an AI system’s recommended decision even when it is wrong – a conundrum called AI overreliance. This means people could be making erroneous decisions in important real-world contexts such as medical diagnosis or setting bail, says Helena Vasconcelos, a Stanford undergraduate student majoring in symbolic systems.”


The people who will rule the world will be those who can directly author the script for the AIs, or those who control the techy nerds. AI is millions, billions, trillions of if/then loops. Manipulate the loops and you win. Analysis of AI selection is pointless. It's just another black box to be manipulated by those with the tools to get what they want.

Jul 1 · Liked by Joachim Klement

AI recruitment may have issues with "employment discrimination." If a recruiter with biases related to gender, region, or age uses an algorithmic system to screen resumes, it creates a "discrimination information cocoon": the resumes the algorithm filters and pushes forward are precisely those that have already passed through a biased screen. This makes the already subtle problem of discrimination even harder to detect.
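The "cocoon" dynamic can be sketched as a toy feedback loop. All the rates below are invented assumptions, not measurements; the point is only that when the model is retrained on the survivors of its own biased screen, an under-represented group's share shrinks every round, so the bias compounds rather than stays constant:

```python
def cocoon(share_b, rounds):
    """Toy feedback loop: each round the screening model is 'retrained'
    on the previous round's survivors, so it passes group B at a rate
    equal to B's share in its training pool. Illustrative numbers only."""
    history = [share_b]
    for _ in range(rounds):
        pass_a = 0.5       # group A always passes at an assumed 50%
        pass_b = share_b   # model mirrors the skew in its training pool
        share_b = (pass_b * share_b) / (pass_a * (1 - share_b) + pass_b * share_b)
        history.append(round(share_b, 3))
    return history

history = cocoon(0.4, 4)  # group B's share shrinks with every round
```

Because each round's output becomes the next round's training data, the discrimination is self-reinforcing and progressively harder to spot in any single screening decision.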
