
    Amazon Scraps ‘Sexist AI’ Recruitment Tool

    Amazon has scrapped a “sexist” tool that used artificial intelligence to decide the best candidates to hire for jobs. Members of the team working on the system said it effectively taught itself that male candidates were preferable.

    The artificial intelligence software was created by a team at Amazon’s Edinburgh office in 2014 as a way to automatically sort through CVs and select the most talented applicants. But the algorithm rapidly taught itself to favour male candidates over female ones, according to members of the team.

    They realised it was penalising CVs that included the word “women’s,” such as “women’s chess club captain.” It also reportedly downgraded graduates of two all-women’s colleges. The problem arose because the system was trained on data submitted by applicants over a 10-year period, much of which was said to have come from men. Some team members said the system rated candidates much as shoppers rate products on Amazon.

    “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those,” one of the engineers said.

    By 2015, it was obvious the system was not rating candidates in a gender-neutral way: it had been built on CVs submitted to the firm mostly by men.
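The mechanism behind this kind of failure is simple to illustrate. The following is a hypothetical sketch, not Amazon’s actual system: a toy CV scorer trained on invented, historically skewed hire/reject data, which learns a negative weight for the token “women’s” purely because that token happens to co-occur with past rejections.

```python
# Hypothetical illustration (NOT Amazon's system): a toy scorer trained
# on biased historical hiring data learns to penalise tokens that
# correlate with rejected CVs -- including "women's".
from collections import Counter
import math

# Invented historical data: past hires skew male, so tokens from
# women's CVs appear mostly among the rejections.
hired = [
    "software engineer chess club captain",
    "software engineer robotics team",
    "backend developer chess club",
]
rejected = [
    "software engineer women's chess club captain",
    "backend developer women's coding society",
]

def token_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

pos, neg = token_counts(hired), token_counts(rejected)
vocab = set(pos) | set(neg)

def weight(token):
    # Laplace-smoothed log-odds of a token appearing in hired CVs.
    p = (pos[token] + 1) / (sum(pos.values()) + len(vocab))
    q = (neg[token] + 1) / (sum(neg.values()) + len(vocab))
    return math.log(p / q)

def score(cv):
    return sum(weight(t) for t in cv.split() if t in vocab)

# "women's" gets a negative weight from the skewed history alone,
# dragging down an otherwise identical CV.
print(weight("women's") < 0)                                  # True
print(score("engineer women's chess club captain")
      < score("engineer chess club captain"))                 # True
```

Nothing in the toy data says women are worse candidates; the model simply encodes the historical skew in its weights, which is the failure mode the Amazon engineers described.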

    Concerns have previously been raised about how trustworthy and consistent algorithms can be when they are trained on potentially biased data. In May last year, a report claimed that a computer program used by an American court for risk assessment was biased against black prisoners.

    The program flagged black people as twice as likely as white people to re-offend, because of the flawed information it was learning from.

    As the tech industry creates artificial intelligence, there is the risk that it inserts sexism, racism and other deep-rooted prejudices into code that will go on to make decisions for years to come.

    ELE Times Research Desk

