In 2014, Amazon set out to automate its recruiting process. Its solution was an AI program that would review job applicants’ resumes and feed recruiters a score. While this did whittle down the candidate list, by the following year Amazon had realized there was an issue: the system was not rating women candidates equally to men. Because the historical hiring data the system learned from reflected a workforce that was roughly 60% male, it incorrectly inferred that the company preferred men. Once the problem was discovered, the company quickly reverted to having recruiters read the resumes themselves.

While this illustrates how bias can creep into these systems, how do we go about laying the groundwork for establishing ethical AI systems?
https://beta.informationweek.com/ai-or-machine-learning/what-we-can-do-about-biased-ai
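The mechanism behind that failure is worth making concrete. Below is a minimal, hypothetical sketch (in Python with scikit-learn, not Amazon's actual system): when a classifier is trained on historical hiring decisions that disfavored candidates with some gender-correlated trait, it learns a negative weight on that proxy feature and reproduces the bias in its scores. Every feature name and number here is invented for illustration.

```python
# A minimal sketch of bias absorbed from training data (hypothetical;
# not Amazon's system). Feature names and coefficients are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical resume features: experience, a skills score, and a binary
# proxy correlated with gender (e.g., "attended a women's college").
experience = rng.normal(5, 2, n)
skills = rng.normal(0, 1, n)
gender_proxy = rng.integers(0, 2, n)  # 1 = proxy present on the resume

# Simulated historical labels: past hiring penalized the proxy trait,
# independent of actual qualifications (a skewed-workforce effect).
logit = 0.5 * experience + 1.0 * skills - 1.5 * gender_proxy
hired = (logit + rng.logistic(0, 1, n)) > 2.5

X = np.column_stack([experience, skills, gender_proxy])
model = LogisticRegression().fit(X, hired)

# The trained model reproduces the historical bias: a clearly negative
# weight on the proxy feature, which says nothing about ability.
for name, coef in zip(["experience", "skills", "gender_proxy"], model.coef_[0]):
    print(f"{name:>12}: {coef:+.2f}")
```

Nothing in this sketch tells the model that the proxy is irrelevant to job performance; it simply fits the historical outcomes, which is why auditing training data matters as much as auditing the model itself.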