
    Artificial intelligence can deepen social inequality. Here are 5 ways to prevent this

    From Google searches and dating sites to detecting credit card fraud, artificial intelligence (AI) keeps finding new ways to creep into our lives. But can we trust the algorithms that drive it?

    As humans, we make errors. We can have attention lapses and misinterpret information. Yet when we reassess, we can spot our errors and correct them. But when an AI system makes an error, that error will be repeated every time the system processes the same data under the same circumstances.

    AI systems are trained using data that inevitably reflect the past. If a training data set contains inherent biases from past human decisions, these biases are codified and amplified by the system. Or if it contains fewer data about a particular minority group, predictions for that group will tend to be worse. This is called “algorithmic bias.”

    How does algorithmic bias arise?

    Algorithmic bias may arise through a lack of suitable training data, or as a result of inappropriate system design or configuration. For example, a system that helps a bank decide whether or not to grant loans would typically be trained using a large data set of the bank’s previous loan decisions (and other relevant data to which the bank has access).

    The system can compare a new loan applicant’s financial history, employment history and demographic information with corresponding information from previous applicants. From this, it tries to predict whether the new applicant will be able to repay the loan.
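    To make this concrete, here is a minimal sketch of the idea in Python. It is not a real bank's system: the field names (income, years of employment, repayment flag) and the nearest-neighbour comparison are illustrative assumptions, chosen only to show how a prediction can be driven entirely by similarity to past applicants.

    ```python
    # Hypothetical sketch: predict repayment by comparing a new applicant
    # to the most similar past applicants. All data and field names are
    # illustrative, not from any real banking system.

    past_applicants = [
        {"income": 30, "years_employed": 1, "repaid": False},
        {"income": 55, "years_employed": 4, "repaid": True},
        {"income": 60, "years_employed": 6, "repaid": True},
        {"income": 28, "years_employed": 2, "repaid": False},
        {"income": 75, "years_employed": 8, "repaid": True},
    ]

    def predict_repayment(applicant, history, k=3):
        """Majority vote among the k most similar past applicants."""
        def distance(a, b):
            return (abs(a["income"] - b["income"])
                    + abs(a["years_employed"] - b["years_employed"]))
        nearest = sorted(history, key=lambda p: distance(applicant, p))[:k]
        return sum(p["repaid"] for p in nearest) > k // 2

    new_applicant = {"income": 58, "years_employed": 5}
    print(predict_repayment(new_applicant, past_applicants))  # True
    ```

    Note that the prediction depends entirely on the historical records: if past decisions were biased, that bias flows straight into every new prediction.
    
    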

    But this approach can be problematic. One way in which algorithmic bias could arise in this situation is through unconscious biases from loan managers who made past decisions about mortgage applications.

    If customers from minority groups were denied loans unfairly in the past, the AI will consider these groups’ general repayment ability to be lower than it is. Young people, people of colour, single women, people with disabilities and blue-collar workers are just some examples of groups that may be disadvantaged.

    Bias harms both individuals and companies
    The biased AI system described above poses two key risks for the bank. First, the bank could miss out on potential clients by driving victims of bias to its competitors. Second, it could be held liable under anti-discrimination laws.

    If an AI system continually applies inherent bias in its decisions, it becomes easier for government or consumer groups to identify this systematic pattern. This can lead to hefty fines and penalties.
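    One simple way such a pattern can be surfaced is by comparing approval rates across groups. The sketch below is a hypothetical audit, not a legal test of discrimination; the group labels and decisions are invented for illustration.

    ```python
    # Hypothetical audit sketch: compare approval rates across groups to
    # reveal a systematic pattern. Group labels and records are illustrative.

    decisions = [
        {"group": "A", "approved": True},
        {"group": "A", "approved": True},
        {"group": "A", "approved": False},
        {"group": "B", "approved": False},
        {"group": "B", "approved": False},
        {"group": "B", "approved": True},
    ]

    def approval_rate(records, group):
        subset = [r for r in records if r["group"] == group]
        return sum(r["approved"] for r in subset) / len(subset)

    gap = approval_rate(decisions, "A") - approval_rate(decisions, "B")
    print(round(gap, 2))  # 0.33 -- a large, persistent gap invites scrutiny
    ```
    
    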

    To address these risks, we identify five approaches to correcting algorithmic bias. This toolkit can be applied to businesses across a range of sectors to help ensure AI systems are fair and accurate:

    1. Get better data
    The risk of algorithmic bias can be reduced by obtaining additional data points or new types of information on individuals, especially those who are underrepresented (minorities) or those who may appear inaccurately in existing data.

    2. Pre-process the data
    This consists of editing a dataset to mask or remove information about attributes associated with protections under anti-discrimination law, such as race or gender.
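    A minimal sketch of this kind of pre-processing, assuming a flat record of applicant attributes (the field names and the list of protected attributes are illustrative):

    ```python
    # Hypothetical pre-processing sketch: strip attributes protected under
    # anti-discrimination law before training. Field names are illustrative.

    PROTECTED = {"race", "gender", "age"}

    def strip_protected(record):
        """Return a copy of the record without protected attributes."""
        return {k: v for k, v in record.items() if k not in PROTECTED}

    raw = {"income": 52, "gender": "F", "race": "X", "years_employed": 3}
    print(strip_protected(raw))  # {'income': 52, 'years_employed': 3}
    ```

    A caveat worth keeping in mind: removing protected attributes alone does not guarantee fairness, since other fields (such as a postcode) can act as proxies for them.
    
    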

    3. Increase model complexity
    A simpler AI model can be easier to test, monitor and interrogate. But it can also be less accurate and lead to generalizations which favour the majority over minorities.

    4. Modify the system
    The logic and parameters of an AI system can be proactively adjusted to directly counteract algorithmic bias. For example, this can be done by setting a different decision threshold for a disadvantaged group.
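    The threshold adjustment can be sketched as follows. The group names, scores and threshold values are illustrative assumptions; in practice the thresholds would be calibrated against measured outcomes.

    ```python
    # Hypothetical sketch: counteract measured bias by applying a different
    # decision threshold per group. Groups, scores and thresholds are
    # illustrative, not calibrated values.

    THRESHOLDS = {"advantaged": 0.60, "disadvantaged": 0.50}

    def approve(score, group):
        """Approve if the model score clears the group's threshold."""
        return score >= THRESHOLDS[group]

    # The same score of 0.55 is treated differently to offset known bias
    # in how scores for the disadvantaged group were learned.
    print(approve(0.55, "advantaged"))     # False
    print(approve(0.55, "disadvantaged"))  # True
    ```
    
    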

    5. Change the prediction target
    The specific measure chosen to guide an AI system directly influences how it makes decisions across different groups. Finding a fairer measure to use as the prediction target will help reduce algorithmic bias.
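    The difference the target makes can be shown in a few lines. The column names below are hypothetical: training on past human approvals teaches the model to imitate past managers, while training on actual repayment outcomes targets the question the bank really cares about.

    ```python
    # Hypothetical sketch: the choice of prediction target changes what the
    # model learns. Column names ("approved_by_manager", "repaid_on_time")
    # are illustrative.

    history = [
        {"income": 40, "approved_by_manager": False, "repaid_on_time": True},
        {"income": 70, "approved_by_manager": True,  "repaid_on_time": True},
    ]

    def training_labels(records, target):
        """Extract the label column a model would be trained on."""
        return [r[target] for r in records]

    # Imitates past (possibly biased) human decisions:
    print(training_labels(history, "approved_by_manager"))  # [False, True]
    # Targets the actual outcome of interest:
    print(training_labels(history, "repaid_on_time"))       # [True, True]
    ```

    In the first row the applicant was denied but would have repaid; a model trained on the approval column inherits that mistake, while one trained on the repayment column does not.
    
    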

    Consider legality and morality
    In our recommendations to government and businesses wanting to employ AI decision-making, we foremost stress the importance of considering general principles of fairness and human rights when using such technology. And this must be done before a system is in use. We also recommend systems are rigorously designed and tested to ensure outputs aren’t tainted by algorithmic bias. Once operational, they should be closely monitored.

    Finally, we advise that using AI systems responsibly and ethically extends beyond compliance with the narrow letter of the law. It also requires the system to be aligned with broadly accepted social norms, and considerate of its impact on individuals, communities and the environment.

    With AI decision-making tools becoming commonplace, we now have an opportunity not only to increase productivity but to create a more equitable and just society—that is, if we use them carefully.

    ELE Times Bureau