Indigenous Representation in AI: 21st Century Implications.

 

A view of diversity and discrimination in AI systems.



This study examines how well distinct Indigenous cultures are represented in artificial intelligence and how AI systems perceive ethnicity. The researchers used a quantitative approach, analyzing various datasets to identify the patterns AI uses to classify human facial structures. The results show a lack of diversity in representation, highlighting the need to incorporate new datasets and ethnicity-related data into AI systems. These findings have important implications for educators, parents, and policymakers, as they demonstrate the necessity of including members of all ethnic groups and communities in this rapidly growing technology.




Artificial intelligence (AI) technologies have the potential to transform many industries, but there are still concerns that they may worsen existing inequalities and encourage racial prejudice. This literature review aims to provide a thorough analysis of research findings and differing perspectives on how AI systems can discriminate against ethnic minorities. Researchers and academics have extensively studied how AI systems may reinforce racial biases present in their training data. For instance, Safiya Umoja Noble's groundbreaking study, "Algorithms of Oppression," reveals how search engine algorithms perpetuate negative stereotypes by favoring some content over others, which in turn reinforces racial biases (Noble, 2018). Research on facial recognition algorithms by Joy Buolamwini and Timnit Gebru found that these algorithms misclassify women and people with darker skin tones at higher rates, which may be due to biases in the training data (Buolamwini & Gebru, 2018).

 

Furthermore, research has demonstrated that natural language processing models trained on biased text data can produce discriminatory results, thereby entrenching racial disparities (Bender et al., 2021). Significant prejudice and discrimination have been found in AI systems, according to studies such as the Global AI Talent Report (Element AI, 2019). This is especially true for platforms like Google's search results and Facebook's ad distribution (Simonite, 2018). “Facial recognition has been independently assessed and proven to be less accurate on certain populations—even if the algorithm does not explicitly include race as an attribute, the outcomes still favor one race over another” (Thompson, 2020).
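The accuracy disparities described above are usually surfaced by a per-group audit: computing a model's error rate separately for each demographic group and comparing the results. The sketch below illustrates that kind of audit in miniature; the function name and the labels, predictions, and group assignments are all hypothetical placeholder data, not results from any real system.

```python
# Sketch of a per-group accuracy audit, in the spirit of the intersectional
# benchmarks cited above. All data here is an invented placeholder.
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Return the misclassification rate for each demographic group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit data: 1 = face correctly classified, 0 = misclassified
y_true = [1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 1, 1, 0, 0, 1]
groups = ["lighter", "lighter", "lighter", "lighter",
          "darker", "darker", "darker", "darker"]

print(error_rates_by_group(y_true, y_pred, groups))
# A large gap between groups signals the disparity the audits describe.
```

A gap like the one in this toy output (0% versus 50% error) is exactly the pattern the cited audits report, even when race is never an explicit input to the model.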



The Effect of AI on Racial Minorities

For ethnic minorities, the biased results of AI systems have real-world consequences across a variety of fields. AI systems used in healthcare to make medical decisions can cause disparities in how patients from underserved communities are diagnosed and treated (Obermeyer et al., 2019). It has been argued that predictive policing algorithms unfairly target minority populations, resulting in heightened policing and surveillance in these regions (Ensign et al., 2018). AI-powered hiring algorithms have also been found to reinforce prejudices against ethnic minorities, making it more difficult for them to find employment (Chouldechova, 2017).

“The Amazon resume scanning example is just one of many that show how the functional logics of a given technology echo the gender and racial dynamics of the industry that produced it. Amazon’s Rekognition facial analysis service previously demonstrated gender and racial biases worse than those of comparable tools, biases that took the form of literally failing to ‘see’ dark-skinned women while being most proficient at detecting light-skinned men” (West, Whittaker, & Crawford, 2019). “Relying primarily on survey-based research conducted in educational settings, pipeline studies seek to understand the factors that lead to gender-based discrimination in computer science, more precisely by interrogating what drives women and people of color away from the field, and implicitly, what might make them stay” (West, Whittaker, & Crawford, 2019). These harms compound one another: the facial recognition and natural language processing biases documented above mean that racial prejudices ingrained in AI systems can sustain discriminatory outcomes across many different circumstances (Buolamwini & Gebru, 2018; Bender et al., 2021).

Racial minorities are directly impacted by these biased results, which exacerbate existing disparities and restrict their access to opportunities and resources. It is therefore essential to address the racial prejudices present in AI systems to guarantee justice, equity, and fairness for all members of society.


Calls for Ethical AI:
The discriminatory effects of AI technologies have put increasing pressure on the creation and application of ethical AI systems. Academics advocate for enhanced responsibility and openness in developing and applying AI algorithms, and for diversity in the teams creating these technologies (Crawford et al., 2019). Moreover, legal frameworks that address the ethical implications of AI and guarantee that these technologies respect justice and fairness are desperately needed (Eubanks, 2018). To guarantee that decision-making processes are comprehensible and traceable, there is an increasing need for accountability and transparency in AI systems (Jobin et al., 2019). Transparency makes developers responsible for the results of these systems and helps consumers understand how AI algorithms make judgments. To ensure that AI systems do not reinforce or worsen prejudices against marginalized groups, ethical AI development demands a dedication to justice and non-discrimination (Barocas & Selbst, 2016). It is imperative to incorporate fairness measures and approaches into AI development processes to address algorithmic bias and prejudice.
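One of the simplest fairness measures that can be incorporated into a development pipeline is demographic parity: the rate of favorable decisions (for example, "invite to interview") should be similar across groups. The sketch below illustrates the idea under stated assumptions; the function names and the decision data are hypothetical, not drawn from any cited system.

```python
# Minimal sketch of the demographic parity measure discussed in the
# fairness literature. All names and data here are invented placeholders.

def selection_rate(decisions, groups, group):
    """Fraction of positive decisions received by one group."""
    chosen = [d for d, g in zip(decisions, groups) if g == group]
    return sum(chosen) / len(chosen)

def demographic_parity_gap(decisions, groups):
    """Largest difference in selection rates between any two groups."""
    rates = {g: selection_rate(decisions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

decisions = [1, 1, 0, 1, 0, 0, 0, 1]   # 1 = positive decision
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(demographic_parity_gap(decisions, groups))
# Group "a" is selected at 3/4, group "b" at 1/4, so the gap is 0.5.
```

In practice a gap above some agreed threshold would trigger review before deployment; demographic parity is only one of several competing fairness definitions, and the right choice depends on the application.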



In AI ethics, maintaining data rights and privacy is crucial (Floridi et al., 2018). To protect people's private information from abuse or exploitation, AI systems should abide by privacy-preserving principles such as data minimization, anonymization, and user consent. Human control and autonomy should be given priority in ethical AI, ensuring that AI systems support human decision-making rather than supplant it (Bostrom & Yudkowsky, 2014). To avoid unforeseen repercussions or harm, AI systems should include human monitoring and intervention tools.

This points to a dire need to study the possible effects of AI deployment on diverse stakeholders and communities through societal impact evaluations (Jobin et al., 2019). These evaluations have to take into account ethical, social, economic, and cultural aspects to guide the appropriate development and use of AI. To solve cross-border ethical issues and guarantee compliance with international human rights norms, ethical AI necessitates international cooperation and governance structures (Allen et al., 2019). Multi-stakeholder projects, legal frameworks, and moral standards can aid the global development of ethical AI.

It is crucial to incorporate ethical considerations from the beginning when designing and developing AI systems (Floridi et al., 2018). Throughout the AI lifecycle, ethical design techniques like value-sensitive design and participatory design can aid in identifying and addressing ethical challenges. To address new ethical issues and gauge the long-term effects of AI systems, ethical AI necessitates ongoing observation and assessment (Jobin et al., 2019). Continuous assessments, audits, and feedback systems can assist in identifying and reducing moral hazards and unintended consequences.


Images and reference datasets are now created with artificial intelligence and distributed on a variety of platforms, including Kaggle and Flickr.


Although this has many advantages, questions have been raised about how these images are generated and how insufficient the underlying datasets are.

Given the rising significance of artificial intelligence across many platforms, it is critical to examine how different peoples from across the world are represented in these ever-expanding systems. This study examines multiple datasets and patterns of AI usage to close this gap.
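A dataset audit of the kind this study describes can start with something very simple: counting how often each community appears in a dataset's metadata and comparing the shares. The sketch below shows the idea; the records and community names are invented placeholders, not the study's actual data.

```python
# Hedged sketch of a dataset representation audit. The records below are
# hypothetical placeholders, not data from the study.
from collections import Counter

records = [
    {"image_id": 1, "community": "Sami"},
    {"image_id": 2, "community": "Maori"},
    {"image_id": 3, "community": "Maori"},
    {"image_id": 4, "community": "Maori"},
]

counts = Counter(r["community"] for r in records)
total = sum(counts.values())
shares = {c: n / total for c, n in counts.items()}

print(shares)   # highly uneven shares flag under-represented communities
```

On a real dataset the same loop over image metadata would reveal which of the communities under study are missing or marginal, which is the gap in representation the study sets out to quantify.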


Comparison of AI-generated images vs actual images of 100 indigenous communities from around the world.
