Data shows that more than 20% of people in Europe have encountered some form of discrimination, whether based on gender, age, religious affiliation, or other grounds. These numbers are strikingly high, and the spread of digital technologies has created new situations in which artificial intelligence (AI) discriminates against people. AI learns from human biases and historical decisions, which can produce discriminatory outcomes in employment, loan approvals, and even healthcare. This is known as algorithmic discrimination.
AI Discrimination Appears in Various Fields
By definition, discrimination is unjustified differential treatment on a prohibited basis, such as gender, skin colour, age, disability, religious or political beliefs, social origin, or sexual orientation. Essentially, it means creating unfavourable situations for specific individuals or groups without objective reason. Today, AI can do this too, and such discrimination can appear in many areas: the labour market, healthcare, transport infrastructure, credit scoring, and more.
Recruitment Systems that Discriminate Against Women
AI learns from the data we humans create: historical records, previous decisions, and other information. If we do not critically assess these datasets, AI perpetuates our biases. Several international examples illustrate this clearly. Amazon used an AI tool for employee recruitment, but when analysing resumes, the algorithm began rejecting women because, historically, men had been hired for management and technical positions. The system developed the false notion that a man would be the more suitable candidate, which led to discrimination against women.
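The mechanism behind such cases can be shown with a tiny, deliberately naive sketch. The data, rates, and scoring rule below are invented for illustration only: a "model" that scores candidates by the historical hire rate of people sharing their gender will faithfully reproduce whatever bias the historical decisions contain.

```python
# Hypothetical sketch: a naive scorer built from biased historical hiring
# data. All records and numbers are synthetic, purely for illustration.
from collections import defaultdict

# Synthetic history: men were hired 80% of the time, women only 20%.
history = ([("m", True)] * 80 + [("m", False)] * 20
           + [("f", True)] * 20 + [("f", False)] * 80)

def hire_rate_by_gender(records):
    """Return the historical hire rate for each gender in the records."""
    hires, totals = defaultdict(int), defaultdict(int)
    for gender, hired in records:
        totals[gender] += 1
        hires[gender] += hired
    return {g: hires[g] / totals[g] for g in totals}

scores = hire_rate_by_gender(history)
# Two otherwise identical candidates get very different scores,
# purely because of the gender pattern encoded in past decisions.
print(scores["m"])  # 0.8
print(scores["f"])  # 0.2
```

Real recruitment models are far more complex, but the failure mode is the same: the model has no notion of fairness, only of the patterns in its training data.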
A Chatbot That Learned Hate Speech from Humans
Similarly, Microsoft's chatbot "Tay," created as an experimental AI tool, became racist, sexist, and antisemitic in less than a day. The chatbot operated online and learned from human statements and conversations, literally adopting our worst habits. Another striking example is Google's photo app, which mistakenly labelled Black individuals as gorillas, simply because the algorithm had not been trained on images of diverse ethnic groups.
AI Must Not Become a Continuation of Our Worst Habits
These examples demonstrate how easily AI can become biased and discriminatory, and it is clear that technical solutions alone are not enough. Ethical guidelines and a clear value system are also needed. We must think about "how it should be," not "how it was," and create an environment where AI is not the continuation of our worst habits but a tool for a fair and inclusive society. The UN Universal Declaration of Human Rights states that all people are equal and entitled to equal protection against any discrimination. This principle must also apply to situations where AI is involved in decision-making.
The Final Decision Remains in Human Hands
This situation is not hopeless: AI-driven discrimination can be reduced through appropriate laws and regulations. The European Union's Artificial Intelligence Act, which also applies to Latvia, addresses the issue and requires that AI solutions be based on inclusive data. This means algorithms must be audited and their risks assessed, but even so, the risk of discrimination cannot be eliminated entirely. Even when the information provided by AI is objective, the final decision in many cases remains in human hands, and so does the risk of discrimination. It is therefore crucial not only to develop safe and ethically sound algorithms but also to strengthen human responsibility and to build an inclusive environment in society and in organizations.
Solutions for Reducing AI Discrimination
To mitigate the risks of discrimination, several principles must be followed in the development and use of AI tools: data must reflect society as it should be, not as it has been; decisions must be transparent and justified; and the influence of algorithms must be traceable and accountable. Companies and institutions are advised to establish ethical guidelines and to check regularly whether the AI solutions they use treat all groups in society fairly and equally. Educating the public about data ethics and equality is also important.
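One concrete way such a regular check can work is to audit a system's decision log for disparate outcomes between groups. The sketch below is hypothetical: the records, the "demographic parity gap" metric, and the 0.1 alert threshold are illustrative choices, not a prescribed method from any regulation.

```python
# Hypothetical fairness-audit sketch: measure the demographic-parity gap
# (difference in approval rates between groups) in a decision log.
# The log and the 0.1 alert threshold are invented for illustration.
def approval_rate(records, group):
    """Share of approved decisions among members of the given group."""
    members = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in members) / len(members)

def demographic_parity_gap(records):
    """Largest difference in approval rates between any two groups."""
    groups = {r["group"] for r in records}
    rates = {g: approval_rate(records, g) for g in groups}
    return max(rates.values()) - min(rates.values())

# Synthetic decision log: group A approved 3/4, group B approved 1/4.
log = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": True},  {"group": "A", "approved": False},
    {"group": "B", "approved": True},  {"group": "B", "approved": False},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

gap = demographic_parity_gap(log)
print(round(gap, 2))  # 0.5
if gap > 0.1:  # illustrative threshold
    print("alert: review this system for disparate impact")
```

A single metric cannot prove a system is fair, but routine checks like this make disparities visible early, which is exactly the kind of accountability the paragraph above calls for.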
Technologies themselves are neither good nor bad — they reflect our choices. And it is we — humans — who can decide whether AI will help create a fairer society or merely reinforce injustice and prejudice in an even stronger form. The choice is in our hands.
2025 © The Baltic Times