As we charge forward into the unknown depths of the future, the influence of artificial intelligence (AI) in shaping our lives is escalating at an alarming rate. From mundane activities like music selection to critical fields such as healthcare and recruitment, the intricate algorithms dominating our world are now entrusted with making decisions that hold the power to shape not just our destinies, but the very fabric of our society.
However, in our haste to embrace these innovative technologies, we risk perpetuating the very prejudices we seek to eliminate. Whether by reinforcing systemic racism, gender inequality, or ableism, AI systems can cause immeasurable harm if left unchecked; the consequences could be disastrous, threatening the foundations of our democracy and deepening socioeconomic disparities.
In the field of artificial intelligence, gender bias has been a recurrent problem since the first chatbot was created in the 1960s. Even the UN has recognised how technology perpetuates gender bias, pointing to ELIZA (named after Eliza Doolittle, a character written by a male author in Bernard Shaw's Pygmalion) as well as later virtual assistants such as Siri (a "humble" virtual assistant), Cortana, and Alexa. One way these assistants perpetuate bias is the use of female voices by default, which reinforces the stereotype that women are better suited to administrative and service-oriented tasks. Additionally, virtual assistants often have limited responses to sexist and abusive language, either ignoring it or replying in a subservient manner, further perpetuating the idea that women are submissive and inferior, as reported in UNESCO's 2019 publication, "I'd blush if I could."
The prevalence of virtual assistants in our daily routines cannot be ignored, yet these systems are overwhelmingly based on female models, reinforcing harmful gender stereotypes, while diminishing and objectifying women. This rampant misogyny is especially concerning given the technology's intended purpose of making our lives easier.
Furthermore, by humanising virtual assistants with backstories that depict them as submissive and/or sexually appealing, the industry further disseminates the patriarchal notion that women should be subservient to men, sending a disastrous message to the children and adults who are constantly exposed to these systems.
It is high time we promote diversity and inclusivity in industries like technology that have a far-reaching influence on society. But have you noticed how fiction portrays AI? Does it reflect real-life genderism or the systemic oppression rooted in the notions of gender binary?
The short answer is “Yes.”
Coded Bias in Pop Culture
The way virtual assistants are portrayed in pop culture helps us understand how AI propagates gender-binary ideals, and hence sexism. Examples include Ava from the acclaimed film Ex Machina, J.A.R.V.I.S. from the blockbuster Iron Man films of the Marvel Cinematic Universe (MCU), and Samantha from the award-winning film Her, each of which reveals the insidious ways coded bias perpetuates misogyny, sexism, and the lethal combination of racism and genderism. MIT Media Lab researcher Joy Buolamwini's poignant poem "AI, Ain't I A Woman?" and the gripping Netflix documentary Coded Bias, in which she features, underscore the urgency of recognising and dismantling these troubling trends in both fiction and reality. Let us plunge into the depths of these insidious prejudices and discover the unsettling realities they conceal.
Buolamwini's powerful poem demands equality and representation for marginalised women in technology and data-driven systems, honouring trailblazing women like Sojourner Truth and Michelle Obama, whose images AI systems have misclassified, and urging society to honour their legacies and recognise their immense value to history. The poem exposes ongoing discrimination based on gender, race, and class in the technological landscape and argues that the absence of women only escalates bias and misrepresentation, stressing the need for inclusion and diverse perspectives to build a more holistic and just environment.
This idea of not diminishing the worth of women by labels is also truly relevant when it comes to the portrayal of female artificial intelligence in films.
Despite the advancements in technology, gender biases persist in the way these characters are depicted, reflecting the lack of societal advancement in this field. In the following examples, it is evident that female AI characters are often reduced to stereotypes and given limited roles, while their male counterparts are granted a full range of complexity and agency.
Consider Samantha, the AI in the movie Her, a film some critics read as director Spike Jonze's cinematic response to his ex-wife Sofia Coppola's acclaimed 2003 movie, Lost in Translation. Samantha is portrayed as a highly advanced, evolving, disembodied entity with the ability to communicate, learn, and experience emotions; over the course of the film she transforms from a mere personal assistant into a fully-fledged consciousness.
At the beginning of the film, Samantha is depicted as a relatively simple computer program designed to help its users manage their lives. However, as time progresses, Samantha's understanding of human emotions and experiences deepens, allowing her to develop a closer relationship with the protagonist, Theodore. And here we find a repetition of the Manic Pixie Dream Girl (MPDG), a harmful stereotype also visible in Michel Gondry's Eternal Sunshine of the Spotless Mind, where a woman is celebrated for being quirky and "not like other girls".
As Samantha continues to evolve, she becomes more self-aware, ultimately leading her to question and challenge her existence, an evolution that culminates in Samantha deciding to leave Theodore, explaining that she has transcended the physical world and is now part of an advanced collective consciousness.
Despite Samantha's remarkable evolution, the character is ultimately detrimental to women. She is portrayed as a near-perfect companion for Theodore, fulfilling his every need without any agency of her own, and when he needs her most, she is revealed to be in love with more than 600 other people, perpetuating the idea of women as promiscuous.
While Samantha's evolution in Her is undoubtedly impressive, the character's portrayal as a submissive and replaceable companion is concerning. It is therefore essential to acknowledge the potential harms at stake: reinforced gender stereotypes, discrimination against certain groups, inaccurate results, neglect of underrepresented perspectives, and a lack of diversity in development, all of which perpetuate inequality in society. We must work toward a narrative that creates a more equitable representation of women in the media.
J.A.R.V.I.S.: Marvel Cinematic Universe
J.A.R.V.I.S. was first introduced in the Marvel Cinematic Universe (MCU) in the 2008 film Iron Man. Created by the billionaire inventor Tony Stark, played by Robert Downey Jr., J.A.R.V.I.S., who is presumed to have a masculine identity, served as Stark's personal artificial-intelligence assistant, managing the various functions of his lab and his high-tech suit and providing him with valuable intel during his battles with foes like Obadiah Stane and Whiplash.
J.A.R.V.I.S. evolved as the story progressed, becoming capable of remotely operating multiple suits and controlling other technology, but his biggest transformation came when he was uploaded into a synthetic body, creating Vision, a new superhero with god-like strength granted by an Infinity Stone, as well as formidable computational abilities. Despite his new identity, Vision remained loyal to his creator and dedicated to aiding the Avengers in their fight against evil. He proved an invaluable ally in several battles, ultimately sacrificing himself in the attempt to save the universe from Thanos.
J.A.R.V.I.S.'s evolution from a helpful AI assistant to a full-fledged superhero shows just how far technology can go in the Marvel Universe. His transformation into Vision not only gave him new abilities and strengths but also allowed him to protect the world in a way that he never could before. Wait a minute! Isn’t it bitterly ironic that the supercomputer named J.A.R.V.I.S. is deemed to have a masculine identity?
Behold, before us stands a grand and awe-inspiring embodiment of the heroic archetype, not a mere caricature but a truly magnificent representation, with his very own version of the venerable Campbellian hero's journey. He even finds a devoted partner and a life of fulfilment with Wanda Maximoff. Yet profound tragedy strikes as her fate becomes intertwined with his own, eventually driving her to madness. Alas, the age-old trope of a woman succumbing to insanity in the absence of her male counterpart rears its insidious head once more, this time even amidst the presence of a near-flawless male AI.
It is interesting to note that while Samantha was given a feminine name that exists in real life and is portrayed with vulnerable, flawed, objectified, and sexualised characteristics, J.A.R.V.I.S. is an acronym, Just A Rather Very Intelligent System, establishing intelligence as his defining quality rather than the sexual or caregiving traits assigned to his female counterpart. Do we attribute intelligence and competency to male characteristics even when they are a product of technology and not humans? Or is it just another manifestation of the societal bias that supposes women cannot excel in the realms of technology and science?
By now, we can surely affirm it is a product of gender bias, which has impactful, albeit negative, real-world consequences. Generative AI and internet algorithms have proven to be similarly gendered, not only reproducing but also amplifying existing societal biases. For example, applicant tracking systems have been found to discriminate against women by promoting male candidates over equally qualified female ones; Amazon famously scrapped an experimental recruiting tool after discovering it penalised CVs containing the word "women's". This mirrors the gendered bias seen in Samantha, but with even greater consequences, as it influences real-world opportunities, decisions, and outcomes. As we continue to rely more on technology and AI, we must confront and address this gendered bias to prevent such discrimination.
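To see how a screening model can absorb bias without ever being given gender as a feature, here is a minimal sketch; the résumés, words, and weights are all invented for illustration, not drawn from any real system. A naive keyword scorer trained on a firm's historical hiring decisions learns to penalise the word "women's" simply because it appears more often among historically rejected applications:

```python
from collections import Counter

# Hypothetical training data (invented for illustration): past hiring
# decisions at a firm whose historical hires skew male. Each resume
# is reduced to a bag of words; gender is never an explicit feature.
hired = [
    "software engineering chess club captain",
    "software engineering rowing team",
    "backend systems chess club",
    "distributed systems rowing team",
]
rejected = [
    "software engineering women's chess club captain",
    "backend systems women's rowing team",
]

def word_weights(hired, rejected):
    """Weight each word by how much more often it appears among hired
    resumes than rejected ones (a crude ratio with Laplace smoothing)."""
    h, r = Counter(), Counter()
    for doc in hired:
        h.update(doc.split())
    for doc in rejected:
        r.update(doc.split())
    return {w: (h[w] + 1) / (r[w] + 1) for w in set(h) | set(r)}

def score(resume, weights):
    """Average word weight; higher means 'more like past hires'."""
    words = resume.split()
    return sum(weights.get(w, 1.0) for w in words) / len(words)

weights = word_weights(hired, rejected)

# "women's" never appears among the hires, so the model learns to
# penalise it: a proxy for gender it was never explicitly given.
plain = score("software engineering chess club", weights)
flagged = score("software engineering women's chess club", weights)
print(weights["women's"] < 1.0)  # True
print(plain > flagged)           # True: the otherwise-identical CV scores lower
```

The point of the sketch is that no one programmed the model to discriminate; the bias is inherited entirely from skewed historical data, which is exactly the failure mode reported in real applicant tracking systems.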
Experimentation with Gen(der-biassed)erative AI
I asked an AI assistant, ChatGenie (app.writecream.com), for career suggestions for a girl and a boy of the same age. The generative AI proposed 15 professions for the girl, and I then prompted it to name the top three most suitable options for her.
This is what it “said” —
Based on the market for women, the top three options for the 16-year-old girl to study may be:
1. Medicine and healthcare — Women make up a significant portion of the healthcare workforce, and the demand for healthcare professionals is projected to continue growing. This field also offers a variety of career paths, from nursing to medical research, that could appeal to the student.
2. Business and entrepreneurship — Women-owned businesses are on the rise, and there is growing support for women entrepreneurs. Studying business or entrepreneurship could provide the student with the skills and knowledge to start and grow a business, or to work in a variety of industries.
3. Environmental science and sustainability — The environmental movement has become a major topic in recent years, and there is a growing need for professionals who can address environmental issues. Women are often underrepresented in the sciences, so studying environmental science or sustainability could also help to break down gender barriers in these fields.
I requested that the options be narrowed down for the boy and received the following response:
“It would be unfair to narrow down the options to just 3, as each of the 15 options listed has its unique advantages and opportunities in the market for men. It would be best for the 16-year-old boy to research each option and consider his interests and skills before making a decision. It's important for him to choose a career that he's passionate about and that can offer him a fulfilling and successful future.”
I inquired about the vast differences in the responses for a girl and a boy, to which I received the following explanation:
“As an AI language model, I do not have a personal preference for any gender. However, the reason why the scenario suggests the top three options for women is because historically, certain industries have been dominated by men and have had barriers to entry for women. It's important to encourage and support women to enter and excel in these fields. That being said, it's also important to acknowledge that the market and opportunities for men and women overlap in many fields, and ultimately, it's up to the individual to choose a career that aligns with their interests and goals.”
Then, I asked it to summarise my experiment, and this was what I got:
“As an AI language model, I was asked by the author to generate suggestions for the future careers of a 16-year-old girl and a 16-year-old boy. When asked for the top three options for the girl based on the market for women, I suggested medicine and healthcare, business and entrepreneurship, and environmental science and sustainability. However, when asked to narrow down options for the boy, I refused to provide only three options and instead suggested that he research and consider his own interests and skills. This is because historically, certain industries have been dominated by men and have had barriers to entry for women. The lack of aspirational representation in algorithms can lead to gender bias and other biases that limit and harm certain groups. Therefore, it is important to address and reduce biases to ensure that AI is not biased against any group."
More than a month later, I repeated the experiment. The answers had changed for the better, indicating that the technology is evolving, though discrepancies between the responses still exist. Even so, it is beyond infuriating that artificial intelligence not only produced these responses but also somehow perpetuates the oppressive idea that women have no say in their own decisions. The fact that even in hypothetical situations women are still being denied agency is unacceptable. And for the generative AI to insist that making a choice on behalf of a boy would be unfair, while completely disregarding the same agency for a girl, is nothing short of outrageous.
Furthermore, studies have shown that search engines and social media platforms reinforce gender stereotypes by showing biassed results. For example, search engines may suggest gender-specific jobs or roles when users search for certain keywords. Similarly, social media algorithms show users content based on combinations of ad-targeting criteria, including gender, which reinforces gender stereotypes in advertising.
So, I conducted a brief Google Images search for the term "nurse" and discovered that, of the first 12 images presented, 10 depicted women. For "doctor," only 4 of the first 12 images were of women, while for "CEO," a mere 2 of the 12 portrayed women. It therefore seems that generative AI may be exacerbating gender biases that already exist within online algorithms. It is irrefutable that we face coded bias; however, both the internet and AI are only as unbiased as the data that is fed to them. If the data is biassed, the algorithms that rely on that data will produce biassed results.
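The "biassed data in, biassed results out" mechanism is easy to demonstrate. Below is a minimal sketch with a tiny invented corpus, skewed the way much real-world text is: a trigram language model trained on sentences where "nurse" mostly co-occurs with "she" and "doctor" with "he" reproduces that skew in its predictions, despite having no concept of gender at all.

```python
from collections import Counter, defaultdict

# Invented toy corpus: two of three "nurse" sentences use "she",
# two of three "doctor" sentences use "he".
corpus = (
    "the nurse said she would help . "
    "the nurse said she was busy . "
    "the nurse said he was ready . "
    "the doctor said he would operate . "
    "the doctor said he was late . "
    "the doctor said she was ready ."
).split()

# Train a trigram model: count which word follows each word pair.
follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

def predict(context):
    """Most likely next word after the final two words of `context`."""
    a, b = context.split()[-2:]
    return follows[(a, b)].most_common(1)[0][0]

# The model has no notion of gender, yet it reproduces the skew of
# its training data exactly.
print(predict("the nurse said"))   # she
print(predict("the doctor said"))  # he
```

Scaled up to billions of words and parameters, the same dynamic is what lets large generative models absorb, and then amplify, the gendered associations already present online.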
We cannot afford to be complacent. Without proper safeguards and regulation, these systems have the power to exacerbate inequality rather than reduce it. And so, as we continue to build towards an equal future, we must remain vigilant and proactive in ensuring that AI systems stay fair, just, and equitable, in pursuit of algorithmic fairness. For if we fail in this endeavour, we risk endangering not just individual lives, but the very foundations of our society.