Illustration by Rawand Issa for GenderIT

The development and adoption of Artificial Intelligence (AI) technologies in the MENA region may not be accelerating as much as in other parts of the world. However, given the dire human rights situation, rising oppression, and lack of independent, democratic, and robust institutions (data protection authorities, parliaments, judiciary) that protect people from the harms of these technologies, a wider-scale adoption of AI would only further erode human rights.

In May, Lebanese digital rights organisation SMEX published an overview of the adoption of AI across the Gulf, the Levant, and North Africa by researcher Sarah Cupler, a PhD candidate at the University of Melbourne researching the regulation of algorithmic decision-making in policing. She found that while there have been calls from governments and the private sector to adopt AI, which remains in a “preliminary phase” for most countries in the region, “there is either no regulation or merely soft non-binding principles.”

Worrisome adoption of AI in the region

Despite this slow regional takeoff, Israel and countries of the Gulf Cooperation Council (GCC) are leading in AI adoption, particularly in policing and surveillance. This poses additional challenges to a digital rights movement in the region that is already overstretched.

In Palestine, Israel is committing crimes against Palestinians on a daily basis, including forced evictions, torture, land seizures, unlawful killings, and severe restrictions on movement. It has been deploying AI to control and oppress Palestinians, for example, through facial recognition and predictive policing. A May 2023 report by Amnesty International, titled “Automated Apartheid,” investigated how Israel extensively uses facial recognition software “to consolidate existing practices of discriminatory policing, segregation, and curbing freedom of movement, violating Palestinians’ basic rights.”

One such tool is Red Wolf, deployed at military checkpoints in Hebron. The tool is connected to a database containing personal information about Palestinians, including where they live and their family members. At checkpoints, the tool scans Palestinians’ faces without their consent and compares the biometric data collected with entries in the database. Based on that, it decides whether a person is allowed to pass through the checkpoint or not.

Gulf states, on the other hand, are notorious for their human rights abuses. To crush dissent, these countries are willing to go to great lengths. For example, in 2018, Saudi journalist Jamal Khashoggi was assassinated in the Saudi Consulate in Istanbul by Saudi government agents, who later dismembered his body to conceal this heinous crime. Surveillance is rife in the region, with some countries deploying some of the world’s most advanced tools and tactics to keep tabs on residents and visitors. Increasingly, AI is being deployed to monitor and police people.

Gulf states are investing heavily in the technology industry, internationally and domestically, including on AI and other emerging technologies. While this big spending is part of national strategies aimed at diversifying their oil-based economies, the likelihood of AI technologies being used to control and oppress populations is not far-fetched.

During the 2022 FIFA World Cup, Qatari authorities deployed 15,000 cameras equipped with facial recognition across stadiums and streets of Doha, despite concerns about mass surveillance and the country’s weak data privacy protections. In the UAE, there is “a large-scale adoption” of facial recognition technology. Already, back in 2018, the Dubai police had installed tens of thousands of facial recognition cameras ahead of Expo 2020, and in 2020, the technology was added to police patrol cars. Facial recognition is also used to “secure” public transport and for border checks.

Even more concerning is whether, and how, the GCC, particularly Saudi Arabia and the UAE, might leverage AI to exercise influence in the rest of MENA, given their record of using digital oppression tactics to undermine democratic movements and human rights elsewhere in the region.

In the face of all this, as advocates, organisations, and a movement at large, where are we in terms of readiness to tackle AI and the human rights threats it brings to the region?

Civil society’s AI readiness

For the purpose of this article, by AI readiness, I mean the preparedness of civil society and the digital rights movement to engage in AI policy and design, and to address its human rights risks and impacts, for example, through advocacy, research, investigations, awareness raising, building our own tools and models, and experimenting with new AI technologies as they emerge.

Having spent more than 10 years researching and writing about digital rights, and collaborating and engaging with different people, organisations, and donors to protect human rights in the use and development of technology, I am concerned that as advocates in the region we are far from being prepared to address the current and future threats brought about by AI.

Over the past decade, since the Arab uprisings of 2011, we have been responding to one threat after the other: repressive laws, judicial harassment and violent threats against activists and human rights advocates, surveillance, platform threats, internet shutdowns, media restrictions, digital divide and accessibility, online gender-based violence, etc.

Throughout the years, our movement lost some of its most forward-looking advocates like Palestinian-Syrian open source software developer Bassel Khartabil, who was arrested, tortured, and killed by the Syrian regime, and Tunisian human rights advocate and early blogger Lina Ben Mhenni, who died following a long illness in 2020.

As a movement, we are very few, under-resourced, and many of us are exhausted and burned out. Some of us are in exile or self-exile, if not in jail like Egyptian activist Alaa Abdel Fattah.

In comparison, in Europe, one network, the European Digital Rights network (EDRi), has 38 local organisations as members. My aim here is far from making simplistic comparisons as the civic space in Europe is more favourable to human rights work, while in MENA, we face a multitude of threats that prevent us from effectively organising and advocating, and can even put some of us at risk of jail and physical violence. But, my purpose is to illustrate how under-resourced we already are compared to the scale of the threats and challenges we are facing. AI will make things even more challenging for the movement.

To cope with the current and future challenges of AI and other emerging technologies, we need to start incorporating more long-term strategic thinking in our work. This is not always easy in the region given the multiple threats that keep emerging. However, if we do not start dedicating enough space and time to this, we are going to find ourselves spending a lot of our resources reacting to developments around (and beyond) us and the wishes of donors. This has the risk of distracting us and preventing us from driving our own agendas.

As an example of this, I often cite mis- and disinformation, interest in which peaked in the wake of the 2016 U.S. election and during the pandemic. As a result, donors in Europe and North America started spending a lot of funds on this issue. There is no doubt that there are very harmful aspects to disinformation, particularly when it targets vulnerable communities and people (for example, gendered and racialised disinformation, and disinformation targeting journalists and human rights defenders). But was there a need for digital rights groups to prioritise COVID-19 disinformation when other health-focused organisations could have taken up that task? What impact, if any, did those programs and projects actually have in advancing human rights online and offline in the region?

I am concerned that the hype about AI and the concerns and conversations in Europe and North America might once again frame our focus. As a result, digital rights groups and advocates in the region need to get together to develop a shared AI agenda that is in line with our region’s own priorities and threats and not what Western donors decide is more relevant to us through their own lens and concerns.

In addition, organisations need to dedicate resources (budgets and time) so that their staff are able to keep up with developments in AI and other emerging technologies, through courses, workshops, subscriptions to publications, and enough time to read. People need time and space to learn and think so that when the time comes for them to advocate, conduct research, or build AI programs, etc., they are in a position to effectively do this. We cannot expect people to start immediately and adequately engage on an emerging issue if they have not previously had enough time to explore it. Donors should support such efforts and learning spaces. They also need to allocate resources so that organisations can hire and retain skilled employees who can engage in the field of AI and emerging technologies.

Finally, growing and sustaining this movement is essential. We need the fresh perspectives and innovative ideas of younger and dynamic people who can keep this movement going and move it forward. What SMEX is doing with its fellowship program, inaugurated this year, is one way to grow the movement and make it more inclusive. As part of the Mariam Al-Shafei Fellowship on Technology and Human Rights, named in homage to the organisation’s former Knowledge and Impact Manager who passed away after a long illness last year, five women, three of them in their early twenties, are currently working on a range of topics to advance human rights in the digital space, from access and digital safety to disability rights and Sexual and Reproductive Health and Rights (SRHR). We need more of these opportunities to include and involve people of diverse disciplines and experiences, more women, and younger generations.

AI may have been slow to take off in large parts of the MENA region, with extreme disparity in adoption among countries. Yet its fast-paced development and deployment in policing and surveillance, namely by the Israeli Occupation and the GCC, pose serious human rights risks. As advocates, we need resources to acquire the knowledge, skills, and tools to effectively engage with AI and its impacts on human rights. We should also consider drafting a shared AI agenda that brings to the fore the region’s most urgent needs. Otherwise, we risk getting stuck with conversations framed in Europe and North America, and the perspectives of donors will shape how we engage on the topic.
