[Image: abstract drawing of different apps interconnected]

This piece is the second in a series where Julia Keseru explores the connection between our online systems and bodily integrity, and the long-term effects of digital innovation on our collective well-being.

Three years ago, I gave birth to my daughter through a complicated delivery. It was one of the most intense moments of my life – an experience that required me to be my most vulnerable self, while also demonstrating incredible strength.

Luckily, I had the privilege of going on this journey with a doctor who understood the profound complexity of childbirth. When my delivery went astray, she told me she wanted to discuss options. In reality I didn't have many – for specific reasons, an emergency C-section was my only alternative. And yet, when she offered to walk me through the possible scenarios, I felt a sense of empowerment that I had never really felt before – certainly not in a moment like that.

Since then, I've been trying to understand what exactly happened there, and why her behaviour helped turn an intense medical experience into one of the most empowering moments of my life. At first, I thought it was the sense of autonomy – the ability to influence what would happen to my body. Later, I realised it was something else too: her respect for my bodily integrity. This doctor didn't just seek my formal consent to move forward with surgery; she genuinely wanted to make sure that I felt intact, complete and whole the entire time.

From a human rights perspective, bodily integrity is defined as the inviolability of the physical body – our right to refuse any form of physical intrusion. The concept plays an increasingly important role in medical ethics too, signalling a shift from patient autonomy – the idea that we should be able to decide what happens to our bodies – towards a broader, more nuanced approach to patient well-being. (More on that in a bit.)

One pattern is striking, though: while respect for bodily integrity is more or less becoming the norm in our physical interactions, the concept has been largely absent from the digital world. Think of facial and emotion recognition, for instance – emerging technologies that build on machine learning techniques to verify our identity or analyse our emotions. These technologies take a rather aggressive approach to taking what they want (the digital footprints of our bodies), and such non-consensual methods are actually quite typical in the digital realm. In fact, as I argued in the first piece of this series, our online world is predominantly made up of coercive and intrusive practices that tend to bypass consent completely: think “revenge porn”, digital stalking, “doxing”, or even Zoom-bombing.

But how exactly did predation become the digital default?

Predation in the name of innovation

Back in 2017, techno-sociologist Zeynep Tufekci gave a powerful TED talk on the evolution of algorithms in online advertising. She explained how persuasion architectures (the organisation of products and services in ways that attract consumer attention) became more and more sophisticated thanks to the large troves of information about our thoughts and behaviours. She also warned about the consequences of the “digital dystopia” we had been building in the past decades.

One of the most disturbing examples in her talk was an imaginary algorithm promoting Las Vegas tickets to people with bipolar disorder. The underlying assumption behind Tufekci’s algorithm was that people with bipolar disorder have regular shifts in their mood and energy, ranging from periods of feeling extremely happy (known as the manic phase) to feeling very sad (the depressed phase). When they enter their manic phase, they tend to become over-spenders, and sometimes even compulsive gamblers. And since the onset of mania is now shockingly easy to detect from the amount of information we share about ourselves online, people with such disorders have become easy targets for aggressive online advertising.
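To see just how little technical sophistication such a scheme would require, here is a deliberately crude sketch in Python. To be clear: this is a purely hypothetical illustration in the spirit of Tufekci's example, not her algorithm or any real ad platform's code – every signal, threshold and name below is invented.

    from dataclasses import dataclass

    # Hypothetical illustration only: a toy heuristic in the spirit of
    # Tufekci's imaginary algorithm. The signals and thresholds are
    # invented, not taken from any real advertising system.

    @dataclass
    class UserActivity:
        posts_per_day: float     # posting frequency over the past week
        late_night_share: float  # fraction of posts made between midnight and 5 a.m.
        spending_ratio: float    # this week's spend divided by the usual weekly spend

    def looks_manic(a: UserActivity) -> bool:
        """Crude proxy for 'onset of a manic phase', built from behavioural exhaust."""
        return (a.posts_per_day > 20          # sudden burst of activity
                and a.late_night_share > 0.4  # posting deep into the night
                and a.spending_ratio > 1.5)   # spending far above baseline

    def pick_ad(a: UserActivity) -> str:
        # The unsettling part: once the flag exists, targeting follows mechanically.
        return "las-vegas-gambling-package" if looks_manic(a) else "generic-travel-offer"

    user = UserActivity(posts_per_day=35, late_night_share=0.6, spending_ratio=2.1)
    print(pick_ad(user))  # prints: las-vegas-gambling-package

The point is not the details but the shape: once enough behavioural exhaust has been collected, a few lines of thresholding are enough to turn a health condition into an advertising segment.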

Part of the reason why Tufekci’s imaginary algorithm feels so inappropriate is that it abuses the blurry line between what’s happening to our bodies in the “real world” and what’s happening to us in “virtual reality”.

But the truth is that such algorithms can exist precisely because we don’t regard them as potential intrusions — the same way we still don’t think of our online personas as part of our actual “personhood”. And yet, as I argued before, the digital footprints of our bodies aren’t simply content anymore: they are — and should be treated as — an extension of our will and agency. Any intrusion on that integrity (and thus our sense of wholeness) is disruptive to our well-being, regardless of whether it happens to our physical bodies or our “data bodies”.

…digital footprints of our bodies aren’t simply content anymore: they are — and should be treated as — an extension of our will and agency.

Accepting innovation at face value

There is an interesting analogy to be made here between clinical medicine and digital innovation – two areas where new technologies have brought forth great tension between what science can achieve and what is considered appropriate, just and fair. But unlike in clinical medicine, where the evolution of bioethics has given birth to more nuanced norms, there are no such values to guide the creation and use of digital technologies — yet.

Because of that, innovation in the online era is accepted almost at face value, regardless of the human costs. As Ruha Benjamin reminds us, technical fixes that claim to bypass our biases are often seen as magical – in fact, the positive connotation surrounding the word “innovation” has made it almost impossible to criticise anything that is labelled innovative.

As a result, there is now a sea of digital platforms, applications and tools that try to capitalise – with more or less success – on the gap between our “real-life” boundaries and digital norms, all in the name of innovation. Dietary apps targeting people with eating disorders, silicone wristbands that notify your boss about your mood changes, mental health apps selling user data to third parties. In fact, as Shoshana Zuboff argues in her book The Age of Surveillance Capitalism, the very search engines that fuel our internet are built on the notion that our online actions can be rendered as behaviour and sold as “prediction products”.

There is an interesting analogy to be made here between clinical medicine and digital innovation – two areas where new technologies have brought forth great tension between what science can achieve and what is considered appropriate, just and fair.

And yet, when you think of medical interventions, it feels almost surreal to imagine a world where technological progress alone could justify any form of bodily intrusion.

Take Caesarean deliveries, for instance. C-sections have saved the lives of many women and babies whose health was in jeopardy during delivery, and have proven a powerful alternative for women with previous traumatic birth experiences, or who, for any other reason, want to avoid giving birth vaginally. Now imagine a world where doctors could perform C-sections on anyone, at any time – simply because the procedure is deemed effective and innovative. In that scenario, technological progress would no longer be used to prevent or treat disease, but to serve the self-interest of science itself.

Beyond autonomy, towards integrity

To keep such dystopias from becoming reality, patient autonomy has become a key part of medical ethics – a guiding principle that doctors and care providers can refer to whenever they have to make decisions about patient treatment. Autonomy encompasses the practical and legal implications of shifting the agency of decision-making from doctor to patient, putting the emphasis on self-determination and control.

But autonomy as a concept has serious limitations. Just because we sign a form, does that really mean we understand the consequences? What if our well-being or security depends on saying yes? What if we are threatened by the authority of those seeking our consent?

These limitations are even more pronounced when we look again at the world of digital technologies. Just because we are told that our emotions will be analysed in a recruitment interview doesn’t mean that saying no is an option – especially if we are in desperate need of a job. When we are denied the ability to board an airplane because we don’t want to consent to facial recognition, there isn’t really a meaningful way to opt out – unless we can afford to buy a new ticket. As always, such tensions become more profound when those suffering digital intrusions are the people historically most vulnerable and exposed to oppression and abuse. How could someone, for instance, refuse biometric data collection if they are seeking asylum or aid?

The problem with autonomy, as medical researchers and privacy activists argue, is that it makes us too obsessed with the fulfilment of what is prescribed on paper, instead of the moral quality of consent as a process. In other words, we focus too much on stand-alone acts and decisions, instead of trying to encompass the full horizon of what it means to respect the dignity and lived experience of another person.

How could someone, for instance, refuse biometric data collection if they are seeking asylum or aid?

However, even when fully autonomous decisions are not feasible, we still need certain values to guide our interactions with other people’s physical boundaries.

This is precisely where bodily integrity, as a concept, may come in handy.

Stay tuned for the next piece in this series, where I will explore how bodily integrity as a concept can be woven into the design and uptake of digital technologies, and how existing approaches and frameworks like Design Justice or Consentful Technologies can support such a shift in our norms and values.
