Neurotech Has Its Own Ethical Considerations Outside the AI Ethics Landscape

The growing field of AI ethics is generating increasingly robust discussion, one that is branching into varied ideas across technology policy and regulation.

Yet as those important conversations unfold, IBM researchers Sara E. Berger and Francesca Rossi (both AI Ethics global leaders at IBM's Thomas J. Watson Research Center in New York) have co-authored an article for the Association for Computing Machinery (ACM) arguing that the focus of AI ethics should expand to incorporate key aspects of neurotechnological ethics.

You can read their contributed article, "AI and Neurotechnology: Learning from AI Ethics to Address an Expanded Ethics Landscape," in full on the ACM website. In it, both researchers make a strong case for an AI ethics that includes ethical considerations related to the human brain and its "neurodata."

Let's quickly define these important terms. "Neurotech" refers to technologies that interface with the brain and nervous system; Berger and Rossi (2023) classify neurotech as invasive or non-invasive across three specific categories. The other term, "neurodata," refers to data collected from or about the brain and nervous system (Hallinan et al., 2014).

More ethical thoughts worth noting

However well-intentioned Berger and Rossi (2023) — and researchers like them at other tech companies — may be in helping us expand our understanding of AI ethics and its connections to neurotech ethics, it's important to note that the practice of neurotechnology has its OWN set of ethical concerns, completely independent of AI.

As Müller and Rotter (2017) point out, sticking wires onto (or inside) people's heads — otherwise referred to as "technological interventions" — to see how their brains work is a discipline fraught with its own ethical considerations, including unintended impacts on a subject's brain, their person, their sense of identity, or their personality.

For years, neurotech — a field still evolving and showing ever greater promise for treating neurological and psychiatric disorders — was generally limited to treating brain disorders such as Parkinson's disease and epilepsy.

Yet as technology companies continue exploring how to scale and improve their proprietary AI's performance en masse, neurotech tools and researchers are now being deployed for non-treatment purposes.

Closing thoughts

The AI bandwagon is here and the technology holds great promise.

But it's crucial to keep a close, ongoing eye on a much wider array of ethical issues and challenges, especially when we're talking about exploiting the mechanics of human brain function to develop profitable, non-medicinal technologies that, by and large, don't always set out to enhance or complement our thinking. In fact, too many AI scenarios instead set out to reduce, compete with, and in some contexts replace significant human cognitive activity.

Postscript

Lastly, I want readers of this post to keep in mind that AI ethics lingo can sometimes overshadow the human side of the human-machine equation.

When discussing AI ethics, please do keep the following in mind:
  • It's not just about "data" (or "Big Data") ... it's also about our "neurodata;"
  • It's not just about "safety" but also our subjective well-being (generally defined as the health, safety, happiness, and comfort of individuals and/or communities); and lastly,
  • It's not just about "control" and "access" but also our human autonomy and human agency.
Until next time,

(PS Join me on my NEW Slack Channel to ask me questions and keep the convo going!)

🤖🧠🤔

• • • • • • • • • • • • • • • • • • 

References

Berger, S. E., & Rossi, F. (2023). AI and neurotechnology: Learning from AI ethics to address an expanded ethics landscape. Communications of the ACM, 66(3), 58-68.

Hallinan, D., Schütz, P., Friedewald, M., & De Hert, P. (2014). Neurodata and neuroprivacy: Data protection outdated? Surveillance & Society, 12(1), 55-72.

Müller, O., & Rotter, S. (2017). Neurotechnology: Current developments and ethical issues. Frontiers in Systems Neuroscience, 11, 93.
