An Ethical Word of Caution About Neuralink's In-Human Brain Studies

Is Elon Musk an ethical steward for in-human-brain studies?

Though the Physicians Committee for Responsible Medicine (PCRM; an animal-welfare advocacy group) filed a February 2022 complaint against Neuralink that resulted in no official findings (PCRM claimed Neuralink carried out deadly experiments on 23 monkeys between 2017 and 2020), the incident does raise some concerns.

For example, PCRM cited emails and public records showing that in 2019, a Neuralink surgeon closed holes drilled into a monkey's skull with a sealant that had not been approved by the animal research oversight panel.

So does "no official findings" equal "no animal welfare violations"?

I suppose we'll never really know.

But these claims, though not substantiated by the U.S. Department of Agriculture (USDA), still create a rather odoriferous cloud of ethical concern; not just about Musk and Neuralink but about any entity that tests monkey and pig brains when human ones aren't available.

(Sidenote: At the time of this writing, a federal probe is still underway into Neuralink's alleged violations of the Animal Welfare Act, which governs how researchers treat and test some animals. One has to wonder, though: we humans ARE ALSO animals, are we not? We belong to the phylum Chordata and, as vertebrates, have backbones; we share many features with other mammals. So if our fellow monkeys' and pigs' brains are unethically tested, how ethically can we expect in-human brains to be tested by Neuralink?)

Speaking of the FDA approval, Neuralink said it "represents an important first step that will one day allow our technology to help many people," and offered no details of the planned studies.

Another cause for concern

Musk has proven to be a developer who is not especially bound by ethical concerns in his quest to AI-tize our world.

One must remember that Musk himself was quick to fire Twitter's own AI safety team back in November 2022, just days after he acquired the company. His stated position has been that such teams were glorified content moderators standing in the way of free speech.

No matter how you feel or think about such debates, firing an AI safety team is an act that treats ethics and safety as more of a hindrance than an advantage.

Now Musk is moving ahead and creating his own AI large language model (LLM) at X.AI to compete with what he calls "woke ChatGPT." This means he's drawing on the full breadth of Twitter data (tremendous volumes of content generated by Twitter's user base), which he now owns, to develop the new X.AI model.

His reference to ChatGPT as "woke" offers some perspective on what an X.AI LLM will and won't be or do.

And with the FDA's recent approval of Neuralink's in-human brain testing, I suspect most any data he gleans from those studies will (at some point and in some way) be used to further optimize his forthcoming humanoid robots, currently in development.


Legions of humanoid robots, by the way, that Musk predicts will sell for roughly $10K or more and will be in nearly every household within the next ten or so years.

With so much demand for human brain-based intel, these are rather challenging times to be human, it seems.

We must stay vigilant and keep advocating to protect our fragile human brains from all this commercialized mechanization afoot.

Until next time,



--
Mayra Ruiz-McPherson, PhD(c), MA, MFA
Advancing Humanity In An AI-Enabled & Media-Laden World
Cyber/Media Psychologist (dissertation in progress)







