

Comment: Should lab-grown brain cells have legal rights?

Published on: 4 November 2022

Writing for The Conversation, Dr Josh Jowitt discusses whether lab-grown brain cells that can play Pong should have legal rights.

Image: Andrii Vodolazhskyi/Shutterstock
Joshua Jowitt, Newcastle University

The story could have been straight out of science fiction – scientists have grown human brain cells in a lab, and taught them to play the video game Pong, similar to squash or tennis. But this didn’t happen on the big screen. It happened in a lab in Melbourne, Australia, and it raises the fundamental question of the legal status of these so-called neural networks.

Are they the property of the team that created them, or do they deserve some kind of special status – or even rights?

This question needs to be asked because the ability to play Pong may be a sign that these lab-grown brain cells have achieved sentience – often defined as the capacity to sense and respond to a world that is external to yourself. And there is widespread consensus that sentience is an important threshold for moral status. Ethicists believe that sentient beings are capable of having the moral right not to be treated badly, and an awareness of the implications of sentience is increasingly embedded in research practices involving animals.

If the Melbourne neurons are sentient, this may mean they are capable of suffering – perhaps through feeling pain or other avoidable discomfort. As there is broad moral consensus that we should not cause unnecessary suffering, this may mean that there are moral limits on what we can do with these neural networks.

It’s worth saying that the team that created the cells don’t think they are there yet. The closed system in which the experiment took place means that, even if we accept the neurons are responding to an external stimulus, we don’t know whether they are doing so knowingly, or with any understanding of how their actions cause certain outcomes.

Image: the game Pong. Grenar/Shutterstock

But given where we are, it’s not beyond the realms of possibility that sentience could be the next milestone. And if this is true, it’s not just ethicists who should be paying attention – legislators should also keep a close eye on this technology.

The legal problem

This is because, since Roman times, the law has classified everything as either a person or property. Legal persons are capable of bearing rights. By contrast, property is something that is incapable of bearing rights. So if we think our neural networks might soon have moral status, and that this ought to be reflected in legal protections, we would need to recognise that they were no longer property – but legal persons. And the case of Happy, an elephant at the Bronx Zoo whom campaigners wanted to transfer to an elephant sanctuary, shows why this is something we should be proactive about.

The New York courts were recently asked whether Happy had a right to freedom, and they said no – because she was not a legal person. For our purposes, the key thing to take from the judgment is this: the courts accepted that Happy was a moral being deserving of rights protection, but they were powerless to act. Changing her legal status from property to person was too big a change for them to make. Instead, it was a job for the legislature – which is choosing to do nothing.

By recognising a moral claim they cannot enforce, the courts – and the law more generally – are perpetuating what they accept is an injustice. This is especially shocking when you consider that the term “legal person” has never meant the same as “human being”. Throughout history, and in legal systems around the world, temples, idols, ships, corporations and even rivers have been classified as legal persons. The term is simply a signifier that its bearer is capable of holding legal rights.

The lesson we can take from this is that we need to future-proof the law. It is better to act proactively to avoid a foreseeable problem than to play catch-up once it has already happened.

And as we’ve said above, this problem is foreseeable with regard to the Melbourne neurons. Even if they’re not sentient yet, the potential is there – and so it is something we should take seriously. Because if we accept that these networks are sentient, and that they have moral status because of this, then it is desirable that the law reflect this and grant protections commensurate with their interests.

This is not a revolutionary claim, and we have been in a similar place before. When IVF technology arose in the 1980s, the law had to confront for the first time the question of the legal status of in-vitro embryos. The approach taken was to convene an inquiry to examine the moral questions raised by the new technology, culminating in the recommendations of the Warnock report. These recommendations formed the basis of the UK’s legislative framework around IVF, which creates a kind of “third status” for these embryos – not full legal persons, but with significant restrictions on what can be done to them because of their moral status.

The influence of the Warnock report is still visible today – so there is no reason why a similar approach could not be taken to the issues raised in Melbourne. Yes, there are many unanswered questions about the capacities of these neural networks, and we may very well conclude that they are not deserving of legal protection just yet.

But there are certainly enough questions around this technology to warrant an attempt at finding an answer.

Joshua Jowitt, Lecturer in Law, Newcastle University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
