“What’s on your mind?” It’s a common question that, with the help of new technologies, may soon yield more personal information than we’d like. Neurotechnology devices can interact directly with the brain to extract information about our thoughts and behaviours and help stimulate certain physical and mental responses. While neurotechnology offers certain health and safety benefits, it also raises significant legal and ethical concerns. In this episode, Jennifer Chandler, a professor at the University of Ottawa’s Centre for Health Law, Policy and Ethics, discusses neurotechnology and emerging cognitive rights such as mental privacy, personal identity, and freedom of thought.
You can learn more about her research at jenniferchandler.ca.
Info Matters is a podcast about people, privacy, and access to information hosted by Patricia Kosseim, Information and Privacy Commissioner of Ontario. We dive into conversations with people from all walks of life and hear stories about the access and privacy issues that matter most to them.
If you enjoyed the podcast, leave us a rating or a review.
Hello, I’m Patricia Kosseim, Ontario’s Information and Privacy Commissioner, and you’re listening to Info Matters, a podcast about people, privacy, and access to information. We dive into conversations with people from all walks of life and hear real stories about the access and privacy issues that matter most to them.
Hello listeners, thanks for tuning in. Imagine a world where we can peer into the average person’s mind to read their emotions, eliminate painful memories or cure addictions, or a world where employers can track a person’s mood, level of attention or concentration on the job, or where a person’s thoughts can be examined by others to learn their political beliefs, or cross-examined and used as evidence in a criminal proceeding. For anyone who thinks we’re still a long way from these possibilities, well, we’re a lot closer than you think. Neurotechnology is a rapidly developing field. It includes devices that can interact directly with our brain or nervous system to read, control or even stimulate certain reactions. A recent report by the United Nations estimates that research and patents for neurotechnology devices have increased more than twentyfold in the last 20 years. Neuralink, a tech startup founded by Elon Musk, recently received approval from the US Food and Drug Administration to begin human testing of a tiny brain implant that can actually communicate back and forth with computers directly through brain activity alone.
There’s no doubt that neuroscience and neurotechnology can be immensely beneficial to overcome paralysis, alleviate symptoms of Parkinson’s or epilepsy, even help blind people see or deaf persons hear. But these technologies come with unique privacy and ethical concerns, particularly if they’re used to read or interact with, and even alter, the structure of our brains to change our thoughts and memories in ways that may not be wanted or beneficial. You can see how neurotechnologies begin to run up against our fundamental conception of what makes us human: personal identity, freedom of thought, even mental privacy.
In this episode, we’ll be exploring the legal, ethical and regulatory implications of neurotechnology. My guest is Jennifer Chandler. She’s a law professor at the University of Ottawa, affiliated with the Centre for Health Law, Policy and Ethics, and cross-appointed to the university’s Faculty of Medicine. Jennifer, welcome to the show and thank you for taking the time to join us today.
Thank you so much for having me. I’m really pleased to be here having this discussion with you.
So to start us off, Jennifer, can you tell us a little bit about yourself and how is it that a law professor like yourself has come to focus on brain sciences and neurotechnologies?
Since I was just starting out, I’ve been deeply interested in the connection between the brain on the one hand, and the mind, mental experience and behavior on the other. It remains incredibly mysterious and poorly understood how our subjective experience of the world, of ourselves and our behavior is somehow produced from this biological entity within our skull. I started out in the sciences in my earliest studies, and I retained that interest as I went forward into law. Today, I teach mental health law, and I research the law and ethics of brain technologies and other advanced biomedical technologies. And it all comes back, I think for me, to what is fundamental and primordial: how do we understand ourselves, how do we understand each other, what produces our behavior? As we develop more and more powerful technologies to observe brain activity and start to make those linkages between brain activity and mental experience or behavior, we don’t yet fully understand how the one produces the other, but we get closer to that ultimately fascinating and somewhat mysterious thing that is the connection between brain and mind.
If I were to ask you in very simple terms, how do you explain neurotechnology to say, your neighbor on the street?
Yeah. Well, I would say it’s some kind of technology that interfaces or connects directly with the nervous system. And when I say nervous system, you can identify the brain, obviously, but you can also talk about the spinal cord and the nerves, which would be the peripheral nervous system. You can also think of the sensory nervous system, the nerves that connect the ears, and the retina or the eye, to the brain. All of these things are part of the overall nervous system. So basically, we’re talking about a technology that interfaces directly with some aspect of that nervous system.
Fascinating. So we’re not talking about expressing ourselves through our voice or hand gestures, we’re talking about connecting directly to the brain to find out what’s going on inside, what we’re thinking or feeling.
If we think about the interfaces, we can identify three categories there as well. One you could say is an output interface: it’s something that detects information from the brain. The second category would be an input interface, something that stimulates the brain or puts information, so to speak, into the brain. And a third category would be a kind of technology that does both; it’s bidirectional. And you might be sitting there thinking, “Well, how do you get information into the brain?” Well, there’s a whole bunch of different ways it can be done. You can deliver stimulation by magnetic fields, by electric current, by ultrasound. There are lots of other potential possibilities, such as optical stimulation with light. Now, this is being done in mice, which can be modified so that their cells respond to light; it’s not something that can be done at present in human beings. And these kinds of stimulation can be delivered from the outside on the surface of the skull, just below the skull on the surface of the brain, or through electrodes that go deep into the brain. So these are all ways of getting information or stimulation into the brain.
And as for the output side of things, how do you detect information from the brain? Again, you can use direct measures that pick up electrical activity. You can use sort of indirect measures, and what they do is they pick up other things about the brain’s activity. They detect blood flow, they detect the flow of glucose within the brain. The connection to activity is the presumption that an active part of the brain will have more blood flow into it, it will have more glucose flow into it, and so you make the inference from the presence of the blood flow or the glucose, that that’s an active part of the brain.
Can you give us some examples of how neurotechnologies are used in the healthcare context or in the medical context?
There is a device on the market already to address epilepsy. And epilepsy is something where a seizure will occur from time to time, but in between there’s no need to stimulate the brain. And so, what this device does is it monitors the brain, and detects when a seizure is about to happen, and then only stimulates to prevent the seizure at that time. So that’s an example of why it might be useful to both monitor and then deliver stimulation as needed.
But there’s a whole range of other neurological conditions where this is used as an approach. Parkinson’s disease and other movement disorders, where people have difficulty with movement, is an area where the science is well advanced, and many tens of thousands of people are already being treated with deep brain stimulation, or DBS for short. DBS is the insertion of an electrode into a specific part of the brain so that there can be stimulation, which alleviates the movement-related symptoms.
Where it’s a little bit more speculative is in treating psychiatric conditions. So there’s a lot of research right now for trying to use deep brain stimulation for a very broad range of different kinds of mental health conditions, whether it’s mood conditions like depression or conditions like addiction, anorexia, schizophrenia, obsessive-compulsive disorder, PTSD, and a whole range of different problems that people might encounter.
Another sort of input technology is assistive devices for people with sensory impairments, such as the retinal implant and the cochlear implant. So there’s a whole range of different contexts in which people might find these brain stimulation technologies to be helpful.
Now, if I switch to the output technologies, where we’re reading information from the brain, there’s a whole different set of potential health issues being addressed. And the idea here is that there will be a device that picks up brain activity, and then decodes or interprets it to allow for some kind of output. So if we think of someone who’s paralyzed, they may, through thought, voluntary mental activity that’s then picked up by the brain-computer interface, be able to move their wheelchair, or move a robotic arm or some other object in the environment. And so, what this is offering in essence is a kind of assistive device to restore movement and independence to some degree for people who have a major motor impairment. And one use case is for people whose movement impairment is so severe that they cannot communicate efficiently. And so, some of the really interesting, exciting work that’s being done is figuring out how to decode imagined speech from brain activity.
So it can help people move, it can help people communicate, it can help people hear, it could help people see. What about in the employment sector, Jennifer? Are employers using these technologies and if so, how are they using them, say, to improve safety and productivity in the workplace?
What is being used in this context is a kind of brain-computer interface that is non-invasive. It’s based on EEG, or electroencephalography, which takes the form of little electrodes on the surface of the scalp that pick up patterns of electrical activity in the brain. It’s not very specific; it tends to pick up waves of activity depending upon different basic mental states. But there’s interest in the employment sector, and I would mention also in the educational sector, in using this kind of thing to monitor, for example, alertness, focus, and concentration.
And you can quickly see that this might be really useful in contexts where a lack of alertness can be very dangerous. So it might be a safety measure. It could also be used to study workflow: how long can people maintain concentration before it needs to be changed, or what are the different ways to help people maintain concentration? And evidently, you can see the upside, but you can see the downside that comes along with it, because when people are being monitored in this way, it usually subtly, or maybe not so subtly, changes behavior if you know yourself to be under surveillance. So it raises questions about liberty, and just how much information employers, or teachers, or anyone should have about us on an ongoing basis.
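The kind of alertness monitoring described here is often based on the relative power of different EEG frequency bands. A minimal sketch of that idea follows; the band-power inputs, the function names, and the 0.5 threshold are all illustrative assumptions, not a real EEG pipeline (real systems would first estimate these powers from raw scalp signals via spectral analysis):

```python
# Sketch of a band-power-based alertness score. Drowsiness is commonly
# associated with a shift of EEG power toward slower rhythms (theta,
# alpha) relative to faster beta activity. All values are hypothetical.

def alertness_ratio(theta_power: float, alpha_power: float, beta_power: float) -> float:
    """Ratio of fast (beta) to slow (theta + alpha) band power.
    Higher values suggest greater alertness."""
    return beta_power / (theta_power + alpha_power)

def is_drowsy(theta_power: float, alpha_power: float, beta_power: float,
              threshold: float = 0.5) -> bool:
    """Flag a possible lapse in alertness when the ratio falls
    below an (assumed, illustrative) threshold."""
    return alertness_ratio(theta_power, alpha_power, beta_power) < threshold
```

For example, a reading dominated by slow-wave power (say theta=4, alpha=4, beta=1) would be flagged, while one dominated by beta activity would not.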
I also understand that there are electrodes in construction helmets for construction workers to gauge their level of fatigue on the job site, which can be another example of a dangerous situation where this kind of insight into the level of alertness can be extremely helpful, not only for productivity but for safety. Are there any brain technologies being used as investigative tools in the area of law enforcement?
There is a particular application that has gotten some uptake in investigation in law enforcement. It’s to my knowledge, not used in Canada, but it’s often called brain fingerprinting. And you can kind of think about it a little bit as having a somewhat similar function to the polygraph or lie detector test, except it works by looking directly at brain activity. So basically, what it involves is using EEG to pick up those patterns of electrical activity in the brain. It works by detecting something called the P300 response, and this is a response that happens 300 milliseconds after the presentation of a stimulus. And it works because there is a particular characteristic response that the brain makes when you recognize something, in essence.
So the theory for investigative purposes is that there may be information about the crime known only to someone who was there, who was involved in the crime in some way. And so, what one does is present to a suspect or someone else information that the person knows. So you get a kind of a baseline, you know what their P300 looks like when they recognize something, and then you present information that they don’t know, so you can see what the absence of recognition looks like. And then you present specific information carefully chosen to only be known by the perpetrator, and then see which of those two patterns it elicits. And from that you conclude that they have knowledge about the crime. To my knowledge, it’s only systematically being used widely in India, with some little sort of reports of other investigative and police forces being interested in it in a couple of other countries.
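The comparison logic described here, sometimes called a concealed-information test, can be sketched roughly as follows. The function names, the amplitude values, and the nearest-baseline decision rule are illustrative assumptions, not a description of any real brain-fingerprinting system:

```python
# Sketch of the decision logic behind a P300-based recognition test:
# compare the response evoked by crime-specific "probe" stimuli against
# two baselines measured from the same subject.

def mean(values):
    return sum(values) / len(values)

def classify_probe(known_amps, unknown_amps, probe_amps):
    """known_amps: P300 amplitudes for stimuli the subject surely recognizes.
    unknown_amps: amplitudes for stimuli the subject cannot know.
    probe_amps: amplitudes for details only the perpetrator would know.
    Returns which baseline the probe response more closely resembles."""
    known_mean = mean(known_amps)
    unknown_mean = mean(unknown_amps)
    probe_mean = mean(probe_amps)
    # A probe response nearer the "recognized" baseline suggests the
    # subject recognized the crime-specific information.
    if abs(probe_mean - known_mean) < abs(probe_mean - unknown_mean):
        return "recognized"
    return "not recognized"
```

With illustrative amplitudes, probes evoking responses like the known-stimulus baseline would be classified as recognized, and probes resembling the unknown-stimulus baseline as not recognized.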
So a modern day polygraph, essentially. What about this brain fingerprinting data? Has it been used in court cases?
There have been a couple of cases in the US of attempts to introduce it into court. However, it’s always run up against concern about whether it’s a valid form of evidence, so it hasn’t gained much traction in the United States. In Canada, I’m not aware of it having been used, but as I mentioned, in India we’ve got a whole bunch of references to it in the criminal cases. And there was this developing resistance to its use. It was said that this is a form of testimonial compulsion and self-incrimination, an interference with physical and mental liberty, and with privacy.
And so all of this came to a head in the Selvi case in 2010, where the court went through the theory of why we have self-incrimination protections, among other things, and also what these technologies are doing. And it came to the following conclusion: you cannot compel a person, whether or not they’re an accused person, to undergo these tests. And if they are compelled, any evidence of compulsion means that the results cannot be used in the justice system. They’re not admissible.
And I would mention one other thing here, which I think is very interesting, which is that we tend to think about these kinds of tests as invasive of mental privacy and personal liberty, and we worry about it most in the context of the state, or some powerful entity, forcing this on people. And indeed I think we should be thinking about that. But what we see in the Indian cases, and actually in other contexts as well where the boundaries are being pushed against these kinds of novel forms of evidence, is that it’s people who are accused of offenses who want the tests, because they’re seeking information to actually defend themselves, to exculpate themselves. They’re trying to convince people they didn’t do it. So this is a different set of considerations: if someone is asking for it, is it unfair not to permit the application? And of course, we don’t want bogus pseudo-evidence that’s not valid to be used, but should we be thinking about privacy and compulsion and liberty in a different way in that context, where someone’s asking for the use of that kind of information to defend themselves?
So, what kind of legal, ethical and privacy challenges does all of this raise? And what should we be anticipating and thinking about on the doorstep of these new neural technologies?
There’s a tremendous number of things I could say here, and so I don’t want to overwhelm you with all the possibilities, but we need to think about questions of responsibility in terms of the use of devices that may not transparently translate our intentions or actions. We have to think also about questions of responsibility where devices are subtly changing mental states and behavior. There’s interesting questions about how to respond to unintended consequences of brain stimulation as well. Questions about identity, and personality, and capacity and freedom. These are all sort of fundamental values associated with being a human being. Philosophically complex questions of course, but let me try to give a concrete example.
So in the context of deep brain stimulation for Parkinson’s disease, there’s a minority of patients who develop behavioral problems due to the stimulation. And what it looks like is significant changes in mood and behavior, associated with things like compulsive gambling, compulsive spending, drinking, eating, sexual activity, irritability. So there can be a substantial change in personality and behavior in about 10% of patients. And this is usually dealt with by changing the stimulation parameters, sometimes moving the electrode a little bit, or sometimes just changing the frequency and amplitude of stimulation.
But one thing that has been observed is that some patients may deny that there has been a change. There may be people who say, “Yes, there’s been a change,” but they don’t attribute it to the stimulation, or they do, in a way, attribute it to the stimulation. What they say is, “This is good. This is my real personality coming through. It was the Parkinson’s that was holding me down. Now my new energetic behavior is the reflection of the real me.” Or there are some people who agree with the change and think it’s driven by the stimulation, but really like it, because this new, energetic, big-spending, big-gambling lifestyle is really enjoyable.
And so there are some really interesting questions that come up. What is pathological? What is authentic? Who is the real person and who gets to speak for that person? You may have family members who are saying, “This person has completely changed and is putting their employment at risk, is putting our finances at risk,” or pointing to a variety of social problems that can flow from these abrupt changes in behavior. And what happens if the person is saying, “Actually, you know what? I like it. I want it to continue.” Normally we let people make their own healthcare decisions and choices about what state they want to be in, if they’re capable. And in these cases, a lot of people are capable. So it does pose a really interesting problem about how to handle these kinds of shifts in behavior and personality.
I would mention another issue, which I think is really interesting, which is what are we going to do with mental data, or brain data from which we might be able to infer mental states? One of the things the law has really not worried about very much is freedom of thought, and that’s because we couldn’t access each other’s thoughts. We spend a lot of time guessing what’s in people’s thoughts. In fact, we’re a social species; we’re pretty good at trying to read people and understand what might be in their thoughts, but we never actually have direct access. And so, if we’re starting to gather lots of brain data, does that start to actually give access to the content of thought?
As this technology develops and we get closer to more detailed mental content, I think we have to think very hard about what to do about that. That’s another project I have ongoing at the moment, which is to understand the inferences we make from brain data to mental states. What is that connection? What does it rely on? Is it reliable? But also, what are the ethics of making that connection? When should we not do that? Conversely, when should we definitely do it, because it would be unethical not to? So I think this topic of mental privacy is something that, in the coming years, will require us to think really hard about what it actually means, what its boundaries are, and what trade-offs we’re going to make between mental privacy and some other competing social need.
There’s a lot of people thinking about the things that you’ve just raised internationally, and some people are calling on international bodies to begin to recognize novel neural rights or cognitive rights, to begin to enshrine things like the right to mental privacy, the right to personal identity or personality, the right to free will. What’s happening internationally in terms of codifying some of these novel neural rights? What can you tell us about that?
So it’s a really active discussion at the moment, and it’s going on at multiple levels. There are international organizations like the UN’s human rights division, which is looking at this actively at the moment; a report is expected next fall, 2024. There are other multinational organizations and regional bodies that have been looking at this as well, and even some countries that have started to modify their law to put in protection for certain aspects of these neural rights. Chile is an example, where the government has decided to modify its legislation.
There’s a different list of proposed rights depending upon who you ask, but some of the key ones, in my view, are these. First, this issue of cognitive liberty, which would include the right to alter your own mental states, as well as to protect them against forcible intervention. Another key proposed novel neural right is this mental privacy issue, which would protect the privacy and integrity of brain data and the mental experiences associated with it. Another big right that’s mentioned is fair access to these kinds of brain interventions or augmentations. And the concern here is that some people will have access to neuro-enhancements and others not, and this will exacerbate inequalities that already exist in the world. So these are three key ones being discussed. There could be many potential responses.
We could have some sort of broad United Nations instrument of some type, or we could say, “Actually, where we really need change is in local domestic laws, where the laws have teeth, or more teeth.” And even if we look at it within a country, say Canada, the question is: does it need to go in a constitution like the Charter of Rights and Freedoms, which is an overarching set of rights that protects against the government? Well, the Charter doesn’t protect against private sector activity, so if we wanted protection there, we would need a different instrument, a normal piece of legislation, for example, at the federal or provincial level.
So in a way, we have to ask ourselves, “What exactly is the problem?” in order to then choose the right tool in our legal toolkit. And a lot of the discussion that’s happening about this is more at the level of, we need a declaration to just set out a consensus on what we think is bad. It’s not necessarily meant to be directly legally applicable, but is more of a consensus statement at a high level. And even if that doesn’t have direct legal application in a country, it can still be useful as an international statement of principles, of a position. So it can have an influence in that way.
So that’s the first question; what level should this sit at? And a second question is, “Well, do we really want to have a whole set of rules, or principles or statements every time we have some kind of new technology?” If we step back and look at the way human rights are normally articulated, they’re few in number, these rights, and they’re articulated in broad terms. We talk about liberty, dignity, life, security of the person, equality, these are very broad concepts that are not meant to apply to only one narrow set of technologies or types of interventions. They’re expected to be able to evolve and encompass new situations that come up over time. And the problem with creating specific rights for every potential context in which we have a concern is we get a proliferation of these rights, and it starts to degrade a little bit the weight of what we say are fundamental human rights if we get too many of them. And secondly, if you make a specific right to capture, for example, a novel neuro right, you are at the same time implicitly saying, our existing rights can’t do this job. You actually put a limit around our existing rights each time you create a new right to occupy that space.
And so, we have to think pretty carefully, I think about whether the existing rights can already address the concerns we have or can be interpreted to evolve, to capture what is of concern to us before we create novel new rights. This is a big debate that’s occurring between legal scholars and everybody else who has an interest in this topic. We’ll see where we end up with this. There’s tremendous activity right now trying to figure this all out, and I think a fairly strong sense that something has to be done.
So speaking of what needs to be done, what’s your advice to us as information and privacy commissioners? What is it that we can or should be doing to contribute to this important debate on neuro technologies?
As privacy commissioners, I think a natural role is to be thinking ahead a little bit on this issue of mental privacy, and keeping up with exactly what kinds of data are being collected and for what purpose. For example, there’s a very strong movement in neuroscientific research for open data. This makes a lot of sense because this data is very valuable, it’s hard to collect, and we want to really make use of it for the good. So I think privacy commissioners should be thinking about, “Okay, well what does this mean in terms of the privacy of the individuals involved?” It doesn’t necessarily mean don’t collect it, but it might mean some other response to make sure that privacy is protected.
As we become able to make inferences about mental states from that brain data, I think it would be a good idea to keep an eye on the advance of that ability to decode mental states from brain data, because that will be the moment when we start to worry a little bit more about the broad use of that data in relation to individuals. If this data can be tied back to individuals — of course we don’t know that for sure, but we should watch that. We should also think through the trade-offs: what are the benefits of collecting and using the data, versus what are the downsides, and on whom do they rest? Consent in this context is going to be a very bad way, as it often is, to try to protect privacy. People often don’t understand what’s going on with the collection of their data, or if they have a foggy idea about it, they don’t think very hard about it. We all click here when we’re asked to click here for everything. I’m not sure who, if anybody, ever reads these documents, and it would be rather interesting, I think, for a privacy commissioner to look into finding out what’s in these documents. Where does this information go and what’s it used for?
And I would say that there has been a movement also in recent years, accelerated by COVID, toward remote telemedical management of these devices. So the information that’s being stored in them is presumably being sent across the internet to clinicians, who are able to download it, use it, and tune up stimulators remotely. So we have a cybersecurity dimension as well as a potential data leakage issue going on there too. Of course, there are all kinds of advantages to this, but it’ll be interesting to know what sort of provisions are in place for handling this information. So those would be a couple of the suggestions I would make to privacy commissioners.
Thank you again, Jennifer, for joining me on the show, and expanding really my and our understanding of neurotechnology. It’s clear that neurotechnology has the potential to lead to amazing medical breakthroughs, but there are also serious privacy risks. We need to proceed with caution, ensuring strong legal and ethical safeguards are in place to protect our privacy at this new frontier, including the privacy of our minds and innermost thoughts.
For listeners who want to learn more about this topic, please visit the resources in the show notes to this episode, and for those who want to learn more about IPC’s work more generally, please visit our website at ipc.on.ca. You could also call or email our office for assistance and general information about Ontario’s access and privacy laws.
Well, that’s it folks. Thank you so much for joining us for this episode of Info Matters. And until next time, I’m Patricia Kosseim, Ontario’s Information and Privacy Commissioner, and this has been Info Matters. If you enjoyed the podcast, leave us a rating or review. If there’s an access or privacy topic you’d like us to explore on a future episode, we’d love to hear from you. Send us a tweet @ipc.info-privacy or email us at [email protected]. Thanks for listening, and please join us again for more conversations about people, privacy and access to information. If it matters to you, it matters to me.