COMMENTARY

Should AI Join Medical Ethics Committees? Ethicist Says Not Yet

Arthur L. Caplan, PhD 


May 08, 2024

This transcript has been edited for clarity.

Hi. I'm Art Caplan. I'm at the Division of Medical Ethics at the NYU Grossman School of Medicine. Not too long ago, I had a little interaction, just for fun, with OpenAI's chatbot. I decided to ask it to write my obituary, and it did a pretty good job.

It captured many key events, things that I've written and been involved with, but it did have a few errors. It didn't note the fact that I wasn't dead. It got my birthplace wrong, and it mangled a couple of connections to institutions or committees. I might give it 85%-90%. It was pretty accurate, but not 100%.

That experience coincided with a paper that appeared recently in the AMA Journal of Ethics, raising the question of whether AI has a role in sitting on an ethics committee or a human experimentation committee. In other words, do we have a place, a chair at the table, for the equivalent of a chatbot?

In the future, the answer is absolutely yes. Why? As the chatbots get better information, as they skim and screen more and more history, more and more legal documents, I think they're going to be there as a resource. It'll be easy to see what the precedents are, what other institutional review boards (IRBs) might have done, and what other ethics committees might have decided.

It'll be easier to see what the legal requirements are in a state or community — what the law says should happen — and whether ethics committees or IRBs want to follow the letter of the law or try to carve out something that might be a little different in the way of a compromise or a way forward. They do absolutely have a future.

That's not quite what people are thinking about, though, or at least not entirely. It isn't just using AI as a kind of gigantic database reference machine, if you will, or a giant Google search agent, which some health lawyers and some committees in clinical practice are probably using already.

I think the article was asking whether AI should have a vote. Should AI be sitting at the table saying, "Weighing all this information, this is the way I would come out: approve that research protocol," or "I think we should permit this patient's request to do X, Y, or Z"?

I don't think we're there yet. That is not something that we have to worry about today. Between my chatbot experience with my own obituary and the literature on how well chatbot doctors are doing at providing psychotherapy or counseling about different types of health problems, there's still too much error. There's still too much confusion.

It isn't the case that AI has gotten to the point yet where I could trust the adequacy of those decisions, which in a way are based on publicly available information that the chatbot studies, learns, and can regurgitate faster than most humans.

Also, I'm not convinced that chatbots are very good at empathy, sympathy, or putting themselves in the shoes of another. I think they're very good at information management and moving fast, but when it comes to the properties that allow for sound ethical decision-making, such as empathy, sympathy, and showing compassion, I'm not sure our AI friends are quite there yet. Will they ever get there? I'm not sure. I'm not enough of an expert, but from watching and listening to the AI responses I've checked out, I know it's definitely not there now.

The other reason to be dubious about giving them a vote is that, at the end of the day, we're not just a rules-based decision-making apparatus, with algorithms saying we're going to approve this procedure as research, or we're not going to allow that therapy because it's too risky or the person seems incompetent.

It's not just that there are many variables in play; AI is very good with many variables. What I find myself thinking is that in many situations we also want mediation, prolonged conversation, or efforts to sway opinions before we get to a resolution.

I guess I feel that AI might be a little too quick to follow a rule and come up with an answer. My own experience over the years on many committees is that it isn't the answer that we want to find; it's a compromise, a mediation, or a way to find some kind of resolution that all parties can agree to, even if nobody gets exactly what they want.

Much of ethics, therefore, doesn't involve getting the answer, which I fear may be AI's orientation. It involves getting to a comfortable resolution where people feel that they can go on, respect one another, and each get a piece of the pie that makes everybody happy.

Is AI up to the great compromises? I'm not sure it's there yet. Will it ever get there? I don't know. Let's all tune back in 10 years and see what we think.

I'm Art Caplan at the Division of Medical Ethics at NYU Grossman School of Medicine. Thanks for watching.
