Feature Story
Devil or angel? Experts debate merits, dangers of ChatGPT at HIMSS
July 5, 2023
CHICAGO – The keynote panel discussion about ChatGPT at HIMSS in April could easily have been titled, “Generative AI: the Good, the Bad and the Ugly.”
The four panelists – and speakers at HIMSS, in general – agreed that ChatGPT amounts to a sea change in technology. As one speaker ventured, it’s a once-in-a-generation technology, perhaps as significant as the iPhone, which appeared in 2007.
ChatGPT and other forms of generative AI – that is, artificial intelligence that can produce new knowledge by cobbling together data from multiple sources – have been able to pass medical and law exams, write essays and songs, and even diagnose rare diseases that may escape the notice of a physician.
It’s these powers that have amazed people around the world, from Harvard data scientists to high school students. And it’s what makes ChatGPT not just good, but potentially great.
Andrew Moore, a former director of Google Cloud AI and a one-time dean of Carnegie Mellon University’s School of Computer Science, urged a capacity audience at HIMSS to move ahead quickly with ChatGPT and genAI to learn how to use it and to take advantage of its intelligence.
“Don’t worry about what’s coming next; start now, to make sure you’ve got people with expertise.”
He added, “Don’t wait for Carnegie Mellon to do it for you, because it won’t happen.” Moore is currently founder and CEO of a start-up company called Lovelace AI, which provides AI support to national security agencies.
Moore alluded to the monumental intelligence of ChatGPT and its abilities to solve problems. In particular, he said the new technology will be able to solve problems that have stymied the experts, from diagnosing and treating diseases to fixing supply chain issues.
He described an incident that occurred a few years ago when an earthquake in the Western U.S. disrupted some nuclear materials buried underground in Colorado. National security officials called him at Carnegie Mellon and asked for help. In response, Moore put together a team of instructors, post-docs and students. Using advanced AI technology, they constructed a robot that could trudge underground and determine the scope of the nuclear problem.
Similarly, he believes that genAI will be used to solve problems in the healthcare sector, but hospital technologists will have to take the lead in devising their own innovations.
“In your hospitals,” he said, “you should have a group that’s looking at new technologies, to be able to develop solutions for patients and local needs as soon as they become available.”
Peter Lee, chief healthcare scientist for Microsoft and a former head of computer science at Carnegie Mellon University, also touted the advantages that ChatGPT may bestow on the healthcare sector. But he pointed out some of the “bad” characteristics of the system, the bugs that still need to be ironed out.
He observed that the software scored 93 percent on U.S. medical exams, which is quite an achievement. But he also acknowledged that the missing 7 percent is troubling: you wouldn’t want to be the patient receiving the advice the machine got wrong.
Nevertheless, Lee asserted that the software can still be very helpful, as long as it has a human overseer checking its work.
As an example of its usefulness, Lee mentioned that his 90-year-old dad has been ill lately, and that he’s been taking all kinds of medical tests. “I have little ability to interpret his lab results, and there’s an opportunity for ChatGPT to act as an assistant and to explain the reports. This is a real boon.”
Dr. Lee is co-author of a recent book titled, “The AI Revolution in Medicine,” which he wrote with journalist Carey Goldberg and Harvard University data scientist and physician Dr. Isaac Kohane. His company, Microsoft, is also a major investor in OpenAI, the firm that developed ChatGPT. Indeed, in January, Microsoft invested $10 billion in OpenAI.
Even with his tight connections to ChatGPT, Lee was a little more reserved about it than Moore. “There are incredible opportunities with genAI. But there are significant risks – some of which we don’t know about yet.”
Another panelist at the HIMSS kick-off discussion, Reid Blackman, had a darker view of genAI. He warned the audience “not to be fooled,” saying the software is not really intelligent; it only looks like it is.
“It’s a word predictor, not a deliberator,” said Blackman, who holds a PhD and is the author of the book “Ethical Machines”. He asserted that ChatGPT and other forms of genAI are still black boxes that can’t tell us how they came to their decisions.
“If you’re making a cancer drug and you enlist the help of ChatGPT,” for example, “you want to know exactly how it arrived at the formulation. But ChatGPT doesn’t give you reasons. It’s responding to a set of words. But don’t be fooled, those aren’t reasons.”
Blackman noted that generative AI systems work by ingesting huge libraries of text and are trained to predict the most probable sequence of words in response to the questions they are asked. For that reason, they are referred to as “large language models,” or LLMs.
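The word-prediction idea Blackman describes can be illustrated with a deliberately tiny sketch: a bigram model that simply counts which word most often follows another in a toy corpus and “predicts” accordingly. (Real LLMs use neural networks trained on vast text collections and operate on tokens rather than whole words, so this is an analogy for the counting-and-predicting principle, not how ChatGPT is actually built; the corpus and function names here are invented for illustration.)

```python
from collections import Counter, defaultdict

# Invented toy corpus: the model "learns" only which word tends to follow another.
corpus = "the patient has a fever the patient has a cough the patient needs rest".split()

# Count, for each word, how often each possible next word follows it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in training, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "patient" always follows "the" in this corpus
print(predict_next("has"))  # "a"
```

The model produces plausible continuations without any notion of meaning or reasons, which is precisely Blackman’s point: fluent output is not the same as deliberation.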
Kay Firth-Butterfield, CEO of the Centre for Trustworthy Technology, was also harsh in her criticisms and warnings. The former law professor and judge is currently a barrister in London, UK, and is considered one of the foremost experts on the law and governance of AI.
In particular, she asked, when ChatGPT or other forms of genAI get things wrong in a clinical setting, “who are you going to sue?

“Are we going to sue the person who is using the tools? That’s you.”

Alternatively, will the developer of ChatGPT, namely OpenAI, be liable? Will doctors or health organizations that have contributed to it be responsible? These issues, she noted, are still up in the air from a legal point of view.
Firth-Butterfield also commented on ethical issues, such as access. She pointed out that while 100 million people signed up for ChatGPT soon after its release, there are still 3 billion people on the planet who have little or no access to the Internet. If genAI does turn out to be a transformational tool, these people will be left even further behind.
For his part, Moore is betting that genAI will become a revolutionary tool that benefits mankind. His point, however, is that in order to obtain these advantages, people will have to learn how to use it as soon as possible and adapt it to their needs. For healthcare, he said, that means developing expertise at the hospital level.
“Get your hands on it and understand this technology,” he said. “Don’t wait to see what happens.”