AI and Cancer – AI in Healthcare

Jennifer Schenker interviews Kay Firth-Butterfield about AI in healthcare and cancer. As a recent breast cancer patient, Kay is able to reframe her expert knowledge around the responsible deployment of AI through the lens of a patient.

by Jennifer L. Schenker

This article was published on The Innovator and you can read the original copy here.  It is content that would normally only be available to subscribers. Sign up for a four-week free trial to see what you have been missing.
Kay Firth-Butterfield, expert in responsible deployment of AI

Kay Firth-Butterfield, one of the world’s foremost experts on AI governance, is the founder and CEO of Good Tech Advisory. Until recently she was Head of Artificial Intelligence and a member of the Executive Committee at the World Economic Forum. She is a barrister, former judge and professor, technologist and entrepreneur, and vice-Chair of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. Firth-Butterfield was part of the group which met at Asilomar to create the Asilomar AI Ethical Principles, and is a member of the Polaris Council for the Government Accountability Office (USA), the Advisory Board for the UNESCO International Research Centre on AI, ADI and AI4All.

She sits on the Board of EarthSpecies and regularly speaks to international audiences addressing many aspects of the beneficial and challenging technical, economic and social changes arising from the use of AI. She recently spoke to The Innovator about the role AI played in her own recent successful battle with cancer.

Q. You have been working on AI governance for over a decade now. What made you focus on AI and cancer?

KFB. Following a routine mammogram in July 2023 I was told cells were found which could be cancerous. That was the start of my encounter with a disease that no one in my family or among my friends had ever had. I was amazed to be told that one in eight women will get breast cancer in their lifetime. Many die because they don’t receive treatment soon enough. I wanted to know how AI might improve my chances of making a full recovery and work out for myself where the governance problems might lie in using this new tech option.

Q: Were your mammogram results read by AI? If so, do you think this is a good use of the technology?

KFB. We know that one of the great skills of AI is its ability to trawl data and match patterns in that data. Computer vision is now very good, and so I didn’t really worry about whether or not a machine was reading my mammogram, but rather about whether my radiologist was double-checking it.

Even if no doctor was double-checking, it would not have concerned me, because mammography biopsies, the next step in my process, while uncomfortable, are not life-threatening. Obviously, this would have been a very different decision if the next step of the process were deeply invasive or dangerous, for example with a brain tumor. In that type of case there must be a systematic approach to how humans and machines make decisions. This is, of course, a conversation much wider than simply use in cancer care.

In my case I had two biopsies with MRI. AI was not used, but there are reports of AI successfully helping with the reading of MRIs in prostate and brain cancer cases, as the technology is able to detect very small lesions.

Q: We know that some breast cancer is caused by families having a genetic predisposition. Is AI used in genetic testing?

KFB: You are talking about the BRCA set of genes, which afflict some families (both men and women). It’s important to have the test because having the gene radically affects treatment offerings and is helpful to the next generation. AI has been helpful in such screenings.


Q: You eventually needed surgery. Did the surgeons who operated on you use AI?

KFB. My surgeons did not use AI. I have been doing a lot of work with hospitals in the U.S. Managers are concerned about their doctors asking non-proprietary LLMs [large language models] to double check their diagnosis and treatment plans.

Their concerns are twofold: first, that patients will receive improper care, which could affect their outcomes, and second, that their hospitals might be sued for incorrect treatment. I have helped them to create both principles and practice around the use of LLMs within a hospital setting. That’s why, when I asked my surgeons if they used any AI in their treatment plans for me, they replied that while they had experimented with LLMs they didn’t trust the current models sufficiently to use them.

Of course, they were referring to the known problem with LLMs of hallucination, in other words making things up and presenting them as correct. In my view their instinct not to use the technology yet is absolutely right, and if they do use it, it should only be to double-check findings.

An additional problem we are seeing is cannibalism: an AI generates a wrong answer to a question, and that answer becomes part of the data on the Internet. A similar question later makes the LLM liable to pick up the incorrect data and disseminate its answer back to the Internet, spreading the misinformation more widely.

Q: You have often spoken about the paucity of data in women’s health. Was this a concern in your treatment?

KFB: Actually, here again, I was fortunate. I am a white woman, and second to white men there is more health data about white women on the Internet than about anyone else (save perhaps in China). In fact, because breast cancer is a disease which mainly affects women, the bulk of the data is useful for diagnosis and treatment for me. This would have been very different had I been suffering from a cardiological problem, where the bulk of the available data is on white men over 55 who live in the U.S.

It would also have been different had I been a person of color because the data is more sparse. We have seen how in many areas of medicine this can lead to poorer treatment and outcomes.

Q: Following surgery you had radiation. Was AI used in that treatment?

KFB: Yes, the radiologists used AI to help set up the machines to give the smallest possible dosage to the smallest possible area. The important thing in this case is to get the right plan so as to avoid impacting the heart and lungs. Traditional AI (machine learning) is hugely helpful for this work. However, again, I was reassured that everything was checked and overseen by my radiation oncologist.

Q: You have now been told that you are cancer-free with no likelihood of recurrence. Congratulations! Is AI a factor in any ongoing care?

KFB: The ongoing treatment for my type of cancer is taking a daily drug. AI is increasingly being used in drug discovery, and so I anticipate ever better drugs will be developed for cancer care in the future.

Q: Do you feel that AI helped free up your doctors to spend more time with you?

KFB. No, but both of my surgeons gave me their mobile phone numbers and told me to text with any questions. I never waited more than the length of an operation they were performing for a response. Additionally both my radiation oncologist and my oncologist have been extremely responsive to emails. However, I believe I was lucky with my medical professionals and there are millions of people around the world who do not have that quality of care.

My hope for the future with AI is that it can do the ‘back-end’ parts of a medical professional’s job, freeing doctors and nurses up to give personalized care to patients. While in many cases people fear they could lose their jobs to AI, in this case I believe that a true partnership between AI and healthcare providers can and must be the best way forward for humanity. Indeed, not just for humanity, as veterinarians are increasingly using AI to help treat their animal patients.

Q: I am reminded of some work you did with members of the World Economic Forum while you were Head of AI there: an AI healthcare tool called CHATBOT RESET, which The Innovator wrote about.

KFB: This work was done in the days before Generative AI. The work was to create the right responsible AI principles and practices for the design, development and use of chatbots in healthcare. The work was piloted in Rwanda and India in the form of a chatbot which provided triage for patients calling in. During the pilot all calls were screened by a trained medical professional but it is hoped that in countries where there are very few doctors per patient such AI systems can be responsibly built and trained to be a real helpmate to the very busy doctors and nurses.

Q. You recently left the Forum and created your own company, Good Tech Advisory. What do you hope to achieve?

KFB: What I hope is that with the new company I can educate and train companies and others in the responsible use of AI. In fact, I was eager to give back, and I am working with one of my surgeons to help train medical students in the use of AI. One of the interesting things which has come out of this work so far is the literature review the students did on AI and breast cancer. While there are many studies on its use in imaging, there are few uses elsewhere in the process.

Good Tech Advisory will also be working with countries, legislators, judges, and others to help them to develop the necessary tools to avoid, or at least mitigate, the bad outcomes of AI and get to positive outcomes for all with AI.

