New York Times interview: 10 Women Changing the Landscape of Leadership

One of ten women selected by the New York Times, Kay talks about her career, her concerns about A.I., the protection of children, and the way forward for A.I. governance.

In spring 2021, the New York Times interviewed 10 women from around the world who were leaders in fields ranging from agriculture to finance.

You can read the interview here: https://www.nytimes.com/live/2021/03/05/world/women-leadership/kay-firth-butterfield-artificial-intelligence-technology

Women interviewed:

  • Haunani Kane | Climate | Hawaii
  • Fernanda Canales | Architecture | Mexico
  • Verónica Pascual Boé | Engineering | Spain
  • Fatoumata Kébé | Astrophysics | France
  • Aya Mouallem | Engineering | Lebanon
  • Emma Hodcroft | Epidemiology | Switzerland
  • Ismahane Elouafi | Food Policy | Italy
  • Kay Firth-Butterfield | Artificial Intelligence | Texas
  • Gayle Jennings-O’Byrne | Finance | New York
  • Judith Winfrey | Farming | Georgia

Excerpt from Kay Firth-Butterfield's interview in The New York Times:

You have had an unusual career trajectory, from British barrister to a global leader in A.I. ethics. How did you make that transition?

When I started practicing, there were not many women who were barristers; we were hustled into family law and all things touchy-feely. But it was the perfect place for me, because I liked helping people. There comes a time, however, when barristers become judges, and while I was duly selected, I found I didn’t like being a judge. I opted to retire, and I moved with my husband and daughter to Texas. There, I taught law, became involved with a local nonprofit that worked to rescue human trafficking victims, and wrote a thesis on the future of the law, including the impact of A.I.

On a plane from London to Austin in 2014 I met the C.E.O. of an A.I. start-up, who was, by serendipity, moved to my row. We spoke for 10 hours and he asked me to be the chief A.I. ethics officer at his company. Eventually, I was recruited to join the World Economic Forum.

Are there guiding principles that should govern the adoption of artificial intelligence generally?

There are at least 175 different principles out in the world regarding ethical A.I. But whether you’re in Beijing, Boston or Bogotá, the principles all have 10 fundamental aspects that we, as humans, clearly believe, including transparency, accountability, privacy and explainability about what you’re doing with data. In addition, we need to ask if the use of A.I. will benefit humans and the planet, or is this just a company that wants to make as much money as possible and then sell out next year. And of course, there are the really important pieces like bias, inclusiveness, safety and security.

The European Union has been at the forefront of regulating digital privacy. Is it similarly ahead on A.I.?

It depends how you define “ahead.” The E.U. is likely to legislate on A.I., specifically on what we call high-risk cases that involve facial recognition and human resources. But equally, it’s debatable whether legislation is the right thing for A.I., simply because legislating is a slow process, and by the time a law or regulation is enacted, the A.I. tools already may have changed. What is needed are agile governance tools.

So what is the model?

Companies should definitely have an A.I. ethics officer, as well as an ethics statement as a start — it’s what customers and the public want in terms of trust. But we will need some sort of regulation for things like facial recognition and cases where A.I. impinges on people’s rights and liberties, such as in judicial uses, hiring and loans.

Is there any aspect of A.I. that you’re particularly concerned about?

We’re allowing our children, including those under 7, whose minds are very receptive to embedding values which can last a lifetime, to play with A.I.-enabled devices and toys. But we don’t know what the toy companies’ privacy rules are, and even though they may be labeled educational toys, we don’t know what they’re teaching.

This may not be a problem with companies we trust, but the majority of A.I.-enabled toys are created by small start-ups around the world. Everyone says A.I. will be able to educate the world — that’s the holy grail. But let’s do things safely.

At the forum we are partnering with others to set up an awards program to honor companies that are thoughtful about incorporating A.I. into toys. This is a passion project of mine.

What can be done to fix the gender imbalance in the sciences and engineering? And have you experienced any obstacles as a woman?

Only 22 percent of A.I. scientists are women, and that is just one statistic. While I think we’re increasing the number of women, as well as people of color in technology, we’ve got some years before we get it quite right. I’m on the advisory board of AI4ALL, which is seeking to get underrepresented groups interested in technology through education and mentorship.

I have not felt as many obstacles in this part of my career as I did as a young barrister, but I know women who are A.I. scientists feel this.
