Artificial intelligence and bias: A conversation with Layla El Asri


John Stackhouse asks Layla El Asri, a research manager at Microsoft, about the role of humans in solving AI bias and the technology's societal and ethical implications.


John Stackhouse
Senior Vice-President, Office of the CEO
RBC

To coincide with the RBC Disruptors event “Battling Bias in AI,” our team is examining the societal and ethical implications of artificial intelligence (AI). In this interview series, John Stackhouse asks Layla El Asri, a Research Manager at Microsoft Research Montréal, about the role of humans in solving AI bias.

When did you first become aware of and concerned about bias in AI?

I’ve really only been thinking about it for the last three or four years. I joined Microsoft around the same time that news stories were breaking about different issues surrounding bias and AI. There was the famous example with Google software, where their algorithm misclassified an African American as a gorilla. That was just one example of bias in the product because the data was not representative enough. It was very striking for me because it was really terrible.

Then there was this example, also with Google software, where if you typed a name that was mostly used within the African American community, you would get ads about searching for a criminal record. And that’s bias in the system because of bias in the way humans were using the system.

So those examples were really striking and really showed that things could go wrong if we weren’t more careful with the data that we were using in the models that we were putting out there.

The model just tries to optimize its performance. It doesn’t ask, am I being fair?

As a scientist, how do you think about bias?

Your model can only be as biased as your data.

The model just tries to optimize its performance. It doesn’t ask, am I being fair? There are ways to put this into the model, but it needs to be put into the model. From a scientific point of view, that’s a matter of changing the objective of the model so that it not only tries to maximize performance, but is also incentivized not to amplify bias, or to reduce bias if possible.
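El Asri doesn’t name a specific technique here, but one common way to change a model’s objective along these lines is to add a fairness penalty to the training loss. The following is a minimal, hypothetical PyTorch sketch, not Microsoft’s method; the function name and the demographic-parity penalty are illustrative.

```python
# Minimal sketch: augment a task loss with a fairness penalty so the model
# is incentivized not to amplify bias. Hypothetical example; all names here
# are illustrative, not a method described in the interview.
import torch
import torch.nn.functional as F

def fairness_penalized_loss(logits, labels, group, lam=1.0):
    """Binary-classification loss plus a demographic-parity penalty.

    logits: raw model outputs, shape (n,)
    labels: float binary targets in {0.0, 1.0}, shape (n,)
    group:  group membership in {0, 1}, shape (n,); assumes both
            groups appear in the batch
    lam:    weight trading task performance against fairness
    """
    task_loss = F.binary_cross_entropy_with_logits(logits, labels)
    probs = torch.sigmoid(logits)
    # Penalize the gap between each group's average predicted positive
    # rate, so minimizing the loss also discourages disparate treatment.
    gap = probs[group == 1].mean() - probs[group == 0].mean()
    return task_loss + lam * gap ** 2
```

The weight `lam` captures the trade-off she describes: the model still maximizes performance, but now pays a price for widening the gap between groups.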

So that opportunity to debug bias, if I can put it that way, is it as straightforward as you’re laying out? Just a matter of recoding?

You know, it really depends. In certain cases it is possible to re-engineer the model so that it becomes less biased or unbiased. But in certain cases, it is just impossible. If you have data, for instance, that comes from human decisions, you might not even know of the bias that is present in those human decisions in the first place. And then it’s a matter of really testing the model to see if it’s biased.

Sometimes, in order to make it unbiased, you just need more data, especially when you have a problem with under-representation, like when you don’t have enough data for darker skin tones in computer vision. There’s nothing you can do except collect data for darker skin tones and then retrain your model so that it learns from that data.
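As a concrete illustration of the testing she mentions, here is a hypothetical sketch of a per-group audit: measuring accuracy separately for each group makes under-representation visible as a performance gap. The names and data are made up for illustration.

```python
# Hypothetical sketch of a per-group accuracy audit. A large gap between
# groups suggests the model needs more (and more representative) training
# data for the under-served group.
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Return accuracy per group, e.g., per skin-tone category."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

# Made-up example: the model performs worse on the "dark" group,
# signaling the kind of under-representation described above.
print(accuracy_by_group(
    predictions=[1, 1, 0, 1],
    labels=[1, 0, 0, 1],
    groups=["light", "dark", "dark", "light"],
))
# {'light': 1.0, 'dark': 0.5}
```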

How do you at Microsoft come to grips with these challenges?

The way it’s been tackled here is very different. There are research groups that are dedicated to researching these questions — fairness, accountability, transparency and ethics. So questions like, what does it mean for a machine learning model to be fair? Fundamental questions that are yet to be answered. That’s at the research level.

And then there is also a committee within Microsoft which is called the AETHER Committee (AI and Ethics in Engineering and Research). This committee serves as kind of a consulting branch for product teams and leadership within Microsoft.

These groups know the technical issues with machine learning models; they know what they can and cannot do. And it’s really important to advise product teams and leadership about this, so we can all make an educated decision about whether or not it is safe to release a machine learning model at this stage.

Those are the kinds of things that have been put in place within Microsoft as safeguards for the safe and ethical use of AI. And auditing has been a really impactful thing to do, too.

What kind of people are on these committees?

It’s actually a good mix of technical people and people who have more of a sociological background. We have historians, we have anthropologists, sociologists, all working on these questions to try to understand the potential sociological consequences of certain machine learning technologies.

I think we have to find a way to have a good working relationship between machine learning models and human beings, so that we can leverage the amazing adaptation capabilities of human beings and the amazing computational and kind of number-crunching capabilities of machine learning models.

You were talking earlier about the unintended consequences of machine learning. We have lots of unintended consequences with human learning and human decision-making. Should we be more confident in the ability of science to minimize the unintended consequences on the machine side?

You know, in the future, I want to be optimistic that the answer will be yes.

But one really important flaw that I see right now, and which makes me lean towards no, is that the models are trained on historical data, so they cannot change unless you change their learning objectives or the data that they were trained on. And currently they need a lot of data to learn something new.

Human beings, on the other hand, adapt very quickly. So if there is a problem with bias within your organization, you can talk to people and educate them, and they will be able to react very quickly. A machine learning model right now will not be able to react very quickly. It will need a lot of new data to be retrained on.

So I think that, you know, for the time being, as long as machine learning models cannot really adapt quickly and learn new things quickly, they have to work with human beings. And I think we have to find a way to have a good working relationship between machine learning models and human beings, so that we can leverage the amazing adaptation capabilities of human beings and the amazing computational and number-crunching capabilities of machine learning models.

Maybe I can wrap up with a question about your own research in dialogue systems. Are voice and text biased? What should we, as consumers or producers of voice and text information, be thinking about?

If your model understands only certain people and certain voices and doesn’t understand, for instance, the elderly or different accents, then you have a biased system because it doesn’t work for everybody. The good thing with human beings is that we kind of work with everybody; we kind of understand all sorts of accents. Machine learning models might not always.

And even in text, if your model only understands really well-formed English and doesn’t understand certain idioms or slang that might be used by certain communities, then you have a product that doesn’t really work for everybody, and then you have a problem with bias.

You need to be able to understand all the people that you want to serve, really — all the people that you want your product to work for.

That’s fantastic. Really great insights. Thank you.

Great. Thank you.

Listen to the conversation on the RBC Disruptors podcast about the potential of artificial intelligence.


This article was originally published on rbcroyalbank.com

As Senior Vice-President, Office of the CEO, John advises the executive leadership on emerging trends in Canada’s economy, providing insights grounded in his travels across the country and around the world. His work focuses on technological change and innovation, examining how to successfully navigate the new economy so more people can thrive in the age of disruption. Prior to joining RBC, John spent nearly 25 years at the Globe and Mail, where he served as editor-in-chief, editor of Report on Business, and a foreign correspondent in New Delhi, India. He is the author of three books and has a fourth underway.
