Last month, as part of our NCC Conversations series, we explored the issue of systemic racism, and how it impacts our industry, wider society, and the technology that we use every day.
Artificial intelligence systems – most commonly implemented using machine learning, a subset of AI – make up a part of this, and their potential impact on our everyday lives is hugely significant. Machine learning represents an entirely new way of programming systems and computers, and the possibilities are endless.
However, with this growing influence come concerns about making artificial intelligence systems more inclusive and accessible.
To build a safer and more secure future for all, minimising bias in artificial intelligence is crucial. Machine learning algorithms are underpinned by data and design decisions, which, in turn, are shaped by the teams that build these systems and decide how they should be trained.
As part of this month’s focus on systemic racism, our Race and Ethnicity Steering Committee set up a panel to discuss the future of these systems, and what the tech industry can do to reduce bias.
How big is the problem of bias in machine learning?
Felicity Hanley, commercial account manager and vice chair of the Race and Ethnicity Steering Committee, gave us some personal examples of how she’s experienced bias in machine learning systems. “Artificial intelligence is supposed to make life easier for us all. While it can do this, it can also amplify sexist and racist biases from the real world. Some of my personal experiences of AI not going to plan include social media filters making my skin appear whiter. Another example was with an old phone that wouldn’t unlock on the biometric facial recognition setting if the room was dark, whereas it did unlock for a friend who had lighter skin under the same conditions.”
As artificial intelligence becomes increasingly ubiquitous across our lives, the potential for bias becomes far greater. Matt Lewis, NCC Group’s commercial research director, said: “There are quite a lot of instances where AI is being used, and we’re probably not aware of it. The use of facial biometrics is well known and it’s happening in a number of scenarios – not just for authentication to our mobile devices, but also in surveillance applications.”
The UK government’s review into bias in algorithmic decision-making highlights the scale of the issue, with the report stating ‘it has become clear that we cannot separate the question of algorithmic bias from the question of biased decision-making more broadly’. Kat Sommer, NCC Group’s head of public affairs, said: “The report looked at financial services, and the example they mentioned is credit scoring. The unfairness comes in when people who don’t adhere to standard financial backgrounds are treated unfairly because the availability of data is not there to train the models.”
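To make Kat’s credit-scoring example concrete, the short sketch below – our illustration, not something taken from the report – computes one simple group fairness metric, the demographic parity difference, over a set of hypothetical lending decisions. All data, values and group labels here are made up.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in approval rates between two groups (0 and 1).

    A value near 0 means the model approves both groups at similar
    rates; a large value flags a disparity worth investigating.
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical credit decisions: 1 = approved, 0 = declined
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # hypothetical group labels

print(f"Demographic parity difference: "
      f"{demographic_parity_difference(y_pred, group):.2f}")  # 0.50
```

A near-zero difference doesn’t prove a model is fair – it’s one narrow check among many – but a large gap is a useful signal that the training data and the decisions behind it deserve scrutiny.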
How can we reduce bias in these systems?
An important first step is to ensure AI is trained on representative datasets, and creating synthetic, representative data could be one future solution. When developing these systems, as Matt Lewis says, it’s important to examine the datasets used to train algorithms and “see if there is diversity in the [development] team itself.”
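As one illustration of what examining a dataset might involve – a minimal sketch, not a method the panel prescribed – the snippet below counts how groups are represented in a training set and naively rebalances it by oversampling the under-represented ones. The data and function names are hypothetical.

```python
import numpy as np
from collections import Counter

def oversample_minority_groups(X, groups, rng=None):
    """Naively rebalance a dataset by resampling under-represented
    groups (with replacement) up to the size of the largest group.

    A crude illustration of one rebalancing technique; real projects
    would also consider synthetic data generation and whether the
    labels themselves encode biased decisions.
    """
    rng = rng or np.random.default_rng(0)
    counts = Counter(groups)
    target = max(counts.values())  # size of the largest group
    indices = []
    for g in counts:
        idx = np.flatnonzero(groups == g)
        indices.extend(rng.choice(idx, size=target, replace=True))
    indices = np.array(indices)
    return X[indices], groups[indices]

# Hypothetical skewed dataset: 6 samples from group "A", 2 from "B"
X = np.arange(8).reshape(8, 1)
groups = np.array(["A"] * 6 + ["B"] * 2)
X_bal, groups_bal = oversample_minority_groups(X, groups)
print(Counter(groups_bal))  # each group now has 6 samples
```

Oversampling only changes how often each group appears; it can’t correct labels that already reflect biased decisions, which is why the government’s report ties algorithmic bias to biased decision-making more broadly.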
Kat told us that the above report recommends taking a multidisciplinary approach to reviewing systems or algorithms to reduce bias wherever possible. “Not just looking at it from a technical perspective, but looking at it from a policy, operational and legal perspective”, she adds.
However, responsibility for issues shouldn’t simply end when products or systems have been released. “Having been on the disclosure side of reporting vulnerabilities to third parties, it’s always been a challenge to try to connect to a human on the inside to raise an issue”, says Matt Braun, regional vice president. “From a proactive standpoint, businesses need to recognise that algorithmic issues are a class of bugs, and there needs to be a way to receive information about those issues from researchers or members of the public.”
The issue of bias in machine learning and artificial intelligence systems is pervasive and impossible to resolve quickly. However, as NCC Group’s data protection and governance officer, Nadia Batool, said: “The panel agreed that having such a broad group of people to discuss AI is invaluable. We recognised that there are no ideal solutions as of yet, so we’re now looking at ways to keep this conversation going; so we can keep improving the impact we have as a business, whilst also meeting the Committee’s goal of influencing wider societal change.”