How to ensure AI reflects us more fairly
Wednesday, 23 April, 2025
Pictured: Reggie Townsend at the UCD AI and Responsible Conduct of Research Symposium, 13 March 2025.
When you look in the mirror, what do you see? Now think about how AI might reflect you, and the society in which you live. How can we make sure AI reflects fairly for all?
This was the basis of a provocative thought experiment that Reggie Townsend, Vice President of the Data Ethics Practice at SAS, put to an audience at University College Dublin earlier this month.
In his keynote at the AI and Responsible Conduct of Research symposium, Townsend dug deep to unearth the troubling power dynamics at the heart of global AI development, and to ask the important questions needed now for a more equitable, trustworthy approach.
Mirror, mirror of my data
Townsend asked the 200-strong audience to think about how transformative the first mirrors must have been for our ancestors.
“To be able to see yourself in the mirror, these reflective surfaces fundamentally changed how humans understood themselves,” he said. “I'm going to suggest that we are standing before a new kind of mirror and we just happen to call it artificial intelligence. And like those first reflective surfaces, it's changing how we see ourselves, how we understand our capabilities and how we envision our future.”
The AI ‘mirror’ reflects our existence as recorded in data, noted Townsend, and the technical failures or distorted views we get from it reflect patterns in the data itself.
“It magnifies our brilliance as well as our biases. Our wisdom as well as our shortsightedness,” he said. “It's showing us the biases embedded in our data… and we shouldn't accept AI systems that reflect a distorted version of humanity. The question is, how do we ensure that these reflections help us become better versions of ourselves? And I would offer that it's not only a question, but that's the profound opportunity that we have in front of us right now.”
With the power to collect, aggregate and use data comes great responsibility, according to Townsend. He sees that responsibility as a personal, professional and collective “duty to care”, and to own our actions and their consequences, whether those consequences are intended or not.
In AI we (do not) trust
AI is a major economic force – expected to contribute more than 15 trillion US dollars to the global economy by 2030, according to Townsend – but it is currently lacking in another currency: trust.
“Global public trust in AI remains low, and trust is the preeminent currency for society,” he said. “Today's primary beneficiaries of the AI revolution are predominantly large technology companies and wealthy nations, [and] centralisation of power and resources isn't just problematic from an equity standpoint. It threatens the diversity of perspectives that we all need for a truly beneficial AI development. Because when people don't feel as though their interests are included in development, over time, they reject it.”
The path to trustworthy AI is not mysterious though, he noted. “It simply requires shared risk, shared reward, and shared responsibility. Because when both the challenges and opportunities are distributed more equally and when stakeholders from diverse backgrounds [have] meaningful input into AI development, their trust can flourish.”
Flow responsibly
Regulation has a role in protecting society from the harms of AI, but it should not stifle innovation, according to Townsend, who argued against a rigidly binary approach.
“The river and its banks are not enemies. They define each other. The banks without the river are merely ditches. The river without the banks is a flood. We need them both. You can have innovation and regulation. They are not mutually exclusive.
“We cherish the freedom to innovate, to create, to push boundaries, yet with that freedom comes the obligation to consider the ripple effects of our innovation across society. In the context of AI, I believe that responsibility means acknowledging that when we build systems that can make decisions affecting human lives, we have to remain accountable for those decisions.”
Effective AI governance and standards are key to responsible AI, he added, as well as ‘response-ability’, or the ability to respond to the issues that arise as technology develops.
Personal questions
While governments and organisations have key roles, Townsend highlighted the importance of personal literacy and agency in responsible AI.
“I think each of us, each one of us, regardless of our technical background, needs to develop the capacity to ask critical questions about the AI systems that we all encounter,” he said. “What data was used to train the system? Who benefits from its deployment? What oversight exists? What happens when it fails?”
The future is nuanced
And, optimistically realistic in his outlook for the future of trustworthy AI, Townsend urged a nuanced approach that avoids polarisation.
“Rigid thinking, whether it's blind enthusiasm for every new AI tool or categorical rejection of all of them, this certainty prevents us from making nuanced judgements, and this is a moment for nuance,” he said. “We have to resist the urge to isolate into binary tribes of us versus them.”
That more balanced, responsible approach could ultimately reflect better on all of us.
“When future generations look into the mirror of AI, the reflection they will see starts with all of us, you and me,” he said. “So here's a proposition. Let's make sure they see a technology that amplifies our wisdom rather than our biases. Let's make sure they see a technology that distributes power rather than concentrating [it].”
The AI and Responsible Conduct of Research symposium was hosted by the UCD Institute for Discovery, the UCD College of Health and Agricultural Science Responsible Conduct of Research Committee, and the UCD AI Healthcare Hub.