Responsible AI is everyone’s responsibility

Wednesday, 12 March, 2025

The relentless march of AI into our lives is raising questions that we all need to be asking - including the question of what kind of society we want.

That’s according to Reggie Townsend, the Vice President of the SAS Data Ethics Practice (DEP), who will speak at The AI and Responsible Conduct of Research symposium in University College Dublin this month.

“The issue with AI is that, as a technology, it has the power to impact people at scale,” he says. “We want that impact to be for the net benefit of humans, not for the net harm.”

Townsend believes that, if left unchecked, AI could reinforce biases, widen social divides and create harms - and that as citizens, consumers and decision-makers, we all have a duty to ask tough questions and demand responsible AI practices.

“I define it as a duty to care,” he says. “We all have some measure of responsibility, as we are both taking from and generating for society.”

Faces and places  

The concepts of responsible AI and data ethics can seem nebulous, but technologist Townsend brings them into sharp focus with concrete examples such as facial recognition and navigation.

“AI is showing up when we can open our phone by looking at it,” he says. “But if facial recognition technology recognises your face and doesn't recognise my face because of the hues of our skin colours, that's problematic. So data ethics would say let's make sure that when we're using data to create facial recognition algorithms that we're using training data that is reflective of all hues.”

The same phone probably has an app for navigation that can bring you from A to B through a route that’s optimised for safety.

“It's legitimate for customers to want to feel safe, and the feature gets created because the customer is asking for it. But morally, politically, is it still as legitimate if we're routing around certain neighbourhoods, raising the question of when will those neighbourhoods ever get investment? I grew up on the south side of Chicago. People visiting Chicago are often told, don't go to the south side, but it was safe for me. Who gets to determine safe?”

Trade-offs in healthcare

AI is transforming healthcare, with the potential for enormous benefit, but Townsend reflects on how that transformation can involve trade-offs.    

“With AI, we can push for more personalised medicine in a way that we've never been able to before,” he says.

“But that means I have to give up information about my body, to have it measured and monitored and to give up my privacy. And then people can take what I give and take advantage of it. Is that trade-off worth it, or can we erect structures, barriers, that allow us to go to personalised medicine but prevent access to that information by those who might exploit it? These are social, personal, choices that we have to make.”

A tender point in time

Townsend believes we are at a ‘tender point in time’, thanks to a confluence of factors: technological change, including the capacity of AI to have impact at scale; increasing polarisation towards extremes of view in society; and decreasing public trust in experts and institutions.

His own interest in responsible AI was amplified by the social turmoil after the death of George Floyd in 2020, and he cautions that everyone needs to be aware and to keep asking questions.

“One of the themes that I really push on is AI literacy,” he says. “We have to make sure that more people are aware of what is going on, so that they can make informed choices about how AI shows up in their lives and decide whether they need to call their political representatives and say, no, vote for this as opposed to that. Instead, we're letting all of this stuff go by without critically questioning it.”

Townsend also wants to see people ‘move up out of their phones’ and disengage from social media long enough to engage across communities at a human level.  

“While we still can, let's figure out how we can get the greatest benefit, because what we don't want is for these really advanced capabilities to fall in the hands of people who are prepared to exploit the confluence of events,” he says.

As a touchstone, he points to principles followed at SAS that promote openness and transparency.

“We start off with the idea of human-centricity. We talk about accountability and robustness and privacy and security and transparency. We have to make a decision about how our technology is going to be deployed, and sometimes you have to say no. Those aren't easy choices, and you need appropriate levels of transparency where you let people know here's what we're up against, here's what we're thinking, here's why we made the decision that we did. Because at the end of the day, we don’t want to hurt people.”

The AI and Responsible Conduct of Research symposium will take place from 2-5pm on March 13th 2025 at University College Dublin. Hosted by UCD Institute for Discovery, the UCD College of Health and Agricultural Science Responsible Conduct of Research Committee, and the UCD AI Healthcare Hub, the symposium will showcase the use of AI in research, and address the resulting technical and ethical challenges. Book a free spot here: https://www.eventbrite.ie/e/ai-and-responsible-conduct-of-research-tickets-1219588294419