Why Ethical AI Is Central to Brand Trust and Communications in 2025
Not since ‘The Cloud’ went mainstream in the early 2000s has a topic dominated the tech and communications landscapes the way AI has this past year. As we enter 2025, many brands are expected to evolve their external comms strategies, shifting AI from launch stories and market tests to a progressive, core messaging theme.
In 2025, alongside growing public awareness of AI’s potential benefits and risks, ethical AI will become a central issue for brands. The impact of poorly controlled AI, from algorithmic bias to data privacy failures, will put companies under pressure to show they are deploying their AI solutions responsibly. For brands, this means ethical AI ceases to be a box-ticking exercise and becomes a critical component of external communications, essential for brand trust, corporate reputation, and long-term success. Looking ahead, companies that can effectively communicate their commitment to ethical AI will be better positioned to navigate the evolving landscape and differentiate themselves as leaders in this space.
With this in mind, we sat down to speak to Kush Wadhwa, CEO and founder of Trilateral Research, an ethical AI company that specialises in responsible AI solutions and support services, including AI governance, cyber security and data protection. We explored the role of ethical AI and how organizations can overcome the challenges associated with removing AI bias.
Hi Kush. Let’s start with the obvious. Are organizations investing enough in responsible AI?
No. Unfortunately, the statistics show a significant gap. The Center for Humane Technology, for example, has found a huge disparity between the number of people working to build AI and the number working to make it safe and ethical.
Naturally, investment also differs depending on the use case and the problem you are trying to solve. How you invest in removing AI bias, and how you go about it, will change depending on the context.
What exactly makes AI unethical?
That’s a good question. Ethical AI is designed to operate without bias, ensuring that the algorithms and data it uses are fair and secure. If bias exists in the algorithms or data, if the system lacks security, or if it doesn’t comply with data protection laws, it is considered unethical. These are some of the basic indicators of irresponsible AI.
However, the reality of addressing these issues is complex. People interacting with AI need adequate education, literacy, and training so they can understand its outputs. AI is constantly evolving with new data, so there must be a plan to check that outputs remain unbiased and transparent.
If AI is ineffective, not serving its users, or causing harm, it is also deemed irresponsible. The context, use case, and specific problem the AI is addressing play significant roles in determining its ethical standing.
Can AI bias ever be undone if humans input the data?
Humans are often unconsciously biased, and different datasets pose different issues, which is why it’s crucial to understand the risks related to these biases. An interdisciplinary approach is key to identifying and mitigating them. For example, in police systems, data might carry reporting biases introduced by the people collecting the information: the perspective from which they captured a video, or the way a report was taken down. To address these biases, we need adequate transparency, explainability, and literacy built in at the front end. Then, on the other side, everyone utilising the outputs must have a clear understanding of how to apply the data. In short, the entire value chain needs to be addressed.
Why is investing in responsible AI better for businesses?
Great question. There was an article in the MIT Sloan Management Review based on a survey of AI users. I believe it found that 85% of people felt there might not be enough investment in responsible AI methods, and 40% felt the investment was inadequate. But here’s the thing: 70% of those who did invest in responsible AI, and used mature AI systems, saw better outcomes and increased efficiencies.
We’ve seen this first-hand. In one of our applications, CESIUM, which helps police, local authorities, and the NHS with safeguarding, we achieved a 400% uplift in capacity. This was precisely because we invested in responsible AI methods.
What challenges do companies face in being responsible?
The biggest challenge is figuring out how to put responsible AI principles into practice. That’s where the industry is struggling right now. Bodies like the EU and the OECD have published lists of these principles, such as explainability and transparency. We’ve been involved too, and have worked with UK government departments to build a framework for responsible AI, which was launched at the first AI Safety Summit.
The issue is how to apply these principles in real-world scenarios. What does transparency or explainability mean in a specific context? We believe the solution is to use a multidisciplinary team. This means bringing together legal experts, domain experts, ethicists, and tech people to work together. Effective communication between them is crucial so they can understand how to integrate these principles into the technology. The key is to foster an environment where everyone comes to the table with an open mind, ready to adapt and consider new information. It’s about promoting interdisciplinary discussions on the risks and issues related to a specific use case or context, and then having open conversations about how to mitigate those risks.
Organizations with the right culture are usually able to do this successfully.
How do you build those effective communication models?
It’s not that different to creating the right organizational dynamics in any setting. You need to ensure that your data protection person, ethicist, legal expert, communications team and domain expert are all communicating effectively with the technical team and other stakeholders. This requires the right power dynamics and organizational culture.
Do you think AI developers need to focus more on ethics? Shouldn’t they make sure to include that message in everything they put out?
Absolutely. Developers need to talk about ethics more, but they also need to do a lot more in practice. For example, organizations like OpenAI have built amazing models, but I’d like to hear more about what they did to ensure their AI is safe. Why is it acceptable to just scoop up data from the web without proper safeguards? What investments did they make to avoid that?
And what about cyber security? Trilateral offers cyber security services too, but what is the link between the two?
There’s a strong link. Put simply, ethical AI is about doing the right things with AI, and cyber security ensures those systems are secure enough to uphold those principles. Together, they create AI that we can trust and rely on, without inherent risk.
What advice would you give to other companies about investing in responsible AI practices?
Start with training. There’s a huge lack of understanding about what AI can do, so you need to demystify AI across your organization. Then, move on to more specific training for decision-makers, risk owners, HR, and different departments. It’s crucial that users understand the insights AI provides and how those insights are derived, so they can have the right level of confidence in them.
Second, conduct risk and impact assessments for each tool: understand the risks and issues, and then find creative ways to mitigate them.
Finally, invest in ongoing assurance. To get the best ROI from your AI investments and protect your reputation, you need continuous monitoring and risk management. Building the tool is just the beginning; ongoing assurance is part of the shared responsibility model.
We’ve done a lot of training ourselves, and some of it is available for free on the Alan Turing Institute website. It’s a great resource for anyone looking to get started.
Thank you, Kush. We’re fascinated and excited to see what happens as investment in ethical AI increases over the next year!
As we move further into the AI age, a commitment to ethical AI will be vital for building stakeholder trust. For communications professionals, the challenge lies in breaking down the complexity of ethical AI to craft clear messages that resonate.
As Kush Wadhwa emphasized, investing in responsible AI practices is not just about risk mitigation. It is more than a technical challenge to be dealt with by development teams; it is a core concern for brand reputation, and therefore, communications.
Hotwire has also created its own AI framework, but with a different focus: brand power. The AI Beyond Efficiency Framework delivers on four lasting values at the heart of the relationships between people and brands: Agency, Recognition, Impact, and Intimacy. You can download and read the associated report here.