‘You can’t do AI-led transformation without people at the centre.’
May 8th marked part two of our event series with Bayes Business School, where we turned our attention to another of the biggest topics in business: Ethical AI.
To swap hype for substance, we brought together expert speakers from both the legal and entrepreneurial sides of the fence.
The result was a practical and grounded discussion on the future of workforce development, technology, governance, digitalisation, and even a bit of cake:
‘When it comes to AI regulation, the EU is your store-bought cake. In the UK, you’ve got to build each layer yourself.’
As always, a big thank you to our wonderful hosts at Bayes Business School, our speakers, and our audience; you made it an incredibly special evening.
If you didn’t get a chance to come along to this event, there are plenty more in the pipeline. The next instalment of our From Ideation to Investment series will kick off with a discussion on building a diverse workforce and creating a high-performance culture. If you’re interested, you can RSVP here.
In the meantime, read on for expert insights into what it takes to protect and accelerate your business with Ethical AI.
What is Ethical AI?
First things first, what even is ethical AI? It’s surprisingly hard to find a definitive answer, especially given how much attention the topic currently attracts.
This is partly because it’s so contextual, and partly because we’ve developed an exclusionary way of discussing it. As one of our panellists pointed out, ‘AI is an elitist language.’
AI’s biggest challenge tends to be not capability but clarity. Success often depends on translating complex systems into accessible, actionable insights for everyone involved, not just the experts.
It’s not a catch-all solution to every modern problem (as many businesses are finding out the hard way); it’s more of a toolset, and it needs to be more accessible.
We asked our panellists what they thought:
‘Ethical AI refers to the development and deployment of artificial intelligence systems that emphasise fairness, transparency, and respect for human values.’
Others focused on what it takes to embed the tech ethically, and the positive disruption it can represent:
‘It should be a values-driven approach that embeds the authentic, specific values of the business into every stage of the AI development process. I’d even go as far as to say it needs to proactively challenge existing power structures.’
For some, the outcome is the part of ethical AI that matters most:
‘Ethical AI is a system or a process, but it’s the outcome or the result that matters in this conversation. That’s a question of whether or not someone trusts a business to use an AI product.’
Ultimately, what good looks like will depend on the organisation and how the AI systems are used. The best approach is to start small with low-stakes practices.
As your governance maturity increases, so should your AI efforts. And speaking of governance…
Governance, Regulation, and Strategy
You want to implement AI into your business – how do you build a strategy? For our panellists, good AI governance is the strategy:
‘While involving your lawyers is beneficial for any project, regulatory compliance alone does not guarantee successful and ethical AI outcomes. This is where governance comes in.’
Whereas compliance is the act of meeting external regulatory obligations, governance means setting your own standards, rules, and oversight. The distinction is important, particularly when you consider how long it has taken the law to catch up with the technology in recent years.
Where existing laws fall short, governance bridges the gap. AI governance covers the processes by which organisations take responsibility for decisions about how AI is implemented:
‘Effective governance requires organisations to consider the unique aspects of AI and integrate them into their existing governance practices. By doing this, they can ensure that AI is used responsibly and effectively while minimising potential drawbacks and negative impacts.’
A strong AI governance framework starts with strategy. It’s the foundation for any responsible AI adoption, but strategy alone isn’t enough. Organisations need to understand both the enablers that drive adoption and the derailers that can compromise it. One of the most significant derailers is bias and the absence of clearly defined ethical AI principles.
One of our guests pointed to ISO/IEC 42001 for more direction on what good AI governance looks like in practice.
The framework sets out a clear progression: starting with a defined AI vision, followed by a strategy grounded in risk appetite and ethical AI principles.
From there, organisations should establish strong governance oversight, build risk and compliance structures, and put the right people and skills in place, often through dedicated roles or committees like an AI Ethics Board or AI Committee.
As one of our speakers pointed out:
‘You can’t scale governance without grounding it in real language, tools, and care.’
The panel was in agreement:
‘Scaling governance depends on where you are as an organisation. The first step is identifying who’s going to be involved. Those people can’t be naysayers; they have to be able to sit together and talk to each other. If you disagree, you need to be able to disagree in the same room as one another.’
Standards and Policies
Standards and regulatory policy are often conflated. Unlike regulation, which consists of legally binding rules, standards provide the framework for best practice that supports organisations in developing products and services.
Combine the two, and you get a holistic view of governance that ensures businesses operate inside legal and ethical boundaries.
Different types of standards include foundational, process, measurement, and performance, which are typically tracked across three levels of standardisation: national, regional, and international.
‘Standards build trust and confidence in AI and operationalise regulatory requirements at the same time, facilitating market adoption and global alignment in the process.’
One of the tensions here is localisation vs standardisation. On paper, standardising models and processes seems efficient: you reduce duplication, centralise oversight, and ensure consistency across markets. In reality, standardisation can often come at the cost of relevance and effectiveness.
‘Models designed globally are typically isolated. They need to be designed locally, otherwise, you risk limiting trust and adoption between local end users.’
Beyond the EU AI Act
While the UK government has no appetite for adopting the EU AI Act, the Act’s extra-territorial scope will have a big impact on British businesses operating in or serving EU markets.
Domestically, the UK continues to favour a more flexible, industry-led approach to AI governance. Rather than replicating the EU’s regulatory model, the UK’s AI strategy prioritises guidance over mandates, encouraging innovation while focusing on tangible outcomes.
That doesn’t mean the UK is standing still. Several targeted initiatives are underway to help businesses navigate AI adoption more responsibly.
These include the AI and Digital Hub (a one-year pilot that launched in April 2024), which created a single point of contact for innovators seeking regulatory advice across multiple domains.
The AI Management Essentials Tool is also in development to help organisations, particularly SMEs and startups, assess their internal AI governance processes, drawing from ISO standards and the NIST framework.
Meanwhile, the AI Audit Bot enables AI creators to check their applications against a growing library of regulations across the UK, EU, and US jurisdictions, helping to clarify legal risks.
‘An important component of the UK’s National AI Strategy is the recognition of AI standards as essential tools for supporting AI adoption across industries, particularly in healthcare, finance, and public services. The UK Government has recently reconfirmed the importance of technical standards for AI.’
People First
AI is of course powerful, but it’s not a strategy by itself. The most successful AI use cases start with human needs.
‘Start with the people and the challenge. Businesses have been tripping up by thinking tech or business-first in the past few decades already. AI runs the risk of refuelling this by suggesting “AI first.”’
This tends to be a common point of failure when it comes to tech transformations – when AI is implemented without a grounded understanding of its human context, it tends to replicate and reinforce existing biases.
AI integration must stretch beyond the technical conversation to encompass empathy, inclusion, and human-centred design, because models are only as good as the data and the perspectives that shape them.
How Technical Do You Need to be to Harness AI?
For those with a more business-focused background, navigating the (many) pitfalls of AI implementation can feel alien. Thankfully, our panel have a different way of looking at it:
‘As with anything, the more you know, the easier it is to start implementing, but we can’t know everything. It’s about knowing enough to effectively manage the tech, the capability, the risk, and the opportunity. You need to understand the art of the possible and how much to invest in hiring the right people.’
Who are those ‘right people’? Surrounding yourself with a strong team is essential, but it’s hard to know who to target:
‘The key isn’t just the technologists, the data engineers, the prompt engineers and architects. You need the legal side covered, the research, the product managers, and the transformation specialists.’
For others, the table doesn't need to look quite so big:
‘It would be possible to create a small group of executives with strong technical knowledge and business expertise to assess applicable criteria for each AI system to be put into production.’
One of the more compelling arguments is that there simply aren’t enough people with the right blend of skills to cover every base, at least, not yet. That’s why the focus shouldn’t just be on building huge advisory boards, but on making smart, strategic appointments.
There’s a growing consensus that while a formal AI ethics lead is important (someone like a Chief Trust Officer or Chief AI Ethics Officer), this role is one part of a broader ecosystem of responsibility.
Ethical oversight isn’t the job of one person; it requires cross-functional input and enterprise-level buy-in.
An AI ethicist can absolutely add value, particularly when they help engineering teams embed ethical, social, and political considerations into the design and deployment of AI systems.
The Ground Up
AI success isn’t built purely from the top down. It requires insight from the people on the ground, those closest to the customer and the customer journey, to deliver meaningful value:
‘While you don’t need an in-depth knowledge of the technical systems, you need to be able to ask intelligent questions and uncover gaps.’
As for the data side of things:
‘You don’t need to suddenly turn into a data engineer. You need to know roughly how it works, what the risks are, and most importantly, what that means for your business.’
Accountability and Ownership
Who’s responsible for making sure AI is governed well? Our panellists see it as a case of leveraging standards and working out where you stand to begin with:
‘Responsibility for AI’s governance is shared, but leadership must own it. The responsibility needs to lie in the value chain, from the people that build and define the systems to those who regulate them.’
While leadership should champion the agenda, it requires input from across the organisation to be meaningful. That means involving legal, technical, and operational teams early, and aligning them around a shared set of principles.
Governance isn’t a checkbox exercise; it’s an ongoing process that begins with understanding your organisation’s current capabilities and constraints.
‘Know your strategy, your system’s limitations, and where your business sits. What’s your starting point? Once you’ve got that, you can move from overwhelmed to a heat map of opportunities.’
AI committees and leadership councils are key players in this process, as are the cross-functional teams that support them.
These structures help ensure accountability is embedded at every level, translating strategic intent into operational reality. When done well, governance becomes a framework for confident decision-making, not a barrier to innovation.
It helps organisations move with purpose, ensuring AI is deployed responsibly, sustainably, and with a clear view of the risks and rewards.
Levelling the Playing Field
Looking ahead, how can AI level the playing field for smaller businesses looking to compete with industry giants? It’s the multi-billion-dollar question on the minds of many SMEs, would-be entrepreneurs, and tech professionals alike.
For some of our expert speakers, it’s the narrow, deep-focus smaller players who will capture market share:
‘OpenAI may be the cornerstone now, but it’s the specialised LLMs and AI products that look ready to win the race. They can do this if they focus on the operationalisation of AI, rather than lofty ideas. That’s not to say governance shouldn’t be carried out early in the process either.’
Others see AI as a complete power changer that affects every level of a business:
‘AI doesn’t just sit there. It’s going to shift every aspect of every role, and it’s going to do that at every level of business.’
It’s clear that, as adoption rates rise and understanding grows, the gap between leaders and laggards will shrink, productivity will go up, and smaller players will find ways to unlock more value.
The compounding effect is real, and you can start to see it emerge in critical industries. Take the banking sector, for example, where the gap between digital leaders and everyone else is widening at pace.
That said, with the right focus, agile SMEs can leapfrog the legacy systems that so often hold back major enterprises.
‘It’s only a matter of time before you get AI companies. How long they’ll last, however, is a completely different matter.’
In a Nutshell
Protecting and accelerating your business with ethical AI (in a nutshell) involves:
- Starting with the Why
- Conducting an AI readiness assessment
- Keeping up without rushing (remember: low-stakes practice first)
- Involving your legal team
- Leveraging standards and best practices
Parting Advice
Keeping it short and sweet, our guests offered some parting insights that captured the spirit of the evening:
- ‘Make sure there’s no alternative before you use AI. Not everything needs AI.’
- ‘Start with the human. Resolve a specific problem.’
- ‘How do you bring value back to humans? Decide what the values are that you’re going to feed into this.’
- ‘Always leverage standards.’
Meet Our Panel:
Niharika Harihan
Niharika is a digital ventures and AI leader with over 18 years of experience shaping products, services and customer experiences at the intersection of business strategy, design, data, and AI. She is currently a Partner at Sorai, a next-generation digital, AI, and customer experience studio that works in close partnership with McKinsey & Company. At Sorai, Niharika helps clients design and launch new ventures, scale high-performing teams, and drive end-to-end digital transformation across the UK and MENA regions. Niharika is also the Co-founder and Chief Product & Strategy Officer at Design3, the world’s first global ed-tech platform for product and design teams focused on AI, Web3, and sustainability. Under her leadership, Design3 scaled to 35 countries and is now developing an AI-native learning platform for product teams. Previously an Associate Partner at McKinsey Digital, she has led large-scale transformation and venture design initiatives for Fortune 500 clients across banking, retail, beauty, and professional services. Earlier in her career, she held senior roles at Barclays, Publicis Sapient, and EY Seren, and started at Nokia Research. Niharika is a recognised thought leader in AI product strategy, digital innovation, and design leadership and a former Advisory Board Member at Imperial Business Design Studio.
Shazia Khan
Shazia Khan is a highly experienced legal professional with over 20 years of cumulative expertise across banking, finance, corporate law, data privacy, and AI governance. This broad background allows her to navigate complex regulatory landscapes, including GDPR and the AI Act, and deliver effective data privacy compliance and risk mitigation strategies for diverse organisations. Currently, as a Senior Privacy Counsel at Axiom, she is assigned to AXA UK PLC, leading data privacy for Smart Automation and Robotic Process Automation (RPA) agents, including the application of AI within these tools. Her extensive experience includes interim roles as Associate Principal Counsel at The Walt Disney Company, where she led EMEA & UK data privacy, and numerous senior privacy counsel positions at leading companies like Google, Booking.com, and PayPal. She also serves as the Global Leadership Network Coordinator for British Women in AI Governance, driving global AI discourse and holding an AI Governance Program Architect Certification.
Barbara Zapisetskaya
Barbara is a Principal Counsel (Technology) at the European Bank for Reconstruction and Development. She has extensive experience in drafting and negotiating complex technology and commercial transactions. She has a particular interest in the regulation and governance of artificial intelligence and writes regularly on the topic. From 2020 to 2023, Barbara co-authored the UK chapter on artificial intelligence published annually by Global Legal Insights. In 2024, she co-authored a chapter on AI governance for “The Handbook of Board Governance” (edited by R. Leblanc). She sits on the AI Committee at the British Standards Institution and is also a certified AI Governance Professional (AIGP).
Dr Chantelle Larson
Dr Chantelle Larson is a globally respected leader in authentic human and AI systems transformation, with over 25 years of experience driving equity from boardroom strategy to grassroots change. A founder of Women in AI Governance™, Chantelle integrates deep research and practice across GenAI, organisational design, and inclusive operating models, helping organisations innovate responsibly and design workplaces that truly work for all. Her portfolio spans executive coaching, equity-led transformation, and cultural change, underpinned by expertise in AI governance, change management, and workforce design. Chantelle is also the editor of Humanity Magazine and a passionate advocate for intersectional equity and human-centred innovation.
Anish Joshi
Anish is a Partner and Chief Innovation Officer at Sorai, a pioneering ‘Enterprise Accelerator’ transforming how large organisations build digital products, launch 0→1 ventures, and scale innovation. Positioned as a bold alternative to traditional consulting, Sorai equips enterprises with the speed, agility, and product thinking of start-ups — powered by design and AI. A leader at the intersection of Design, Strategy, and Sustainability, Anish is also the Founder of Design3, the first global ed-tech platform and network helping designers and innovators navigate the future of Web3, AI, and climate-conscious design.
With a background spanning senior roles at Shell, Deloitte, and Novo Nordisk, he has built design and innovation capabilities across industries, delivering impactful work for organisations including Google, Vodafone, the NHS, and Saudi Arabia’s digital transformation efforts.
Anish holds an Executive MBA from Imperial College and regularly contributes as a lecturer, keynote speaker, and government advisor on design-led innovation.