Published on 15/04/2026 | Written by Heather Wright
Why AI governance is the real competitive advantage…
Most companies think AI innovation is being held back by risk. For Spark’s Matt Bain, the opposite is true. After seven years of quietly building ethical guardrails and governance muscle, the telco discovered that doing the ‘slow, boring stuff’ first was the secret to moving faster than others – including when it cloned a real person for a national Skinny campaign.
“We started using machine learning and predictive analytics about seven years ago, so this isn’t new for us,” Bain tells iStart. “What people think of as AI now is really just the latest evolution of things we’ve been doing for a long time.”
“These projects aren’t really AI projects – they’re data projects.”
From the outset, Spark discovered that deploying AI – particularly where customer data is involved – raised complex questions that couldn’t be solved by technology alone.
“When you’re building models, it can be hard to explain how they choose. So the starting point for all of this was really strong ethical guidelines around how we use the technology.”
Rather than reacting case by case, Spark invested early in defining what it considered acceptable – and unacceptable – uses of AI. “What that looks like: What do we want to use this for, what data is acceptable, how is the data stored securely, and making sure that if a model does something we can explain why it did it.”
The principles were designed to be enduring, rather than technology-specific.
“The technology changes really quickly,” Bain notes. “But if we start from a foundation of what we believe is right and wrong as a business, and what we would never do – even if it was commercially attractive – then those principles still apply as the technology evolves.”
Over time, those principles were embedded operationally, supported by internal tools Spark developed to help teams assess proposed AI use cases before they’re deployed. “We’ve now got tools that allow our people to see the use case, understand what it’s being used for and the system will flag whether this is permissible.” The tool translates Spark’s ethical principles into practical decision-making, helping teams evaluate how data will be used, whether a human needs to remain in the loop and whether the use case aligns with published governance standards. Embedding these checks early reduces ambiguity and gives teams the confidence to move quickly once a project is cleared, Bain says.
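Spark hasn’t published how its internal assessment tool works, but the behaviour Bain describes – teams submit a use case, the system flags whether it’s permissible and whether a human must stay in the loop – maps naturally onto a rules engine. The sketch below is purely illustrative: the categories, policy sets and function names are hypothetical, not Spark’s.

```python
# Hypothetical sketch of a rules-based AI use-case checker, in the spirit of
# the tool described in the article. All policy rules here are illustrative.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    purpose: str
    data_categories: set   # e.g. {"billing", "biometric"}
    human_in_loop: bool
    explainable: bool

# Illustrative policy: categories never permitted, and categories that
# require a human reviewer to stay in the decision loop.
PROHIBITED = {"biometric_without_consent"}
NEEDS_HUMAN = {"credit_decision", "biometric"}

def evaluate(uc: UseCase) -> tuple:
    """Return (verdict, reasons) for a proposed use case."""
    reasons = []
    if uc.data_categories & PROHIBITED:
        reasons.append("uses a prohibited data category")
    if (uc.data_categories & NEEDS_HUMAN) and not uc.human_in_loop:
        reasons.append("sensitive decision requires a human in the loop")
    if not uc.explainable:
        reasons.append("model outcome must be explainable")
    return ("permissible" if not reasons else "flagged"), reasons
```

The value of encoding principles this way is the one Bain highlights: teams get an unambiguous answer up front, before any model is built.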
He says this foundational work gave Spark a level of maturity before generative AI entered the mainstream.
“We already had a baseline. Our teams understood the effects, the principles and how they turn into actions.”
A Skinny test
That groundwork was tested with an ambitious project to clone a customer for an advertising campaign for Skinny – Spark’s low-cost mobile and broadband brand – in what Bain says was a world-first.
“We’d never cloned a human before,” he says. “That’s a whole new level. Someone’s identity. Their voice. Their mannerisms and physical likeness.”
The technology made it possible to produce creative content faster and at lower cost – and it fed into Skinny’s ‘cheeky and innovative’ brand, using AI based on a real person rather than a completely synthetic human to keep marketing costs low.
“We could put her in space. We could put her on the moon. We could test creative ideas really quickly and get to a finished product much faster than traditional filming.”
The visible outcome, however, understated the work required behind the scenes.
“Above the waterline is what you see,” Bain says. “Below the waterline was a huge amount of new thinking – particularly around security – because this is actually more sensitive than the personal information we usually hold.”
Skinny treated the AI likeness as requiring a higher standard of protection than typical customer data.
“Before we even captured the data, we had to decide who could access it, how many people, whether it could be downloaded, and where it could live. It could only exist in a very secure environment and could never leave it.”
Legal and privacy teams were involved throughout, running alongside the technical work.
“Some of these things you don’t realise until you hit them, so the practical steps evolved as we went,” Bain says. “Our legal and privacy teams were actively involved, making sure they were always a few steps ahead.”
Liz Wright, the customer who was cloned, has remained involved throughout the campaign.
“She was more involved each time. We didn’t just do things and put them out there without her knowing, even where we could have done that legally. We didn’t think that was the responsible way to handle it.”
There are clear limits too, on usage. “We have rights for a defined period, and then we destroy the data. We don’t want to hold it for longer than we need to.”
Guardrails unlock speed
While the governance work was intensive upfront, Spark says it was this effort that now allows teams to move quickly.
“What we found was that we spend more time upfront evaluating the risks and making sure we have those covered. But once the guardrails are in place, teams can move very fast inside them.”
Bain argues that the common perception – that AI governance slows innovation – misunderstands where the real friction lies.
“Building the models is really quick these days,” he says. “The hard part is the guidelines and the ethical approach.”
Once teams understand how AI works and what the boundaries are, Bain says fear drops away. “When people understand the technology and the risks, they’re emotionally comfortable with it – and that’s when they can really move.”
Bain notes that AI governance must accommodate technologies that change continuously. “These models are doubling in capability every six months. You can’t imagine now what will be possible in two years.”
Rather than constantly rewriting rules, Spark applies the same principles to each new use case as it emerges.
“You need to govern it like a piece of technology that will constantly change,” Bain says. “Have broad principles, then apply the same framework every time.”
That approach, he says, creates clarity internally and avoids paralysis. “It means people are never ambiguous about where the business stands. And it lets them move quickly within those guidelines.”
Hard-won lessons
When asked which AI capability Bain is most proud of at Spark, he points not to the latest offering, but to its customer intelligence platform that uses hundreds of machine-learning models to understand individual customer preferences and predict what will be most relevant to them. “We now have hundreds of models running against our customer base that say things like: Heather loves music, she’s not price-sensitive, she prefers iPhones, she’s an early adopter,” Bain says. That insight is used across channels – emails, texts, retail stores and call centres – to personalise interactions in real time. “What we found was that we massively improved engagement with those communications.”
That same intelligence also helps frontline staff make more relevant recommendations when customers get in touch. “That’s a pretty cool thing – and it’s still pretty unique in the market today.”
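Spark’s platform itself isn’t public, but the pattern Bain describes – many propensity models each scoring one trait per customer, with the results combined into a profile that drives the next interaction – is a standard one. The sketch below is a deliberately minimal, hypothetical illustration of that shape; the trait names and threshold are invented.

```python
# Illustrative sketch of combining per-customer propensity scores (each the
# output of a separate model) into a profile for channel personalisation.
# Trait names and the 0.7 threshold are hypothetical, not Spark's.

def build_profile(scores: dict, threshold: float = 0.7) -> dict:
    """Keep only traits whose propensity score (0..1) clears the threshold."""
    return {trait: s for trait, s in scores.items() if s >= threshold}

def next_best_message(profile: dict) -> str:
    """Pick a talking point from the customer's strongest trait."""
    if not profile:
        return "generic offer"
    top_trait = max(profile, key=profile.get)
    return f"lead with {top_trait}"
```

In a real deployment the same profile would be consulted by every channel – email, SMS, retail, call centre – which is what makes the personalisation consistent.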
Another internal AI tool is a system allowing staff to query financial and performance data using plain language, rather than writing SQL queries by hand. Traditionally, Bain says, answering questions about revenue, sales or margins meant a slow loop between analysts, spreadsheets and follow-up requests. “We built an AI that could understand text, turn it into an SQL query, query the database and then create a multimodal report with graphs, tables and written insights.”
What once took days now happens in seconds or minutes, dramatically speeding up decision-making and allowing finance teams to focus on higher-value analysis rather than producing reports.
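The pipeline Bain outlines has three steps: translate the question to SQL, run it, render a report. Spark’s implementation isn’t public, and the real translation step would use a language model; in this toy sketch a lookup table stands in for the model so the rest of the loop can be shown end to end. The table, data and function names are all hypothetical.

```python
# Toy sketch of a natural-language-to-SQL reporting loop. A real system
# would call a language model where TRANSLATIONS is used; everything here
# (schema, data, question) is illustrative only.
import sqlite3

# Minimal in-memory dataset standing in for a finance database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, revenue REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("North", 120.0), ("South", 80.0), ("North", 50.0)])

# Stand-in for the model: map a known question to SQL.
TRANSLATIONS = {
    "revenue by region": "SELECT region, SUM(revenue) FROM sales GROUP BY region",
}

def answer(question: str) -> list:
    sql = TRANSLATIONS[question]  # model.translate(question) in a real system
    return conn.execute(sql).fetchall()

def report(rows: list) -> str:
    """Render rows as a plain-text table (the 'report' step, minus graphs)."""
    return "\n".join(f"{region}: {total:.1f}" for region, total in rows)
```

The speed-up Bain cites comes from collapsing the analyst round-trip into this single request-response loop.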
So what are Bain’s three key tips for companies finding their way with AI and data?
Get your data in order first
Bain says data quality is the single biggest constraint on successful AI use. “These projects aren’t really AI projects – they’re data projects,” he says, adding that “the underlying data is 90 percent of the work”. Where data definitions are inconsistent or ambiguous, models become confused and unreliable. By contrast, when data is clear, structured and robust, Bain says AI solutions can be deployed very quickly and deliver real value.
Start now – the learning curve is unavoidable
Bain warns that delaying adoption doesn’t reduce complexity, it just postpones learning. “If you don’t get started now, there’s a learning curve – and that learning curve will be the same no matter when you start,” Bain says. He recommends beginning with low‑risk use cases so organisations can build confidence, understanding and capability before scaling more complex applications.
The technology is no longer the barrier – mindset is
What used to be expensive and exclusive is now widely accessible. “What used to only be possible for a Spark is now available to small businesses – we’re using the same technologies,” Bain says. The real work is educating teams, demystifying how AI works, and putting appropriate guardrails in place. “By understanding it, you can mitigate the risks,” he says, adding that responsibility and confidence – not fear – are what allow businesses to move forward productively.