AI success hinges on leadership, not just technology. Leaders who excel in AI initiatives share three key traits: clear vision, actionable strategy, and effective collaboration. These qualities separate thriving organisations from those struggling to scale AI.
Key insights:
- 42% of companies abandoned AI projects by 2025, up from 17% in 2024.
- Firms with CEOs actively overseeing AI saw a 3.6x boost in financial results.
- Most challenges (70%) stem from people and process issues, not technical barriers.
What works:
- Outcome-first strategies: Define business goals before investing in AI.
- Dedicated leadership: Roles like Chief AI Officer (CAIO) ensure alignment and focus.
- Cross-functional teams: Combining technical and business expertise avoids common pitfalls.
- Structured pilots: Clear objectives and risk management improve scalability.
- External partnerships: Collaborations with cloud providers, startups, and industry experts accelerate progress.
Examples:
- Sanofi: Achieved a 20–30% improvement in drug target identification using AI.
- Delta Air Lines: Reduced maintenance-related cancellations by integrating analytics with operations.
The takeaway? AI leadership is about guiding teams through organisational shifts, aligning AI with business objectives, and measuring success rigorously. For actionable strategies, events like the RAISE Summit (8–9 July 2026, Paris) offer a platform for collaboration and growth.
Creating a Clear AI Strategy
A clear AI strategy ensures that AI efforts are directly tied to measurable business results. 78% of companies with a formal AI strategy are already seeing returns from generative AI, and over 60% of organisations now have generative AI applications in production - a fourfold increase from 2023 to 2024 [6]. Despite this progress, only 10% of large companies have successfully made AI a core driver of their business performance [3]. The difference often lies in how leaders craft and implement their strategies, focusing on actionable, outcome-driven plans.
"Impact before technology, targets before tools, discipline before hype." - Boston Consulting Group [7]
Successful AI leaders adopt what’s called a zero-based, outcome-first mindset. Rather than simply adding AI to existing workflows, they start by envisioning what optimal performance looks like - typically over a 36-month horizon - and work backward to define the profit-and-loss impact and capabilities needed [7]. This approach forces clarity on success metrics before any investments are made. A great example is Saudi Arabia's Data & AI Authority (SDAIA). Since its establishment in 2019, SDAIA has developed initiatives like the "National Data Bank" and the "Estishraf" analytics platform. By 2023, these systems connected over 200 government entities, benefiting 85 organisations and generating an estimated 50 billion SAR (around €12.2 billion) in value through better policy planning and enhanced citizen services [3].
Connecting AI Goals with Business Objectives
The most effective AI strategies align high-level vision with practical, on-the-ground insights. Leaders set broad strategic priorities tied to specific "AI domains" - whether departments, products, or entire processes - while also gathering tactical feedback from teams familiar with operational challenges [6]. This dual approach ensures AI initiatives solve real-world business problems, not just theoretical ones.
Resource allocation becomes a key factor when budgets are tight. Top-performing AI organisations - the leading 10% - typically allocate 4 percentage points more of their technology budgets to AI compared to their peers. They also manage 22% more AI use cases [3]. These leaders view AI projects as a "string of pearls" - a series of diverse solutions, each carefully evaluated for feasibility, cost, and return on investment [5].
"AI initiatives shouldn't be treated as standalone projects, but tightly integrated as part of your overall corporate strategy" - Raymond Peng, Senior Principal of AI Value Creation at Google Cloud [6]
Before starting any AI initiative, it’s crucial to establish baselines by measuring the current performance of manual processes. This groundwork allows for accurate A/B testing and ROI assessments later [6]. Once goals are clearly defined and prioritised, governance structures become essential to ensure the strategy is executed effectively.
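To make the baseline idea concrete, here is a minimal sketch of how a measured manual baseline might feed a later ROI assessment. It is written in Python, and every figure in it is illustrative, not drawn from the article:

```python
# Hypothetical baseline-versus-pilot comparison; all figures are
# illustrative examples, not data from the article.

def roi_estimate(baseline_min_per_task, ai_min_per_task,
                 tasks_per_month, hourly_cost_eur, ai_monthly_cost_eur):
    """Estimate net monthly savings and ROI of an AI pilot against a
    measured manual baseline."""
    minutes_saved = (baseline_min_per_task - ai_min_per_task) * tasks_per_month
    gross_savings = minutes_saved / 60 * hourly_cost_eur
    net_savings = gross_savings - ai_monthly_cost_eur
    roi = net_savings / ai_monthly_cost_eur
    return round(net_savings, 2), round(roi, 2)

# Example: 12 min/task manually vs. 4 min with AI assistance,
# 2,000 tasks a month, €45/hour labour cost, €3,000/month tool cost.
net, roi = roi_estimate(12, 4, 2000, 45, 3000)
print(f"Net monthly savings: €{net:,.0f}; ROI: {roi:.0%}")
```

The point is not the arithmetic itself but the discipline: without the baseline figure measured first, neither the A/B comparison nor the ROI claim can be defended later.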
Chief AI Officers and Centralised Governance
A well-crafted strategy needs strong leadership to ensure consistent execution and scalability. Appointing a dedicated head of AI - such as a Chief AI Officer (CAIO) - can be a game-changer. Between 2020 and 2025, the number of companies with a designated CAIO has nearly tripled [9]. This role provides the strategic oversight required to embed AI into the company’s long-term plans, making it a core function rather than just an extension of IT [9]. Companies that succeed with AI early on are 1.4 times more likely to have their CEO deeply involved in leading the transformation, with 84% opting for a top-down or hybrid approach to AI strategy [4].
Centralised governance often includes an AI Centre of Excellence (CoE), which brings together experts in data science and business analysis. This team is responsible for setting design standards, deciding whether to build or buy solutions, and ensuring ethical AI practices are in place [8]. Clear accountability is also key:
- A Business Outcome Owner oversees adoption and ensures benefits are realised.
- A Technical Production Owner handles reliability and system performance.
- An Independent Risk Owner defines and enforces control measures [7].
"Trying to scale AI across your organisation without a strong digital core is like driving a sports car with a decrepit engine - you may look good for the first mile, but you won't go very far" - Philippe Roussiere at Accenture [3]
Leaders in AI are 2.5 times more likely to prioritise building a robust "digital core" where cloud, data, and AI systems work seamlessly together, enabling scalability [3]. They also integrate responsible AI controls right from the start, reducing risks in the long run [3].
Building Cross-Functional AI Teams
Even the most advanced AI strategies can falter without strong teamwork. Research highlights that 95% of AI pilots fail - not because the technology is flawed, but because organisations often neglect the people, processes, and data that surround it [2]. The key to moving from a failed pilot to a successful, scalable solution lies in collaboration between technical experts and business teams from the very beginning.
Cross-functional teams bring together data scientists, domain specialists, legal advisors, and operations staff, each contributing insights that help minimise costly errors. For example, even a technically flawless AI model can fall short if it doesn’t fit into existing workflows. On the other hand, domain experts who deeply understand business challenges can guide AI specialists toward creating tools that make a measurable difference. Deloitte puts it succinctly: "The best data scientists aren't just technically minded; they understand the organisation's business challenges and can assist in problem-solving" [12]. This kind of collaboration bridges technical expertise with business insights, laying the groundwork for impactful AI solutions.
"Diversity of participants positively affects collective problem-solving performance, and may in some cases be more important than their individual abilities."
– Vegard Kolbjørnsrud, Associate Professor, BI Norwegian Business School [10]
The numbers tell a compelling story. While 94% of employees are eager to learn new skills for generative AI, only 5% of CXOs say their organisations provide reskilling opportunities [3]. This gap creates a significant bottleneck, preventing many companies from scaling AI effectively.
Combining Technical and Domain Expertise
Successful AI projects often begin with domain experts rethinking their workflows before engineers write any code. This co-creation approach ensures that AI tools address real business needs, rather than simply automating outdated or inefficient processes. A great example of this strategy comes from Accenture’s sales function in 2023–2024. Sales teams first revamped their workflows and then partnered with engineers and data scientists to develop AI tools tailored to these new processes [3].
Sanofi offers another standout example. Under CEO Paul Hudson, the pharmaceutical company has pursued an AI strategy since 2019 that spans both R&D and business operations. By creating the "Plai" app with Aily Labs, Sanofi integrated AI capabilities with pharmaceutical expertise. The result? A 20–30% boost in identifying potential therapeutic targets and an 80% prediction rate for low-inventory supply chain positions [3].
Each role in a cross-functional team plays a vital part:
- Business product owners ensure funding and encourage adoption.
- Domain experts define what success looks like.
- Applied scientists deliver robust and accurate models.
- Change enablement teams guide the human transition.
- Risk and legal advisors establish compliance frameworks.
When these roles collaborate from the start, AI initiatives are far more likely to deliver real, measurable value. By focusing on co-creation and fostering collaboration, organisations can unlock even greater potential.
Promoting Collaboration Across Departments
Cross-departmental collaboration is essential for scaling AI initiatives effectively. Breaking down silos within an organisation requires proactive leadership. Involving support functions - like legal, cybersecurity, HR, and operations - early in the process helps avoid last-minute compliance hurdles and ensures long-term success.
IBM’s marketing team provides a striking example of what cross-functional collaboration can achieve. By working together, they reduced banner ad production time from six weeks to just 60 seconds, while tripling effectiveness [2].
"AI success at scale is a team sport. By fostering cross-functional collaboration... organisations can maximise the value of their AI investments."
– Jenna Goldstein, Partner, Berkeley Partnership [11]
Some practical ways to encourage collaboration include:
- Establishing leadership forums where Chief People Officers, CIOs, and CTOs align on operating models and workforce skills.
- Creating transparency systems that allow team members to share insights and objectives.
- Launching "lighthouse" projects - diverse AI experiments that reveal patterns for integration across departments.
Delta Air Lines offers another powerful example. Between 2010 and 2018, Delta partnered with Airbus on the Skywise platform, combining engineering expertise with data analytics to monitor 14,000 variables per aircraft. This predictive maintenance approach cut maintenance-related cancellations from over 5,600 in 2010 to just 55 in 2018 [10]. A key factor was the close collaboration between ground operations staff and data analysts to identify the most critical signals.
Finally, fostering psychological safety is crucial. Teams need an environment where failure is seen as a learning opportunity rather than a career risk. When employees fear penalties for unsuccessful pilots, they avoid bold, innovative ideas. Leaders must cultivate a "mastery climate" that values learning through structured experimentation over playing it safe [10].
Managing Experimentation and Scaling AI Projects
Once cross-functional teams are in place, the next hurdle is figuring out how to transition AI projects from small-scale tests to organisation-wide deployment. Statistics paint a challenging picture: up to 70% of AI Proof of Concepts (PoCs) fail to meet their business goals, often due to unclear objectives or data-related issues [13]. Similarly, around 95% of generative AI pilot programmes stall before delivering measurable value [14]. Achieving success requires structured pilots and a focus on scaling that prioritises human involvement.
Structured Pilots and Risk Management
The best AI pilots follow a clear, step-by-step process rather than diving headfirst into development. A seven-step framework is often the most effective: identify the business problem, evaluate feasibility and risks, prepare the necessary data, build a prototype, test it against specific performance indicators, assess ROI, and then decide whether to scale or revisit the design [13]. This method avoids common pitfalls like over-engineering or spreading resources too thinly across competing goals.
Keeping technical progress aligned with operational readiness is critical. Take Scotiabank as an example. They launched an AI-driven fraud detection pilot using machine learning to tackle a specific use case. By focusing on measurable results, the pilot successfully prevented €500,000 in fraud losses within just three months. Once the project met its ROI and technical benchmarks, the bank scaled the solution across its operations [13].
Before scaling any AI initiative, ensure it meets key readiness criteria: achieving business KPIs, demonstrating technical stability, adhering to data governance standards, securing stakeholder support, and addressing ethical risks [14]. Investments in MLOps, well-defined roles, and proactive change management have been shown to boost AI returns by as much as 80% [14].
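Those readiness criteria can be treated as a simple go/no-go gate before scaling. The sketch below encodes the five checks named above; the class and field names are hypothetical, chosen for illustration:

```python
# A go/no-go scaling gate built from the five readiness criteria in the
# text; names are illustrative, not from any specific framework.
from dataclasses import dataclass

@dataclass
class PilotReadiness:
    business_kpis_met: bool
    technically_stable: bool
    data_governance_ok: bool
    stakeholder_support: bool
    ethical_risks_addressed: bool

    def ready_to_scale(self) -> bool:
        # Every criterion must pass; a single gap blocks the rollout.
        return all([
            self.business_kpis_met,
            self.technically_stable,
            self.data_governance_ok,
            self.stakeholder_support,
            self.ethical_risks_addressed,
        ])

pilot = PilotReadiness(True, True, True, False, True)
print(pilot.ready_to_scale())  # False: stakeholder support is still missing
```

Making the gate explicit forces the conversation about which criterion is unmet, rather than letting a pilot drift into scaling by default.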
Human oversight is also essential during pilot phases, especially for high-stakes decisions. A human-in-the-loop approach can help reduce risks while the model continues to learn and improve. Conducting early Data Protection Impact Assessments (DPIAs) can reveal potential compliance challenges, such as GDPR requirements, before they become roadblocks [13]. Treat regulatory and risk considerations as non-negotiable design requirements from the start.
When pilots are structured to deliver measurable results, the next step is scaling these successes across the organisation.
Scaling AI with Change Management Programmes
Even after a successful pilot, scaling AI projects requires strong change management to overcome resistance. Pilot success doesn’t automatically translate to full-scale adoption - statistics show that between 70% and 85% of generative AI projects fail to move beyond the pilot phase [16].
The strategic oversight discussed earlier in "Chief AI Officers and Centralised Governance" becomes even more critical during this transition. Novo Nordisk’s rollout of Microsoft’s Copilot GenAI tool offers a great example. Over 14 months (from January 2024 to February 2025), the company expanded from a few hundred users to 20,000 employees. The results were striking: employees saved an average of 2.17 hours per week, but the real driver of satisfaction wasn’t just the time saved. Employees valued the tool’s ability to improve work quality - better summarisation and ideation, for instance - three times more than the time savings [15].
Change management needs to be baked into the design process, not treated as an afterthought. This includes aligning incentives, building networks of internal champions, and focusing on tangible, personal benefits rather than abstract metrics like corporate ROI [14][15]. A practical framework for scaling is the 90-day playbook: dedicate the first 30 days to resolving deployment issues, the next 30 to embedding operations in high-impact teams, and the final 30 to demonstrating financial results [16].
One manufacturing company showcases how this works in practice. They scaled a predictive maintenance pilot using automated data pipelines and MLOps, leading to a 12% reduction in equipment downtime across multiple factories [14]. While the technical infrastructure was key, the real breakthrough came from effective change management. By helping frontline staff see AI as a tool that enhanced their work rather than a threat to their jobs, the company accelerated adoption naturally.
Scaling AI isn’t just about technology - it’s about people. When employees feel that AI tools genuinely improve their work, adoption becomes a lot easier.
Using Partnerships to Accelerate AI Innovation
Once you've built strong in-house AI capabilities, the next step is reaching beyond your walls. Why? Because tackling data silos and driving innovation often requires external partnerships. In fact, 74% of C-level executives admit that data silos are a major roadblock to rolling out AI across their organisations [18]. The companies that thrive are those that tap into the strengths of external collaborators - whether it’s infrastructure, specialised expertise, or regulatory insights.
"It will be difficult for IT teams to operate independently when it comes to coordinating all the elements that need to come together to build a robust GenAI strategy."
- Vishal Chhibbar, Chief Growth & Strategy Officer at EXL [18]
Working with External Innovators
Smart partnerships focus on filling specific gaps in your AI setup. For example:
- Cloud Service Providers (CSPs): They provide the backbone infrastructure and ensure compliance in heavily regulated industries. Many CSPs even offer funding programmes to help reduce modernisation costs - something CIOs should definitely explore [18].
- Large Language Model (LLM) Developers: These experts customise algorithms for your unique use cases.
- Industry Specialists: Their deep knowledge helps weave AI into practical, everyday workflows.
Data backs up the shift to open innovation. According to IBM's Institute for Business Value, organisations that embrace open innovation grow revenue 59% faster than those sticking to closed systems. They’re also 3.3 times more likely to outpace competitors in revenue growth and 2.7 times more likely to excel in profitability [22]. The lesson here? Innovation thrives when companies collaborate and share data across networks, instead of hoarding resources [18][22].
Events like the RAISE Summit

Networking events are a goldmine for finding the right partners. Take the RAISE Summit, held annually at Le Carrousel du Louvre in Paris. This event pulls together the entire AI ecosystem - from enterprises and startups to investors and policymakers. The 2025 edition alone welcomed 822 CEOs from 168 Fortune 500 companies, along with investors managing over €600 billion in assets [20]. Impressively, over 80% of attendees are high-level decision-makers, making it a hotspot for turning conversations into concrete deals [17][20].
"The fastest-growing AI Tech conference in Europe, and maybe in history."
- Eric Schmidt, Former CEO and Chairman of Google [17]
The 2026 edition, happening on 8–9 July, is expected to draw more than 9,000 attendees and feature over 350 speakers [17]. What makes RAISE stand out is its focus on action. For instance, the invitation-only CxO Summit gives Fortune 1000 leaders a private space to compare AI strategies and form partnerships.
"The CxO Summit exists so companies don't just talk about AI, they leave RAISE with real partnerships, pilots, and signed deals."
- Hadrien de Cournon, Co-Founder of RAISE Summit [20]
An AI-powered matching tool helps attendees schedule one-on-one meetings with partners whose goals align perfectly. Beyond the main summit, events like the RAISE Startup Competition (offering a €5 million prize pool) and the RAISE Hackathon (€200,000 prize pool with 7,000+ developers) connect participants with cutting-edge innovators [17]. These opportunities complement internal strategies by turning big ideas into actionable AI projects.
Partnership Models Comparison
Different partnership styles serve different needs. Here’s a quick breakdown:
| Partnership Model | Primary Advantage | Key Challenge |
|---|---|---|
| Bilateral Collaboration | Combines complementary strengths [19] | Integration hurdles [19] |
| AI-Driven Ecosystems | Shares knowledge and resources [19] | Managing multiple stakeholders [19] |
| Service-Oriented (Vendor) | Speeds up innovation at scale [19] | Reliance on vendor ecosystems [18] |
| Research Consortia | Advances long-term goals [19] | Balancing research with business needs [19] |
| Data-Centric Networks | Addresses AI training data gaps [19] | Data privacy and integration issues [19] |
When collaborating with startups, focus on defining the use case and finalising commercial terms early to avoid delays [21]. For CSPs, leveraging their security expertise and infrastructure investments is essential for meeting compliance requirements in regulated sectors [18]. The key is to pick a partnership model that directly addresses your organisation’s challenges - whether it’s infrastructure, data, or expertise - rather than diving into partnerships without a clear purpose.
Measuring and Improving AI Innovation Results
Tracking measurable outcomes is essential to justify AI investments. Surprisingly, 95% of organisations report no returns from their GenAI projects [27]. The main reasons? Ineffective measurement and a focus on automation rather than fostering real capability shifts.
Key Metrics for Success
Avoid superficial metrics like total prompts or user counts. Instead, zero in on business outcomes. Eric Siegel, consultant and former professor, highlights this issue:
"When it comes to evaluating a model, most ML projects report on the wrong metrics - and this often kills the project entirely." [24]
Strong AI leadership requires both a clear vision and precise performance tracking to maintain progress. Effective leaders focus on three core areas: model quality (accuracy and safety), system reliability (infrastructure and latency), and business impact (ROI, revenue growth, and cost savings) [23][26]. Organisations with robust measurement frameworks are three times more likely to achieve meaningful returns on AI investments [25].
Here are some key metrics to focus on:
- Active AI Users %: The proportion of employees actively using AI tools each month.
- Time-to-Proficiency: The number of days it takes for an employee to consistently use an AI tool.
- Productivity Impact Score: Quantifiable improvements in output directly linked to AI.
Take GitHub Copilot as an example. It reached over 1.3 million developers on paid plans and issued licences to more than 50,000 organisations in under two years - a clear indicator of rapid adoption [25]. Monitoring Cost per Prompt (usually between €0.02 and €0.10) is another way to manage expenses as usage scales.
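Adoption and cost metrics like these reduce to simple ratios. As a rough sketch, with every figure below invented for illustration:

```python
# Illustrative adoption and cost calculations; the inputs are made-up
# examples, not figures from the article.

def active_ai_users_pct(monthly_active_users: int, total_employees: int) -> float:
    """Share of employees who actively used an AI tool this month, in %."""
    return 100 * monthly_active_users / total_employees

def cost_per_prompt(total_monthly_cost_eur: float, prompts_served: int) -> float:
    """Average spend per prompt; the text cites €0.02-€0.10 as typical."""
    return total_monthly_cost_eur / prompts_served

# Example: 540 of 1,200 staff active; €4,000/month serving 80,000 prompts.
print(active_ai_users_pct(540, 1200))          # 45.0 (% of staff)
print(round(cost_per_prompt(4000, 80000), 3))  # 0.05 (€ per prompt)
```

Tracking these as ratios rather than raw counts keeps them comparable as headcount and usage grow.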
Other revealing metrics include revenue per visit for e-commerce or time saved for internal processes [23][26]. Organisations that redefine their KPIs with AI are 90% more likely to experience financial gains [27]. As Phil Gilbert, former head of business transformation at IBM, points out:
"We've slipped back into the old 'butts-in-seats' metric... Nobody is asking: how is AI helping the team generate better outcomes?" [27]
Once you’ve identified the right metrics, integrate continuous feedback to ensure ongoing improvements.
Continuous Improvement Frameworks
Simply measuring metrics isn’t enough - constant refinement is what drives long-term success. Create a virtuous cycle by monitoring model outputs, gathering feedback (via human input or automated systems), and incorporating that feedback into future updates [26]. Adding thumbs up/thumbs down feedback tools within AI systems can highlight underperforming features and steer development [23][25].
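As a minimal sketch of how such thumbs up/down events might be aggregated to surface underperforming features: the minimum-vote threshold and 60% approval floor below are assumptions chosen for illustration, not values from the article.

```python
# Aggregate per-feature thumbs feedback and flag low performers.
# The min_votes and approval_floor values are illustrative assumptions.
from collections import defaultdict

def flag_underperformers(events, min_votes=20, approval_floor=0.6):
    """events: iterable of (feature_name, is_thumbs_up) tuples.
    Returns features with enough votes but a low approval rate."""
    ups = defaultdict(int)
    totals = defaultdict(int)
    for feature, thumbs_up in events:
        totals[feature] += 1
        ups[feature] += int(thumbs_up)
    return sorted(
        feature for feature, n in totals.items()
        if n >= min_votes and ups[feature] / n < approval_floor
    )

# Example: "summarise" is well received, "autodraft" is not.
events = ([("summarise", True)] * 18 + [("summarise", False)] * 2
          + [("autodraft", True)] * 8 + [("autodraft", False)] * 14)
print(flag_underperformers(events))  # ['autodraft']
```

The minimum-vote threshold matters: without it, a feature with one negative vote would be flagged on no real evidence.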
For generative AI, use judge models (auto-raters) to evaluate creativity, accuracy, and relevance. These tools should be calibrated alongside human raters to maintain quality standards, tracking aspects like coherence, fluency, safety, and groundedness [23].
Start by recording baseline performance metrics to validate efficiency gains [27]. Use time-to-proficiency to measure the effectiveness of employee training [25]. If usage drops after an initial surge, it often signals performance issues rather than a lack of awareness [23].
Barry O'Reilly, executive coach and co-founder of Nobody Studios, summarised this well after reviewing over 300 public generative-AI deployments and conducting 150+ executive interviews:
"Transformation isn't automation. It's not efficiency gains alone... It's a capability change, a mindset shift and a business model evolution." [27]
To guide these efforts, adopt the "Faster, More Frequent, Cheaper, Better" framework. This approach measures shorter feedback loops, more frequent experimentation, reduced learning costs, and better decision-making [27]. Organisations typically evolve through four maturity stages - Experimentation (<20% adoption), Early Adoption (20–50%), Integration (50–75%), and Optimisation (75%+) - so it’s important to set realistic goals based on your current stage [25].
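Those four maturity stages map naturally onto adoption thresholds. This sketch treats the lower bound of each band as inclusive, which is an assumption, since the source does not specify how the boundaries are handled:

```python
# Map monthly AI adoption share to the four maturity stages named in the
# text; inclusive lower bounds are an assumption for illustration.

def maturity_stage(adoption_pct: float) -> str:
    if adoption_pct < 20:
        return "Experimentation"
    if adoption_pct < 50:
        return "Early Adoption"
    if adoption_pct < 75:
        return "Integration"
    return "Optimisation"

print(maturity_stage(12))  # Experimentation
print(maturity_stage(62))  # Integration
```

Knowing which band you sit in keeps targets honest: an organisation at 15% adoption should be judged on experiment throughput, not on optimisation-stage ROI.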
Conclusion
AI isn't just about technology - it represents a shift in how organisations and cultures operate. As Alex Milovanovich from MIT Sloan aptly states:
"Success lies not in asking what AI can do instead of humans, but in discovering what humans and AI can achieve together that neither could accomplish alone." [1]
To thrive in this evolving landscape, businesses need a well-defined AI strategy, cross-functional collaboration, and a commitment to testing and scaling their initiatives. Interestingly, most leaders identify cultural hurdles, not technological barriers, as the biggest challenge. This highlights the importance of transforming the workforce to adapt to AI-driven changes.
Building partnerships and fostering continuous learning also play a crucial role. Engaging with external innovators, tracking meaningful outcomes, and iterating based on practical feedback can set successful initiatives apart from the 95% of AI pilots that fail [2]. Jean-Stephane Payraudeau, Managing Partner at IBM, puts it succinctly:
"The question is not whether you adopt AI, but whether you're ready to lead the transformation it demands." [2]
For those looking to take decisive steps, the RAISE Summit, happening on 8–9 July 2026 at the Carrousel du Louvre in Paris, offers an unparalleled opportunity. With over 9,000 participants - 80% of whom are C-level executives, founders, or investors - the event is a platform to refine strategies, build partnerships, and turn AI ambitions into actionable outcomes [17][20]. Eric Schmidt, former CEO of Google, describes it as "the fastest-growing AI tech conference in Europe, and perhaps in history" [17].
Now is the time to act. By aligning with a clear strategy, fostering teamwork, and embracing disciplined experimentation, you can convert AI potential into tangible results. Join the global AI community and lead the charge in shaping the future of innovation.
FAQs
What should our first AI use case be?
When introducing AI into your organisation, it's smart to begin with areas where it can provide quick, tangible results and support your overall goals. Experts often suggest starting with tasks that are repetitive or heavily reliant on data, like improving customer service workflows or streamlining supply chain operations. Choose a clear, impactful project that can showcase measurable ROI. This approach not only proves AI's potential but also sets the stage for expanding its use across the organisation.
Do we need a Chief AI Officer to scale AI?
The decision to appoint a Chief AI Officer (CAIO) hinges on your organisation's level of AI development, strategic goals, and the complexity of its AI initiatives. For companies with advanced AI systems or centralised approaches, a CAIO can help boost ROI, simplify governance, and maximise value at scale. On the other hand, businesses in the early stages of adopting AI might find it more practical to distribute AI responsibilities across existing roles or teams.
The key is to align your leadership structure with your AI objectives. This ensures proper oversight, effective scaling, and a seamless connection between AI efforts and overarching business goals.
How do we prove AI ROI in euros (€)?
Proving the return on investment (ROI) of AI in euros (€) calls for a clear and structured approach. One effective method is using frameworks like IDC’s AI Business Value Framework, which helps quantify AI's benefits in areas such as revenue growth, operational efficiency, and new innovations.
Alternatively, businesses can focus on directly connecting AI initiatives to measurable outcomes. For instance, you could track cost savings, increased revenue, or reduced risks. The key is to ensure these metrics align closely with your company’s overarching goals, making it easier to demonstrate the tangible value AI brings to the table.