The Futility of AI Rankings in Banking: Why C-Suite Should Look Beyond the Numbers
- Clara Durodié
- Jun 6
AI rankings in banking have become a popular tool, purportedly measuring which institutions lead in AI adoption and innovation. However, these rankings are fundamentally flawed and ultimately a futile basis for the strategic decisions of C-suite executives and board directors.
At their core, these rankings rely on an “outside-in” assessment methodology. They aggregate publicly available data—from company disclosures, financial reporting, and a variety of third-party sources—and then combine this with input from over 50 subject matter experts across banking, technology, and benchmarking domains. Each bank is assessed on around 90 individual criteria.
The critical issue is that these rankings reflect only what banks choose to disclose or what external observers can measure. They do not capture true strategic nuance, cultural readiness, or the often invisible operational transformations underway inside an institution. As such, they risk presenting a skewed or incomplete picture.
Professional Integrity
We had the data and the option to build such indexes ourselves. Yet we deliberately chose not to. The reason is simple: the market is already saturated with self-validating, inconsequential reports that add little real value to decision-makers. Rather than contribute to the noise, we refused to put our name to work that risks misleading C-suite executives with superficial metrics.
There is also the matter of professional integrity, something we take seriously at Cognitive Finance Group. We chose to uphold it then, and I remain deeply committed to continuing to protect our reputation and standing in the market.
AI is Inherently Contextual and Evolving
AI initiatives are not a one-size-fits-all proposition. The goals, capabilities, and challenges of AI vary dramatically between institutions. For example, a retail bank focusing on customer engagement through AI chatbots has a very different trajectory than an investment bank using AI for algorithmic trading or risk analytics. Attempting to distil these varied efforts into a single score or ranking is an oversimplification that can mislead leaders.
A 2023 McKinsey report highlights that AI maturity is deeply tied to sector-specific context and evolving capabilities, making static external assessments unreliable. The dynamism of AI deployment means what looks like leadership today may quickly become outdated as new applications and innovations emerge.
Public Data Misses Critical Internal Factors
Rankings rely on data banks willingly share or that can be scraped from external sources. Yet many critical factors are inherently internal—such as the organisation’s culture, talent quality, data governance frameworks, and ethical AI practices. For example, JPMorgan Chase’s secret sauce in AI lies not just in the number of AI projects, but in deep integration with their compliance and risk teams—something a public ranking cannot see.
Research published in MIT Sloan Management Review found that internal organisational factors contribute more to AI success than technology alone, yet these factors are invisible to external analysts. This lack of insight into internal maturity reduces rankings to a superficial popularity contest.
Focus on Outputs Over Outcomes
Most indexes track measurable outputs like the number of AI initiatives, investments, or patents filed. However, quantity does not equate to quality or impact. For instance, Wells Fargo may boast numerous AI projects, but what matters to shareholders and customers is the actual improvement in operational efficiency or customer satisfaction those projects deliver.
Research by PwC shows that less than 20% of AI investments in financial services have delivered measurable ROI to date, underscoring the gap between activity and outcomes. Rankings that highlight outputs alone risk creating a false impression of AI maturity.
Rankings Encourage Gaming and Signal Seeking
Because rankings become public benchmarks, banks often feel pressure to optimise their scores, sometimes through superficial means. This might mean prioritising initiatives that boost visibility rather than those that address long-term strategic challenges. For example, a bank might accelerate AI pilot announcements without fully embedding the technology operationally—actions designed to impress analysts, not necessarily to create value.
This phenomenon is well documented in the ESG space, where firms “greenwash” to improve scores. Similar dynamics apply to AI rankings, risking distorted incentives and misaligned priorities.
The Lag Between Data and Reality
There is an inevitable delay between when data is collected, analysed, and published. In an area like AI, which evolves at breakneck speed, this latency means rankings may already be obsolete by the time executives see them. A bank heavily investing in AI talent and infrastructure today will not be reflected in rankings based on last year’s filings or announcements.
The 2022 Gartner CIO survey emphasises the need for real-time intelligence to inform technology strategies—a requirement static rankings cannot fulfil.
Lack of Standardisation and Comparability
No universal standard exists for measuring AI maturity or impact in banking. Different rankings use varying criteria, methodologies, and weighting systems, leading to contradictory results. One index might value investment volume, another the number of patents, while a third focuses on leadership perception.
This inconsistency confuses executives rather than informing them. A Deloitte study found that 65% of financial services leaders are sceptical about the usefulness of AI benchmarking because of these disparities.
The Danger of Herd Mentality in the C-Suite
Finally, reliance on these rankings can foster herd mentality. Leaders may feel pressured to chase what competitors appear to be doing, even if it conflicts with their institution’s unique strategy or capabilities. This can lead to misallocated resources, with leaders chasing vanity metrics instead of sustainable competitive advantage.
In contrast, the most successful AI strategies are those that align tightly with an organisation’s business goals and risk appetite—a deeply internal process not captured by external indexes.
AI rankings in banking are little more than marketing tools dressed as strategic benchmarks. They simplify complex, context-dependent initiatives into reductive scores that fail to capture internal realities, operational outcomes, and the strategic nuances critical to true AI leadership.
For C-suite executives and board directors, the focus must shift away from superficial rankings and towards meaningful internal metrics, qualitative insights, and continuous engagement with AI’s evolving impact. Only then can banks unlock AI’s transformative potential rather than be misled by external scores designed for show.