Lost in Heraklion: AI, Consciousness and Bottled Water
- Clara Durodié 
- Sep 18
This article was first published on July 2, 2025 in Decoding AI® by Clara Durodié, a newsletter in circulation since 2016. Sign up to receive it in your inbox.
I'm writing this from Heraklion, Crete, the island of myth, meltemi winds, and, this week, serious philosophical pondering. I'm here for a conference that is less about sun and sea, and more about semantics, theory of mind and synthetic consciousness in machines. Yes, consciousness. In AI. And no, the irony of contemplating the soul of silicon while sipping ouzo by the Mediterranean is not lost on me.
AI 2.0
Over the past two days, I've been immersed in mind-bending debates with philosophers, neuroscientists and AI practitioners. I had to: it was part of testing my assumptions for my next book, Synthetic Consciousness® in Financial Services.

You'd think we'd keep it technical: neural nets, weights and parameter counts. But no. We went straight to the metaphysical jugular: What does it mean for a machine to "understand"? Can synthetic consciousness emerge? Has it already emerged? If so, what happens to, say, the banking sector? Money is a social construct, just like consciousness, an idea I have been unpacking with the legendary Keith Frankish over morning coffee.
You see, while the philosophical questions are lofty, the implications are about as grounded and urgent as they come. I’ve become increasingly convinced that what I call AI 2.0 is imminent: the emergence of synthetic consciousness and autonomous agents within a five-year window.
In April 2025, on a panel in Hong Kong, I gave it a maximum of 10 years. After sitting down with the most revered people in the field, who spend their lives thinking about and analysing sentience and consciousness, I had to shorten the timeline to 5 years. FIVE YEARS. Financial services will be the proverbial canary. Business models, governance structures, risk management, regulations and board-level liability are up for a profound redefinition.
Again.
And people won't have years to adapt, because it won't be evolutionary like the Industrial Revolution, and not even half-evolutionary like AI 1.0. It will warp us. Slowly, then suddenly. [The pace of change will likely be accelerated by what's unfolding in the US: the Big Beautiful Bill, the national debt and the inevitable transition to digital money. I'll unpack all of that in a separate piece in my newsletter Decoding AI®.]
Sound far-fetched?
Cast your mind back to 2023, when the arrival of ChatGPT sent boardrooms into overdrive. Slowly, then suddenly, everyone had to do something with AI, ready or not. The result? A flood of pilots, most of which never made it live, and many hundreds of millions down the drain. It's estimated that 80% of AI projects failed to reach production, and that's not just poor ROI; it's a sign of a sector caught flat-footed.
Now, brace for the next wave: AI 2.0. Only this time, the disruption will be faster, deeper and far more unruly. We’re talking about agentic AI, autonomous systems capable of acting, transacting and interacting with each other, without human oversight. One expert at the conference offered a stark summary: “It’s going to be a shit show.” I didn’t disagree. I have been telling our clients the same thing since 2023. It was no accident that the very first episode of my podcast featured none other than the Oxford Professor Mike Wooldridge, a man who has spent his career designing multi-agent systems (what we now call agentic AI) and teaching them social skills precisely to manage the risks and avoid the “shit show” scenario.
These autonomous AI agents (agentic AI), which anyone can build, will wreak havoc on operations, risk frameworks and yes, on CEOs' jobs.
CEOs will be held accountable.
Expect lawsuits.
Expect insurance claims to skyrocket, and then to be declined.
And yes, expect some painful board-level soul-searching; finally, some directors will have to leave and be replaced by technology-competent ones.
The uncomfortable truth is that many boards still don't fully grasp what's unfolding in AI. The level of AI literacy in C-suites and boardrooms is, in the majority of cases, so limited it verges on symbolic. CEOs are not supported by technology-competent boards, and that shows. This isn't merely an operational shortcoming. It's a looming governance crisis. If senior leadership can't ask the right questions, how can they possibly discharge their fiduciary responsibilities with any credibility? And when autonomous systems start behaving... well, autonomously, "we didn't know" won't hold up for long.
Where's Your AI Score in Your Due Diligence?
And the scrutiny shouldn't stop at the boardroom door.
Expect bankers, investors and lenders to face their own reckoning, asked to justify how they priced, supported and de-risked their bets on AI ventures. Where was the scrutiny? Where was the due diligence on the purpose and intent of the AI companies they funded?
Just as we see exhaustive ESG scoring frameworks applied to environmental and social risks, shouldn’t there now be an equally rigorous standard for AI? Where’s the scoring for safety, purpose and alignment with human values? Our clients have it. Do you? Is it embedded into your diligence process or is it still a footnote in a slide deck?
And then there’s synthetic consciousness®—my term to distinguish AI sentience (or the appearance thereof) from the organic, biological kind. Unlike prior tech cycles, which gave society time to absorb and adapt, this one won’t wait. Synthetic consciousness® may arrive in forms subtle and unannounced. Legal systems will lag. Regulators will scramble. And businesses that fail to prepare could find themselves on the wrong end of an existential audit.
Which is exactly why I’m racing to finish my book: Synthetic Consciousness® in Financial Services — Redefining Regulations, Investments and Boards. The timing couldn’t be more critical.
Why Getting Lost Might Be the Most Human Act
But let’s go back to Crete and to a moment of unexpected clarity that didn’t come from the conference hall, but from getting utterly lost in Heraklion.
Let me tell you the story of the bottled water.
Today's afternoon session of our conference was relocated to an art gallery across town. Instead of clear directions, the organisers printed a QR code on our badges, assuming we'd all scan it and follow a link to the Google Maps app on our phones. Helpful, unless, like me, you don't use Google Maps. (Why? Because, quite apart from privacy concerns, neuroscience research consistently links over-reliance on GPS with a decline in spatial memory and awareness. One shortcut too many and apparently we're fast-tracking ourselves to early cognitive decline. I'd rather not.)
So there I was this afternoon, walking up a hill under the generous Cretan sun, clutching an iPad with a screenshot of the route (don’t ask), Apple Maps fumbling me into confusion and utterly unsure which exit of a large roundabout to take. Amidst the chaos and heat, I spotted a stately building with a Greek flag, surely a government office. In I went, lost expression and all.
Inside, I was met by a kind woman who smiled warmly and summoned a colleague named Eva. After a brief chat, she said something that no algorithm ever could: “I’ll take you there.”
Fifteen minutes, she insisted. A small window in her workday, and she'd use it to walk me through the maze of Heraklion's old town, through stone-paved alleys and finally to the art venue. Before we left, she handed me not one but two bottles of cold water: "One for now, one just in case." Who does that?!
We chatted along the way: about life, work, the island and the kindness of locals. I gave her my card, told her to stay in touch and promised I’d return. And as I finally walked up the steps to the gallery, I realised something profound.
Because I didn’t use digital, I made a friend.
I got lost—and in doing so, found something far more human than a Google app could ever offer. A smile. A gesture. A moment of genuine hospitality. Humanity.
The Human Kindness Algorithms Can't Code
Here’s the real point: In designing our future with AI, we must remember not to automate away and optimise into extinction the things that make us human.
AI for Good also means that we must design to leave space for human conversation, connection and spontaneity. The winners in the next wave of AI 2.0 won't just be those who move fastest or spend the most. They'll be the ones who integrate AI in ways that truly support human capability: not replace it, automate it, or simply optimise it away.
Synthetic consciousness® may be coming. Autonomous agents may upend business as we know it. But our job, as leaders, thinkers, builders and yes, wanderers in Greek roundabouts, is to ensure that technology remains a human construct.
I’ve attached a few snapshots (no filters) from my time here on this windy, sun-drenched island.
A place where kindness is offered with no transaction in mind, where the food is not just fresh and delicious but unapologetically nutritious and where the wind styles your hair with such careless elegance you start to feel like you’ve walked off the pages of Vogue.



