Postcards from the AI Rollout You Are Already Inside Of
A small experiment. I gathered four perspectives written from May 2027, twelve months from today, looking back at the year that just passed. Each is from a different role in the AI rollout story. A non-executive director. A venture investor. A head of transformation. An operational lead. The postcards disagree with each other in places. That is the point.
I have included a fifth, shorter voice at the end. A skeptic who argues the whole frame might be wrong.
If you read this and recognise one of the roles, the easy thing is to nod. The harder thing is to ask: which of these mistakes am I making right now, and what would it take to find out before May 2027 arrives in real time?
Tim Robinson, 13 May 2026
From a non-executive director, three boards, two FTSE 350 and one PE-owned
12 May 2027
The thing I am most embarrassed about, looking back, is how comfortable the August 2026 deadline made me feel.
Three boards. Same AI agenda on all of them. A quarterly CIO update. A green compliance tracker. A budget line approved. Reassurance that we would be ready.
We were not governing AI. We were tracking toward a date. When the Digital Omnibus deal in May 2026 pushed the high-risk obligations to December 2027, the date disappeared and so did the conversation. I sat through one September meeting where AI did not appear on the agenda at all. None of us asked why.
The fiduciary question we should have been asking was simpler than the compliance question. Do we know what AI is doing in our name, and can we stop it if it goes wrong? In twenty years as a non-executive director I have learned that any question that begins with “do we know” usually does not have a good answer. This one was worse than usual.
Shadow AI was already running. Sixty-eight percent of staff using tools we had not approved. Sixty-two percent of agentic deployments outside formal governance. None of it in the board pack.
What we should have done with the time the deferral bought us was the unglamorous thing. An inventory. A kill-switch. A named executive with the authority to halt deployments and the obligation to report when she did. Not a maturity model. Not another committee.
We did not. By Christmas the shadow AI was production-critical and the kill-switch was politically impossible.
From a partner at a London growth-stage venture firm
12 May 2027
I priced the multiplier and ignored the denominator. Cleanest way to say it.
Through the second half of 2025 and most of 2026 I diligenced portfolio companies with the analytical framework I had inherited from a decade of SaaS investing. Seat counts. Activation rates. Pilot velocity. ARR expansion. It had served me well. So I imported it whole into AI.
The organisations we were funding were not, structurally, SaaS businesses absorbing a SaaS product. They were industrial-scale transformations and the workforce was the rate-limiting variable. The product was a multiplier on a denominator I never measured.
The honest version of what happened next is uglier than the post-hoc analysis usually allows. The portfolio companies fed me deployment metrics. I knew the metrics were not transformation evidence. I accepted them anyway because the alternative was uncomfortable diligence conversations about valuations I had already marked up, and because fund-cycle pressure made the cost of looking hard immediate and the cost of not looking hard deferred to a future vintage. Motivated incuriosity, institutionalised.
OpenAI’s launch of a deployment-focused enterprise unit in May 2026, backed by more than four billion dollars, was the market’s verdict on this. Anthropic announced its own venture the same day. The labs had concluded that selling capability without owning implementation was leaving value on the table, and they moved to fill the gap. They were right. We had been funding half a transformation.
What I should have asked for in 2026 was workflow redesign before deployment and one named owner accountable for outcomes. I asked for pilot counts. We marked the deals up on those pilot counts. I will be writing my own apology letters out of a different chequebook for a while.
From the former head of transformation at a FTSE 250 (left the role December 2026)
12 May 2027
I want to be honest about the part that gets lost when this gets told as a transformation-leader failure.
I knew the right sequence. I had read McKinsey. Redesign the workflow first, then choose the tool. I said it in steering meetings. I wrote it in plans. I had it in slide three of the same deck I presented to the executive committee in April, July, and October. By the third presentation I was using the same words and the same room was making the same nods and we were doing the same things we had always done. The political economy of transformation in mid-2026 was that nobody had the authority to do it properly, including the person whose job title said they did.
What I am more genuinely embarrassed about is the bundled-tool default. We did not select Copilot. We activated it. Procurement had already happened. The path of least resistance was to switch on what we were already paying for. I let it happen because designing a tool-selection process at the workflow level would have taken three months I did not have. We optimised for platform integration. We did not optimise for use-case fit. I knew I was doing it. I did it anyway.
The agentic surprise hit by Q3 2026. Forty percent of our enterprise applications had AI agents embedded by year-end, up from less than five percent in early 2025. The generative AI governance framework I had spent a year building was already inadequate. The silicon ceiling I had been told to expect for frontline workers turned out to be an incentive failure. Only thirteen percent of staff were rewarded for redesigning their work with AI. We had given them tools without giving them a reason to use them.
What I should have demanded, and did not have the political weight to win, was a single named executive with P&L accountability for the AI programme, sitting above the business unit heads, with the authority to enforce sequencing.
I left in December. The job did not need a head of transformation. It needed an empowered one.
From the operations director of a regulated financial services firm
12 May 2027
The thing that haunts me from mid-2026 is that we connected agentic AI to live customer data without finishing the incident-response plan.
If you had said it like that in a Tuesday standup nobody would have agreed to it. But that is what we did. Three in four organisations in our sector had given agentic AI access to data and processes, in pilot or production. One in five had a tested incident-response plan for AI failure. We were in the four out of five without one. I knew it. I raised it. I was told that response infrastructure would be Q4 work, after the rollout had demonstrated value.
The rollout demonstrated something else first.
The C-suite alignment problem was the bit nobody else seemed to see clearly. Fifty-four percent of COOs in our peer group were worried about regulatory and compliance exposure from agentic AI. Only twenty percent of CIOs and CTOs felt the same way. We were looking at different versions of the same programme. The technologists saw the architecture. The operators saw the exposure. With that gap, accountability was structurally impossible because nobody agreed on what we were even doing.
What I should have done, and did not push hard enough for, was an inventory. Every model. Every feature. Every automation. With a named owner, a defined success outcome, and a kill criterion. We tried to build that retrospectively in early 2027, after the first incident. Months in, it is still incomplete because the people who built the original deployments have already moved on or left.
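For concreteness, this is roughly the shape of one inventory entry. A minimal sketch, with illustrative field names rather than any regulatory or vendor schema:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AIDeploymentRecord:
    """One line of the inventory: every model, feature, and automation,
    recorded before go-live rather than reconstructed after an incident.
    Field names are illustrative, not a standard schema."""
    name: str                 # e.g. "claims-triage-agent" (hypothetical)
    owner: str                # a named individual, not a team alias
    success_outcome: str      # the measurable result it exists to deliver
    kill_criterion: str       # the condition under which it is switched off
    data_access: list[str]    # live systems and datasets it can touch
    in_production: bool
    last_reviewed: Optional[date] = None

def needs_review(record: AIDeploymentRecord, max_age_days: int = 90) -> bool:
    """Flag records that have never been reviewed, or not recently enough."""
    if record.last_reviewed is None:
        return True
    return (date.today() - record.last_reviewed).days > max_age_days
```

The point is not the code. It is that every field has to be filled in by a named human before go-live, not reconstructed from commit history after the people who built the thing have left.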
The hallucination question I now ask first, and which we treated as a quirk in 2026, is what happens when this system is confidently wrong about something a human will act on. The answer was usually unacceptable. We deployed anyway because the pressure to show progress overrode the discipline to contain risk. I have stopped using the word “risk.” I use “exposure” because risk sounds like something you can model.
From an independent AI policy researcher
12 May 2027
A footnote in case all of the above is a comforting story for the wrong audience.
Every postcard you have just read locates the failure inside the organisation. Boards lacked judgment. Investors lacked frameworks. Transformation leads lacked authority. Operational leads lacked containment. The narrative is coherent and it is partly true. It is also, suspiciously, the narrative that benefits the people who can sell its solutions. The labs that pivoted into forward-deployed engineering. The consultancies that sell governance remediation. The academics who publish papers about structural failure. I am one of the last group. I will not pretend the bias is not there.
The harder reading is that the regulatory infrastructure failed first, and the rest of us are absorbing the blame for it. The harmonised standards that were supposed to support the August 2026 deadline did not arrive on time. The AI Act risk tiers had no operationalised definitions. The trilogue on the Omnibus ended without agreement on 28 April. Organisations could not classify what they could not measure, and the measurement frameworks were never provided.
The structural-failure narrative laundered that into a story about brave individual organisations that got it wrong. Some did. Most did not have a fair chance.
If you are reading this in May 2026, the question is not which of the four postcards is closest to your truth. The question is whether you are positioned to ask, of the people upstream of you who escape blame in either telling, what they were doing while you were being held responsible.
What to do with this
I do not know which of these postcards will read as accurate in twelve months. I think the non-executive director is the closest to most rooms I am sitting in right now. I would like the skeptic to be wrong, but the case is not a bad one.
If this was useful, the highest-leverage thing you can do is send the URL to someone you wish would heed it. Boards still tracking compliance calendars. Investors still measuring seat counts. Heads of transformation still being overruled on sequencing. Operational leads still being told that incident response is Q4 work.
You can also do something else. Paste the URL into your AI assistant and ask it one of the questions below. The postcards are designed as a starting point for a conversation, not a finished argument.
For boards: “Act as a board secretary preparing me for the AI governance section of my next board meeting. Generate five questions I should ask the executive team that would expose whether our AI governance is deadline-dependent (tracking external regulatory milestones) or institutional (with our own internal accountability logic). For each question, describe what a strong answer looks like and what a weak answer looks like, so I know how to read the response.”
For investors: “Act as a senior diligence partner reviewing an AI-positioned portfolio company. Draft a five-question diligence script that distinguishes companies measuring depth of AI transformation (workflow redesign, outcome accountability, embedding) from those reporting deployment breadth (seat counts, pilot numbers, licence activations). For each question, give me the answer that would be a green flag and the answer that would be a red flag.”
For heads of transformation: “Act as my chief of staff. Write a one-page memo from me to my CEO arguing that our AI programme needs a single named executive with full P&L accountability, sitting above business unit heads, with authority to enforce workflow-first sequencing. Open with the specific cost of the status quo. Close with three concrete actions my CEO can take this week.”
For operational leads: “Act as an experienced operations director who has lived through a year of AI rollout problems. Help me identify the three operating-model decisions I am currently deferring as future problems that I should actually be solving this quarter. For each, tell me the failure mode I am setting up, the specific decision required, and the smallest defensible first move.”
For anyone: “I am a [your role] at a [size] organisation in [sector]. Audit our current AI rollout for the four most common failure patterns I have seen in the last twelve months: boards substituting regulatory deadlines for fiduciary judgment, investors measuring deployment breadth not transformation depth, transformation leaders deployed without authority, and operations connecting AI to production data without incident-response plans. For each pattern, tell me the question I should ask in my organisation this week to surface whether it is happening to us.”
Tim Robinson, 13 May 2026
How this was made
The postcards were paraphrased from analytical output generated by Delphi, an open-source multi-round AI consensus tool I built. Five expert personas, three rounds of deliberation, adversarial stress-testing, sixty-seven citations, and a structured Decision Canvas with regime-by-regime monitoring signals. Each postcard is informed by one expert's position and written for character.
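For readers who want the mechanics, the loop below is a rough sketch of the multi-round persona-deliberation pattern, not Delphi's actual API. `Persona` and `ask_model` are placeholders for the repo's real abstractions and whatever model client you wire in:

```python
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    brief: str  # the expert's standing perspective and priorities

def ask_model(prompt: str) -> str:
    """Placeholder: swap in a call to whatever LLM client you use."""
    raise NotImplementedError

def deliberate(question: str, personas: list[Persona], rounds: int = 3) -> dict[str, str]:
    """Multi-round deliberation: each round, every persona sees the others'
    latest positions and must challenge them, not just restate its own view."""
    positions: dict[str, str] = {}
    for round_no in range(1, rounds + 1):
        for p in personas:
            others = "\n".join(
                f"{name}: {pos}" for name, pos in positions.items() if name != p.name
            )
            prompt = (
                f"You are {p.name}. {p.brief}\n"
                f"Question: {question}\n"
                f"Round {round_no} of {rounds}. Other experts' current positions:\n"
                f"{others or '(none yet)'}\n"
                "State your position, then stress-test the strongest claim you disagree with."
            )
            positions[p.name] = ask_model(prompt)
    return positions
```

The property that matters is that each round forces every persona to confront the others' current positions. That is where the disagreement between the postcards comes from.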
The companion dossier contains the full analytical workings: cross-examinations between experts, structured uncertainty per claim, counterfactual risk analysis, source bibliography, and a “How to read this” framing that situates the whole thing as scenario analysis rather than forecast. Read the Delphi Scenario Dossier →
Build your own analysis on any strategic question. Delphi is open source under the MIT licence: github.com/AgilistTim/Delphi.