13 May 2025 • 9 minute read

Hey Siri, who’s driving now?

I enjoyed David Perell’s recent interview with Tyler Cowen.

It’s all about the practical implications of AI for writing. Even if you do have a pathological aversion to AI, I recommend listening to two people taking the time to understand and share what this technology might mean. In Cowen’s words…

“If you want to make progress on thinking about the very big questions, simply using it, experimenting, seeing what works and fails, will get you much further than sitting on your duff and rereading Heidegger.”

This post isn’t about writing, but it was prompted by something I heard in that interview. It comes at around the 15-minute mark, when Cowen briefly describes how sophisticated he suspects AIs will become.

“Do you have an understanding of how much better AIs will be as they evolve their own markets, their own institutions of science inquiry, their own ways of grading each other, self-correction, dealing with each other, and become this republic of science? The way humans did it, how much did it advance human science or literary criticism to build those institutions? Immensely. That's where most of the value add is. So AIs, I believe, will do that.”

This is pretty bold and certainly feels a world apart from how most people use AI today. To me, Cowen is describing an ecosystem where humans and AIs work as peers. But these AIs are not just responding to prompts. They’re acting autonomously, collaborating with each other and shaping progress in their own right. At this point, AI becomes as much a sociological story as a technological one.

On a personal level, it made me realise that just a few months ago, I would have scoffed at the plausibility of this vision. It certainly would have felt more like speculative fiction than science. But honestly, the more time I spend with this stuff, the more I’m beginning to see this as a very likely future. Perhaps even 2-3 years away.

I started wondering what has changed for this to feel like a realistic possibility. Have I just been overindulging in the Kool-Aid (again), or is there something more going on? Probably both. But either way, I can now see a plausible path from how we use AI today to something that looks a lot more like Cowen’s vision.

So I mapped it out and ended up with a six-stage journey.

Stage 1: AI as a Tool

In this first stage, AI is treated as a utility or tool we use to complete well-defined tasks. It’s reactive, not proactive and it works best when given narrow instructions.

Most people are still experimenting, enjoying the novelty and surprise of testing the system’s boundaries and getting a feel for what it can and can’t do. The AI feels clever, a little like watching a magic trick, but not yet collaborative.

There’s no major ethical or sociological impact yet. Instead, AI is seen as a productivity boost or novelty, not something that reshapes workflows, institutions or culture.

Stage 2: AI as an Assistant

In this second stage, AI starts to feel more capable and responsive to the needs and preferences of the user. It still relies on clear direction (from its owner) but can now track tone, adapt to context and offer useful suggestions.

For example, when asked to rewrite an email in a more conversational voice, it may also suggest removing a redundant sentence and clarifying a key idea. And, inevitably, it will throw in a flattering comment to give you a little dopamine boost. The interaction has become more fluid, and the AI begins to feel like a reliable helper rather than just a clever trick.

There's a subtle shift here. People are starting to trust AI with a bit more responsibility and their expectations are growing. But its influence remains confined to individual tasks and productivity gains, rather than shaping how people collaborate, make decisions or structure their work.

Stage 3: AI as a Collaborator

In this third stage, AI begins to feel less like a tool or assistant and more like a peer. The interaction becomes more dynamic and reciprocal, with the AI not just responding to instructions but more actively shaping the work.

For example, it might question a brief, challenge your assumptions or suggest a completely different angle you hadn’t considered. Now you’re not just using AI to execute your thinking, you’re working with it to shape and develop new ideas.

This is the first moment where the AI starts to feel like a creative partner and, in doing so, starts to challenge some pretty deeply rooted ideas about authorship, judgement and originality.

Stage 4: AI as a Delegate

At this stage, the relationship shifts again. Instead of working side-by-side with AI, we begin to hand over whole tasks. As a user, you might provide a high-level brief and then trust the AI to break it down, plan the steps and come back with a result.

ChatGPT’s Deep Research is a practical example of this. It can investigate a topic, find relevant material, digest the key ideas and return with a structured response, all while working in the background. The AI stops being a collaborator 'in the same room' and becomes something you can send off to work on your behalf.
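To make the shape of this concrete, here’s a minimal sketch of the delegate pattern: a brief goes in, the AI plans its own steps, works through them and comes back with a structured result. It’s illustrative only. The `llm` function is a placeholder for whichever model call you actually use, and real systems like Deep Research layer browsing, retrieval and self-checking on top of a loop like this.

```python
import json

def llm(prompt: str) -> str:
    """Placeholder for a call to whatever model or API you use."""
    raise NotImplementedError

def delegate(brief: str) -> str:
    # 1. The AI breaks the high-level brief into its own plan.
    steps = json.loads(llm(
        f'Break this brief into a JSON array of concrete steps: "{brief}"'
    ))
    # 2. It works through the plan in the background, step by step.
    findings = [llm(f"Carry out this step and report what you find: {step}")
                for step in steps]
    # 3. It returns one structured result for the human to review.
    return llm("Synthesise these findings into a structured response:\n\n"
               + "\n\n".join(findings))
```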

This act of delegation reflects a growing confidence in the AI’s autonomy, not just in generating ideas but in responding to a brief and managing a process.

While this mode of working is still in its early days and the tasks remain relatively constrained, it quietly redefines our role once again. We’re no longer just creators or co-creators and instead become project leads, brief-givers and reviewers of work we didn’t directly create.

Stage 5: AI as an Orchestrator

Things get interesting here. By stage 5, we stop simply giving AI tasks to complete and instead challenge it with a broader goal or outcome. From there, its job is to figure out how to reach that goal, plan the necessary steps, chain actions together and then, if required, enlist the help of other specialist AIs to get the job done.

I’ve not experienced anything like this yet, but perhaps we’re starting to glimpse it with open-source projects like AutoGPT. These systems show how an AI can take a big, broad objective, break it down into tasks and orchestrate a plan by coordinating with other AIs to carry it out. One agent might research a topic, another draft a report, while a third checks progress and decides what to do next.
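As a rough illustration of that researcher, drafter and reviewer loop, here’s a minimal sketch. Everything in it is an assumption for the sake of the example: `llm(role, prompt)` stands in for a real model call routed to an agent playing that role, and systems like AutoGPT add memory, tool use and dynamic task queues on top of something like this.

```python
def llm(role: str, prompt: str) -> str:
    """Placeholder: send the prompt to a model instructed to play `role`."""
    raise NotImplementedError

def orchestrate(goal: str, max_rounds: int = 5) -> str:
    draft = ""
    for _ in range(max_rounds):
        # A researcher agent gathers whatever the goal still lacks.
        notes = llm("researcher",
                    f"Goal: {goal}\nDraft so far:\n{draft}\nResearch what's missing.")
        # A drafter agent folds the new material into the working draft.
        draft = llm("drafter",
                    f"Goal: {goal}\nNotes:\n{notes}\nRevise this draft:\n{draft}")
        # A reviewer agent checks progress and decides what happens next.
        verdict = llm("reviewer",
                      f"Goal: {goal}\nDraft:\n{draft}\nReply DONE or list the gaps.")
        if verdict.strip().upper().startswith("DONE"):
            break
    return draft
```

Even in this toy version, the `max_rounds` cap is doing real work: any orchestrator needs some way to stop its agents looping forever, whether that’s a budget, a timeout or a human checkpoint.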

It’s still early days for this kind of approach, but it does mark the emergence of AIs that can act independently, reason across steps and collaborate with one another.

For humans, it invites a shift from prompt-and-response thinking toward goal-setting and orchestration. The opportunity now is to frame objectives in ways that make the most of the networked intelligence these systems offer.

Stage 6: AI as Peer

At stage 6, the landscape has changed pretty fundamentally. AIs are no longer just assistants, collaborators or even agents carrying out our instructions. They are now independent actors in a broader ecosystem where humans and AIs operate as peers, each contributing to shared goals within distributed systems.

I’m not going to pretend I have a clear idea of what this will look like when we get there. The social implications of humans interacting with AIs in shared spaces are complex enough. And I’m not sure I’m ready for my AI to be hanging out with yours just yet.

But this piece isn’t about forecasting a fully-formed future. It’s about connecting the dots between how we work with AIs today and the kind of world Tyler Cowen hinted at. One where AIs operate with more independence, take initiative and act as peers to us and to each other.

By breaking the journey into stages, from tool to assistant, collaborator, delegate, orchestrator and eventually peer, I hope it at least shows how that kind of future might not be as implausible as it first sounds.

What this means for us

If anything, my takeaway from this exercise is that we need to start considering the implications of this world sooner rather than later. Things are moving fast and if even some of these stages turn out to be only directionally right, we’ll be facing some pretty fundamental questions about trust, responsibility, authorship and agency.

These are more than design challenges. They’re sociological ones that ask how we live, work and make sense of the world alongside increasingly independent systems.

In our next article, we’ll explore one possible response to these challenges. A Human-AI Charter: a co-created agreement between a person and an AI that helps define the boundaries of their relationship and build trust within increasingly autonomous systems.


Bonus! Here’s a diagram of the six stages. We’ve been using it as a foresight tool to help teams map where they are and explore how agentic AI might shape their product or strategy in the months ahead. We’d love to hear what you find useful, how you use it or where you think we can improve it.

Six stages on the path to Human–AI collaboration

Download a hi-res version here
