The 4 AI Roles of the Future
Looking forward to a golden age of human innovation
In Part 1 of this blog, we saw how Claude Code, despite its somewhat intimidating front-end, makes it feasible for someone with limited software experience to create compelling and useful software products. I built an AI bookmarking and research app, which I am now using in writing this blog. But what are the wider lessons? If Claude Code represents the state-of-the-art in AI coding assistants, what are the implications for software development going forward? And more broadly, what does this suggest about the way we will interact with AI agents in the future?
In this blog, I argue that as execution costs collapse, we will not be constrained by our ability to build stuff. Instead, I suggest that we will see the emergence of four key human roles that will shape the AI-augmented workplace of the future.
The 100x Economic Advantage
The cost structure of the software industry has been characterised by high up-front costs (software development and infrastructure), but very low marginal costs. Unlike physical products, which have per-unit material costs, there are practically no incremental per-unit costs associated with software. Replication is basically free, so the cost of serving additional customers is largely driven by the cost of hosting and operating the software.
As a result, software and tech companies have been driven to rapidly scale the number of users to cover their high up-front costs. If, however, software can be created by AI, this no longer holds true. For example, the original team that created the Pocket app I tried to re-create was about 20 strong [1], and worked on a new version of the app for about a year. Thanks to Claude Code, I created my app over a couple of weeks for under $100. Of course, Pocket supported over 20 million customers - but the economics of software have truly been turned on their head.
It is now feasible to create curated experiences for a single customer! From a purely cost perspective, you are swapping a £100,000-per-engineer salary for an approximately $50-100/month AI license. This is not an argument about the rights and wrongs of replacing human workers. It is about recognising the potential to reduce the cost of creating software by a factor of one hundred.
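To put rough numbers on this, here is a back-of-the-envelope sketch. The figures are illustrative assumptions drawn from the estimates above (and the post mixes £ and $, as the quoted figures do), not audited costs:

```typescript
// Back-of-the-envelope cost comparison, using the rough figures quoted above.
// All figures are illustrative assumptions, not measured data.
const engineerCostPerYear = 100_000;   // ~£100,000 per engineer per year
const aiLicenseCostPerMonth = 100;     // top end of a $50-100/month AI license
const aiLicenseCostPerYear = aiLicenseCostPerMonth * 12; // 1,200 per year

// One engineer replaced by one license: close to two orders of magnitude.
const singleEngineerRatio = engineerCostPerYear / aiLicenseCostPerYear;

// A Pocket-sized team of ~20 engineers against one license is far larger still.
const teamRatio = (20 * engineerCostPerYear) / aiLicenseCostPerYear;

console.log(`Single engineer: ~${Math.round(singleEngineerRatio)}x`); // ~83x
console.log(`20-person team: ~${Math.round(teamRatio)}x`);            // ~1,667x
```

Even on these crude assumptions, the per-engineer saving is close to the factor of one hundred claimed above, and the team-level comparison is more dramatic still.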
The 10x Engineering Advantage
In my case, Claude Code generated in excess of 11,000 lines of TypeScript code to build Weavify. (Yes, I know the number of lines of code =/= quality or usefulness!) But what of its impact beyond hobbyists? Jensen Huang said that Nvidia uses it “all over” and that “Anthropic made a huge leap in coding and reasoning,” [2] while reports claim that Microsoft engineers are using it in preference to, or together with, its own product, GitHub Copilot. [3] To all intents and purposes, Claude’s Opus 4.5 represents a qualitative shift in the efficacy and reliability of AI code generation. Boris Cherny, the head of Claude Code, states that he uses it for all the code he writes. [4]
This shift is attributable to the way AI agents are now used. Unlike chatbots, they work on your behalf, and come back to you when they need clarification, direction or permission to go to the next stage. They are no longer merely personal assistants. They are instead teams of co-workers. But what is the productivity advantage conferred by such systems?
It is not surprising that the frontier AI labs and Big Tech are at the front of the pack when it comes to using AI coding agents. Returning to Cherny: he estimates that 95% of Claude Code is written by Claude Code - the tool literally building itself. The impact is difficult to quantify, and given that Opus 4.5 was only released in November, there is not much data yet. But to get some sense of scale, a team of 12 engineers was making 60-100 releases a day, including almost one external release a day. [5] The effectiveness can also be seen in Anthropic’s Cowork tool, which was built “entirely” by a small team (around five engineers) with Claude Code in a week and a half. Going from nothing to a releasable preview in less than a standard two-week sprint was previously unthinkable. [5]
This step change is driven by AI coding tools shifting from being useful “auto-complete” tools that offer coding advice on point questions to being able to operate more autonomously. In an Ars Technica article, [6] a software engineer who has worked extensively on the Linux kernel said that he “now expects to tell an agent that ‘this test is failing, debug it and fix it for me.’” These are high-trust activities that have traditionally been very time-consuming. Not only can you ask an agent to do this, but you can also organise several agents to work in parallel, fixing, refactoring and optimising different parts of the code. A review of discussions on Reddit shows that engineers are increasingly confident in shifting to “refactoring large codebases” and letting agents take over “upgrading libraries, fixing bugs, and performance improvements, freeing up time for innovative features.”
What can be achieved in small teams can be more difficult to achieve at scale. In its Q4 2025 financial report, Meta claims that since 2025, development teams have seen output increase by 30%, with power users seeing output increase by 80%. Looking across many tech firms, year-on-year productivity improvements in the order of 20-30% seem typical. But how can teams reach the productivity gains claimed by the Claude Code team?
“Syntax coding is dead” - bridging the human-machine gap
The most obvious takeaway from the latest evolution in AI coding is that the ability to write code syntax is no longer a valuable skill in itself. As Andrej Karpathy, former head of Tesla AI, put it, he now codes in English. Programming languages were always designed to act as a bridge between a programmer's intent and mental model and instructions interpretable by machines. The bridge was always incomplete; humans had to learn a “computer language” to give machines instructions.
Now the bridge is complete. Humans can give instructions in English, and they can be confident of a good outcome. The Ars Technica report quotes a developer: “I still need to be able to read and review code,” he said, “but very little of my typing is actual Rust or whatever language I’m working in.” Coming back to Reddit, some developers are beginning to say that AI now generates between 50% and 90% of their code, and Anthropic’s own report into how AI is transforming work at Anthropic says that work has shifted “70%+ to being a code reviewer/reviser.” [7]
Looking to the future - the new categories of AI work
I am not convinced that the role of humans as reviewers of AI output is durable. I think it is more representative of where we are today in AI maturity. As AI agents become more reliable, I fully expect this role also to be carried out more effectively by AI agents.
So what enduring roles will humans have in this new agentic world? My view is that there will be four key roles that will always have a strong human dimension. Let’s explore them.
1. Humans as Intelligent ‘AI Commissioners’
One thing we can be reasonably certain of is that AIs will never own assets. They will not own monetary resources, nor will they carry accountability in the way that people do. Of course, human workers will take on tasks from AI systems, much as Uber drivers or workers in Amazon fulfilment centres do, but at the end of the day, the AI will not be the manager or ‘boss.’ The key point is that humans will remain accountable to their stakeholders (e.g. shareholders, taxpayers) for the conversion of resources into outcomes.
Humans will remain responsible for “how” AI is used. Consider a software example, where an AI coding agent is tasked to build a highly transactional social network feature in Python. It may then turn out that Python is a poor fit for such workloads and that the service consumes too much cloud compute to be viable. Had it been built in Rust, the service would have been profitable. Yes, the AI agent may have recommended Python, but the decision rests with the person who commissioned the task, the ‘AI Commissioner.’
So while many state that roles are “abruptly pivoting from creation/construction to supervision,” [6] I feel that this shift is more nuanced. A key enduring human role is to task AI systems intelligently, maintaining human accountability for the outcome. In software engineering, this means understanding the technical architecture choices that best match your needs. Are you building for one customer or 100 million customers? What is your business model? What are your per-unit costs? How will you market it? What will be your distribution model?
The Intelligent AI Commissioner role can be applied to all industries. In customer service roles, how do you specify and build systems of AI agents that create a great customer experience? If you create inconsistent, poor experiences, that is not the fault of the AI system, but of the people who commissioned, supervised and validated it. Think of law firms: their rules-based, document-based ways of working are natural territory for LLM disruption. However automated the system is, though, the legal accountability for the advice given will remain with a human. So, whether you are running a traditional law firm, where LLMs help review and draft documents, or building a fully AI-based legal practice, it will always fall to humans to set the parameters of the service being built, and to take legal accountability for the outcome.
2. Humans as ‘Human-AI Orchestrators’
One evening, while building my app, Claude Code got stuck in a rabbit hole. It was trying to debug a problem in the deployed version of the code, and kept to-ing and fro-ing, consuming tokens and failing to fix the problem. It was only when I established (with the help of Claude.ai) a clear workflow (see the Annex in the previous post) and mandated that Claude Code follow it, that we were able to fix the problem.
As I mentioned, much of the literature states that humans should be supervisors of AI. I don’t think that the term ‘supervisor’ does it justice. In this case, I was setting up the rules and the guidelines that I wanted my AI agents to follow when carrying out their tasks. As the person paying the monthly Anthropic bill, I am accountable for that spend. If Claude Code burns through tokens in ways I do not understand, then that is on me. By creating and enforcing a workflow and deployment strategy, I was taking ownership of the problem. I was not simply supervising; I was orchestrating the behaviour of my AI agents.
The orchestrator role becomes critical when coordinating humans and agents within a single workflow. For example, the Claude Code team maintains a workflow document (CLAUDE.md) that sets the rules the coding agents need to follow. Implicitly, this also guides the human software developers - thereby acting as a glue, a common playbook for the human-AI workforce. In software applications, this means understanding and owning your software development pipeline, ensuring that your development tools (CI/CD, repositories, test automation, collaboration, documentation, etc.) are designed and set up to work well for mixed human-AI teams.
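As an illustration, such a playbook might look something like the sketch below. The rules are hypothetical examples of the kind of guidance a mixed human-AI team could set, not Anthropic's actual CLAUDE.md:

```markdown
# CLAUDE.md (hypothetical example)

## Workflow rules
- Run the full test suite before proposing any commit.
- Never push directly to `main`; open a pull request and wait for review.
- Ask a human for approval before changing anything in the payments module.

## Conventions
- All new code is TypeScript with strict mode enabled.
- Follow the existing folder structure under `src/`; do not add new
  top-level directories.

## When stuck
- After two failed debugging attempts, stop and summarise your findings
  for a human to review.
```

Because both the agents and the human developers read the same file, it serves as the shared contract the orchestrator designs and enforces.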
The complexity of this orchestration is what is currently holding back the rollout of agentic AI systems. In a previous post on the future of work, I wrote how less than 10% of firms were scaling AI systems outside of a single functional silo. This is not a problem that is restricted to the digital domain. A recent McKinsey report [8] describes how most human jobs consist of automatable and non-automatable tasks. The non-automatable tasks may include those that require physical activities (e.g. in construction), human-to-human interaction (e.g. in healthcare) or have a level of accountability that is difficult to delegate to an agent or robot (e.g. regulatory tasks, or human supervision). The orchestration role will involve creating workflows that work across humans, AI agents and robots.
The McKinsey report describes some imagined scenarios, including a building materials depot. This is very much at the ‘physical’ end of the spectrum, but you can see how the workflow will include AI agents managing inventory, ordering materials and providing tailored customer advice, humans building customer relationships and supervising the store, and robots transferring and loading materials. In another example, Jason Lemkin, the founder of SaaStr, the largest community of SaaS (Software as a Service) events, describes how his company replaced a team of approximately 10 human sales and business development staff with a team of 2.5 people supervising 20+ agents. [9]
So irrespective of whether your workflow is fully digital or crosses into physical or human interactions, Human-AI Orchestrators will be key to your success. The AI orchestrators will be responsible for designing, integrating, managing and maintaining these complex systems. These are very much the CIO and CTO teams of the future.
3. Humans as ‘AI Validators’
We now close the “accountability cycle” by validating the output of AI or hybrid human-AI teams. I am not talking here about the act of carrying out the review and testing of AI outputs. Although we have seen how software engineering teams are now spending more time reviewing output, I believe that this is a transitory step, and before long, we will be relying on AI agents to test and validate outcomes.
I am instead referring to the socio-technical process by which an organisation ensures that its outcomes, and the processes used to create them, meet its customer expectations as well as its regulatory and compliance obligations. Customer expectations cover a broad range of considerations. In an aerospace components company, it is ensuring that the component you are creating meets the needs of the broader system (be it an aeroplane or a spacecraft), and is compliant with all the applicable and typically rigorous regulations. In this context, AI Validation is therefore an intrinsically human (and organisational) function. It ensures an AI-enabled outcome is acceptable and lawful for its intended use - i.e. it meets safety, regulatory, ethical, and customer obligations - and that there is contestability and redress when it fails. In healthcare applications, the medical professional responsible for your care remains accountable for your well-being, even if they may be using AI-supported diagnosis and treatment tools.
The key limitation of the current crop of generative AI algorithms is that they are optimised to give statistically ‘most likely’ outcomes, which are not necessarily technically correct. There will therefore always be a role for a human to act as the person responsible for the outcome, and often this will have legal implications. I will certainly not offer any legal analysis, as I am not qualified or experienced to do so. That said, most jurisdictions, including the UK [10] and the EU, do not recognise AI or IT systems as legally accountable entities. Instead, named individuals are typically expected to hold the role of ‘Validator’, responsible for the outcomes of automated systems, whether they involve personal data, financial outcomes, safety-critical systems or healthcare outcomes. For example, the UK’s Information Commissioner’s Office sets the standard of “meaningful human review” when personal data is processed in ways that have significant outcomes. [11]
To be clear, people in AI Validator roles are not just those who hold accountability for the systems, but also those designing, maintaining and operating the systems that ensure the correctness of outputs and the acceptability of outcomes. Examples are the chief engineer signing off a declaration of conformity, or the Clinical Safety Officer in a healthcare setting. AI Validators are responsible for classifying risk, defining acceptance criteria, designing and maintaining verification tests, maintaining audit trails, and managing governance, potentially including review and rectification mechanisms.
4. Humans as ‘AI Innovators’ - A golden age of innovation?
AI tools are disrupting knowledge work in two major ways. First, they are “work displacement” actors, replacing tasks previously carried out by humans with AI agents. Secondly, as described above, they are radically reducing the cost of producing complex outputs, such as software systems. As AI technology advances, there is no reason to believe that other fields, such as engineering, biotechnology, materials science, medicine, and the creative arts, will not also see a collapse in the cost of ideation.
In all the engineering and tech organisations I have worked for, output has been constrained by engineering capacity. Increasingly, this is no longer the key constraint, and I believe we are entering a golden age of invention. Let me explain. Product development is an iterative process, with cycles ranging in scale from Apollo Moon launches to the Build-Measure-Learn loops popularised by Eric Ries’ Lean Startup methodology. The speed of product development has traditionally been constrained by the time to build and test systems. The “learning” part, the act of figuring out which changes, pivots, optimisations and experiments will create a better outcome, has in my experience rarely been the limiting factor. But once the build constraint is removed, as I experienced when creating Weavify, teams can innovate without the handbrake on.
This opportunity is largely overlooked, as most discussion on AI adoption appears to focus on the productivity and efficiency gains that hybrid human-AI systems can produce and the sheer difficulty of creating reliable human-AI workflows and systems. Even Google is focusing on metrics such as developer velocity, reducing toil and enhancing code quality. It is therefore not surprising that the prize of all this, an explosion of human creativity, is being missed. As the cost of running experiments falls, so does the cost of failure. You start to run out of reasons not to be ambitious in your ideation.
Your constraints shift “left” and “up” in the product development cycle. You are now limited by “early-stage” activities, such as product ideation, creativity, access to data, and the ability to carry out experimental A/B testing on real customers. Additionally, as the ability to move fast increases, so does the importance of having a coherent strategy and a clear core proposition. Otherwise, it is too easy to get lost in the “noise” of multiple iterations. The risk here is not so much a lack of ideas or of development cycles, but a lack of coherence. This makes strategic clarity, and human judgement, more, not less, critical.
AI Innovators will therefore be entrusted with the responsibility of steering these faster development cycles. Today, we typically call these roles product designers, product managers, scientists or innovators. I suspect that these roles will remain particularly well suited to human intuition for two main reasons. First, while there may be plenty of data about existing and previous products or iterations, innovation is the act of envisaging imagined futures, something for which little data exists and with which prediction algorithms may consequently struggle. Secondly, these roles rely on understanding human needs, both expressed and unexpressed. Again, AI tools can help, but I am not sure they will be in the driving seat.
In a blog on Product Management in the Age of AI, Brian Balfour, CEO of Reforge, an AI customer analysis company, states: “The yearly strategy cycle makes no sense in this environment anymore, given how fast things are changing and speeding up… Leaders will be steering ships moving at 10X the speed before.” [13] I couldn’t put it better. If your velocity is 10X higher, it must be matched by strategic clarity and rapid decision-making.
Conclusion
In this blog, I have tried to extrapolate what this inflection point in the performance of AI coding means for the future of work more generally. The most commonly used lens - namely, analysing which tasks can and cannot be automated - misses the mark. Additionally, the oft-used paradigm of humans as supervisors and AI as builders, while true, is somewhat simplistic.
What I have tried to do is ask an altogether more useful question: as the act of “building things” becomes cheaper, where will the uniquely human attributes of connection, intuition and accountability continue to have enduring value in a world dominated by AI? My argument is that these attributes become more, not less, valuable, and will be concentrated in four types of roles.
While we are seeing the first signals take shape in software development, I suggest that these four roles, or versions of them, will emerge across multiple sectors and industries. This is not a prediction of which specific jobs will emerge in the future. It is a framework for thinking about the reshaping of the workforce and the roles of humans within it.
Further Reading
METR, Measuring AI Ability to Complete Long Tasks, Model Evaluation & Threat Research (METR), March 2025
Haider M., Nvidia CEO Jensen Huang Calls Anthropic’s Claude ‘Incredible,’ Says Every Software Company Needs To Use It, Yahoo Finance, January 2026.
Wired Staff, Claude Code Is a Big Success—and a Test of Anthropic’s Business Model, Wired, 2025
Orosz G., How Claude Code is built, The Pragmatic Engineer, September 2025
Edwards B., Developers say AI coding tools work—and that’s precisely what worries them, Ars Technica, February 2026
Anthropic, How AI is transforming work at Anthropic, December 2025.
Yee et al., Agents, Robots, and Us: Skill Partnerships in the Age of AI, McKinsey & Company, November 2025
SaaStr Staff, Stop Learning AI, Start Doing AI: The 20+ Agents Running SaaStr, SaaStr, 2025–2026
UK Department for Science, Innovation & Technology, Implementing the UK’s AI Regulatory Principles: Initial Guidance for Regulators, February 2024.
UK Information Commissioner’s Office, How do we ensure individual rights in our AI systems?
Langley H., An internal Google project is trying to supercharge employees with AI. Codename: Project EAT., Business Insider, 29 January 2026
Balfour B. et al., Moving To Higher Ground: Product Management In The Age of AI, Reforge Blog, March 2025.
Morrone M., Anthropic’s viral new work tool wrote itself, Axios, January 2026




