Two ideas collided recently that crystallized something we've been sensing at INS. The first: every knowledge work role is collapsing into a single meta-competency of directing AI agents. The second: the people who win in this era aren't the most technically skilled at AI, they're the ones who think about it differently. Together, these ideas reveal a paradox that matters for every organization trying to figure out AI strategy: going faster is actually safer than going slow.
In our previous pieces on the skill tree for probabilistic systems and the super-exponential AI timeline, we explored how the technical landscape is shifting and how fast it's accelerating. This piece examines the human side of that equation: what it actually means to shift from doing to directing, and why the organizations that hesitate are taking on more risk than the ones that lean in.
Fifty Roles, One Skill
Something unprecedented is happening across knowledge work. Engineer, product manager, marketer, analyst, designer, operations lead: these used to be distinct career paths with distinct skill sets. They're converging now, rapidly, into variations on a single theme: humans directing AI with domain knowledge and clear intent toward an outcome.
The data supports this. Gartner predicts that 40% of enterprise applications will integrate task-specific AI agents by the end of 2026, up from less than 5% in 2025. That's an eight-fold increase in just over a year. This isn't a trend to monitor. It's a structural shift already underway.
Consider what a product manager does today versus two years ago. The job used to require synthesizing customer feedback, writing specs, coordinating with engineering, and managing stakeholders. Now, increasingly, it involves prompting models to draft specs, using AI to analyze customer data, and directing agents that build in production. That pattern repeats across every function. Legal teams compress weeks of contract review into hours. Finance teams build projections in a fraction of the time. Customer success teams run AI agents that handle the majority of initial inquiries.
What used to be fifty different specializations is collapsing into that single theme, now with a sharper edge: domain knowledge expressed as software-shaped intent. Your domain expertise doesn't disappear. It becomes foundational rather than differentiating by itself. You need great domain knowledge to direct AI effectively, but you have to be able to leverage that knowledge through AI.
This is what we mean by convergence. It's not that marketing becomes engineering or that finance becomes product. It's that every role now requires the same meta-skill: the ability to orchestrate AI agents effectively. The differentiation between roles shifts from what you know how to do to how well you can direct AI to do it with the domain knowledge you carry.
The Bike Analogy: Why Slower Is Riskier
There's a powerful analogy that captures the counterintuitive reality of AI adoption: learning to ride a bicycle. When you're going slow on a bike, it's incredibly hard to balance. Every wobble feels like you're about to fall. But when you go faster, the bike's self-stabilizing dynamics kick in. The bike steadies. Balance becomes almost effortless.
Kids learning to ride bikes demonstrate this perfectly. They think going slower will keep them safer. But the physics work the opposite way. Speed creates stability.
AI works the same way. Organizations that move slowly with AI find themselves constantly wobbling: overthinking every prompt, second-guessing every output, trying to fit AI into existing workflows that weren't designed for it. They spend more energy fighting for balance than making progress. Meanwhile, organizations that lean in and go faster find that the patterns start to solidify. The unconscious understanding of how AI works across systems builds up. The ride gets steadier over time precisely because they committed to velocity.
If you tried ChatGPT in 2022, decided it hallucinated too much, and left it there, that position has become untenable. You don't have time to wait for AI to "mature." You don't have time to declare your job immune. Soon, any time you touch a computer, you will be touching AI. That's how pervasive this will be within the next year. Preparation means engagement. This is an art you learn by doing. You don't learn to ride a horse by reading a book.
The old career model assumed your expertise appreciated over time. You learned something valuable, it stayed valuable, and it gradually compounded. The new model is different. Expertise depreciates unless you continuously update it. And the depreciation rate is accelerating because AI capabilities are accelerating. The SWE-bench coding benchmark went from roughly 4% of tasks solvable in 2023 to the large majority solvable by frontier models two years later. The doubling time of AI capability improvement is shrinking.
This creates an uncomfortable but important truth: the skills that will matter in 2027 are being defined right now by the people engaging right now. If you wait until the technology settles down, early adopters will have already built the workflows, established the norms, and captured the opportunities. They'll have compound learning while you're still working through the basics.
Follow the money for proof. Big tech's combined AI capital expenditure was close to half a trillion dollars in 2025 and will exceed that in 2026. The big five plan to add at least $2 trillion in AI-related assets over the next four years. This is the biggest capex project in human history. The money is committed. There is no mature state to wait for. There is only a continuously steepening curve that rewards those who climb early.
The Uncomfortable Truth
For anyone thinking about waiting: the people who are thriving now are not the ones who took an AI class and are coasting. They're the ones who developed the meta-skill of continuously learning and adapting as the technology evolves. The half-life of any specific piece of AI knowledge is short and getting shorter. The half-life of the learning habit is long and getting longer. The question isn't whether to engage. It's whether you'll engage on your own terms while you still have the option.
Be the Director, Not the Doer
So if the urgency is real and every role is collapsing into "directing AI agents," the critical question becomes: what does it actually mean to direct well?
Most people still treat AI like a faster pair of hands. It's the equivalent of buying a self-driving car and keeping your hands on the steering wheel the entire time. The real leverage comes from treating AI like a team you direct, not a tool you operate.
This is a genuine identity shift for many professionals. We built careers on being excellent doers. Our value was in execution: writing the code, crafting the analysis, building the presentation, processing the orders. When the doing can be delegated to AI, the value moves upstream to the directing: defining what needs to happen, setting quality standards, making judgment calls on the output.
The 10-80-10 Model
A practical structure for directing AI in any workflow:
- First 10% — Ideation: Sit down with your team, your domain knowledge, your understanding of the problem. Collaborate. Define the intent, constraints, and quality criteria. This is pure human work: vision, context, judgment.
- Middle 80% — Execution: This is where AI takes your input and produces nearly completed work. Generation, research, drafting, analysis, building. The volume of output AI can handle here is extraordinary, and it doesn't sleep, take vacations, or call in sick.
- Final 10% — Taste and Integration: You come back in to evaluate, refine, and integrate the output into the business. This is where the human taste and domain expertise we developed earlier become irreplaceable. It's the final quality gate that makes AI output actually valuable.
This model maps directly to the skill tree we outlined in our previous piece on probabilistic systems. The first 10% is conditioning: intent specification, context engineering, constraint design. The middle 80% is the workflow running. The final 10% is authority: verification, judgment, and the decision to ship.
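To make that division of labor concrete, here's a minimal sketch of a 10-80-10 loop in Python. Everything in it is illustrative: `call_model` is a stand-in for whatever model provider you actually use, and the brief fields and quality checklist are assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Brief:
    """First 10%: human ideation, captured as explicit intent."""
    goal: str
    constraints: list[str] = field(default_factory=list)
    quality_criteria: list[str] = field(default_factory=list)

def call_model(prompt: str) -> str:
    """Stand-in for your model provider's API; returns a canned draft here."""
    return f"[model draft based on]\n{prompt}"

def generate_draft(brief: Brief) -> str:
    """Middle 80%: the AI does the volume work from your brief."""
    prompt = (
        f"Goal: {brief.goal}\n"
        f"Constraints: {'; '.join(brief.constraints)}\n"
        "Produce a complete first draft."
    )
    return call_model(prompt)

def human_review(draft: str, brief: Brief) -> bool:
    """Final 10%: the human quality gate. Taste and judgment live here."""
    print(draft)
    return all(
        input(f"Meets '{criterion}'? [y/n] ").strip().lower() == "y"
        for criterion in brief.quality_criteria
    )

brief = Brief(
    goal="Quarterly revenue projection memo for the board",
    constraints=["use Q1-Q3 actuals only", "one page"],
    quality_criteria=["numbers reconcile", "risks stated plainly"],
)
draft = generate_draft(brief)
approved = human_review(draft, brief)  # nothing ships without the final 10%
```

The structure matters more than the specifics: the middle 80% can produce anything, but nothing ships until the final 10% says so.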
From Push Prompting to Pull Prompting
One tactical change captures the director mindset in practice: the shift from push prompting to pull prompting. Most people use AI by telling it exactly what to do and how to do it. They push their assumptions into the prompt and get back a mirror of their own thinking. Innovation can't happen when you're guiding AI based solely on what you already know.
The director's approach is different. Instead of telling AI how to do something, tell it what you need and have it ask you the questions. AI has more information and more context than you can hold in your head. Let it guide you toward the best outcome rather than constraining it to your existing mental model.
The same inversion applies at the workflow level. Most people do all the work themselves and then go to AI to fill in the blanks. Directors have AI do the work and then come in to fill in the blanks themselves, applying taste, judgment, and domain expertise to the final integration. This is the 10-80-10 model in action.
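Here's what the inversion can look like at the prompt level. Both prompts are hypothetical examples, not templates from any particular tool; the difference is who supplies the structure.

```python
# Push prompting: you encode all your assumptions up front
# and get back a mirror of your own thinking.
push_prompt = """Write a launch email for our new analytics feature.
Friendly tone, three paragraphs, end with a discount code."""

# Pull prompting: state the outcome, then let the model pull
# the context it needs out of you before it produces anything.
pull_prompt = """I need a launch email for our new analytics feature.
Before you write anything, ask me the five questions whose answers
would most change the result: audience, positioning, proof points.
Wait for my answers, then propose two distinct drafts."""
```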
The Five Shifts That Separate Directors from Spectators
Becoming an effective AI director isn't a single skill. It's a set of interconnected mindset shifts that build on each other. Skip one, and the others lose potency.
Use AI as a Trainer, Not a Crutch
The fear that AI will make us intellectually lazy gets the causality backwards. Used well, AI strengthens thinking rather than replacing it. The key distinction is between using AI to bypass learning and using AI to accelerate it.
The same principle applies in the workplace. When an analyst uses AI to generate a financial model, they shouldn't just accept the output. They should interrogate it, learn from its approach, and develop sharper instincts about what makes a good model. AI as trainer means every interaction is a learning opportunity that compounds over time. This is how domain expertise stays current rather than depreciating.
Develop Taste as a Professional Skill
Taste is your ability to recognize excellence instantly. In a world where AI can generate infinite outputs, the bottleneck shifts from production to evaluation. The person who can look at ten AI-generated options and immediately identify the best one, or articulate why none of them are right, holds enormous leverage.
Taste isn't innate. It's built through deliberate exposure to excellence. Immerse yourself in the best work in your field. Study the masters. When you encounter something great, ask yourself why it works. Train your pattern recognition for quality. In an era of infinite AI-generated output, taste becomes the scarcest and most valuable resource.
Think in Software-Shaped Intent
This is perhaps the most overlooked skill in the convergence. When directing AI agents, you need to think in terms of what agents can actually deliver within the technical ecosystem they occupy. What tools does the agent have access to? Where does its memory live? What does its workflow look like? When you direct an agent, will the result look software-shaped: an interface that reads and writes the right data to solve the problem?
This used to be exclusively an engineering skill. Now it's becoming universal. Even if your job has nothing to do with building software, the idea that we work with agents means we all need to develop an intuition for how software systems process information. The person who can frame a marketing campaign, a financial analysis, or an operational process in terms that an AI agent can execute on will consistently outperform the person who can only think in traditional workflows.
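One way to picture "software-shaped" is to take the same request and express it twice: once the way you'd say it to a colleague, once as a structure an agent can execute against. The field names below are illustrative assumptions, not any particular agent framework's schema.

```python
# Human-shaped: "Can you pull together something on churn for Friday?"

# Software-shaped: explicit inputs, tools, outputs, and done-criteria.
churn_task = {
    "goal": "Summarize Q3 churn drivers for the exec meeting",
    "inputs": ["warehouse table: customer_events", "CRM export: accounts.csv"],
    "tools": ["sql_query", "spreadsheet_read", "doc_writer"],
    "output": {
        "format": "one-page memo",
        "must_include": ["top 3 churn drivers", "cohort comparison", "one chart"],
    },
    "done_when": "Memo saved to the shared drive and linked in the meeting doc",
}
```

Nothing about that structure is exotic. What's new is who needs to be able to write it: everyone.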
Build Vision Through AI-Augmented Thinking
Vision is the ability to see a future that doesn't exist yet, but should. Most people don't spend enough time in the future. AI changes the economics of forward-looking analysis dramatically. Scenarios that once required teams of analysts can now be explored in minutes.
The discipline here is to block out dedicated thinking time, separate from execution, and use AI as a research co-pilot for exploring possibilities. Study breakout innovations outside your industry. Use AI to pressure-test assumptions. The breakthroughs don't come from knowing your industry better than everyone else. They come from cross-pollinating ideas: seeing what assembly lines can teach you about knowledge work, or what music production can teach you about software development.
Lead with Care in a World of Infinite Outputs
AI can mimic intelligence. It cannot mimic genuine care. In a world where everyone has access to the same AI tools, the differentiator isn't the quality of your prompts. It's whether people trust you enough to help you execute your vision.
This is the most human and least automatable of all the shifts. Get to know your people, your clients, your community. Understand their goals well enough to align organizational objectives with individual aspirations. Ask for feedback before giving it. Celebrate milestones. The whole point of business is to develop people. If you wake up every day to build wealth, you'll burn through talent. If you wake up to develop people, they'll build the business with you. In a world of infinite AI outputs, care is the ultimate input.
What This Means for INS
Evaluating the Convergence
At INS, we're evaluating what this convergence means for our own teams. The question we keep coming back to is whether we can help people across every department learn to express their workflows in forms that agents can execute against. That's the primitive fluency we wrote about previously, and we believe it's becoming the foundation for every role.
If we get this right, the investment in making workflows explicit, in pulling business logic out of tribal knowledge and GUI-dependent processes, will serve double duty: it becomes our AI readiness strategy and our operational excellence strategy at the same time. Every workflow we make legible to agents becomes a workflow we can direct rather than manually execute. Every primitive we define becomes a building block that compounds across the organization.
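As a sketch of what "legible to agents" can mean in practice, here is a once-tribal workflow written down as an ordered series of primitives. The step names and fields are hypothetical, but the shape is the point: every step an agent can run becomes direction rather than doing.

```python
# A client-onboarding workflow, pulled out of tribal knowledge and made
# explicit. Each step names a primitive, its inputs, and who (or what)
# can run it today.
onboarding_workflow = [
    {"step": "create_account",   "inputs": ["signed_contract"],    "runner": "agent"},
    {"step": "provision_access", "inputs": ["account_id", "role"], "runner": "agent"},
    {"step": "schedule_kickoff", "inputs": ["account_id"],         "runner": "agent"},
    {"step": "kickoff_call",     "inputs": ["agenda"],             "runner": "human"},
]

# The steps left to humans are where judgment and relationships live.
delegable = [s["step"] for s in onboarding_workflow if s["runner"] == "agent"]
```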
The Encouraging Reality
It's easy to read all of this as doom and gloom. Roles are collapsing. Timelines are compressing. Domain expertise alone won't save you. But there's a deeply encouraging flip side that we keep seeing confirmed in practice.
When people choose to engage with AI with genuine curiosity, when they decide to get on the bike and pedal, the results are consistently transformative. Not just for their output, but for their sense of agency and capability. Curiosity opens the mind to new patterns, and those patterns start to solidify. Working with AI across different systems builds an unconscious competence that makes every subsequent interaction more effective.
We've watched this happen across widely differing fields: healthcare, finance, engineering, product, even small-town community building. Without exception, the choice to positively lean in takes people farther than they expected. The people who win aren't the most technically gifted. They're the ones who combine curiosity with domain knowledge, who develop taste through deliberate exposure to excellence, who think in terms of directing outcomes rather than executing tasks, and who never forget that the humans on the other side of every transaction are what ultimately matter.
The Bottom Line
Your biggest edge in the AI era isn't speed or tools. It's becoming the kind of person who thinks clearly, directs boldly, and cares deeply. Every role is converging. Every timeline is compressing. AI is here to amplify you, not replace you. But amplification requires engagement. The bike won't balance itself. You have to pedal. And the faster you go, the steadier it gets.
Sources: Nate B Jones (AI News & Strategy Daily), Dan Martell (Reclaim Your Freedom. Build Your Dream Life.).