We've invested significantly in NetSuite and the ecosystem of business systems that run INS. Now we're asking: how do we bring AI into these workflows? The answer isn't as simple as buying another module or hiring a consultant. The real barrier is how our work is structured.
Most AI agent deployments stall at the same place: the agent can write a plausible draft, summarize data, and generate options, but it can't go further. It can't actually execute meaningful work in NetSuite. It can't update inventory records, process orders, or reconcile accounts alongside our teams. The wall isn't the AI. It's our GUI-centric, tribal-knowledge execution model. That model served us well in the pre-AI era, but it now stands as an impediment to fully leveraging AI at scale.
The Hidden Tax on Our Business Systems
Consider how work actually happens across our systems today. In NetSuite, critical business logic lives in saved searches that only certain people know how to modify. Custom workflows encode approval processes that aren't documented anywhere. SuiteScript customizations solve problems we've forgotten we had. The knowledge of why things work the way they do lives in people's heads: "Ask accounting about that field," "Inside Sales built that workflow," "We don't touch that saved search."
The same pattern repeats across vendor portals, each with its own click-paths, its own permission structures, its own tribal knowledge about how to get things done. And then there are vendor-specific deal registrations, each manufacturer with different rules, different portals, different expiration timelines, different approval processes.
The minute you want agents, everything you hid becomes expensive. AI agents don't thrive in "clicky" environments where state is scattered across NetSuite screens, custom fields, saved searches, and role-based permissions. Agents thrive where the underlying work is visible and editable.
A recent case study from Cursor, one of the breakout companies in AI-assisted development, illustrates this perfectly. They had adopted a content management system for their website, the kind of "grown-up" infrastructure move organizations make as they scale. But they realized the CMS had become a wall between their AI agents and the work itself.
They migrated back to raw code and markdown in three days, using $260 in AI tokens, for a project an agency had estimated would take weeks. The headline isn't that AI agents are fast. It's that removing the abstraction layer made the work legible to agents in a way it hadn't been before. The same principle applies to our ERP and business systems.
What "Primitives" Means for INS
A primitive is simply a small, stable building block that stays useful even when tools change. For INS, primitives aren't about technology. They're about the fundamental questions that govern how work moves through our business: How does an order progress from quote to shipment? What determines when inventory needs replenishment? How do we know a vendor price change has been applied correctly?
The questions that define our work primitives are ones we ask constantly, but rarely document explicitly:
The Six Questions That Define Work Primitives
- System of record: Where is the authoritative source? Is it the NetSuite item record, the vendor portal, or a spreadsheet someone maintains?
- Before/after state: Can we see what changed on a customer record or pricing update? Or do we only discover changes when something breaks?
- Defined gates: What must be true before an order ships, a price updates, or a vendor gets paid? Are these rules explicit or in someone's head?
- Validation checks: How do we prove the inventory count is right? "It looks right" isn't validation. Reconciliation against a defined source is.
- Rollback capability: If a bulk price update goes wrong, how quickly can we undo it? Agents increase throughput; rollback keeps that from becoming risk.
- Traceability: Who changed that customer's credit terms, when, and why? As AI agents join our workflows, this becomes essential.
These aren't engineering abstractions. They're the questions we ask when something goes wrong with an order, when inventory doesn't match, when a customer disputes a charge. The difference is that organizations with clear primitives can answer them immediately, while organizations dependent on NetSuite screens and tribal knowledge cannot.
The NetSuite Challenge: Why Our Current Setup Resists AI
When work lives inside NetSuite's user interface, when our team must remember which saved search to run, which custom fields matter, and which workflows trigger automatically, agents remain advisors at best. They can summarize data we export for them, but they cannot act.
Consider the hidden costs we're paying across our systems today:
- Scattered business logic. Rules for pricing, credit holds, and order routing live in saved searches, workflow actions, and SuiteScript. No single place documents how decisions actually get made.
- Permission complexity. Different roles see different data. What sales sees differs from what accounting sees. An agent would need to understand these boundaries to work safely.
- Integration brittleness. Data flows between NetSuite and vendor portals. Deal registrations live in manufacturer systems with no connection to our quotes. When something breaks or expires, finding the source requires deep system knowledge.
- Undocumented customizations. Years of SuiteScript modifications, custom fields, and workflow tweaks. Some solve problems we've forgotten. Some conflict with each other.
We're not alone in this. Most mid-market companies running NetSuite face the same accumulated technical debt. But the cost isn't just in maintenance overhead. It's in the ceiling it puts on what AI can do for us. That hidden state, maintained in human memory and scattered across system configurations, is extraordinarily expensive.
If our workflows remain locked inside NetSuite's screens and our team's institutional memory, AI agents will remain drafting assistants. They'll help write emails and summarize reports, but they won't process orders, manage inventory, or reconcile accounts alongside us.
The Shift to Artifact-Based Work
The world being built by leading AI organizations revolves around artifacts: documented processes, explicit rules, versioned configurations, and validation checks. When a workflow resolves to artifacts plus validation, agents can participate in execution. When a workflow lives inside GUI state that humans must operate and remember, agents cannot reliably act.
This is why software companies are winning first in the AI agent race. It's not because their engineers are better, but because software development is already built around the infrastructure of legibility: version history, explicit rules, tests, rollbacks, and audit trails. That is the same infrastructure we need for our business systems.
The cultural pattern emerging at AI-native organizations is that non-technical people are learning to express their work in forms that agents can read. Not learning to code, but learning to document workflows explicitly, to define validation rules clearly, and to collaborate with agents on the actual work.
What This Means for INS Systems
For our business systems, primitives might include: pricing rules documented outside of scattered saved searches, order processing logic that's explicit rather than embedded in workflows, inventory policies that can be read and validated independently of the UI, and change logs that capture not just what changed but why.
For our broader ecosystem: explicit data mappings between NetSuite and vendor portals, documented deal registration rules by manufacturer, pricing logic that accounts for registered deals versus standard pricing. The question isn't whether to replace these systems. It's whether we can answer the six primitive questions for our critical workflows, and whether those answers exist in forms that both our team and AI agents can read.
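As one illustration of an explicit data mapping, the field correspondence between NetSuite and a vendor portal can live in a small, reviewable artifact instead of inside an integration someone has to remember. Every field name here, and the portal itself, is a hypothetical assumption for the sketch, not our actual configuration:

```python
# NetSuite field -> vendor portal field, kept as a versioned artifact.
# All names below are illustrative placeholders.
NETSUITE_TO_PORTAL = {
    "itemid":          "sku",
    "vendorcode":      "mfr_part_number",
    "cost":            "unit_cost",
    "dealreg_id":      "registration_id",      # ties a quote to a registered deal
    "dealreg_expires": "registration_expiry",  # per-manufacturer timelines
}

# Portal fields the workflow cannot run without.
REQUIRED_PORTAL_FIELDS = {"sku", "mfr_part_number", "unit_cost", "registration_id"}

def mapping_gaps(mapping: dict[str, str], required: set[str]) -> set[str]:
    """Portal fields we need but have no NetSuite source for."""
    return required - set(mapping.values())

print(mapping_gaps(NETSUITE_TO_PORTAL, REQUIRED_PORTAL_FIELDS))  # → set()
```

Because the mapping is data, a completeness check like `mapping_gaps` can run automatically whenever either side changes, instead of the gap surfacing as a broken deal registration weeks later.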
Building Primitive Fluency at INS
The goal is not to turn our sales team into programmers or our warehouse staff into NetSuite administrators. The goal is to help everyone express their work in forms that agents can safely act against, and to recognize when tribal knowledge needs to become documented process.
This requires teaching concepts, not technology:
- State: What is the current status of this order, this customer, this inventory item? Where is that written down? If it's "check the saved search" or "ask Maria," that's a problem.
- Artifacts: What is the system of record? When NetSuite says one thing, a vendor portal says another, and the warehouse count differs, which one wins?
- Change records: Can we see what changed on a customer account without digging through audit logs? Can we explain why a price changed last month?
- Checks: How do we validate that an order is ready to ship? That inventory is accurate? That a vendor payment is correct? "It looks right" isn't a check.
- Rollbacks: If we push a bad price update or misconfigure a workflow, how quickly can we recover? Is that documented?
- Traceability: Who changed what, when, and why? This becomes essential when agents start participating in workflows.
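A check, in this sense, is something that produces a definite answer rather than an impression. A minimal sketch of an inventory reconciliation check, with made-up item numbers and counts:

```python
def reconcile_inventory(system_counts: dict[str, int],
                        physical_counts: dict[str, int]) -> dict[str, int]:
    """Return items whose system count disagrees with the physical count.

    The result is either empty or a concrete work list. "It looks right"
    appears nowhere in this function; that is the point of a check.
    """
    all_items = set(system_counts) | set(physical_counts)
    return {
        item: system_counts.get(item, 0) - physical_counts.get(item, 0)
        for item in all_items
        if system_counts.get(item, 0) != physical_counts.get(item, 0)
    }

# Illustrative data, not real item numbers.
discrepancies = reconcile_inventory(
    {"CBL-100": 40, "SW-24P": 12},   # what NetSuite says
    {"CBL-100": 40, "SW-24P": 9},    # what the warehouse counted
)
print(discrepancies)  # → {'SW-24P': 3}
```

A check this explicit is exactly what an agent can run on every count, every day, and escalate only the discrepancies to a person.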
When our team shares these mental models, something powerful happens: people can see that NetSuite is a tool, not a source of truth. They can propose process improvements without triggering fear, because they can articulate what must stay true regardless of how we implement it. Work can be restructured into forms that agents can participate in, rather than being trapped in click-paths and institutional memory.
A Framework for Moving Forward
The mistake would be to interpret this as "replace NetSuite" or "rebuild everything from scratch." That's not the point. NetSuite and our other systems will remain central to our operations. The point is that primitive fluency, understanding how work actually flows and making that flow explicit, creates the foundation for AI to participate.
Start by working through these steps for our most critical workflows:
- Map our current state. How does an order actually move from quote to shipment? Where does each decision get made, and where is each piece of information stored?
- Identify the gaps. Which of the six primitive questions can we answer immediately for order processing? For inventory management? For vendor payments? Where do we rely on "ask someone who knows"?
- Define our artifacts. What should be the authoritative source of truth for pricing? For customer credit terms? For deal registration status? Can that truth exist independently of any single system's UI?
- Build visibility. Can we see what changed on a customer record or item setup without digging through NetSuite's system notes?
- Establish checks. What validates that inventory counts are correct? That orders are complete? That vendor invoices match POs? Are these checks automated or manual?
- Enable reversal. If we push a bad configuration change, how quickly can we recover? Do we even know what the previous state was?
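The last two steps reinforce each other: if every change captures its before-state along with who, when, and why, then reversal is trivial. A minimal sketch of that pattern, applied to a hypothetical bulk price update:

```python
import copy
import datetime

def apply_with_snapshot(records: dict, updates: dict,
                        changed_by: str, reason: str) -> dict:
    """Apply a bulk update, but record the before-state and the why first."""
    snapshot = {
        "before": copy.deepcopy({k: records[k] for k in updates}),
        "changed_by": changed_by,
        "changed_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "reason": reason,
    }
    records.update(updates)
    return snapshot

def rollback(records: dict, snapshot: dict) -> None:
    """Restore exactly what the snapshot captured, nothing more."""
    records.update(snapshot["before"])

# Illustrative prices and item numbers.
prices = {"CBL-100": 9.50, "SW-24P": 189.00}
snap = apply_with_snapshot(
    prices, {"SW-24P": 175.00},
    changed_by="agent-pricing-01",
    reason="vendor cost decrease, Q3 price file",
)
rollback(prices, snap)      # the bad update is undone in one step
print(prices["SW-24P"])     # → 189.0
```

The snapshot doubles as the traceability record: the same artifact that makes the change reversible answers who changed it, when, and why.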
AI strategy is not a procurement decision. It's a literacy decision. The winners won't be the companies that buy the most AI tools. They'll be the companies where enough people understand their work primitives to restructure workflows to be agent-legible, unlocking AI agents as operators rather than just advisors.
The Path to Human-Agent Collaboration
Organizations that define clear, composable primitives position themselves for effective collaboration between humans and AI agents. This isn't about replacing human judgment in complex customer situations or vendor negotiations. It's about creating conditions where human expertise can be amplified by AI that handles routine execution reliably.
Imagine: our commercial domain expertise captured in explicit rules that agents can apply consistently across all opportunities. Our operations team's order processing knowledge documented so that agents can handle routine fulfillment while humans focus on exceptions. Our accounting team's reconciliation logic expressed clearly enough that agents can flag discrepancies before they become problems.
Simplicity wins in the age of AI. When AI capabilities change fast, when models improve monthly, the organizations that thrive will be those with stable primitives underneath. Not because they froze their technology, but because they built on foundations that remain useful regardless of which AI tools sit on top.
The companies that capture AI's potential won't be those with the biggest AI budgets or the most sophisticated ERP implementations. They'll be the ones where enough people understand how work actually flows, where teams can question whether a process needs to be this complex, and execute improvements because everyone understands what must stay true underneath.
That's what operational excellence looks like in the AI era: simpler foundations, less hidden state, fewer brittle handoffs, and more of the company able to safely improve how we work. Not speed for its own sake, but the systematic removal of barriers between our people, our AI agents, and the customer outcomes that matter.