Yesterday, Jensen Huang stood on the GTC stage and announced NemoClaw, Nvidia's enterprise-grade AI agent platform built on OpenClaw. Not a chatbot. Not a search tool. A fully autonomous agent that browses websites, extracts specifications, fills forms, and compiles procurement evaluations without a human in the loop. This changes the timeline for us and for every B2B company whose website isn't ready for its first non-human visitor.
Following the NemoClaw announcement, we ran an agent readiness audit on industrialnetworking.com and then revised it after feedback from our marketing team, our web development partners at Intuit Solutions, and Four Columns. The findings are the reason for this briefing. Our site has real product prices, detailed service descriptions, clean text, and no paywalls. The content quality is strong. But our robots.txt rate-limits every major AI crawler to one page request per 10 seconds: GPTBot, ClaudeBot, PerplexityBot, ChatGPT-User, and thirty others are listed with a Crawl-delay: 10 directive that BigCommerce sets automatically and does not allow merchants to modify. Our JSON-LD structured data exists but is injected entirely via client-side JavaScript, making it invisible to AI agents that perform static page fetches. No llms.txt file. Minimal semantic HTML. Missing meta descriptions and OpenGraph tags. The site is technically crawlable, but an agent evaluating us against competitors with faster, more accessible data will move on long before it finishes reading our catalog.
This document covers why this matters, what was found, and what can be done about it. Some fixes require coordination with our BigCommerce development team. Others are straightforward.
The macro numbers make the urgency clear. AI referral traffic grew 357% year-over-year in 2025, reaching 1.13 billion visits in a single month. Gartner projects that by 2028, 90% of B2B purchasing will be intermediated by AI agents, pushing over $15 trillion of B2B spend through agent-driven evaluation. Fifty-one percent of all web traffic is now automated. The agents are already browsing. The question is whether they can find us when they do.
Background: What NemoClaw Is and Why It Matters
OpenClaw launched in January 2026 and became one of the fastest-growing open-source repositories in GitHub history. It is an AI agent that runs locally on a user's own hardware. It browses the web, fills forms, extracts data from any site, manages files, triages email, schedules tasks, and performs multi-step automation without a human in the loop. It uses the Chrome DevTools Protocol for full browser control, meaning it navigates websites, clicks through them, and extracts data exactly the way a human would, but faster and at scale.
OpenClaw is also, from an enterprise security perspective, alarming. It runs with full access to the host machine and the open web. There is no sandboxing. No access control over which files or systems the agent can touch. No audit trail. No guardrails preventing the agent from accessing sensitive data it shouldn't, sending information to external endpoints, or executing actions outside its intended scope. For any company with compliance requirements, customer data obligations, or regulated operations, deploying raw OpenClaw is a non-starter.
That's the gap NemoClaw fills. Nvidia's enterprise wrapper addresses the security and governance requirements that have kept OpenClaw out of corporate environments:
- OpenShell sandboxing: Agents run in isolated process-level sandboxes that prevent access to files, systems, and network endpoints outside the task scope. The agent can browse vendor websites without touching internal databases or customer records.
- Privacy router: Controls when and how agents communicate with cloud-based models (Claude, GPT, Nemotron). Sensitive data stays local. The router enforces policies about what data can leave the network and what must be processed on-premises.
- Local model execution: Nvidia's Nemotron models run locally on RTX hardware (GeForce RTX PCs, RTX PRO workstations, DGX Station, DGX Spark) for companies that need to avoid cloud exposure entirely.
- Hardware-agnostic deployment: Works regardless of underlying chip architecture. Supports models from OpenAI, Anthropic, and Nvidia's own family.
- Audit and governance: Enterprise logging, access controls, and policy enforcement that IT and compliance teams require before signing off on agent deployment.
This is the missing infrastructure layer that makes OpenClaw deployable in enterprise procurement, operations, and research workflows. The security gaps that kept autonomous agents out of corporate environments are being closed. Mass adoption is now a question of months, not years.
Our site is one of the sites those agents will try to read. NemoClaw isn't alone in this space. Perplexity, ChatGPT with browsing, Claude with computer use, and a growing fleet of specialized procurement agents all have similar capabilities. But NemoClaw removes the last major barrier to enterprise-scale agent deployment. This is the new normal for how B2B buyers will find and evaluate vendors.
How This Plays Out for INS
Here's a realistic scenario. A plant engineer tells their AI assistant: "Find me industrial networking vendors that carry managed Ethernet switches with EtherNet/IP support, can provide OT network design and architecture services, and offer on-site commissioning and validation for midstream oil and gas environments."
That's our sweet spot. We should be at the top of every response. But the agent needs structured, machine-readable data to find us: product categories like Ethernet switches, cellular routers and gateways, serial device servers, and protocol converters. It needs our 14 service lines described with scope and geographic coverage. It needs our case study outcomes with quantified metrics. It needs to know we partner with Cisco, Cradlepoint, and Ericsson.
Right now, our site is designed to funnel humans toward "Contact Us" and "Request Quote." The service descriptions on /offerings/ read "From design and installation to ongoing support, INS delivers end-to-end services" without structured data an agent can parse. Our robots.txt rate-limits AI crawlers to one page every 10 seconds (a BigCommerce platform default we cannot change), which means an agent evaluating our full catalog takes over 8 minutes while a competitor's site with no delay gets evaluated in seconds. Our JSON-LD schema markup exists but is invisible to most AI agents because it is injected via client-side JavaScript rather than rendered in the static HTML. The agent has every reason to move on. The vendor with faster, cleaner, more accessible data gets the recommendation. We don't.
McKinsey estimates $750 billion in revenue will flow through AI search by 2028. The companies that show up in those searches aren't the ones with the best marketing. They're the ones whose information is structured so agents can find it, parse it, and trust it.
What Agent-Readable Actually Means
Agent readability is distinct from mobile-friendliness or accessibility, though it shares DNA with both. Mobile-friendly meant restructuring content for smaller screens. Accessible meant adding semantic markup so screen readers could navigate. Agent-readable means structuring content so AI systems can extract meaning, verify accuracy, and act on what they find.
The analogy that's useful here: in the early web, a page could look great to humans but be completely opaque to search engines. Companies that added proper HTML structure, meta tags, and sitemaps didn't change what they said. They changed how they said it. We need to do the same thing again, at a higher level of sophistication, for agents.
The Machine-Readable Layer
AI agents don't read websites the way humans do. They extract structured data. JSON-LD schema markup, which Google now explicitly recommends, tells agents exactly what your content represents: a product, a service, an organization, a specification sheet. Sites with proper schema markup see 20-40% higher engagement from AI systems. More importantly, research shows that language models are up to 300% more accurate when citing content with proper structured data. The critical nuance: that schema must be present in the static HTML source, not injected via client-side JavaScript. Most AI agents perform static fetches and never execute JavaScript. If your schema only appears after JS runs, Google's crawler sees it but Claude, GPT, Perplexity, and autonomous procurement agents do not.
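The gap is easy to verify. Below is a minimal Python sketch of the check a static-fetch audit performs: extract JSON-LD from raw HTML without executing any JavaScript. The sample markup and file names are invented for illustration, not taken from our actual theme.

```python
import json
import re

JSONLD_RE = re.compile(
    r'<script[^>]*type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

def extract_jsonld(html: str) -> list:
    """Return every JSON-LD object present in the raw HTML source.

    This mimics a static fetch: no JavaScript runs, so schema injected
    client-side will never appear in the result.
    """
    blocks = []
    for raw in JSONLD_RE.findall(html):
        try:
            blocks.append(json.loads(raw))
        except json.JSONDecodeError:
            pass  # skip malformed blocks; a real audit would log them
    return blocks

# A page whose schema is injected by JavaScript exposes nothing to a static fetch:
js_only = '<html><head><script src="/assets/schema-injector.js"></script></head></html>'
assert extract_jsonld(js_only) == []

# A server-side-rendered page exposes its schema to every agent:
ssr = ('<html><head><script type="application/ld+json">'
       '{"@type": "Organization", "name": "Industrial Networking Solutions"}'
       '</script></head></html>')
assert extract_jsonld(ssr)[0]["@type"] == "Organization"
```

Run against our homepage today, the static fetch returns an empty list; run against a server-side-rendered version, it returns the same schema Google already sees.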
The Semantic Clarity Layer
Beyond schema markup, agent readability means content that answers specific questions directly. Compare two versions of our own copy. Current: "From design and installation to ongoing support, INS delivers end-to-end services that simplify complex connectivity challenges." Agent-readable: "INS provides OT network design and architecture, commissioning and validation, fixed wireless site surveys and installation, and managed network services with Cisco, Cradlepoint, and Ericsson partner technologies across industrial, enterprise, and midstream oil and gas environments." The first sentence is marketing. The second is data. Agents need data. We should review our key pages with this lens.
The Documentation Layer
Some of the most agent-friendly companies have started serving clean Markdown versions of their documentation. Stripe, Neon, and FastAPI allow you to append .md to any documentation URL and get a clean, parseable version. This is relevant for us because agents burn tokens on our BigCommerce navigation chrome and boilerplate. Clean Markdown mirrors of /offerings/ and /all-products/ would cut agent context size 70-80% and dramatically improve extraction accuracy. This is a Level 3 item, but worth noting as we plan.
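The mechanism behind that reduction is easy to demonstrate. This toy Python sketch (the page markup is invented, not our actual theme output) strips markup the way a crude text extractor would, showing how much of a fetched page is navigation chrome rather than content an agent actually needs:

```python
import re

def visible_text(html: str) -> str:
    """Crude text extraction: drop script/style bodies, strip tags,
    collapse whitespace. Roughly what an agent must wade through."""
    html = re.sub(r'<(script|style)[^>]*>.*?</\1>', ' ', html,
                  flags=re.DOTALL | re.IGNORECASE)
    text = re.sub(r'<[^>]+>', ' ', html)
    return re.sub(r'\s+', ' ', text).strip()

# Illustrative page: one sentence of real content wrapped in chrome.
page = """
<html><head><script>window.menu = {};</script></head><body>
<div class="nav">Home Products Services Support Contact Cart Login</div>
<div class="content">INS provides OT network design and commissioning.</div>
<div class="footer">Privacy Terms Sitemap Newsletter Social</div>
</body></html>
"""
print(len(page), len(visible_text(page)))
```

Even after stripping tags, most of the surviving text is navigation and footer boilerplate; a Markdown mirror ships only the content sentence, which is where the context savings come from.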
Can an AI agent, acting on behalf of one of our prospective customers, extract structured, trustworthy data from industrialnetworking.com and use it to recommend INS? Right now the answer is no. Fixing that is the goal of the recommendations in this document.
The Emerging Standards
The web is developing a new layer of standards specifically for agent interaction. None of them are fully mature yet, but the direction is clear. Here's what the dev team needs to know about each one.
llms.txt
The simplest new standard and our highest-priority gap. Just as robots.txt tells search engine crawlers what to index, llms.txt is a Markdown file placed at the domain root that tells AI models where the most important content lives. About 844,000 websites have implemented it so far, including Anthropic, Cloudflare, Stripe, and the state of Maryland. No major AI platform has officially confirmed reading these files yet. But the adoption pattern mirrors early SEO: the companies that move first establish the convention, and the platforms follow. We should create ours this week, pointing agents to /offerings/, /all-products/, and /technologies/private-cellular-networking/ as priority content.
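For concreteness, a first draft following the proposed llmstxt.org layout might look like the following. The one-line descriptions are placeholders for marketing to refine.

```text
# Industrial Networking Solutions (INS)

> Products and OT services for industrial networking: managed Ethernet
> switches, cellular routers and gateways, and network design,
> commissioning, and support for industrial and midstream environments.

## Services
- [OT Services](https://industrialnetworking.com/offerings/): 14 service lines with scope and coverage

## Products
- [Product Catalog](https://industrialnetworking.com/all-products/): product categories with published pricing

## Technologies
- [Private Cellular Networking](https://industrialnetworking.com/technologies/private-cellular-networking/): overview and capabilities
```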
Microsoft NLWeb
Microsoft's Natural Language Web project, announced in 2025 by CTO Kevin Scott as "HTML for the agentic web." Rather than creating new file formats, it builds on existing structured data formats like RSS, Atom, and JSON feeds, combining them with Model Context Protocol servers to create natural language interfaces for existing content. Early adopters include Tripadvisor, Shopify, and O'Reilly Media. Relevant for us because it doesn't require rebuilding content infrastructure. It builds on what already exists.
Model Context Protocol (MCP)
MCP is the open standard from Anthropic for connecting AI assistants to external data systems. Within months of its November 2024 launch, it was adopted by OpenAI, Google DeepMind, and Microsoft. For websites, MCP means creating structured interfaces that allow agents to query data directly rather than scraping pages. Scraping is passive and brittle. An MCP connection is active, structured, and reliable. This is a longer-term consideration for us, but worth understanding as we plan the roadmap.
agents.txt
A proposed standard that declares a site's policies and interfaces for AI agents: available API endpoints, authentication methods, rate limits, and contact information. A machine-readable welcome mat that tells agents not just what's on the site but how to interact with it properly. Low effort to implement once we have the other pieces in place.
Every one of these standards follows the same logic: make explicit for machines what was previously implicit for humans. The information on our site is already there. The structure is what's missing. This is exactly what happened with SEO in the early 2000s. The companies that structured their content first gained advantages that compounded for years. We have the same window now.
What Other Companies Are Seeing
These benchmarks help frame the opportunity we're leaving on the table.
Vercel (developer platform) watched ChatGPT referrals grow from less than 1% of signups to 10% of all new signups in six months. Their approach: make documentation static and easily crawlable, structure content for semantic clarity, and track which prompts triggered brand mentions. Ten percent of new customers from a channel that didn't exist eighteen months ago.
Tally (eight-person form builder) saw ChatGPT become their number one referral source, driving over 2,000 new signups per week. They grew from $2M to $3M ARR in four months, five months ahead of schedule. Their advantage wasn't technical sophistication. It was years of authentic content in forums, Reddit, and community blogs, creating the corpus that AI models learned from and now cite. This is relevant for us because our Imagine blog content and technical resources are building the same kind of corpus.
Retail broadly has seen a 520% increase in AI-referred traffic between 2024 and 2025. Visitors arriving through AI responses are 4.4 times more qualified than traditional search visitors. That qualification rate is the key metric. Agent-referred visitors have already narrowed their intent before arriving.
The GEO (Generative Engine Optimization) services market is projected to grow from $1 billion in 2025 to $17 billion by 2034, a compound annual growth rate of roughly 37%. An entire industry is forming around agent-readable content. We don't need to hire a GEO agency. We need to implement the foundational work ourselves.
Our Specific Gaps
Gartner found that 74% of procurement leaders lack AI-ready data. Our customers are building agent-driven procurement workflows right now. Here are the gaps we identified in our own site that put us at a disadvantage when agents evaluate vendors:
Identity and Categorization
Our homepage describes us as "Your Partner for Seamless Connectivity." An agent needs structured data: Schema.org Organization markup with industry classifications, certifications, geographic regions served, and specific capabilities. Our BigCommerce platform does generate JSON-LD schema on product and category pages, and our Organization schema exists on the homepage. But there are two problems. First, all of this schema is injected via client-side JavaScript, which means AI agents performing static fetches never see it. Second, our Organization schema is bare-bones, providing so little context that agents have no reason to crawl deeper into product pages where the better data lives. The fix is two-part: server-side render the JSON-LD so it appears in static HTML, and enrich the Organization schema with industry classifications, certifications, geographic coverage, and specific capabilities.
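For reference, an enriched Organization block might look like the sketch below. The specific knowsAbout and areaServed values are illustrative placeholders to be confirmed with marketing, not audited facts.

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Industrial Networking Solutions",
  "url": "https://industrialnetworking.com",
  "description": "Distributor and OT services provider: managed Ethernet switches, cellular routers and gateways, serial device servers, protocol converters, and 14 OT service lines.",
  "knowsAbout": [
    "Industrial Ethernet",
    "EtherNet/IP",
    "Private Cellular Networking",
    "Parallel Redundancy Protocol",
    "Cisco",
    "Cradlepoint",
    "Ericsson"
  ],
  "areaServed": "US"
}
```

Every field here gives an agent a reason to crawl deeper: it signals that the product and service pages underneath contain matching structured data.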
Service Descriptions
Our /offerings/ page lists 14 OT service lines, but the descriptions are marketing-oriented. Agents need structured data for each service: scope, geographic availability, partner technologies involved, and engagement models. The "Request Quote" endpoint is a dead end for agents. We don't need to publish exact pricing, but structured service tier descriptions would keep us in the consideration set.
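As a sketch of what structured service data could look like for one of the 14 lines (the service name, audience, and engagement description are illustrative placeholders, not confirmed scope):

```json
{
  "@context": "https://schema.org",
  "@type": "Service",
  "serviceType": "OT Network Design and Architecture",
  "provider": {
    "@type": "Organization",
    "name": "Industrial Networking Solutions"
  },
  "areaServed": {
    "@type": "Country",
    "name": "United States"
  },
  "audience": {
    "@type": "BusinessAudience",
    "name": "Industrial, enterprise, and midstream oil and gas operators"
  },
  "offers": {
    "@type": "Offer",
    "description": "Fixed-scope assessment and design engagement; quote on request"
  }
}
```

Note the Offer block: it keeps us in the consideration set without publishing exact pricing, which is precisely the balance described above.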
Quantified Case Study Outcomes
Our case studies (Cisco midstream project, restaurant connectivity, industrial inventory management) are well-written narratives. But agents can't easily extract metrics, timelines, scope, or technologies involved. Adding structured schema markup to these pages with quantified outcomes would give agents the evidence they need to rank our credibility against competitors.
Example: The Cisco Midstream Case Study
To make this concrete, here is what the gap looks like using our own Cisco midstream case study as an example.
What we have now: A well-written narrative titled "How INS and Cisco Built a Zero-Downtime Network for Midstream Operations." It describes a critical network outage at a customer's midstream facilities across Oklahoma, Texas, and Louisiana. It explains how INS and Cisco deployed a Parallel Redundancy Protocol (PRP) architecture. The writing is strong. A human reader gets the story. But an AI agent trying to evaluate INS as a vendor for a similar project can't extract structured facts from prose paragraphs.
What an agent needs to see (as JSON-LD structured data embedded in the page):
- Project type: OT Network Modernization
- Industry: Oil and Gas, Midstream
- Geography: Oklahoma, Texas, Louisiana (multi-site)
- Partner technology: Cisco
- Problem: Critical network outage at 24/7 midstream facilities; legacy network stretched over decades
- Solution: Full-field network assessment, Parallel Redundancy Protocol (PRP) architecture, legacy protocol integration, operator documentation
- Result: Zero-millisecond failover, continuous data flow from field sites to control rooms, post-project support roadmap
- Services used: Network Assessment, Network Design and Architecture, Configuration and Installation, Commissioning and Validation, Post-Project Support
- Protocols: PRP, legacy industrial protocols
- Outcome metric: Zero downtime achieved
The narrative stays on the page for human readers. The structured data goes in the page's <head> as JSON-LD. Both audiences are served. The agent can now compare this project against similar deployments from competitors and rank INS based on verifiable facts, not marketing tone. This is the pattern we should apply to all three case studies.
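Rendered as JSON-LD, those fields might look like the sketch below. Schema.org has no dedicated case-study type, so this uses Article with a nested Service; the exact vocabulary is a choice to settle with Intuit Solutions.

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How INS and Cisco Built a Zero-Downtime Network for Midstream Operations",
  "about": {
    "@type": "Service",
    "serviceType": "OT Network Modernization",
    "provider": {
      "@type": "Organization",
      "name": "Industrial Networking Solutions"
    },
    "areaServed": ["Oklahoma", "Texas", "Louisiana"]
  },
  "keywords": "midstream oil and gas, Parallel Redundancy Protocol, PRP, Cisco, zero downtime, zero-millisecond failover"
}
```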
None of these gaps are technical failures. They're the natural result of building a site for human visitors over the past several years. The good news: closing them doesn't require a redesign. It requires adding a structured layer on top of what already exists.
This is not about abandoning our current site or rebuilding for robots. It is about serving two audiences simultaneously. Human visitors see the same well-designed pages. Agents read the structured data layer underneath. Same content, two interfaces. The investment is modest. The cost of not doing it is invisibility in a channel that Gartner says will intermediate 90% of B2B purchasing by 2028.
Why This Is Not Just SEO 2.0
It would be easy to bucket this with our SEO efforts. It shares some DNA, but the implications are different in a way leadership should understand.
SEO optimized for a world where humans made decisions after visiting our site. The goal was to get the click. Agent readability operates in a world where the decision may be substantially made before any human visits our site. The AI agent has already evaluated our specifications, compared them against alternatives, checked our certifications, and formed a recommendation. By the time a human arrives, they're confirming a shortlist, not starting a search.
Zero-click searches already account for 65-70% of all Google queries. AI Overviews and agent-generated summaries are accelerating this trend. The question isn't whether people will stop visiting our website. They won't. The question is whether our site will even be part of the conversation that happens before a buyer decides to visit.
In B2B specifically, buyers typically evaluate six to ten vendors before making contact. If AI agents are doing that evaluation, the vendor whose data is most structured, most accessible, and most verifiable gets recommended. The vendor with the better marketing but no structured data doesn't make the list.
The Broader Benefits
This work isn't only for agents. Every fix we make for agent readability also benefits other channels:
- Structured product data improves search engine results, accessibility for screen readers, and internal data consistency for our own sales team
- Clean semantic HTML improves accessibility scores, page performance, and maintainability for the dev team
- Markdown mirrors give our sales team, partners, and new employees clean reference documentation alongside the agent benefit
- Quantified case studies with structured data serve our proposal process and marketing team, not just agents
The Tally example is instructive for us specifically. They're an eight-person company that became the top ChatGPT referral in their category, not because of marketing spend, but because of content quality and structure. We have significant content assets: 25 product categories with real pricing, 14 service lines, multiple case studies, and the Imagine blog building thought leadership corpus. The playing field in agent-intermediated search favors data quality over advertising budgets. That's an advantage for a mid-market company like INS.
Every specification we make machine-readable, every service we describe in structured data, every outcome we quantify in schema markup becomes a compounding asset. It serves AI agents today, search engines simultaneously, accessibility tools always, and whatever comes after agents tomorrow. This is not a separate initiative. It is the work of building a durable digital presence.
Audit Summary and Recommended Actions
Current State (Updated April 2, 2026)
What's working well: Real product prices visible on /all-products/. Detailed service list on /offerings/. Clean text content. No paywalls. JSON-LD schema markup exists on product pages, category pages, and the homepage (confirmed by Intuit Solutions). Product schema is mostly correct. The content quality is genuinely strong.
What needs to change:
- JSON-LD schema is invisible to AI agents. All structured data is injected via client-side JavaScript. AI agents that perform static page fetches (Claude, GPT, Perplexity, autonomous procurement agents) see zero schema markup. Only Google's crawler, which renders JavaScript, can access it. Fix: Server-side render JSON-LD so it appears in the static HTML source. This is the single highest-impact change. Requires coordination with Intuit Solutions.
- Organization schema is too thin. Even when rendered, the Organization schema is so limited that AI agents have no reason to crawl past the top-level page to reach product data. Fix: Enrich Organization schema with industry classifications, certifications, geographic coverage, partner technologies, and service capabilities. Coordinate with Four Columns.
- Product descriptions use prose instead of structured specs. Product schema uses narrative descriptions rather than segmented specification lists. Agents parsing product data get marketing copy instead of structured attributes. Fix: Restructure product descriptions in schema to use specification lists where possible. Coordinate with Intuit Solutions.
- robots.txt rate-limits every major AI crawler. 38 named crawlers have a Crawl-delay of 10 (one page per 10 seconds). No AI crawlers are blocked (no Disallow directives), only delayed. This is a BigCommerce platform default that cannot be modified through the admin panel. Fix: Escalate to BigCommerce support to explore exceptions. This is a platform constraint, not a configuration toggle.
- No meta description on the homepage. No <meta name="description"> tag, no canonical URL, and no OpenGraph tags on any pages checked. AI agents relying on meta tags for site context get nothing. Social media link previews render without rich content. Fix: Add meta description, canonical URLs, and OpenGraph tags to all page templates. Straightforward theme-level change.
- Missing H1 and broken heading hierarchy. The homepage has no <h1> tag. Headings skip from h2 to h5. AI agents use h1 as the primary signal for page topic. This is also a WCAG accessibility gap. Fix: Add an H1 to the homepage and correct the heading hierarchy across templates.
- JavaScript-dependent content rendering. Product listings are rendered client-side via the Klevu search library. A static crawler sees an essentially empty product grid. This is architectural to the BigCommerce + Klevu integration. Mitigation: Server-side rendering of JSON-LD schema ensures agents get structured product data even if they cannot render the visual grid.
- Sitemap not referenced in robots.txt. Our sitemap exists at /xmlsitemap.php (BigCommerce convention), not the standard /sitemap.xml. The robots.txt file has no Sitemap: directive pointing to it. Intuit Solutions has agreed to add this. Status: In progress.
- No llms.txt file. No guidance for AI systems about where our most important content lives or how to cite us. BigCommerce has no native support, but Intuit Solutions has implemented workarounds for other clients. Fix: Implement llms.txt via BigCommerce workaround. Coordinate with Intuit Solutions.
- Minimal semantic HTML. The site uses partial semantic markup with custom data attributes (data-container-role) instead of native HTML5 elements. No <main> or <nav> elements detected. Browser-controlling agents like OpenClaw rely on the accessibility tree to navigate. Fix: Add proper main, nav, and section elements via the theme editor.
- DNS on Dyn.com with no CDN-level agent controls. Migrating to Cloudflare would unlock native AI Crawl Control tools: auto Markdown serving to agents, per-bot policies, crawler traffic dashboards, and DDoS/WAF protection as a bonus. Cloudflare's AI gateway also provides visibility into which agents are accessing the site and how often. Recommended as part of the agent-readiness initiative.
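To make the meta-tag, heading, and semantic-HTML fixes concrete, here is a sketch of a corrected homepage head and top-level structure. Titles, descriptions, and the H1 text are placeholders for marketing to finalize.

```html
<head>
  <title>Industrial Networking Solutions | OT Networking Products and Services</title>
  <meta name="description" content="Managed Ethernet switches, cellular routers, and OT network design, commissioning, and support services.">
  <link rel="canonical" href="https://industrialnetworking.com/">
  <meta property="og:title" content="Industrial Networking Solutions">
  <meta property="og:description" content="OT networking products and end-to-end services.">
  <meta property="og:url" content="https://industrialnetworking.com/">
  <meta property="og:type" content="website">
  <!-- JSON-LD rendered server-side so static fetches can see it -->
  <script type="application/ld+json">
  {"@context": "https://schema.org", "@type": "Organization", "name": "Industrial Networking Solutions"}
  </script>
</head>
<body>
  <nav><!-- primary navigation, exposed to the accessibility tree --></nav>
  <main>
    <h1>Industrial Networking Products and OT Services</h1>
    <!-- h2 sections follow in order; no skipped heading levels -->
  </main>
</body>
```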
Corrections from Original Audit (March 17, 2026)
The original version of this briefing contained errors that have been corrected above:
- JSON-LD: Originally reported as "zero JSON-LD structured data." This was incorrect. Schema markup exists but is injected via client-side JavaScript, making it invisible to AI agents performing static fetches. The original audit's crawler could not execute JavaScript and reported the schema as missing.
- robots.txt fix: Originally described as "10 minutes in BigCommerce." This was incorrect. BigCommerce automatically sets the crawl-delay and does not allow merchants to modify it through the platform admin. A workaround requires direct escalation to BigCommerce support.
These corrections were identified through feedback from Chad Allison (INS Marketing), Joseph Gastler (Four Columns), and Christopher Lazzaro (Intuit Solutions). The core thesis of the briefing - that our site has significant agent-readiness gaps despite strong content - remains valid.
Recommendation
Our customers' procurement teams will increasingly deploy AI agents over the coming months and years. Our content is already strong. Our schema markup exists but is hidden from the agents that matter most. The highest-impact fix is server-side rendering of JSON-LD, which makes our structured data visible to every AI agent without requiring JavaScript execution. Combined with enriching our Organization schema and adding basic meta tags, these changes transform our site from invisible to discoverable in the agent-driven procurement landscape.