March 20, 2026 edition

nao-labs

The analytics agent built for context engineering.

nao Labs Wants Every Company to Talk to Its Own Data Without Calling an Engineer

The Macro: The Analytics Bottleneck Nobody Has Fixed

Every company I have talked to in the last two years has the same problem. They have data. They have a warehouse. They have dashboards. And nobody outside the data team can actually answer a question without filing a ticket and waiting three days.

The analytics bottleneck is not a technology problem. The technology is fine. BigQuery, Snowflake, Databricks, all of these can run complex queries in seconds. The bottleneck is that the people who have the business questions do not know SQL, and the people who know SQL do not have the business context. So the data team becomes a translation service, converting “how did our retention change after we launched the new pricing page” into a 47-line query, running it, putting it in a slide, and sending it back. Then the business person asks a follow-up, and the cycle starts again.

This problem has created an entire category of BI tools. Looker, Tableau, Mode, Metabase, Hex. All of them try to make data accessible to non-technical users. All of them work reasonably well until the question goes beyond what the pre-built dashboard covers. And the questions that matter most are almost always the ones the dashboard was not built for.

The natural-language-to-SQL wave was supposed to fix this. A dozen startups launched in 2024 and 2025 promising to let business users ask questions in plain English and get answers from their data warehouse. The demos were impressive. The production deployments were less impressive. The core problem is that generating correct SQL from a natural language question requires understanding the schema, the naming conventions, the business logic encoded in the data model, and the edge cases that make certain queries produce misleading results. Without that context, you get queries that look right and return wrong answers.
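The failure mode is easy to demonstrate. Below is a minimal, hypothetical sketch (the schema, table, and `is_test` convention are invented for illustration, not taken from any real deployment): a syntactically valid query that “looks right” but returns a wrong answer, because the convention that test rows must be excluded lives in the team’s heads and docs, not in the schema.

```python
import sqlite3

# Hypothetical fixture: the schema alone does not say that internal
# test orders must be excluded from revenue questions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER, amount REAL, is_test INTEGER);
INSERT INTO orders VALUES (1, 100.0, 0), (2, 50.0, 0), (3, 9999.0, 1);
""")

# Naive generation: valid SQL, plausible-looking, wrong answer,
# because it includes the test order.
naive = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]

# Context-aware generation: encodes the business rule (is_test = 0)
# that an agent can only learn from dbt models, docs, or code.
aware = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE is_test = 0"
).fetchone()[0]

print(naive)  # 10149.0 -- inflated by the test order
print(aware)  # 150.0   -- the number the business actually wants
```

Both queries execute without error; only one is correct, and nothing in the schema distinguishes them. That is the gap context engineering is meant to close.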

Competitors like Athena Intelligence, Seek AI, and Text2SQL.ai are all working on variations of this problem. Databricks has AI/BI built into its platform. Snowflake shipped Cortex Analyst. The big players are moving into the space, which validates the market but also means any startup here needs a credible differentiation story.

I think the differentiation comes down to context engineering. Not how good your SQL generation is, but how well your system understands the specific data environment it is working in. That is where nao Labs is focusing.

The Micro: Context Engineering as the Whole Product

nao is a two-person team out of YC’s Spring 2025 batch, founded by Claire Gouze and Christophe Blefari. Claire was previously head of data at Sunday, a restaurant tech company that raised a $100M Series A. Before that she was at BCG Gamma, BCG’s data-science consulting arm, which means she has seen enterprise data problems from the consulting side. Christophe has over ten years of experience building data tools and platforms, and he maintains a data engineering blog with over 20,000 followers. This is a team that has lived inside the analytics stack for their entire careers.

The product is an open-source framework for building, evaluating, and deploying analytics agents. The key concept is “context engineering,” which is their term for the process of teaching an AI agent everything it needs to know about your specific data environment before it starts answering questions.

The workflow starts with nao-core, a CLI tool that builds agent context from multiple sources. It connects to your data warehouse (BigQuery, Snowflake, Postgres, Databricks, DuckDB, Redshift, MotherDuck). It syncs with your data transformation layer (dbt, Looker, Airflow). It pulls in documentation from Notion, Confluence, Google Drive, and Linear. It reads your GitHub repositories. All of this gets compiled into a context layer that the analytics agent uses when generating and executing queries.
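To make the compilation idea concrete, here is a hypothetical sketch of what building a context layer from heterogeneous sources could look like. This is not nao-core’s actual API (the class names, `fetch` callables, and output format are all invented for illustration); it only shows the shape of the idea: gather fragments from each source and compile them into one context document the agent consults before generating SQL.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical types -- not nao-core's real interface.
@dataclass
class ContextSource:
    name: str               # e.g. "orders schema", "revenue model"
    kind: str               # e.g. "warehouse", "dbt", "docs", "code"
    fetch: Callable[[], str]  # returns a text fragment for this source

def build_context(sources: List[ContextSource]) -> str:
    """Compile source fragments into one context layer that can be
    prepended to every query-generation prompt."""
    parts = [f"## {s.kind}: {s.name}\n{s.fetch()}" for s in sources]
    return "\n\n".join(parts)

sources = [
    ContextSource("orders schema", "warehouse",
                  lambda: "orders(id, amount, is_test)"),
    ContextSource("revenue model", "dbt",
                  lambda: "revenue excludes rows where is_test = 1"),
]
context = build_context(sources)
print(context.splitlines()[0])  # ## warehouse: orders schema
```

The point of the sketch is the direction of flow: schema, transformation logic, and documentation all funnel into one artifact, so the agent’s prompt carries the business rules the schema alone omits.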

That approach differs from competitors who just point an LLM at a database schema and hope for the best. By incorporating your dbt models, your Looker definitions, your internal documentation, and your code, the agent starts with a much richer understanding of what the data means, not just what the columns are named.

The testing framework is the other piece I find genuinely interesting. You can write unit tests for question-to-SQL translation, defining the expected query for a given question and tracking reliability metrics over time. This sounds unglamorous, but it solves one of the biggest objections enterprise buyers have to AI-generated analytics: “how do I know the answers are right?” With nao, you can build a test suite that validates accuracy against known-good queries and catches regressions.
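The test-driven idea can be sketched in a few lines. This is a hypothetical illustration, not nao’s actual framework (the fixture, helper, and queries are invented): run the agent’s generated SQL and the known-good SQL against fixture data and compare results, so a generation that drops a business-logic filter is caught as a regression even when the SQL strings differ.

```python
import sqlite3

# Hypothetical harness: compare query *results*, not SQL strings, so
# semantically equivalent generations still pass.
def results_match(conn, generated_sql, expected_sql):
    return (conn.execute(generated_sql).fetchall()
            == conn.execute(expected_sql).fetchall())

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER, amount REAL, is_test INTEGER);
INSERT INTO orders VALUES (1, 100.0, 0), (2, 50.0, 0), (3, 9999.0, 1);
""")

# Known-good query for "what was total revenue?"
expected = "SELECT SUM(amount) FROM orders WHERE is_test = 0"

# A correct generation passes even with different syntax; one that
# forgets the test-order filter fails.
good = "SELECT SUM(amount) FROM orders WHERE is_test <> 1"
bad = "SELECT SUM(amount) FROM orders"

print(results_match(conn, good, expected))  # True
print(results_match(conn, bad, expected))   # False
```

Comparing results rather than query text is the design choice that makes such a suite maintainable: it tolerates cosmetic differences in generated SQL while still catching the semantic regressions that actually mislead a business user.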

Deployment options include a web UI, Slack, Microsoft Teams, and self-hosted installations. The system supports Claude, Gemini, GPT, Mistral, Bedrock, and Ollama as LLM backends. SOC 2 certification is already in place.

Pricing is simple: free for the open-source core, $30 per seat per month for nao Pro.

The Verdict

I think nao is making the right architectural bet. The analytics agent products that will win are not the ones with the best SQL generation. They are the ones that understand the data environment deeply enough to generate correct SQL consistently. Context engineering is the right abstraction.

What I would want to know at 30 days: how long does the context building process take for a real enterprise data stack? If it is a few hours of CLI work, that is a reasonable onboarding cost. If it takes weeks of manual curation, adoption will stall.

At 60 days: are companies using the testing framework? The test-driven approach to analytics agents is genuinely novel, and it is the kind of feature that could become a selling point for compliance-sensitive industries. But it only matters if people actually write the tests.

At 90 days: retention at $30 per seat. That is cheap enough for a team to try it without procurement approval, but it also means nao needs a lot of seats to build real revenue. The open-source to paid conversion rate will tell the story.

The founding team is exactly who you would want building this. Deep data engineering experience on both sides, enterprise exposure from the consulting world, and an open-source-first approach that gives them distribution advantages over closed-source competitors. The $30 price point is aggressive in a good way. It removes the objection and forces the product to sell itself on actual utility.

If the context engineering approach produces meaningfully better query accuracy than competitors who skip that step, nao has a real product. That is the whole bet, and I think it is a good one.