February 19, 2026 edition

relta

AI data analyst that's always right

Relta Thinks AI Data Analysis Has a Trust Problem. Formal Verification Is Its Fix.

AI · Analytics · Data Engineering

The Macro: AI Analytics Has a Confidence Problem

I want you to imagine a scenario. Your CEO asks a question about quarterly revenue by product line. You type the question into an AI analytics tool. It generates SQL, runs the query, and gives you an answer. The answer looks reasonable. The chart looks clean. You put it in a slide deck and present it to the board.

The SQL was wrong. It joined two tables incorrectly and double-counted a subset of transactions. Nobody caught it because the output looked plausible. This is not a hypothetical. This happens constantly.

The text-to-SQL space has exploded over the past two years. Tools like Dataherald, AI2SQL, and dozens of others promise to let non-technical users query databases using natural language. The underlying large language models have gotten remarkably good at generating syntactically correct SQL. The problem is that syntactically correct and semantically correct are different things. A query can run without errors and still return the wrong answer because it misunderstood the data model, used the wrong aggregation, or filtered on the wrong column.
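The gap between "runs" and "is right" is easy to demonstrate. The sketch below uses hypothetical `orders` and `shipments` tables: a join that fans out silently double-counts revenue, exactly the failure mode described above, while executing without a single error.

```python
# Syntactically valid, semantically wrong: a fan-out join that
# double-counts revenue. Tables and data are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL);
CREATE TABLE shipments (id INTEGER PRIMARY KEY, order_id INTEGER);
INSERT INTO orders VALUES (1, 100.0), (2, 50.0);
-- Order 1 ships in two parcels.
INSERT INTO shipments VALUES (10, 1), (11, 1), (12, 2);
""")

# Correct total revenue: sum over orders alone.
correct = cur.execute("SELECT SUM(amount) FROM orders").fetchone()[0]

# Plausible-looking but wrong: the join duplicates order 1's row
# (one copy per shipment), so its amount is counted twice.
wrong = cur.execute("""
    SELECT SUM(o.amount)
    FROM orders o JOIN shipments s ON s.order_id = o.id
""").fetchone()[0]

print(correct, wrong)  # 150.0 250.0
```

Both queries execute cleanly; only one answers the question that was asked. No syntax check can tell them apart.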

The analytics and business intelligence market is massive. Grand View Research pegs it at over $30 billion, and AI-powered analytics is the fastest-growing segment within it. Every BI tool, from Looker to Tableau to Power BI, is bolting on natural language querying. The demand is obvious: business users want to ask questions in English and get answers without waiting for an analyst to write a query. The supply of tools offering this is enormous and growing.

But trust is the bottleneck. If you can’t trust the answer, the tool is worse than useless: it delivers no reliable value while creating false confidence. At least when you had to wait for an analyst, the analyst would sanity-check the results. An AI tool that confidently gives you wrong numbers is actively dangerous for decision-making.

The semantic layer approach has been one response to this problem. Companies like Cube and dbt have built frameworks where you define business logic once (what “revenue” means, how “active users” are counted, which tables represent what) and then queries are generated against that layer of definitions rather than raw tables. This helps, but it doesn’t eliminate errors. The AI can still misinterpret the semantic layer or generate queries that technically use the right definitions but combine them incorrectly.
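The semantic-layer idea can be sketched in a few lines: business logic lives in one declarative place, and queries are compiled from those definitions rather than written free-form. The metric names, tables, and filters below are illustrative, not any vendor's actual schema.

```python
# Minimal sketch of a semantic layer: metrics defined once as data,
# canonical SQL generated from the definitions. All names are hypothetical.
METRICS = {
    "revenue": {
        "table": "orders",
        "expression": "SUM(amount)",
        "filters": ["status = 'completed'"],
    },
    "active_users": {
        "table": "events",
        "expression": "COUNT(DISTINCT user_id)",
        "filters": ["event_date >= DATE('now', '-30 day')"],
    },
}

def compile_metric(name: str) -> str:
    """Build the one canonical query for a metric from its definition."""
    m = METRICS[name]
    where = f" WHERE {' AND '.join(m['filters'])}" if m["filters"] else ""
    return f"SELECT {m['expression']} FROM {m['table']}{where}"

print(compile_metric("revenue"))
# SELECT SUM(amount) FROM orders WHERE status = 'completed'
```

The value is consistency: every consumer of "revenue" gets the same aggregation and the same filters. The residual risk, as the paragraph above notes, is in how correctly definitions get combined, not in the definitions themselves.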

The Micro: Verified SQL or Nothing

Relta’s approach is different from anything else I’ve seen in this space. Instead of generating SQL and hoping it’s correct, Relta formally verifies the generated queries against a semantic layer. This isn’t “we run the query and check if the output looks reasonable.” This is mathematical verification that the query logic matches the intended semantics.

The company is based in San Francisco and went through Y Combinator. On GitHub, the team maintains a project called github-assistant that lets users explore repositories with natural language, built in TypeScript. That project has about 98 stars, modest but real, and gives some insight into the team’s approach to natural language interfaces.

The core product works by building a semantic layer that defines your data model, your business rules, and the relationships between entities. When a user asks a question in natural language, Relta generates SQL and then runs formal verification against the semantic layer to confirm the query will return the correct answer. If the verification fails, the system doesn’t just return results with a disclaimer. It goes back and regenerates the query.
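The control flow described above can be sketched generically. Relta's internals are not public, so `generate_sql` and `verify` here are stand-ins; the point is only the loop shape: a failed check triggers regeneration with feedback, and unverified SQL is never returned.

```python
# Sketch of a generate/verify/regenerate loop. The callables are
# placeholders for an LLM generator and a semantic-layer verifier;
# this illustrates the control flow, not Relta's actual implementation.
from typing import Callable

def answer(question: str,
           generate_sql: Callable[[str, str], str],
           verify: Callable[[str], bool],
           max_attempts: int = 3) -> str:
    feedback = ""
    for _ in range(max_attempts):
        sql = generate_sql(question, feedback)
        if verify(sql):
            return sql  # only verified SQL ever reaches the user
        # Feed the failure back instead of returning results with a disclaimer.
        feedback = f"previous attempt failed verification: {sql}"
    raise RuntimeError("could not produce a verified query")
```

With a toy generator whose first attempt fails verification, the loop retries and returns the second, verified query.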

This is a meaningfully different approach from competitors. Most AI analytics tools use a “generate and validate” pattern where validation means checking syntax and maybe running the query against a sample dataset. Relta’s verification is structural. It’s checking whether the query logic itself is sound, not just whether it runs without throwing an error.
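The contrast between the two validation styles is worth making concrete. Below, a toy "does it execute" check passes a query that a structural check, comparing the query against a (hypothetical) metric definition, correctly rejects. Real structural verification would operate on a parsed query, not string matching; this is only the shape of the distinction.

```python
# Toy contrast: syntax-level validation vs. a structural check against a
# semantic definition. ALLOWED is a hypothetical metric registry, and the
# substring test is a stand-in for real query analysis.
import sqlite3

ALLOWED = {"revenue": "SUM(amount)"}

def runs_without_error(sql: str) -> bool:
    """Syntax-level check: does the query merely execute?"""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (amount REAL)")
    try:
        conn.execute(sql)
        return True
    except sqlite3.Error:
        return False

def structurally_sound(sql: str, metric: str) -> bool:
    """Structural check: does the query use the metric's declared aggregation?"""
    return ALLOWED[metric].lower() in sql.lower()

bad = "SELECT AVG(amount) FROM orders"  # executes fine, wrong aggregation
print(runs_without_error(bad), structurally_sound(bad, "revenue"))  # True False
```

The wrong query sails through the first check and fails the second, which is the whole argument for validating structure rather than execution.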

The “always right” claim in their tagline is bold. I’m skeptical of any absolute claim in software, but the formal verification approach at least provides a mechanism for backing it up. Traditional AI analytics tools are probabilistic. They give you the most likely correct query. Relta is attempting to provide provably correct queries. That distinction matters enormously when the output is going into financial reports, regulatory filings, or strategic decisions.

The semantic layer requirement means there’s setup work involved. You don’t just point Relta at your database and start asking questions. You need to define your data model in Relta’s semantic layer first. That’s friction, and for small teams that just want quick answers, it might be too much. But for organizations where data accuracy is non-negotiable, the upfront investment in building a proper semantic layer pays for itself the first time it catches a query error that would have otherwise gone undetected.

The Verdict

Relta is tackling the right problem. The AI analytics space is full of tools that optimize for ease of use at the expense of accuracy, and for casual data exploration that’s fine. For anything where the numbers actually matter, it’s not.

The formal verification approach is genuinely novel in this category. I haven’t seen another text-to-SQL product that provides mathematical guarantees about query correctness. If the technology works as described, it positions Relta well for enterprise customers in finance, healthcare, and other regulated industries where wrong answers carry real consequences.

The challenges are predictable. Semantic layer setup is a barrier to adoption. The verification step presumably adds latency compared to tools that just generate and return. And the total addressable market of organizations that both need AI-powered analytics and care enough about accuracy to invest in formal verification is smaller than the “everyone who wants to ask questions of their data” market that competitors target.

What I’d want to see at 90 days: real-world accuracy comparisons against competing tools, clear documentation on how long semantic layer setup takes for a typical mid-sized data model, and honest benchmarks on query latency with and without the verification step. If Relta can show that it catches errors that other tools miss, in scenarios that matter, the product sells itself.

The tagline “always right” is the kind of claim that will either be vindicated or become a punchline. My bet is on the former, but only if the team keeps the scope tight and resists the temptation to expand into use cases where formal verification becomes impractical.