The Macro: Issue Trackers Are Where Productivity Goes to Die
I have a confession. I hate issue trackers. Not the concept, but the reality. Every engineering team I’ve worked with starts a new project with clean labels, organized boards, and good intentions. Six months later, the backlog has 400 items, half of them are mislabeled, a quarter are duplicates, and nobody can find anything without searching for 10 minutes.
The problem isn’t the tools. Jira is powerful. Linear is fast. GitHub Issues is simple and effective. The problem is that issue management is a tax on engineering time. Someone has to triage every new issue. Someone has to label it correctly. Someone has to assign it to the right person. Someone has to decide if it’s a duplicate. These are small tasks individually, but they add up to hours every week, and they’re the kind of work that nobody wants to do but everybody notices when it’s not done.
This is why the “AI for project management” category is heating up. Linear added AI features. Jira is building them. GitHub has Copilot touching everything. But most of these implementations feel like afterthoughts. They’re adding AI to existing products rather than rethinking the workflow around what AI can actually do well.
The interesting companies in this space are the ones asking: what if the AI didn’t just assist with issue management, but actually ran it? What if you wrote rules in plain English and the system executed them automatically, handling the triage and labeling and assignment without a human in the loop?
The Micro: Write Rules, Let the Bot Handle It
Maige is an open-source platform that automates GitHub issue and PR management through natural language rules. You connect your repository, write instructions like “always assign UI-related issues to @designlead” or “label .env PRs as ‘needs-approval’ unless opened by @maintainer,” and Maige executes them automatically.
The product is built by Rubric Labs and it’s already running on 4,300+ repositories. The user list includes some names I recognize: Documenso, Nuxt, Highlight.io, Cal.com, Trigger.dev, and Precedent. That’s not a vanity list. Those are active, well-maintained open-source projects with real issue volume.
The feature set covers the main pain points. Auto-labeling classifies incoming issues and PRs. Auto-assignment routes them to the right person. Auto-commenting responds to common patterns. Code review catches problems before they merge. There’s also a code sandbox for testing automations before they go live.
Setup is straightforward. Connect your repo, which creates a webhook and generates codebase embeddings. Write your rules. Watch the dashboard to see runs and adjust as needed. The pricing is $30/month for the standard plan, with 30 free issues to start. No credit card required for the trial.
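The core loop is easy to picture: a webhook event arrives, the issue text is matched against your rules, and the matching actions fire. Here is a deliberately simplified sketch of that idea — not Maige’s actual code (which uses natural language rules and codebase embeddings, not regexes); the rule format, `RULES` table, and `triage` function are all illustrative assumptions:

```python
import re

# Hypothetical rules, reduced to (pattern, action) pairs for illustration.
# In Maige you would write these as plain-English sentences instead.
RULES = [
    (re.compile(r"\b(button|css|layout|ui)\b", re.I), ("assign", "designlead")),
    (re.compile(r"\.env\b"), ("label", "needs-approval")),
]

def triage(issue_title: str, issue_body: str) -> list[tuple[str, str]]:
    """Return every (action, target) pair whose pattern matches the issue."""
    text = f"{issue_title}\n{issue_body}"
    return [action for pattern, action in RULES if pattern.search(text)]

# e.g. triage("Button misaligned on mobile", "") matches the UI rule
```

The point of the sketch is what it leaves out: every hand-rolled version of this needs someone to maintain the rule table, which is exactly the configuration burden that plain-English rules are meant to remove.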
What I like about Maige’s approach is the natural language rules. Instead of configuring a complex automation builder with conditions and branches and dropdown menus, you just write what you want in English. “Review all incoming PRs per CONTRIBUTING.md” is a rule that would take 20 minutes to configure in a traditional automation tool. In Maige, it’s one sentence.
The competition is interesting. GitHub Actions can do some of this with custom workflows, but the setup cost is high and you need to maintain YAML files. Linear’s built-in automation is smooth but limited to Linear’s own ecosystem. Jira’s automation is powerful but requires Jira expertise to configure. Sweep AI and CodeRabbit focus more on code review specifically. Maige is trying to be the general-purpose automation layer for everything that happens in a GitHub repository.
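To make the setup-cost comparison concrete, here is roughly what just one step — applying a label — looks like when you script it yourself against GitHub’s REST API (the endpoint is GitHub’s documented `POST /repos/{owner}/{repo}/issues/{number}/labels`; the function names are my own, and this is before any classification logic, retries, or workflow YAML):

```python
import json
import urllib.request

def build_label_request(owner: str, repo: str, number: int,
                        labels: list[str]) -> tuple[str, str]:
    """Build the URL and JSON body for GitHub's add-labels endpoint."""
    url = f"https://api.github.com/repos/{owner}/{repo}/issues/{number}/labels"
    return url, json.dumps({"labels": labels})

def add_labels(owner: str, repo: str, number: int,
               labels: list[str], token: str) -> dict:
    """POST the labels to GitHub; requires a token with repo scope."""
    url, body = build_label_request(owner, repo, number, labels)
    req = urllib.request.Request(
        url,
        data=body.encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Multiply this by assignment, commenting, and deduplication, and “one sentence per rule” starts to look like a real saving.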
Being open-source is a smart move for this category. Engineering teams are skeptical of tools that touch their codebase. Being able to inspect the code, self-host if needed, and contribute fixes builds trust in a way that closed-source tools can’t match.
The Verdict
Maige is solving a problem that every engineering team has and few teams solve well. The 4,300 repo count tells me the product works. The quality of the repos using it tells me it works for serious teams, not just hobby projects.
The risk is getting squeezed. GitHub is going to keep adding AI features to Issues and PRs. Linear is going to keep building automation. If the big platforms ship 80% of what Maige does as a native feature, the remaining 20% may not justify a separate subscription. The counter-argument is that platform-native features tend to be generic, while Maige’s natural language rules are specific to each team’s workflow. That specificity could be the moat.
I want to see two metrics. First, rule complexity. Are teams writing simple one-liner rules or sophisticated multi-condition automations? More complexity means more lock-in. Second, false positive rates. If Maige mislabels issues or assigns them to the wrong person, teams will turn it off fast. An AI automation tool is only as good as its accuracy on the boring cases, not the impressive demos.