The Macro: The Hiring Market Is Bad and the Tooling Is Making It Worse
The 2025 job market has not been fun to watch. Multiple sources — NBC News, the New York Times, the BLS’s own hiring lab — converge on the same uncomfortable picture: job growth stalled badly last year, with more than 1.4 million fewer jobs added than models predicted, and annual revisions showed the 2024 numbers weren’t even as good as we thought at the time. The labor market isn’t in freefall, but it’s genuinely tight in a way that makes the marginal quality of your application matter more than it did in 2021.
And yet the dominant hiring infrastructure hasn’t meaningfully evolved. Applicant tracking systems still parse resumes into keyword soups. LinkedIn’s skills endorsements remain a largely decorative layer. GitHub profiles exist but require active archaeology to interpret — a recruiter without an engineering background isn’t going to diff your commits. The result is a persistent mismatch: candidates who’ve done interesting work can’t surface it effectively, and recruiters searching for specific capabilities are wading through resumes that all say roughly the same things in slightly different fonts.
The portfolio space has attempted answers. Personal websites, Notion-based case study dumps, Behance for designers, GitHub for engineers. But these are scattered, recruiter-unfriendly, and require meaningful upfront effort to maintain. There’s a real gap between “I have work to show” and “a recruiter can efficiently discover and evaluate that work” — and that gap is where a handful of startups are currently trying to build.
Projects Yard is the latest entrant in this space, and it’s coming from a team with Carnegie Mellon roots, which at minimum means they’ve watched enough recruiting cycles up close to have opinions about what’s broken.
The Micro: STAR Format Meets Searchable Directory
The core product is a structured portfolio builder aimed specifically at tech candidates, positioned as a two-sided marketplace (or at least a directory) where candidates create project showcases and recruiters search against them.
Here’s how it actually works, as best as can be determined from the product page and launch description: you feed in your resume or project decks, and an AI layer converts that material into STAR-formatted (Situation, Task, Action, Result) structured entries. The pitch is that this takes about 15 minutes rather than the hours a well-formatted case study traditionally requires. Those entries then live in a searchable directory organized around actual skills demonstrated through real project work — not keyword tags you self-selected, but skills inferred from what you demonstrably did.
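To make the shape of that pipeline concrete, here is a minimal sketch of what a STAR-structured entry and its extraction prompt might look like. Everything here is an assumption for illustration — the field names, the prompt, and the `STAREntry` class are hypothetical, not Projects Yard’s actual schema or prompt.

```python
# Hypothetical sketch of the record an AI-to-STAR pipeline might emit.
# None of these names or fields come from Projects Yard itself.
from dataclasses import dataclass, field

@dataclass
class STAREntry:
    situation: str
    task: str
    action: str
    result: str
    skills: list = field(default_factory=list)  # skills inferred from the work, not self-tagged

    def is_complete(self) -> bool:
        # A recruiter-facing entry is only searchable/trustworthy
        # if every one of the four STAR fields is actually filled.
        return all(s.strip() for s in (self.situation, self.task, self.action, self.result))

# An extraction prompt along these lines would drive the LLM step.
# The real prompt is not public; this is purely illustrative.
EXTRACTION_PROMPT = """Rewrite the following project description as a STAR entry.
Use only facts stated in the text; do not invent impact metrics.

Project description:
{raw_text}
"""

entry = STAREntry(
    situation="Legacy ETL jobs took 6 hours nightly",
    task="Cut batch runtime without new hardware",
    action="Parallelized extraction and added incremental loads",
    result="Runtime dropped to 45 minutes",
    skills=["Python", "ETL", "performance tuning"],
)
print(entry.is_complete())  # True
```

The point of the structure is the recruiter-side query: once entries look like this, “search by skills demonstrated” becomes a filter over `skills` plus full-text over the four fields, rather than keyword-matching a PDF.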
The recruiter-side experience is the harder half of this equation. Recruiter-facing candidate search is a product that lives and dies on data volume — a directory with 200 portfolios is mostly useless to a serious recruiting team. That’s the classic cold-start problem for any talent marketplace, and nothing in the current launch materials makes clear how they’re solving supply before demand or vice versa.
The AI-to-STAR conversion is the technically interesting piece here. STAR is a well-understood framework in recruiting contexts, and structuring unstructured project descriptions into it automatically would be genuinely useful if the output quality is good (a big if — this kind of extraction tends to hallucinate impact metrics). What the product needs to prove is that the structured output is accurate enough that recruiters can trust it.
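One cheap guardrail against hallucinated impact metrics — sketched here as a suggestion, with no indication that Projects Yard actually does this — is to flag any number in the generated Result field that never appears in the candidate’s source material:

```python
import re

def ungrounded_numbers(source_text: str, generated_result: str) -> list:
    """Return numbers in a generated STAR 'Result' that never appear in
    the source material -- a cheap flag for hallucinated metrics."""
    source_nums = set(re.findall(r"\d+(?:\.\d+)?", source_text))
    generated_nums = re.findall(r"\d+(?:\.\d+)?", generated_result)
    return [n for n in generated_nums if n not in source_nums]

source = "Reduced nightly batch runtime from 6 hours to 45 minutes."
print(ungrounded_numbers(source, "Cut runtime from 6 hours to 45 minutes"))  # []
print(ungrounded_numbers(source, "Improved throughput by 300%"))             # ['300']
```

A check this crude misses rephrased or unit-converted figures, but it catches the most damaging failure mode: a confident percentage the candidate never claimed.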
The Product Hunt launch landed at #8 for the day with 26 upvotes and 9 comments — a modest but not embarrassing showing for an early-stage product in a crowded-adjacent space. Nothing to overread there, except that the engagement-to-vote ratio is thin, which usually means the comments are mostly supportive noise rather than power-user depth.
The Verdict
Projects Yard is solving a real problem with a reasonable approach, and the STAR-structured AI conversion is a genuinely smart UX shortcut — if the output quality holds up, and if both sides of the marketplace actually show up. Those are two meaningful ifs sitting next to each other.
The 30-day question is supply: how many candidates actually build portfolios, and are they the kind of candidates that would make a recruiter change their workflow to use a new search tool? At sub-hundred portfolios, this is a demo. At a few thousand well-structured entries, it starts becoming interesting.
The 60-day question is recruiter adoption. This is the hard side. Recruiters are not early adopters by nature — they have sourcing workflows embedded in LinkedIn Recruiter, Greenhouse, and Lever, and adding a new tab to that process requires a strong pull. “Better structured portfolios” is a genuine value prop, but it needs to be dramatically better, not marginally better.
The 90-day question is whether the AI output quality holds up under scrutiny. If candidates start inflating their STAR metrics (and they will — humans optimize for systems), the signal degrades fast.
What we’d want to know before fully endorsing this: actual recruiter usage, not just candidate signups. One side of a marketplace is just a list. But the instinct here is sound, the timing is reasonable given a soft job market, and the CMU orbit gives them legitimate recruiting-ecosystem access to test against. Worth watching.