Guide Labs just closed a $9.3M seed round, and the move feels less like a raise and more like a quiet detonation under the black-box status quo. When Initialized Capital steps in as lead and pulls Tectonic Ventures, Lombard Street Ventures, Pioneer Fund, YC and E14 Fund into formation, you can tell the thesis is not wishful thinking. It is conviction with a bank account. These firms do not chase hype. They chase inevitability, and Guide Labs is building something that looks a lot like the future of accountable AI.
The company’s core trio says everything about the direction of travel. Julius Adebayo has spent years proving that post-hoc explanations often tell fairy tales. Fulton Wang has built enough large-scale systems to know where models hallucinate their logic. Aya Abdelsalam Ismail brings research depth that turns interpretability from an academic ideal into an applied discipline. They joined forces between 2024 and early 2025, and you can feel the alignment: they are building models that not only think but also show their work.
What makes this raise different is that Guide Labs did not settle for the usual performance-versus-clarity tradeoff. Instead, they introduced an interpretable 8B-parameter LLM that exposes its reasoning with the kind of honesty most models avoid. Training-data attribution, prompt attribution, concept-level transparency, and real steerability are not marketing lines. They are the structural steel of a system built for medicine, lending, hiring, and drug discovery, where opacity is not just inconvenient but dangerous. Enterprises looking for explainability that actually explains can finally stop squinting at guesswork.
The supporting cast amplifies the signal. PRISM gives users the ability to see which training examples nudged a prediction. Atlas turns raw datasets into concept-aligned maps that domain experts can actually reason with. Discrete diffusion with block causal attention gives the architecture its backbone. Early team members like Isaac Plant and Andreas Madsen are translating these breakthroughs into operational reality, which is where most AI startups discover gravity.
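To make "which training examples nudged a prediction" concrete: Guide Labs has not published PRISM's internals, so the sketch below is not their method. It illustrates one well-known family of techniques such a tool could build on, TracIn-style gradient attribution, where a training example's influence on a test prediction is scored by the dot product of their loss gradients. The model, data, and function names here are all hypothetical, and a toy logistic regression stands in for an LLM.

```python
import math

def grad_logistic(w, x, y):
    # Gradient of the logistic loss for one (x, y) pair; y is 0 or 1.
    logit = sum(wi * xi for wi, xi in zip(w, x))
    p = 1.0 / (1.0 + math.exp(-logit))
    return [(p - y) * xi for xi in x]

def attribution_scores(w, train, x_test, y_test):
    # TracIn-style influence at a single checkpoint: dot the loss gradient
    # of each training example with the test example's gradient.
    # Positive score -> that example pushed the model toward this prediction;
    # negative -> it pushed against it.
    g_test = grad_logistic(w, x_test, y_test)
    return [sum(gi * gj for gi, gj in zip(grad_logistic(w, x, y), g_test))
            for x, y in train]

# Toy data: the first training example agrees with the test point,
# the second conflicts with it under the current weights.
w = [0.5, -0.25]
train = [([1.0, 0.0], 1), ([-1.0, 0.0], 1)]
scores = attribution_scores(w, train, [1.0, 0.0], 1)
```

On this toy setup the aligned example gets a positive score and the conflicting one a negative score, which is the kind of signal a reviewer in lending or medicine could audit example by example.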
This round landed because Guide Labs proved they can turn deep research into durable product. The team is already scaling their interpretable models, sharpening attribution tools and preparing APIs built for enterprises that need auditability as much as accuracy. High-stakes sectors have been waiting for AI that behaves less like a mystery box and more like a partner you can question, challenge and trust.

