There is a quiet panic running through engineering leadership right now. Teams are shipping more. Dashboards glow greener. AI licenses get renewed without a fight. And yet when the board asks the simplest question, did this actually make us better, the room goes oddly silent. Velocity is loud. Impact is not. That gap is where a lot of confidence goes to die, and it is exactly where the conversation needs to get sharper.
The Engineering Leadership Community has always been good at sensing pressure points before they become public problems. On February 27, 2026, ELC is hosting an online roundtable that does not care how impressive your demo looked. Roundtable: AI Productivity Metrics is about the uncomfortable middle, the place between anecdotes and evidence. Between feeling faster and proving value. This is not an argument against AI. It is a demand that we stop confusing motion with progress.
Picture the room, even if the room is digital. Senior engineering leaders, directors, VPs, CTOs, people who have already rolled out the tools and lived with the consequences. No vendor theater. No motivational fog. Just peers comparing notes on what they tracked, what they missed, and what broke once the honeymoon ended. The energy is candid because it has to be. Everyone here has already spent the money.
Joy Dixon is hosting, and that matters. Joy Dixon does not come at this as a metrics tourist. With more than 25 years across engineering leadership, software engineering, network administration, and technical training, and as Founder and CEO of Mosaic Presence, she understands both the systems and the humans inside them. As a Black Venture Institute Fellow with BLCK VC, advising and developing startups, she has seen how quickly weak measurement turns smart ambition into expensive noise. Her lens is inside out, which is the only way this topic works.
The intellectual thread runs back to ELC Annual 2025 in San Francisco, where Neela Deshpande led a roundtable on outcome-oriented metrics for AI features. That session cracked something open. Leaders realized the hardest part was not adoption but translation: how key AI metrics, common errors, and ROI actually connect when quality, validation cost, and business outcomes start pulling in different directions.
This roundtable sits right there. Not asking how much code AI helped you write, but how much better what you deliver actually became. Not celebrating activity, but interrogating consequence. The kind of conversation that changes how the next budget meeting sounds, how the next roadmap is defended, how credibility is rebuilt quietly, without theatrics.
Moments like this do not announce themselves as turning points. They feel smaller. A sharper question. A better metric. A sentence you can finally say out loud. Those tend to be the ones that last.

