Beyond Grep: Search for Reliable Coding Agents

Session Abstract

Coding agents succeed in verifiable loops (compiler + tests), but large repos still expose retrieval weaknesses.
This session explores how lexical, structural, and semantic search can provide cleaner context for LLMs. We compare tradeoffs and evaluation approaches to improve reliability without inflating token cost.

Session Description

Coding agents work well partly because software is a verifiable domain: compilers, tests, and static checks create tight feedback loops that support iterative improvement.
Yet even with better tooling, MCP integrations, and skills-based workflows, many agents still degrade in large codebases where retrieval quality becomes the limiting factor.

This talk explores a working hypothesis: improving search is one of the highest-leverage ways to improve coding-agent outcomes before reaching for larger models.
We will examine retrieval patterns across keyword, structural, and hybrid lexical-semantic pipelines, and discuss where each approach may help or fail.
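To make the hybrid lexical-semantic idea concrete, here is a minimal sketch, assuming a simple term-overlap score as a stand-in for BM25 and toy embedding vectors for the semantic side. The function names, the blending weight `alpha`, and the scoring functions are illustrative assumptions, not part of any specific pipeline discussed in the talk.

```python
import math

def lexical_score(query: str, doc: str) -> float:
    # Term-overlap score; a real pipeline would use BM25 or similar.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_score(query, doc, q_vec, d_vec, alpha=0.5):
    # alpha blends lexical and semantic signals; tuning it per-corpus
    # is part of the relevance work this kind of pipeline requires.
    return alpha * lexical_score(query, doc) + (1 - alpha) * cosine(q_vec, d_vec)
```

The key design choice is the blend: pure lexical scoring misses paraphrases, pure semantic scoring misses exact identifiers, and a weighted combination lets each signal cover the other's failure mode.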

Attendees will see how indexing, relevance tuning, and retrieval evaluation reduce token waste, improve answer quality, and provide a stable foundation for agentic systems. A live demo shows search in action, highlighting how search complements AI rather than being replaced by it.
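Retrieval evaluation of the kind mentioned above is often grounded in simple ranking metrics. As one hedged example, here is a recall@k sketch; the function name and inputs are illustrative, not a reference to any specific evaluation harness.

```python
def recall_at_k(retrieved: list[str], relevant: list[str], k: int) -> float:
    # Fraction of known-relevant items that appear in the top-k results.
    # `retrieved` is the ranked list a search pipeline returned;
    # `relevant` is a labeled ground-truth set for the query.
    top = set(retrieved[:k])
    return len(top & set(relevant)) / len(relevant) if relevant else 0.0
```

Tracking a metric like this across index or ranking changes is what turns "better search" from a hunch into a measurable claim.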