Articles
Benchmarks, deep dives, and technical analysis, grounded in data from 188 agent sessions across real codebases.
We Benchmarked CodeSift Against Native Agent Workflows
A comprehensive benchmark testing every CodeSift tool against its closest practical native workflow, run across real TypeScript codebases. 64 tools, 3 repos, real data.
Combo Flows: The 13 Tool Sequences Real Agents Use
N-gram analysis of 188 real agent sessions reveals 13 common tool sequences. 772 runs across 33+ codebases. Aggregate results: 61% token reduction, 70% win rate.
CodeSift vs. Native Tools: The Token Cost of Flying Blind
A direct technical comparison of CodeSift workflows against native agent workflows: token costs, call counts, and model quality degradation.
10 CodeSift Tools That Change How Agents Navigate Code
The tools that most meaningfully change agent behavior: token efficiency, workflow compression, and capability expansion.
When Keyword Search Isn't Enough
Semantic search, hybrid search, and conversation search. Three embedding providers, three search modes, real benchmark results.
Type-Safe Navigation for AI Agents
The LSP bridge gives AI agents direct access to language server knowledge: resolved definitions, hover types, cross-file rename.
All 64 Tools — Complete Reference
Complete reference table for all 64 CodeSift MCP tools with categories, native equivalents, and token reduction data.