
trace_route

Traces an HTTP route from handler to service to database in one call. Supports NestJS, Next.js, and Express. Optional Mermaid flowchart output.

  • Token reduction: −99%
  • Native baseline: 4× grep for route strings
  • Calls (CodeSift vs native): 1 vs 4
  • Tokens (CodeSift vs native): 61 vs 35,000

What This Tool Actually Does

trace_route is an execution-path reconstruction tool. Given a URL pattern, it identifies the handler function, traces the service layer calls, finds the database operations, and returns the complete execution path.

This is a fundamentally different operation from searching for a route string. A raw text search for /api/users/:id returns every place that string appears: route definitions, test fixtures, documentation, frontend fetch calls, OpenAPI specs, comments. The signal-to-noise ratio is poor, and none of those matches tell you the execution path.

trace_route understands framework routing conventions. It knows that a NestJS @Controller('users') with a @Get(':id') method maps to GET /api/users/:id. It knows that a Next.js file at app/api/users/[id]/route.ts exports a GET function. It knows that Express uses router.get('/users/:id', handler).
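As a small illustration of one such convention, the Next.js App Router mapping can be sketched as a pure function. This is a hypothetical helper for illustration only, not part of trace_route's implementation:

```typescript
// Illustrative sketch: mapping a Next.js App Router file path to an
// Express-style route pattern. Hypothetical helper, not CodeSift's API.
function nextFileToRoute(filePath: string): string {
  return filePath
    .replace(/^app/, "")              // drop the app/ root directory
    .replace(/\/route\.ts$/, "")      // drop the route.ts file name
    .replace(/\[([^\]]+)\]/g, ":$1"); // [id] -> :id dynamic segments
}

// nextFileToRoute("app/api/users/[id]/route.ts") -> "/api/users/:id"
```

The NestJS and Express resolvers follow the same idea in reverse: decorator metadata or router registrations are collected and normalized into the same canonical route pattern.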

A Raw Text Baseline Is Deceptively Bad

The 99% token reduction number deserves honest context. A raw text search for route strings produces enormous output because route patterns are scattered across the codebase. They appear in frontend API clients, test setup, middleware configuration, documentation, and the actual handlers. Grep returns all of them.

The 35K token native baseline reflects that reality. An agent searching for /api/users with grep will read through API client files, test helpers, middleware chains, and documentation before finding the actual handler. Then it needs additional searches to trace from the handler into the service layer and database calls.

trace_route skips all of that. It goes directly from route pattern to execution path because it has the framework-specific knowledge to resolve the route to its handler and the call graph to trace from the handler downward.

trace_route(repo="local/my-project", path="/api/users/:id")

Framework Support

  • NestJS: @Controller + @Get/@Post/@Put/@Delete decorators, @Param, @Body
  • Next.js App Router: app/api/.../route.ts file-system routing, exported HTTP method functions
  • Express: router.get/post/put/delete, app.use middleware chains

The tool detects the framework automatically from the project structure and applies the correct resolution strategy.
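A plausible sketch of that detection, assuming it keys off package.json dependencies; the real heuristic may also inspect the file layout:

```typescript
// Hypothetical framework detection from package.json dependencies.
// Order matters: a NestJS app typically depends on express under the
// hood, so the more specific framework is checked first.
type Framework = "nestjs" | "nextjs" | "express" | "unknown";

function detectFramework(deps: Record<string, string>): Framework {
  if ("@nestjs/core" in deps) return "nestjs";
  if ("next" in deps) return "nextjs";
  if ("express" in deps) return "express";
  return "unknown";
}
```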

What the Output Contains

A trace_route response includes:

  • Handlers: The controller or route handler functions that match the route, with file paths and line numbers
  • Call chain: The execution path from handler through service layer, traced to depth 3 by default
  • DB operations: Any database calls (Prisma, TypeORM, Knex, raw SQL) found in the execution path
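The shape of that response can be sketched as a TypeScript type. The field names below are assumptions inferred from the list above, not the tool's actual schema:

```typescript
// Hypothetical response shape for trace_route; real field names may differ.
interface Handler { name: string; file: string; line: number }
interface CallEdge { caller: string; callee: string; depth: number } // depth <= 3 by default
interface DbOperation { orm: "prisma" | "typeorm" | "knex" | "sql"; call: string; file: string }

interface TraceRouteResponse {
  handlers: Handler[];
  callChain: CallEdge[];
  dbOperations: DbOperation[];
}

// A minimal example value for GET /api/users/:id:
const example: TraceRouteResponse = {
  handlers: [{ name: "UsersController.findOne", file: "src/users/users.controller.ts", line: 24 }],
  callChain: [{ caller: "UsersController.findOne", callee: "UsersService.findById", depth: 1 }],
  dbOperations: [{ orm: "prisma", call: "prisma.user.findUnique", file: "src/users/users.service.ts" }],
};
```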

When To Use It

Use trace_route as your first call when investigating any API endpoint: before searching for the source of an error, before working out what data the endpoint returns, and before modifying its behavior. The execution path gives you the complete picture in one call.

Common scenarios:

  • Bug report mentions an endpoint: trace_route shows you the full path from request to database, so you know exactly where to look.
  • Performance issue on a route: See which service calls and DB queries execute, identify the bottleneck without reading multiple files.
  • Adding a new endpoint: Trace a similar existing endpoint to understand the established patterns in this codebase.
  • Security review: See what data flows through an endpoint and where authorization checks happen (or do not happen).

Mermaid Output

Add output_format="mermaid" for a flowchart of the execution path, useful for documentation or architecture discussions.
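For a route like the example above, the Mermaid output might look something like this (a hypothetical illustration of the format, not captured tool output):

```mermaid
flowchart TD
  A["GET /api/users/:id"] --> B["UsersController.findOne"]
  B --> C["UsersService.findById"]
  C --> D[("prisma.user.findUnique")]
```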

Benchmark note

This benchmark compares CodeSift against the closest practical native workflow an agent would use for the same task. For some tools, that baseline is a direct shell equivalent such as rg or find. For AST-aware, graph-aware, and LSP-backed tools, the baseline is a multi-step workflow rather than a strictly identical command. Results should be read as agent-workflow comparisons: token cost, call count, and practical context efficiency.