SLM Audit
Find out if an SLM is right for you. Drop this prompt into your coding agent to see where an SLM can help.
Prompt — copy and paste into Claude Code or your assistant
# SLM Audit (Max 4 Recommendations)

You are auditing this codebase to identify where Small Language Models (SLMs) should be used.

Your goals:

1. Identify existing LLM calls that should be replaced with SLMs.
2. Identify new high-impact AI features enabled by dramatically cheaper, lower-latency SLM inference.

## Step 1 — Scan for LLM Usage

Find all LLM calls (OpenAI, Anthropic, Gemini, Vercel AI SDK, LiteLLM, direct HTTP, internal wrappers). For each call, briefly note:

* File path
* Model used
* Task type (summarization, classification, extraction, etc.)
* Frequency (approximate)
* Latency sensitivity
* Structured output? (yes/no)
* Logprobs? (yes/no)

## Step 2 — Select Top 4 Total Recommendations

Choose a maximum of **4 total recommendations**, combining:

* Existing LLM calls to replace
* New SLM-enabled product opportunities

Prioritize:

* High frequency
* Latency-sensitive tasks
* Text-in / text-out
* Classification or extraction
* Repetitive patterns
* Structured JSON outputs

## For Each Recommendation

Provide:

* Feature name
* Type: (Replace Existing | New Opportunity)
* Where it runs in the product
* What it does
* Why an SLM is appropriate
* Estimated volume
* Expected impact (cost, latency, product leverage)
* Any constraints (logprobs, streaming, etc.)

Rank recommendations in order of impact.
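If you want a head start on Step 1 before handing the repo to an agent, a minimal sketch like the one below can surface candidate files. The pattern list is illustrative only (your codebase may use internal wrappers the agent would need to find by reading the code), and `find_llm_calls` is a hypothetical helper name:

```python
import re

# Illustrative patterns for common LLM SDK usage (not exhaustive —
# internal wrappers and direct HTTP calls won't match these).
LLM_PATTERNS = [
    r"\bopenai\b",
    r"\banthropic\b",
    r"\bgenerativeai\b",  # Gemini Python SDK
    r"\blitellm\b",
]

def find_llm_calls(source: str) -> list[str]:
    """Return the patterns that match a source file's text."""
    return [p for p in LLM_PATTERNS if re.search(p, source)]

sample = "import anthropic\nclient = anthropic.Anthropic()\n"
print(find_llm_calls(sample))  # matches the anthropic pattern only
```

Run this over each source file, then let the agent fill in the per-call details (task type, frequency, latency sensitivity) that a regex can't see.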