Getting started
Quickstart
Get from zero to live cost tracking in under 5 minutes.
1. Create a project
Sign in to LLM Cost Tracker and create your first project. A project maps to one app or codebase. You'll get an API key you'll use in step 3.
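The later steps hardcode the key for clarity, but in a real codebase it's safer to read it from the environment. This is a common pattern rather than an SDK requirement, and the variable name below is our own choice:

```shell
# Keep the key out of source control; the variable name is arbitrary.
export LLM_COST_TRACKER_API_KEY="lct_live_your_key_here"
```

You can then pass `process.env.LLM_COST_TRACKER_API_KEY` as the `apiKey` argument in step 3.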
2. Install the SDK
```shell
npm install @llmcosttracker/sdk
```

3. Wrap your LLM call
Replace your existing Anthropic call with trackedCall(). The return value is identical — nothing else in your code needs to change.
```typescript
// Before
const response = await anthropic.messages.create({
  model: 'claude-sonnet-4-6',
  messages,
  max_tokens: 1024,
})

// After — same return value, now tracked
import { trackedCall } from '@llmcosttracker/sdk'

const response = await trackedCall({
  client: anthropic,
  feature: 'search',
  userId: session.userId,
  apiKey: 'lct_live_your_key_here',
  params: {
    model: 'claude-sonnet-4-6',
    messages,
    max_tokens: 1024,
  },
})
```

The feature tag is how LLM Cost Tracker groups costs in the dashboard. Use descriptive names like 'search', 'summarize', or 'chat'.
The same trackedCall() function works for any OpenAI-compatible provider — pass an OpenAI, DeepSeek, xAI, or Perplexity client and the SDK detects it automatically. For Google Gemini, pass a GoogleGenerativeAI client instance. The params shape stays exactly as each provider expects it.
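To make the auto-detection concrete, here is an illustrative sketch of how a client object could be told apart by duck-typing its surface. This is not the SDK's actual source; the function name and checks are our own assumptions, shown only to clarify why you can pass different clients to the same call:

```typescript
// Illustrative only — NOT the SDK's internals. Each provider's client
// exposes a distinctive method path we can probe for.
type Provider = 'anthropic' | 'openai-compatible' | 'gemini' | 'unknown'

function detectProvider(client: unknown): Provider {
  const c = client as Record<string, any>
  if (c?.messages?.create) return 'anthropic'                  // anthropic.messages.create(...)
  if (c?.chat?.completions?.create) return 'openai-compatible' // openai.chat.completions.create(...)
  if (typeof c?.getGenerativeModel === 'function') return 'gemini' // GoogleGenerativeAI
  return 'unknown'
}
```

Because detection keys off the client object rather than the params, the params object stays in whatever shape the detected provider expects.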
4. Verify data is flowing
Make one LLM call in your app, then open your LLM Cost Tracker dashboard. You should see the event appear in the call log within a few seconds.
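If you'd rather drive a test event from code than from your app, here is a minimal sketch that POSTs one synthetic event to the ingestion endpoint using Node's built-in fetch (Node 18+). The payload fields mirror the curl example in this guide; the function name is our own:

```typescript
// One synthetic event, matching the fields the /api/events endpoint accepts.
const event = {
  api_key: 'lct_live_your_key_here', // replace with your real key
  feature: 'search',
  user_id: 'test_user',
  model: 'claude-sonnet-4-6',
  input_tokens: 1000,
  output_tokens: 200,
  latency_ms: 1200,
}

// Hypothetical helper: POST the event and return the HTTP status code.
async function sendTestEvent(): Promise<number> {
  const res = await fetch('https://www.llmcosttracker.com/api/events', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(event),
  })
  return res.status
}
```

A successful POST should show up in the call log the same way an SDK-tracked call does.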
You can also test with curl:
```shell
curl -X POST https://www.llmcosttracker.com/api/events \
  -H "Content-Type: application/json" \
  -d '{"api_key": "lct_live_your_key_here", "feature": "search", "user_id": "test_user", "model": "claude-sonnet-4-6", "input_tokens": 1000, "output_tokens": 200, "latency_ms": 1200}'
```

Done
That's it. Head to your dashboard to see cost by feature, by user, and a full call log.
Next: SDK reference →