SDK

trackedCall()

Wraps a standard (non-streaming) LLM call and logs cost data to your dashboard. Supports Anthropic, OpenAI, Google Gemini, and any OpenAI-compatible provider (DeepSeek, xAI, Perplexity). Returns a response identical to what the underlying call would return.

Usage

```typescript
import { trackedCall } from '@llmcosttracker/sdk'

const response = await trackedCall({
  client: anthropic,
  feature: 'summarize',
  userId: session.userId,
  apiKey: 'lct_live_your_key_here',
  params: {
    model: 'claude-sonnet-4-6',
    messages,
    max_tokens: 1024,
  },
})
```

Parameters

- `client` (`Anthropic | OpenAI | GoogleGenerativeAI`, required): Your initialized client instance. Pass an Anthropic client, any OpenAI-compatible client (OpenAI, DeepSeek, xAI, Perplexity), or a Google GenerativeAI client; the SDK detects the provider automatically.
- `params` (`object`, required): The exact params you would pass to the underlying `create()` call.
- `apiKey` (`string`, required): Your LLM Cost Tracker project API key. Found in Settings.
- `feature` (`string`, optional): Tag for this call, e.g. `"search"`, `"summarize"`, `"chat"`. Used for grouping in the dashboard.
- `userId` (`string`, optional): Your app's user identifier. Used for per-user cost attribution.
- `promptVersion` (`string`, optional): Version label for your prompt, e.g. a git SHA or `"v2.1"`. Used for before/after cost comparisons in the dashboard.
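The automatic provider detection mentioned for `client` can be pictured roughly as duck-typing the client instance, since each official SDK exposes a distinctive entry point. This is a hypothetical sketch, not the SDK's actual implementation:

```typescript
type Provider = 'anthropic' | 'openai-compatible' | 'google';

// Hypothetical detection logic (illustrative only): key off the
// distinctive method shape of each official client library.
function detectProvider(client: unknown): Provider {
  const c = client as Record<string, any>;
  if (c?.messages?.create) return 'anthropic';                  // @anthropic-ai/sdk shape
  if (c?.chat?.completions?.create) return 'openai-compatible'; // openai and compatible SDKs
  if (typeof c?.getGenerativeModel === 'function') return 'google'; // @google/generative-ai shape
  throw new Error('Unrecognized client: pass an Anthropic, OpenAI-compatible, or Google client');
}
```

OpenAI-compatible providers such as DeepSeek, xAI, and Perplexity are typically used through the `openai` package with a custom `baseURL`, so they would match the same branch as OpenAI itself.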

Returns

The exact response from the underlying LLM call — identical to calling the provider's API directly.

Error handling

If logging fails for any reason, the error is caught silently. The function always returns the LLM response — a logging failure will never throw or affect your application.
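This guarantee amounts to a fire-and-forget pattern around the telemetry write. Here is a minimal self-contained sketch of the idea (`withSilentLogging` is a hypothetical helper, not the SDK's source):

```typescript
// Sketch of the guarantee above: the LLM call's result is always
// surfaced, while any failure in the logging step is swallowed.
async function withSilentLogging<T>(
  call: () => Promise<T>,
  log: (result: T) => Promise<void>,
): Promise<T> {
  const result = await call(); // errors from the call itself still propagate
  try {
    await log(result);         // telemetry write
  } catch {
    // Logging failed: intentionally ignored so it never affects the request.
  }
  return result;
}
```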


Next: trackedStream() →