SDK
trackedStream()
Wraps a streaming LLM call and logs cost data once the stream completes. Supports Anthropic and OpenAI-compatible providers (OpenAI, DeepSeek, xAI, Perplexity). Returns the same stream that the underlying call would, so existing consumer code needs no changes.
Token usage data isn't available until a stream completes. For Anthropic, LLM Cost Tracker uses stream.finalMessage() to capture usage after the last chunk. For OpenAI-compatible providers, it sets stream_options: { include_usage: true } and reads usage from the final chunk. Both happen automatically; no extra configuration is needed.
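As an illustration of the OpenAI-compatible path: with stream_options: { include_usage: true }, intermediate chunks carry usage: null and the final chunk carries a populated usage object alongside an empty choices array. A minimal sketch of reading it (the chunk type and readUsage helper here are illustrative, not part of the SDK):

```typescript
// Simplified shape of an OpenAI-compatible streaming chunk.
type OpenAIChunk = {
  choices: { delta?: { content?: string } }[]
  usage?: { prompt_tokens: number; completion_tokens: number } | null
}

// Scan the stream; only the final chunk carries a non-null usage object.
async function readUsage(
  stream: AsyncIterable<OpenAIChunk>,
): Promise<{ prompt_tokens: number; completion_tokens: number } | null> {
  let usage: { prompt_tokens: number; completion_tokens: number } | null = null
  for await (const chunk of stream) {
    if (chunk.usage) usage = chunk.usage
  }
  return usage
}
```

trackedStream() performs this scan internally, so you never see the usage chunk yourself.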
Usage
```typescript
import { trackedStream } from '@llmcosttracker/sdk'

const stream = await trackedStream({
  client: anthropic,
  feature: 'chat',
  userId: session.userId,
  apiKey: 'lct_live_your_key_here',
  params: {
    model: 'claude-sonnet-4-6',
    messages,
    max_tokens: 1024,
    stream: true,
  },
})

// Use the stream exactly as normal
for await (const chunk of stream) {
  process.stdout.write(chunk.delta?.text ?? '')
}
```
Parameters
Identical to trackedCall() — same options, same API key, same feature and userId tags.
Returns
The exact stream from the underlying provider call. Consume it as you would without tracking.
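Because the returned value is the provider's own stream, it can be consumed as an ordinary async iterable. A minimal sketch that accumulates the streamed text into one string, assuming Anthropic-style chunks with a delta.text field (collectText is a hypothetical helper, not part of the SDK):

```typescript
// Minimal chunk shape: only the text delta is needed here.
type TextChunk = { delta?: { text?: string } }

// Accumulate every text delta from the stream into a single string.
async function collectText(stream: AsyncIterable<TextChunk>): Promise<string> {
  let out = ''
  for await (const chunk of stream) {
    out += chunk.delta?.text ?? ''
  }
  return out
}
```

A helper like this works unchanged whether the stream came from trackedStream() or from calling the provider client directly, since both yield the same chunks.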
Google Gemini streaming is not yet supported by trackedStream(). Use trackedCall() for Gemini calls.
Next: Configuration →