


info

Get a free 7-day LiteLLM Enterprise trial. Start here

No call needed

✨ Cost Tracking, Logging for Batches API (/batches)​

Track cost and usage for batch creation jobs. Start here
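As a sketch, batch jobs created through the proxy follow the OpenAI Batch API shape, so a `POST /v1/batches` request body might look like the following (the file ID is illustrative):

```json
{
  "input_file_id": "file-abc123",
  "endpoint": "/v1/chat/completions",
  "completion_window": "24h"
}
```

The proxy can then attribute the resulting spend and token usage to the key/team that created the job.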

✨ /guardrails/list endpoint​

Show available guardrails to users. Start here
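As an illustration, a `GET /guardrails/list` call against the proxy might return a payload along these lines (guardrail names and fields are illustrative; see the linked docs for the exact shape):

```json
{
  "guardrails": [
    {
      "guardrail_name": "aporia-pre-guard",
      "guardrail_info": {
        "params": [
          {"name": "toxicity_score", "type": "float"}
        ]
      }
    }
  ]
}
```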

✨ Allow teams to add models​

This enables team admins to call their own fine-tuned models via the litellm proxy. Start here

✨ Common checks for custom auth​

Calling the internal common_checks function in custom auth is now enforced as an enterprise feature. This allows admins to use litellm's default budget/auth checks within their custom auth implementation. Start here

✨ Assigning team admins​

The team admins feature is graduating from beta and moving to our enterprise tier. This allows proxy admins to let others manage keys/models for their own teams (useful for projects in production). Start here


A new LiteLLM Stable release just went out. Here are 5 updates since v1.52.2-stable.


Langfuse Prompt Management​

This makes it easy to run experiments or swap specific models (e.g., gpt-4o to gpt-4o-mini) on Langfuse, instead of making changes in your application. Start here

Control fallback prompts client-side​

Claude prompts are different from OpenAI prompts.

Pass in prompts specific to each model when doing fallbacks. Start here
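For example, a request body along these lines passes a separate, model-specific prompt for the fallback (model names and message contents are illustrative; see the linked docs for the exact shape):

```json
{
  "model": "claude-3-5-sonnet-20240620",
  "messages": [
    {"role": "user", "content": "Human: List 3 countries. Assistant:"}
  ],
  "fallbacks": [
    {
      "model": "gpt-4o-mini",
      "messages": [
        {"role": "user", "content": "List 3 countries."}
      ]
    }
  ]
}
```

If the primary model fails, the fallback model is called with its own messages rather than the Claude-formatted prompt.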

New Providers / Models​

✨ Azure Data Lake Storage Support​

Send LLM usage data (spend, tokens) to Azure Data Lake. This makes it easy to consume usage data in other services (e.g., Databricks). Start here
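A minimal proxy config sketch, assuming the callback is registered under the name `azure_storage` and credentials are supplied via environment variables (variable names are illustrative; check the linked docs):

```yaml
# litellm proxy config.yaml (sketch)
litellm_settings:
  callbacks: ["azure_storage"]

# Assumed environment variables (names illustrative):
#   AZURE_STORAGE_ACCOUNT_NAME, AZURE_STORAGE_FILE_SYSTEM
```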

Docker Run LiteLLM​

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:litellm_stable_release_branch-v1.55.8-stable
```

Get Daily Updates​

LiteLLM ships new releases every day. Follow us on LinkedIn to get daily updates.