LLMRequest
All features are free during the beta!

An observable, customizable LLM proxy platform

The only API to rule them all


How it Works

Configure an API Client and use it in your project.
from openai import OpenAI

# Point the standard OpenAI client at the LLMRequest proxy
client = OpenAI(
    base_url="https://api.llmrequest.com/openai/v1/",
    api_key="LLMR-3d1a00e7504bbde6d364689f16d8a5917e",
)

response = client.chat.completions.create(
    # "spell-checker-expert" is a custom model name that
    # LLMRequest translates to claude-3-5-sonnet
    model="spell-checker-expert",
    messages=[
        {"role": "system", "content": "Correct the spelling errors"},
        {"role": "user", "content": "I like apples !"},
    ],
)

Customize your API clients

Control which models each client can use

Features

What Makes Us Different

No additional libraries required: effortlessly empower your AI features.

Metrics

Beautiful charts to understand your data consumption

One API for all LLMs

Use any LLM provider or your private infrastructure

Traces

Dig into your API requests and responses

Custom Model Names

Call models by business-oriented names instead of provider-specific technical names.
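Conceptually, a custom model name behaves like an alias that the proxy resolves to a real provider model before forwarding the request. A minimal sketch of that idea, assuming a dashboard-configured mapping (the alias entries and `resolve_model` helper here are hypothetical illustrations, not LLMRequest internals):

```python
# Hypothetical alias table, like one you might configure in the dashboard.
# Keys are business-facing names; values are real provider model names.
ALIASES = {
    "spell-checker-expert": "claude-3-5-sonnet",
    "cheap-summarizer": "gpt-4o-mini",
}

def resolve_model(requested: str) -> str:
    """Translate a business-facing model name to the provider model it
    maps to; unknown names pass through unchanged."""
    return ALIASES.get(requested, requested)
```

Renaming or swapping the underlying model then becomes a dashboard change, with no code deploy on the client side.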

Privacy

We never exploit or sell your data to third parties; you stay in control of it.

No Hidden Costs

Bring your own API keys and pay providers directly.

Sign Up for Updates

Get the latest product news and updates.