How it works
Your requests are routed based on the model name you specify.
How to use
Configure an API client and use it in your project.
OpenAI-compatible: no additional library to install.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.llmrequest.com/openai/v1/",
    api_key="<llmr-private-api-key>"
)

response = client.chat.completions.create(
    # routed to claude-3-5-sonnet by llmrequest
    model="spell-checker-expert",
    messages=[
        {"role": "system", "content": "Correct the spelling errors"},
        {"role": "user", "content": "I like apples !"}
    ]
)

# read the corrected text from the response
print(response.choices[0].message.content)

Features
What Makes Us Different
No additional libraries needed to power your AI features.
Metrics
Beautiful charts to understand your data consumption
One API for all LLMs
Use any LLM provider or your private infrastructure
Traces
Dig into your API requests and responses
Custom Model Names
Route your requests based on custom model names.
Privacy
We do not exploit or sell your data to third parties; you remain in control of it.
No Hidden Costs
Use your own API key subscription to providers
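The custom-model-name feature above can be pictured as a simple lookup: a name you configure resolves to a concrete provider model before the request is forwarded. The sketch below is purely conceptual, not llmrequest's implementation; the `summarizer-fast` entry and the provider/model pairs are illustrative values (only the `spell-checker-expert` → claude-3-5-sonnet mapping comes from the example above).

```python
# Conceptual sketch of custom-model-name routing (not llmrequest's code).
ROUTES = {
    # custom name           -> (provider, provider model) — illustrative values
    "spell-checker-expert": ("anthropic", "claude-3-5-sonnet"),
    "summarizer-fast":      ("openai", "gpt-4o-mini"),  # hypothetical entry
}

def resolve(model_name: str) -> tuple[str, str]:
    """Map a custom model name to the provider and model it routes to."""
    try:
        return ROUTES[model_name]
    except KeyError:
        raise ValueError(f"No route configured for model '{model_name}'")

provider, model = resolve("spell-checker-expert")
print(provider, model)  # -> anthropic claude-3-5-sonnet
```

Because the routing happens server-side, your client code only ever references the custom name; swapping the underlying provider requires no code change.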
FAQS
Common Questions
Pricing
Choose your plan or start for free!
Contact
Connect With Us
We'd love to hear from you. Whether you have a question, feedback, or just want to say hello, please use the contact form to reach out.
This is the only way to get in touch with us, and we’ll make sure to get back to you as soon as possible.
