Quick Start Guide
Get up and running with Rodex AI in minutes
1. Choose Your Model
Select from our available models or use auto-select
All models require the rodex- prefix. For automatic model selection based on speed, use rodex.
Popular Models:
- rodex - Auto-select fastest model
- rodex-llama-3.3-70b-versatile - Groq's fastest Llama model
- rodex-grok-beta - xAI's Grok model
- rodex-gemini-2.0-flash-exp - Google's latest Gemini
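
To see which model ids are exposed before picking one, you can try querying the models route. A minimal sketch in Python, assuming the API mirrors the OpenAI-compatible GET /models endpoint (that endpoint isn't documented in this guide):

import openai

client = openai.OpenAI(
    api_key="Rodex",
    base_url="https://api-rodex-cli.vercel.app/api/v1"
)

# List available model ids; each should carry the rodex- prefix described above.
for model in client.models.list():
    print(model.id)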
2. Set Up Authentication
Simple token-based authentication
All requests must include the Authorization header with the token Rodex:

Authorization: Bearer Rodex

When using the OpenAI SDK, simply set api_key="Rodex" and the SDK will add the Bearer token automatically.
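
To see exactly what the SDK sends on the wire, here is a sketch of the same call made with the raw requests library; the URL, payload, and headers match the cURL example in step 3, and the Authorization header is the only piece doing authentication:

import requests

resp = requests.post(
    "https://api-rodex-cli.vercel.app/api/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer Rodex",  # the shared token from this guide
    },
    json={
        "model": "rodex",
        "messages": [{"role": "user", "content": "Hello, Rodex!"}],
    },
)
print(resp.json())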
3. Make Your First Request
Start generating AI responses
Using cURL:
curl https://api-rodex-cli.vercel.app/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer Rodex" \
  -d '{
    "model": "rodex",
    "messages": [
      {"role": "user", "content": "Hello, Rodex!"}
    ]
  }'
Using Python:
import openai

client = openai.OpenAI(
    api_key="Rodex",
    base_url="https://api-rodex-cli.vercel.app/api/v1"
)

response = client.chat.completions.create(
    model="rodex",
    messages=[{"role": "user", "content": "Hello, Rodex!"}]
)

print(response.choices[0].message.content)
Using Node.js:
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'Rodex',
  baseURL: 'https://api-rodex-cli.vercel.app/api/v1'
});

const response = await client.chat.completions.create({
  model: 'rodex',
  messages: [{ role: 'user', content: 'Hello, Rodex!' }]
});

console.log(response.choices[0].message.content);
4. Add Custom Instructions (Optional)
Personalize AI responses for your use case
Enhance your requests with custom instructions to guide the AI's behavior:
{ "model": "rodex", "messages": [ {"role": "user", "content": "Build a REST API"} ], "custom_instructions": "Use TypeScript, Express, and focus on type safety" }
Pro Tips
- Use "rodex" model for automatic selection of the fastest available model
- Add custom_instructions to get responses tailored to your specific needs
- Check model status on the homepage to see which providers are currently online
- Click on any model on the homepage to see configuration examples for that specific model