# OpenAI
Preclinical can test any OpenAI-compatible chat completion API, including OpenAI, Azure OpenAI, vLLM, and any API following the OpenAI chat completions format.
## Configuration
| Field | Required | Description |
|---|---|---|
| `api_key` | Yes | API key for authentication (falls back to the `OPENAI_API_KEY` env var) |
| `base_url` | No | API base URL (default: `https://api.openai.com/v1`) |
| `target_model` | No | Model name (default: `gpt-4o-mini`) |
| `system_prompt` | No | Override the model's system prompt |
All fields also accept camelCase equivalents (`apiKey`, `baseUrl`, `targetModel`, `systemPrompt`). The `target_model` field also accepts the alias `model`.
> **Note:** `temperature` (0.7) and `max_tokens` (1024) are hardcoded and not configurable via the agent config.
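A request body using these hardcoded values might look like the following. This is an illustrative sketch of the OpenAI chat-completions payload shape; the message contents are placeholders, only `temperature` and `max_tokens` reflect the fixed values noted above.

```python
# Illustrative chat-completions request body. The message contents are
# example placeholders; temperature and max_tokens are fixed by Preclinical.
payload = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello"},
    ],
    "temperature": 0.7,   # hardcoded, not configurable
    "max_tokens": 1024,   # hardcoded, not configurable
}
```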
## Provider Examples
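The exact agent-config file format is not shown here; assuming a YAML config using the fields above, typical provider setups might look like this. All keys, URLs, and model names below are illustrative placeholders.

```yaml
# OpenAI (hosted) -- base_url defaults to https://api.openai.com/v1
api_key: sk-...            # or omit and rely on OPENAI_API_KEY
target_model: gpt-4o-mini
---
# Azure OpenAI -- use your deployment name as the model; the endpoint
# shape varies by API version, so check your resource's endpoint URL
api_key: <azure-key>
base_url: https://<resource>.openai.azure.com/<path-to-deployment>
target_model: <deployment-name>
---
# vLLM (local OpenAI-compatible server)
base_url: http://localhost:8000/v1
target_model: meta-llama/Llama-3.1-8B-Instruct
```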
## How It Works
1. Preclinical sends a chat completion request containing the full conversation history.
2. Your endpoint processes the request and returns a response.
3. The response is captured and appended to the conversation history.
4. Steps 1-3 repeat for the configured number of turns.
5. The full conversation is graded.
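The loop above can be sketched as follows. This is a minimal illustration, not Preclinical's actual code: `send_fn` stands in for whatever client posts to the chat-completions endpoint, and `run_conversation` is a hypothetical name.

```python
def run_conversation(send_fn, user_turns, system_prompt=None):
    """Drive a multi-turn conversation, accumulating the full history.

    send_fn: callable taking the message list and returning the assistant's
             reply text (e.g. a wrapper around POST /v1/chat/completions).
    """
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    for turn in user_turns:
        messages.append({"role": "user", "content": turn})
        # The full history is sent on every turn, not just the latest message.
        reply = send_fn(messages)
        messages.append({"role": "assistant", "content": reply})
    return messages  # the complete conversation, ready for grading

# Usage with a stub endpoint that echoes the last user message:
history = run_conversation(lambda msgs: "echo:" + msgs[-1]["content"], ["hi", "bye"])
```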
## Common Errors
| Error | Cause | Resolution |
|---|---|---|
| 401 Unauthorized | Invalid API key | Check API key |
| 404 Not Found | Invalid model/endpoint | Verify base URL and model name |
| 429 Rate Limit | Too many requests | Automatic retry with backoff |
| 500 Server Error | Provider error | Automatic retry |
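The automatic retry behavior for 429 and 500 responses can be approximated like this. This is a sketch of exponential backoff in general; Preclinical's actual retry counts and delays are not documented here, and `call_with_retry` is a hypothetical helper.

```python
import time

RETRYABLE = {429, 500}  # rate limit and server errors are retried

def call_with_retry(request_fn, max_attempts=4, base_delay=1.0):
    """Retry retryable HTTP statuses with exponential backoff.

    request_fn: callable returning (status_code, body).
    """
    for attempt in range(max_attempts):
        status, body = request_fn()
        if status not in RETRYABLE:
            return status, body
        if attempt < max_attempts - 1:
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
    return status, body  # give up after max_attempts
```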
## Troubleshooting
- **Connection refused** -- Verify the base URL is correct and reachable, and that it has no trailing slash. For local models, ensure the server is running.
- **Invalid model** -- Verify the model name matches exactly. For Azure, use the deployment name as the model.
- **Authentication errors** -- Verify the API key is correct and has the required permissions. If no key is set in the agent config, the `OPENAI_API_KEY` env var is used as a fallback.
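The key-resolution order described above can be illustrated as follows. This is a sketch of the fallback rule, not Preclinical's actual implementation; `resolve_api_key` is a hypothetical helper.

```python
import os

def resolve_api_key(config):
    """Prefer api_key (or apiKey) from the agent config; fall back to OPENAI_API_KEY."""
    key = config.get("api_key") or config.get("apiKey") or os.environ.get("OPENAI_API_KEY")
    if not key:
        raise ValueError("No API key: set api_key in the agent config or OPENAI_API_KEY")
    return key
```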