# openai-php/laravel
Laravel integration for OpenAI PHP. Install via Composer and artisan, configure API key/org, then call the OpenAI facade to create responses, chat, and more. Community-maintained client for the OpenAI API.
## Getting Started
### Minimal Setup
1. **Installation**:

```bash
composer require openai-php/laravel
php artisan openai:install
```

This generates `config/openai.php` and adds the following `.env` variables:

```env
OPENAI_API_KEY=sk-...
OPENAI_ORGANIZATION=org-...
```
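The published `config/openai.php` maps these environment variables to client options. A sketch of its shape (key names follow the package's published config; the default timeout value here is an assumption):

```php
<?php

// config/openai.php (abridged sketch, not the full published file)
return [
    'api_key' => env('OPENAI_API_KEY'),
    'organization' => env('OPENAI_ORGANIZATION'),
    'request_timeout' => env('OPENAI_REQUEST_TIMEOUT', 30), // seconds
];
```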
2. **First API Call**:

Use the `OpenAI` facade to interact with OpenAI's API:

```php
use OpenAI\Laravel\Facades\OpenAI;

$response = OpenAI::completions()->create([
    'model' => 'gpt-3.5-turbo-instruct', // completions need an instruct-style model; gpt-3.5-turbo is chat-only
    'prompt' => 'Explain Laravel dependency injection in 3 sentences.',
]);

echo $response->choices[0]->text;
```
### Key Facades

The package provides facade methods for the OpenAI API endpoints:

- `OpenAI::completions()` → Text completions
- `OpenAI::chat()` → Chat completions
- `OpenAI::images()` → Image generation
- `OpenAI::edits()` → Text edits
- `OpenAI::embeddings()` → Embeddings
- `OpenAI::audio()` → Audio transcription/translation
- `OpenAI::fineTuning()` → Fine-tuning jobs
- `OpenAI::files()` → File management
- `OpenAI::models()` → Model listing
- `OpenAI::realtime()` → Realtime API
- `OpenAI::conversations()` → Conversations API
- `OpenAI::responses()` → Responses API

Use method chaining for clarity and type safety:

```php
$response = OpenAI::chat()->create([
    'model' => 'gpt-4',
    'messages' => [
        ['role' => 'system', 'content' => 'You are a helpful assistant.'],
        ['role' => 'user', 'content' => 'What is Laravel?'],
    ],
]);
```
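The other facades follow the same `create()` pattern. As a further sketch, generating embeddings (the model name here is an assumption; check OpenAI's current model list):

```php
use OpenAI\Laravel\Facades\OpenAI;

// Embeddings return one vector per input string
$response = OpenAI::embeddings()->create([
    'model' => 'text-embedding-ada-002', // assumed model; substitute a current embedding model
    'input' => 'Laravel is a PHP web framework.',
]);

$vector = $response->embeddings[0]->embedding; // array of floats
```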
Handle streaming responses (e.g., for chat) with `createStreamed()`, which returns an iterable stream of chunks:

```php
$stream = OpenAI::chat()->createStreamed([
    'model' => 'gpt-3.5-turbo',
    'messages' => [['role' => 'user', 'content' => 'Tell me a joke.']],
]);

foreach ($stream as $response) {
    echo $response->choices[0]->delta->content;
}
```
### Error Handling

Wrap calls in Laravel's exception handling. API errors (including rate limits and invalid requests) surface as `ErrorException`; network failures as `TransporterException`:

```php
use OpenAI\Exceptions\ErrorException;
use OpenAI\Exceptions\TransporterException;

try {
    $response = OpenAI::completions()->create([...]);
} catch (ErrorException $e) {
    // API-side error: inspect $e->getMessage() for rate-limit or validation details
} catch (TransporterException $e) {
    // Network/transport failure: retry or notify the user
}
```
### Configuration

Override defaults in `config/openai.php` or via `.env`:

```env
OPENAI_BASE_URL=https://api.openai.com/v1  # Custom endpoint (e.g., a proxy)
OPENAI_REQUEST_TIMEOUT=60                  # Increase the timeout for large requests
```
### Testing

Mock API responses in tests with `OpenAI::fake()`:

```php
use OpenAI\Laravel\Facades\OpenAI;
use OpenAI\Responses\Chat\CreateResponse;

OpenAI::fake([
    CreateResponse::fake([
        'choices' => [['message' => ['content' => 'Fake response!']]],
    ]),
]);

$response = OpenAI::chat()->create([...]);

expect($response->choices[0]->message->content)->toBe('Fake response!');
```
### Dependency Injection

The package's service provider already binds the `OpenAI\Client` in Laravel's container, so no custom binding is needed; type-hint it directly in controllers and services:

```php
use OpenAI\Client;

public function __construct(private Client $openai) {}

public function generateSummary()
{
    $response = $this->openai->completions()->create([...]);

    return $response->choices[0]->text;
}
```
For transient failures, use Laravel's `retry()` helper:

```php
$response = retry(3, function () {
    return OpenAI::chat()->create([...]);
}, 500); // retry up to 3 times, sleeping 500 ms between attempts
```

The client does not ship a built-in file logger; to log requests and responses for debugging, wrap calls with Laravel's `Log` facade or attach logging middleware to a custom Guzzle client (see Advanced Usage below).
### Common Pitfalls

- **API Key Leaks**: Never commit `.env` to version control. Use Laravel's `.env.example` for templates.
- **Rate Limits**: Handle rate-limit errors gracefully and back off before retrying; the API communicates limits via `x-ratelimit-*` response headers.
- **Deprecated Parameters**: Engine-based calls (e.g., `engine` for completions) are deprecated. Use `model` instead:

```php
// Old (deprecated)
OpenAI::completions()->create(['engine' => 'text-davinci-003']);

// New
OpenAI::completions()->create(['model' => 'gpt-3.5-turbo-instruct']);
```

- **Streaming Quirks**: PHP's synchronous model can buffer streamed output; consider Swoole/ReactPHP for async handling, avoid blocking requests in CLI, and raise proxy timeouts (e.g., `fastcgi_read_timeout` in Nginx).
- **Model Availability**: Not every model is available to every account or region; verify access before deploying.
- **Cost Management**: Track token usage in responses to estimate costs:

```php
$tokenCount = $response->usage->total_tokens;
$cost = $tokenCount * 0.000002; // Example rate: $0.002 per 1K tokens
```
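Since model availability varies by account, a quick sketch for listing the models your key can actually access:

```php
use OpenAI\Laravel\Facades\OpenAI;

// Lists every model visible to the configured API key
$models = OpenAI::models()->list();

foreach ($models->data as $model) {
    echo $model->id . PHP_EOL;
}
```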
### Debugging

**Verbose Logging**: The client exposes no built-in debug flag; for full request/response visibility, pass a Guzzle client with logging middleware to the underlying `\OpenAI::factory()->withHttpClient(...)` builder.

**Inspect Responses**: Every response object converts to a plain array, and `meta()` exposes response headers (rate-limit info, request IDs):

```php
$response = OpenAI::chat()->create([...]);

$rawBody = $response->toArray(); // Full response body as an array
$meta = $response->meta();       // Headers: rate limits, request ID, etc.
```
**Validate Inputs**: Use Laravel's `Validator` to sanitize inputs before passing them to OpenAI:

```php
$validated = Validator::make($request->all(), [
    'prompt' => 'required|string|max:4096',
])->validate();
```
**Handle Timeouts**: Increase `OPENAI_REQUEST_TIMEOUT` for large payloads (e.g., fine-tuning files):

```env
OPENAI_REQUEST_TIMEOUT=120
```
### Advanced Usage

**Custom Headers**: Build a client with extra headers via the underlying factory (`\OpenAI::factory()` from openai-php/client):

```php
$client = \OpenAI::factory()
    ->withApiKey(config('openai.api_key'))
    ->withHttpHeader('X-Custom-Header', 'value')
    ->make();
```

**Intercepting Requests**: The package dispatches no request events and has no middleware hook of its own; to intercept or log requests and responses, attach Guzzle middleware to a custom HTTP client and hand it to the factory:

```php
$client = \OpenAI::factory()
    ->withApiKey(config('openai.api_key'))
    ->withHttpClient(new \GuzzleHttp\Client(['handler' => $stack])) // $stack: a Guzzle HandlerStack carrying your middleware
    ->make();
```

**Custom Bindings**: Register an alternative client (e.g., one pointing at a proxy) in a service provider:

```php
// In a custom service provider
$this->app->singleton('custom.openai', function () {
    return \OpenAI::factory()
        ->withApiKey(config('openai.api_key'))
        ->withBaseUri('https://custom-api.com/v1')
        ->make();
});
```
**Testing Assertions**: Verify API calls in tests:

```php
use OpenAI\Resources\Chat;

OpenAI::assertSent(Chat::class, function (string $method, array $parameters): bool {
    return $method === 'create' && $parameters['model'] === 'gpt-4';
});
```