openai-php/client
Community-maintained PHP client for the OpenAI API. Works with models, responses, chat/conversations, files, images, audio, embeddings, fine-tuning, and more. Simple, typed SDK with streaming support, built for modern PHP and Laravel setups.
**Why it fits Laravel**

- Resource-style entry points (e.g., `models()`, `responses()`, `chat()`) align well with Laravel's service-layer patterns; each resource can be treated as a dedicated service class.
- Streaming methods (e.g., `createStreamed()`) enable real-time processing, which can be integrated with Laravel's event system or WebSocket layers for reactive applications.
- Tool definitions (`type: 'function'`) allow for custom logic hooks, enabling integration with Laravel's job queues, task scheduling, or even custom command handlers.
- Laravel's `.env` and `config/` system can securely manage OpenAI API keys, reducing boilerplate in client initialization.
- Typed response properties (e.g., `$response->outputText`) map cleanly onto Laravel's Eloquent-like accessor syntax, easing adoption.
- Legacy resources (e.g., `assistants`, `threads`) can be phased out via Laravel's feature flags or deprecated-method warnings.
- Large payloads can be streamed through `Symfony\Component\HttpFoundation\StreamedResponse`.
- Rate limits can be enforced with Laravel's `throttle` middleware or custom rate-limiter services.

**Open questions**

- Should new code standardize on `responses()`?
- Are legacy resources (`assistants`) actively used, or can they be sunsetted?
- Should logging (e.g., `Log::channel()`) track API costs?
- Should API errors (e.g., `429 Too Many Requests`) be translated into Laravel exceptions or user-friendly messages?
- Is a dedicated HTTP-mocking tool needed (e.g., `vcr`), or can Laravel's HTTP tests suffice?

**Setup**

Bind the client as a singleton in `AppServiceProvider`:
```php
$this->app->singleton(OpenAI\Client::class, fn () => OpenAI::client(config('services.openai.key')));
```
In `config/services.php`, add an `openai` entry (so that `config('services.openai.key')` resolves):

```php
'openai' => [
    'key' => env('OPENAI_KEY'),
    'base_uri' => env('OPENAI_BASE_URI', 'https://api.openai.com/v1'),
    'organization' => env('OPENAI_ORG'),
],
```
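As a framework-free illustration of why the singleton binding matters, here is a tiny memoizing container sketch. `TinyContainer` is invented for illustration and is not Laravel's container; the point is only that the factory runs once and every later resolve returns the same client instance.

```php
<?php

// Minimal sketch (plain PHP, no Laravel): a container memoizes the factory
// result so every resolve returns the same object, mirroring
// $this->app->singleton() in the binding above.
final class TinyContainer
{
    /** @var array<string, callable> */
    private array $factories = [];

    /** @var array<string, object> */
    private array $instances = [];

    public function singleton(string $id, callable $factory): void
    {
        $this->factories[$id] = $factory;
    }

    public function make(string $id): object
    {
        // Build once on first resolve, then reuse the stored instance.
        return $this->instances[$id] ??= ($this->factories[$id])();
    }
}

$container = new TinyContainer();
$container->singleton('openai', fn () => new stdClass()); // stands in for OpenAI::client(...)

var_dump($container->make('openai') === $container->make('openai')); // bool(true)
```

Because the factory closure is deferred, the API key is only read from config when the client is first needed.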
**Recommendations**

- Fire Laravel events (e.g., a custom `ResponseCreated` event) for reactive workflows.
- Group related calls (e.g., `models()->list()`, `embeddings()`) in a single module.
- Prefer `chat()` and `responses()` to leverage tooling and streaming.
- Consider Swoole or ReactPHP for async handling.
- Cache frequent model listings (e.g., with `Cache::remember()`).
- Replace legacy `assistants`, `threads`, and `fineTunes` usage with the newer `responses()` or `chat()` APIs.
- Add a `DeprecatesFeatures` trait to warn users of deprecated methods.

**Getting started**

1. Install: `composer require openai-php/client guzzlehttp/guzzle`.
2. Put credentials in `.env` and reference them from `config/services.php`.
3. Create a service class (e.g., `app/Services/OpenAIService.php`) to wrap the client:
```php
public function generateResponse(string $prompt): string
{
    return $this->client->responses()
        ->create(['model' => 'gpt-4o', 'input' => $prompt])
        ->outputText;
}
```
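The `Cache::remember()` advice for frequent model listings boils down to compute-once-then-reuse. A plain-PHP sketch with an invented `RememberCache` stand-in (not Laravel's cache, and `$listModels` merely simulates `$client->models()->list()`):

```php
<?php

// Plain-PHP sketch of the remember() pattern for expensive calls such as
// model listings. $fetchCount tracks how often the upstream API would
// actually be hit.
final class RememberCache
{
    /** @var array<string, mixed> */
    private array $store = [];

    public function remember(string $key, callable $resolver): mixed
    {
        // Compute on a miss, then serve the stored copy on every later call.
        return $this->store[$key] ??= $resolver();
    }
}

$fetchCount = 0;
$cache = new RememberCache();
$listModels = function () use (&$fetchCount): array {
    $fetchCount++; // stands in for a real API round-trip
    return ['gpt-4o', 'gpt-4o-mini'];
};

$a = $cache->remember('models', $listModels);
$b = $cache->remember('models', $listModels);
var_dump($a === $b, $fetchCount); // bool(true) int(1)
```

In Laravel the same shape is `Cache::remember('models', $ttl, $resolver)`, with the TTL bounding how stale a listing may get.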
**Operational notes**

- Stream large outputs to the browser via `Symfony\Component\HttpFoundation\StreamedResponse`.
- Track token usage with `Log::info()` or a custom monitor.
- Watch `openai-php/client` releases for breaking changes (e.g., OpenAI API deprecations); use Laravel's `composer.json` scripts for automated testing.
- Keep examples narrowly scoped (e.g., "using `responses()` in a Laravel controller").
- Apply a `DeprecatesFeatures` trait for package methods that map to deprecated OpenAI APIs.
- Define a custom exception (e.g., `OpenAIException`) extending Laravel's `Exception`.
- Wire up Sentry or Monolog for debugging.
- Add middleware to translate `429` errors:
```php
use Closure;
use OpenAI\Exceptions\ErrorException;

public function handle($request, Closure $next)
{
    try {
        return $next($request);
    } catch (ErrorException $e) {
        // Adjust the status check to whatever accessor your client version exposes.
        if ($e->getCode() === 429) {
            return response()->json(['error' => 'Rate limit exceeded'], 429);
        }
        throw $e;
    }
}
```
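Surfacing a 429 pairs naturally with retrying the call. A minimal, framework-free exponential-backoff sketch; `retryWithBackoff` and the delay schedule are illustrative, not part of the package, and the sleeper is injected so the schedule can be observed without real waiting:

```php
<?php

// Minimal exponential-backoff sketch for transient failures such as 429s.
// In production, pass a usleep()-based sleeper instead of the recorder below.
function retryWithBackoff(callable $op, int $maxAttempts, callable $sleep): mixed
{
    $delayMs = 100; // illustrative starting delay
    for ($attempt = 1; ; $attempt++) {
        try {
            return $op();
        } catch (RuntimeException $e) {
            if ($attempt >= $maxAttempts) {
                throw $e; // out of retries: re-raise the last failure
            }
            $sleep($delayMs);
            $delayMs *= 2; // exponential growth: 100, 200, 400, ...
        }
    }
}

$calls = 0;
$delays = [];
$result = retryWithBackoff(
    function () use (&$calls): string {
        $calls++;
        if ($calls < 3) {
            throw new RuntimeException('429 Too Many Requests'); // simulated rate limit
        }
        return 'ok';
    },
    5,
    function (int $ms) use (&$delays): void { $delays[] = $ms; }
);
var_dump($result, $calls, $delays); // "ok", int(3), [100, 200]
```

Adding random jitter to each delay is a common refinement so that many workers do not retry in lockstep.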
**Performance**

- Use the `support()` helper for links.
- Use `cache()` for rate-limited or expensive operations (e.g., model listings).
- Enable `opcache` and lean on Laravel's queue workers for heavy workloads.
- Use `horizon` for queue monitoring.
- Persist streamed output as it arrives (e.g., `filesystem` or database for chunks).

| Failure Scenario | Mitigation Strategy |
|---|---|
| OpenAI API downtime | Implement a circuit breaker (e.g., `spatie/flysystem-circuitbreaker`) or fallback cache. |
| Rate limiting (429) | Use Laravel's `throttle` middleware or exponential backoff (e.g., `symfony/http-client`). |
| Streaming response disconnection | Use Laravel's `filesystem` or database to persist chunks so a stream can resume. |
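For the downtime row, the core open/closed logic of a circuit breaker can be sketched in plain PHP. The `CircuitBreaker` class and its threshold are illustrative, not any specific package: after N consecutive failures the breaker "opens" and serves the fallback (e.g., a cached answer) instead of hammering a downed API.

```php
<?php

// Minimal circuit-breaker sketch: consecutive failures past the threshold
// short-circuit straight to the fallback; a success closes the breaker again.
final class CircuitBreaker
{
    private int $failures = 0;

    public function __construct(private int $threshold = 3) {}

    public function call(callable $op, callable $fallback): mixed
    {
        if ($this->failures >= $this->threshold) {
            return $fallback(); // open: skip the real call entirely
        }
        try {
            $result = $op();
            $this->failures = 0; // closed: a success resets the counter
            return $result;
        } catch (RuntimeException $e) {
            $this->failures++;
            return $fallback();
        }
    }
}

$breaker = new CircuitBreaker(threshold: 2);
$attempts = 0;
$down = function () use (&$attempts): string {
    $attempts++;
    throw new RuntimeException('API down'); // simulated outage
};
$cached = fn (): string => 'cached answer';

$breaker->call($down, $cached);          // failure 1
$breaker->call($down, $cached);          // failure 2 -> breaker opens
$third = $breaker->call($down, $cached); // short-circuited, no real call
var_dump($third, $attempts); // "cached answer", int(2)
```

A production breaker would also reattempt after a cool-down period (a "half-open" state); the sketch omits that to keep the threshold logic visible.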
How can I help you explore Laravel packages today?