# openai-php/client
Community-maintained PHP client for the OpenAI API. Works with models, responses, chat/conversations, files, images, audio, embeddings, fine-tuning, and more. Simple, typed SDK with streaming support, built for modern PHP and Laravel setups.
## Getting Started
### Minimal Setup for Laravel Integration
1. **Install the package** via Composer:

```bash
composer require openai-php/client guzzlehttp/guzzle
```

   For the Laravel facade used below, also install the Laravel bridge package:

```bash
composer require openai-php/laravel
```
2. **Configure the API key** in `.env`:

```env
OPENAI_API_KEY=your_api_key_here
```
Create a service provider (app/Providers/OpenAIServiceProvider.php):
namespace App\Providers;
use Illuminate\Support\ServiceProvider;
use OpenAI\Laravel\Facades\OpenAI;
class OpenAIServiceProvider extends ServiceProvider
{
public function register()
{
$this->app->singleton(\OpenAI\Client::class, function ($app) {
return OpenAI::client(config('openai.api_key'));
});
}
}
4. **Publish the config** (optional):

```bash
php artisan vendor:publish --provider="OpenAI\Laravel\ServiceProvider"
```
5. **Update `config/openai.php`** with your base URI, organization, etc.
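The published file is the source of truth for the exact keys; a minimal `config/openai.php` typically looks like this (the `organization` entry is illustrative):

```php
<?php

// config/openai.php - illustrative sketch; the published file is authoritative
return [
    'api_key' => env('OPENAI_API_KEY'),
    'organization' => env('OPENAI_ORGANIZATION'),
];
```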
6. **First use case**: generate a chat response in a controller:

```php
use Illuminate\Http\Request;
use OpenAI\Laravel\Facades\OpenAI;

public function generateResponse(Request $request)
{
    $response = OpenAI::chat()->create([
        'model' => 'gpt-3.5-turbo',
        'messages' => [
            ['role' => 'user', 'content' => $request->input('prompt')],
        ],
    ]);

    return response()->json(['response' => $response->choices[0]->message->content]);
}
```
### Chat and Streaming

```php
use OpenAI\Laravel\Facades\OpenAI;

// Basic chat interaction
$chat = OpenAI::chat()->create([
    'model' => 'gpt-4',
    'messages' => [
        ['role' => 'system', 'content' => 'You are a helpful assistant.'],
        ['role' => 'user', 'content' => 'Explain Laravel middleware'],
    ],
    'temperature' => 0.7,
]);

// Stream responses for real-time UX
$stream = OpenAI::chat()->createStreamed([
    'model' => 'gpt-3.5-turbo',
    'messages' => [['role' => 'user', 'content' => 'Tell me a joke']],
]);

foreach ($stream as $chunk) {
    // delta->content can be null on the first and last chunks
    echo $chunk->choices[0]->delta->content ?? '';
}
```
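To surface streamed chunks in a browser, one option is Laravel's streamed response with server-sent events. A sketch, assuming a route closure and an SSE-consuming frontend (nothing here beyond `createStreamed()` is SDK API; the route and headers are our choices):

```php
use Illuminate\Support\Facades\Route;
use OpenAI\Laravel\Facades\OpenAI;

Route::get('/chat/stream', function () {
    return response()->stream(function () {
        $stream = OpenAI::chat()->createStreamed([
            'model' => 'gpt-3.5-turbo',
            'messages' => [['role' => 'user', 'content' => request('prompt', 'Hi')]],
        ]);

        foreach ($stream as $chunk) {
            // One SSE "data:" frame per delta
            echo 'data: ' . json_encode(['delta' => $chunk->choices[0]->delta->content ?? '']) . "\n\n";

            if (ob_get_level() > 0) {
                ob_flush();
            }
            flush();
        }
    }, 200, [
        'Content-Type' => 'text/event-stream',
        'Cache-Control' => 'no-cache',
        'X-Accel-Buffering' => 'no', // disable nginx proxy buffering
    ]);
});
```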
### Function Calling

```php
// Function calling pattern (Responses API: tool definitions are flat,
// not nested under a "function" key as in Chat Completions)
$response = OpenAI::responses()->create([
    'model' => 'gpt-4',
    'tools' => [
        [
            'type' => 'function',
            'name' => 'fetchWeather',
            'description' => 'Get weather for a location',
            'parameters' => [
                'type' => 'object',
                'properties' => [
                    'location' => ['type' => 'string'],
                    'unit' => ['type' => 'string', 'enum' => ['celsius', 'fahrenheit']],
                ],
                'required' => ['location'],
            ],
        ],
    ],
    'input' => 'What is the weather in Barcelona?',
]);

// Handle tool calls
foreach ($response->output as $item) {
    if ($item->type === 'function_call') {
        $args = json_decode($item->arguments, true);
        $weather = $this->callWeatherService($args['location'], $args['unit'] ?? 'celsius');
        // Send back to OpenAI if needed
    }
}
```
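After running the tool locally, the result goes back to the model as a `function_call_output` input item chained with `previous_response_id`. A sketch (property names such as `callId` follow the SDK's camelCase convention; verify against your SDK version):

```php
use OpenAI\Laravel\Facades\OpenAI;

// $response, $item and $weather come from the function-call handling loop above
$followUp = OpenAI::responses()->create([
    'model' => 'gpt-4',
    'previous_response_id' => $response->id,
    'input' => [
        [
            'type' => 'function_call_output',
            'call_id' => $item->callId,        // id of the original tool call
            'output' => json_encode($weather), // your tool's result, serialized
        ],
    ],
]);
```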
### Conversations

```php
// Start a conversation (metadata values must be strings)
$conv = OpenAI::conversations()->create([
    'metadata' => ['user_id' => (string) auth()->id()],
    'items' => [
        ['type' => 'message', 'role' => 'user', 'content' => 'Start a new chat'],
    ],
]);

// Add to the conversation
OpenAI::conversations()->items()->create($conv->id, [
    'items' => [
        ['type' => 'message', 'role' => 'user', 'content' => 'Follow up question'],
    ],
]);
```
### Embeddings

```php
// Generate embeddings
$embeddings = OpenAI::embeddings()->create([
    'model' => 'text-embedding-ada-002',
    'input' => ['Your text here', 'Another text'],
]);

// Persist the vector alongside your model; note that Laravel Scout itself
// does not store vectors - toSearchableArray() is the hook for custom indexes
$vector = $embeddings->data[0]->embedding; // array<float>
```
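Ranking stored vectors for semantic search usually means cosine similarity. A dependency-free helper (the function name is ours, not part of the SDK):

```php
<?php

/**
 * Cosine similarity between two equal-length vectors.
 * 1.0 = same direction, 0.0 = orthogonal, -1.0 = opposite.
 */
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;

    foreach ($a as $i => $value) {
        $dot   += $value * $b[$i];
        $normA += $value * $value;
        $normB += $b[$i] * $b[$i];
    }

    return $dot / (sqrt($normA) * sqrt($normB));
}
```

Score each candidate with `cosineSimilarity($queryEmbedding, $storedEmbedding)` and sort descending to rank results.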
### Custom HTTP Client

```php
// Bind to the Laravel container in a service provider
$this->app->bind(\OpenAI\Client::class, function ($app) {
    return \OpenAI::factory()
        ->withApiKey(config('openai.api_key'))
        ->withHttpClient(new \GuzzleHttp\Client([
            'timeout' => 30,
            'headers' => [
                'Accept' => 'application/json',
                'User-Agent' => 'Laravel/' . app()->version(),
            ],
        ]))
        ->make();
});
```

Resolve it anywhere with `app(\OpenAI\Client::class)`.
### Artisan Commands and Queues

```php
// Example: generate content via CLI (inside an Artisan command class)
public function handle()
{
    $response = OpenAI::chat()->create([
        'model' => 'gpt-3.5-turbo',
        'messages' => [
            ['role' => 'system', 'content' => 'You are a content generator.'],
            ['role' => 'user', 'content' => $this->argument('prompt')],
        ],
    ]);

    $this->info($response->choices[0]->message->content);
}
```
```php
// Dispatch a job to generate content in the background
GenerateContentJob::dispatch('Write a blog post about Laravel 11 features')
    ->onQueue('openai');

// Job handler
public function handle()
{
    $response = OpenAI::chat()->create([...]);
    // Store the result and notify the user
}
```
### Middleware

```php
// Example middleware guard (hasValidOpenAIKey() is an app-specific
// helper you would write yourself, not part of the SDK)
public function handle($request, Closure $next)
{
    if (! $request->hasValidOpenAIKey()) {
        abort(403, 'Invalid API key');
    }

    return $next($request);
}
```
### Rate Limiting

The SDK surfaces API errors, including HTTP 429, as `OpenAI\Exceptions\ErrorException`. Laravel's `retry()` helper gives a simple retry loop:

```php
use OpenAI\Exceptions\ErrorException;

try {
    // 3 attempts, 1000 ms between them
    $response = retry(3, fn () => $client->chat()->create([...]), 1000);
} catch (ErrorException $e) {
    report($e); // still rate-limited after all attempts
}
```
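Fixed delays are crude under sustained 429s; an exponential schedule backs off faster. A dependency-free helper (the function name is ours, not part of the SDK) computes the delays, which recent Laravel versions accept as an array argument to `retry()`:

```php
<?php

/**
 * Exponential backoff delays in milliseconds: base * 2^attempt, capped.
 * backoffDelays(3, 1000) yields [1000, 2000, 4000].
 */
function backoffDelays(int $retries, int $baseMs, int $capMs = 60000): array
{
    $delays = [];
    for ($attempt = 0; $attempt < $retries; $attempt++) {
        $delays[] = min($baseMs * (2 ** $attempt), $capMs);
    }
    return $delays;
}
```

For example: `retry(backoffDelays(3, 1000), fn () => $client->chat()->create([...]));`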
### Token Limits

Watch `$response->usage->totalTokens` to avoid hitting model limits. For long inputs, split the text and send it in chunks:

```php
// array_chunk() operates on arrays; str_split() splits a string
$chunks = str_split($longText, 1500);

foreach ($chunks as $chunk) {
    $response = $client->chat()->create([...]);
}
```
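To size chunks before sending, a rough heuristic of about four characters per English token is common (an approximation only, not the model's real tokenizer; the function name is ours):

```php
<?php

/**
 * Very rough token estimate: ~4 characters per token for English text.
 * Use a real tokenizer for hard limits; this is only for coarse sizing.
 */
function estimateTokens(string $text): int
{
    return (int) ceil(mb_strlen($text) / 4);
}
```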
### Streaming Quirks

Iterate the stream to completion; the underlying HTTP connection stays open until the iterator is exhausted (or the response object goes out of scope):

```php
$stream = $client->chat()->createStreamed([...]);

foreach ($stream as $chunk) { /* ... */ }
```
### Model Deprecation

Check `$client->models()->list()` regularly; deprecated models may break silently. Pin your default in config:

```php
config(['openai.default_model' => 'gpt-4-1106-preview']);
```
### Cost Management

```php
$response = $client->chat()->create([...]);

// Chat Completions usage fields are promptTokens / completionTokens
// (inputTokens / outputTokens belong to the Responses API)
\Log::info('OpenAI Cost', [
    'prompt_tokens' => $response->usage->promptTokens,
    'completion_tokens' => $response->usage->completionTokens,
    'total_tokens' => $response->usage->totalTokens,
]);

$model = config('openai.fallback_models.gpt_3_5_turbo') ?? 'gpt-3.5-turbo';
```
### Enable Verbose Logging

```php
$client = \OpenAI::factory()
    ->withHttpClient(new \GuzzleHttp\Client([
        // Guzzle writes debug output to STDOUT by default;
        // pass a stream resource to capture it in a file instead
        'debug' => fopen(storage_path('logs/openai-http.log'), 'a'),
    ]))
    ->make();
```

Raw request/response details then land in `storage/logs/openai-http.log`.
### Mocking for Tests

`OpenAI\Client` is declared `final`, so mock the `OpenAI\Contracts\ClientContract` interface (or use the package's `OpenAI\Testing\ClientFake`) rather than the concrete class:

```php
// In tests
$mockClient = Mockery::mock(\OpenAI\Contracts\ClientContract::class);
$this->app->instance(\OpenAI\Client::class, $mockClient);
```