
# openai-php/laravel

A community-maintained Laravel integration for the OpenAI PHP client. Install it via Composer and artisan, configure your API key and organization, then call the `OpenAI` facade to create chat completions, responses, and more.

## Getting Started

### Minimal Setup
1. **Installation**:

   ```bash
   composer require openai-php/laravel
   php artisan openai:install
   ```

   This generates `config/openai.php` and adds these variables to `.env`:

   ```ini
   OPENAI_API_KEY=sk-...
   OPENAI_ORGANIZATION=org-...
   ```
2. **First API Call**: Use the `OpenAI` facade to interact with OpenAI's API (note the legacy completions endpoint requires an instruct model such as `gpt-3.5-turbo-instruct`, not the chat-only `gpt-3.5-turbo`):

   ```php
   use OpenAI\Laravel\Facades\OpenAI;

   $response = OpenAI::completions()->create([
       'model' => 'gpt-3.5-turbo-instruct',
       'prompt' => 'Explain Laravel dependency injection in 3 sentences.',
   ]);

   echo $response->choices[0]->text;
   ```
3. **Key Facades**: The package exposes resources for the OpenAI API endpoints:

   - `OpenAI::completions()` → Text completions (legacy)
   - `OpenAI::chat()` → Chat completions
   - `OpenAI::images()` → Image generation
   - `OpenAI::edits()` → Text edits (deprecated)
   - `OpenAI::embeddings()` → Embeddings
   - `OpenAI::audio()` → Audio transcription/translation
   - `OpenAI::fineTuning()` → Fine-tuning jobs
   - `OpenAI::files()` → File management
   - `OpenAI::models()` → Model listing
   - `OpenAI::realtime()` → Realtime API
   - `OpenAI::conversations()` → Conversations API
   - `OpenAI::responses()` → Responses API
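For example, a sketch of the embeddings resource in use (the model name shown is an assumption; substitute whatever your account offers):

```php
use OpenAI\Laravel\Facades\OpenAI;

// Create an embedding vector for a piece of text.
$response = OpenAI::embeddings()->create([
    'model' => 'text-embedding-3-small', // assumed model name
    'input' => 'Laravel service container',
]);

// Each input yields one embedding; the vector is a plain float array.
$vector = $response->embeddings[0]->embedding;
```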

## Implementation Patterns

### 1. Structured API Calls

Use method chaining for clarity and type safety:

```php
$response = OpenAI::chat()->create([
    'model' => 'gpt-4',
    'messages' => [
        ['role' => 'system', 'content' => 'You are a helpful assistant.'],
        ['role' => 'user', 'content' => 'What is Laravel?'],
    ],
]);
```

### 2. Streaming Responses

Stream chat or completion output as it arrives with `createStreamed()`, which returns an iterable stream of partial responses (the client does not take a callback):

```php
$stream = OpenAI::chat()->createStreamed([
    'model' => 'gpt-3.5-turbo',
    'messages' => [['role' => 'user', 'content' => 'Tell me a joke.']],
]);

foreach ($stream as $response) {
    echo $response->choices[0]->delta->content;
}
```

### 3. Error Handling

Wrap calls in Laravel's exception handling. The client throws `OpenAI\Exceptions\ErrorException` for API errors (rate limits, invalid requests) and `OpenAI\Exceptions\TransporterException` for network failures:

```php
use OpenAI\Exceptions\ErrorException;
use OpenAI\Exceptions\TransporterException;

try {
    $response = OpenAI::completions()->create([...]);
} catch (ErrorException $e) {
    // API error: inspect $e->getMessage() for rate limits or bad input
} catch (TransporterException $e) {
    // Network/transport failure: retry or notify the user
}
```

### 4. Configuration Management

Override defaults in `config/openai.php` or via `.env`:

```ini
OPENAI_BASE_URL=https://api.openai.com/v1  # Custom endpoint (e.g., proxy)
OPENAI_REQUEST_TIMEOUT=60  # Increase timeout for large requests
```

### 5. Testing with Fakes

Mock API responses in tests:

```php
use OpenAI\Laravel\Facades\OpenAI;
use OpenAI\Responses\Chat\CreateResponse;

OpenAI::fake([
    CreateResponse::fake([
        'choices' => [['message' => ['content' => 'Fake response!']]],
    ]),
]);

$response = OpenAI::chat()->create([...]);
expect($response->choices[0]->message->content)->toBe('Fake response!');
```

### 6. Integration with Laravel Services

The package's service provider already binds `OpenAI\Client` into Laravel's container, so no manual binding is needed; type-hint it directly in controllers and services:

```php
use OpenAI\Client;

public function __construct(private Client $openai) {}

public function generateSummary() {
    $response = $this->openai->completions()->create([...]);

    return $response->choices[0]->text;
}
```

### 7. Rate Limiting and Retries

Use Laravel's global `retry()` helper (there is no `Retry` facade) to retry transient failures:

```php
// Try up to 3 times, sleeping 100 ms between attempts.
$response = retry(3, function () {
    return OpenAI::chat()->create([...]);
}, 100);
```

### 8. Logging API Calls

The client has no built-in logger, so log requests and responses yourself with Laravel's `Log` facade:

```php
use Illuminate\Support\Facades\Log;

$response = OpenAI::chat()->create($params);

Log::debug('OpenAI chat call', [
    'params' => $params,
    'total_tokens' => $response->usage->totalTokens,
]);
```

## Gotchas and Tips

### Common Pitfalls

1. **API Key Leaks**:

   - Never commit `.env` to version control. Use Laravel's `.env.example` for templates.
   - Restrict API keys to specific IPs in OpenAI's dashboard if possible.

2. **Rate Limits**:

   - OpenAI enforces rate limits. Catch the client's `ErrorException` and back off gracefully.
   - Use exponential backoff for retries, e.g. the `retry()` helper with an array of delays:

     ```php
     // Waits 100 ms, then 200 ms, then 400 ms between attempts.
     $response = retry([100, 200, 400], fn () => OpenAI::chat()->create([...]));
     ```
3. **Deprecated Endpoints**:

   - Some parameters (e.g., `engine` for completions) are deprecated. Use `model` instead:

     ```php
     // Old (deprecated)
     OpenAI::completions()->create(['engine' => 'text-davinci-003']);

     // New
     OpenAI::completions()->create(['model' => 'gpt-3.5-turbo-instruct']);
     ```
4. **Streaming Quirks**:

   - Consume the stream as it arrives (e.g., echoing and flushing each chunk); for fully asynchronous handling you will need something like Swoole or ReactPHP.
   - Ensure your server supports long-running connections (e.g., `fastcgi_read_timeout` in Nginx).

5. **Model Availability**:

   - Not all models are available to every account or region. Check OpenAI's model documentation for current availability.
6. **Cost Management**:

   - Use the `usage` data in responses to estimate costs:

     ```php
     $tokenCount = $response->usage->totalTokens;
     $cost = $tokenCount * 0.000002; // Example rate: $0.002 per 1K tokens
     ```
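As noted under Model Availability, you can check which models your API key can actually reach by listing them through the facade:

```php
use OpenAI\Laravel\Facades\OpenAI;

// List every model visible to the current API key.
$response = OpenAI::models()->list();

foreach ($response->data as $model) {
    echo $model->id . PHP_EOL;
}
```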
      

### Debugging Tips

1. **Enable Verbose Logging**: The package exposes no `debug` flag itself, but you can build the client around a Guzzle client with debugging turned on (a sketch; `withHttpClient()` is the factory's extension point for custom HTTP clients):

   ```php
   $client = \OpenAI::factory()
       ->withApiKey(config('openai.api_key'))
       ->withHttpClient(new \GuzzleHttp\Client(['debug' => true]))
       ->make();
   ```

   Guzzle then prints request/response headers for every call.

2. **Inspect Raw Responses**: Response objects can be converted to plain arrays for inspection:

   ```php
   $response = OpenAI::chat()->create([...]);
   $raw = $response->toArray(); // Full decoded response body as an array
   ```

3. **Validate Inputs**: Use Laravel's `Validator` to sanitize inputs before passing them to OpenAI:

   ```php
   use Illuminate\Support\Facades\Validator;

   $validated = Validator::make($request->all(), [
       'prompt' => 'required|string|max:4096',
   ])->validate();
   ```

4. **Handle Timeouts**: Increase `OPENAI_REQUEST_TIMEOUT` for large payloads (e.g., fine-tuning files):

   ```ini
   OPENAI_REQUEST_TIMEOUT=120
   ```

### Extension Points

1. **Custom Headers**: Add headers when building a client through the factory:

   ```php
   $client = \OpenAI::factory()
       ->withApiKey(config('openai.api_key'))
       ->withHttpHeader('X-Custom-Header', 'value')
       ->make();
   ```

2. **Middleware**: The package has no middleware hook of its own; to intercept requests and responses, attach Guzzle middleware to the HTTP client you pass to `withHttpClient()`.

3. **Event Listeners**: There is no built-in event system. Wrap calls in your own service class and dispatch Laravel events around them if you need logging or analytics hooks.

4. **Service Provider Extensions**: Register an additional client (e.g., pointing at a custom endpoint) in a service provider:

   ```php
   // In a custom service provider
   $this->app->singleton('custom.openai', function () {
       return \OpenAI::factory()
           ->withApiKey(config('openai.api_key'))
           ->withBaseUri('custom-api.com/v1')
           ->make();
   });
   ```

5. **Testing Assertions**: Verify API calls in tests:

   ```php
   OpenAI::assertSent(\OpenAI\Resources\Chat::class, function ($method, $params) {
       return $method === 'create' && $params['model'] === 'gpt-4';
   });
   ```