- How do I install and configure openai-php/laravel in my Laravel project?
- Run `composer require openai-php/laravel`, then execute `php artisan openai:install` to generate the config file. Add your `OPENAI_API_KEY` and `OPENAI_ORGANIZATION` to your `.env` file, and you’re ready to use the `OpenAI` facade for API calls.
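  Once configured, requests go through the `OpenAI` facade. A minimal sketch following the package's README pattern (the `gpt-4o` model name is an assumption; substitute any model available to your account):

  ```php
  use OpenAI\Laravel\Facades\OpenAI;

  // Resolve the configured client through the facade and send a chat request.
  $result = OpenAI::chat()->create([
      'model' => 'gpt-4o', // assumed model name for illustration
      'messages' => [
          ['role' => 'user', 'content' => 'Hello!'],
      ],
  ]);

  // Responses are typed objects; each choice exposes the assistant's reply.
  echo $result->choices[0]->message->content;
  ```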
- Which Laravel versions does this package support?
- The package supports recent Laravel releases (Laravel 10 and newer at the time of writing) and requires PHP 8.2+. Exact version constraints change between releases, so check the [GitHub releases](https://github.com/openai-php/laravel/releases) and the package's `composer.json` for current compatibility details.
- Can I use this package for testing AI-driven features without hitting OpenAI’s API?
- Yes, the package includes a `fake()` method to mock API responses. Use `OpenAI::fake()` in your tests to simulate calls, making it easy to verify behavior without incurring costs or rate limits.
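  A test sketch following the README's fake pattern (the fake payload keys are illustrative; response classes expose a `fake()` constructor):

  ```php
  use OpenAI\Laravel\Facades\OpenAI;
  use OpenAI\Responses\Chat\CreateResponse;

  // Queue a canned response; no HTTP request will be made.
  OpenAI::fake([
      CreateResponse::fake([
          'choices' => [
              ['message' => ['content' => 'Mocked reply']],
          ],
      ]),
  ]);

  $result = OpenAI::chat()->create([
      'model' => 'gpt-4o',
      'messages' => [['role' => 'user', 'content' => 'Hi']],
  ]);

  // Verify the expected request was sent to the chat resource.
  OpenAI::assertSent(\OpenAI\Resources\Chat::class, function (string $method, array $parameters): bool {
      return $method === 'create' && $parameters['model'] === 'gpt-4o';
  });
  ```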
- How do I handle API rate limits or quota exhaustion in production?
- The underlying client throws `OpenAI\Exceptions\ErrorException` when the API returns an error response, including rate-limit errors; inspect the error type to distinguish them. Implement retry logic with exponential backoff (e.g., Laravel's `retry()` helper) or fallback mechanisms such as caching responses. For lower-level control, you can supply a custom Guzzle client with retry middleware via the client factory.
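  One simple approach is Laravel's built-in `retry()` helper; a sketch, assuming you only want to retry on API error responses:

  ```php
  use OpenAI\Exceptions\ErrorException;
  use OpenAI\Laravel\Facades\OpenAI;

  $result = retry(
      times: 3,
      callback: fn () => OpenAI::chat()->create([
          'model' => 'gpt-4o',
          'messages' => [['role' => 'user', 'content' => 'Hello!']],
      ]),
      // Exponential backoff: 1s, 2s, 4s between attempts.
      sleepMilliseconds: fn (int $attempt) => 1000 * 2 ** ($attempt - 1),
      // Only retry API error responses, not e.g. serialization failures.
      when: fn (Throwable $e) => $e instanceof ErrorException,
  );
  ```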
- Does this package support multi-region OpenAI endpoints (e.g., EU vs. US)?
- You can set `OPENAI_BASE_URL` in your `.env` file to point the client at a different base URL, which is useful for proxies, API gateways, or OpenAI-compatible services. Note that OpenAI does not currently publish a separate EU API hostname; consult OpenAI's documentation for regional availability and data-residency options.
- What if OpenAI’s API changes or deprecates endpoints? Will this package break?
- The package relies on the `openai-php/client` library, which may introduce breaking changes if OpenAI’s API evolves. Monitor the [client’s changelog](https://github.com/openai-php/client) and pin to minor versions in `composer.json` to mitigate risks.
- Can I use this package for real-time systems where low latency is critical?
- The package itself adds little overhead, but OpenAI's API inherently introduces network latency, often hundreds of milliseconds to several seconds per request. For latency-sensitive systems, cache frequent responses (e.g., in Redis), stream responses so users see partial output sooner, or precompute answers offline.
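  A caching sketch using Laravel's cache facade (the key scheme and one-hour TTL are arbitrary examples):

  ```php
  use Illuminate\Support\Facades\Cache;
  use OpenAI\Laravel\Facades\OpenAI;

  function cachedCompletion(string $prompt): string
  {
      // Key on a hash of the prompt so identical prompts hit the cache.
      return Cache::remember('openai:'.sha1($prompt), now()->addHour(), function () use ($prompt) {
          $result = OpenAI::chat()->create([
              'model' => 'gpt-4o',
              'messages' => [['role' => 'user', 'content' => $prompt]],
          ]);

          return $result->choices[0]->message->content;
      });
  }
  ```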
- Are there alternatives to this package if I need more control over HTTP requests?
- If you need direct access to the underlying HTTP client (e.g., for custom middleware or retries), you can use the standalone `openai-php/client` package. However, this package abstracts away most of that complexity for Laravel developers.
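  The standalone client exposes a factory for exactly this; a sketch assuming a custom Guzzle client (where you could register retry or logging middleware):

  ```php
  use GuzzleHttp\Client as GuzzleClient;

  // Build a client directly with the standalone package,
  // supplying your own PSR-18 HTTP client.
  $client = \OpenAI::factory()
      ->withApiKey(config('openai.api_key'))
      ->withHttpClient(new GuzzleClient(['timeout' => 30])) // add middleware as needed
      ->make();

  $result = $client->chat()->create([
      'model' => 'gpt-4o',
      'messages' => [['role' => 'user', 'content' => 'Hello!']],
  ]);
  ```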
- How do I log API calls for debugging or compliance purposes?
- The package doesn’t include built-in logging, but you can wrap `OpenAI` facade calls in Laravel’s logging system (e.g., `Log::info('OpenAI call:', $requestData)`) or use middleware on the HTTP client for audit trails.
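  A wrapping sketch along those lines; what you log (and at what level) will depend on your compliance requirements:

  ```php
  use Illuminate\Support\Facades\Log;
  use OpenAI\Laravel\Facades\OpenAI;

  $payload = [
      'model' => 'gpt-4o',
      'messages' => [['role' => 'user', 'content' => 'Hello!']],
  ];

  // Log the request before sending; avoid logging sensitive prompt content.
  Log::info('OpenAI request', ['model' => $payload['model']]);

  $result = OpenAI::chat()->create($payload);

  // The response exposes token counts, useful for audit trails.
  Log::info('OpenAI response', [
      'id' => $result->id,
      'total_tokens' => $result->usage->totalTokens,
  ]);
  ```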
- What’s the best way to manage costs when using OpenAI’s API at scale?
- The package doesn’t include rate-limiting or cost-monitoring tools, so you’ll need to implement custom logic. Track token usage, set budget alerts, and cache responses aggressively. Tools like Laravel’s `cache()` or Redis can help reduce API calls.
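  Since every response reports its token usage, one lightweight approach is to accumulate a daily counter; a sketch (the cache key scheme and budget threshold are illustrative):

  ```php
  use Illuminate\Support\Facades\Cache;
  use Illuminate\Support\Facades\Log;
  use OpenAI\Laravel\Facades\OpenAI;

  $result = OpenAI::chat()->create([
      'model' => 'gpt-4o',
      'messages' => [['role' => 'user', 'content' => 'Hello!']],
  ]);

  // Accumulate tokens used today under a date-stamped cache key.
  $key = 'openai:tokens:'.now()->format('Y-m-d');
  $total = Cache::increment($key, $result->usage->totalTokens);

  // Alert once the (illustrative) daily budget is exceeded.
  if ($total > 1_000_000) {
      Log::warning('OpenAI daily token budget exceeded', ['tokens' => $total]);
  }
  ```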