- How do I install Prism in a Laravel project?
- Run `composer require prism-php/prism` in your project directory. Prism supports Laravel 11+ and PHP 8.2+. After installation, publish the config file with `php artisan vendor:publish --tag=prism-config`, then add your provider credentials to `.env`. The package registers itself through Laravel's service container and exposes a `Prism` facade.
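The steps above, as a minimal sketch (the publish tag matches Prism's documentation; verify it against your installed version, and the `.env` keys shown are the conventional provider variables):

```shell
composer require prism-php/prism
php artisan vendor:publish --tag=prism-config

# Then add credentials for your chosen provider(s) to .env, e.g.:
# OPENAI_API_KEY=...
# ANTHROPIC_API_KEY=...
```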
- Which AI providers does Prism support out of the box?
- Prism ships with first-party support for a range of providers, including OpenAI, Anthropic, Mistral, Groq, Gemini, and Ollama, among others. Because every provider sits behind the same fluent API, you can switch providers without rewriting your calling code; for anything not covered, you can implement Prism's provider contract and register your custom provider in `config/prism.php`.
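A minimal sketch of the unified API, assuming the `Prism\Prism` namespace and `Provider` enum from recent releases (the model name is illustrative):

```php
<?php

use Prism\Prism\Enums\Provider;
use Prism\Prism\Prism;

// The fluent call shape stays the same for every provider;
// only the using() arguments change.
$response = Prism::text()
    ->using(Provider::Anthropic, 'claude-3-5-sonnet-latest')
    ->withPrompt('Explain Laravel queues in one sentence.')
    ->asText();

echo $response->text;
```

Swapping to another provider is a one-line change to `using()`, which is the main payoff of the unified API.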
- Can Prism handle multi-turn conversations with context?
- Yes. Prism handles multi-turn conversations through message arrays: pass the prior exchange to `Prism::text()` via `withMessages()`, and the full context is sent with each request. Prism itself is stateless between requests, so persist the message history yourself, for example in Laravel's cache (e.g., Redis) or a database, and rebuild the array on each turn.
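A sketch of a multi-turn request, assuming Prism's message value objects (`UserMessage`/`AssistantMessage`); how `$history` is persisted between requests is left to your application:

```php
<?php

use Prism\Prism\Enums\Provider;
use Prism\Prism\Prism;
use Prism\Prism\ValueObjects\Messages\AssistantMessage;
use Prism\Prism\ValueObjects\Messages\UserMessage;

// Rebuild the conversation from wherever you stored it
// (cache, database, session) and append the new user turn.
$history = [
    new UserMessage('What is the capital of France?'),
    new AssistantMessage('The capital of France is Paris.'),
    new UserMessage('And roughly how many people live there?'),
];

$response = Prism::text()
    ->using(Provider::OpenAI, 'gpt-4o')
    ->withMessages($history)
    ->asText();
```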
- How do I use tools or function calling with Prism?
- Define tools with Prism's `Tool` class, giving each one a name, a description, typed parameters, and a closure that does the actual work, then attach them to a request with `withTools()`. Allow enough steps with `withMaxSteps()` so the model can invoke a tool and then continue with its result. Tools are useful for integrating database queries, external APIs, or custom logic into your AI workflows.
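A sketch of a tool definition, assuming the `Tool` facade and parameter builder shown in Prism's docs (the weather lookup is a stand-in for your own logic):

```php
<?php

use Prism\Prism\Enums\Provider;
use Prism\Prism\Facades\Tool;
use Prism\Prism\Prism;

// A tool is a named, described closure with typed parameters.
$weather = Tool::as('weather')
    ->for('Get the current weather for a city')
    ->withStringParameter('city', 'The city to look up')
    ->using(fn (string $city): string => "It is sunny in {$city}.");

$response = Prism::text()
    ->using(Provider::OpenAI, 'gpt-4o')
    ->withMaxSteps(2) // one tool call plus the final answer
    ->withTools([$weather])
    ->withPrompt('What is the weather like in Oslo?')
    ->asText();
```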
- What Laravel versions does Prism officially support?
- Prism targets Laravel 11 and later and leans on modern framework features like facades, Artisan commands, and dependency injection, so older Laravel versions are not supported. It is a Laravel package through and through; for non-Laravel PHP projects, use a provider's official SDK or HTTP API directly instead.
- How do I handle rate limits or token costs in production?
- Prism surfaces provider rate-limit errors as exceptions you can catch, so wrap calls in retry/backoff logic and implement your own monitoring and fallbacks. Use Laravel queues to offload LLM calls and avoid request timeouts. For cost management, read token counts from each response's usage data, track them in middleware or a dedicated service, and set budget alerts.
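One way to offload calls, sketched as a standard Laravel queued job (the job class and prompt are hypothetical; only the Prism call itself follows the package's API):

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Prism\Prism\Enums\Provider;
use Prism\Prism\Prism;

class SummarizeArticle implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $tries = 3;                 // retry on transient failures
    public array $backoff = [10, 60, 300]; // seconds between attempts

    public function __construct(private readonly string $articleText) {}

    public function handle(): void
    {
        $response = Prism::text()
            ->using(Provider::OpenAI, 'gpt-4o-mini')
            ->withPrompt("Summarize: {$this->articleText}")
            ->asText();

        // Persist the summary and record token usage for cost tracking.
    }
}
```

Running the job on a queue worker keeps slow LLM calls out of the request cycle, and the `tries`/`backoff` properties give you basic retry behavior when a provider rate-limits you.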
- Can I use Prism with self-hosted LLMs or local models?
- Yes. Ollama is supported out of the box: point the provider's `url` in `config/prism.php` at your local instance and select your model as usual. Other self-hosted stacks that expose an OpenAI-compatible API (such as vLLM) can often be reached by reusing the OpenAI provider with a custom base URL, and fully custom providers can be registered for anything else.
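A config sketch for a local Ollama instance; the keys below mirror the general shape of the published `config/prism.php`, but check your own copy, since defaults vary by version:

```php
<?php

// config/prism.php (excerpt)
return [
    'providers' => [
        'ollama' => [
            // Point at your local Ollama daemon.
            'url' => env('OLLAMA_URL', 'http://localhost:11434'),
        ],
    ],
];
```

With that in place, `Prism::text()->using(Provider::Ollama, 'llama3.1')` targets the local model through the same fluent API as any hosted provider.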
- What are the alternatives to Prism for Laravel LLM integration?
- Alternatives include hand-rolled Guzzle clients against provider APIs, official provider SDKs, or LangChain-style PHP ports for workflow-heavy use cases. Prism stands out with its fluent API, tool integration, and Laravel-native developer experience, making it a strong fit for complex AI features like assistants or automation.
- How do I test Prism in my Laravel application?
- Use `Prism::fake()` to swap in canned responses so your tests never hit a real provider; the fake also records outgoing requests so you can assert on what was sent. Provide one fake response per expected call to exercise multi-step flows such as tool use or chained turns, and write your assertions in PHPUnit or Pest as usual.
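A Pest-style sketch, assuming the `TextResponseFake` helper and `assertCallCount` from Prism's testing utilities in recent versions (older releases construct response objects directly):

```php
<?php

use Prism\Prism\Enums\Provider;
use Prism\Prism\Prism;
use Prism\Prism\Testing\TextResponseFake;

it('summarizes an article', function () {
    // Queue one canned response; Prism::fake() returns a fake
    // you can assert against afterwards.
    $fake = Prism::fake([
        TextResponseFake::make()->withText('A short fake summary.'),
    ]);

    $response = Prism::text()
        ->using(Provider::OpenAI, 'gpt-4o')
        ->withPrompt('Summarize this article…')
        ->asText();

    expect($response->text)->toBe('A short fake summary.');
    $fake->assertCallCount(1);
});
```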
- Is Prism secure for handling sensitive data in prompts?
- Prism sends your prompts to the configured provider as-is, so treat prompt construction like any other untrusted-input boundary: sanitize or redact sensitive data before interpolating it into prompts. Combine that with Laravel policies to restrict who can trigger LLM calls, and keep provider credentials in `.env`, never in version control.
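A minimal sketch of pre-prompt redaction, assuming `$userInput` holds the untrusted text (the regex and redaction token are illustrative, not a complete PII solution):

```php
<?php

use Prism\Prism\Enums\Provider;
use Prism\Prism\Prism;

// Strip email addresses before the text ever reaches the provider.
$clean = preg_replace(
    '/[\w.+-]+@[\w-]+(\.[\w-]+)+/',
    '[redacted email]',
    $userInput,
);

$response = Prism::text()
    ->using(Provider::OpenAI, 'gpt-4o')
    ->withPrompt("Classify this support ticket:\n\n{$clean}")
    ->asText();
```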