prism-php/prism
Prism is a Laravel package for integrating LLMs with a fluent API for text generation, multi-step conversations, and tool usage across multiple AI providers—letting you build AI features without dealing with low-level provider details.
composer require prism-php/prism
php artisan vendor:publish --tag=prism-config
Configure your providers (e.g., OpenAI, Anthropic) in config/prism.php using environment variables.

use Prism\Prism\Facades\Prism;
$response = Prism::text()
    ->using('openai', 'gpt-4o')
    ->withPrompt('Explain quantum computing.')
    ->asText();
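Provider credentials are mapped from environment variables in the published config. A minimal sketch of what an entry might look like (the array keys here are assumptions for illustration — check your published config/prism.php for the actual shape):

```php
<?php

// config/prism.php (illustrative shape only)
return [
    'providers' => [
        'openai' => [
            'api_key' => env('OPENAI_API_KEY'),
        ],
        'anthropic' => [
            'api_key' => env('ANTHROPIC_API_KEY'),
        ],
    ],
];
```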
Provider credentials live in config/prism.php (published via vendor:publish); see /docs/providers/{provider}.md (e.g., anthropic.md) for provider-specific options.

// Basic text generation
$response = Prism::text()
    ->using('openai', 'gpt-4o')
    ->withPrompt('Summarize this article: {article}')
    ->withVariables(['article' => $articleText])
    ->asText();
// Multi-turn conversation
$conversation = Prism::conversation()
    ->using('anthropic', 'claude-3-5-sonnet')
    ->withSystemPrompt('You are a helpful assistant.')
    ->withUserMessage('Hello!')
    ->reply()
    ->withUserMessage('Tell me a joke.')
    ->reply()
    ->asText();
// Tools for dynamic actions
use Prism\Prism\Tool;

$response = Prism::text()
    ->using('openai', 'gpt-4o')
    ->withPrompt('Book a flight from {departure} to {arrival}.')
    ->withVariables(['departure' => 'NYC', 'arrival' => 'LAX'])
    ->withTools([
        Tool::as('book_flight')
            ->withDescription('Books a flight.')
            ->withParameters([
                'departure' => 'string',
                'arrival' => 'string',
                'date' => 'string',
            ]),
    ])
    ->asText();
// Anthropic prompt caching
use Prism\Prism\ValueObjects\Messages\SystemMessage;

$response = Prism::text()
    ->using('anthropic', 'claude-3-5-sonnet')
    ->withSystemPrompt(
        (new SystemMessage('Reusable system message.'))
            ->withProviderOptions(['cacheType' => 'ephemeral', 'cacheTtl' => '1h'])
    )
    ->withPrompt('Cached response.')
    ->asText();
// Override the container binding in a service provider if needed
$this->app->bind(Prism::class, function ($app) {
    return new Prism($app['config']);
});
use Prism\Prism\Facades\Prism;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;

class GenerateTextJob implements ShouldQueue
{
    use Queueable;

    public function __construct(private string $data) {}

    public function handle(): void
    {
        $response = Prism::text()
            ->using('openai', 'gpt-4o')
            ->withPrompt('Process this data: {data}')
            ->withVariables(['data' => $this->data])
            ->asText();
    }
}
// Cache identical prompts to avoid repeated API calls
$cacheKey = 'prism:query:' . md5($prompt);

return Cache::remember($cacheKey, now()->addMinutes(5), function () use ($prompt) {
    return Prism::text()->using('openai', 'gpt-4o')->withPrompt($prompt)->asText();
});
Provider-Specific Quirks:
Anthropic: use the cache_control / cacheType provider options for prompt caching (see docs).
Timeouts: the default 30s client timeout may be too short for complex models (e.g., gpt-4). Override with:
->withClientOptions(['timeout' => 120])
Configuration Overrides:
Runtime overrides (e.g., usingProviderConfig()) re-resolve the provider, which can be costly; prefer static config for performance.
Variable Injection:
Escape user-supplied values before injecting them into prompts:
$safeVariables = collect($userInput)->map(fn($val) => htmlspecialchars($val))->all();
Rate Limits:
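One way to throttle LLM calls is Laravel's built-in RateLimiter facade; a minimal sketch (the limiter key and limits here are illustrative, not prescribed by Prism):

```php
<?php

use Illuminate\Support\Facades\RateLimiter;
use Prism\Prism\Facades\Prism;

// Allow at most 10 LLM calls per user per minute
$result = RateLimiter::attempt(
    'llm:' . $userId,
    10,
    fn () => Prism::text()->using('openai', 'gpt-4o')->withPrompt($prompt)->asText(),
    60
);

if ($result === false) {
    // Over the limit; back off or queue the request for later
}
```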
Respect provider rate limits; a throttling package such as spatie/rate-limiter can also help.
Debugging:
Prism::setLogLevel(\Monolog\Logger::DEBUG);
Use withClientOptions(['debug' => true]) to log HTTP payloads:
->withClientOptions([
    'debug' => true,
    'curl' => [CURLOPT_VERBOSE => true],
])
Custom Providers:
Extend Prism\Prism\Contracts\Provider to support unsupported APIs:
class CustomProvider implements Provider
{
    public function generate($model, array $options)
    {
        // Implement your request/response logic here
    }
}
Register via config/prism.php:
'providers' => [
    'custom' => [
        'class' => \App\Providers\CustomProvider::class,
        'config' => [...],
    ],
],
Middleware:
Add middleware to modify requests/responses globally:
Prism::middleware(function ($request, $next) {
    $request->merge(['custom_header' => 'value']);
    return $next($request);
});
Event Listeners:
Listen to prism.before-request and prism.after-response events for logging/auditing:
Prism::listen('prism.before-request', function ($request) {
    Log::info('LLM Request', $request->toArray());
});
Cost Optimization:
Use smaller models (e.g., gpt-3.5-turbo) for drafts, then upscale for final outputs.
// Draft with cheaper model
$draft = Prism::text()->using('openai', 'gpt-3.5-turbo')->withPrompt($prompt)->asText();
// Final with premium model
$final = Prism::text()->using('openai', 'gpt-4')->withPrompt("Improve this: {$draft}")->asText();
Prompt Engineering:
Leverage Prism’s SystemMessage and UserMessage for structured prompts:
$response = Prism::text()
    ->using('openai', 'gpt-4')
    ->withSystemPrompt('You are a technical writer.')
    ->withUserMessage('Explain Docker in simple terms.')
    ->asText();
Testing:
Mock providers in tests using Prism::fake():
use Prism\Prism\Facades\Prism;

public function test_llm_response()
{
    Prism::fake(['openai' => 'gpt-4o']);

    $response = Prism::text()
        ->using('openai', 'gpt-4o')
        ->withPrompt('Test prompt.')
        ->asText();

    $this->assertEquals('Mocked response', $response);
}