Prism Laravel Package

prism-php/prism

Prism is a Laravel package for integrating LLMs with a fluent API for text generation, multi-step conversations, and tool usage across multiple AI providers—letting you build AI features without dealing with low-level provider details.


Getting Started

Minimal Steps

  1. Installation:
    composer require prism-php/prism
    php artisan vendor:publish --tag=prism-config
    
  2. Configure your provider in config/prism.php (e.g., OpenAI, Anthropic) using environment variables.
  3. First use case: Generate text with a simple prompt:
    use Prism\Prism\Prism;
    
    $response = Prism::text()
        ->using('openai', 'gpt-4o')
        ->withPrompt('Explain quantum computing.')
        ->asText();
    
    echo $response->text; // the generated text lives on the response object
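Step 2 above points at config/prism.php. A sketch of the shape the published provider config takes for OpenAI — the exact key names may differ between versions, so check the file that vendor:publish actually writes:

```php
// config/prism.php (excerpt, illustrative)
'providers' => [
    'openai' => [
        'api_key' => env('OPENAI_API_KEY'),
        'url' => env('OPENAI_URL', 'https://api.openai.com/v1'),
    ],
],
```

Set the matching variables in .env so no secrets land in version control.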
    

Where to Look First

  • Configuration: config/prism.php (published via vendor:publish).
  • Provider Docs: /docs/providers/{provider}.md (e.g., anthropic.md).
  • API Reference: Prism Website.

Implementation Patterns

Core Workflows

1. Text Generation

// Basic text generation; interpolate data directly into the prompt string
$response = Prism::text()
    ->using('openai', 'gpt-4o')
    ->withPrompt("Summarize this article: {$articleText}")
    ->asText();

echo $response->text;

2. Conversational Chains

// Multi-turn conversation via an explicit message history
use Prism\Prism\ValueObjects\Messages\UserMessage;
use Prism\Prism\ValueObjects\Messages\AssistantMessage;

$response = Prism::text()
    ->using('anthropic', 'claude-3-5-sonnet-latest')
    ->withSystemPrompt('You are a helpful assistant.')
    ->withMessages([
        new UserMessage('Hello!'),
        new AssistantMessage('Hi there! How can I help?'),
        new UserMessage('Tell me a joke.'),
    ])
    ->asText();

3. Tool Integration

// Tools let the model call back into your application code
use Prism\Prism\Facades\Tool;

$response = Prism::text()
    ->using('openai', 'gpt-4o')
    ->withPrompt('Book a flight from NYC to LAX.')
    ->withMaxSteps(3) // allow a tool-call round trip before the final answer
    ->withTools([
        Tool::as('book_flight')
            ->for('Books a flight between two airports.')
            ->withStringParameter('departure', 'Departure city or airport code')
            ->withStringParameter('arrival', 'Arrival city or airport code')
            ->withStringParameter('date', 'Departure date, YYYY-MM-DD')
            ->using(fn (string $departure, string $arrival, string $date): string => "Booked {$departure} to {$arrival} on {$date}."),
    ])
    ->asText();

4. Provider-Specific Features

// Anthropic prompt caching: mark a reusable system message as cacheable
use Prism\Prism\ValueObjects\Messages\SystemMessage;
use Prism\Prism\ValueObjects\Messages\UserMessage;

$response = Prism::text()
    ->using('anthropic', 'claude-3-5-sonnet-latest')
    ->withMessages([
        (new SystemMessage('Reusable system message.'))
            ->withProviderOptions(['cacheType' => 'ephemeral']),
        new UserMessage('Use the cached system prompt.'),
    ])
    ->asText();

Integration Tips

  • Service Providers: The package's own service provider already binds its manager into the container, so prefer type-hinting it over re-binding (verify the class name against your installed version):
    use Prism\Prism\PrismManager;
    
    public function __construct(private PrismManager $prism) {}
    
  • Jobs/Queues: Offload LLM calls to background jobs to keep requests fast. Note that Queueable is a trait, not an interface; the job implements ShouldQueue:
    use Illuminate\Bus\Queueable;
    use Illuminate\Contracts\Queue\ShouldQueue;
    use Prism\Prism\Prism;
    
    class GenerateTextJob implements ShouldQueue {
        use Queueable;
    
        public function __construct(private string $data) {}
    
        public function handle(): void {
            $response = Prism::text()
                ->using('openai', 'gpt-4o')
                ->withPrompt("Process this data: {$this->data}")
                ->asText();
        }
    }
    
  • Caching Responses: Cache frequent queries to reduce API calls. Store the generated text rather than the full response object so the cached value serializes cleanly:
    $cacheKey = 'prism:query:' . md5($prompt);
    return Cache::remember($cacheKey, now()->addMinutes(5), function () use ($prompt) {
        return Prism::text()->using('openai', 'gpt-4o')->withPrompt($prompt)->asText()->text;
    });
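Keying the cache on the prompt alone lets two different providers or models collide on the same entry. A plain-PHP helper (the function name is illustrative) that folds provider and model into the key:

```php
// Build a deterministic cache key unique per provider, model, and prompt,
// so switching models never serves a stale response generated by another model.
function prismCacheKey(string $provider, string $model, string $prompt): string
{
    return 'prism:' . sha1($provider . '|' . $model . '|' . $prompt);
}
```

Pass the result to Cache::remember() in place of the md5-of-prompt key above.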
    

Gotchas and Tips

Pitfalls

  1. Provider-Specific Quirks:

    • Anthropic: Use cache_control or cacheType for prompt caching (see docs).
    • OpenAI: Tools require explicit parameter definitions; mismatches cause errors.
    • Timeouts: Default 30s may be too short for complex models (e.g., gpt-4). Override with:
      ->withClientOptions(['timeout' => 120])
      
  2. Configuration Overrides:

    • Dynamic overrides (e.g., usingProviderConfig()) re-resolve the provider, which can be costly. Prefer static config for performance.
  3. Variable Injection:

    • Treat user-provided values interpolated into prompts as untrusted. htmlspecialchars() protects HTML output; it does nothing against prompt injection. Validate and length-limit input instead, and never act on model output without checks:
      $safeInput = Str::limit(trim($userInput), 2000); // illustrative: constrain length before interpolation
      
  4. Rate Limits:

    • Monitor API usage; Prism doesn't retry or throttle for you by default. Use Laravel's built-in RateLimiter facade (or rate-limiting job middleware) for throttling.
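Since retries are left to you, a small exponential-backoff wrapper usually suffices. A plain-PHP sketch (the function name is illustrative; Laravel's built-in retry() helper offers similar behavior):

```php
// Retry a callable with exponential backoff: base, 2x base, 4x base, ...
// Rethrows the last error once $maxAttempts is exhausted.
function retryWithBackoff(callable $call, int $maxAttempts = 3, int $baseDelaySeconds = 1)
{
    for ($attempt = 1; ; $attempt++) {
        try {
            return $call();
        } catch (\Throwable $e) {
            if ($attempt >= $maxAttempts) {
                throw $e; // out of attempts: surface the last error
            }
            sleep($baseDelaySeconds * (2 ** ($attempt - 1)));
        }
    }
}
```

Wrap the LLM call in it, e.g. `retryWithBackoff(fn () => Prism::text()->using('openai', 'gpt-4o')->withPrompt($prompt)->asText());`.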

Debugging

  • Log Around Calls: Wrap requests with Laravel's logger rather than relying on a package-level log switch:
    Log::debug('Prism request', ['prompt' => $prompt]);
    
  • Inspect Raw Requests: withClientOptions() forwards options to the underlying Guzzle client, so its debug flag dumps raw HTTP traffic:
    ->withClientOptions(['debug' => true])
    

Extension Points

  1. Custom Providers: To support an API the package doesn't ship with, implement its provider contract. The exact interface and registration hook vary by version, so verify against the custom-provider docs; the shape is roughly:

    class CustomProvider implements Provider {
        public function generate($model, array $options) {
            // Implement your logic
        }
    }
    

    Register via config/prism.php:

    'providers' => [
        'custom' => [
            'class' => \App\Providers\CustomProvider::class,
            'config' => [...],
        ],
    ],
    
  2. Cross-Cutting Logging/Auditing: The package does not advertise a middleware or event API, so the dependable route is a thin wrapper service of your own (class and method names here are illustrative):

    class AuditedPrism {
        public function generate(string $provider, string $model, string $prompt): string {
            Log::info('LLM request', compact('provider', 'model', 'prompt'));
    
            $text = Prism::text()->using($provider, $model)->withPrompt($prompt)->asText()->text;
    
            Log::info('LLM response', ['text' => $text]);
    
            return $text;
        }
    }
    

Pro Tips

  • Cost Optimization: Use smaller models (e.g., gpt-4o-mini) for drafts, then a stronger model for the final output.

    // Draft with a cheaper model
    $draft = Prism::text()->using('openai', 'gpt-4o-mini')->withPrompt($prompt)->asText()->text;
    // Final pass with a stronger model
    $final = Prism::text()->using('openai', 'gpt-4o')->withPrompt("Improve this: {$draft}")->asText()->text;
    
  • Prompt Engineering: Separate the system prompt from the user prompt (or use SystemMessage/UserMessage objects via withMessages()) for structured instructions:

    $response = Prism::text()
        ->using('openai', 'gpt-4o')
        ->withSystemPrompt('You are a technical writer.')
        ->withPrompt('Explain Docker in simple terms.')
        ->asText();
    
  • Testing: Fake responses with Prism::fake() so tests never hit a real API (check the testing docs if the fake-response helper is named differently in your version):

    use Prism\Prism\Prism;
    use Prism\Prism\Testing\TextResponseFake;
    
    public function test_llm_response(): void {
        Prism::fake([
            TextResponseFake::make()->withText('Mocked response'),
        ]);
    
        $response = Prism::text()
            ->using('openai', 'gpt-4o')
            ->withPrompt('Test prompt.')
            ->asText();
    
        $this->assertEquals('Mocked response', $response->text);
    }
    