spatie/robots-txt
Generate and serve a correct robots.txt in Laravel with an expressive API. Add user-agents, allow/disallow rules, sitemap and host directives, then publish the result via a route or controller, making it easy to manage crawler access per environment.
Install the package via Composer: composer require spatie/robots-txt. Publish the config file with php artisan vendor:publish --provider="Spatie\RobotsTxt\RobotsTxtServiceProvider". The config file (config/robots-txt.php) defines default rules and allows environment-specific overrides. By default, a GET /robots.txt route is auto-registered and returns a dynamic response. The service provider is registered through Laravel's package auto-discovery, so no manual entry in config/app.php is needed. For first use, update config/robots-txt.php to add your production sitemap and to disallow paths such as /admin.
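The published config might look like the sketch below. The exact keys (the per-environment array, 'user-agents', 'sitemaps') are assumptions for illustration; check the actual published config/robots-txt.php for the real structure:

```php
<?php

// config/robots-txt.php -- hypothetical shape, verify against the published file.
return [
    // Rules applied in every environment unless overridden below.
    'default' => [
        'user-agents' => [
            '*' => [
                'disallow' => ['/admin'],
                'allow'    => [],
            ],
        ],
        'sitemaps' => ['/sitemap.xml'],
    ],

    // Environment-specific overrides, keyed by APP_ENV.
    'environments' => [
        'local' => [
            // Block all crawlers on local machines.
            'user-agents' => ['*' => ['disallow' => ['/']]],
        ],
    ],
];
```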
Call RobotsTxt::configureUsing() in AppServiceProvider::boot() to apply logic-based rules (e.g., if (app()->isLocal()) { $robots->disallow('/'); }). A typical builder call looks like this:

```php
return RobotsTxt::create()
    ->userAgent('*', fn ($userAgent) => $userAgent
        ->disallow('/admin/')
        ->allow('/public/')
        // Prefer config('app.url') over env('APP_URL'): env() returns null
        // once the configuration is cached in production.
        ->sitemapUrl(config('app.url') . '/sitemap.xml')
    );
```
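If you prefer an explicit route over the auto-registered one, a minimal sketch follows. It assumes the builder renders to plain text via string casting, and that the package's own route can be disabled in config; verify both against your installed version:

```php
// routes/web.php -- explicit route sketch. Assumes RobotsTxt::create()
// renders via (string) casting; verify this for your package version,
// and disable the auto-registered route first if the config supports it.
use Illuminate\Support\Facades\Route;
use Spatie\RobotsTxt\RobotsTxt;

Route::get('/robots.txt', function () {
    $robots = RobotsTxt::create()
        ->userAgent('*', fn ($agent) => $agent->disallow('/admin/'));

    return response((string) $robots, 200)
        ->header('Content-Type', 'text/plain');
});
```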
Testing, gotchas, and debugging tips:

- Assert that GET /robots.txt returns the expected status and directives from the RobotsTxt builder, e.g. with Pest's assertSee() or a PHPUnit snapshot assertion against a known-good config snapshot.
- If a static robots.txt file exists in public/, it will be served instead of the route; delete the static file to avoid confusion.
- Cache the generated response with Cache::remember() or Laravel's response caching middleware if you rebuild it on every request.
- A user-agent rule for 'googlebot' won't match Googlebot's full user-agent string ('Googlebot/2.1'); prefer the wildcard '*' unless you are deliberately targeting specific bots.
- Config-file rules can be overridden by configureUsing() callbacks; combine the two carefully to avoid unintentional overrides.
- Run php artisan route:list | grep robots to confirm the route is registered, and use dd() inside configureUsing() to inspect runtime context (e.g. config('app.env')).
- A misconfigured robots.txt can block indexing entirely; always validate responses in a tool such as Google Search Console's robots.txt Tester.
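A minimal feature test for the route might look like the following (Pest syntax; the directives asserted here are examples, so adjust them to match your own config):

```php
// tests/Feature/RobotsTxtTest.php -- Pest sketch using Laravel's standard
// HTTP test assertions; the asserted directives depend on your config.
it('serves the generated robots.txt', function () {
    $this->get('/robots.txt')
        ->assertOk()
        ->assertSee('User-agent: *')
        ->assertSee('Disallow: /admin/');
});
```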