Experiment, collaborate, and iterate on prompts while keeping your code clean. Proxy requests, and deploy when you're ready.
PromptShuttle makes it easy to prompt different LLMs, compare the results, and collaborate on prompts, so you can try new models and prompt variants in deployed applications. It proxies LLM API requests, reducing the burden of implementing different APIs while consolidating logs and invoices.
Experiment
Try prompts against various LLMs, separately or in parallel, and compare the results.
Collaborate
Track changes, add inline comments, and collaborate on prompts with your team.
Deploy
Change active prompt versions and default LLMs per environment, without code changes.
Get instant access to all models now!
Access GPT-4o, Llama 3, Gemini 1.5 Flash and Pro, Claude 3.5 Sonnet, Mixtral 8x7B, and dozens of other models right away, and experiment with different LLMs.
Prompt Templating
Template with [[tokens]] and replace values in API calls with a simple { "key": "value" } dictionary.
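The exact request format is PromptShuttle's own, but the substitution model described above can be sketched in a few lines of Python. The function name and behavior here are illustrative assumptions, not the real API:

```python
import re

def fill_template(template: str, values: dict) -> str:
    """Replace each [[token]] placeholder with its value from the dictionary.

    Hypothetical sketch of [[token]]-style templating; not PromptShuttle's
    actual implementation.
    """
    return re.sub(r"\[\[(\w+)\]\]", lambda m: str(values[m.group(1)]), template)

prompt = fill_template(
    "Summarize the following [[language]] article in [[count]] bullet points.",
    {"language": "German", "count": 3},
)
# prompt == "Summarize the following German article in 3 bullet points."
```

The same template can be reused across models and environments; only the value dictionary changes per request.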
Prompt Comments
Add inline /* C-style */ or // C++-style comments to prompts for better collaboration and change tracking.
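Comments like these would presumably be stripped before a prompt reaches the model. A rough sketch of that idea (again an illustrative assumption, not PromptShuttle's parser):

```python
import re

def strip_comments(prompt: str) -> str:
    """Remove /* block */ and // line comments from a prompt template."""
    prompt = re.sub(r"/\*.*?\*/", "", prompt, flags=re.DOTALL)  # block comments
    prompt = re.sub(r"//[^\n]*", "", prompt)                    # line comments
    return re.sub(r"[ \t]+(\n|$)", r"\1", prompt)               # trim trailing spaces

raw = """Answer politely. /* reviewer: keep tone neutral */
List three options. // TODO: tune later"""
clean = strip_comments(raw)
# clean == "Answer politely.\nList three options."
```

Note this naive version would also strip a "//" inside a URL; a real parser has to handle such cases.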
Template Versions
Version every change to your prompt templates and review the history in a simple UI.
Centralized Billing and Limits
Centralize LLM billing and access keys through a single provider
LLM Proxy
Proxy requests to gather statistics, enable automated fallbacks, and work against a single interface.
Fake LLM
Send requests to a trivial responder to test your prompt templates without incurring costs
PromptShuttle keeps your code clean right from the start, with no commitments and simple usage-based pricing. Access more features and cheaper token pricing with our monthly or annual plans.