Announcements

Stay informed about Portkey's newest releases, features, and improvements

  1. OpenAI Image Gen API on Portkey

    Announcement

    We're excited to announce that OpenAI's powerful new image generation API (with the gpt-image-1 model) is now supported on Portkey! 🎨

    What This Means For You

    You can now access OpenAI's latest image generation capabilities directly through your existing Portkey integration, with all the benefits of:

    • Higher fidelity, more accurate images
    • Diverse visual styles to match your creative needs
    • Precise image editing functionality
    • Rich world knowledge for generating contextually relevant visuals
    • Consistent text rendering for images with text elements

     Link to docs  
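As a rough sketch, a gpt-image-1 request through Portkey's OpenAI-compatible gateway might look like the following. The payload shape mirrors OpenAI's Images API; the header names follow Portkey's x-portkey-* convention, and the key values are placeholders:

```python
import json

# Hypothetical payload for gpt-image-1 via Portkey's /v1/images/generations
# route (assumption: it mirrors OpenAI's Images API request shape).
payload = {
    "model": "gpt-image-1",
    "prompt": "A watercolor fox reading a book in an autumn forest",
    "n": 1,
    "size": "1024x1024",
}

# Actually sending it would need real keys, so the call is left commented out:
# import requests
# resp = requests.post(
#     "https://api.portkey.ai/v1/images/generations",
#     headers={
#         "x-portkey-api-key": "YOUR_PORTKEY_API_KEY",
#         "x-portkey-virtual-key": "YOUR_OPENAI_VIRTUAL_KEY",
#         "Content-Type": "application/json",
#     },
#     data=json.dumps(payload),
# )

print(json.dumps(payload, indent=2))
```

Because the route is OpenAI-compatible, existing OpenAI image-generation code should only need its base URL and headers pointed at Portkey.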

  2. 🎉 Our happy customers 🎉

    We're excited to share our customers' experiences and the impact our solutions have made 💪. Hear it straight from those who know us best!

    A huge thank you to our community for joining us on our journey. Your support helps us grow and improve! It'd be great if you could share your experience as well, here.

     

    Read the complete reviews here:

    Review 1
    Review 2

  3. Virtual keys for self-hosted models!

    You can now create a virtual key for any self-hosted model, whether you're running Ollama, vLLM, or any custom/private model.

     

    ✅ No extra setup required

    ✅ Stay in control with logs, traces, and key metrics

    ✅ Manage all your LLM interactions through one interface

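A minimal sketch of what calling a self-hosted Ollama model through such a virtual key could look like. The virtual key name and model are placeholders, and the SDK call is commented out so the sketch stays self-contained:

```python
# Sketch: routing a chat request to a self-hosted Ollama model via a Portkey
# virtual key. "ollama-local-vk" and "llama3" are illustrative placeholders.
#
# from portkey_ai import Portkey
# client = Portkey(
#     api_key="YOUR_PORTKEY_API_KEY",
#     virtual_key="ollama-local-vk",  # virtual key pointing at your own host
# )
# reply = client.chat.completions.create(**request_body)

# The request body itself is plain OpenAI-style chat-completions JSON:
request_body = {
    "model": "llama3",
    "messages": [{"role": "user", "content": "Hello from my own hardware!"}],
}
print(request_body)
```

The point of the virtual key is that nothing else in your calling code changes: the same OpenAI-shaped request works against a local model.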
  4. Portkey now integrates with Perplexity!

    Perplexity has quickly become the go-to for relevant, real-time answers with citations — whether it’s for research, summarization, or travel planning.

     

    And now, you can use Perplexity through the Portkey AI Gateway.

    With Portkey, you can:

    ✅ Monitor every Perplexity call with full traceability

    ✅ Apply guardrails, retries, rate limits, and cost controls

    ✅ Set up fallbacks with other models like OpenAI, Claude, or Mistral

    ✅ Pass advanced request params like response_format and search_recency_filter to personalize and fine-tune results

     

    Read more about the integration here 

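As an illustration of the advanced params mentioned above, a Perplexity request through Portkey might carry them like this. The model slug "sonar" and the exact response_format value are assumptions to check against Perplexity's docs:

```python
# Sketch: a Perplexity chat request routed through Portkey's Gateway,
# forwarding Perplexity-specific parameters alongside the standard fields.
payload = {
    "model": "sonar",  # placeholder Perplexity model slug
    "messages": [
        {"role": "user", "content": "Summarize this week's AI news with sources."}
    ],
    # Perplexity-specific knobs passed through by the Gateway:
    "search_recency_filter": "week",      # restrict search results to the past week
    "response_format": {"type": "text"},  # assumed shape; see Perplexity's API docs
}
print(payload)
```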
  5. ✨ Tool calling now available for OpenRouter!

     

    You can now use tool calling with OpenRouter via Portkey. Agents powered by OpenRouter models can call functions, access external tools, and complete multi-step tasks, all while running through Portkey’s Gateway.

     

    No extra setup needed. Try it today!
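For reference, the tool definitions use the standard OpenAI function-calling schema. A sketch of a request body, with an illustrative tool and an example OpenRouter model slug:

```python
# Sketch: an OpenAI-style tool definition attached to an OpenRouter request
# going through Portkey. The tool and model slug are illustrative placeholders.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

request = {
    "model": "anthropic/claude-3.5-sonnet",  # any OpenRouter model slug
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
}
print(request["tools"][0]["function"]["name"])
```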

  6. ✨Shipping a smoother developer experience

    Some updates to our Gateway that will help you work faster and more efficiently!

     

    ✅ You can now use shorthand for guardrails in your API calls, making the creation of raw guardrails faster and simpler.

     

    ✅ Gateway now fills in the model field in /chat/completions responses, even if your provider doesn’t, keeping everything consistent with OpenAI’s API signature.

     

    ✅ Introduced a new retry setting, use_retry_after_header. When set to true and the provider returns retry-after or retry-after-ms headers, the Gateway uses those headers to determine retry wait times instead of applying the default exponential backoff for 429 responses.
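The retry setting above slots into the existing retry section of a gateway config. A sketch, assuming the documented "attempts" / "on_status_codes" field names; verify the exact shape against the Portkey config reference for your Gateway version:

```python
# Sketch of a Portkey gateway config that opts into provider-driven retry
# timing. Field names follow Portkey's retry config; treat the exact shape
# as an assumption.
retry_config = {
    "retry": {
        "attempts": 3,
        "on_status_codes": [429],
        "use_retry_after_header": True,  # honor retry-after / retry-after-ms
    }
}
print(retry_config)
```

With use_retry_after_header set to false (or omitted), the Gateway falls back to its default exponential backoff.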

  7. ✨ Portkey now integrates with Mistral guardrails

    You can now apply Mistral’s advanced content moderation across all your LLM traffic via Portkey — including OpenAI, Azure OpenAI, Anthropic, and more.

    With Mistral guardrails, you get:

     

    ✅ Fine-grained category filtering — including PII, violence, hate, and more

    ✅ Multilingual coverage across 11+ languages

    ✅ Seamless integration with all providers — no infra changes needed

     

    Read more about the integration here
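As a rough sketch, attaching a Mistral moderation guardrail to your traffic amounts to referencing its guardrail ID in a gateway config. The ID below is a placeholder for a guardrail created in the Portkey UI, and the field names are assumptions to confirm against the guardrails docs:

```python
# Sketch: a gateway config that runs a Mistral moderation guardrail on both
# requests and responses. "my-mistral-guardrail-id" is a placeholder; the
# "input_guardrails" / "output_guardrails" field names are assumptions.
config = {
    "input_guardrails": ["my-mistral-guardrail-id"],
    "output_guardrails": ["my-mistral-guardrail-id"],
}
print(config)
```

Because the guardrail runs at the Gateway, the same config applies no matter which upstream provider (OpenAI, Azure OpenAI, Anthropic, ...) serves the request.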

     

     

  8. 🚀 Tool calling for Ollama models!

    You can now use tool calling with Ollama models via Portkey’s Gateway.

     

    ✅ Get full visibility into tool calls and latency

    ✅ Log metadata, inputs, and outputs for every run

    ✅ Build agents that use local models and tools together

     

    Read more about it here 
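A sketch of what an Ollama tool-calling request through the Gateway might look like. The x-portkey-* header names follow Portkey's convention, but the custom-host header, model name, and tool are illustrative assumptions:

```python
# Sketch: tool calling against a local Ollama model through Portkey's
# Gateway. Header names, host, model, and tool are illustrative placeholders.
headers = {
    "x-portkey-api-key": "YOUR_PORTKEY_API_KEY",
    "x-portkey-provider": "ollama",
    "x-portkey-custom-host": "http://localhost:11434",  # your Ollama endpoint
}

body = {
    "model": "llama3.1",
    "messages": [
        {"role": "user", "content": "What is 12 * 7? Use the calculator tool."}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "calculator",
            "description": "Evaluate a basic arithmetic expression",
            "parameters": {
                "type": "object",
                "properties": {"expression": {"type": "string"}},
                "required": ["expression"],
            },
        },
    }],
}
print(body["model"])
```

Every tool call and its latency then shows up in Portkey's logs and traces like any other Gateway request.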
