Announcements

Stay informed about Portkey's newest releases, features, and improvements

  1. Day 0 support for Claude Opus 4.1!

    Claude Opus 4.1 is Anthropic’s most capable model yet. It delivers stronger reasoning, better code generation, and agentic task handling, with a native 200K-token context window.

     

    With Portkey, bring Opus 4.1 into production with:

    ✅ Unified access via Bedrock, Vertex AI & Anthropic providers

    ✅ Guardrails for safe, compliant usage

    ✅ Full observability: logs, latency, spend

    ✅ Budget and rate-limit controls across teams and apps

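    Since access is unified, an Opus 4.1 call is an ordinary OpenAI-style chat completion. Below is a minimal sketch of the request body such a call would carry; the model slug is an assumption to verify against your provider's model list in the Portkey dashboard, and the prompt is a placeholder.

    ```python
    def build_opus_request(prompt: str, max_tokens: int = 1024) -> dict:
        """Assemble an OpenAI-compatible chat-completion body for Opus 4.1.

        The model slug is an assumption -- confirm it against your
        provider's model list in the Portkey dashboard.
        """
        return {
            "model": "claude-opus-4-1",
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": max_tokens,
        }

    # With the portkey_ai SDK, the same body would be sent as (placeholders):
    #   client = Portkey(api_key="...", virtual_key="...")
    #   client.chat.completions.create(**build_opus_request("Hello"))
    body = build_opus_request("Summarize the attached incident report.")
    ```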
  2. ⚡ You can now run GPT‑OSS models on Portkey!

    OpenAI’s new open-weight models, gpt-oss-120b and gpt-oss-20b, deliver strong reasoning, tool use, and long-context capabilities.

     

    With Portkey, get GPT‑OSS into production with:

     

    ✅ Smart routing via OpenAI, Together AI, Nscale, Groq & other providers

    ✅ Use directly in the Prompt Playground

    ✅ Prompt tooling, observability & caching built in

    ✅ Guardrails, retries, and failover out of the box

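    Smart routing is driven by a Portkey gateway config. The sketch below builds a fallback config that tries gpt-oss-120b on a primary provider and fails over to a backup; the virtual-key names are placeholders you create in the dashboard, and the strategy/targets/override_params shape follows Portkey's config schema.

    ```python
    def gpt_oss_fallback_config(primary_key: str, backup_key: str) -> dict:
        """Gateway config: try the primary provider, fail over to the backup.

        Virtual-key names are placeholders from your Portkey dashboard.
        """
        return {
            "strategy": {"mode": "fallback"},
            "targets": [
                {"virtual_key": primary_key,
                 "override_params": {"model": "gpt-oss-120b"}},
                {"virtual_key": backup_key,
                 "override_params": {"model": "gpt-oss-120b"}},
            ],
        }

    # Example: prefer a Groq deployment, fall back to Together AI (placeholder names).
    config = gpt_oss_fallback_config("groq-prod", "together-prod")
    ```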
  3. ⚡ Featherless.ai is live on Portkey!

    Featherless gives you access to 11,900+ open-source models with unlimited tokens, instantly deployable for fine-tuning, testing, and production at scale.

     

    With Portkey, bring Featherless into production with:

     

    ✅ Unified access to all Featherless models

    ✅ Secure API key management with no extra config

    ✅ Full observability: logs, latency, spend

    ✅ Built-in caching, retries, and failover for reliability

     

    See how you can integrate here.

  4. AWS Bedrock Knowledge Bases integration is live 🎉

    AWS Bedrock Knowledge Bases enables you to give foundation models access to your company’s private data sources.

     

    With Portkey, you can:

    • Create and query AWS Bedrock KBs via Portkey’s unified API
    • Manage authentication without extra config
    • Add observability, caching, and reliability

     

     Learn more 

     

  5. Automatic user attribution is live!

    Working with API keys across teams?

     

    Portkey now automatically adds _user metadata to every request made with a user key. No more manual tagging or messy workarounds.

     

    This makes it easier to debug issues and track usage, all tied back to the right user.

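    For reference, the sketch below shows the manual tagging this replaces: attaching _user metadata to each request yourself via Portkey's metadata header. The header name follows Portkey's metadata convention, and the user id is a placeholder.

    ```python
    import json

    def manual_user_metadata_header(user_id: str) -> dict:
        """The hand-rolled tagging this feature makes unnecessary:
        attaching _user metadata to a request via Portkey's metadata header."""
        return {"x-portkey-metadata": json.dumps({"_user": user_id})}

    # Previously sent alongside every request; now added automatically
    # for requests made with a user key.
    headers = manual_user_metadata_header("alice@example.com")
    ```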

     

  6. 🥳 Portkey is now the Internet2 NET+ AI Gateway provider!

    Portkey is the official AI gateway provider for Internet2, helping universities roll out GenAI with centralized access, governance, and control.

     

    Want to see how leading universities are deploying GenAI across campus?

     

    Internet2 is hosting a live session where teams from NYU and Princeton University will share how they’re bringing GenAI into classrooms, research, and student-facing tools, securely and at scale.

     

    In this conversation, we’ll cover:

    • What centralized GenAI governance looks like in practice
    • How universities are managing access, budgets, and compliance
    • Lessons from NYU and Princeton on integrating GenAI across departments
    • How the NET+ evaluation and procurement process works

     


     

    📅 Thursday, August 1

    ⏰ 1:00 PM EDT

     🔗 Register 

  7. ⚡ You can now use Qwen 3-Coder on Portkey!

    One of the top open models on SWE-bench Verified, Qwen3-Coder excels at long-context and agentic coding tasks, with a native 256K-token context window that scales up to 1M.

     

    With Portkey, bring it into production with:

    ✅ Smart routing via Dashscope, Ollama & vLLM providers

    ✅ Guardrails for safe, compliant usage

    ✅ Full observability: logs, latency, spend

    ✅ Budget and rate-limit controls across teams and apps

  8. 👉 Portkey in Action

    Always fun to see Portkey in the hands of builders!

     

    Matt Blake at Planet No Code just dropped a great walkthrough on how to connect LLMs to a no-code front-end using Portkey — from setup to routing.

    If you’re building AI apps and want more visibility and control (without all the backend glue), this one’s worth a watch.

     

    Big thanks to Planet No Code for the deep dive 🙌

     

  9. Configure request logging!

    Not every team wants to log everything. Some need full visibility into every LLM request. Others have strict compliance rules and want to keep things minimal.

     

    Portkey now gives org owners control over what gets logged — from full request details to just high-level metrics.

     

    You can apply this org-wide, or allow individual workspace managers to modify it for their teams.

     

     Configure it in your org  

     

  10. Kimi K2 is now live on Portkey!

    Moonshot’s latest release, Kimi K2, is a low-cost, open-source model that surpasses top models on coding tasks.

    Bring Kimi K2 into production with Portkey’s AI gateway, which provides:

     

    ✅ Smart routing, retries, and failover

    ✅ Guardrails for safe, compliant usage

    ✅ Full observability — logs, latency, and cost

    ✅ Budget and rate-limit controls across teams

     

     Integrate Kimi K2 with Portkey -> 
