

News

We’ve transitioned to a Sustaining Engineering model to better serve the customers who rely on us every day. Our mission is simple: to provide the most stable, secure, and reliable environment for your apps and data. We will continue releasing features and functionality that align with our Sustaining Engineering goals and provide a more robust and efficient platform to our customers.

Today we are excited to share three recent enhancements:

Heroku CLI v11 is now available. This release represents the most significant architectural overhaul in years, completing our migration to ECMAScript Modules (ESM) and oclif v4. This modernization brings faster performance, a new semantic color system, and aligns the CLI with modern JavaScript standards.

While v11 introduces breaking changes to legacy namespaces, the benefits are substantial: better performance, improved maintainability, and enhanced usability that simplifies how you manage Heroku resources from the command line.

Modern applications, especially those leveraging AI and data-heavy libraries, need more room to breathe. To support these evolving stacks and reduce developer friction, we’ve increased the default maximum compressed slug size from 500MB to 1GB.

Web browsers and certificate authorities are shortening the maximum allowed lifetime of TLS certificates. These changes will improve security on the web, but you may have to change certificate maintenance practices for apps you run on Heroku.

The good news is that if you’re using Heroku Automated Certificate Management, no changes are required: Heroku already refreshes and updates certificates on your apps according to the new policies.

If you maintain and upload …

Heroku is introducing significant updates to Managed Inference and Agents. These changes focus on reducing developer friction, expanding the model catalogue, and streamlining deployment workflows.

Large language models are good at writing code. Data from Anthropic shows that allowing Claude to execute scripts, rather than relying on sequential tool calls, reduces token consumption by an average of 37%, with some use cases seeing reductions as high as 98%.

Untrusted code needs a secure and isolated place to execute. We solved this with code execution sandboxes (powered by one-off dynos), launched alongside Heroku Managed Inference and Agents in May 2025.
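The core idea can be sketched locally: run model-generated code in a separate interpreter process with a timeout, rather than in the host process. This is only an illustration of process isolation, not Heroku's sandbox implementation; a real sandbox such as a one-off dyno also isolates the filesystem, network, and resources.

```python
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout: int = 5):
    """Run a code snippet in a separate interpreter process.

    Illustrative only: a production sandbox (e.g. a one-off dyno)
    adds filesystem, network, and resource isolation on top of this.
    """
    # Write the snippet to a temp file and execute it out-of-process,
    # so a crash or hang in the snippet cannot take down the caller.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        [sys.executable, path],
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout, result.returncode

out, rc = run_untrusted("print(sum(range(10)))")
print(out.strip(), rc)  # the snippet's output and its exit code
```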

Today, we are thrilled to announce the General Availability (GA) of the Heroku GitHub Enterprise Server Integration.

For our Enterprise customers, the bridge between code and production must be more than just convenient. It must be resilient, secure, and governed at scale. While our legacy OAuth integration served us well, the modern security landscape demands a shift away from personal credentials toward managed service identities.

Today, Heroku is transitioning to a sustaining engineering model focused on stability, security, reliability, and support. Heroku remains an actively supported, production-ready platform, with an emphasis on maintaining quality and operational excellence rather than introducing new features. We know changes like this can raise questions, and we want to be clear about what this means for customers.

There is no change for customers using Heroku today. Customers who pay via credit card in the Heroku …

If you’ve built a RAG (Retrieval Augmented Generation) system, you’ve probably hit this wall: your vector search returns 20 documents that are semantically similar to the query, but half of them don’t actually answer it.

A user asks “how do I handle authentication errors?” and gets back documents about authentication, about errors, and about error handling, all nearby in embedding space, but only one or two of them actually useful.

This is the gap between demo and production. Most tutorials stop at vector search; this AI Search reference architecture shows what comes next: how to build production-grade enterprise AI search using Heroku Managed Inference and Agents.

Today, we are announcing the general availability of reranking models on Heroku Managed Inference and Agents, featuring support for Cohere Rerank 3.5 and Amazon Rerank 1.0.

Semantic reranking models score documents based on their relevance to a specific query. Unlike keyword search or vector similarity, rerank models understand nuanced semantic relationships to identify the most relevant documents for a given question. Reranking acts as your RAG pipeline’s high-fidelity filter, decreasing noise and token costs by identifying which documents best answer the specific query.
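A minimal sketch of where the rerank stage sits after vector search. The lexical-overlap scorer below is a toy stand-in for a hosted semantic rerank model such as Cohere Rerank 3.5; the candidate documents and scoring are illustrative, and only the pipeline shape (retrieve many, rerank, keep few) reflects the technique described above.

```python
import re

def rerank(query, documents, top_n=3):
    """Score each candidate document against the query, best first."""
    query_terms = set(re.findall(r"\w+", query.lower()))

    def score(doc):
        doc_terms = set(re.findall(r"\w+", doc.lower()))
        # Fraction of query terms the document covers; a real rerank
        # model returns a learned semantic relevance score instead.
        return len(query_terms & doc_terms) / len(query_terms)

    return sorted(documents, key=score, reverse=True)[:top_n]

# Candidates as a vector search might return them: all close to
# "authentication" in embedding space, but only one answers the question.
candidates = [
    "Overview of authentication providers and SSO options.",
    "To handle authentication errors, catch 401 responses and refresh the token.",
    "General error handling patterns for background jobs.",
]

top = rerank("How do I handle authentication errors?", candidates, top_n=1)
print(top[0])
```

Passing the reranked top documents (rather than all retrieved candidates) to the generation step is what cuts the noise and token costs mentioned above.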
