Heroku Blog
- Engineering
- Last Updated: February 13, 2026
- Karunasri (Karuna) Garigipati
If you’ve ever debugged a production incident, you know the drill: IDE on one screen, Splunk on another, Sentry open in a third tab, frantically copying error messages between windows while your PagerDuty keeps buzzing.
You ask “What errors spiked in the last hour?” but instead of an answer, you have to context-switch, recall complex query syntax, and mentally correlate log timestamps with your code. By the time you find the relevant log, you’ve lost your flow. Meanwhile the incident clock keeps ticking away.
The workflow below fixes that broken loop. We’ll show you how to use the Model Context …
- Ecosystem, News
- Last Updated: February 11, 2026
- Alberto Sigismondi
Today, we are thrilled to announce the General Availability (GA) of the Heroku GitHub Enterprise Server Integration.
For our Enterprise customers, the bridge between code and production must be more than just convenient. It must be resilient, secure, and governed at scale. While our legacy OAuth integration served us well, the modern security landscape demands a shift away from personal credentials toward managed service identities.
- News
- Last Updated: February 06, 2026
- Nitin T Bhat
Today, Heroku is transitioning to a sustaining engineering model focused on stability, security, reliability, and support. Heroku remains an actively supported, production-ready platform, with an emphasis on maintaining quality and operational excellence rather than introducing new features. We know changes like this can raise questions, and we want to be clear about what this means for customers.
There is no change for customers using Heroku today. Customers who pay via credit card in the Heroku dashboard—both existing and new—can continue to use Heroku with no changes to pricing, billing, service, or day-to-day usage. Core platform functionality, including applications, pipelines, teams, …
- Engineering, News
- Last Updated: January 29, 2026
- Anush DSouza
If you’ve built a RAG (Retrieval Augmented Generation) system, you’ve probably hit this wall: your vector search returns 20 documents that are semantically similar to the query, but half of them don’t actually answer it.
A user asks “how do I handle authentication errors?” and gets back documentation about authentication, errors, and error handling, all nearby in embedding space, but only one or two of those documents actually answer the question.
This is the gap between demo and production. Most tutorials stop at vector search; this reference architecture shows what comes next. The AI Search reference app shows you how to build a production-grade enterprise …
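To make that gap concrete, here is a structural sketch (in Python) of the retrieve-then-rerank pattern a production pipeline typically adds on top of plain vector search. The helper names and signatures are hypothetical stand-ins, not part of the reference app; the point is the shape of the two stages, not any particular vector store or reranking model.

```python
# Hypothetical two-stage retrieval sketch. Stage 1 casts a wide, recall-oriented
# net with vector search; stage 2 uses a reranker to keep only the candidates
# that actually answer the question before they reach the LLM prompt.
from dataclasses import dataclass


@dataclass
class Doc:
    id: str
    text: str


def vector_search(query: str, k: int = 20) -> list[Doc]:
    """Placeholder: return the k documents nearest the query in embedding space
    (backed by your vector store, e.g. pgvector)."""
    raise NotImplementedError


def rerank(query: str, candidates: list[Doc], top_n: int = 5) -> list[Doc]:
    """Placeholder: score each candidate against the query with a semantic
    reranking model and return only the top_n most relevant ones."""
    raise NotImplementedError


def retrieve_context(query: str) -> list[Doc]:
    candidates = vector_search(query, k=20)    # similar, not necessarily relevant
    return rerank(query, candidates, top_n=5)  # relevant, ready for the prompt
```

The over-fetch in stage one is deliberate: it is cheap to pull twenty loosely similar candidates and let the reranker, which scores the query and each document together, decide which handful really answer “how do I handle authentication errors?”.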
- News
- Last Updated: January 15, 2026
- Anush DSouza, Mandeep Bal
Today, we are announcing the general availability of reranking models on Heroku Managed Inference and Agents, featuring support for Cohere Rerank 3.5 and Amazon Rerank 1.0.
Semantic reranking models score documents based on their relevance to a specific query. Unlike keyword search or vector similarity, rerank models understand nuanced semantic relationships to identify the most relevant documents for a given question. Reranking acts as your RAG pipeline’s high-fidelity filter, decreasing noise and token costs by identifying which documents best answer the specific query.
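As a rough illustration of where that rerank call sits in a pipeline, here is a minimal Python sketch. The `/v1/rerank` path, the model slug, the response shape, and the `INFERENCE_URL` / `INFERENCE_KEY` config vars are assumptions about the Managed Inference and Agents add-on (the payload follows Cohere’s public rerank API), so check the add-on documentation for the exact route before relying on them.

```python
# Minimal sketch: re-score vector-search candidates with a rerank model.
# Assumes the Managed Inference add-on exposes a Cohere-style /v1/rerank route
# and sets INFERENCE_URL / INFERENCE_KEY config vars; verify both against the docs.
import os

import requests


def rerank(query: str, documents: list[str], top_n: int = 5) -> list[str]:
    resp = requests.post(
        f"{os.environ['INFERENCE_URL']}/v1/rerank",
        headers={"Authorization": f"Bearer {os.environ['INFERENCE_KEY']}"},
        json={
            "model": "cohere-rerank-3-5",  # hypothetical model slug
            "query": query,
            "documents": documents,
            "top_n": top_n,
        },
        timeout=30,
    )
    resp.raise_for_status()
    # Cohere-style responses list (index, relevance_score) pairs, highest score first.
    results = resp.json()["results"]
    return [documents[r["index"]] for r in results]


# Feed only the reranked top_n documents into the prompt to cut noise and token cost.
```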
- News
- Last Updated: December 18, 2025
- Anush DSouza
This month marks a significant expansion for Heroku Managed Inference and Agents, directly accelerating our AI PaaS framework. We’re announcing a substantial addition to our model catalog, providing access to leading proprietary AI models such as Claude Opus 4.5 and Nova 2, as well as open-weight models such as Kimi K2 Thinking, MiniMax M2, and Qwen3. These models are fully managed, secure, and accessible via a single CLI command. We have also refreshed aistudio.heroku.com; navigate there from your Managed Inference and Agents add-on to access the models you have provisioned.
Whether you are building complex reasoning agents or high-performance consumer …