Heroku Blog
- News
- Last Updated: February 17, 2026
- Anush DSouza
Large language models are good at writing code. Data from Anthropic shows that allowing Claude to execute scripts, rather than relying on sequential tool calls, reduces token consumption by an average of 37%, with some use cases seeing reductions as high as 98%.
Untrusted code needs a secure and isolated place to execute. We solved this with code execution sandboxes (powered by one-off dynos), launched alongside Heroku Managed Inference and Agents in May 2025.
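To make the pattern concrete, here is a minimal sketch of the idea (not the Heroku sandbox API): the model emits a script, an isolated process runs it, and only the small final result flows back into the model's context instead of every intermediate tool result. The `run_in_sandbox` helper below is a hypothetical stand-in for a one-off dyno.

```python
# Hypothetical sketch of "let the model execute a script" instead of
# sequential tool calls. run_in_sandbox() is an illustrative stand-in for
# an isolated environment such as a one-off dyno, not a Heroku API.
import subprocess
import sys
import tempfile
import textwrap


def run_in_sandbox(script: str, timeout: int = 30) -> str:
    """Execute model-generated Python in a separate process and return
    only its stdout, keeping intermediate data out of the model context."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(script)
        path = f.name
    result = subprocess.run(
        [sys.executable, path], capture_output=True, text=True, timeout=timeout
    )
    return result.stdout.strip()


# One script replaces a chain of tool calls; only the aggregate comes back.
script = textwrap.dedent("""
    # Stand-in for data the agent would otherwise page through tool call by tool call.
    rows = [{"amount": 19.99}, {"amount": 5.00}, {"amount": 42.50}]
    total = sum(r["amount"] for r in rows)
    print(f"orders={len(rows)} total={total:.2f}")
""")

print(run_in_sandbox(script))  # -> orders=3 total=67.49
```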
- Engineering
- Last Updated: February 13, 2026
- Karunasri (Karuna) Garigipati
If you’ve ever debugged a production incident, you know the drill: IDE on one screen, Splunk on another, Sentry open in a third tab, frantically copying error messages between windows while your PagerDuty keeps buzzing.
You ask “What errors spiked in the last hour?” but instead of an answer, you have to context-switch, recall complex query syntax, and mentally correlate log timestamps with your code. By the time you find the relevant log, you’ve lost your flow. Meanwhile, the incident clock keeps ticking.
The workflow below fixes that broken loop. We’ll show you how to use the Model Context …
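The post walks through the full setup; as a rough sketch of the shape of that workflow, here is a minimal MCP server that exposes a single log-query tool an editor assistant could call. The tool name and in-memory log source are hypothetical placeholders, not the integration described in the post.

```python
# Minimal sketch of an MCP server exposing a log-query tool, so an IDE
# assistant can answer "what errors spiked in the last hour?" without
# leaving the editor. The tool and its in-memory log source are
# hypothetical placeholders.
from collections import Counter
from datetime import datetime, timedelta, timezone

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("log-triage")

# Stand-in for a real log backend (Splunk, Sentry, or `heroku logs`).
FAKE_LOGS = [
    {"ts": datetime.now(timezone.utc) - timedelta(minutes=m), "error": err}
    for m, err in [(5, "TimeoutError"), (12, "TimeoutError"), (48, "KeyError"), (90, "KeyError")]
]


@mcp.tool()
def errors_last_hour() -> dict[str, int]:
    """Count errors seen in the last 60 minutes, grouped by error type."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=1)
    recent = [log["error"] for log in FAKE_LOGS if log["ts"] >= cutoff]
    return dict(Counter(recent))


if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for an MCP-capable client
```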
- Ecosystem, News
- Last Updated: February 11, 2026
- Alberto Sigismondi
Today, we are thrilled to announce the General Availability (GA) of the Heroku GitHub Enterprise Server Integration.
For our Enterprise customers, the bridge between code and production must be more than just convenient. It must be resilient, secure, and governed at scale. While our legacy OAuth integration served us well, the modern security landscape demands a shift away from personal credentials toward managed service identities.
- News
- Last Updated: February 06, 2026
- Nitin T Bhat
Today, Heroku is transitioning to a sustaining engineering model focused on stability, security, reliability, and support. Heroku remains an actively supported, production-ready platform, with an emphasis on maintaining quality and operational excellence rather than introducing new features. We know changes like this can raise questions, and we want to be clear about what this means for customers.
There is no change for customers using Heroku today. Customers who pay via credit card in the Heroku dashboard—both existing and new—can continue to use Heroku with no changes to pricing, billing, service, or day-to-day usage. Core platform functionality, including applications, pipelines, teams, …
- Engineering, News
- Last Updated: January 29, 2026
- Anush DSouza
If you’ve built a RAG (Retrieval Augmented Generation) system, you’ve probably hit this wall: your vector search returns 20 documents that are semantically similar to the query, but half of them don’t actually answer it.
A user asks “how do I handle authentication errors?” and gets back documents about authentication, errors, and error handling that sit close to the query in embedding space, but only one or two of them are actually useful.
This is the gap between demo and production. Most tutorials stop at vector search; this reference architecture shows what comes next. The AI Search reference app shows you how to build a production-grade enterprise …
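For a sense of the baseline those tutorials stop at, here is a minimal retrieval step that ranks documents by cosine similarity alone. The `embed` function is a toy stand-in for a real embedding model; everything the reference architecture adds comes after this step.

```python
# Deliberately minimal retrieval: rank documents by cosine similarity only.
# embed() is a toy hashed bag-of-words placeholder; swap in a real
# embedding model in practice.
import hashlib

import numpy as np


def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for an embedding model: hashed bag of words."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[int(hashlib.md5(token.encode()).hexdigest(), 16) % dim] += 1.0
    return vec


def top_k_by_cosine(query: str, documents: list[str], k: int = 20) -> list[str]:
    """Return the k documents closest to the query in embedding space."""
    q = embed(query)
    scored = []
    for doc in documents:
        d = embed(doc)
        score = float(np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d)))
        scored.append((score, doc))
    return [doc for _, doc in sorted(scored, reverse=True)[:k]]


docs = [
    "Authentication errors return HTTP 401; refresh the token and retry.",
    "Overview of authentication concepts and supported identity providers.",
    "Error handling guide: logging, retries, and alerting patterns.",
]
# All three come back as "similar" -- but only one answers the question.
for doc in top_k_by_cosine("how do I handle authentication errors?", docs, k=3):
    print(doc)
```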
- News
- Last Updated: January 15, 2026
- Anush DSouza, Mandeep Bal
Today, we are announcing the general availability of reranking models on Heroku Managed Inference and Agents, featuring support for Cohere Rerank 3.5 and Amazon Rerank 1.0.
Semantic reranking models score documents based on their relevance to a specific query. Unlike keyword search or vector similarity, rerank models understand nuanced semantic relationships to identify the most relevant documents for a given question. Reranking acts as your RAG pipeline’s high-fidelity filter, decreasing noise and token costs by identifying which documents best answer the specific query.
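In a pipeline, the rerank stage slots in between retrieval and generation. The sketch below illustrates that pattern generically; the `rerank` helper, endpoint URL, request body, and response fields are assumptions for illustration, not the Heroku Managed Inference API.

```python
# Retrieve-then-rerank sketch. rerank() is a hypothetical stand-in for a
# hosted reranking model (for example Cohere Rerank 3.5 or Amazon Rerank 1.0);
# the endpoint URL, request body, and response fields below are assumptions.
import os

import requests

RERANK_URL = os.environ.get("RERANK_URL", "https://example.invalid/v1/rerank")
RERANK_KEY = os.environ.get("RERANK_KEY", "")


def rerank(query: str, documents: list[str], top_n: int = 3) -> list[str]:
    """Score each candidate document against the query and keep the top_n."""
    resp = requests.post(
        RERANK_URL,
        headers={"Authorization": f"Bearer {RERANK_KEY}"},
        json={"query": query, "documents": documents, "top_n": top_n},
        timeout=30,
    )
    resp.raise_for_status()
    # Assumed response shape: {"results": [{"index": 2, "relevance_score": 0.97}, ...]}
    results = sorted(resp.json()["results"], key=lambda r: r["relevance_score"], reverse=True)
    return [documents[r["index"]] for r in results[:top_n]]


if __name__ == "__main__":
    # Requires RERANK_URL / RERANK_KEY to point at a real rerank endpoint.
    candidates = [  # in practice, the top 20 hits from your vector search
        "Authentication errors return HTTP 401; refresh the token and retry.",
        "Overview of authentication concepts and supported identity providers.",
        "Error handling guide: logging, retries, and alerting patterns.",
    ]
    print(rerank("how do I handle authentication errors?", candidates, top_n=1))
```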