


Puma 7 is here, and that means your Ruby app is now keep-alive ready. This bug, which existed in Puma for years, caused one out of every 10 requests to take 10x longer by unfairly “cutting in line.” In this post, I’ll cover how web servers work, what caused this bad behavior in Puma, and how it was fixed in Puma 7, specifically through the architectural change recommended by MSP-Greg that was needed to address the issue.
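If you're on Puma 6 and suspect keep-alive connections are skewing your response times, there is a configuration knob you can reach for while you plan the upgrade. This is only a sketch, and it assumes the enable_keep_alives setting that I believe shipped in Puma 6.5; check your Puma version and the Puma docs before relying on it.

    # config/puma.rb
    # Assumed Puma 6.x workaround: disable keep-alive handling entirely so no
    # connection can "cut in line". Puma 7's architecture makes this unnecessary.
    enable_keep_alives false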

The Performance Penalty of Repeated Connections

Before the latest improvements to the Heroku Router, every connection between the router and your application dyno risked incurring the latency penalty of a TCP slow start. To understand why this is a performance bottleneck for modern web applications, we must look at the fundamentals of the Transmission Control Protocol (TCP) and its history with HTTP.
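To get a feel for why that penalty matters, here is a rough back-of-the-envelope sketch using my own illustrative numbers, not figures from this post: the congestion window starts small and roughly doubles every round trip, so a fresh connection needs several round trips before a typical response fits through it.

    # Rough slow-start arithmetic: how many round trips to deliver a response
    # on a cold connection? All numbers are illustrative assumptions.
    mss            = 1_460      # bytes per TCP segment
    cwnd_segments  = 10         # a common initial congestion window
    response_bytes = 100_000
    rtts = 0
    sent = 0
    while sent < response_bytes
      sent += cwnd_segments * mss
      cwnd_segments *= 2        # the window roughly doubles each round trip
      rtts += 1
    end
    puts "~#{rtts} round trips before the full response is on the wire"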

Maybe you’ve heard of keep-alive connections, but haven’t thought much about what they are or why they exist. In this post, we’re going to peel away some networking abstractions in order to explain what a keep-alive…
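For a quick taste of what connection reuse looks like from Ruby, Net::HTTP keeps a single TCP (and TLS) connection open for every request made inside a start block. The host below is just a placeholder.

    require "net/http"

    # One connection, several requests: the TCP and TLS setup cost is paid once.
    Net::HTTP.start("example.com", 443, use_ssl: true) do |http|
      3.times { http.get("/") }   # each call reuses the same keep-alive connection
    end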

Your app is slow. It does not spark joy. This post will use memory allocation profiling tools to discover performance hotspots, even when they're coming from inside a library. We will use this technique with a real-world application to identify a piece of optimizable code in Active Record that ultimately leads to a patch with a substantial impact on page speed.
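If you want to try this style of investigation on your own app, the memory_profiler gem is one common tool for it. Here is a minimal sketch; the block contents are a stand-in for whatever code path you suspect, not the Active Record case from this post.

    require "memory_profiler"

    # Report object allocations for a block of code, grouped by gem, file, and location.
    report = MemoryProfiler.report do
      # exercise the suspect code path here, e.g. render a view or run a query
      1_000.times { "user-#{rand(100)}".upcase }
    end
    report.pretty_print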

In addition to the talk, I’ve gone back and written a full technical recap of each section so you can revisit it any time without having to go through the video.

I make heavy use of theatrics here,…

During the development of the recently released Heroku SSL feature, a lot of work was carried out to stabilize the system and improve its speed. In this post, I will explain how we managed to improve the speed of our TLS handshakes by 4-5x.
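If you want to put a number on handshake time yourself, you can get a rough measurement from Ruby's standard library. This is only a sketch with a placeholder host, not the tooling we used internally, and it times the TCP connect together with the TLS handshake.

    require "socket"
    require "openssl"
    require "benchmark"

    host = "example.com"
    seconds = Benchmark.realtime do
      tcp = TCPSocket.new(host, 443)
      ssl = OpenSSL::SSL::SSLSocket.new(tcp, OpenSSL::SSL::SSLContext.new)
      ssl.hostname = host   # send SNI, as most servers expect
      ssl.connect           # the TLS handshake happens here
      ssl.close
      tcp.close
    end
    puts "connect + handshake took #{(seconds * 1000).round}ms"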

The initial reports of speed issues were sent our way by beta customers who were unhappy about the low level of performance. This was understandable since, after all, we were not greenfielding a solution for which nothing existed, but actively trying to provide an alternative to the SSL Endpoint add-on, which is provided by…

The asset pipeline is the slowest part of deploying a Rails app. How slow? On average, it’s over 20x slower than installing dependencies via $ bundle install. Why so slow? In this article, we’re going to take a look at some of the reasons the asset pipeline is slow and how we were able to get a 12x performance improvement on some apps with Sprockets version 3.3+.

The Rails asset pipeline uses the sprockets library to take your raw assets such as JavaScript or Sass files and pre-build minified, compressed assets that are ready to be served by a production…
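For context, this is roughly what the production end of that pipeline looks like; the options below are my assumption of a typical Sprockets 3-era setup, not configuration taken from this post.

    # config/environments/production.rb (typical Sprockets 3-era settings)
    config.assets.js_compressor = :uglifier   # minify JavaScript during precompile
    config.assets.compile       = false       # never compile assets on the fly in production
    config.assets.digest        = true        # fingerprint filenames so they can be cached aggressively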

Last year, we launched the original Performance dyno, designed to support the largest apps running at scale with more consistent service and faster response times. Today, with the goal of continuing to give our fast-growing customers more flexibility to choose the type of dyno best suited to their applications, we are excited to announce improvements to our Performance dyno lineup:

Performance-L — an improved and more powerful version of the existing Performance dyno, renamed the Performance-L dyno
Performance-M — an entirely new dyno and smaller sibling to the Performance-L dyno

The Performance-L dyno now has 14GB of RAM, 133% more…

In a recent patch we improved Rails response time by >10%, our largest improvement to date. I’m going to show you how I did it, and introduce you to the tools I used, because… who doesn’t want fast apps?

In addition to a speed increase, we see a 29% decrease in allocated objects. If you haven’t already, you can read or watch more about how temporarily allocated objects affect total memory use. Decreasing memory pressure on an app may allow it to run on a smaller dyno type, or to spawn more worker processes to handle more throughput. Let’s…
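You can get a rough count of how many objects a piece of code allocates straight from the Ruby VM, which is handy for checking this kind of win locally. The helper below is my own sketch, not the tooling behind the patch.

    # Count objects allocated while running a block (approximate, single-threaded).
    def allocations
      GC.disable
      before = GC.stat(:total_allocated_objects)
      yield
      GC.stat(:total_allocated_objects) - before
    ensure
      GC.enable
    end

    puts allocations { 1_000.times { { a: 1, b: 2 }.merge(c: 3) } }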

Working with our support team, I often see customers having timeout problems. Typically, their applications will start throwing H12 errors.

The decision to time out requests quickly wasn’t made to avoid having long-running requests on our router, nor to only have fast apps on our platform, but because standard web servers do not handle these types of requests particularly well.
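One common mitigation is to enforce a shorter timeout inside the app itself, so a slow request fails fast instead of tying up a worker for the full 30 seconds the router allows. A sketch using the rack-timeout gem follows; the 15-second figure is only an example, and you should check the gem's README for the exact configuration API in your version.

    # config.ru: sketch assuming the rack-timeout gem
    require "rack-timeout"

    use Rack::Timeout, service_timeout: 15   # raise inside the app well before the router's 30s limit
    run MyApp                                # MyApp is a placeholder for your Rack application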

How webservers work

All webservers work in a similar way: new requests go into a queue, and the server processes them one after the other.

This means if you have 30 requests in your queue, each taking…
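The arithmetic behind that kind of queueing is simple enough to sketch; the one-second service time below is just an illustrative number.

    # 30 queued requests handled one after another by a single worker.
    service_time = 1.0    # assumed seconds of work per request
    queue_depth  = 30
    waits = (0...queue_depth).map { |position| position * service_time }
    puts "the last request waits #{waits.last}s before the server even starts on it"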

Performance is important, and if we can’t measure something, we can’t make it fast. Recently, I’ve had my eye on the ActionDispatch::Static middleware in Rails. This middleware gets put at the front of your stack when you set config.serve_static_assets = true in your Rails app. It has to check every request that comes in to see if it should serve a file from disk or pass the request on to the rest of the stack. This post is about how I was able to benchmark the middleware and give it a crazy speed boost.
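To give a sense of how a middleware can be benchmarked in isolation, here is a minimal benchmark-ips sketch. The app, root path, and request path are placeholders of mine, not the exact harness from this post.

    require "benchmark/ips"
    require "action_dispatch"
    require "rack/mock"

    app    = ->(env) { [200, {}, ["passed through to the app"]] }
    static = ActionDispatch::Static.new(app, "public")            # serve files out of ./public
    env    = Rack::MockRequest.env_for("/assets/application.js")  # a typical asset path to check against disk

    Benchmark.ips do |x|
      x.report("ActionDispatch::Static") { static.call(env.dup) }
    end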

How ActionDispatch::Static Works

Right now to…

Flow is an important part of software development. The ability to achieve flow during daily work makes software development a uniquely enjoyable profession. Interruptions in your code/test loop make this state harder to achieve. Whether you are running unit tests locally, launching a local webserver, or deploying to Heroku there's always some waiting and some interruption. Every second saved helps you stay in your flow.

We’ve been working on reducing the time it takes to build your code on Heroku. Read through this post for details on the process we used to make builds fast, or check out the…
