Rails vs. Ramaze Performance Comparison
Published: February 19, 2009
One of my biggest concerns of late is that my “more than just a little trivial” Rails projects seem to find their way straight into the heavyweight category in no time at all. While I am quite hopeful that the merger of Merb into Rails 3.0 will resolve many of the issues I face today, I have no idea when I can truly count on Rails 3.0’s arrival at the party. Thus, I have begun looking at other frameworks that are available today.
One I found is Ramaze. Ramaze takes a very minimalist approach to being a framework, and that is actually rather enlightening. For someone coming from the Rails world, it can feel a bit sparse at first, but lately I attribute my initial discomfort to being put on my toes to do some real Ruby coding. It’s both a little scary and liberating at the same time. I have found that I am enjoying Ruby a lot more as a result of my exposure to Ramaze, and my thought process is radically changing, almost like how a Visual Basic programmer moving to Borland Delphi might feel. So I decided to take a hard look at performance metrics between Rails and Ramaze. At the same time, I wanted to see how MRI 1.8.7 stacked up against Ruby Enterprise Edition 1.8.6, so I am satisfying a dual curiosity herein. Being quite happy with Phusion Passenger, I set things up with Apache 2.2.9 and Passenger serving both Rails and Ramaze. Since I am most comfortable with ERB and ActiveRecord, I tweaked Ramaze to feel a bit more like the Rails world I am familiar with.
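For reference, a Passenger virtual host for one of these apps looks roughly like the following (hostname and paths here are illustrative placeholders, not my exact configuration). Passenger detects the app from the `public` directory pointed at by `DocumentRoot`:

```apache
# Minimal Apache + Passenger vhost (illustrative paths).
# Passenger serves the app whose public/ directory is the DocumentRoot.
<VirtualHost *:80>
  ServerName demo.u64rails01.local
  DocumentRoot /var/www/demo/public
</VirtualHost>
```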
Un-DRYing the Results
There’s no point in generating a lot of data and reporting on it if others Can’t Repeat You (CRY!), so with that in mind, I refrain from further bad turns of phrase by busying myself with pushing everything I utilized to github.com and documenting the basic bootstrapping of Ubuntu Intrepid. For these tests in particular, I used two scripts, bootstrap-passenger-std.sh and bootstrap-passenger-ent.sh, to set up four distinct configurations under the Ruby 1.8 interpreters. Ruby 1.9.1 was a bit more difficult to set up; there is a bootstrap-passenger-191.sh script that I assembled as I installed things, but I haven’t run it end-to-end to ensure it works.
| Passenger | Framework | Ruby interpreter |
| --- | --- | --- |
| 2.0.6 (gem) | Rails 2.2.2 | MRI Ruby 1.8.7 (package) |
| 2.0.6 (gem) | Rails 2.2.2 | Enterprise Ruby 1.8.6 (source) |
| 2.0.6 (gem) | Ramaze 2009.01 | MRI Ruby 1.8.7 (package) |
| 2.0.6 (gem) | Ramaze 2009.01 | Enterprise Ruby 1.8.6 (source) |
| 2.1.0 (source) | Ramaze 2009.01 | MRI Ruby 1.9.1 (source) |
I utilized a simple call to Apache Bench to run through the test scenarios. All data was collected from 10,000 requests at 100 concurrent users, after an initial “warmup” of the Passenger processes with 100 requests at 10 concurrent users. The following command shows the basic set of parameters issued to Apache Bench:
ab -n 10000 -c 100 http://demo.u64rails01.local/posts
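As a sanity check on the numbers ab reports, its two “time per request” figures are derived directly from throughput and concurrency. A quick Ruby sketch, using the Rails-on-MRI column from the results below:

```ruby
# ab derives its latency figures from throughput and concurrency:
#   time per request (mean)                  = concurrency * 1000 / rps
#   time per request (across all concurrent) = 1000 / rps
concurrency = 100
rps         = 20.44  # requests/sec for Rails on MRI 1.8.7 (from the results)

per_request_mean   = concurrency * 1000.0 / rps
per_request_across = 1000.0 / rps

puts per_request_mean.round(2)   # ~4892.37 ms, matching ab's reported 4892.42
puts per_request_across.round(2) # ~48.92 ms
```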
What was tested?
In order to compare, as close as possible, apples to apples, I opted to use ActiveRecord and Erubis in Ramaze to set up the same MVC classes in both the Rails and Ramaze projects. ActiveRecord is probably the heaviest-weight ORM that can be utilized with Ramaze. I did try briefly to get Erubis working with Rails so that the two projects would be even more closely aligned, but either I was doing something very wrong or Erubis under Rails takes a sizable performance hit, so Rails uses its out-of-the-box ERB rendering in these tests.

The application itself is ridiculously simple, offering only one model and performing only a read against the database, but it is hopefully an effective start towards building this benchmark framework out to more substantive tests to come. For now, let’s take a look at how Rails and Ramaze stack up against each other and what effect the underlying interpreter can have on performance.

The standard Matz’s Ruby Interpreter (MRI) 1.8.7 was a simple package install from the Ubuntu Intrepid repositories, while Ruby Enterprise Edition (REE) 1.8.6, which hails from the same folks bringing you Phusion Passenger, was compiled from source. As a bonus, I also compiled and clocked metrics for Ramaze on MRI 1.9.1. This was a tricky environment to get set up and working, and my initial results made me think I had done something horribly wrong compiling and configuring Ruby 1.9.1. However, I fully replicated Antonio Cangiano’s benchmarks in my test environment (see The Great Ruby Shootout, December 2008), and MRI 1.9.1 did indeed crush the other Ruby interpreters in Fibonacci number crunching (amongst others). The theory tossed around in the Ramaze IRC channel was that 1.9.1’s reimplementation of the String class (now natively Unicode-aware) slows string processing down a good bit.
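The Fibonacci micro-benchmark in question is, roughly, the classic naive-recursion version (this is my sketch of the idea, not the shootout’s exact harness), which stresses method dispatch rather than strings:

```ruby
require 'benchmark'

# Naive recursive Fibonacci -- dominated by method-call overhead,
# which is exactly where Ruby 1.9.1's YARV VM shines over the 1.8.x line.
def fib(n)
  n < 2 ? n : fib(n - 1) + fib(n - 2)
end

elapsed = Benchmark.realtime { fib(25) }
puts "fib(25) = #{fib(25)} (#{format('%.3f', elapsed)}s)"
```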
The following table tabulates the results gathered from Apache Bench as well as passenger-memory-stat. I ran a short warmup (100 requests, 10 concurrently), then ran the big bench at 10,000 requests, 100 concurrently, and recorded the results. At around 5,000 requests, I ran passenger-memory-stat to collect data mid-run, and then again at the end. These tests were run on an Ubuntu Intrepid 8.10 64-bit virtual server allocated 1 CPU core and 512MB of RAM. Swap space was never triggered (0k utilization), and CPU utilization was roughly 85% within the virtual machine during the test runs. The host machine, which has a four-core CPU, showed one core busy at near 100% with the other three cores pretty much idle.
| Metric | Rails / MRI 1.8.7 | Rails / REE 1.8.6 | Ramaze / MRI 1.8.7 | Ramaze / REE 1.8.6 | Ramaze / MRI 1.9.1 |
| --- | --- | --- | --- | --- | --- |
| Time taken for tests (seconds) | 489.24 | 379.76 | 397.43 | 287.50 | 351.95 |
| Requests per second (mean) | 20.44 | 26.33 | 25.19 | 34.78 | 28.41 |
| Time per request (ms, mean) | 4892.42 | 3797.56 | 3970.43 | 2875.03 | 3519.49 |
| Time per request (ms, mean, across all concurrent requests) | 48.92 | 37.98 | 39.70 | 28.75 | 35.20 |
| Transfer rate (KB/sec received) | 73.19 | 94.29 | 44.59 | 61.58 | 51.22 |
| Total private dirty RSS (MB) | 323.73 | 180.89 | 221.11 | 221.51 | 210.73 |
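To put the REE gains in perspective, a quick Ruby calculation over the throughput and memory figures above:

```ruby
# Throughput (req/s) and private dirty RSS (MB) from the table above.
rails  = { mri: { rps: 20.44, rss: 323.73 }, ree: { rps: 26.33, rss: 180.89 } }
ramaze = { mri: { rps: 25.19, rss: 221.11 }, ree: { rps: 34.78, rss: 221.51 } }

# Percent throughput gain and percent memory savings from switching to REE.
def gains(fw)
  rps_gain    = (fw[:ree][:rps] / fw[:mri][:rps] - 1) * 100
  rss_savings = (1 - fw[:ree][:rss] / fw[:mri][:rss]) * 100
  [rps_gain.round(1), rss_savings.round(1)]
end

puts gains(rails).inspect   # => [28.8, 44.1]  REE: faster AND much leaner
puts gains(ramaze).inspect  # => [38.1, -0.2]  REE: faster, no memory win
```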
Everybody likes to talk about throughput, so let’s start there. As you can see, Rails on MRI was the worst performer, while Ramaze on REE clocked in with the highest throughput in terms of requests per second. Be forewarned that throughput doesn’t necessarily equate to user response times; this metric simply tells you the load at which your system is capable of delivering content.
Time per Request
Time per request helps you see roughly how long an average user request takes under a given load. As you can see here, with times in the milliseconds, we’re talking about numbers that will easily be swallowed in the noise of network latency. Even so, it’s not hard to be impressed by what the REE folks managed to pull out of their hat with regards to performance on the main Ruby interpreter.
This final graph shows the sizeable dent REE makes in memory consumption for running a Rails application. Ramaze is already pretty lean, and REE made practically no dent in its memory consumption. However, REE’s faster memory allocation routines still give Ramaze a good performance boost (as seen in the graphs above), so there is still plenty to be gained by running Ramaze with REE.
For me, I have two very viable solutions before me: Rails on Enterprise Ruby or Ramaze on Ruby 1.8.7. The final choice comes down to how much I can bet on my own Ruby prowess. Ramaze is a very lean framework that gets you going, but it doesn’t tuck you into a nine-inch pillow-top mattress the way Rails does. Because Ramaze is so lean, I will have to take my Ruby skills to the next level and truly understand the Ruby code that other developers produce. Unlike Rails, where gobs and gobs of plugins and blog posts make pretty much any need simple to satisfy, with Ramaze I have to know my Ruby to get a solution implemented. Not a bad thing, but certainly a more challenging and potentially more rewarding path to take.

Aside from my personal concerns about making the jump, it is good to see from these results that I have options, and how easy it is to get a good deployment up and working compared to three or four years ago, when I was banging my head against the wall for days on end trying to keep lighttpd fcgi processes up and running around the clock. Mongrel came along and made a big impact, but you still needed something like monit to watch Mongrel and restart it on occasion. Now Passenger has made yet another big impact in the ease of deploying Rails, Ramaze, or any of the other Ruby-based frameworks. In these tests, it was very satisfying that not one single crash or dropped request was recorded in all the benchmarking I performed. We’ve come a long way, indeed. I, for one, certainly look forward to what the Ruby sphere will bring us over the next couple of years.