Rails: Fail your build based on benchmark results using timocratic/test_benchmark plugin

Happy Path: It all started here.
When I joined this team's Ruby on Rails (RoR) project, one of the first things I noticed was that we were using timocratic's test_benchmark plug-in. The plug-in is really good and quite intuitive: it prints the 10-15 slowest tests, with their benchmark results, in the log. If you are not using it, believe me, it's worth a try.

Limitations Found: Issues Arose.
But there was a small issue with how my team used it. We had incorporated the benchmark plug-in into our build process so that benchmark results would be produced, but we stopped there, happy just to see the results. Honestly, none of us had enough time to look at them and do something about the slowest tests.
As our RoR application grew day by day, we realized that the time taken to run all the tests as part of the build kept increasing. People started skipping the full test suite at times, and as a result, continuous integration went RED quite often. So I decided to do something about it.

On Track: In search of a solution.
One thing I realized was that, over time, some stale and unnecessary test cases had accumulated, and I could also clearly see that certain test cases could have been optimized for better performance. So I wanted to make it a rule that people go back and refactor their test cases regularly, preventing them from rotting as time goes by. I decided to use the test_benchmark results to put that rule in place.

Solution: I derived something.
What I did was add a simple extension to the existing test_benchmark plug-in:
1. An option to set a threshold (the maximum time a test case may take to run).
2. Fail the build if the benchmark results show any test exceeding this limit.
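To make the idea concrete, here is a minimal sketch of the fail-on-threshold logic. This is not the plug-in's actual implementation or API; the method names and the `results` hash (test name to measured run time in seconds) are illustrative assumptions.

```ruby
# Illustrative sketch only, not the plug-in's real code.
# `results` maps each test name to its measured run time in seconds.

def slow_tests(results, threshold)
  # Keep only the tests whose benchmark time exceeds the threshold.
  results.select { |_name, seconds| seconds > threshold }
end

def enforce_benchmark_threshold!(results, threshold)
  offenders = slow_tests(results, threshold)
  return if offenders.empty?

  # Report only the offending tests, then fail the build by raising.
  offenders.each do |name, seconds|
    warn format("%s took %.1fs (limit %ds)", name, seconds, threshold)
  end
  raise "Benchmark threshold of #{threshold}s exceeded by #{offenders.size} test(s)"
end

results = {
  "test_checkout_flow" => 63.2,  # slowest test in this example
  "test_login"         => 1.4,
}

begin
  enforce_benchmark_threshold!(results, 60)
rescue RuntimeError => e
  puts e.message  # only the offending test is reported before the failure
end
```

The key point is that the build fails loudly but reports only the tests that broke the limit, so the fix is obvious.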

Tough Part: Making the solution work.
For example, I started with the slowest test in the whole application. Suppose its benchmark result shows it took 63 seconds to run; I can then set the benchmark threshold to 60 seconds. With my new test_benchmark extension, the build fails and displays the details of only the test case responsible for the failure. I then go and refactor that slowest test to bring its execution time below 60 seconds, making the build GREEN again.
Once no more tests take longer than 60 seconds, I can gradually lower the threshold (to 50 seconds, then 40, and so on) at regular intervals, so that any team member modifying a test also pays attention to its execution time.
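One way to support that ratcheting is to read the threshold from an environment variable, so the team can lower it over time without code changes. This is a hypothetical wiring; `TEST_BENCHMARK_THRESHOLD` is an illustrative name, not an option the plug-in actually defines.

```ruby
# Hypothetical: the threshold comes from an env var so it can be
# ratcheted down (60 -> 50 -> 40 ...) in CI configuration alone.
DEFAULT_THRESHOLD = 60

def benchmark_threshold(env = ENV)
  Integer(env.fetch("TEST_BENCHMARK_THRESHOLD", DEFAULT_THRESHOLD.to_s))
end

puts benchmark_threshold({})                                  # prints 60
puts benchmark_threshold("TEST_BENCHMARK_THRESHOLD" => "50")  # prints 50
```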

It's just a Beginning: At least, there's something to start with.
I believe this is a good place to start in order to keep the test code optimized, avoid unnecessary tests, and keep them from taking too long to run.

Go Public: Forked from the Master.
I forked the test_benchmark tool, made the changes for this extension, and republished it right here in my GitHub repository. The README has all the details on using this extended feature.

Time's Up: 2 Face the Music!
Please feel free to use it, and if you have any feedback or suggestions, I'd be really glad to hear from you folks.
