Friday, 13 March 2009

Monitoring Rails builds with CruiseControl.rb and CCTray

More for my own memory than anything else...

CruiseControl.NET comes with a tool called CCTray that gives you a handy way of monitoring the build status of multiple CruiseControl environments. It works out of the box with other CruiseControl.NET installations but needs a little trick to monitor the Ruby and Java versions (why we need the same app implemented three times is a subject for a rant one day, I'm sure...).

For Ruby on Rails projects, set the monitoring URL in CCTray to this:
http://hostname.of.cruisecontrol.rb:3333/XmlStatusReport.aspx
It's not a real ASPX page, but it returns the XML that CCTray expects.

CruiseControl for Java is similar, but different ('natch):
http://hostname.of.cruisecontrol:3333/dashboard/cctray.xml
Had trouble Googling that. :-)
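For reference, both URLs return the same "cctray" project-status XML format, which looks roughly like this (project name and attribute values here are illustrative):

```
<Projects>
  <Project name="my_project"
           activity="Sleeping"
           lastBuildStatus="Success"
           lastBuildLabel="build.123"
           lastBuildTime="2009-03-13T10:00:00"
           webUrl="http://hostname.of.cruisecontrol.rb:3333/projects/my_project"/>
</Projects>
```

If CCTray shows a server as offline, fetching the URL with a browser and checking it against this shape is a quick sanity test.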

Monday, 9 March 2009

Keep yourself logged in to a website with anti-idle

At $work I need to use a timesheet application that has a session timeout feature. I want a way to stay "logged in", so I've conceived a little plug-in for my personal web developer's proxy that reloads certain web pages periodically in the background.

Could work like this:
  1. Start your personal proxy with the anti-idle plug-in in the chain (below).
  2. In your browser, go to the page you want to periodically re-load.
  3. At the end of the URL, append a CGI argument. For example, append "?ttt_anti_idle=300" to reload the page every 5 minutes (300 seconds). If the URL already has CGI arguments, append "&ttt_anti_idle=300" instead.
  4. Load the new URL you've just typed. The anti-idle plug-in will strip out the extra argument you've appended prior to giving the URL to the "real" server.
  5. The anti-idle plug-in monitors its stream for "ttt_anti_idle" arguments and builds a list of pages to reload at the requested intervals. It discards the responses, of course.
Here's how I imagine I'd set up the pipeline:

$ proxy | anti_idle --use_cgi=ttt_anti_idle | respond
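The interesting part of anti_idle is pulling the ttt_anti_idle argument out of each request URL before it goes upstream. Here's a minimal Python sketch of just that piece (the argument name comes from the scheme above; the scheduling loop that actually re-fetches each page in the background is left out):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

ANTI_IDLE_ARG = "ttt_anti_idle"  # the CGI argument the plug-in watches for

def strip_anti_idle(url):
    """Return (clean_url, interval_seconds_or_None).

    Removes the anti-idle argument so the "real" server never sees it,
    and reports the requested reload interval if one was present.
    """
    parts = urlsplit(url)
    params = parse_qsl(parts.query, keep_blank_values=True)
    interval = None
    kept = []
    for key, value in params:
        if key == ANTI_IDLE_ARG:
            interval = int(value)  # seconds between background reloads
        else:
            kept.append((key, value))
    clean = urlunsplit(parts._replace(query=urlencode(kept)))
    return clean, interval

# In the pipeline, anti_idle would call strip_anti_idle() on each request
# URL read from stdin, pass the cleaned request downstream, and register
# (clean_url, interval) with a background reload timer when interval is set.
```

So "?ttt_anti_idle=300" is consumed by the plug-in, and the server just sees the original URL.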

[...]

Friday, 6 March 2009

Initial Load Values for Nagios Load Checks (Cheat Sheet)

I've put together a cheat sheet to show how you might want to initially configure your Nagios load checks. The thinking behind these initial values is set out in Tuning Nagios Load Checks.

Use            | OS      | Cores | Warning | Critical | Notes
---------------|---------|-------|---------|----------|------
CMS (Teamsite) | Solaris | 1     | 10,7,5  | 20,15,10 | Testing shows this app to be responsive up until these loads.
Web Server     | Linux   | 2 x 4 | 16,10,4 | 32,24,20 | Web servers are paired, so I want to know if one is regularly reaching 50% capacity. Testing shows performance degradation from a load of 20.
DB Server      | Linux   | 2 x 4 | 16,10,4 | 32,24,20 | Same hardware, different use. Nevertheless, using the same thresholds.
Nagios         | Linux   | 1 x 2 | 6,4,2   | 12,10,7  | Small box, paired with a backup.
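As a sketch of where these numbers end up, here's roughly what the Web Server row might look like in Nagios configuration (the host name and service template are placeholders for your own setup):

```
# commands.cfg -- the stock check_load command takes the warning and
# critical triples (1, 5 and 15 minute load averages) as arguments
define command {
    command_name    check_load
    command_line    $USER1$/check_load -w $ARG1$ -c $ARG2$
}

# services.cfg -- thresholds from the Web Server row above
define service {
    use                 generic-service
    host_name           web01
    service_description Load
    check_command       check_load!16,10,4!32,24,20
}
```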

General notes:
  • The UNIX servers (particularly the Sun SPARC ones) seem able to stay up and responsive even under heavy load; I have no explanation for this. :-) They also don't count processes waiting on I/O in their load averages the way Linux does.
  • We track these loads over time to predict demand growth for capacity planning -- the thresholds are not a long term goal but rather a short term alert threshold.
  • Transaction or revenue-earning web servers might have lower thresholds because of the different commercial implications of performance degradation. YMMV.
For more information on the Nagios check_load command, see Tuning Nagios Load Checks.

No more stupid YouTube comments

Prompted by Mark Damon Hughes' Stupid Comments Be Gone I wrote a small script that took YouTube HTML in on stdin, stripped out the comments, and spat the remainder out on stdout (Mark's trick uses CSS to hide them).

Now I can do this:

$ proxy | connect | kill_youtube_comments | respond
[...]

And lo! Works in all browsers. :-)

Breaking it down:
  1. The proxy command listens on port 8080 (I configure my browser to proxy to localhost:8080). It spits all requests it sees to stdout.
  2. The connect command reads an HTTP request on stdin, connects to the remote server, fetches the content, and spits the HTTP response out on stdout.
  3. The kill_youtube_comments command reads in HTML and strips out the div that contains YouTube comments.
  4. The respond command reads an HTTP response and sends it (via a named pipe) back to the proxy command, which returns it to the browser.
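For what it's worth, here's a Python sketch of how kill_youtube_comments might work. I'm assuming the comments sit in a div with id="comments" (check the real page source for the actual id); the filter drops that div and everything nested inside it, and passes the rest through untouched:

```python
from html.parser import HTMLParser

class DivStripper(HTMLParser):
    """Re-emit HTML verbatim, minus one div (by id) and its contents."""

    def __init__(self, div_id):
        super().__init__(convert_charrefs=False)
        self.div_id = div_id
        self.out = []
        self.skip_depth = 0  # > 0 while inside the div being removed

    def handle_starttag(self, tag, attrs):
        if self.skip_depth:
            if tag == "div":
                self.skip_depth += 1  # track nested divs so we know when to stop
            return
        if tag == "div" and dict(attrs).get("id") == self.div_id:
            self.skip_depth = 1       # start skipping
            return
        self.out.append(self.get_starttag_text())

    def handle_startendtag(self, tag, attrs):
        if not self.skip_depth:
            self.out.append(self.get_starttag_text())

    def handle_endtag(self, tag):
        if self.skip_depth:
            if tag == "div":
                self.skip_depth -= 1
            return
        self.out.append("</%s>" % tag)

    def handle_data(self, data):
        if not self.skip_depth:
            self.out.append(data)

    def handle_entityref(self, name):
        if not self.skip_depth:
            self.out.append("&%s;" % name)

    def handle_charref(self, name):
        if not self.skip_depth:
            self.out.append("&#%s;" % name)

    def handle_comment(self, data):
        if not self.skip_depth:
            self.out.append("<!--%s-->" % data)

def strip_div(html, div_id="comments"):
    p = DivStripper(div_id)
    p.feed(html)
    p.close()
    return "".join(p.out)

# In the pipeline, the filter is just:
#   sys.stdout.write(strip_div(sys.stdin.read()))
```

A full parser is overkill for a one-off, but it survives nested divs inside the comments block, which a naive regex wouldn't.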

I sometimes wonder if anyone else in the world would find a personal, hackable proxy useful.