
Emery's Insights

Good Coding Practices, Software Design, and the Software industry.

A growing trend around the web is buttons and links that, when clicked, either perform an action or prompt the user to log in with a lightbox. Lightboxes are those frames that pop up in response to other actions. There are many Rails plugins that describe them in more detail, so I won't repeat that here.

The generally accepted Rails way of doing this is with a before filter, such as the login_required filter supplied by many authentication plugins. However, this approach causes a problem: once the user is logged in, they are redirected to the last page they visited. The action that triggered the login is ignored, and they will have to perform it again.

At the time of writing, this process is exemplified by Digg. Attempting to digg a link while logged out prompts a login with a lightbox; upon submitting valid credentials, the lightbox disappears and the user is logged in, but the digg is not counted.

This may be standard practice, but it makes for a poor user experience. Ideally, the login action should complete the interrupted action that triggered it. In terms of the Digg example, if I attempt to digg a link while logged out and then log in at the resulting prompt, the digg should be counted automatically.

How about adding that ideal behaviour to your Ruby on Rails applications?

There are three problems with the standard Rails authentication process that need to be overcome before our applications can behave this way.

  1. Browsers in general do not allow redirecting to a POST request.
  2. redirect_to doesn’t preserve format without additional input.
  3. store_location does not preserve form data.

Conveniently, all three of these problems can be solved by eliminating redirects in the process of logging in as a response to a protected action.

With that in mind, we must devise a new combined authentication/action plan. The following is an example of the new login process triggered by an action requiring an authenticated user.

  1. AJAX request to a protected action (POST).
  2. The require_user filter triggers. If a user is logged in, skip to step 6; otherwise proceed to step 3.
  3. Render the new session form, containing hidden fields with the POST data required for the interrupted protected action.
  4. Submit login information and the original POST data back to the protected action (POST).
  5. require_user captures the credentials and logs the user in. If login fails, return to step 3.
  6. The protected action proceeds as expected.

Let’s start by defining the require_user filter that does the heavy lifting.

class ApplicationController < ActionController::Base
  def require_user
    unless logged_in?
      if params[:login].present? && params[:password].present?
        # attempt to log in with the credentials submitted alongside the action
        self.current_user = User.authenticate(params[:login], params[:password])
        if logged_in?
          if params[:remember_me] == "1"
            current_user.remember_me unless current_user.remember_token?
            cookies[:auth_token] = { :value => self.current_user.remember_token,
              :expires => self.current_user.remember_token_expires_at }
          end
        else
          # login failed
          flash[:error] = "Invalid username/password combination"
        end
      end
    end
    unless logged_in?
      flash[:notice] = "You'll need to login or register to do that"
      respond_to do |format|
        format.html { render :template => 'user_sessions/new' }
        format.js do
          render :template => 'user_sessions/new', :layout => false
        end
      end
    end
  end
end
The next step is to beef up the user_sessions#new form to include the old POST data. The following is a partial that should be rendered by user_sessions/new.html.erb, and used to populate the lightbox by user_sessions/new.js.rjs.

<% unless params[:controller] == "user_sessions" %>
  <%= render :partial => "#{params[:controller]}/#{params[:action]}_form_replica" %>
<% end %>
<% url = params[:controller] == "user_sessions" ? user_sessions_url : {} %>
<% remote_form_for @user_session, :url => url, :html => {:action => url} do |f| %>
  <%= yield :old_form unless params[:controller] == "user_sessions" %>
  <%= f.label :user_name %>
  <%= f.text_field :user_name %>
  <%= f.label :password %>
  <%= f.password_field :password %>
  <%= submit_tag %>
<% end %>

The empty hash for :url in the form looks like an error, but isn't. It ensures that the form data is posted to the URL that rendered the form, which we want to be the protected action unless the user is on the vanilla login page.

There’s one last thing to do. Create the replica partial for our protected action form referenced in the user_sessions/new view:

The following should be stored at app/views/controller/_action_form_replica.html.erb (substituting your controller and action names):

<% content_for :old_form do %>
  <%= hidden_field_tag "model[field_a]", params[:model][:field_a] %>
  <%= hidden_field_tag "model[field_b]", params[:model][:field_b] %>
<% end %>

All that’s left is to add the filter to the your controller:

class ExampleController < ApplicationController
  before_filter :require_user, :only => :protected_action

  def protected_action
    # ...
  end
end

To use this behaviour on other actions, all you need to do is add the filter to the controller and create a new partial containing the old form data.


  • Your require_user method may look different depending on which authentication plugin you use.
  • The form replica partial must exist for each of these actions, even if no form data needs to be preserved.
  • This technique should not be used for GET requests. There are many reasons for this; plain text authentication information in the URL comes to mind.

N.B.: For Rails 3 users replace the form_remote_tag with form_tag and use the :remote => true option.

One of the core tenets of Ruby on Rails is Don't Repeat Yourself (frequently abbreviated as DRY). Eliminating repetition is often done through refactoring, which can be done anywhere your project has repeating or similar stretches of code.

Refactoring views takes a little more effort, because Rails offers helpers and partials as two destinations for code refactored from views, and there are no set-in-stone rules dictating which should be used where.

Generally, partials are better suited for refactoring sections that are more HTML/ERB/HAML than Ruby. Helpers, on the other hand, are used for chunks of Ruby code with minimal HTML, or for generating simple HTML from parameters.

The way helpers are processed hinders their use for producing large amounts of HTML, which is why helpers should produce only minimal amounts of it. If you look at the source of the helpers that ship with Rails, you will notice that most of them generate HTML. The few that don't are mainly used to generate parameters and evaluate common conditions.

For example, any of the form helpers or link_to variants fit the first kind of helper, while things like url_for and logged_in? (as supplied by various authentication plugins) are of the second kind.
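As a sketch of the first kind, a helper that generates a small amount of HTML might look like this (ExamplesHelper and status_badge are made-up names, and plain string interpolation stands in for Rails' content_tag to keep the example self-contained):

```ruby
module ExamplesHelper
  # Repeated, nearly identical statements producing a single shallow
  # HTML tag belong in a helper rather than a partial.
  def status_badge(status)
    "<span class=\"badge #{status}\">#{status}</span>"
  end
end
```

In a view, `<%= status_badge(@example.status) %>` then replaces the repeated span markup.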

How do you decide whether to extract common or similar view code into partials or helpers?

Everyone is going to have their own answer to that question. Here's the decision chain I use to choose whether to refactor to a partial or a helper.

If the answer to any of these questions about the code to be refactored is yes, then it belongs in a helper.

  • Repeating [nearly] identical statements producing a single shallow html tag?
  • Common expression used as an argument or condition?
  • Long expression (more than 3 terms) used as an argument for another helper?
  • 4 or more lines of ruby (that is not evaluated into HTML)?

If the answer to all of these questions is no, then it might belong in a partial. I prefer to keep my partials completely independent units, such that every HTML tag opened in a partial is closed in that partial and vice versa. While it's not a necessity, it does help to cut down on errors. This is enforced when working with HAML, assuming there are no raw HTML closing tags (if there are raw HTML closing tags in your HAML documents, you're doing it wrong).

If the answer to any of the following questions about code that could be refactored is yes, then I would extract it into a partial.

  • Part of an iterative loop (for, each, etc)?
  • Section that could be returned as part of an AJAX call?
  • Common code occurring in multiple views?

Answering no to all of the above questions means there will be trouble refactoring the code cleanly.
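The iterative-loop case can be simulated with Ruby's standard ERB library (in Rails this would be render :partial => "item", :collection => @items; the names below are mine):

```ruby
require 'erb'

# An ERB template standing in for a partial (_item.html.erb in Rails)
ITEM_PARTIAL = ERB.new("<li><%= item %></li>")

# Render the "partial" once per element of an iterative loop
def render_list(items)
  "<ul>" + items.map { |item| ITEM_PARTIAL.result(binding) }.join + "</ul>"
end
```

Calling `render_list(["a", "b"])` returns `"<ul><li>a</li><li>b</li></ul>"`; the template is written once and re-rendered per element, just as a collection partial would be.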

This is the second part of a 2 part post about Optimistic Locking in Ruby on Rails. Part 1, which describes the problem in detail, can be found here.

Update: In the process of writing this blog post, I realized that the conflict_warnings plugin is not exactly easy to use with optimistic locking. I've started to rework conflict_warnings into a new plugin I call 'better_optimistic_locking' that will be better suited for handling stale optimistic locks. But it's one of those things I work on when I have time to do so (read: the odd evening or weekend).

To recap: Optimistic locking in Ruby on Rails prevents multiple changes to the same record from clobbering each other. It fails in practice because a lock cannot persist across HTTP requests. When I encountered this problem, I realized that nothing short of adding a state preserving parameter to the request and custom before filters could prevent the race conditions from causing inconsistencies in the database.

About the third time I needed this functionality, I decided it was about time to abstract the code into a plugin, and thus conflict_warnings was born. It's named after the HTTP 409 Conflict status, which RFC 2616 Section 10 describes as:

The request could not be completed due to a conflict with the current state of the resource. This code is only allowed in situations where it is expected that the user might be able to resolve the conflict and resubmit the request.

conflict_warnings provides a coherent set of filters, controller instance methods, and view helpers. When used together, they block hazardous HTTP requests in much the same way that Active Record's optimistic locking blocks saves. The helpers add a state preserving parameter to links and forms, while the filters use that parameter to identify and block conflicting requests.


To install conflict_warnings, point script/plugin at the github repository:

$ script/plugin install git://


Assuming you are following standard Rails naming schemes, basic usage is pretty straightforward. Just add one of the filter methods to your controllers.

class ExamplesController < ApplicationController
  filter_conflicts :only => :update
end

Now use the appropriate helper to redefine your forms and links that could produce hazardous actions.

<% form_for_with_timestamp @example do |f| %>
<% end %>

That's it! If the updated_at/updated_on attribute of the record in question is later than the timestamp parameter, the filter kicks in. The default action is to render a special template if it exists, and otherwise redirect_to :back. However, all the controller methods of conflict_warnings accept a block that will be executed instead of the default action.
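The comparison at the heart of that filter can be sketched in plain Ruby (the method name is mine, not the plugin's internals): a request is stale when the record was updated after the timestamp the form was rendered with.

```ruby
require 'time'

# A request conflicts when the record has been modified since the form
# that produced the request was rendered.
def conflicting_request?(record_updated_at, timestamp_param)
  record_updated_at > Time.parse(timestamp_param)
end
```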

conflict_warnings also provides controller instance methods so that you can create custom filters based around these methods. All that advanced usage is explained in the help files distributed in the doc directory of the plugin.

Here are a few more advanced examples.

Custom block

class ExamplesController < ApplicationController
  filter_conflicts :only => :confirm do
    respond_to do |format|
      format.html { render :action => "show" }
      format.js do
        render :update do |page|
          page.replace_html :notification_area, :text => "Your request could not be processed because the example has been modified recently. Please try again"
          page.replace_html :status, :text => @example.status
          page.visual_effect :highlight, :status
        end
      end
    end
  end
end

If a user loads the show page for an example, and that same example is modified by another user before the first user confirms, the first user's attempt to confirm is blocked.

Limited Resources

A common subset of problems that could benefit from conflict_warnings are those that model a system with resources that have limited availability, such as collecting reservations for an event. Validations do the job, but require some extra work to turn into reasonable responses to a shortage of the resource in question. conflict_warnings handles it with a single line of code.


class AttendeesController < ApplicationController
  filter_resource_unavailable :only => :create, :model => "Event"
end


<h1>Sold Out!</h1>
We regret to inform you that <> on <> has sold out before your transaction could be completed.

LockingResource/custom filter example:

conflict_warnings can be used to enforce mutex locks on resources at the controller level.

class LockingResourcesController < ApplicationController
  before_filter :login_required, :acquire_lock

  def acquire_lock
    catch_resources_unavailable current_user,
      :accessor => :acquire_lock_for_user, :message => "Could not acquire lock"
  end
end

If a user cannot acquire a lock, they are redirected back to the referring page with the message "Could not acquire lock" contained in flash[:warnings].

With the right options it can have some very creative uses; here are just a few I've used in the past.

  • Only update portions of a record that have changed and highlight
    them with Prototype or jQuery (requires some kind of model version
    tracking, maybe acts_as_audited)
  • Render custom forms displaying side by side comparisons of conflicting information.
  • Simplify actions upon failing to acquire a lock.
  • Enabling/disabling some actions based on when they occur.

Incomplete Features

  • Custom Form Builder: Displaying multiple versions of conflicting records for comparison in a form will be a common task, easily streamlined by a custom Form Builder
  • link/form helper for models already using optimistic locks: I see the default timestamp solution used by conflict_warnings as good enough, but it's still not flawless. The controller side of the plugin provides filters that use optimistic locking, but they still require more complicated helpers, and I haven't decided on the best syntax for them yet.


Passing the extra parameter is not a perfect solution either, because the parameter is not tamper-resistant.

Like most other plugins, contributions and constructive feedback are always welcome.

Rails provides ActiveRecord::Locking::Optimistic as a method to ensure that multiple changes happening to the same record at roughly the same time do not clobber each other. In theory it makes sense; in practice, optimistic locking is bound to model logic, and incorporating it into Rails' stateless transactions is difficult at best.

In a nutshell, optimistic locking compares an attribute of a record in memory against the source record in the database before saving. If those attributes are equal, the model has not been changed, the save can proceed safely, and the attribute is updated to a previously unused value. Otherwise the transaction is cancelled and an error is thrown.
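The mechanism can be sketched in plain Ruby (Record and the hash standing in for the database are mine; in Rails the version attribute is lock_version and the error is ActiveRecord::StaleObjectError):

```ruby
class StaleObjectError < StandardError; end

class Record
  @@db = {} # stands in for the database table: id => current lock_version

  attr_reader :lock_version

  def initialize(id)
    @id = id
    @@db[id] ||= 0
    @lock_version = @@db[id] # snapshot taken when the record is loaded
  end

  def save
    # the in-memory version must still match the database's copy
    raise StaleObjectError if @@db[@id] != @lock_version
    @@db[@id] += 1 # bump the lock to a previously unused value
    @lock_version = @@db[@id]
    true
  end
end
```

Load two copies of the same record, save one, and the other's save raises StaleObjectError because its snapshot is now out of date.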

This all falls apart when your application needs to apply optimistic locking across a request. Take the following use case for example:

A user should not be able to update a record if that record has changed since the user last requested the record.

For a Rails application where multiple users have read/write access to records, this can be a common problem. It's the perfect use for optimistic locking. Unfortunately, optimistic locking is only effective when another user changes the record between loading the object into memory and saving it, which does not cover changes that occur between the time a user loads a form and submits it. On the PUT request, the modified record is loaded into memory, but the change has already happened, so optimistic locking doesn't trigger and the form data overwrites the recent modifications. Which raises the question:

How can I be sure that my users are not overwriting each other’s data?

When I first asked this question, after a lot of thought, a fair bit of Googling, and discarding countless dead ends, I arrived at three potential solutions.

  1. Preserve state in the session
  2. Periodically update the view with AJAX
  3. Add a parameter to preserve state to the requests and write custom before filters

I found the first two options to be incomplete solutions. In a world where only one browser tab or window is allowed, preserving state in the session would be fine; however, it fails when a user accesses a resource through multiple tabs or windows. Periodically updating the view with AJAX can be foiled by the race condition. The third choice is easily the safest, but can be tedious and not very DRY.

Eventually, I bit the bullet and implemented the third choice.
Continue on to Solving Rails' Optimistic Locking Problem (Part 2: The Solution)

I've been using Firefox for a long time. I started many years ago with version 0.5, when it was named Phoenix and tabbed browsing was still a huge deal. Over the years, I've accumulated a lot of plug-ins that I add to any Firefox installation, some of which I just can't function without.

Currently I've got 25 installed plug-ins. Some haven't been updated since I first installed them years ago; others seem to be updated weekly. With so many plug-ins updating on their own schedules, I find myself faced with the plug-in update dialog at least 2 or 3 times each week. Firefox has the annoying habit of prompting for updates on start-up if the previous session detected pending updates to installed plug-ins. The worst part is that Firefox will halt the start-up process until either I've ignored the updates or the update has completed. I've come to call this behaviour Update Paralysis.

Update paralysis isn't limited to Firefox or its plug-ins. I hadn't really noticed it with other applications until I sat down to write this post, but it appears to plague just about every application that checks for updates, just nowhere near as frequently; even those with their own task-bar bound update widgets.

Too often, I’ll start Firefox, or any other number of applications, with a specific task in mind, and then get distracted to the point of forgetting about the task by the time the update paralysis has subsided. With today’s internet speeds the entire process usually takes no more than a couple of minutes at worst, but it’s still an unnecessary interruption, and enough to cause my mind to wander.

Most of these programs check for updates at startup. Some will tease you by loading the application and disabling it until the update paralysis is satisfied. So I started thinking: what's wrong with waiting until the session ends to install? In the case of Firefox, I'm thinking along the lines of triggering the update dialog as the last tab is closed. Post-session updates will still take the same amount of time, and the application will still need to restart. However, with the update happening in the background after I've finished my session, I don't care how long it takes, because it's not stopping me from completing the task at hand. Double bonus: with the restart spread over the minutes/hours/days that go by before my next session, I'm not going to notice it.

This is a practice that, for the most part, needs to disappear. Critical Security Fixes should definitely cause update paralysis and should remain an exception. But all other updates should be done as unobtrusively as possible.

Thankfully, update paralysis only affects a handful of applications I use. I believe the only reason I haven't noticed it sooner is that 99% of the software I use, and update, comes from the repositories of my current Linux distribution. These applications don't check for updates; that's the package manager's job.

Speaking of the package manager, update widgets are another solution to update paralysis. But it's only really a viable solution on Linux, where all application and system updates are collected for retrieval from a single trusted source. In the Windows world, many vendors distribute their own update managers that check for and install updates for the packages they provide. Given that it's nearly impossible to have a useful computer using only software from a single vendor, relying on multiple update managers to get the job done is just trading update paralysis for other problems.

In the case of Firefox, there is this bug report/feature request, but that doesn't mean it will ever get changed. Odds are I'll end up patching it myself during some insomnia time.

Sometime ago I came across this question on Stack Overflow, which asks for a way to add a comment to an audit created with the acts_as_audited plugin. At the time I thought it was an interesting problem, and solved it by patching the acts_as_audited plugin during a bout of insomnia.

I feel this is a useful feature worth sharing, and would be content with the patch being accepted into the master repository, but my pull requests have gone ignored. Now I have a need for acts_as_audited with comments in my current project. So, with this patch completed, I’ve decided to bring my fork out of the shadow of its parent.

So on with the details:

The plugin can be found in my github repository. acts_as_audited now adds an accessor attribute, :audit_comment, to the model. When an audit is created it will use the value of model.audit_comment to fill the comment field in the audit.

acts_as_audited also accepts :require_comment as an option now. If given :require_comment => true then any action that would create a new audit is blocked if a comment is missing.


script/plugin install git://

Generate the Migration:

If you are installing acts_as_audited for the first time, or don't mind wiping out your current audits table:

$  script/generate audited_migration add_audits_table

If updating from a release of acts_as_audited that does not support audit comments, and you wish to keep your audits table:

$  script/generate audited_migration_update update_audits_table

Rake the Migration

$ rake db:migrate


Usage doesn’t change much from basic acts_as_audited. To add a comment to an audit set the audit_comment attribute before saving/creating/updating a record.

Form Example:


class Document < ActiveRecord::Base
  acts_as_audited
end


<% form_for(@document) do |f| %>
  <%= f.label :name %>
  <%= f.text_field :name %>
  <%= f.text_area :content %>
  <%= f.label :audit_comment %>
  <%= f.text_field :audit_comment %>
  <%= submit_tag %>
<% end %>


def update
  @document = Document.find(params[:id])
  @document.update_attributes(params[:document])
  # ...
end

Example with optional comment:


class ChessGame < ActiveRecord::Base
  acts_as_audited
  before_validation :populate_audit_comment

  def populate_audit_comment
    comments = []
    comments << "Capture" if piece_captured?
    comments << "Check" if exists_check?
    comments << "Mate" if exists_check_mate?
    self.audit_comment = comments.join("; ")
  end
end


Audit.as_user(@user) do
  @chess_game.update_attributes(:white_queen => "F7")
end
@chess_game.audits.last.comment # => "Check"

Example with required comment:


class VeryImportantDocument < ActiveRecord::Base
  acts_as_audited :require_comment => true
end


Audit.as_user(@user) do
  @vid = VeryImportantDocument.create(:content => "Lorem Ipsum ....",
    :project_id => 5)
end
@vid.errors #=> {:audit_comment => "Can't be blank"}
@vid.audit_comment = "Authorized by the boss"
@vid.save #=> true

I came across this post via @dhh's twitter stream, in which Robby Russell describes his decision to trigger email delivery from model callbacks. It sparked a heated debate in the comments over the proper place for code that sends an email. My addition to the discussion quickly outgrew the size of a reasonable comment, and follows as this post.

When determining which component to send email notifications from, the distinction between model, controller and observer is just a case of splitting hairs over semantics. There are valid arguments for it to go in any of the three spots.

In the controller it looks like this:

  UserMailer.deliver_welcome_message(@user)
  flash[:message] = "Your account has been successfully created. We've sent you a welcome letter with..."
  redirect_to dashboard_path

In the model it looks like this:

after_create :send_welcome_message #, other callbacks..

def send_welcome_message
  UserMailer.deliver_welcome_message(self)
end

In an observer it looks like this

def after_create(customer)
  UserMailer.deliver_welcome_message(customer)
end

Sending emails feels like business logic belonging in the model, because it can happen asynchronously from HTTP requests and often accompanies record updates. However, this is not always the case, and the decision to send an email may be made in the controller.

Regardless of how you use ActionMailer, it is still written like a controller (despite its subclasses existing in app/models). It splices together model and view (template) from a request to send a response. The big difference between ActionMailer and ActionController is that ActionMailer’s requests and responses do not share source and destination. Anything can send ActionMailer a request, but the response in the form of Email over SMTP can be completely unrelated to the request.

What the before/after_save callback really accomplishes is triggering an ActionMailer request. If you treat ActionMailer like any other controller the choice becomes clear.

Placing the code in the model runs contrary to the MVC workflow. Placing it in the controller is not much better: controllers should not communicate directly with each other; the best you get is redirection. Which leaves the observer.

Triggering an email delivery from an observer makes the most sense, even if there is functionally no difference between doing it from a callback in the model or an observer of the model. In my mind, the difference between a model callback and an observer callback depends on the answer to the question: "Does this callback need to succeed before the action can proceed?" If the answer is yes, it belongs in the model; otherwise it goes in an observer.
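That distinction can be sketched in plain Ruby (TinyModel is a made-up stand-in, not ActiveRecord): a before callback runs inside the save and can veto it by returning false, while an observer only hears about a save that has already happened.

```ruby
class TinyModel
  def initialize
    @before_callbacks = []
    @observers = []
    @saved = false
  end

  def before_save(&blk); @before_callbacks << blk; end
  def observe(&blk);     @observers << blk;        end

  def saved?; @saved; end

  def save
    # a before callback returning false aborts the save entirely...
    return false if @before_callbacks.any? { |cb| cb.call(self) == false }
    @saved = true
    # ...while observers are notified afterwards and cannot intervene
    @observers.each { |o| o.call(self) }
    true
  end
end
```

If email delivery must succeed for the save to proceed, it belongs in the before callback; if the save should go through regardless, the observer is the right home.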

For those of you that argue about the dangers of a callback triggering when you don't want it to, such as when creating an administrator: there's always a way around that. I prefer the attr_accessor method.

class User < ActiveRecord::Base
  attr_accessor :cancel_delivery
end

class UserObserver < ActiveRecord::Observer
  def after_create(user)
    UserMailer.deliver_welcome_message(user) unless user.cancel_delivery
  end
end

Now the create won’t send an email. And not just because of the invalid email address I used in the example to throw off spam bots.

User.create(:email => "", 
   :name => "Emery", :cancel_delivery => true, :type => "administrator")

If you’ve subclassed Administrator from User this becomes even more DRY with another callback:

class Administrator < User
  before_validation_on_create :set_cancel_delivery

  def set_cancel_delivery
    self.cancel_delivery = true
  end
end

With this most recent change the following will not send an email:

Administrator.create(:name => "Emery", :email => "")

I've noticed an increasing trend of applications using custom controls to provide the appearance of a cleaner interface. However, these pretty controls come at the price of breaking the functionality of their standardized analogs. The biggest offender is the new text box, most readily found in WordPress, Gmail, Facebook and, to a lesser extent, virtually every Adobe AIR application. Not far behind are AJAX remote function links that do not have a standard HTML fallback.

I've come to rely on standard shortcuts, ranging from the common Ctrl-C/Ctrl-V for copy and paste to X11's middle-click to paste the last block of highlighted text. I have structured my composition style around these and other shortcuts, and when some of them are unavailable I get understandably annoyed. However, this post is more than just a rant for selfish reasons: many of these "improvements" break other accessibility features.

Links to remote functions that lack an href target are a prime example of a custom control standing in the way of accessibility. In terms of controls, they're closer to buttons than links, and when a link doesn't provide an href target it will not work unless a pointing device is used to select it, breaking it in terms of both buttons and links. Common AJAX use depends on the onclick property of an element, usually links for historical reasons, though there's no reason it cannot be attached to any other element. The point of using links is to provide an HTML fallback for any number of reasons, Javascript being unavailable the most prominent. However, when the HTML fallback is missing, the control is useless without a pointing device. There are no good reasons to ignore the HTML fallback when modern frameworks make preparing for the case where AJAX is unavailable a trivial affair.
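As a sketch (the URL and element here are hypothetical), a degradable remote link keeps a real href so the control still works without JavaScript or a pointing device; the onclick handler returns false to suppress normal navigation only when the script actually runs:

```html
<!-- The href is the HTML fallback; the onclick is the AJAX path -->
<a href="/links/42/digg"
   onclick="new Ajax.Request('/links/42/digg', {method: 'post'}); return false;">
  Digg this
</a>
```

In Rails 2 this is roughly what link_to_remote produces when you supply an explicit :href in its html options instead of leaving the default "#".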

The major concern is that many developers believe the standard input controls cannot meet their needs, resulting in many of them taking it upon themselves to roll their own versions of those controls, despite the fact that nearly all of these custom controls can be achieved with some creative CSS and a little bit of JavaScript working in tandem with the standard input. Let's look at Facebook's new "What's on your mind?" input.

When the page is loaded it’s a standard text box:

Facebook's what's on your mind input at page load.


As soon as it gains focus it becomes a div with a whole set of javascript triggers to update on key press events:

Facebook's what's on your mind input after gaining focus


Once it becomes a div, the only user interface features available are the ones the developer implemented. Divs don't naturally accept pasting, but JavaScript can fake it by watching for the shortcut and adding an entry to the context menu. There are other side effects of this form of custom control. The most noticeable are lost browser enhancements, such as spellchecking on form inputs: Firefox will not check the spelling of this div, because Firefox doesn't expect divs to be used for form input. Nobody does! That's why input elements exist. There is nothing this custom text box accomplishes that couldn't be done with CSS and JavaScript while maintaining the standard input behaviour.

Gmail's compose area and WordPress's visual post editor have many of the same problems, but both manage to get around most of them somehow. These tools get a pass in this area because of the added requirement of rich formatting. With HTML4 there's no good way of displaying formatting changes in a text area; a developer is limited to either using a markup language and providing a preview pane like Stack Overflow does, or using a custom control like Gmail and WordPress do. Expecting the average user to learn any form of markup is unacceptable. Stack Overflow's users are anything but average, and probably prefer the level of control over their posts it offers as opposed to a more WYSIWYG editor. The WordPress developers realized that neither approach is perfect and offer both. Gmail does too, but to use the standard controls one must sacrifice many of the other pleasantries of Gmail's interface.

The reduced functionality of these custom inputs usually isn't noticeable unless the user goes beyond the standard shortcuts. However, there's always a few that get away: Tweetdeck, for example, still doesn't support undo in a text field via Ctrl-Z.

Very few users are ever impacted by these issues, making them a low priority. However, I can't shake the feeling that in most cases of these custom controls, much more work went into the design and testing of building from scratch than would have gone into adding the desired functionality to existing controls.

I got distracted before bed last night and created this for MacSE‘s annual T-Shirt Design Contest, from an idea I had kicking around in my head for some time.

Software Engineering: Your only hope of preventing the eventual robot uprising.


Wish I could credit the original artist of the wireframe, but I have no idea where I found it.

This post has been cross-posted to my personal blog