Monday, 28 April 2014

Go with the flow

Besides being an awesome song by Queens of the Stone Age, "Go with the Flow" also describes a common requirement from our clients here at two red kites.

Workflow can mean many things depending on the context. For this post, we are talking about the process an online form (or other piece of information) goes through during the various stages of its life: for instance, a draft expense form, through to the approval and payment of the expenses.

An example workflow
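An expense workflow like this can be sketched as a tiny state machine. The states and transitions below are illustrative only, not any particular client's process:

```ruby
# A minimal sketch of an expense-form workflow as a state machine.
# States and transitions are illustrative, not a real client's process.
class ExpenseForm
  TRANSITIONS = {
    draft:     [:submitted],
    submitted: [:approved, :draft], # can be sent back for changes
    approved:  [:paid],
    paid:      []
  }.freeze

  attr_reader :state

  def initialize
    @state = :draft
  end

  def transition_to(new_state)
    unless TRANSITIONS.fetch(state).include?(new_state)
      raise ArgumentError, "cannot move from #{state} to #{new_state}"
    end
    @state = new_state
  end
end
```

Making the allowed transitions explicit like this is what lets the system answer most of the questions below (who can act at each stage, what to lock, when to escalate).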

Our experience with system development, and specifically workflow systems, is that "with visibility comes accountability". What the heck does that mean?

Well, we have found that the mere fact of making a form, spreadsheet, or other information accessible to those who need to see it motivates whoever is currently assigned a task to move it to the next stage in a more timely manner. If people know that others can see how long a task has been sitting with a particular person, it tends to move along at the appropriate pace.

Let’s look at a typical discussion we have when asked to come and look at a workflow requirement.

A standard question we are asked goes like this: "So we have this spreadsheet with 10 fields that we email around. Can you just put it online?"

This is a simple enough request, but let’s run through some of the thought process we go through.

Fields: Are they free text areas, selectable from a predetermined list, automatically calculated, or defaulted to a particular value? Do they show or hide based on previous field contents? Are they mandatory or optional?

Security (within the organisation): who can view, who can update, do we lock off field changes after a particular status?

Alerts: Do we email the next person assigned to the task? Immediately, or in a daily summary? Do you want a working view of your (or everyone's) tasks?

Reassigning Tasks: When someone is away or unable to complete a task, do you need the ability to reassign tasks individually and/or in batches?

Escalation: At what point do you want to escalate that a task is due to be completed, or has sat dormant without any activity? 

Integration: Do you want to pre-populate any information from other systems or databases, whether in real time or at regular intervals? Do we need to send information to other systems?

Security (external to the organisation): Do you need to extend access to external users? When should they see any information about a particular form (the default being no access at all)? What reduced list of fields is visible to the external party?

Reporting: What specific reports are required?  Who has access?  Do you need to export them to a different format / CSV?

Auditability: Do you want to see who did what action to process the workflow, and at what date and time?

Devices: What devices will people use with the system? What level of interaction will they have? For example, are iPhones for view/approval only?

Fast forward a couple of months after using the new system: you now want to add other forms into the system. How does that affect the original design?

The above need not be daunting, and it isn’t when we take you through our process to tease out what you do and don’t require in your system. This then allows us to build upon pre-existing components and frameworks to deliver the solution without the monetary and time costs of a fully bespoke development.

Monday, 21 April 2014

Red, Green, and a Full 180-Degree Refactor

I have always coded first and tested later. Automated testing was not part of those projects, and running unit tests was a manual and time-consuming process. We documented our test cases in an Excel spreadsheet and executed them manually. These were not ancient development projects from the early years of computing; I’m talking only two years ago.

I first came across TDD at university last year. It made up a small part of one lecture with a tutorial exercise to reinforce it. I loved that tutorial: getting my code to pass the tests was kind of like doing a crossword puzzle or completing a sudoku. But, to me, it was just fun. I did not appreciate the value of it.

Here at tworedkites, we have an extensive suite of test specs for each of our projects. And, like all good developers, we write our specs first before writing any code. In my haste to get started, I was not aware of this until after my first code push. What a disaster! I received an alert notifying me that I had broken no less than 527 specs. Way to get noticed on my first day!

After that, I became a spec-Nazi. Regardless of how small or purely cosmetic the change, I ran the entire test suite before pushing my code up. But I continued to also run my code manually so I could see the changes for myself. I still did not fully trust the automated testing process.

It took one more incident to get me 100% on board. Two weeks ago I made what seemed like a very simple change: a previously optional data item should now be compulsory. Making that change took all of about 15 minutes. I manually walked through the process and everything behaved as expected. And then I ran the specs...

I spent the better part of a day dealing with the fallout. Those specs revealed all manner of side effects that I did not anticipate. Without them I would have fixed that one requirement and broken several other seemingly unrelated pieces.

I have learned many things during my internship at tworedkites. But the one that has had the greatest impact on me is the importance of writing specs.

Write them up front, before any code has been written. It makes you accountable. It forces you to do what you say you are going to do.

And not just any spec: it needs to be comprehensive. A comprehensive spec makes your application more robust, because what seems unimportant or not worth testing might be the very thing that highlights a problem for someone else further down the line.
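As a toy illustration of the idea (the class and fields here are hypothetical, not our client code): a previously optional field becomes compulsory, and the checks are written before the validation exists, so they fail until it is added.

```ruby
# Hypothetical model: a claim whose approver used to be optional and is
# now compulsory. Written test-first, the assertions below only pass
# once valid? enforces the new rule.
class ExpenseClaim
  attr_reader :amount, :approver

  def initialize(amount:, approver: nil)
    @amount = amount
    @approver = approver
  end

  def valid?
    !amount.nil? && !approver.nil? # approver is now required
  end
end
```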

Sunday, 13 April 2014

Reflecting Routes

On an app I'm working on, several of the resources appear in the routes nested in various places. I was wondering: Is there a way to be sure that my controller test is hitting all of the routes for this controller?

The simplest use case for nesting a controller under different resources is index pages where the list is scoped to the resource it is under. Consider a simple blog app which has users with multiple blogs, with multiple posts under each blog. Posts may be listed under a specific blog, or under a user.
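The blog example above might be routed like this (a sketch only; the `only:` options and exact nesting are assumptions, chosen to match the index routes shown in the sample output at the end of this post):

```ruby
# config/routes.rb sketch for the blog example: posts listed at the top
# level, under a user, under a blog, and under a user within a blog.
Rails.application.routes.draw do
  resources :posts                      # /posts
  resources :users do
    resources :posts, only: [:index]    # /users/:user_id/posts
  end
  resources :blogs do
    resources :posts                    # /blogs/:blog_id/posts
    resources :users do
      resources :posts, only: [:index]  # /blogs/:blog_id/users/:user_id/posts
    end
  end
end
```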

In more complex applications, a resource may have relationships with multiple other resources. For example, an Order may belong to a Supplier and to a Customer. Certain actions on the Order may take place nested under the Supplier or Customer according to the needs of the system.

Joining some dots.

An rspec controller test may show the following:

    describe "GET show" do
      context "under supplier" do
        before { get :show, supplier_id: 1, id: 2 }

We know that this will call the controller's show action with the supplier_id and id keys set in the params hash. It doesn't go straight there, however: the request parameters in the controller spec are used to build a full request path, and this is dispatched to the app through the application's routes. We know this happens because if you forget a parameter, making the route invalid, the call never makes it to the controller and you end up with an error (ActionController::RoutingError in Rails 3).

Capturing controller actions.

The controller test methods (get, post, and friends) are defined in ActionController::TestCase. They call process with the specified HTTP method, which sets up the controller instance, request and response objects, and so on. Overriding this process method seems like a great place to start.

We can create a module, included in all our controller tests, that overrides this process method to inspect what is going on.

    module ControllerChecking
      def process(*)
        super # call through to ActionController::TestCase
        # ...then inspect the request and response here...
      end
    end

We can get the method of the request from request.method and path of the request from request.fullpath. The parameters are more tricky. The parameters passed from the controller test are incomplete. They are missing the controller and action (and possibly format or other values specified in config/routes.rb). They can be found in the request object at request.env["action_dispatch.request.parameters"]. Be sure to check for nils that may happen when there are exceptions raised in the controller.

Digging into routes.

Rails routes are accessible from the Rails.application.routes.routes object. The first routes is the ActionDispatch::Routing::RouteSet, which contains URL helpers, and the second is the Journey::Routes collection of routes. Each route in the Journey::Routes collection has a few interesting methods on it:

  • verb (Regexp)
  • path (Journey::Path::Pattern)
  • defaults (Hash)

verb and path can be used to match the values we got above, and defaults is a hash of default values to be added to the parameters for the request.
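The verb handling can be illustrated without Rails at all. The Regexp below mimics what Journey stores for a GET route; stripping the anchors is exactly the gsub trick used in the code further down:

```ruby
# Standalone illustration of matching a route's verb Regexp against a
# request method, and converting it back to a clean string.
verb = /^GET$/                        # as stored on a GET route
clean = verb.source.gsub(/[$^]/, '')  # strip the ^ and $ anchors
matched = !(verb =~ "GET").nil?       # the request method matches
```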

So we can find all of the routes for our controller action by doing this:

    parameters = request.env["action_dispatch.request.parameters"]
    routes = do |route|
      route.defaults[:controller] == parameters[:controller] &&
        route.defaults[:action] == parameters[:action]
    end

And then see if the controller action we just tested matched one of those routes with route.verb =~ request.method && route.path =~ path. We can use a hash to collect the routes we are interested in as keys, and a number in the value to count how many tests hit that route.

    counts = {}
    routes.each do |route|
      verb = route.verb.source.gsub(/[$^]/, '') # convert Regexp back to clean string
      key = "#{route.defaults[:controller]}##{route.defaults[:action]}  #{verb}  #{route.path.spec}"
      # eg: "blogs#show  GET  /blogs/:blog_id/posts/:id"
      counts[key] ||= 0
      if route.verb =~ request.method && route.path =~ path
        counts[key] += 1
      end
    end

Rspec runs each controller test as a new instance of a class created for each context. This means that module methods like ours can't save instance variables to be seen in another test. For these counts to survive beyond a single controller test, we need to store them somewhere accessible. Using a module attribute will allow access to these counts in multiple tests and access from an after(:all) block to print the results.
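The persistence trick can be shown in plain Ruby. mattr_accessor comes from ActiveSupport; a hand-rolled module-level accessor behaves the same way, which is why counts accumulated in one example group are visible from another:

```ruby
# Module-level state survives across including classes and their
# instances; each "test class" below reads the same shared hash.
module ControllerChecking
  class << self
    attr_accessor :counts # plain-Ruby equivalent of mattr_accessor
  end
  self.counts = {}
end

class SpecGroupA; include ControllerChecking; end
class SpecGroupB; include ControllerChecking; end

ControllerChecking.counts["posts#index  GET  /posts"] = 1
```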

Putting it all together we have:

    # spec/support/controller_checking.rb:
    module ControllerChecking
      mattr_accessor :counts
      self.counts = {}

      def process(*)
        super
      ensure
        # Record controller hit even in the case of an exception. Perhaps the exception is expected.
        path = request.fullpath.split("?").first
        parameters = request.env["action_dispatch.request.parameters"] # will be nil on bad route

        if parameters.present?
          counts = ControllerChecking.counts
 { |route|
            route.defaults[:controller] == parameters[:controller] &&
              route.defaults[:action] == parameters[:action]
          }.each do |route|
            verb = route.verb.source.gsub(/[$^]/, '') # convert Regexp back to clean string
            key = "#{route.defaults[:controller]}##{route.defaults[:action]}  #{verb}  #{route.path.spec}"
            # eg: "blogs#show  GET  /blogs/:blog_id/posts/:id"
            counts[key] ||= 0
            if route.verb =~ request.method && route.path =~ path
              counts[key] += 1
            end
          end
        end
      end
    end

    # and in a controller spec:

    describe MyController do
      include ControllerChecking

      before :all do
        ControllerChecking.counts = {}
      end

      after :all do
        puts "-----"
        ControllerChecking.counts.each_pair { |key, count| puts "%2d <-- #{key}" % [count] }
        puts "-----"
      end

      describe "GET index" do
        before { get :index }
        it { should be_success } # and so on
      end
    end

And after running tests in this controller, you should see a nice output, something like:

     1 <-- posts#index  GET  /posts(.:format)
     0 <-- posts#index  GET  /users/:user_id/posts(.:format)
     0 <-- posts#index  GET  /blogs/:blog_id/posts(.:format)
     0 <-- posts#index  GET  /blogs/:blog_id/users/:user_id/posts(.:format)


Sunday, 6 April 2014

Spree and using Integration Specs to automate samples.

Spreecommerce is a completely open-source e-commerce storefront and backend written in Ruby on Rails. Getting a storefront running is about a 10-minute project; after that you can spend days understanding all the configurations and extensions. Trust me, it is worth it.

As Spree runs as an engine, there is not much testing you need to write. However, after setting up custom payment gateways in seed data and importing the customer's existing clients and orders into Spree, I needed to test the result. To do so I would end up going to the storefront webpage:
  • select an item
  • click add to cart
  • click checkout
  • click new user
  • fill in details plus address
  • enter test credit card details
  • you get the idea....
And then repeat the whole thing to test adding an order for an existing client.

After multiple rounds of copying and pasting in test Mastercard numbers, it became evident that I needed to automate this.

Enter integration specs…

This is not your typical spec: it has multiple expectations and is very long. The goal here is to run a full order process, and the expectations are just there to let you know where it might have failed. Spree uses JavaScript on many of its pages, and the spec could not select the state (‘Queensland’) unless it was run using Selenium. To do this, note the js: true:

  it 'Check full payment order process', js: true do

This will open and run the script inside a web browser (normally Firefox, but it is configurable).

To use this, your Gemfile will need:

  gem "capybara"
  gem 'selenium-webdriver'
The other important change is in spec_helper, right at the top, where this line is changed to:

  ENV["RAILS_ENV"] = 'development'
Note that the ||= was changed to =. This could also be done only in the integration spec so it does not affect any other specs.
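The difference matters because ||= only assigns when the value is not already set, while = always overwrites. A plain-Ruby sketch, using a hash to stand in for ENV:

```ruby
# Using a hash to stand in for ENV: ||= assigns only when the key is
# nil (or false), while plain = always overwrites.
env = {}
env["RAILS_ENV"] ||= "test"         # unset, so this assigns "test"
env["RAILS_ENV"] ||= "development"  # already set: no effect
before_override = env["RAILS_ENV"]

env["RAILS_ENV"] = "development"    # plain assignment always wins
```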

and add the following within the config block:

  config.include Capybara::DSL

Now that we have this set up, we can actually run the script and it will create an entry in our development system, allowing you to use and see this order within the Spree admin system. Another advantage is that you can use and test the seed data you have created for a project.

Extra tips…

When the script completes, it will close the browser window it opened. It is possible to change this behaviour, but you end up with lots of open browser windows, so it's not recommended. Instead, if you are having an issue and want to see what is happening, put the following at the point where you want to debug:

  sleep 30

which will make the script sleep for 30 seconds, allowing you to look at the failing screen.

Also remember you can dump the screen's HTML by using:

  p page.body

You can also take screenshots from within the integration spec using a screenshot gem.

Have fun!