Archived entries for software

Notes To A Young Programmer

It makes me laugh to think of the code I wrote back in my early undergraduate days. How bad it was. A jumble of spaghetti without sound design principles. Ugly.

I’ve learned a lot since then, having worked professionally for some years now. Across all the varied experiences, from waking up at 5 AM every morning in a snowy, miserable February to finish a deadline, to helping grow a startup from a handful of server machines to several dozen, you start to see patterns. (Not to mention plenty of mistakes and failures along the way.)

Here is a list of “big picture” patterns I’ve come to understand in software development. This isn’t a commandment. It’s a personal take on what I believe matters for both individuals and teams looking to build products. It’s the list I wish I had known when I was starting out, when spaghetti code was all I knew.

Find a mentor

The absolute best way to learn anything, in my experience, is to find someone who’s already good at the thing you want to get good at. Programming is no different. This is by far the one thing that, if you can do it, will accelerate your learning above all else.

Why? Because that’s how humans learn. We learn best by observing and mimicking the behaviour of those around us. It’s like a shortcut. Being with a master lets you see what’s important and the kinds of questions they ask. It helps tremendously if you can have some of that, their attitude, “rub off” on you.

Luckily at work I’m surrounded by awesome programmers. I learn from them every day. I see how they use the command line. I pay attention to how they design code. If you can, find a mentor or place of work where you are close to great people. I can’t stress this enough.

I wish I had paid more attention to this early in my career. So my advice: Make sure your first job or internship is at a place with great people.

It’s about humans, not “code”

I once got into an argument with some startup founders who asked for my help. The development lasted several months. Near the end, I told them our team in India wasn’t up to snuff and that we should have hired someone local to build the perfect product. Looking back, it was knee-jerk and wrong.

You see, beginning engineers see code as a kind of Platonic creation. They imagine the “perfect” solution to the problem, as if time and resources didn’t matter. I know, because I was one of those people. And in that moment with the founders, I used that belief against them: our problems would be solved if only we had better written code.

But we aren’t in the business of writing perfect solutions. Software developers are in the business of building solutions for real people.

Software is just the medium. You wouldn’t say the physical sound and vibration of a guitar string is the point of music, would you? We want to build something impactful, and that only makes sense when you are building things that help actual, real people.

It’s vitally important for software developers to keep that top of mind. It keeps them from jumping to “religious” viewpoints, as I did with the startup founders. I’ve found my skill at balancing business and customer needs with the act of software creation increasingly important. Does this feature make sense? Do our customers want it? Will it drive the business in the right direction? Given market conditions, do we have the time and money to do it? The 10x return comes from doing the right thing, not the wrong thing well.

Strive for simplicity

A corollary to the above: Write code for people, not machines. This means solving the problem at hand, of course, but it also means making the solution understandable to others. And the best way to do this, I’ve found, is to make things simple. Brevity is a virtue.

Recently, my team was in the thick of a major redesign. The first draft was sketched on a whiteboard. It had a lot of moving parts–circles and arrows abounded–but then again we needed to scale it out and we couldn’t avoid it. Or could we? Could something simpler do the job just as well? It wasn’t clear.

In a three-hour session, we discussed and dissected our assumptions, separating what was vital from what we could push to later. In the end, a layer collapsed into a single module, and we introduced a new idea that spared our business logic from needless complication. (Thanks Angelo!) We also decided against a cache, since we could get away without one.

By meeting’s end we understood the system and saw how it worked. Simplifications at the design level are big wins because they reduce overall maintenance and errors while boosting readability. Time and again the KISS (Keep It Simple, Stupid) principle has paid dividends.

Which leads to the last point…

Don’t take the first option

Simplicity takes work. It doesn’t just happen. We got to our design by iterating. Your first idea is almost never the best choice. Don’t take the first option.

In a way, this is what design is: a process of coming back to a problem today, tomorrow, the day after tomorrow, pruning and pruning. It’s about editing. As Steve Jobs remarked, design isn’t about how something looks; it’s about how it works. And to understand how something works means coming to the problem again and again.

An Example of Delightful Design

The app I use most on my iPhone is without a doubt Instapaper. I use it to catalog my web browsing. And I love how it syncs to readable form on all my devices. It makes reading enjoyable because it’s seamless and convenient.

But what I love most is how thoughtfully designed it is. What do I mean by this? Let me illustrate with examples.

When browsing the web on my phone, I occasionally copy a URL to the clipboard to import later into Instapaper’s Read Later functionality. So I was delighted when all I had to do was launch Instapaper and it took care of the rest:

When you read an article in Instapaper, you can go back to your list by tapping the article, which makes the Back button appear at the bottom. One day, while reading the end of an article, I was preparing myself mentally to tap the screen to navigate back to the list. It looked something like this:

To my delight, however, I didn’t need to tap because as I approached the bottom, I got this:

The bottom navigation bar with the Back button automatically slid into view (left-most icon). Awesome! No need to tap because the application proactively predicted my next action.

This is delightful design. It’s delightful because I didn’t expect it, and when it did happen, I was grateful it did.

At its core delightful design is about caring. It’s about empathy. (Which, when you think about it, is what design really is.) Someone at Instapaper cared enough to put themselves in the user’s shoes to imagine the question: What can we do here to help the customer?

Great design like this builds relationships with your customers by showing that someone cared enough to understand you. That’s a great thing. No one likes to be ignored. It’s always a good feeling knowing someone put the effort into thinking things through to make the experience better.

As a result, I’ve decided to contribute $1/month to Instapaper’s development. Money well spent.

Thoughts on Peer Reviewing Source Code

Last month I attended OSCON 2012 in Portland, Oregon. OSCON is the largest open-source convention in North America. Pretty much nerd heaven, which suits me perfectly fine. While there were some hits among the misses, my colleague Scott Hyndman and I came away with some motivating ideas to take back to Blu Trumpet.

One of those ideas is peer reviewing code. Yes, we were guilty of not reviewing code before commits. So three weeks ago we decided as a team to have every git commit be peer reviewed, as a first run. Our process works as follows:

  • commit changes on a feature branch via git flow and push the branch upstream
  • create a pull request via GitHub
  • randomly assign someone on the team to review the code via Campfire (this is done via Hubot, the chat robot we’ve programmed internally at BT)
  • pair program and have the committer explain the code to the reviewer before merging
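The random-assignment step is where Hubot comes in. Here is a minimal sketch of the idea in plain Ruby (the team names and the pick_reviewer method are invented for illustration; our real version lives in our Hubot scripts):

```ruby
# Hypothetical team roster for the sketch.
TEAM = %w[alice bob carol dave]

# Pick a reviewer at random, never assigning the committer
# to review their own code.
def pick_reviewer(committer)
  (TEAM - [committer]).sample
end

puts pick_reviewer('alice')
```

The subtraction before sampling is the whole trick: it guarantees a reviewer other than the committer while keeping the selection uniform over the rest of the team.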

After three weeks of this, I can personally say that our code quality has improved. We are slowly getting everyone on the same page on coding style because it’s enforced during the pair programming phase. Furthermore, we are getting nice knowledge transfer across our team as the random selection forces members to familiarize themselves with different components of our system.

It has also dawned on me that peer review is a great way to refactor the code base. We often think of refactoring as a hammer job, but minor refactors over time can really clean up the code long term. We’ve taken an “always have a commit make the code base better” attitude toward every merge.

But what I love most about peer review is that it feels good. I get a little dopamine spike every time I commit something, knowing that the code style has improved and any minor refactors have taken place. This is probably its most powerful benefit.

You see, over time, when building a team, you want good habits. And how do you build habits? By applying a well-known pattern known to create them:

  • have a stable repeating anchor (pull-request)
  • engineer as easy a habit as possible (review the code for style and design improvements)
  • be rewarded for the habit (the good feeling I get when I know I’ve done my job)

This is precisely why peer review works. It plugs into a habit-forming routine that transfers across team members. We plan to make improvements and adjustments as we move forward, but so far it’s been a positive experience.

What Selling Toilet Paper Taught Me About Success

Several months ago I was asked by David Gillespie and the Collector’s Edition crew to build a site to sell toilet paper. I thought the idea was brilliant and funny. I went to work during my free evenings and weekends building the web application.

Shortly after, Shitter went live and, as far as we could tell, took hold of a small corner of the interwebs. We got coverage in Forbes (twice), Mashable, Business Week, Huffington Post, Perez Hilton (!) and reached #3 on Hacker News. We received coverage as far away as China, Germany and Spain. Exciting stuff.

But something unexpected (and depressing) happened. The feeling of success and accomplishment I thought would be prolonged lasted all of one hour and was soon replaced by anxiety about our servers going down under the weight of the coverage.

I spent the following days backing up data, fixing bugs, monitoring server logs and being more or less in a state of apprehension, hoping to avoid a crash. Thankfully, things held together.

As things died down I gathered my thoughts. What was I really in this for? It put into sharp focus something I know but don’t always articulate: “success” (whatever that means) is short-lived. Really, really short-lived. That is, if you think of “success” as a form of public triumph; if you think of it as a moment instead of as an opportunity to create. And in a certain way I thought that. We all do things hoping in the end they make people feel a certain way about us. I’m no different.

So I had to go back and reflect on where the good moments were to, in a way, affirm my effort. Yes, putting a product out into the world is great fun, one of the most ecstatic feelings you can have. But for me, the joy was building something. The knowledge that you are involved in creating something instead of consuming something.

Over the last half-decade in the Ruby community (Ruby is a computer programming language), there was an influential programmer who called himself “Why The Lucky Stiff”, or just _why. He was irreverent and funny, and he held a mirror up to us and questioned the value of our work. Then, suddenly, he left.

_why left behind several nuggets of wisdom, but the following quote speaks to me most deeply. It reminds me to be proactive, to improve by producing and to see things through. It reminds me what I strive to do every day in my work.

when you don’t create things, you become defined by your tastes rather than ability. your tastes only narrow & exclude people. so create.

How We Built a HackTO Winning App

HackTO with Team Blu Trumpet

Yesterday I took part, along with my awesome teammates Scott Hyndman and Victor Mota, in HackTO’s April 2012 hackathon, part of the Canada-wide HackDays. We had a great time thinking through the problems and the day was a lot of fun. To boot, our entry LastResort ended up placing first! We all agreed before the winners were announced that the day was a success regardless, but it was nice icing on the cake. There were a lot of great apps.

After talking with the judges, reflecting on our approach and seeing the other 22 demos, our team brainstormed possible reasons why we placed higher than expected. Here’s what we came up with.

Do your homework

In a hackathon, it’s all about execution. You don’t have time. Non-demoable apps do not make the final cut, so it was crucial we knew what we were getting into before the event.

That’s why, the week prior, our team sat down to scope possible ideas. We familiarized ourselves with the APIs to get a feel for their range and capabilities. We even exercised some of them, making calls and experimenting with sandbox tokens. You don’t want to waste your time with mundane issues like OAuth that slow you down.

Choose your problem carefully

As mentioned, incomplete apps don’t demo. We made sure we had a problem that was interesting and could be done in seven hours. A completed, less glamorous app is better than an unfinished, ambitious project.

After much discussion, we settled on the following problem: A tool that monitors your existing email stream for critical issues and calls the appropriate support people by phone who can fix them.

The problem had several benefits: it scratched our own itch, so we knew it was useful (utility brownie points); it could be completed in seven hours; we double-checked the APIs were capable of doing what we wanted; and it had the bonus of using more than one of the sponsored APIs (ContextIO for email mining and Twilio for phone calls, both awesome APIs)–which we learned afterward one of the judges appreciated.

Scope the day’s work

Once we had a problem, we scoped out an MVP (Minimum Viable Product) the day before. What is the absolute minimum, demonstrable product? We decided it was phoning a list of contacts in sequence when a specified email was sent. That’s it. On the actual day, we hardcoded the contact list and the email was triggered by a manual send during the demo. There was no special phone call behaviour. (You can easily see how features could be added, but again the goal is brevity and speed of execution to the demo.)

We built a mantra on the team: “Build to the MVP.” Anything that sidetracked us was thrown out. It gave our team focus. We knew what the goal was.

We also wrote out the actual tasks we would need to build: github setup, necessary API keys, etc. Then we assigned a team member to each task or component. Again, we made sure items contributed to the MVP. We also made a list of “nice-to-haves” of additional features that we could add on but weren’t critical to the MVP. In the end, we didn’t implement anything off this latter list.

Why did this work? Because everyone on the team knew exactly what they were doing, why they were doing it, and how it all contributed to the end goal. Most importantly, each of us knew what we didn’t need to do, and many times we found ourselves rejecting tasks before getting sucked into time wasting.

Choose a good team

This goes without saying. Scott and I work together at Blu Trumpet and Victor was one of our previous interns. We all like and respect each other and know our strengths and weaknesses. We also have the added benefit of being candid and critical, and there were plenty of healthy discussions throughout the day. We were constantly talking with each other, pair programming at times, pitching in to help whenever needed. “Hey, what do you guys think of this? How does this look?” was a common phrase. We were constantly checking in on each other.

We were especially vocal during our presentation prep. We constantly asked ourselves “Is this clear? Can we make this shorter? Can we make this more impactful?” We were quick to point out when one of us was using too many words or when key points were lost. It helped a ton.

It’s next to impossible to churn something out with random people you met that day. What’s true in business is true at hackathons.

Present as if you are pitching to VCs

We made sure we were done at least one hour before the 5pm deadline. Why? We knew a compelling pitch was critical, so we left time for it. How you present your product is really, really important. How people perceive your product is the product. Good impressions go a long way.

We put together a three-slide Keynote presentation that made sure we (1) defined the problem and explained why it was important to solve it (server downtime is bad bad bad); (2) identified which audience or “customer segment” our solution served (bootstrapped startups); and (3) explained what our value-adds were (free, developer-friendly).

We spent the final hour going over and over our presentation, timing each run. We couldn’t afford not to finish. This saved the presentation, as a final decision to shorten the demo allowed us to finish in exactly three minutes, without a second to spare (literally).

We also added humour and that never hurts! It was truly a team presentation as Victor and I did the actual presenting while Scott worked the slides and executed the demo.

One final note…

Writing this now everything seems “obvious.” But believe me, we were very close to not finishing. A server failure or API hangup would have cost us and put the whole thing into doubt. My heart was pounding during the presentation and thank goodness everything went as planned. Afterward, we were all surprised how nicely it all came together, as in software this rarely happens!

It’s amazing what you can accomplish in one day. Obviously, you can’t work at this speed everyday. But it was fun participating in a “mini-startup” from end-to-end. And for those thinking of working in the startup world, I highly recommend it.

Nudges, Defaults and the Success of Rails

Humans are lazy. You didn’t need me to tell you that. Status quo bias and inertia constantly work against our better judgement to learn and improve.

In Richard Thaler and Cass Sunstein’s fascinating book Nudge: Improving Decisions About Health, Wealth, and Happiness the two researchers explore how certain psychological triggers and observations can be used to design choices to persuade and “nudge” people to desirable behaviours. One of those observations is the use of defaults.

Default choices are a powerful but simple way to nudge behaviour. Opt-out defaults for magazine subscriptions work wonderfully, as Thaler and Sunstein point out: people continue to pay even when they stop reading. More dramatically, opt-out 401(k) company plans have more than double the savings rates of opt-in plans. When given the choice to do nothing, most will.

As a software developer, this got me thinking: What kind of defaults can I design that will nudge my users to behaviours I find desirable?

It hit me that the influence of defaults went beyond asking the question. Defaults in fact are a big reason that my professional life is easier and more enjoyable than it was before. And I have Rails to thank for that.

Rails is filled with defaults. This is one of the main reasons for its success. Someone, somewhere decided that 80% of the things developers deliberate on don’t matter. How to structure an application, how to set up a database, which server to test against—there are defaults for all of these and they work out of the box for most developers. There’s even a phrase for this: convention over configuration.
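You can see the defaults idea in miniature in plain Ruby: a method that merges caller overrides on top of sensible defaults, so doing nothing gets you a working configuration. (This is an illustrative sketch, not actual Rails code; the names and values are made up.)

```ruby
# Hypothetical connection settings with sensible defaults.
DEFAULTS = { adapter: 'sqlite3', pool: 5, timeout: 5000 }

# Callers override only what they care about; everything else
# falls back to the defaults.
def connection_config(overrides = {})
  DEFAULTS.merge(overrides)
end

puts connection_config[:adapter]                           # => sqlite3
puts connection_config(adapter: 'postgresql')[:adapter]    # => postgresql
```

The caller who does nothing gets a complete, working configuration; the caller who cares overrides exactly one key. That asymmetry of effort is the nudge.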

David Heinemeier Hansson, the creator of Rails, had this to say at RailsConf 2008:

One of the points I keep coming back to with Ruby on Rails, is that we confess commonality. That we confess that we’re not as special as we like to believe. We confess that we’re not the only ones climbing the same mountain…I think the conclusion—the conclusion that we’re not as special and unique as we like to believe—is the fact that the flexibility we think we need, we want—we really don’t.

By designing defaults intelligently, Rails hit the sweet spot of doing as much of the work for you as possible while getting out of the way when needed. I’ve had experienced Rails developers echo this very thing. It goes to show that with something as influential as Rails, defaults can have a big impact.

Digging Into RubyGems

Every developer has experienced an episode of painful dependency management. Missing libs, “dll hell,” and hours of wasted effort. Been there. It’s painful.

Luckily for those working in the Ruby ecosystem, there is a nice tool that helps with dependency management: RubyGems.

In this post we’ll dive a little deeper into how RubyGems works with your Ruby code to properly load and manage gems. Understanding the load process will better prepare you when things go wrong (and things will, won’t they?). It will also give you insight into how to hook into and extend RubyGems’ normal behaviour if you so choose.

Ruby’s $LOAD_PATH or Where’s my library?!

When you load a dependency via Ruby’s require or load, where does Ruby go to fetch that library? Answer: $LOAD_PATH.

Ruby’s $LOAD_PATH is an array of directories that Ruby will search in to find and load dependencies.

This is what my Mac OS X system Ruby’s $LOAD_PATH looks like (Ruby 1.8.7):

$ irb
irb> $LOAD_PATH
=> ["/opt/local/lib/ruby/site_ruby/1.8", "/opt/local/lib/ruby/site_ruby/1.8/i686-darwin10",
"/opt/local/lib/ruby/site_ruby", "/opt/local/lib/ruby/vendor_ruby/1.8", "/opt/local/lib/ruby/vendor_ruby/1.8/i686-darwin10", "/opt/local/lib/ruby/vendor_ruby", "/opt/local/lib/ruby/1.8", "/opt/local/lib/ruby/1.8/i686-darwin10","."]

When I require 'myfile' in my Ruby code (running my system 1.8.7 version), Ruby will try to find the file myfile.rb in one of the directories above and run it. If it can’t, it raises a LoadError exception.

At a crude level, you could manually drop dependencies in the $LOAD_PATH to load libraries. But programmers are lazy and that’s a lot of work. Wouldn’t it be cool if there was a tool to manage gems for you?
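Here is what that crude approach looks like as a runnable sketch: create a throwaway directory with a file in it, push the directory onto $LOAD_PATH, and require the file by name. (The file name and constant are invented for the example.)

```ruby
require 'tmpdir'

# Create a throwaway directory containing a tiny library file.
dir = Dir.mktmpdir
File.write(File.join(dir, 'greeting.rb'), "GREETING = 'hello from the load path'\n")

# Prepend the directory to $LOAD_PATH so require can search it.
$LOAD_PATH.unshift(dir)

# Now a bare require finds greeting.rb in our directory.
require 'greeting'
puts GREETING  # => hello from the load path
```

This is exactly what require does for every entry in $LOAD_PATH; we’ve just added our own entry by hand.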

Enter RubyGems.

RubyGems: Painless dependency management

RubyGems is a tool to discover, distribute, manage and build gems. When you install RubyGems (http://docs.rubygems.org/read/chapter/3), it does two things: it writes the source to one of the directories in Ruby’s $LOAD_PATH so you can require 'rubygems' in Ruby and installs the command line tool gem to help manage the gems themselves. They work together: by installing gems in a standard place, RubyGems can then work more intelligently about how to make libraries accessible from Ruby.

After installing RubyGems 1.6.2, I can see that rubygems.rb was installed in /opt/local/lib/ruby/site_ruby/1.8, which is part of the $LOAD_PATH. The gem command line tool was installed in /opt/local/bin.

Furthermore, the gem command tool I use to install gems has the following environment setup:

$ gem environment
RubyGems Environment:
- RUBYGEMS VERSION: 1.6.2
- RUBY VERSION: 1.8.7 (2010-01-10 patchlevel 249) [i686-darwin10]
- INSTALLATION DIRECTORY: /opt/local/lib/ruby/gems/1.8
- RUBY EXECUTABLE: /opt/local/bin/ruby
- EXECUTABLE DIRECTORY: /opt/local/bin
- RUBYGEMS PLATFORMS:
- ruby
- x86-darwin-10
- GEM PATHS:
- /opt/local/lib/ruby/gems/1.8
- /Users/iha/.gem/ruby/1.8
- GEM CONFIGURATION:
- :update_sources => true
- :verbose => true
- :benchmark => false
- :backtrace => false
- :bulk_threshold => 1000
- REMOTE SOURCES:
- http://rubygems.org/

Notice the value of INSTALLATION DIRECTORY; that’s where my gems are installed when I run gem install name_of_a_gem. How does RubyGems decide to put gems there? It’s relative to where the ruby executable directory is, which in my case is in /opt/local/bin/.

No wasted effort deciding for yourself where to put gems; the gem tool decides for you. You’ll notice too that it’ll try to find gems from the remote source http://rubygems.org, a popular gem hosting site. Nice!

Let’s go ahead and install Nokogiri, a popular XML parser, from rubygems.org.

$ gem install nokogiri # go off to remote source http://rubygems.org
...
$ gem list

*** LOCAL GEMS ***

nokogiri (1.4.4)

Great! We just installed our first gem. And going to the install directory we notice:

$ cd /opt/local/lib/ruby/gems/1.8/gems
$ ls
nokogiri-1.4.4/

as expected by the gem command-line tool’s environment.

Let’s run some Ruby code via irb and require Nokogiri.

$ irb
irb> require 'nokogiri'
LoadError: no such file to load -- nokogiri
from (irb):1:in `require'
from (irb):1
from :0

As expected, it can’t find Nokogiri: the gem was installed in /opt/local/lib/ruby/gems/1.8/gems/nokogiri-1.4.4/, which is not in $LOAD_PATH. To make installed gems accessible from Ruby, we first have to load RubyGems (recall rubygems.rb is in the $LOAD_PATH):

irb> require 'rubygems'
=> true
irb> require 'nokogiri'
=> true

Success! We just loaded our first gem.

At this point, we might reasonably assume that installing the gem placed Nokogiri in $LOAD_PATH all along, and that loading RubyGems simply made Ruby notice it. But check $LOAD_PATH after the require and you’ll see something more interesting: a new entry has appeared at the front:

irb> $LOAD_PATH
=> ["/opt/local/lib/ruby/gems/1.8/gems/nokogiri-1.4.4/lib", "/opt/local/lib/ruby/site_ruby/1.8", "/opt/local/lib/ruby/site_ruby/1.8/i686-darwin10", "/opt/local/lib/ruby/site_ruby", "/opt/local/lib/ruby/vendor_ruby/1.8", "/opt/local/lib/ruby/vendor_ruby/1.8/i686-darwin10", "/opt/local/lib/ruby/vendor_ruby", "/opt/local/lib/ruby/1.8", "/opt/local/lib/ruby/1.8/i686-darwin10", "."]

That entry was not there before we required Nokogiri. So how did RubyGems add it at exactly the right moment? By overriding require.

Looking at the source, we notice that RubyGems 1.6.2 has done exactly that:

#--
# Copyright 2006 by Chad Fowler, Rich Kilmer, Jim Weirich and others.
# All rights reserved.
# See LICENSE.txt for permissions.
#++

module Kernel

  if defined?(gem_original_require) then
    # Ruby ships with a custom_require, override its require
    remove_method :require
  else
    ##
    # The Kernel#require from before RubyGems was loaded.

    alias gem_original_require require
    private :gem_original_require
  end

  ##
  # When RubyGems is required, Kernel#require is replaced with our own which
  # is capable of loading gems on demand.
  #
  # When you call require 'x', this is what happens:
  # * If the file can be loaded from the existing Ruby loadpath, it
  # is.
  # * Otherwise, installed gems are searched for a file that matches.
  # If it's found in gem 'y', that gem is activated (added to the
  # loadpath).
  #
  # The normal require functionality of returning false if
  # that file has already been loaded is preserved.

  def require path
    if Gem.unresolved_deps.empty? or Gem.loaded_path? path then
      gem_original_require path
    else
      spec = Gem.searcher.find_active path

      unless spec then
        found_specs = Gem.searcher.find_in_unresolved path
        unless found_specs.empty? then
          found_specs = [found_specs.last]
        else
          found_specs = Gem.searcher.find_in_unresolved_tree path
        end

        found_specs.each do |found_spec|
          # FIX: this is dumb, activate a spec instead of name/version
          Gem.activate found_spec.name, found_spec.version
        end
      end

      return gem_original_require path
    end
  rescue LoadError => load_error
    if load_error.message.end_with?(path) and Gem.try_activate(path) then
      return gem_original_require(path)
    end

    raise load_error
  end

  private :require

end

Requiring RubyGems loads the Gem module, which keeps its own internal array of paths to installed gems. The overridden require uses this array, in addition to $LOAD_PATH, to search for gems; when it finds a match, it activates the gem, adding the gem’s lib directory to the load path before retrying.

That path is:

irb> Gem.all_load_paths
=> ["/opt/local/lib/ruby/gems/1.8/gems/nokogiri-1.4.4/lib"]

So we see the path to Nokogiri in the Gem module’s internal path array, which the overridden require uses to search for gems. The Gem module constructs this array by going into the default gem repository (in this case /opt/local/lib/ruby/gems/1.8) and writing the absolute path to the lib directory of each gem into the array, along with any gem-specific load paths defined in each gem’s .gemspec file. You can override the default gem repository by defining the environment variable $GEM_PATH.

This explains how we were able to require Nokogiri: its lib directory wasn’t on $LOAD_PATH until the moment the overridden require activated the gem.

So now we see the big picture: Manage gems via the gem command-line tool; require 'rubygems' and then require any gem in Ruby after that.
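The aliasing trick at the heart of the override is worth internalizing. Here is a toy version of the same pattern (this is a sketch, not the RubyGems source): keep a handle on the original require, and on LoadError search an extra directory before retrying. EXTRA_DIR and the fallback file are invented for this example.

```ruby
require 'tmpdir'
require 'fileutils'

# Hypothetical fallback directory that our wrapped require will search.
EXTRA_DIR = File.join(Dir.tmpdir, 'extra_libs')

module Kernel
  # Keep a handle on the original require, just as RubyGems does.
  alias_method :original_require, :require

  def require(path)
    original_require(path)
  rescue LoadError
    # Fall back to our extra directory, then retry with the full path.
    candidate = File.join(EXTRA_DIR, "#{path}.rb")
    raise unless File.exist?(candidate)
    original_require(candidate)
  end
end

FileUtils.mkdir_p(EXTRA_DIR)
File.write(File.join(EXTRA_DIR, 'fallback_demo.rb'), "FALLBACK_LOADED = true\n")

require 'fallback_demo'  # not on $LOAD_PATH; found via the fallback
puts FALLBACK_LOADED     # => true
```

RubyGems does the same dance with far more machinery: instead of one fallback directory, the rescue path searches the gem specs, activates the matching gem, and retries.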

Ruby 1.9

As of Ruby 1.9, you no longer need to explicitly require 'rubygems' as it’s now baked right into Ruby. However, if you are running 1.8, be careful of littering your code with require 'rubygems' everywhere. Some prominent Rubyists have argued (https://gist.github.com/54177)–and I believe rightly–that it unnecessarily couples your code to RubyGems, which is really an environment setup concern.

The workaround is to set the RUBYOPT environment variable to ‘rubygems’ so that Ruby will run with ‘-rubygems’ as an option to automatically load RubyGems on startup.

Conclusion or The Path To Enlightenment

It might have occurred to you that despite my plug for RubyGems, there’s a problem. By default require 'nokogiri' activates the latest version installed in your gem repository. But what if I have different versions of Nokogiri? And what if my code needs to load different versions depending on some condition (say, testing versus development)?

Luckily there’s an answer: Bundler. We’ll explore Bundler and versioned dependency management in Rails in the next post. But in the meantime, you can require RubyGems and make your life that much easier for now.



Copyright © 2004–2009. All rights reserved.
