Archive – Page 6

The Archival Benefits of Static Site Generators

2020 Update: While I’m all in now on Bridgetown, a modern fork of Jekyll, I’m leaving this up since you can apply many of these same principles to Bridgetown as well.

I’ve been on a nostalgia trip lately, poring over old snapshots of various sites and blogs I worked on in the past (stretching all the way back to 1996). Thank goodness for the Wayback Machine! But it’s gotten me thinking about the impermanence of the digital artifacts we create all the time as designers, developers, and content authors.

All the work you’ve put into that app, that blog post, that video, that Instagram story…blink and it’s gone! In some cases that’s by design. Content that expires and is quickly forgotten has become desirable in certain circles, the artform being all about its “in the moment”-ness.

But in the instances where you want to preserve your content for posterity, the options become challenging. Let’s focus on blogging for the sake of this discussion. I’ve run a number of blogs over the years, and I deeply care about preserving those—at the very least for myself but also for my children and their children (etc.). But the sad truth of the matter is that I’ve “lost” almost all of them. They’re either folders of PHP spaghetti code or SSI files (Server-Side Includes…remember those?) or WordPress installations scattered across multiple ancient backup drives—some of which are in formats or use connectors I can’t even access any more. Plus in many cases the content itself doesn’t even live in those folders, but rather exists in old MySQL databases which I would have to track down, load up, and possibly convert in order to access any of that content!

Bottom line: I’m essentially forced to rely on the Wayback Machine to look up my old content, but not all of the posts and domains were properly archived—and on many of the pages that do work, image links are often broken. It’s hardly an ideal scenario.

There is a Better Way: Make Your Site Static #

Thankfully, I’m now building all of my sites (including this one!) in a completely new way. I’m using Jekyll, which is a Static Site Generator. What does that mean? It means the content for the site—both the blog posts and pages as well as all of the template & layout files, Javascript code, and stylesheets—lives entirely within a simple folder hierarchy and consists solely of plain text files (other than images and other media of course). No databases to install, no weird dynamic code to run on the fly. All you have to do is run jekyll build at the command line (or in my case gulp build ; jekyll build because I process the source SCSS and JS files with Gulp) and in seconds you get a _site folder with your complete website generated and ready to deploy and view anywhere. As long as you write your content in Markdown (.md) or HTML (.html), you’re golden.

But the special one-two punch of using a Static Site Generator such as Jekyll is the fact that you can save your site and all of its content into a version controlled repository. Once your site is stored in a Git repo, you have endless options for how you want to archive and protect your data. Not only do you have every version of your site archived within the repo (so you can “go back in time” to view past iterations of the site), you can easily store the repo in multiple places at once. All of my sites are stored both locally on my computer as well as “in the cloud” in Bitbucket or GitHub, and in some cases they’re also stored on DigitalOcean servers I’ve set up with custom web apps I use to manage the content files using WYSIWYG editing tools. If my computer is busted, my sites are safe online, and if the internet completely goes down, at least I still have my local copies.

Why is this all so important for archival purposes? Here are three big reasons:

  1. You can view your site without any special software. Just fire up the most basic web server imaginable, drop your _site folder into its root location, and your site is up and running. No PHP, Ruby, Go, Python, or any other server language or framework required. There is no step three!
  2. Your content is contained within the most future-proof formats possible. Markdown files are just plain text with minimal decoration. HTML is the most well-established and widely-supported data exchange format in history. JPEG images certainly aren’t going anywhere any time soon. It’s safe to say that (unless you build your site with a bunch of crazy client-side Javascript rendering such that nothing works until all your code runs) you’ll be able to load a web browser decades from now and your site will just work.
  3. Your content is automatically backed up in multiple contexts, consistently. If all of your content is “silo’ed” in a single MySQL database somewhere on some WordPress host, and that host goes down or their backup gets corrupted, you’re toast. Years of work, gone. (And let’s not forget the fact that WordPress sites are prime targets of hacker attacks on a daily basis!) However, if your content lives within Git repositories that likely exist in multiple locations simultaneously, the likelihood you’ll completely lose your repo and all that data is vanishingly small.

The Future is Static #

A lot of web developers are using the term JAMstack these days to describe static sites built with the latest generation of tools, because the word “static” got a bad rap back in the day when new “dynamic” tools such as Movable Type or WordPress were taking over the world. But there’s nothing truly static about static sites built with tools such as Jekyll, Hugo, and many others.

I can use extremely sophisticated build processes to create fantastic website designs with tons of interactivity, and I can log into admin interfaces and use WYSIWYG editors if I want to in order to manage content and publish updates at the click of a button. Using Jekyll doesn’t mean you have to hand-code every blog post in raw HTML and “FTP” it somewhere like in the old days. We live in a new age where static site generators are not only slick and amazing, but are in fact paving the way for the future of modern web development.

Static is dead. Long live static!

So to sum it all up, if you want to create blogs and websites that will stand the test of time, that will still be readable ten, twenty, probably even fifty years from now, that will not get buried in a stack of hard drives somewhere or lost in some database black hole on the internet, then you need to try out Jekyll (or one of its competitors). I guarantee you: once you go JAMstack, you’ll never go back!



Why Service Objects are an Anti-Pattern

I have been vocal from time to time in internet discussions regarding service objects and why I believe they are the wrong solution to a legitimate problem. In fact, not only do I think better solutions exist than service objects in the majority of cases, I maintain that service objects are an anti-pattern which indicates a troubling lack of regard for sound object-oriented design principles.

What if I told you that adding service objects doesn’t make your Rails codebase any better?

It’s hard to get such lofty points across in a random tweet here or comment there. So I decided to write this article and dig into some real-world code that illustrates my position precisely.

Quick Aside: If you read this article and still think I’m off my rocker, here’s another recent take on the subject (with code examples!) by Jason Swett that I think does a great job illustrating the issue.

So…what do I mean when I use the term anti-pattern? Here’s a reasonable description from StackOverflow:

Anti-patterns are certain patterns in software development that are considered bad programming practices. As opposed to design patterns which are common approaches to common problems which have been formalized and are generally considered a good development practice, anti-patterns are the opposite and are undesirable.

In order to demonstrate why I don’t like service objects, I’m going to look at some code I inherited from a past development team for a client project. I can’t share too much context since this application is still in private beta, but let’s just say it’s a social platform where you can rate media (images or videos) and those ratings trigger certain callback-style actions such as updating algorithmic data and adding activities to various users’ timelines.

We have a pretty simple data model where a Rating object can be created in the database that belongs_to both a User object and a Media object (all these examples are shortened from the production files):

class Rating < ActiveRecord::Base
  belongs_to :user
  belongs_to :media
end

class Media < ActiveRecord::Base
  has_many :ratings
end

You get the idea. Now in order to handle an incoming rating from a user, the previous developer created a service object called MediaRating which gets called from the controller:

class MediaRating
  def self.rate(user, media, rating)
    mr = MediaRating.new(user)
    rating_record = mr.update_rating(media, rating)
  end

  def initialize(user)
    @user = user
  end

  def update_rating(media, rating)
    rating_record = @user.ratings.where(media: media).first
    if rating_record.nil?
      # do create stuff
    else
      # do update stuff
    end

    # do some extra stuff here like run algorithmic data processing,
    # add social activities to timelines, etc.
  end
end

And here’s the relevant controller code:

media = Media.find(params[:media_id])
rating = params[:rating].to_i
MediaRating.rate(current_user, media, rating)

Bear in mind that this code was originally written quite a while ago. These days, all the cool cats writing service objects have settled on a bit of formality in terms of the API presented, so if I were to rewrite this service object, I’d probably do something like this:

# add this to Gemfile:
gem 'smart_init'

class UserMediaRater
  extend SmartInit

  initialize_with :user, :media, :rating
  is_callable

  def call
    rating_record = @user.ratings.where(media: @media).first
    # etc.
  end
end

# updated command from the controller:
UserMediaRater.call(current_user, media, rating)

Problem Time #

Now this code doesn’t look so bad, right? It seems pretty clean and well-structured and easy to test. Well, the problem is that what you are seeing here is my polished up, greatly simplified version of this service object. The actual one in the codebase is 74 lines of spaghetti code with methods calling other methods which call other methods because the code to trigger algorithmic data processing and timeline updates and so forth is all shoehorned into this one service object. So actually, the flow is more like this:

Controller > Service Object > Rate Method > Update Rating > Some Other Update Method + (Run Algorithm > Refresh Related Data), then Invalidate Caches + Add Timeline Activities

So every time I open up the codebase fresh and want to look at the block of code that simply creates or updates a rating of a media object by a user, I’m forced to wade through a bunch of ancillary functionality to get at the basic code path.

Well, you might say, that developer obviously didn’t do a very good job writing the service object! They should have kept it simple and focused, and instead put additional processing code in other objects (maybe even other service objects!).

Now wait a minute! The whole reason we are told we need to extract code contained within standard Rails MVC patterns into service objects is because they help us break up complex code flows into standalone functions. But the problem is that there’s nothing to enforce that rule. Nothing! You can write a simple service object, no doubt about it. But you can equally write a complex service object containing a bunch of methods that quickly turn into spaghetti code.

What does this mean? It means the service object pattern has no intrinsic ability to make your codebase easier to read, easier to maintain, simpler, or exhibit better separation of concerns.

If a pattern can foster nearly any sort of programming style with a nearly infinite spectrum of simple to highly complicated, then it ceases to be a useful pattern and describes nothing specific to developers.

So What Should We Do Instead? #

When I’m preparing to write a fair bit of code that I know will have to process incoming data and either create or update records along with other related functionality, I typically start by writing a class method on the most appropriate model. Now hold your horses, I’m not saying this is a superior pattern. I’m saying this is where I begin, before I start looking for another pattern that might be a better fit.

Let’s take a look at what it might look like if rating media were done using a class method on Rating itself:

class Rating < ActiveRecord::Base
  belongs_to :user
  belongs_to :media

  def self.rate(user, media, rating)
    rating_record = Rating.find_or_initialize_by(user: user, media: media)
    rating_record.rating = rating
    rating_record.save

    # do some extra stuff here like run algorithmic data processing,
    # add social activities to timelines, etc.
  end
end

And the updated controller code:

media = Media.find(params[:media_id])
rating = params[:rating].to_i
Rating.rate(current_user, media, rating)

Now I’m already breathing a sigh of relief when I read this code, because putting the rating code directly in the Rating model ensures that the functionality is closer to the data structures that are most impacted by the code. Want to open up the codebase and find out how to rate something? Look in the Rating model! It’s very straightforward.

However…I’m ultimately still not happy with this code for one big reason. As a rule of thumb, I like to call instance methods and use Rails associations whenever possible. To me it’s a code smell to sprinkle class methods all over the place and avoid using associations and standard OOP principles as intended. In this case, it seems weird to me that I can’t do something along the lines of @media.rate in the controller. After all, I’m loading up a media object and I want to rate it. Why isn’t there a clear interface to do that?

Concerns and POROs are Your Friends #

Once I’m convinced I need to start moving complex code out of a model class method, I’m going to want to find a better pattern than just stuffing a bunch of bits into various models’ instance methods. After all, the problems that come with fat models are why people recommend breaking code out into service objects in the first place!

But in reality, the downside of fat models isn’t so much that you have a single object with a lot of methods, it’s that those methods (and presumably related unit tests) are all jumbled together in one file. What you really need is a way to keep bits of key functionality separated out from other bits of key functionality in terms of code comprehension, and then you need some sort of rule of thumb for which bits of code really should be relocated into separate objects altogether.

Let’s take a look at what we could do with this media rating business. First, I’m going to extract the chunk of code we’ve been wrestling with into a concern (which is just a slightly enhanced Rails version of the standard Ruby mixin). Let’s call this concern Ratable:

module Ratable
  extend ActiveSupport::Concern

  included do
    has_many :ratings
  end

  def create_or_update_user_rating(user:, rating:)
    rating_record = ratings.find_or_initialize_by(user: user)
    rating_record.rating = rating
    rating_record.save

    # do some extra stuff here like run algorithmic data processing,
    # add social activities to timelines, etc.

    rating_record
  end
end

The Media class now benefits as well, as we can take that has_many :ratings directive out and keep that contained within the new concern:

class Media < ActiveRecord::Base
  include Ratable
end

And the updated controller code:

rating = params[:rating].to_i
Media.find(params[:media_id]).create_or_update_user_rating(
  user: current_user,
  rating: rating
)

Ah, this is already feeling much better. All I have to do in the controller is find the media object and call a single instance method that’s clearly named as to what it does. It’s a friendly interface that feels Rails-y in the best possible way.

There’s still a problem though. This create_or_update_user_rating method is trying to do way too much. It makes sense to handle the database access here, but the algorithmic data processing and timeline updates seem like actions that should be triggered to happen after the fact and defined someplace else.

The standard Rails way would be to put this code into ActiveRecord callbacks. Now I have no problem with callbacks, and I’ll gladly use them if it feels like a reasonable fit. But in this case, the two main things that need to happen seem like totally unrelated bits of functionality that are only tangentially related to the particular media, rating, and user objects involved.
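
Purely for contrast (and not what we’ll do here), a callback-based version might look roughly like the following, with hypothetical callback method names:

class Rating < ActiveRecord::Base
  belongs_to :user
  belongs_to :media

  # hypothetical callbacks, shown only to illustrate the approach
  after_save :process_algorithmic_data
  after_save :add_timeline_activities

  private

  def process_algorithmic_data
    # run algorithmic data processing for this rating...
  end

  def add_timeline_activities
    # add social activities to the relevant users' timelines...
  end
end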

So let’s use this opportunity to do some proper domain modeling and move that extra functionality out of the concern and into other POROs. We’ll keep our create_or_update_user_rating method nice and simple by pointing to those new objects:

def create_or_update_user_rating(user:, rating:)
  rating_record = ratings.find_or_initialize_by(user: user)
  rating_record.rating = rating
  rating_record.save

  # Let's extract out additional functionality to POROs or relevant models.
  # Better yet, encapsulate these into background jobs?
  # Left as an exercise for the reader...
  Rating::Processor.run(rating_record)
  Timeline::Activities.add_for_rating(rating_record)

  rating_record
end

Now before you start to get twitchy there, Rating::Processor and Timeline::Activities aren’t more “service objects.” These are POROs (Plain Old Ruby Objects) that are modeled using carefully considered OOP paradigms. One object is what I call a “processor” pattern: it takes input, crunches some numbers, and then saves the output somewhere. The other is a collection pattern that manages adding and removing items and the consequences of those actions. Nothing fancy or original here, but that’s the point.
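
As a rough sketch of that processor idea (the file location and method internals here are just assumptions based on the Rating::Processor.run call above; the real logic is app-specific), it might be shaped something like this:

# e.g. app/models/rating/processor.rb
class Rating::Processor
  def self.run(rating_record)
    new(rating_record).run
  end

  def initialize(rating_record)
    @rating_record = rating_record
  end

  def run
    # take the rating as input, crunch some numbers,
    # and save the resulting algorithmic data somewhere
  end
end

Timeline::Activities would follow a similar collection-style shape, exposing methods such as the add_for_rating call shown earlier.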

We could have attempted to use the service object pattern here instead, perhaps by refactoring UserMediaRater to call additional service objects such as ProcessNewRating and AddTimelineActivityForRating. But how is that any more readable or any more well-structured than using concerns and POROs? Instead of succumbing to a huge app/services folder filled with what are essentially functions, we can engage instead in real domain modeling to come up with class names, data structures, and object methods that are designed for readability and ease of use.

And that’s my final point: using concerns and POROs instead of service objects encourages better interfaces, proper separation of concerns, sound use of OOP principles, and easier code comprehension.

I’m out of time to talk about testing strategies, but if you’re worried that using concerns or more advanced POROs will cause additional problems with your tests as compared with service objects, rest assured there are good resources out there covering exactly that.

There’s a lot more I could talk about regarding how model or controller-level Rails concerns combined with useful PORO patterns are a better fit than service objects in the vast majority of cases, so keep an eye out for future articles in this vein.

TL;DR: service objects are crappy and better solutions exist most of the time. Please use those instead. Thank you!

Send your thoughtful, rage-free responses to @jaredcwhite 😊



Use Ruby Objects to Keep Your Rake Tasks Clean

I’ve been inspired by David Heinemeier Hansson’s new YouTube series On Writing Software Well, because I think it’s positively delightful when somebody takes the time and care to walk through real-world, production code and discuss why things were done the way they were and the tradeoffs involved, as well as the possibilities for improving that code further.

Today, I want to talk about how to keep ancillary pieces of your infrastructure fairly clean and minimalist. In terms of Rails, one place I’ve seen where it’s easy to end up with “bags of code” that aren’t really structured or straightforward to test is Rake tasks.

Let’s look at a Rake task I recently refactored on a client project. We were using Heroku’s new Review Apps functionality, which allows every pull request on GitHub to spawn a new application. QA specialists or product managers are then able to look at that particular feature branch’s functionality in isolation, which is a good thing. However, the post-deploy rake task we had in place to make sure we were setting up the proper subdomains, SSL certificates, indexing data for search, etc., was getting increasingly unwieldy. It was just a big “bag of code,” and that to me was a sign some refactoring was sorely needed.

Let’s take a look at the before code (a few bits of private data have been changed to protect the innocent):

namespace :heroku do
  desc "Run as the postdeploy script in heroku"
  task :setup do
    heroku_app_name = ENV['HEROKU_APP_NAME']
    begin
      new_domain = "#{ENV['HEROKU_APP_NAME']}.domain.com"

      # set up Heroku domain (or use existing one on a redeploy)
      heroku_domains = heroku.domain.list(heroku_app_name)
      domain_info = heroku_domains.find{|item| item['hostname'] == new_domain}
      if domain_info.nil?
        domain_info = heroku.domain.create(heroku_app_name, hostname: new_domain)
      end

      key = ENV['CLOUDFLARE_API_KEY']
      email = ENV['CLOUDFLARE_API_EMAIL']
      connection = Cloudflare.connect(key: key, email: email)
      zone = connection.zones.find_by_name("domain.com")

      # delete old dns records
      zone.dns_records.all.select{|item| item.record[:name] == new_domain}.each do |dns_record|
        dns_record.delete
      end

      response = zone.dns_records.post({
        type: "CNAME",
        name: new_domain,
        content: domain_info['cname'],
        ttl: 240,
        proxied: false
      }.to_json, content_type: 'application/json')

      # install SSL cert
      s3 = AWS::S3.new
      bucket = s3.buckets['theres_a_hole_in_the_bucket']
      crt_data = bucket.objects['__domain_com.crt'].read
      key_data = bucket.objects['__domain_com.key'].read
      if heroku.ssl_endpoint.list(heroku_app_name).length == 0
        heroku.ssl_endpoint.create(heroku_app_name, certificate_chain: crt_data, private_key: key_data)
      end

      sh "rake heroku:start_indexing"
    rescue => e
      output =  "** ERROR IN HEROKU RAKE **\n"
      output << "#{e.inspect}\n"
      output << e.backtrace.join("\n")
      puts output
    ensure
      heroku.app.update(heroku_app_name, maintenance: false)
    end
    puts "Postdeploy script complete"
  end

  def heroku
    @heroku ||= PlatformAPI.connect_oauth(ENV['HEROKU_PLATFORM_KEY'])
  end
end

Whew! That’s a lot to wade through. Not only is the task getting pretty long at this point, there are certain dependencies between the blocks of code being executed that are difficult to ascertain just by a cursory examination.

Now let’s look at how I refactored this. First, I created a new class in the lib folder called HerokuReviewAppPostDeploy and extracted each block into a separate method. You’ll notice we are actually doing even more in this new object, such as connecting to the GitHub repository and getting the branch name of the pull request so we can put a Jira ticket number right in the review app’s subdomain. That requirement turned up right as I was in the middle of refactoring, so I was thankful I avoided an even larger bag of code!

Here’s the full class:

class HerokuReviewAppPostDeploy
  attr_accessor :heroku_app_name, :heroku_api

  def initialize(heroku_app_name)
    self.heroku_app_name = heroku_app_name
    self.heroku_api = PlatformAPI.connect_oauth(ENV['HEROKU_PLATFORM_KEY'])
  end

  def turn_on_maintenance_mode
    heroku_api.app.update(heroku_app_name, maintenance: true)
  end

  def turn_off_maintenance_mode
    heroku_api.app.update(heroku_app_name, maintenance: false)
  end

  def determine_subdomain
    # fall back to the Heroku app name itself if we can't do anything smarter
    new_subdomain = heroku_app_name
    # review app names contain the pull request number (e.g. "myapp-pr-123")
    pull_request_number = begin
      heroku_app_name.match(/pr-([0-9]+)/)[1]
    rescue NoMethodError; nil; end
    unless pull_request_number.nil?
      # ask GitHub for the pull request so we can inspect its branch name
      github_info = HTTParty.get('https://api.github.com/repos/organization/reponame/pulls/' + pull_request_number, basic_auth: {username: 'janedoe', password: ENV["GITHUB_API_KEY"]}).parsed_response
      if github_info["head"]
        branch = github_info["head"]["ref"]
        # pull the Jira ticket number out of the branch name (ticket prefix anonymized as WXYZ)
        jira_id = begin
          branch.match(/WXYZ-([0-9]+)/)[1]
        rescue NoMethodError; nil; end
        unless jira_id.nil?
          # e.g. "myapp-wxyz-456" so the ticket number shows up right in the subdomain
          new_subdomain = "#{heroku_app_name.match(/^([a-z]+)/)[1]}-wxyz-#{jira_id}"
        end
      end
    end
    new_subdomain
  end

  def determine_domain
    "#{determine_subdomain}.domain.com"
  end

  def setup_domain_on_heroku(new_domain)
    # set up Heroku domain (or use existing one on a redeploy)
    heroku_domains = heroku_api.domain.list(heroku_app_name)
    domain_info = heroku_domains.find{|item| item['hostname'] == new_domain}
    if domain_info.nil?
      heroku_api.domain.create(heroku_app_name, hostname: new_domain)
    else
      domain_info
    end
  end

  def setup_domain_on_cloudflare(new_domain, heroku_domain_info)
    key = ENV['CLOUDFLARE_API_KEY']
    email = ENV['CLOUDFLARE_API_EMAIL']
    connection = Cloudflare.connect(key: key, email: email)
    zone = connection.zones.find_by_name("domain.com")
    zone.dns_records.all.select{|item| item.record[:name] == new_domain}.each do |dns_record|
      dns_record.delete
    end
    response = zone.dns_records.post({
      type: "CNAME",
      name: new_domain,
      content: heroku_domain_info['cname'],
      ttl: 240,
      proxied: false
    }.to_json, content_type: 'application/json')
  end

  def setup_ssl_cert_on_heroku
    # install SSL cert
    s3 = AWS::S3.new
    bucket = s3.buckets['theres_a_hole_in_the_bucket']
    crt_data = bucket.objects['__domain_com.crt'].read
    key_data = bucket.objects['__domain_com.key'].read
    if heroku_api.ssl_endpoint.list(heroku_app_name).length == 0
      heroku_api.ssl_endpoint.create(heroku_app_name, certificate_chain: crt_data, private_key: key_data)
    end
  end
end

Not only does this new approach allow us to break bits of functionality out into single-purpose methods on an object, it also means that when one method requires data generated by another, we can pass those values along explicitly as method arguments (for example, passing new_domain into setup_domain_on_heroku).

So how does our Rake task look now? Much, much better:

namespace :heroku do
  desc "Run as the postdeploy script in heroku"
  task :setup do
    heroku_app_name = ENV['HEROKU_APP_NAME']
    post_deploy = HerokuReviewAppPostDeploy.new(heroku_app_name)
    begin
      post_deploy.turn_on_maintenance_mode
      new_domain = post_deploy.determine_domain
      heroku_domain_info = post_deploy.setup_domain_on_heroku(new_domain)
      post_deploy.setup_domain_on_cloudflare(new_domain, heroku_domain_info)
      post_deploy.setup_ssl_cert_on_heroku
      Rake::Task['db:migrate'].invoke
      sh "rake heroku:start_indexing"
    rescue => e
      output =  "** ERROR IN HEROKU RAKE **\n"
      output << "#{e.inspect}\n"
      output << e.backtrace.join("\n")
      puts output
    ensure
      post_deploy.turn_off_maintenance_mode
    end
    puts "Postdeploy script complete"
  end
end

It’s way easier to see the individual steps needed to complete the review app setup, and because the value returned from one method is assigned to a variable and passed along to the next, the data dependencies between the steps are now clear. In addition, because HerokuReviewAppPostDeploy uses straightforward method names that describe exactly what’s going on, the need for explanatory code comments is greatly reduced.

You can use this extract-into-a-standalone-object technique for other “bag of code” areas of your application. Background jobs are another good example. I prefer to keep my Sidekiq workers very minimalist…a lot of the time I make sure they call a single method on a single model and that’s all.
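
To sketch what that looks like (the worker and model method names here are hypothetical), a minimalist Sidekiq worker can be little more than a thin shell around a single model call:

class RatingProcessorWorker
  include Sidekiq::Worker

  # the worker's only job: look up the record and delegate to one model method
  def perform(rating_id)
    Rating.find(rating_id).process_rating!
  end
end

# enqueue it from wherever the work originates, e.g.:
# RatingProcessorWorker.perform_async(rating_record.id)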

I hope this was helpful in giving you some new ideas on how to improve your own codebase, based on live production code. Stay tuned for the next article in this series.



Swift for Javascript and Ruby Developers

Last week I had the privilege of presenting on the topic of learning Swift from the perspective of a developer currently familiar with Ruby or Javascript. I showed off some of the reasons why Swift is a pretty exciting language for those used to working with lightweight scripting languages, and I also demonstrated some example code that highlights similar functionality implementations across all three languages.

You can see the presentation slides here, and code examples are available on GitHub. If you yourself are a developer in one or more of these languages and have suggestions for further code examples or useful comparisons, please submit a pull request on GitHub and let me know!



Just Say No to GDD: Guilt-Driven Development

When I finally decided to ditch PHP back in 2006 and learn Ruby on Rails, one of the main reasons was that PHP just wasn’t fun anymore. I’d previously built my own custom web framework using the latest hotness of PHP 5 soon after it was first released. But then Zend announced their official web framework, and I figured my little project didn’t stand a chance. (Interestingly, Zend Framework didn’t end up being that huge of a deal and a lot of PHP programmers are using other frameworks. But I digress.)

I didn’t like how Zend was doing things when I looked over the initial docs, so I decided just to jump ship and try RoR. My own framework was already somewhat influenced by Rails, so I figured the learning curve wouldn’t be too high once I got the hang of writing Ruby code.

Boy oh boy, did I fall in love with both Ruby & Rails. I’d never had so much fun programming in my life. Finally a language and a methodology of writing websites and webapps that felt simple, clean, fast, and maintainable.

But then a few years went by. New updates to Rails. New tools. New gems. New philosophies. New testing frameworks. New client-side Javascript frameworks. New server deployment best practices. New things to learn Every. Darn. Minute.

Suddenly, writing Rails apps didn’t feel like so much fun anymore. It felt difficult. And I felt guilty. Guilty I’m not writing enough tests (and not using the right testing gem). Guilty I’m not setting up my servers right. (Darn it all, why even set up servers? Use Heroku, right? That’s what all the cool kids use.) Guilty I’m relying on server-backed HTML views. Shouldn’t I learn HAML anyway? Skip that, just serve JSON and use Handlebars on the front end. Actually, why even use Rails for a simple JSON API? Just use Node.js and be done with it. Bye-bye Ruby.

What?!?! This is madness! I left the world of PHP and learned Ruby (and Rails) for specific and valid reasons, reasons that simply hadn’t stopped being relevant for producing web software. Certainly competing technologies might have an edge in one area or another. But my criteria for evaluating languages and frameworks hadn’t changed.

Is it fun to read, fun to write? Is it concise and easy to understand? Does it embrace the way the web works or does it try to do something weird or non-standard? Does the community and the tooling/best practices/packages/etc. seem to be top-notch?

In all of those areas, I remain quite pleased with Ruby on Rails, and in one particular area (client-side interactivity or “rich UI” instances), I have found an amazing tool in Opal, a Ruby-to-JS transpiler with which I have developed (what, again?) my own custom front-end framework. Actually, it’s barely a framework…more a straightforward way to organize code and objects in an MVC pattern suitable for client-side development. YMMV. If you want to stick with popular JS client frameworks, knock yourself out.

Editor’s Note: Since I wrote this article, ES6+ took off along with Webpack, Stimulus, and LitElement, so I no longer use Opal. But hats off to you if you do!

My point is this: after a period of falling prey to that terrible practice known as GDD: Guilt-Driven-Development, I finally snapped out of it and realized that the only person forcing me to look at tools and technologies I simply don’t want or need was myself. As long as there’s a job open somewhere for a Rails developer, I’m good to go. And I don’t need to learn every single darn gem on the planet. All I need to learn is what I need to learn to do a good job building exactly what I need to build and no more.

So, I’m thankful I was able to leave GDD behind and embrace a better practice, which I call HDD: Happiness-Driven-Development. When I’m happy, I write better code. I care more about why I’m writing it and what it’s supposed to be. And, in the long run, I believe all developers should strive to follow the HDD philosophy. After all, that’s why we have Ruby in the first place!

“I hope to see Ruby help every programmer in the world to be productive, and to enjoy programming, and to be happy. That is the primary purpose of Ruby language.”

Thanks Matz. I almost forgot.



The Best Code is the Code Nobody Writes

There are many metrics that people use for determining what is good code vs. bad code. Things like readability, maintainability, and how easily it can be tested.

There are many more I’m sure you could think of to add to the list. Here’s another metric I think is very important to consider for judging code quality: the best code is the code nobody writes.

Think about it. Code that doesn’t exist is the most readable code, because nobody has to read it. Code that doesn’t exist is the most maintainable code, because nobody has to maintain it. Code that doesn’t exist is the most easily tested, because nobody has to write and maintain test suites to validate that code. And so forth.

You may think I’m being flippant, but heed my words, young padawan: I have worked on many a software project in my day that was full of spaghetti code, dead ends (aka code that wasn’t in active use but was still in the codebase), untested code, and even semi-duplicated code because multiple people implemented similar functionality more than once. And, I regret to say, many of those mistakes were ones I made as well.

If you build it, you are stuck with it. #

One of the major challenges I’ve run into is dealing with clients who are unfamiliar with programming best practices and the concept of technical debt. In their minds, building a new software feature is just like painting a picture or constructing a chair. You work on some stuff, and then it’s done, and then you look at it and see if you like it.

But as we all know, software doesn’t work like that. Nearly every single line of code you write ends up with a lifecycle and an impact that goes far beyond the one feature you’re working on.

And then, after adding new methods to User and creating new controllers and placing yet another service object in a growing folder of modules and including those three new gems in your Gemfile (not to mention adding new database tables and columns on existing tables), the client decides to delay the feature and work on another feature. Is this a temporary delay or a permanent one? Who knows!

Now you have a ton of code strewn all over your codebase, gems you don’t need, and a bloated database. What do you do? Spend hours to carefully remove all of the changes you made? Sorry, clients don’t really like spending money for you to do things that don’t result in another fancy demo at the next executive meeting. So you do what a lot of programmers (myself included) typically do: leave it. Sure, you might comment out a route or remove a button in a view, but that’s it. The “feature” remains, lurking beneath the surface like a hideous creature, just waiting to leap out and bite a chunk out of you or another programmer when you later come along refactoring code and cleaning up cruft.

Just Say No. #

If you’re lucky enough to work in a company with a culture of software engineering excellence, these issues may be relatively rare because everyone is aware of them and the business side understands and respects the concerns of the software team. But if you are a contractor working with a variety of clients—some of whom may know next to nothing about the discipline of programming—you simply don’t have that luxury. This means you are going to have to be very careful to set expectations up front.

Clients need to be aware that your job as a programmer isn’t simply to build what they say they want. Your job is to create a healthy codebase and grow it slowly and deliberately, always being mindful that at some point in the future, you might, God forbid, be hit by a bus (or fired, or leave for greener pastures…) and someone else is going to have to figure out what in tarnation is going on with your code. Your job, in many cases, is to tell the client no.

If you do need to write code (shock! horror!), keep it as self-contained as possible. #

There’s a concept in software security to keep the “surface area” of a possible attack vector as minimal as possible. The less accessible a portion of code is to the “outside world”, the less likely it can be used for a malicious act.

We need to be vigilant to reduce the surface area of new features. When you’re figuring out how to go about writing code to support new development, think in terms of small, modular components that are easily changed or removed in the future if the feature is no longer required or if the requirements change drastically.

Nobody can blame you for code that doesn’t work if the code doesn’t exist. #

I’ll close on a somewhat cynical note, but it’s absolutely true. Code that doesn’t exist is bug-free and never causes any problems. Now it’s true that bugs in existing code can be due to “missing” code (aka the code to validate that input value is missing). But missing code and non-existing code are slightly different. Missing code in those instances is usually just existing code that was badly written. Avoid writing object.value + 1 or object.title.upcase when you can write object.value.to_i + 1 or object.title.try(:upcase) (unless of course you’ve already vetted your variables in some way).
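
To make that last point concrete, here is a trivial sketch (assuming ActiveSupport is loaded for try, as it would be in a Rails app):

title = nil
value = nil

title.try(:upcase)  # => nil, no exception raised
value.to_i + 1      # => 1, because nil.to_i is 0

# whereas the "missing code" versions blow up at runtime:
# title.upcase      # => NoMethodError
# value + 1         # => NoMethodError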

Non-existing code is the code you’ve chosen not to write. Far too many programmers lack the foresight or the courage to choose not to write code. Or maybe their ego is tied up in the clever solutions they can come up with or the lines of code they commit to GitHub every day. Resist these temptations! And always remember this:

The best code, next to the code nobody writes, is the code written carefully and deliberately, with humility. The client could be wrong. You could be wrong. The project could end up completely different in the near future. So don’t waste time and effort building stuff that’s easy to break and hard to remove. Go the extra mile and get it done right.


Additional reading: The Best Code is No Code At All.
