Rails 4.1, initializers, and secrets.yml

Posted by – April 9, 2014

I'm using 4.1 on a new project. When I tried to set up an initializer using the values in secrets.yml, I got this error:


/Users/barry/.rvm/gems/ruby-2.1.1@rails4.1/gems/railties-4.1.0.rc2/lib/rails/application.rb:311:in `secrets': uninitialized constant Rails::Application::YAML (NameError)
from /Users/barry/projects/archiv8-billing/config/initializers/chargify.rb:2:in `block in <top (required)>'
from /Users/barry/.rvm/gems/ruby-2.1.1@rails4.1/gems/chargify_api_ares-1.0.4/lib/chargify_api_ares/config.rb:6:in `configure'
from /Users/barry/projects/archiv8-billing/config/initializers/chargify.rb:1:in `<top (required)>'

I fixed it by adding require 'yaml' to the top of my initializer:


require 'yaml'
Chargify.configure do |c|
  c.api_key   = Rails.application.secrets.chargify_key
  c.subdomain = Rails.application.secrets.chargify_subdomain
end
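For reference, the matching entries in config/secrets.yml look something like this (the key names come from my initializer above; the values here are placeholders):

```yaml
development:
  secret_key_base: long_random_string_generated_by_rake_secret
  chargify_key: your_chargify_api_key
  chargify_subdomain: your_chargify_subdomain
```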

Validating URLs for non-ActiveRecord objects

Posted by – July 9, 2010

I'm using Mongoid and MongoDB on a new project, so my models are not derived from ActiveRecord. On previous projects I just used the validates_url_format_of plugin, but for this project I put the plugin's module into an initializer (config/initializers/validation.rb) and pulled it in with extend.

module ValidatesUrlFormatOf
  IPv4_PART = /\d|[1-9]\d|1\d\d|2[0-4]\d|25[0-5]/  # 0-255
  REGEXP = %r{
    \A
    https?://                                                    # http:// or https://
    ([^\s:@]+:[^\s:@]*@)?                                        # optional username:pw@
    ( (([^\W_]+\.)*xn--)?[^\W_]+([-.][^\W_]+)*\.[a-z]{2,6}\.? |  # domain (including Punycode/IDN)...
        #{IPv4_PART}(\.#{IPv4_PART}){3} )                        # or IPv4
    (:\d{1,5})?                                                  # optional port
    ([/?]\S*)?                                                   # optional /whatever or ?whatever
    \Z
  }iux
 
  DEFAULT_MESSAGE     = 'does not appear to be a valid URL'
  DEFAULT_MESSAGE_URL = 'does not appear to be valid'
 
  def validates_url_format_of(*attr_names)
    options = { :allow_nil => false,
                :allow_blank => false,
                :with => REGEXP }
    options = options.merge(attr_names.pop) if attr_names.last.is_a?(Hash)
 
    attr_names.each do |attr_name|
      message = attr_name.to_s.match(/(_|\b)URL(_|\b)/i) ? DEFAULT_MESSAGE_URL : DEFAULT_MESSAGE
      validates_format_of(attr_name, { :message => message }.merge(options))
    end
  end
 
end

Then my model extends that module:

class Location
  include Mongoid::Document
  include Mongoid::Timestamps
  extend ValidatesUrlFormatOf
 
  validates_url_format_of :url, :allow_blank => true
...
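As a quick sanity check, the REGEXP can be exercised outside of Rails entirely; this is just the constants from the initializer above pasted into plain Ruby (no Mongoid or ActiveModel required):

```ruby
# Same pattern as in config/initializers/validation.rb
IPv4_PART = /\d|[1-9]\d|1\d\d|2[0-4]\d|25[0-5]/  # 0-255
REGEXP = %r{
  \A
  https?://                                                    # http:// or https://
  ([^\s:@]+:[^\s:@]*@)?                                        # optional username:pw@
  ( (([^\W_]+\.)*xn--)?[^\W_]+([-.][^\W_]+)*\.[a-z]{2,6}\.? |  # domain (including Punycode/IDN)...
      #{IPv4_PART}(\.#{IPv4_PART}){3} )                        # or IPv4
  (:\d{1,5})?                                                  # optional port
  ([/?]\S*)?                                                   # optional /whatever or ?whatever
  \Z
}iux

# URLs that should pass
p "http://example.com" =~ REGEXP                 # matches
p "https://user:pw@192.168.0.1:8080/x" =~ REGEXP # matches

# URLs that should fail
p "ftp://example.com" =~ REGEXP                  # nil
p "http://" =~ REGEXP                            # nil
```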

Testing HTTP basic authentication with RSpec 2

Posted by – July 6, 2010

Here's how I test my admin controllers that use HTTP basic authentication using RSpec 2:

before(:each) do
  user = 'test'
  pw = 'test_pw'
  request.env['HTTP_AUTHORIZATION'] = ActionController::HttpAuthentication::Basic.encode_credentials(user, pw)
end

Actually, that's how I did it when I first tested things, but I've since moved it into its own module in spec/support/auth_helper.rb:

module AuthHelper
  # do admin login
  def admin_login
    user = 'test'
    pw = 'test_pw'
    request.env['HTTP_AUTHORIZATION'] = ActionController::HttpAuthentication::Basic.encode_credentials(user,pw)
  end  
end

and now my controller spec looks like this:

describe Admin::LocationsController do
 
  include AuthHelper
 
  before(:each) do
    admin_login
  end
 
  describe "GET index" do
    it "assigns all locations as @locations" do
      loc = Factory.create(:location)
      get :index
      assigns(:locations).should eq([loc])
    end
  end
 
  describe "GET show" do
    it "assigns the requested location as @location" do
      loc = Factory.create(:location)
      get :show, :id => loc.id
      assigns(:location).should === loc
    end
  end  
 
end

Those `Factory.create` calls come from my use of factory_girl rather than fixtures.
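For what it's worth, encode_credentials doesn't do anything magic: an HTTP basic auth header is just "user:password" Base64-encoded with a "Basic " prefix, so you can build the same header in plain Ruby if you ever need to see what's going over the wire:

```ruby
require 'base64'

user = 'test'
pw   = 'test_pw'

# Equivalent to ActionController::HttpAuthentication::Basic.encode_credentials(user, pw)
header = 'Basic ' + Base64.strict_encode64("#{user}:#{pw}")
puts header  # => Basic dGVzdDp0ZXN0X3B3
```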

Rails 3, RSpec, Mongoid and Database Cleaner

Posted by – July 5, 2010

After wrestling with various combinations of cleaning out my database between tests, this is what I'm using on a new Rails 3 application that uses Mongoid, RSpec 2, and Database Cleaner. I have one table (neighborhoods) which is populated using rake db:seed, so I'm excluding that from the cleanup.

Put this into your spec/spec_helper.rb:

require 'database_cleaner'
 
RSpec.configure do |config|
  config.mock_with :rspec
 
  config.before(:each) do
    DatabaseCleaner.orm = "mongoid" 
    DatabaseCleaner.strategy = :truncation, {:except => %w[ neighborhoods ]}
    DatabaseCleaner.clean
  end
end

UPDATE: This isn't working for me now. Apparently the config.before(:each) block isn't being called with the versions of rspec (2.0.0.beta.21), cucumber (0.8.5), and cucumber-rails (0.3.2) that I'm using now, so I've switched to the approach described by Kevin Faustino.

MacPorts annoyance with PHP and MySQL

Posted by – October 27, 2009

I'm doing some local WordPress development, so I set up Apache, PHP, and MySQL using MacPorts. Apparently, the default setup does not set the location of the MySQL socket for you, so I copied /opt/local/etc/php5/php.ini-development to /opt/local/etc/php5/php.ini and changed these lines:

pdo_mysql.default_socket=/opt/local/var/run/mysql5/mysqld.sock
mysql.default_socket =/opt/local/var/run/mysql5/mysqld.sock
mysqli.default_socket =/opt/local/var/run/mysql5/mysqld.sock

Quick fix for moving to Vlad the Deployer 2.0.0 with git

Posted by – September 9, 2009

I just upgraded my dev machine to version 2.0.0 of Vlad the Deployer. I got one unexpected error -- "Please specify the deploy path via the :deploy_to variable" -- but here is how I fixed it:

git support is now a separate gem, so remember to run

sudo gem install vlad-git

I also had some problems when version 1.4.0 was also on my machine, so I uninstalled that one with:

sudo gem uninstall vlad -v "1.4.0"

Using Amazon’s CloudFront with Rails & Paperclip

Posted by – September 8, 2009

It took me a bit of experimentation, and I never found an example in a single place that showed how to set it up exactly how I wanted, so here is my code in my model for storing images used by the ArtCat calendar on Amazon S3. I am using Paperclip version 2.3.1.

First you will need to set up the distribution in Amazon for your given bucket, so that you have a URL to use for the :s3_host_alias value. I also set up a CNAME so that I can use a nice URL like calcdn.artcat.com.

Note that I don't want to store any images other than my resized ones, so my :default_style is set to :original. Some of these values are actually constants in my config files, but I've replaced those here to make it more clear.

    has_attached_file :image,
      :storage => 's3',
      :s3_credentials => "#{RAILS_ROOT}/config/s3_credentials.yml",
      :bucket => 'artcal-production',
      :s3_host_alias => 'calcdn.artcat.com',
      :url => ':s3_alias_url',
      :path => "images/:class/:id_:timestamp.:style.:extension",
      :styles => { :thumb  => '60x60#', :medium => '270x200#', :original  => '600x600>' },
      :default_style => :original,
      :default_url => 'http://cdn1.artcat.com/pixel.gif',
      :s3_headers => { 'Expires' => 1.year.from_now.httpdate },
      :convert_options => { :all => '-strip -trim' }

Note that you do NOT have to set the ActionController::Base.asset_host to your CNAME for images. Paperclip just handles it as expected for these images.

You'll notice an interpolation in the :path that is not standard. Thanks to this Intridea Company Blog post, I learned that I need to change an image's name whenever it is updated: because of the Expires header I set above, CloudFront will cache an image for a whole year and never fetch the updated version, which is not what we want. I solved this by including a timestamp based on the image's updated_at value. Based on that Intridea post, this is the code I added to config/initializers/paperclip.rb.

Paperclip.interpolates(:timestamp) do |attachment, style|
  attachment.instance_read(:updated_at).to_i  
end
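To see what that buys you, here is a rough illustration (not Paperclip code, just plain string substitution with made-up values) of how the :path template expands once :timestamp is available. A changed image gets a new updated_at, hence a new path, hence a CloudFront cache miss instead of a stale hit:

```ruby
# Hypothetical values for an image attachment updated at a given time
template = "images/:class/:id_:timestamp.:style.:extension"
values   = { ":class"     => "locations",
             ":id"        => "42",
             ":timestamp" => "1252454400",
             ":style"     => "thumb",
             ":extension" => "jpg" }

# Substitute each placeholder in order of appearance
path = values.reduce(template) { |t, (k, v)| t.sub(k, v) }
puts path  # => images/locations/42_1252454400.thumb.jpg
```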

At first I was storing the images on the file system and serving them via Apache. Moving them to CloudFront improved my page load times by at least 50%, and it means I don't have to run as powerful a server as I would otherwise need to handle a lot of traffic on this image-heavy site.

Don’t buy a Samsung monitor & Tekserve rocks

Posted by – April 10, 2009

The perils of buying a Samsung monitor are now evident.

  • February 6: bought a Samsung 2043BWX monitor from Tekserve
  • Last week of February: monitor dies -- no image whatsoever
  • March 10: trying to meet deadlines, I finally have time to submit an exchange request to Samsung, giving a credit card number so they can mail a replacement
  • Next 4 weeks: the status on the Samsung site does not change. I call weekly, am told that it will ship out after 15 business days, and that I should be patient.
  • April 3: told no monitor is available for an exchange, and that I will be called back with options such as other monitors they could send.
  • April 6: I call back after getting no call from Samsung service, and am told the person who could authorize shipping me a different model has been out, and that they will call me the next day.
  • April 8: I call again, and I'm told that shipping a replacement is impossible. I will need to ship them my monitor and wait 5+ business days for it to be repaired.

So: basically 4+ weeks of phone calls and empty promises to call me back, only to learn they're not going to replace a monitor that I could have shipped in had I been told to do so on March 10. Since I bill hourly for my consulting work, I would have been better off just throwing away the monitor, given the time this has taken.

Update: I sent an email to customer service at Tekserve with a link to this blog post asking for help, and one of the owners(!) wrote back to say they’ll replace it right away.

Notes on mysql replication in Ubuntu hardy

Posted by – September 8, 2008

I'm moving some sites currently on shared hosting to Slicehost, and I'm setting up MySQL replication so I can run backups from the slave and push them to Amazon S3 storage on a regular basis using s3sync.

This ONLamp article gives a handy step-by-step guide. One thing that got me, though, was that I had set up iptables to firewall my servers, and I needed my master and slave to be able to talk to each other without opening up MySQL to the whole world. My solution was to email Slicehost and request private IP addresses for both servers.

I then changed this line in /etc/mysql/my.cnf from

bind-address          = 127.0.0.1

to

bind-address          = my private IP address

on each server. Then I added this to my iptables rules on both:

-A INPUT -p tcp -m tcp --dport 3306 -j ACCEPT

I reloaded the iptables rules with iptables-restore, and then checked with telnet from another server outside of those 2 to make sure I couldn't reach port 3306.
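If you want to be stricter, you can also limit the rule to the other box's private address rather than accepting port 3306 from any source (10.0.0.2 here is a stand-in for the peer's private IP, not my actual address):

```
-A INPUT -p tcp -m tcp -s 10.0.0.2 --dport 3306 -j ACCEPT
```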

The next problem I encountered was an error like this when I restarted the slave after copying over the data in the master to get started:

error: 'Access denied for user 'debian-sys-maint'@'localhost' (using password: YES)'

A search of the Ubuntu forums turned up this thread, which explained that Debian uses a file called /etc/mysql/debian.cnf to store that user's password, and when I copied things over from the master DB I clobbered the password for that user. The solution is to change the password in that file on the slave to match the one found on the master.
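For reference, the relevant part of /etc/mysql/debian.cnf looks roughly like this (the password line is the one that has to match the master's copy; the value shown is a placeholder):

```
[client]
host     = localhost
user     = debian-sys-maint
password = same_password_as_in_the_masters_debian_cnf
```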

Culture Pundits official launch

Posted by – September 8, 2008


We first set up the Culture Pundits website in August 2007, but now that we have reached a critical mass of websites, traffic, and advertisers, we have announced the official launch.

Check out the press release.