Part 7 in the series A Comprehensive Guide To Debugging Rails {: .series-highlight}
Logs, aka Historic Mirrors
Interpreting the Rails server's built-in logs: Seasoned Rails pros have log interpretation down to second nature, but I'll outline the important realisations I had as a beginner for the sake of the version of me out there who is new to Rails development. Whenever you visit a page, fill in a form, or fire an Ajax request in Rails, the server logs a boatload of useful data. When should you refer to these logs? Whenever something you expected to happen didn't, or you see an error that you don't know how to even start debugging. In these cases, begin by systematically comparing the output in the logs with what you expect to see. Let's look at the workflow for interpreting the Rails log:
- Manage your log tab with Unix tools: Usually there is far too much information on your log screen. Start your debugging sessions by clearing the output in your terminal tab with Command+K. Alternatively, if the bug you want to identify has already occurred and you therefore don't want to clear the screen, use the terminal's Command+F feature to search all its contents (sometimes thousands of screens' worth of data) for something that identifies your bug. Finally, I like to pipe log output into a `grep` command to filter for the information I want. For example, when debugging Rendered information I might access my logs with

```
tail -f log/development.log | grep Rendered
```

thereby displaying only lines containing the string "Rendered". Note that this combination of `tail` and `grep` is a real-time filter: it will update the screen with any new Rendered lines added to the logs as you interact with your Rails application.
- Was the right URL hit? Your logs separate each request to your Rails application with a line of white space, so your first step is to find where the log entry for the request in question begins by looking for those blank lines. Immediately underneath you'll see the beginning of a request to a specific URL. For example

```
Started GET "/admin/customers/43434/"
```

means that a GET request was sent to the `/admin/customers/43434` URL. Does the requested URL match your expectations? You might see the wrong URL here when your Javascript/form/`link_to` url_helper was incorrect.
- Did the right controller and action process the request? So you're hitting the right URL. The next question is whether that URL was routed to the controller and action you expect. The line
```
Processing by Spree::Admin::OrdersController#edit as HTML
```

explains that the `edit` method of the `OrdersController` in the `Spree::Admin` module was called when the URL was requested. You'll typically see a mismatch with your expectations when there is a bug in the routes.rb mapping of URLs to controllers. The `as HTML` component means HTML was requested, as opposed to another data-type like JSON or JS. This information is useful for debugging controller actions that use `respond_to` to deal with multiple formats, such as HTML, JS, or JSON requests from an API.
- Did the router name the parameters correctly? Within your controller action, say `#edit` above, you expect certain information to be available in the `params` object. For example, you might want `params[:customer_id]` available to be fed into a `Customer#find` database call. If your logger shows `Parameters: {"user_id"=>"43434"}`, then you'll only have `params[:user_id]` available instead of `params[:customer_id]`. In this case the parameter was misnamed, either in the Javascript/HTML form/Rails view that generated the request, or in the parameter naming in the entry for that URL in the routes.rb file. In other cases you might have forgotten to include an important parameter altogether. In still other cases your parameters are present and correctly named, but you've confused the values each parameter contains. For example, imagine your logs showed `Parameters: {"day"=>"25", "month"=>"2014", "year"=>"4"}`.
This shows that you've confused the values of the month and year parameters.
- Were the right SQL statements run? Assuming, as is the default in development mode, that the log level of your Rails application is set to "debug", any SQL statements run in processing your request will be output in your logs. Sometimes these SQL statements differ from what you expect based on your Ruby code. For example, there may be a method (e.g. a sneaky `default_scope`) adding a not-always-wanted condition to every one of your ActiveRecord scopes. I once had a `default_scope` that limited all searches of my Subject model to entries where the `deleted_at` column is NULL. A year later I'd forgotten about the `default_scope` and kept wondering why I was getting inaccurate results when I performed SQL queries against what I thought was "all" of my Subject models, but was in fact only all my undeleted subjects. I only became aware of the issue by noticing that the SQL in the log contained this added condition about `deleted_at` being NULL:

```
Subject Load (0.8ms) SELECT "subjects".* FROM "subjects" WHERE "user"."id" = 110 AND "deleted_at" is NULL
```
- Were the right templates and partials rendered? The Rails log lists the various views and partials used to compile the front-end output, typically the HTML web page. If the wrong templates or partials appear listed here, that indicates a mistake in your templating logic. A more complicated type of potential error here is the "silent override", which happens when your Rails application and its dependencies have two templates with the same name (e.g. product.html.erb), and one version (say the one included within the gem "spree_social") is rendered instead of the one you wanted (contained, say, in the gem "spree_core"). To debug these issues, pay attention to the full filename of each rendered partial in question.
For example:

```
Rendered /spree_social/backend/app/views/spree/_product.html.erb (0.0ms)
Rendered /backend/app/views/spree/admin/shared/_alert.html.erb (0.0ms)
Rendered /Users/jkinsella/.rvm/gems/ruby-1.9.3-p545@oxnotes/bundler/gems/spree-ecd9f3f29779/backend/app/views/spree/admin/shared/_tabs.html.erb
```
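The silent-override behaviour boils down to lookup order: the first matching template along the view path wins. A plain-Ruby sketch of that rule, with invented paths standing in for real gem directories:

```ruby
# Invented view paths, listed in lookup order: an extension gem that
# sits earlier on the path shadows the template you expected.
view_paths = [
  "spree_social/backend/app/views",
  "spree_core/backend/app/views",
]

# Pretend filesystem: both gems ship a _product partial.
available = {
  "spree_social/backend/app/views/spree/_product.html.erb" => "social version",
  "spree_core/backend/app/views/spree/_product.html.erb"   => "core version",
}

# The first path containing the requested partial wins -- mirroring
# how Rails resolves templates along its view path.
resolved = view_paths
  .map { |root| "#{root}/spree/_product.html.erb" }
  .find { |path| available.key?(path) }

puts available[resolved]
```

This is why the full filename in the `Rendered` line matters: it tells you which copy actually won the lookup.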
- Were any exceptions/warnings logged? Unrescued exceptions appear alongside their error message and backtrace within your logs. When your front-end seems unresponsive, say after you click "save comment" and nothing happens, check your logs to see whether an exception appeared.
- Was the cache used? Entries such as

```
Cache write: views/taxonomies_sidebar/2014-04-28
```

show that your application has cached data at the key (view/taxon...). The corresponding read entry, `Cache read: views/taxonomies_sidebar/2014-04-28`, indicates that your application is pulling previously stored data at that key out of the cache. The presence and absence of cache reads and writes for given keys helps you debug caching problems.
- Was the HTTP status correct? At the end of a log entry for a request will be something like

```
Completed 200 OK in 1700.0ms
```

The three-digit number after "Completed", 200 here, is your HTTP status code. 200 indicates success, which is usually what you want for typical web pages. In other situations you specifically want other HTTP codes, be that for prudent redirecting (301, 302), communication of "unauthorised" (401), or acknowledgement of missing content (404). Often you'll see the status code 500 while debugging, and rarely will this be desired, for it indicates an internal server error. Client-side Javascript code often branches depending on the HTTP status returned from an AJAX request, so these log entries come in handy when debugging such issues.

Tailed live Heroku logs: Heroku doesn't store your logs on disk, its filesystem being read-only, but luckily you can view live requests as they hit your website with

```
heroku logs -t
```

Pro-tip: leave this open in a terminal for some time (e.g. 20 minutes), then use the search function of your terminal app (Command+F) to search for the keywords that would indicate your bug. Unlike your development-mode log, your production log level is by default set to "info", meaning that you won't get information about caching or SQL commands run. When you want this additional information, increase the log level to debug with

```
heroku config:add LOG_LEVEL="debug"
```

(This assumes you have set up your production.rb environment file to respond to `ENV["LOG_LEVEL"]`.) As soon as you've finished debugging, switch the log level back to the default with

```
heroku config:add LOG_LEVEL="info"
```

otherwise your logs will fill up rapidly and become expensive to archive in third-party software.

Archived logs: As mentioned above, our production server sits on Heroku, which, having a read-only filesystem, doesn't save our web-app logs for future reference. Therefore we use a third-party service, LogEntries, to capture and store our logs. You can access the web interface for LogEntries through the Heroku dashboard, and choose time-ranges within which to search by keyword for application errors, HTTP statuses, specific parameters, or information you manually logged from your application.
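The log-level behaviour described above can be sketched with Ruby's stdlib `Logger`, which Rails's logger is built on (the log messages below are made up for illustration):

```ruby
require "logger"
require "stringio"

# Log to an in-memory buffer so we can inspect the result.
buffer = StringIO.new
logger = Logger.new(buffer)

# Production-style default: "info" suppresses debug-level lines,
# which is why SQL and cache entries vanish from production logs.
logger.level = Logger::INFO
logger.debug('SELECT "subjects".* FROM "subjects"')  # suppressed
logger.info("Completed 200 OK in 45ms")              # logged

# Dropping the level to "debug" lets everything through again.
logger.level = Logger::DEBUG
logger.debug("Cache read: views/taxonomies_sidebar/2014-04-28")

puts buffer.string
```

The buffer ends up containing the info line and the second debug line, but not the SQL statement logged while the level was still "info".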
Heroku Scheduler logs: View all the processes Heroku currently has running for your application with

```
heroku ps
```

While a scheduled task is running you'll see a process named something like scheduler.X, with X being an integer, say 1 in this case. You can inspect the logs of this scheduled process with

```
heroku logs --ps scheduler.1
```

S3 access logs: Like many Heroku applications, we store binary data on S3. Requests for this data sometimes skip our website completely, and this is done intentionally as a performance optimisation of our own design. This means we lose potential debugging information. To remedy this we have enabled the optional S3 logging feature on the relevant Oxbridge Notes buckets (through the AWS S3 web console), thereby creating logs within a bucket, oxbridgenotes-logging. Information about interpreting these logs is found in this article, and advice on using Ruby to read them in a usable manner is found here.

Alerts based on logs: Through our production log archiving software, LogEntries, we can set alerts to be sent via email/Campfire/other services when the logs contain an entry matching a certain regular expression, and optionally only when that entry matches a given number of times, e.g. once, ten times, one hundred times. This allows us to inform ourselves about errors that would not be reported by our Rails-based exception notification, that is, errors that happen outside the Rails application. In particular these are errors at the level of the Heroku platform, such as request time-outs, failure of our Rails app to boot at all, or failures of supporting services such as Memcachier, which are provisioned through the Heroku platform.
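A sketch of the kind of matching such an alert performs (the sample log lines are invented, and LogEntries' actual alert syntax differs, but the idea is a regular expression run over each incoming line):

```ruby
# Invented sample log lines: two should trigger a hypothetical
# "server error or Heroku timeout" alert.
lines = [
  'at=info method=GET path="/" status=200',
  'at=error code=H12 desc="Request timeout"',
  "Completed 500 Internal Server Error in 90ms",
]

# Match either an HTTP 5xx completion or Heroku's H12 timeout code.
alert_pattern = /Completed 5\d\d|code=H12/

matches = lines.grep(alert_pattern)
puts matches.size
```

Tuning the threshold ("alert after N matches") is what separates a one-off blip from a pattern worth waking up for.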
Intentional logging: At any point in your application code you may ask the Rails logger to output some information, such as the contents of a suspect local variable. In the development environment, logging variables is a poor man's debugger. I don't like to make general statements, but I'll make an exception here: it's always faster to throw in a binding.pry and poke around in the debugger. The intentional logging technique earns its keep in production, for bugs that are difficult or impossible to reproduce in your development environment, for example bugs involving third-party services without reliable sandbox environments (hi Paypal!). The keys to intentional logging are: a) label your variables, i.e. don't output the bare contents of the variable `paypal_response_code`, which might be "420", because alone and without context you will need to refer back to the code to understand what the hell a single line in your log with the number 420 means. Instead output "The Paypal response code is 420. They must live in Colorado."; b) choose relatively unique keywords within your log statements so that you can search/grep weeks' worth of logs and only get back the results you want (you'll probably get this benefit "for free" if you label your variables as per my last point); c) be careful that your log entry doesn't itself cause an error. For example

```
Rails.logger.info { order.products.first.country }
```

could blow up and take down the entire web page when there are no products, an error that would not exist without the added logging. Insulate against this class of error by adding protection against nil, for example by inserting the `try` method into the chain:

```
Rails.logger.info { order.products.first.try(:country) }
```

d) log all variables that could potentially be useful in debugging. When you're using intentional logging you'll often feel there's a 90% chance that one specific variable is relevant, and only a 10% chance that another one is. I've repeatedly found it useful to liberally add logging statements for every variable that might be useful, my reasoning being that it's cheap to add an extra logging statement, and slow and frustrating to realise after the fact that you haven't logged enough information and need to modify the code, redeploy, wait for the bug to appear again, then interpret the log once more; e) never delete a logging statement, only comment it out. Why? Because bugs are like acne: they keep reappearing, and you'll be glad of your reusable commented-out logging statements when one rears its ugly head again.
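Points a) and c) can be combined in one runnable sketch. The `Order` and `Product` objects below are stand-ins for ActiveRecord models, and the stdlib `Logger` stands in for `Rails.logger` (they share the same API); `&.` is plain Ruby's analogue of Rails's `try`:

```ruby
require "logger"
require "stringio"

# Stand-ins for ActiveRecord objects so the sketch runs anywhere.
Product = Struct.new(:country)
Order   = Struct.new(:products)

buffer = StringIO.new
logger = Logger.new(buffer)

order = Order.new([]) # the dangerous case: no products

# Labelled and nil-safe: `first` returns nil on an empty collection,
# and the safe-navigation operator stops the chain from raising.
logger.info { "First product country: #{order.products.first&.country.inspect}" }

puts buffer.string
```

The log line reads "First product country: nil" instead of crashing the request, and the label makes it greppable weeks later without consulting the source.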