
Browser Testing in the Cloud Redux

I’ve written quite a bit about browser testing solutions trying to help identify techniques and tools that make cross-browser development easier. My last article on the subject covered how to use BrowserStack to test any number of browsers all from one central tool; your own browser.

I was on a Windows PC back then so testing multiple browsers was a bit easier and testing tools were mainly complementary to my work. Now that I’m on OS X, the need for tools to round out my testing strategies is even more important, specifically because of the lack of Internet Explorer on the OS.

I’m a bit of a stickler for what I install on my computers and I prefer online tools when available. I’m also always on the hunt for new tools that make cross-browser testing easier, and decided to give one such service a run. I’ll go over some of its key features and how to leverage them to improve your testing capabilities.

ZOMG That’s a Lot of Browsers

First, let’s mention that, like every reliable service in this space, this one charges a monthly fee. I’m not surprised by that at all; the bottom line is that they have an infrastructure to support and, well, that costs money. The fee structure is based on the number of minutes you’d like available to you each month, with a unique twist: if you don’t use all of your minutes, you can roll a certain number over to the next month.

Onto the service itself. There are a couple of things that are important to me in these types of services:

  • Breadth of browser support across major OS versions
  • Mobile support (as I’m starting to shift to mobile web)
  • Debugging tool support
  • Responsiveness of the UI
  • Form factor support
  • Local system testing support (for example: proxy-based debugging)

All of these matter because together they provide the broadest testing surface across multiple devices. But to be honest, without debugging tool support (like Chrome DevTools, the IE F12 tools, etc.), a service like this wouldn’t be compelling to use; it would be only marginally better than a screenshot service. And being able to test locally is an obvious must-have, allowing you to test interactively before deploying to staging or production. So these criteria are important to consider.

The first thing I noticed about the service is its amazing breadth of browser and device form factor support. Every major OS is covered (including Ubuntu) and every OS version has a fairly comprehensive list of supported browser versions for testing.


In addition, there’s extensive support for mobile devices and browsers, covering earlier and more modern versions of Android, iOS, BlackBerry Bold and Windows Phone 8. The interesting (and really beneficial) thing is that for specific Android versions, you can test against competing browsers like Firefox Mobile, Maxthon and Opera.

Testing With the Service

If you’ve used BrowserStack or a similar service, you’ll feel right at home. The user experience matches very closely what I’ve seen before, which made jumping in fairly trivial. You’re initially presented with a dashboard that gives you access to the main features. These include:

  • Live browser testing
  • Automated screenshot service
  • Establishing a local connection

The live browser testing is what I’m most interested in. I need to ensure that rendering is consistent, so the first thing I did was a baseline test to see whether a site renders the same in the virtual browser as it does in my local browser. To mimic my local setup, I started the session in Mavericks, running the most recent stable version of Chrome:


One thing to note: in the OS/browser selection form, you’re only presented with the browser options available for that specific OS version, like this:


I went with GNC’s website because, well, I’m a bit of a fitness buff and they have a lot of interactive points as well, such as JavaScript-based fly-over menus and cycling feature panels. I figured it was a good test to see if the service could handle all of the interaction.

Looking at the two screenshots, you can see that the rendering for Chrome on Mavericks on both systems is exactly the same. This is a good thing, although it’s a bit trippy to see Chrome on Mavericks within Chrome on Mavericks. Inception anyone?


Local machine


Remote virtual browser

Once your session is running, you can change your target OS and browser version at any time by clicking the Change Configuration button, which displays a panel with dropdown choices. Note that changing the OS or browser will reload your session, but it sure beats having to spin up multiple virtual machines, especially for cursory reviews of pages.

Getting the baseline UI was great, but a more important test is how the site responds to interaction. Let me preface this by saying that I’ve not found a service like this that offers instantaneous response; there will always be some lag because the browsers are virtualized. The key thing is to ensure that normal interaction, like hovering over a menu or operating UI controls (like a scrolling panel), performs as expected, albeit a little more slowly. For example, GNC’s site has a dropdown menu system that expands when you hover over a menu option. Notice that hovering over it expands the menu and, equally important, gives me the option to drill down into it.


This interactivity is what makes these services so valuable. The days of having to rely on screenshot services and a ton of VMs to see how your site renders across a ton of browsers are gone.

What About Debugging?

Good question. Browser-based developer tools have progressed really nicely and we depend on them daily. Thankfully, the service includes the default debugging tools with each browser, giving us access to Chrome DevTools, the IE F12 Developer Tools, and Firefox’s Web Developer Tools, as well as Firebug for older versions of Firefox. Notice here that I’ve fired up the IE F12 tools in IE11 on Windows 7.


The tools are completely functional, allowing me to inspect the markup and DOM structure of the page as well as set styles and change text, just as I would on my local PC. You can see here how I’m able to update the inline JavaScript on the site:


What this translates to is the ability to leverage the debuggers to do advanced debugging work like script debugging across any browser and browser version.

One thing I was concerned about was whether the tools would accurately show page load times via the network traffic monitoring panels; in my tests, they seemed consistent with what I saw locally. This means I can feel reasonably confident that load times will be more or less on par (taking network variability into account, of course).

The one thing that I think would be very hard to measure, though, is page performance via the new suite of performance profilers included in Chrome and Internet Explorer. A lot of that data is directly affected by the characteristics of your computer, especially when rendering is GPU-enhanced. Testing this on virtualized browsers or virtual machines just isn’t real-world, so I wouldn’t recommend it. If you’re an interactive developer (games, for example), it’s best to test on real devices to get a true understanding of performance.

Testing Different Form Factors

As I begin focusing on mobile more and more, the need to test across multiple mobile OSs and different form factors becomes a high priority. Unfortunately, short of a very big inheritance, a lottery win, or a loving sponsor, building a full-featured mobile device lab just isn’t in the cards. And at the current pace, things are only getting tougher as manufacturers continue to push the limits of mobile browsers and device sizes. The service offers the ability to test across the major mobile OSs, simulating most of the popular mobile devices: iPads, iPhones, Nexus 7s and such. This is certainly not an all-encompassing list of mobile devices; I assume it’s meant to cover the most modern OSs and devices available.

The process for testing is exactly the same as for desktop browsers, except that rendering happens within the screen size of the specific mobile device you’ve selected:


Again, the service uses simulators to show how your site will render on a mobile device. Keep in mind, though, that while simulators are good, it’s always best to test against a real device when possible.

New devices come out all the time, and I wouldn’t expect every form factor to be covered. A nice addition would be to let users of the service define the viewport size rather than only being presented with default screen resolutions; this would also offer more flexibility for testing responsive sites.


Before interactive services like this became available, screenshot services were known as one of the quickest ways to see how your site rendered across multiple browsers. While they’re a bit passé now, they’re still useful and, interestingly enough, I’m seeing most of these browser testing services spin up screenshot capture as part of their offerings. So the practice is having a bit of a renaissance, most likely driven by the increasing number of browser versions, devices and form factors we need to account for.

Using the screenshot service is straightforward: enter a URL, select the browsers you’d like screenshots from, and click the Take Screenshots button:


The nice thing about this is that it allows you to choose as many device/OS/browser combinations as you’d like as well as define the resolution on a per-target basis. This generates a series of snapshots that you can review:


Clicking individual screenshots displays a larger image allowing you to get a detailed view of the rendering.

A couple of things to keep in mind: it takes a little while for the screenshots to be captured and rendered, so the more browsers you select, the longer you’ll wait. Unlike other services where you wait your turn in a queue, this wait seems to be simply processing time; you’re paying for the service, so I can’t imagine there being a queue. Also bear in mind that some of these screenshots are invariably derived from simulators and, as I mentioned before, simulators don’t always render the same as a real browser. Lastly, each screenshot captures a specific page, not the entire site.

Nonetheless, the fact that I can fairly quickly get an idea of how my site renders across so many devices helps me drill down into the specific browser combinations that need special attention.

And that’s where a really neat feature comes in. The service offers the ability to compare layouts side-by-side so you can see rendering differences between different browsers:


As you can see in the screenshot, it goes a step further by detailing the differences and overlaying a transparent yellow highlight on each panel to mark them. I’m sure you can relate to the frustration many a developer has felt over discovering slight layout differences after the fact; this helps bring them forward during the testing process. You can also scroll through and compare multiple scenarios by clicking the Prev and Next buttons.

Testing Local Files Remotely

The true value of a service like this is in facilitating your local debugging efforts; only being able to test publicly-available sites would offer limited value in your overall testing strategy. The service provides the ability to test your local files against its remote servers using a Java-based proxy applet, or from the command line, again leveraging Java to create the proxy. This is similar to other services and is necessary to establish the connection between your local PC and the remote servers, as well as to tunnel past any firewalls your company might have. Once the connection is set, you can test local files directly or via a URL served by your local web server.

The team has created a video that gives a good explanation and demonstration of how this part of the service works.

Closing Thoughts

It’d be truly great if we didn’t need these services; that would mean every browser rendered exactly as expected on every device that supported it. Unfortunately, we still have a fair amount of browser fragmentation, and every browser version tends to have its own quirks to contend with. So services like this provide real value in streamlining cross-browser testing.

Overall, I think the service is very good, albeit not without some quirks of its own. I experienced some intermittent lockups in live testing, which may be attributable to Flash, and in some sessions, seeing a number of browser icons in the OS dock left me scratching my head as to why they were there when I’d chosen a specific target browser. These issues didn’t prevent me from doing what I wanted to do (testing), but it felt like things needed to be tidied up a bit.

The layout comparison feature, though, was pretty hot and something I could see myself using regularly.

What I am seeing is that price could be a big success factor given the breadth of services on offer. The service appears to have set itself at a very competitive price point, incorporating live testing, screenshots and local testing into one fixed monthly cost rather than pricing each feature separately. This is very appealing, especially for price-conscious developers.

The big factor, though, will be how much time you need for testing. From experience, two and a half hours (the time allotted in the Basic plan) seems a little limited, especially when you account for rendering latency. Your mileage may vary, but it’s certainly something to consider.





Redirect Users to Custom Pages by Role

WordPress is being used more and more as a web application framework. With that use case comes a bunch of extra circumstances that WordPress doesn’t cover. Do you really want your application users to see the WordPress admin?

In my web application development experience, the answer to that question is usually “no.”

Today I’m going to teach you how to redirect a user based on their role to a custom page in WordPress.

Getting Set Up

Let’s start this by building a plugin. You want this in a plugin because it’s likely you’ll change your theme design and still want the redirect functionality. Any functionality that you want to live past the current theme design should be in a plugin.

Create a new plugin folder in your wp-content/plugins directory called cm-redirect-by-role and add a file called cm-redirect-by-role.php. To that file, add the basic WordPress plugin header seen below.

<?php
/*
Plugin Name: Redirect Users by Role
Plugin URI:
Description: Redirects users based on their role
Version: 1.0
Author: SFNdesign, Curtis McHale
Author URI:
License: GPLv2 or later
*/

/*
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation; either version 2
of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/

Now that you have a plugin started let’s take a look at how user login works.

User Login Flow

The default spot for a user to log in to your WordPress site is the standard login page, wp-login.php. When you log in from that location, the site sends you to the WordPress admin dashboard.


That means the WordPress admin is starting up, so you need to use an admin action to catch the user. I always hook the admin_init action, since it runs late enough that you have access to user data, but not so late that the user sees anything on the dashboard.

Using the admin_init action means that even if a user is already logged in and tries to access the WordPress admin, they will still get redirected.

Now let’s take a look at the code we’re going to use. For our example, we’ll assume we want to redirect all subscribers, but this will work with any standard or custom role in WordPress.

/**
 * Redirects users based on their role
 *
 * @since 1.0
 * @author SFNdesign, Curtis McHale
 *
 * @uses wp_get_current_user() Returns a WP_User object for the current user
 * @uses wp_redirect()         Redirects the user to the specified URL
 */
function cm_redirect_users_by_role() {

    $current_user = wp_get_current_user();
    $role_name    = $current_user->roles[0];

    if ( 'subscriber' === $role_name ) {
        wp_redirect( '' );
    } // if

} // cm_redirect_users_by_role
add_action( 'admin_init', 'cm_redirect_users_by_role' );

We start by getting the current user object with wp_get_current_user(). From that we get the role name and assign it to the $role_name variable.

Then we check whether $role_name matches the role we want to redirect. If it does, we use wp_redirect() to send the user to our location of choice.
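One detail worth flagging here: wp_redirect() doesn’t halt execution on its own, so standard WordPress practice is to follow it with exit. A slightly hardened sketch of the same check — the empty-role guard and the placeholder destination are my additions, not part of the original example:

```php
<?php
$current_user = wp_get_current_user();

// Guard against users with no role at all; reading
// $current_user->roles[0] on an empty array would trigger a notice.
$role_name = ! empty( $current_user->roles ) ? $current_user->roles[0] : '';

if ( 'subscriber' === $role_name ) {
    wp_redirect( home_url( '/members/' ) ); // hypothetical destination page
    exit; // wp_redirect() does not exit for you
}
```

This snippet assumes it runs inside WordPress, where wp_get_current_user(), wp_redirect() and home_url() are available.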

While this will work we still have one more piece to add.

Making It AJAX Safe

When making AJAX calls in WordPress, you should always call the WordPress AJAX routing file (admin-ajax.php), which lives inside the WordPress admin. If we leave our code as it is, any AJAX call made by a user with a matching role will fail, since it will meet our conditional and be redirected.

To fix that we need to check if we are currently doing an AJAX call and if so skip the role check.

function cm_redirect_users_by_role() {

    if ( ! defined( 'DOING_AJAX' ) ) {

        $current_user = wp_get_current_user();
        $role_name    = $current_user->roles[0];

        if ( 'subscriber' === $role_name ) {
            wp_redirect( '' );
        } // if $role_name

    } // if DOING_AJAX

} // cm_redirect_users_by_role
add_action( 'admin_init', 'cm_redirect_users_by_role' );

Now our redirect function is wrapped in a check for the DOING_AJAX constant. If that constant is defined, we’re in the middle of an AJAX call and we skip the redirect code.


That’s it! We can now redirect users, based on their role, to a custom location of our choosing. We could even redirect users with different roles to different pages if we wanted.

All we’d need to do is add a second conditional to match the second role and set the location we want to redirect to.
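As a sketch of that multi-role version, a role-to-URL map keeps things tidier than stacked conditionals. This assumes the same hooks as the article’s example; the role names and destination paths are placeholders you’d swap for your own:

```php
<?php
/**
 * Redirect users to different pages based on their role.
 * A sketch extending the article's example; destinations are placeholders.
 */
function cm_redirect_users_by_role() {

    // Skip AJAX requests, as before, so admin-ajax.php calls still work.
    if ( defined( 'DOING_AJAX' ) ) {
        return;
    }

    $current_user = wp_get_current_user();
    $role_name    = ! empty( $current_user->roles ) ? $current_user->roles[0] : '';

    // Map each role we care about to its landing page.
    $redirects = array(
        'subscriber'  => home_url( '/members/' ),
        'contributor' => home_url( '/contributors/' ),
    );

    if ( isset( $redirects[ $role_name ] ) ) {
        wp_redirect( $redirects[ $role_name ] );
        exit; // wp_redirect() does not halt execution on its own.
    }
}
add_action( 'admin_init', 'cm_redirect_users_by_role' );
```

Roles without an entry in the map (administrators, for example) fall through and reach the dashboard as usual.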