As an industry, we’re historically terrible at drawing lines between things. We try to segment devices based on screen size, but that doesn’t take into account hardware functionality, form factor, and usage context, for starters. The laptop I’m writing this on has the same resolution as a 1080p television. They’d be lumped into the same screen-size–dependent groups, but they are two totally different device classes, so how do we determine what goes together?

That’s a simple example, but it points to a larger issue. We so desperately want to draw lines between things, but there are often too many variables to make those lines clean. Why, then, do we draw such strict lines between our roles on projects? What does the area of overlap between a designer and front-end developer look like? A front- and back-end developer? A designer and back-end developer? The old thinking of defined roles is certainly loosening up, but we still have a long way to go.
The chasm between roles that is most concerning is the one between web designers/developers and native application designers/developers. We often choose a camp early on and stick to it, a mindset that may have been fueled by the false “native vs. Web” battle a few years ago. It was positioned as an either-or decision, and hybrid approaches were looked down upon. The two camps of creators are drifting farther and farther apart, even as the products are getting closer and closer. John Gruber best described the overlap that users see: “When I’m using Tweetbot, for example, much of my time in the app is spent reading web pages rendered in a web browser. Surely that’s true of mobile Facebook users, as well. What should that count as, ‘app’ or ‘web’? I publish a website, but tens of thousands of my most loyal readers consume it using RSS apps. What should they count as, ‘app’ or ‘web’?” The people using the things we build don’t see the divide as harshly as we do, if at all. More importantly, the development environments are becoming more similar, as well.
Swift, Apple’s brand new programming language for iOS and Mac development, has a strong resemblance to the languages we know and love on the web, and that’s no accident. One of Apple’s top targets for Swift, if not the top target, is the web development community. It’s a massive, passionate, and talented pool of developers who, largely, have not done iOS or Mac work—yet.
As someone who spans the divide regularly, it’s sad to watch these two communities keep each other at arm’s length like awkward cousins at a family reunion. We have so much in common—interests, skills, core values, and a ton of technological ancestry. The difference between the things we build is shrinking in the minds of our shared users, and the ways we build those things are aligning. I dream of the day when we get over our poorly drawn lines and become the big, happy community I know we can be. At the very least, please start reading each other’s blogs. (Source: A List Apart: The Full Feed.)
“You keep it by giving it away.” It’s a philosophy that’s always guided us at A List Apart: that we all learn more—and are more successful—when we share what we know with anyone who wants to listen. And it comes straight from our publisher, Jeffrey Zeldman.
For 20 years, he’s been sharing everything he can with us, the people who make websites—from advice on table layouts in the ‘90s to Designing With Web Standards in the 2000s to educating the next generation of designers today. Our friends at Lynda.com just released a documentary highlighting Jeffrey’s two decades of designing, organizing, and most of all sharing on the web.
You should watch it: Jeffrey Zeldman: 20 Years of Web Design and Community, from lynda.com. (Source: A List Apart: The Full Feed.)

I remember January 10, 2010, rather well: it was the day we lost a project’s complete history. We were using Subversion as our version control system, which kept the project’s history in a central repository on a server. And we were backing up this server on a regular basis—at least, we thought we were.
The server broke down, and then the backup failed. Our project wasn’t completely lost, but all the historic versions were gone.
Shortly after the server broke down, we switched to Git. I had always seen version control as torturous: it was too complex and not useful enough for me to see its value, though I used it as a matter of duty. But once we’d spent some time on the new system, I began to understand just how helpful Git could be. Since then, it has saved my neck in many situations. During the course of this article, I’ll walk through how Git can help you avoid mistakes—and how to recover if they’ve already happened.

Every teammate is a backup

Since Git is a distributed version control system, every member of our team who has a project cloned (or “checked out,” if you’re coming from Subversion) automatically has a backup on his or her disk.
This backup contains the latest version of the project, as well as its complete history. This means that should a developer’s local machine or even our central server ever break down again (and the backup not work for any reason), we’re up and running again in minutes: any local repository from a teammate’s disk is all we need to get a fully functional replacement.

Branches keep separate things separate

When my more technical colleagues told me how “cool” branching in Git was, I wasn’t bursting with joy right away. First, I have to admit that I didn’t really understand the advantages of branching. And second, coming from Subversion, I vividly remembered it being a complex and error-prone procedure. With those bad memories, I was anxious about working with branches and tried to avoid them whenever I could. It took me quite a while to understand that branching and merging work completely differently in Git than in most other systems—especially regarding their ease of use!
So if you learned the concept of branches from another version control system (like Subversion), I recommend you forget your prior knowledge and start fresh. Let’s start by understanding why branches are so important in the first place.

Why branches are essential

Back in the days when I didn’t use branches, working on a new feature was a mess. Essentially, I had the choice between two equally bad workflows:

(a) I already knew that creating small, granular commits with only a few changes was a good version control habit.
However, if I did this while developing a new feature, every commit would mingle my half-done feature with the main code base until I was done. It wasn’t very pleasant for my teammates to have my unfinished feature introduce bugs into the project.
(b) To avoid getting my work-in-progress mixed up with other topics (from colleagues or myself), I’d work on a feature in my separate space. I would create a copy of the project folder that I could work with quietly—and only commit my feature once it was complete. But committing my changes only at the end produced a single, giant, bloated commit that contained all the changes. Neither my teammates nor I could understand what exactly had happened in this commit when looking at it later. I slowly understood that I had to make myself familiar with branches if I wanted to improve my coding.
Working in contexts

Any project has multiple contexts where work happens; each feature, bug fix, experiment, or alternative of your product is actually a context of its own. It can be seen as its own “topic,” clearly separated from other topics.
If you don’t separate these topics from each other with branching, you will inevitably increase the risk of problems. Mixing different topics in the same context makes it hard to keep an overview—and with a lot of topics, it becomes almost impossible; makes it hard to undo something that proved to contain a bug, because it’s already mingled with so much other stuff; and doesn’t encourage people to experiment and try things out, because they’ll have a hard time getting experimental code out of the repository once it’s mixed with stable code. Using branches gave me the confidence that I couldn’t mess up. In case things went wrong, I could always go back, undo, start fresh, or switch contexts.

Branching basics

Branching in Git actually only involves a handful of commands.
Let’s look at a basic workflow to get you started. To create a new branch based on your current state, all you have to do is pick a name and execute a single command on your command line.
We’ll assume we want to start working on a new version of our contact form, and therefore create a new branch called “contact-form”:

$ git branch contact-form

Using the git branch command without a name specified will list all of the branches we currently have (and the -v flag provides us with a little more data than usual):

$ git branch -v

You might notice the little asterisk next to the branch named “master.” This means it’s the currently active branch. So, before we start working on our contact form, we need to make this our active context:

$ git checkout contact-form

Git has now made this branch our current working context.
(In Git lingo, this is called the “HEAD branch”). All the changes and every commit that we make from now on will only affect this single context—other contexts will remain untouched. If we want to switch the context to a different branch, we’ll simply use the git checkout command again. In case we want to integrate changes from one branch into another, we can “merge” them into the current working context.
Imagine we’ve worked on our “contact-form” feature for a while, and now want to integrate these changes into our “master” branch. All we have to do is switch back to that branch and call git merge:

$ git checkout master
$ git merge contact-form

Using branches

I would strongly suggest that you use branches extensively in your day-to-day workflow. Branches are one of the core concepts that Git was built around. They are extremely cheap and easy to create, and simple to manage—and there are plenty of resources out there if you’re ready to learn more about using them.
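Put together, the whole workflow can be rehearsed end to end in a throwaway repository. This is only a sketch: the repository, the contact.html file, and the commit messages are invented for the demo (the branch name follows the “contact-form” example above).

```shell
#!/bin/sh
# A complete run of the branch workflow, in a disposable repository.
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo
git symbolic-ref HEAD refs/heads/master   # pin the initial branch name to "master"
git commit -q --allow-empty -m 'initial'
git branch contact-form          # create the new context
git checkout -q contact-form     # make it the active (HEAD) branch
echo '<form></form>' > contact.html
git add contact.html
git commit -q -m 'Add contact form'
git checkout -q master           # switch back; contact.html is absent here
git merge -q contact-form        # integrate the feature into master
ls contact.html                  # the merged file now exists on master
```

Because “master” contained no new commits of its own, this merge is a simple fast-forward; the history stays linear.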
Undoing things

There’s one thing that I’ve learned as a programmer over the years: mistakes happen, no matter how experienced people are. You can’t avoid them, but you can have tools at hand that help you recover from them. One of Git’s greatest features is that you can undo almost anything. This gives me the confidence to try things out without fear—because, so far, I haven’t managed to really break something beyond recovery.

Amending the last commit

Even if you craft your commits very carefully, it’s all too easy to forget to add a change or mistype the message. With the --amend flag of the git commit command, Git allows you to change the very last commit, and it’s a very simple fix to execute.
For example, if you forgot to add a certain change and also made a typo in the commit subject, you can easily correct this:

$ git add some/changed/files
$ git commit --amend -m 'The message, this time without typos'

There’s only one thing you should keep in mind: never amend a commit that has already been pushed to a remote repository. If you respect this rule, the amend option is a great little helper for fixing the last commit. (For more detail about the amend option, I recommend Nick Quaranto’s excellent walkthrough.)

Undoing local changes

Changes that haven’t been committed are called “local.” All the modifications that are currently present in your working directory are “local” uncommitted changes. Discarding these changes can make sense when your current work is clearly worse than what you had before. With Git, you can easily undo local changes and start over with the last committed version of your project. If it’s only a single file that you want to restore, you can use the git checkout command:

$ git checkout -- file/to/restore

Don’t confuse this use of the checkout command with switching branches (see above).
If you use it with two dashes and (separated by a space!) the path to a file, it will discard the uncommitted changes in that file. On a bad day, however, you might even want to discard all your local changes and restore the complete project:

$ git reset --hard HEAD

This will replace all of the files in your working directory with the last committed revision. Just as with the checkout command above, this discards the local changes. Be careful with these operations: since local changes haven’t been checked into the repository, there is no way to get them back once they are discarded!
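Both commands can be tried safely in a throwaway repository. The file names below are invented for the demo; the point is that checkout with two dashes restores one file, while reset --hard HEAD restores everything.

```shell
#!/bin/sh
# Throwing away uncommitted ("local") changes, in a disposable repository.
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo
echo good > a.txt
echo good > b.txt
git add .
git commit -q -m 'baseline'
echo scribble > a.txt            # two local, uncommitted changes...
echo scribble > b.txt
git checkout -- a.txt            # ...restore just one file
cat a.txt                        # a.txt is "good" again
git reset --hard HEAD            # ...or wipe every local change at once
cat b.txt                        # b.txt is "good" again, too
```

Run it in a scratch directory first: as the article warns, anything these commands discard is gone for good.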
Undoing committed changes

Of course, undoing things is not limited to local changes. You can also undo certain commits when necessary—for example, if you’ve introduced a bug. Basically, there are two main commands to undo a commit:

(a) git reset

The git reset command really turns back time. You tell it which version you want to return to, and it restores exactly this state—undoing all the changes that happened after this point in time. Just provide it with the hash ID of the commit you want to return to:

$ git reset --hard 2be18d9

The --hard option is the easiest and cleanest approach, but it also wipes away all local changes that you might still have in your working directory.
So, before doing this, make sure there aren’t any local changes you’ve set your heart on.

(b) git revert

The git revert command is used in a different scenario. Imagine you have a commit that you don’t want anymore—but the commits that came afterwards still make sense to you. In that case, you wouldn’t use the git reset command, because it would undo all those later commits, too! The revert command only reverts the effects of a certain commit.
It doesn’t remove any commits, like git reset does. Instead, it creates a new commit that introduces changes that are just the opposite of the commit to be reverted. For example, if you deleted a certain line of code, revert will create a new commit that introduces exactly that line again. To use it, simply provide it with the hash ID of the commit you want reverted:

$ git revert 2be18d9

Finding bugs

When it comes to finding bugs, I must admit that I’ve wasted quite some time stumbling in the dark. I often knew that it used to work a couple of days ago—but I had no idea where exactly things went wrong. It was only when I found out about git bisect that I could speed up this process a bit. With the bisect command, Git provides a tool that helps you find the commit that introduced a problem.
Imagine the following situation: we know that our current version (tagged “2.0”) is broken. We also know that a couple of commits ago (our version “1.9”), everything was fine.
The problem must have occurred somewhere in between. This is already enough information to start our bug hunt with git bisect:

$ git bisect start
$ git bisect bad
$ git bisect good v1.9

After starting the process, we told Git that our current commit contains the bug and therefore is “bad.” We then informed Git which previous commit is definitely working (as a parameter to git bisect good). Git then restores our project in the middle between the known good and known bad conditions. We now test this version (for example, by running unit tests, building the app, deploying it to a test system, etc.) to find out if this state works—or already contains the bug. As soon as we know, we tell Git again—either with git bisect bad or git bisect good. Let’s assume we said that this commit was still “bad.” This effectively means that the bug must have been introduced even earlier—and Git will again narrow down the commits in question. This way, you’ll find out very quickly where exactly the problem occurred. Once you know this, call git bisect reset to finish your bug hunt and restore the project’s original state.
In the beginning, it felt just like my other experiences with version control: tedious and unhelpful. But with time, using it became intuitive, and it gained my trust and confidence. After all, mistakes happen, no matter how much experience we have or how hard we try to avoid them. What separates the pro from the beginner is preparation: having a system in place that you can trust in case of problems.
It helps you stay on top of things, especially in complex projects. And, ultimately, it helps you become a better professional.

References

Feel free to learn more about amending, reverting, and resetting commits. Make yourself familiar with git bisect with this detailed example. A detailed introduction to branching. (Source: A List Apart: The Full Feed.)
Growing up, I learned there were two kinds of reviews I could seek out from my parents. One parent gave reviews in the form of a shower of praise. The other parent, the one with a degree from the Royal College of Art, would put me through a design crit. Today the reviews I seek are for my code, not my horse drawings, but it continues to be a process I both dread and crave. In this article, I’ll describe my battle-tested process for conducting code reviews, highlighting the questions you should ask during the review process as well as the necessary version control commands to download and review someone’s work. I’ll assume your team uses Git to store its code, but the process works much the same if you’re using any other source control system. Completing a peer review is time-consuming.
In the last project where I introduced mandatory peer reviews, the senior developer and I estimated that it doubled the time to complete each ticket. The reviews introduced more context-switching for the developers, and were a source of increased frustration when it came to keeping the branches up to date while waiting for a code review. The benefits, however, were huge. Coders gained a greater understanding of the whole project through their reviews, reducing silos and making onboarding easier for new people.
Senior developers had better opportunities to ask why decisions were being made in the codebase that could potentially affect future work. And by adopting an ongoing peer review process, we reduced the amount of time needed for human quality assurance testing at the end of each sprint. Let’s walk through the process. Our first step is to figure out exactly what we’re looking for.

Determine the purpose of the proposed change

Our code review should always begin in a ticketing system, such as Jira or GitHub. It doesn’t matter if the proposed change is a new feature, a bug fix, a security fix, or a typo: every change should start with a description of why the change is necessary, and what the desired outcome will be once the change has been applied. This allows us to accurately assess when the proposed change is complete.
The ticketing system is where you’ll track the discussion about the changes that need to be made after reviewing the proposed work. From the ticketing system, you’ll determine which branch contains the proposed code.
Let’s pretend the ticket we’re reviewing today is 61524—it was created to fix a broken link in our website. It could just as equally be a refactoring, or a new feature, but I’ve chosen a bug fix for the example. No matter what the nature of the proposed change is, having each ticket correspond to only one branch in the repository will make it easier to review, and close, tickets. Set up your local environment and ensure that you can reproduce what is currently the live site—complete with the broken link that needs fixing.
When you apply the new code locally, you want to catch any regressions or problems it might introduce. You can only do this if you know, for sure, the difference between what is old and what is new.
Review the proposed changes

At this point you’re ready to dive into the code. I’m going to assume you’re working with Git repositories, on a branch-per-issue setup, and that the proposed change is part of a remote team repository. Working directly from the command line is a good universal approach, and allows me to create copy-paste instructions for teams regardless of platform. To begin, update your local list of branches:

git fetch

Then list all available branches:
git branch -a

A list of branches will be displayed in your terminal window. It may appear something like this:

* master
  remotes/origin/master
  remotes/origin/HEAD -> origin/master
  remotes/origin/61524-broken-link

The * denotes the name of the branch you are currently viewing (or have “checked out”). Lines beginning with remotes/origin are references to branches we’ve downloaded.
We are going to work with a new, local copy of branch 61524-broken-link. When you clone your project, you’ll have a connection to the remote repository as a whole, but you won’t have a read-write relationship with each of the individual branches in the remote repository. You’ll make an explicit connection as you switch to the branch. This means that if you need to run the command git push to upload your changes, Git will know which remote repository you want to publish your changes to:

git checkout --track origin/61524-broken-link

Ta-da! You now have your own copy of the branch for ticket 61524, which is connected (“tracked”) to the origin copy in the remote repository. You can now begin your review!
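If you’d like to rehearse these steps without a real team server, a local bare repository can stand in for the central one. Everything below—the paths, the ticket branch, and the commit message—is invented for the demo; only the fetch, branch -a, and checkout --track steps mirror the article.

```shell
#!/bin/sh
# Rehearsing the reviewer's setup steps against a local "central" repository.
set -e
work=$(mktemp -d)
git init -q --bare "$work/central.git"
git clone -q "$work/central.git" "$work/reviewer" 2>/dev/null
cd "$work/reviewer"
git config user.email demo@example.com
git config user.name demo
git symbolic-ref HEAD refs/heads/master   # pin the branch name to "master"
git commit -q --allow-empty -m 'initial site'
git push -q origin master
# Pretend a colleague pushed the ticket branch:
git checkout -q -b 61524-broken-link
echo '<a href="/resources">Resources</a>' > about.html
git add about.html
git commit -q -m 'Fix broken link. Resolves #61524.'
git push -q origin 61524-broken-link
git checkout -q master
git branch -q -D 61524-broken-link    # forget the local copy; act as a fresh reviewer
# The review steps from the article:
git fetch -q                          # update the local list of branches
git branch -a                         # shows remotes/origin/61524-broken-link
git checkout -q --track origin/61524-broken-link
git log --oneline master..            # commits on the branch but not on master
```

The final command previews the next step of the review: listing only the commits that the ticket branch adds on top of master.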
First, let’s take a look at the commit history for this branch with the log command:

git log master..

Sample output:

Author: emmajane
Date: Mon Jun 30 17: -0400

    Link to resources page was incorrectly spelled.
    Resolves #61524.

This gives you the full log message of all the commits that are in the branch 61524-broken-link, but not in the master branch. Skim through the messages to get a sense of what’s happening. Next, take a brief gander through the commit itself using the diff command. This command shows the difference between two snapshots in your repository. You want to compare the code on your checked-out branch to the branch you’ll be merging “to”—which conventionally is the master branch:
git diff master

How to read patch files

When you run the command to output the difference, the information will be presented as a patch file. Patch files are ugly to read. You’re looking for lines beginning with + or -. These are lines that have been added or removed, respectively.
Scroll through the changes using the up and down arrows, and press q to quit when you’ve finished reviewing. If you need an even more concise comparison of what’s happened in the patch, consider modifying the diff command to list only the changed files, and then look at the changed files one at a time:

git diff master --name-only
git diff master <filename>

Let’s take a look at the format of a patch file:

diff --git a/about.html b/about.html
index a3aa100.a644
--- a/about.html
+++ b/about.html
@@ -48,5 +48,5 @@
 (2004-05)
-A full list of public
+A full list of public presentations and workshops Emma has given is available

I tend to skim past the metadata when reading patches and just focus on the lines that start with - or +. This means I start reading at the line immediately following @@. There are a few lines of context provided leading up to the changes. These lines are indented by one space each. The changed lines of code are then displayed with a preceding - (line removed) or + (line added).
Going beyond the command line

Using a Git repository browser, such as gitk, allows you to get a slightly better visual summary of the information we’ve looked at so far. The version of Git that Apple ships does not include gitk—I used Homebrew to re-install Git and get this utility. Any repository browser will suffice, though, and there are many GUI clients available on the Git website.

gitk

When you run the command gitk, a graphical tool will launch from the command line.
An example of the output is given in the following screenshot. Click on each of the commits to get more information about it. Many ticket systems will also allow you to look at the changes in a merge proposal side-by-side, so if you’re finding this cumbersome, click around in your ticketing system to find the comparison tools they might have—I know for sure GitHub offers this feature. Now that you’ve had a good look at the code, jot down your answers to the following questions: Does the code comply with your project’s identified coding standards? Does the code limit itself to the scope identified in the ticket?
Does the code follow industry best practices in the most efficient way possible? Has the code been implemented in the best possible way according to all of your internal specifications? It’s important to separate your preferences and stylistic differences from actual problems with the code. Apply the proposed changes Now is the time to start up your testing environment and view the proposed change in context. How does it look?
Does your solution match what the coder thinks they’ve built? If it doesn’t look right, do you need to clear the cache, or perhaps rebuild the Sass output to update the CSS for the project? Now is the time to also test the code against whatever test suite you use. Does the code introduce any regressions? Does the new code perform as well as the old code? Does it still fall within your project’s performance budget for download and page rendering times? Are the words all spelled correctly, and do they follow any brand-specific guidelines you have?
Depending on the context for this particular code change, there may be other obvious questions you need to address as part of your code review. Do your best to create the most comprehensive list of everything you can find wrong (and right) with the code. It’s annoying to get dribbles of feedback from someone as part of the review process, so we’ll try to avoid “just one more thing” wherever we can. Prepare your feedback Let’s assume you’ve now got a big juicy list of feedback. Maybe you have no feedback, but I doubt it. If you’ve made it this far in the article, it’s because you love to comb through code as much as I do. Let your freak flag fly and let’s get your review structured in a usable manner for your teammates.
For all the notes you’ve assembled to date, sort them into the following categories:

1. The code is broken. It doesn’t compile, introduces a regression, doesn’t pass the testing suite, or in some way actually fails demonstrably. These are problems that absolutely must be fixed.

2. The code does not follow best practices. You have some conventions; the web industry has some guidelines. These fixes are pretty important to make, but they may have some nuances the developer might not be aware of.

3. The code isn’t how you would have written it. You’re a developer with battle-tested opinions, and you know you’re right; you just haven’t had the chance to update the Wikipedia page yet to prove it.

Submit your evaluation

Based on this new categorization, you are ready to engage in passive-aggressive coding. If the problem is clearly a typo and falls into one of the first two categories, go ahead and fix it. Obvious typos don’t really need to go back to the original author, do they? Sure, your teammate will be a little embarrassed, but they’ll appreciate you having saved them a bit of time, and you’ll increase the efficiency of the team by reducing the number of round trips the code needs to take between the developer and the reviewer. If the change you are itching to make falls into the third category: stop. Do not touch the code.
Instead, go back to your colleague and get them to describe their approach. Asking “why” might lead to a really interesting conversation about the merits of the approach taken. It may also reveal limitations of the approach to the original developer. By starting the conversation, you open yourself to the possibility that just maybe your way of doing things isn’t the only viable solution. If you needed to make any changes to the code, they should be absolutely tiny and minor. You should not be making substantive edits in a peer review process.
Make the tiny edits, and then add the changes to your local repository as follows:

git add .
git commit -m '[#61524] Correcting typo identified in peer review.'

You can keep the message brief, as your changes should be minor. At this point you should push the reviewed code back up to the server for the original developer to double-check and review. Assuming you’ve set up the branch as a tracking branch, it should just be a matter of running the command:

git push

Update the issue in your ticketing system as appropriate for your review. Perhaps the code needs more work, or perhaps it was good as written and it is now time to close the issue. Repeat the steps in this section until the proposed change is complete and ready to be merged into the main branch.
Merge the approved change into the trunk

Up to this point you’ve been comparing a ticket branch to the master branch in the repository. This main branch is referred to as the “trunk” of your project.
(It’s a tree thing, not an elephant thing.) The final step in the review process will be to merge the ticket branch into the trunk, and clean up the corresponding ticket branches. Begin by updating your master branch to ensure you can publish your changes after the merge.
git checkout master
git pull origin master

Take a deep breath, and merge your ticket branch back into the main repository. As written, the following command will not create a new commit in your repository history. The commits will simply shuffle into line on the master branch, making git log --graph appear as though a separate branch never existed.
If you would like to maintain the illusion of a past branch, simply add the parameter --no-ff to the merge command, which will make it clear, via the graph history and a new commit message, that you have merged a branch at this point. Check with your team to see what’s preferred:

git merge 61524-broken-link

The merge will either fail, or it will succeed.
If there are no merge errors, you are ready to share the revised master branch by uploading it to the central repository:

git push

If there are merge errors, the original coders are often better equipped to figure out how to fix them, so you may need to ask them to resolve the conflicts for you. Once the new commits have been successfully integrated into the master branch, you can delete the old copies of the ticket branches both from your local repository and from the central repository.
It’s just basic housekeeping at this point:

git branch -d 61524-broken-link
git push origin --delete 61524-broken-link

Conclusion

This is the process that has worked for the teams I’ve been a part of. Without a peer review process, it can be difficult to address problems in a codebase without blame. With it, the code becomes much more collaborative; when a mistake gets in, it’s because we both missed it. And when a mistake is found before it’s committed, we both breathe a sigh of relief that it was found when it was. Regardless of whether you’re using Git or another source control system, the peer review process can help your team.
Peer-reviewed code might take more time to develop, but it contains fewer mistakes, and has a strong, more diverse team supporting it. And, yes, I’ve been known to learn the habits of my reviewers and choose the most appropriate review style for my work, just like I did as a kid.
(Source: A List Apart: The Full Feed.)

Freelancers and self-employed business owners can choose from a huge number of conferences to attend in any given year.
There are hundreds of industry podcasts, a constant stream of published books, and a never-ending supply of sites all giving advice. It is very easy to spend a lot of valuable time and money just attending, watching, reading, listening and hoping that somehow all of this good advice will take root and make our business a success. However, all the good advice in the world won’t help you if you don’t act on it. While you might leave that expensive conference feeling great, did your attendance create a lasting change to your business? I was thinking about this subject while listening to episode 14 of the Working Out podcast, hosted by Ashley Baxter and Paddy Donnelly. They were talking about following through, and how it is possible to “nod along” to good advice but never do anything with it. If you have ever been sent to a conference by an employer, you may have been expected to report back.
You might even have been asked to present to your team on the takeaway points from the event. As freelancers and business owners, we don’t have anyone making us consolidate our thoughts in that way. It turns out that the way I work gives me a fairly good method of knowing which things are bringing me value.

Tracking actionable advice

I’m a fan of the Getting Things Done technique, and live by my to-do lists. I maintain a Someday/Maybe list in OmniFocus into which I add items that I want to do or at least investigate, but that aren’t a project yet. If a podcast is worth keeping on my playlist, there will be items entered linking back to certain episodes. Conference takeaways might be a link to a site with information that I want to read.
It might be an idea for an article to write, or instructions on something very practical such as setting up an analytics dashboard to better understand some data. The first indicator of a valuable conference is how many items I add during or just after the event. Having a big list of things to do is all well and good, but it’s only one half of the story. The real value comes when I do the things on that list, and can see whether they were useful to my business. Once again, my GTD lists can be mined for that information.
When tickets go on sale for that conference again, do I have most of those to-do items still sitting in Someday/Maybe? Is that because, while they sounded like good ideas, they weren’t all that relevant? Or have I written a number of blog posts, or had several articles published, on themes that I started considering off the back of that conference? Did I create that dashboard, and find it useful every day? Did that speaker I was introduced to go on to become a friend or mentor, or someone I’ve exchanged emails with to clarify a topic I’ve been thinking about? By looking back over my lists and completed items, I can start to make decisions about the real value to my business and life of the things I attend, read, and listen to. I’m able to justify the ticket price, time, and travel costs by making that assessment.
I can feel confident that I’m not spending time and money just to feel as if I’m moving forward, yet gaining nothing tangible to show for it.

A final thought on value

As entrepreneurs, we have to make sure we are spending our time and money on things that will give us the best return.
All that said, it is important to make time in our schedules for those things that we just enjoy, and in particular those things that do motivate and inspire us. I don’t think that every book you read or event you attend needs to result in a to-do list of actionable items.
What we need as business owners, and as people, is balance. We need to be able to see that the things we are doing are moving our businesses forward, while also making time to be inspired and refreshed to get that actionable work done.

Have any favorite hacks for getting maximum value from conferences, workshops, and books?
Tell us in the comments!

“Why don’t we just use this plugin?” That’s a question I started hearing a lot in the heady days of the 2000s, when open-source CMSes were becoming really popular. We asked it optimistically, full of hope about the myriad solutions only a download away. As the years passed, we gained trustworthy libraries and powerful communities, but the graveyard of crufty code and abandoned services grew deep.
Many solutions were easy to install, but difficult to debug. Some providers were eager to sell, but loath to support. Years later, we’re still asking that same question—only now we’re less optimistic and even more dependent, and I’m scared to engage with anyone smart enough to build something I can’t. The emerging challenge for today’s dev shop is knowing how to take control of third-party relationships—and when to avoid them. I’ll show you my approach, which is to ask a different set of questions entirely.
A web of third parties

I should start with a broad definition of what it is to be third party:

If it’s a person and I don’t compensate them for the bulk of their workload, they’re third party.
If it’s a company or service and I don’t control it, it’s third party.
If it’s code and my team doesn’t grasp every line of it, it’s third party.

The third-party landscape is rapidly expanding.
GitHub has grown to almost 7 million users, and the WordPress plugin repo is approaching 1 billion downloads. Many of these solutions are easy for clients and competitors to implement; meanwhile, I’m still in the lab debugging my custom code. The idea of selling original work seems oddly old-fashioned. Yet with so many third-party options to choose from, there are more chances than ever to veer off-course. What could go wrong?
At a meeting a couple of years ago, I argued against using an external service to power a search widget on a client project. “We should do things ourselves,” I said. Not long after this, on the very same project, I argued in favor of using a third party to consolidate RSS feeds into a single document. “Why do all this work ourselves,” I said, “when this problem has already been solved?” My inconsistency was obvious to everyone. Being dogmatic about not using a third party is no better than flippantly jumping in with one, and I had managed to do both at once! But in one case, I believed the third party was worth the risk. In the other, it wasn’t.
I just didn’t know how to communicate those thoughts to my team. I needed, in the parlance of our times, a decision-making framework. To that end, I’ve been maintaining a collection of points to think through at various stages of engagement with third parties. I’ll tour through these ideas using the search widget and the RSS digest as examples.
The difference between a request and a goal

This point often reveals false assumptions about what a client or stakeholder wants. In the case of the search widget, we began researching a service that our client specifically requested. Fitted with Ajax navigation, full-text searching, and automated crawls to index content, it seemed like a lot to live up to.
But when we asked our clients what exactly they were trying to do, we were surprised: they were entirely taken by the typeahead functionality; the other features were of very little perceived value. In the case of the RSS “smusher,” we already had an in-house tool that took an array of feed URLs and looped through them in order, outputting x posts per feed in some bespoke format. Were they too good for our beloved multi-feed widget? But actually, the client had a distinctly different and worthwhile vision: they wanted x results from their array of sites in total, and they wanted them ordered by publication date, not grouped by site. It might seem like an obvious first step, but I have seen projects set off in the wrong direction because the end goal is unknown. In both our examples, we’re now clear about that and we’re ready to evaluate solutions.

To dev or to download

Before deciding to use a third party, I find that I first need to examine my own organization, often in four particular ways: strengths, weaknesses, betterment, and mission.
Strengths and weaknesses

The search task aligned well with our strengths because we had good front-end developers and were skilled at extending our CMS. So when asked to make a typeahead search, we felt comfortable betting on ourselves. Had we done it before?
Not exactly, but we could think through it. At that same time, back-end infrastructure was a weakness for our team. We happened to have a lot of turnover among our sysadmins, and at times it felt like we weren’t equipped to hire that sort of talent. As I was thinking through how we might build a feed-smusher of our own, I felt like I was tempting a weak underbelly.
Maybe we’d have to set up a cron job to poll the desired URLs, grab feed content, and store that on our servers. Not rocket science, but cron tasks in particular were an albatross for us.
Betterment of the team

When we set out to achieve a goal for a client, it’s more than us doing work: it’s an opportunity for our team to better themselves by learning new skills. The best opportunities for this are the ones that present challenging but attainable tasks, which create incremental rewards.
Some researchers cite this effect as a factor in gaming addiction. I’ve felt this myself when learning new things on a project, and those are some of my favorite work moments ever. Teams appreciate this and there is an organizational cost in missing a chance to pay them to learn. The typeahead search project looked like it could be a perfect opportunity to boost our skill level.
Organizational mission

If a new project aligns well with our mission, we’re going to resell it many times. It’s likely that we’ll want our in-house dev team to iterate on it, tailoring it to our needs. Indeed, we’ll have the budget to do so if we’re selling it a lot.
No one had asked us for a feed-smusher before, so it didn’t seem reasonable to dedicate an R&D budget to it. In contrast, several other clients were interested in more powerful site search, so it looked like it would be time well spent. We’ve now clarified our end goals and we’ve looked at how these projects align with our team. Based on that, we’re doing the search widget ourselves, and we’re outsourcing the feed-smusher. Now let’s look more closely at what happens next for both cases.

Evaluating the unknown

The frustrating thing about working with third parties is that the most important decisions take place when we have the least information. But there are some things we can determine before committing.
Familiarity, vitality, extensibility, branding, and Service Level Agreements (SLAs) are all observable from afar.

Familiarity: is there a provider we already work with?
Although we’re going to increase the number of third-party dependencies, we’ll try to avoid increasing the number of third-party relationships. Working with a known vendor has several potential benefits: they may give us volume pricing. Markup and style are likely to be consistent between solutions. And we just know them better than we’d know a new service.
Vitality: will this service stick around?

The worst thing we could do is get behind a service, only to have it shut down next month. A service with high vitality will likely (and rightfully) brag about enterprise clients by name. If it’s open source, it will have a passionate community of contributors. On the other hand, it could be advertising a shutdown. More often, it’s somewhere in the middle. Noting how often the service is updated is a good starting point in determining vitality.
Extensibility: can this service adapt as our needs change?

Not only do we have to evaluate the core service, we have to see how extensible it is by digging into its API. If a service is extensible, it’s more likely to fit for the long haul. APIs can also present new opportunities. For example, imagine selecting an email-marketing provider with an API that exposes campaign data.
This might allow us to build a dashboard for campaign performance in our CMS—a unique value-add for our clients, and a chance to keep our in-house developers invested and excited about the service.

Branding: is theirs strong, or can you use your own?

White-labeling is the practice of reselling a service with your branding instead of that of the original provider. For some companies, this might make good sense for marketing. I tend to dislike white-labeling. Our clients trust us to make choices, and we should be proud to display what those choices are. Either way, you want to ensure you’re comfortable with the brand you’ll be using.
SLAs: what are you getting, beyond uptime?

For client-side products, browser support is a factor: every external dependency represents another layer that could abandon older browsers before we’re ready. There’s also accessibility: does this new third party support users with accessibility needs to the degree that we require? Perhaps most important of all is support. Can we purchase a priority support plan that offers fast and in-depth help? In the case of our feed-smusher service, there was no solution that ran the table.
The most popular solution actually had a shutdown notice! There were a couple of smaller providers available, but we hadn’t worked with either before. Browser support and accessibility were moot since we’d be parsing the data and displaying it ourselves. The uptime concern was also diminished because we’d be sure to cache the results locally.
Anyway, with viable candidates in hand, we can move on to more productive concerns than dithering between two similar solutions.

Relationship maintenance

If someone else is going to do the heavy lifting, I want to assume as much of the remaining burden as possible. Piloting, data collection, documentation, and in-house support are all valuable opportunities to buttress this new relationship. As exciting as this new relationship is, we don’t want to go dashing out of the gates just yet. Instead, we’ll target a few clients for piloting and quarantine the service before unleashing it any further. Cull suggestions from team members to determine good candidates for piloting, garnering a mix of edge cases and the norm. If the third party happens to collect data of any kind, we should also have an automated way to import a copy of it—not just as a backup, but also as a cached version we can serve to minimize latency.
If we are serving a popular dependency from a CDN, we want to fall back to a local version if that call should fail. If our team doesn’t have a well-traveled directory of provider relationships, the backstory can get lost. Let a few months pass, throw in some personnel turnover, and we might forget why we even use a service, or why we opted for a particular package.
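To make the CDN fallback idea concrete, here is a minimal sketch of the decision logic. The function name and file path are illustrative assumptions, not anything prescribed by the article; it mirrors the classic pattern of checking whether a CDN script registered its global before serving a local copy.

```javascript
// Sketch of the classic CDN-fallback check: if the CDN script loaded,
// the library registered a global; if not, serve our local copy.
// The function name and local path are illustrative assumptions.
function resolveDependency(globalRef, localPath) {
  if (globalRef !== undefined && globalRef !== null) {
    return { source: "cdn", ref: globalRef };
  }
  // CDN call failed or was blocked: fall back to the local version.
  return { source: "local", ref: localPath };
}

// In a browser this might be driven by e.g. window.jQuery:
// resolveDependency(window.jQuery, "/js/vendor/jquery.min.js");
console.log(resolveDependency(undefined, "/js/vendor/jquery.min.js").source); // prints "local"
```

The same shape works for any dependency that exposes a detectable global or a test function after loading.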
Everyone on our team should know where and how to learn about our third-party relationships. We don’t need every team member to be an expert on the service, yet we don’t want to wait for a third-party support staff to respond to simple questions.
Therefore, we should elect an in-house subject-matter expert. It doesn’t have to be a developer. We just need somebody tasked with monitoring the service at regular intervals for API changes, shutdown notices, or new features.
They should be able to train new employees and route more complex support requests to the third party. In our RSS feed example, we knew we’d read their output into our database. We documented this relationship in our team’s most active bulletin, our CRM software.
And we made managing external dependencies a primary part of one team member’s job.

DIY: a third party waiting to happen?

Stop me if you’ve heard this one before: a prideful developer assures the team that they can do something themselves. It’s a complex project. They make something, and the company comes to rely on it. Time goes by and the in-house product is doing fine, though there is a maintenance burden.
Eventually, the developer leaves the company. Their old product needs maintenance, no one knows what to do, and since it’s totally custom, there is no such thing as a community for it. Once you decide to build something in-house, how can you prevent that work from devolving into a resented, alien dependency? Consider pair-programming.
What better way to ensure that multiple people understand a product than to have multiple people build it? Try “job-switch Tuesdays”: when feasible, we have developers switch roles for an entire day.
Literally, in our ticketing system, it’s as though one person is another. It’s a way to force cross-training without doubling the hours needed for a task. Hold code reviews before new code is pushed. This might feel slightly intrusive at first, but that passes. If it’s not readable, it’s not deployable. If you have project managers with a technical bent, empower them to ask questions about the code, too. Bring moldy code into the light by documenting it with phpDoc, JSDoc, or similar.
Beware the big. Create hourly estimates in Fibonacci increments. As a project gets bigger, so does its level of uncertainty. The Fibonacci steps are biased against under-budgeting, and also provide a cue to opt out of projects that are too difficult to estimate. In that case, it’s likely better to toe-in with a third party instead of blazing into the unknown by yourself. All of these considerations apply to our earlier example, the typeahead search widget.
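Circling back to “beware the big”: the Fibonacci-increment estimating described there can be sketched as a small helper that rounds an hourly estimate up to the next Fibonacci number. This is our own illustration of the idea, not a prescribed tool; the function name is an assumption.

```javascript
// Round an hourly estimate up to the next Fibonacci number
// (1, 2, 3, 5, 8, 13, 21, ...), so bigger tasks carry
// proportionally bigger uncertainty padding.
function fibonacciEstimate(hours) {
  let a = 1, b = 2;
  while (b < hours) {
    [a, b] = [b, a + b]; // advance along the Fibonacci sequence
  }
  return hours <= a ? a : b;
}

console.log(fibonacciEstimate(4)); // prints 5
console.log(fibonacciEstimate(9)); // prints 13
```

The gaps between steps widen as estimates grow, which is exactly the bias against under-budgeting the text describes: a task you would call “12 hours” becomes 13, but a task you would call “15 hours” becomes 21.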
Most germane is the provision to “beware the big.” When I say “big,” I mean big relative to what usually works for a given team. In this case, it was a deliverable that felt very familiar in size and scope: we were being asked to extend an open-source CMS. If instead we had been asked to make a CMS, alarms would have gone off.

Look before you leap, and after you land

It’s not that third parties are bad per se.
It’s just that the modern web team strikes me as a strange place: not only do we stand on the shoulders of giants, we do so without getting to know them first—and we hoist our organizations and clients up there, too. Granted, there are many things you shouldn’t do yourself, and it’s possible to hurt your company by trying to do them—NIH is a problem, not a goal. But when teams err too far in the other direction, developers become disenfranchised, components start to look like spare parts, and clients pay for solutions that aren’t quite right. Using a third party versus staying in-house is a big decision, and we need to think hard before we make it. Use my line of questions, or come up with one that fits your team better. After all, you’re your own best dependency.
We all want our websites to be fast. We optimize images, create CSS sprites, use CDNs, cache aggressively, and gzip and minimize static content. We use every trick in the book. But we can still do more. If we want faster outcomes, we have to think differently. What if, instead of leaving our users to stare at a spinning wheel, waiting for content to be delivered, we could predict where they wanted to go next?
What if we could have that content ready for them before they even ask for it? We tend to see the web as a reactive model, where every action causes a reaction. Users click, then we take them to a new page. They click again, and we open another page. But we can do better. We can be proactive with prebrowsing.
The three big techniques

Steve Souders coined the term prebrowsing (from predictive browsing) in one of his articles late last year. Prebrowsing is all about anticipating where users want to go and preparing the content ahead of time. It’s a big step toward a faster and less visible internet. Browsers can analyze patterns to predict where users are going to go next, and start DNS resolution and TCP handshakes as soon as users hover over links. But to get the most out of these improvements, we can enable prebrowsing on our web pages, with three techniques at our disposal:

DNS prefetching
Resource prefetching
Prerendering

Now let’s dive into each of these separately.

DNS prefetching

Whenever we know our users are likely to request a resource from a different domain than our site, we can use DNS prefetching to warm the machinery for opening the new URL.
The browser can pre-resolve the DNS for the new domain ahead of time, saving several milliseconds when the user actually requests it. We are anticipating, and preparing for an action. Modern browsers are very good at parsing our pages, looking ahead to pre-resolve all necessary domains ahead of time. Chrome goes as far as keeping an internal list with all related domains every time a user visits a site, pre-resolving them when the user returns (you can see this list by navigating to chrome://dns/ in your Chrome browser). However, sometimes access to new URLs may be hidden behind redirects or embedded in JavaScript, and that’s our opportunity to help the browser.
Let’s say we are downloading a set of resources from the domain cdn.example.com using a JavaScript call after a user clicks a button. Normally, the browser would have to resolve the DNS at the time of the click, but we can speed up the process by including a dns-prefetch directive in the head section of our page:

<link rel="dns-prefetch" href="//cdn.example.com">

Doing this informs the browser of the existence of the new domain, and it will combine this hint with its own pre-resolution algorithm to start a DNS resolution as soon as possible. The entire process will be faster for the user, since we are shaving the time for DNS resolution off the operation.
(Note that browsers do not guarantee that DNS resolution will occur ahead of time; they simply use our hint as a signal for their own internal pre-resolution algorithm.) But exactly how much faster will pre-resolving the DNS make things? In your Chrome browser, open chrome://histograms/DNS and search for DNS.PrefetchResolution.
You’ll see a histogram showing your personal distribution of latencies for DNS prefetch requests. On my computer, for 335 samples, the average time is 88 milliseconds, with a median of approximately 60 milliseconds. Shaving 88 milliseconds off every request our website makes to an external domain? That’s something to celebrate. But what happens if the user never clicks the button to access the cdn.example.com domain?
Aren’t we pre-resolving a domain in vain? We are, but luckily for us, DNS prefetching is a very low-cost operation; the browser will need to send only a few hundred bytes over the network, so the risk incurred by a preemptive DNS lookup is very low. That being said, don’t go overboard when using this feature; prefetch only domains that you are confident the user will access, and let the browser handle the rest. Look for situations that might be good candidates to introduce DNS prefetching on your site:

Resources on different domains hidden behind 301 redirects
Resources accessed from JavaScript code
Resources for analytics and social sharing (which usually come from different domains)

DNS prefetching is currently supported on IE11, Chrome, Chrome Mobile, Safari, Firefox, and Firefox Mobile, which makes this feature widespread among current browsers. Browsers that don’t currently support DNS prefetching will simply ignore the hint, and DNS resolution will happen in a regular fashion.

Resource prefetching

We can go a little bit further and predict that our users will open a specific page on our own site. If we know some of the critical resources used by this page, we can instruct the browser to prefetch them ahead of time (the script URL here is illustrative):

<link rel="prefetch" href="//example.com/script.js">

The browser will use this instruction to prefetch the indicated resources and store them in the local cache.
This way, as soon as the resources are actually needed, the browser will have them ready to serve. Unlike DNS prefetching, resource prefetching is a more expensive operation; be mindful of how and when to use it. Prefetching resources can speed up our websites in ways we would never get by merely prefetching new domains—but if we abuse it, our users will pay for the unused overhead. Let’s take a look at the average response size of some of the most popular resources on a web page, courtesy of the HTTP Archive: On average, prefetching a script file (like we are doing on the example above) will cause 16kB to be transmitted over the network (without including the size of the request itself).
This means that we will save 16kB of downloading time from the process, plus server response time, which is amazing—provided it’s later accessed by the user. If the user never accesses the file, we actually made the entire workflow slower by introducing an unnecessary delay.
If you decide to use this technique, prefetch only the most important resources, and make sure they are cacheable by the browser. Images, CSS, JavaScript, and font files are usually good candidates for prefetching, but HTML responses are not, since they aren’t cacheable. Here are some situations where, due to the likelihood of the user visiting a specific page, you can prefetch resources ahead of time:

On a login page, since users are usually redirected to a welcome or dashboard page after logging in
On each page of a linear questionnaire or survey workflow, where users are visiting subsequent pages in a specific order
On a multi-step animation, since you know ahead of time which images are needed in subsequent scenes

Resource prefetching is currently supported on IE11, Chrome, Chrome Mobile, Firefox, and Firefox Mobile. (To determine browser compatibility, you can run a quick browser test on prebrowsing.com.)

Prerendering

What about going even further and asking for an entire page? Let’s say we are absolutely sure that our users are going to visit the about.html page on our site. We can give the browser a hint:

<link rel="prerender" href="//example.com/about.html">

This time the browser will download and render the page in the background ahead of time, and have it ready for the user as soon as they ask for it.
The transition from the current page to the prerendered one would be instantaneous. Needless to say, prerendering is the most risky and costly of these three techniques. Misusing it can cause major bandwidth waste—especially harmful for users on mobile devices. To illustrate this, let’s take a look at this chart, also courtesy of the HTTP Archive: In June of this year, the average number of requests to render a web page was 96, with a total size of 1,808kB. So if your user ends up accessing your prerendered page, then you’ve hit the jackpot: you’ll save the time of downloading almost 2,000kB, plus server response time. But if you’re wrong and your user never accesses the prerendered page, you’ll make them pay a very high cost.
When deciding whether to prerender entire pages ahead of time, consider that Google prerenders the top results on its search page, and Chrome prerenders pages based on the historical navigation patterns of users. Using the same principle, you can detect common usage patterns and prerender target pages accordingly.
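If we detect such a pattern at runtime, one way to act on it is to inject the hint into the document head dynamically rather than hard-coding it. A minimal sketch, with a helper name and URLs of our own invention:

```javascript
// Sketch: inject a prebrowsing hint at runtime once we predict the
// user's likely next page. The helper name and example URLs are
// illustrative assumptions, not part of the original article.
function addPrebrowsingHint(rel, url) {
  const link = document.createElement("link");
  link.rel = rel;   // "dns-prefetch", "prefetch", or "prerender"
  link.href = url;
  document.head.appendChild(link);
  return link;
}

// Usage in the browser, e.g. once the user focuses a login form's
// submit button and we expect a dashboard visit next:
// addPrebrowsingHint("prerender", "/dashboard.html");
```

Because unsupported rel values are simply ignored, the worst case for an older browser is a harmless extra link element.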
You can also use it, just like resource prefetching, on questionnaires or surveys where you know users will complete the workflow in a particular order. At this time, prerendering is only supported on IE11, Chrome, and Chrome Mobile. Neither Firefox nor Safari has added support for this technique yet. (And as with resource prefetching, you can check prebrowsing.com to test whether this technique is supported in your browser.)

A final word

Sites like Google and Bing are using these techniques extensively to make search instant for their users. Now it’s time for us to go back to our own sites and take another look. Can we make our experiences better and faster with prefetching and prerendering?
Browsers are already working behind the scenes, looking for patterns in our sites to make navigation as fast as possible. Prebrowsing builds on that: we can combine the insight we have on our own pages with further analysis of user patterns. By helping browsers do a better job, we speed up and improve the experience for our users.

When I first met Kevin Cornell in the early 2000s, he was employing his illustration talent mainly to draw caricatures of his fellow designers at a small Philadelphia design studio. Even in that rough, dashed-off state, his work floored me.
It was as if Charles Addams and my favorite Mad Magazine illustrators from the 1960s had blended their DNA to spawn the perfect artist. Kevin would deny that label, but artist he is. For there is a vision in his mind, a way of seeing the world, that is unlike anyone else’s—and he has the gift to make you see it too, and to delight, inspire, and challenge you with what he makes you see. Kevin was part of a small group of young designers and artists who had recently completed college and were beginning to establish careers. Others from that group included Rob Weychert, Matt Sutter, and Jason Santa Maria. They would all go on to do fine things in our industry. It was Jason who brought Kevin on as house illustrator during the A List Apart 4.0 brand overhaul in 2005, and Kevin has worked his strange magic for us ever since.
If you’re an ALA reader, you know how he translates the abstract web design concepts of our articles into concrete, witty, and frequently absurd situations. Above all, he is a storyteller—if pretentious designers and marketers haven’t sucked all the meaning out of that word. For nearly 10 years, Kevin has taken our well-vetted, practical, frequently technical web design and development pieces, and elevated them to the status of classic New Yorker articles. Tomorrow he publishes his last new illustrations with us. There will never be another like him.
And for whatever good it does him, Kevin Cornell has my undying thanks, love, and gratitude.

After 200 issues—yes, two hundred—Kevin Cornell is retiring from his post as A List Apart’s staff illustrator. Tomorrow’s issue will be the last one featuring new illustrations from him.
For years now, we’ve eagerly awaited Kevin’s illustrations each issue, opening his files with all the patience of a kid tearing into a new LEGO set. But after nine years and more than a few lols, it’s time to give Kevin’s beautifully deranged brain a rest.
We’re still figuring out what comes next for ALA, but while we do, we’re sending Kevin off the best way we know how: by sharing a few of our favorite illustrations. Read on for stories from ALA staff, past and present—and join us in thanking Kevin for his talent, his commitment, and his uncanny ability to depict seemingly any concept using animals, madmen, and circus figures. — Of all the things I enjoyed about working on A List Apart, I loved anticipating the reveal: seeing Kevin’s illos for each piece, just before the issue went live. Every illustration was always a surprise—even to the staff.
My favorite, hands-down, was his artwork for “The Discipline of Content Strategy,” by Kristina Halvorson. In 2008, content was web design’s “elephant in the room” and Kevin’s visual metaphor nailed it. In a drawing, he encapsulated thoughts and feelings many had within the industry but were unable to articulate.
That’s the mark of a master.

—Krista Stevens, Editor-in-chief, 2006–2012

In the fall of 2011, I submitted my first article to A List Apart. I was terrified: I didn’t know anyone on staff. The authors’ list read like a who’s who of web design. The archives were intimidating. But I had ideas, dammit.
I told just one friend what I’d done. His eyes lit up. “You’d get a Kevin Cornell!” he said.
I might get a Kevin Cornell?! I hadn’t even thought about that yet. Like Krista, I fell in love with Kevin’s illustration for “The Discipline of Content Strategy”—an illustration that meant the world to me as I helped my clients see their own content elephants. The idea of having a Cornell of my own was exciting, but terrifying.
Could I possibly write something worthy of his illustration? Months later, there it was on the screen: little modular sandcastles illustrating my article on modular content. I was floored.
Now, after two years as ALA’s editor-in-chief, I’ve worked with Kevin through dozens of issues. But you know what? I’m just as floored as ever. Thank you, Kevin, you brilliant, bizarre, wonderful friend.

—Sara Wachter-Boettcher, Editor-in-chief

It’s impossible for me to choose a favorite of Kevin’s body of work for ALA, because my favorite Cornell illustration is the witty, adaptable, humane language of characters and symbols underlying his years of work. If I had to pick a single illustration to represent the evolution of his visual language, I think it would be the hat-wearing nested egg with the winning smile that opened Andy Hagen’s “High Accessibility is Effective Search Engine Optimization.” An important article, but not, perhaps, the juiciest title A List Apart has ever run; and yet there’s that little egg, grinning in his slightly dopey way.
If my memory doesn’t fail me, this is the second appearance of the nested Cornell egg—we saw the first a few issues before in Issue 201, where it represented the nested components of an HTML page. When it shows up here, in Issue 207, we realize that the egg wasn’t a cute one-off, but the first syllable of a visual language that we’ll see again and again through the years. And what a language!
Who else could make semantic markup seem not just clever, but shyly adorable? A wander through the ALA archives provides a view of Kevin’s changing style, but something visible only backstage was his startlingly quick progression from reading an article to sketching initial ideas in conversation with then-creative director Jason Santa Maria to turning out a lovely miniature—and each illustration never failed to make me appreciate the article it introduced in a slightly different way. When I was at ALA, Kevin’s unerring eye for the important detail as a reader astonished me almost as much as his ability to give that (often highly technical, sometimes very dry) idea a playful and memorable visual incarnation. From the very first time his illustrations hit the A List Apart servers he’s shared an extraordinary gift with its readers, and as a reader, writer, and editor, I will always count myself in his debt.

—Erin Kissane, Editor-in-chief, contributing editor, 1999–2009

So much of what makes Kevin’s illustrations work is the gestures. The way the figure sits a bit slouched, but still perched on gentle tippy toes, determinedly occupied pecking away on his phone. With just a few lines, Kevin captures a mood and moment anyone can feel.
—Jason Santa Maria, Former creative director

I’ve had the pleasure of working with Kevin on the illustrations for each issue of A List Apart since we launched the latest site redesign in early 2013. By working, I mean replying to his email with something along the lines of “Amazing!” when he sent over the illustrations every couple of weeks.
Prior to launching the new design, I had to go through the backlog of Kevin’s work for ALA and do the production work needed for the new layout. This bird’s-eye view gave me an appreciation of the ongoing metaphorical world he had created for the magazine—the birds, elephants, weebles, mad scientists, ACME products, and other bits of amusing weirdness that breathed life into the (admittedly, sometimes) dry topics covered. If I had to pick a favorite, it would probably be the illustration that accompanied the unveiling of the redesign, A List Apart 5.0. The shoe-shine man carefully working on his own shoes was the perfect metaphor for both the idea of design as craft and the back-stage nature of the profession—working to make others shine, so to speak. It was a simple and humble concept, and I thought it created the perfect tone for the launch.

—Mike Pick, Creative director

So I can’t pick one favorite illustration that Kevin’s done. I just can’t.
I could prattle on about this, that, or that other one, and tell you everything I love about each of ’em. I mean, hell: I still have a print of the illustration he did for my very first ALA article. (The illustration is, of course, far stronger than the essay that follows it.) But his illustration for James Christie’s excellent “Sustainable Web Design” is a perfect example of everything I love about Kevin’s ALA work: how he conveys emotion with a few deceptively simple lines; the humor he finds in contrast; the occasional chicken. Like most of Kevin’s illustrations, I find something new to enjoy each time I reread the article it accompanies.
It’s been an honor working alongside your art, Kevin—and, on a few lucky occasions, having my words appear below it. Thanks, Kevin.
—Ethan Marcotte, Technical editor

Kevin’s illustration for Cameron Koczon’s “Orbital Content” is one of the best examples I can think of to show off his considerable talent. Those balloons are just perfect: vaguely reminiscent of cloud computing, but tethered and within arm’s reach, and evoking the fun and chaos of carnivals and county fairs. No other illustrator I’ve ever worked with is as good at translating abstract concepts into compact, visual stories.
A List Apart won’t be the same without him.

—Mandy Brown, Former contributing editor

Kevin has always had what seems like a preternatural ability to take an abstract technical concept and turn it into a clear and accessible illustration. For me, my favorite pieces are the ones he did for the third anniversary of the original “Responsive Web Design” article (the web’s first “responsive” illustration? Try squishing your browser here to see it in action. —Ed).

—Tim Murtaugh, Technical director

I think it may be impossible for me to pick just one illustration of Kevin’s that I really like. Much like trying to pick your one favorite album or that absolutely perfect movie, picking a true favorite is simply folly. You can whittle down the choices, but it’s guaranteed that the list will be sadly incomplete and longer (much longer) than one. If held at gunpoint, however ridiculous that sounds, and asked which of Kevin’s illustrations is my favorite, close to the top of the list would definitely be “12 Lessons for Those Afraid of CSS Standards.” It’s just so subtle, and yet so pointed.
What I personally love the most about Kevin’s work is the overall impact it can have on people seeing it for the first time. It has become commonplace within our ranks to hear the phrase, “This is my new favorite Kevin Cornell illustration” with the publishing of each issue. And rightly so. His wonderfully simple style (which is also deceptively clever and just so smart) paired with the fluidity that comes through in his brush work is magical.
Case in point for me would be his piece for “The Problem with Passwords,” which just speaks volumes about the difficulty and utter ridiculousness of selecting a password and security question. We, as a team, have truly been spoiled by having him in our ranks for as long as we have. Thank you, Kevin.
—Erin Lynch, Production manager

The elephant was my first glimpse at Kevin’s elegantly whimsical visual language. I first spotted it, a patient behemoth being studied by nonplussed little figures, atop Kristina Halvorson’s “The Discipline of Content Strategy,” which made no mention of elephants at all. Yet the elephant added to my understanding: content owners from different departments focus on what’s nearest to them.
The content strategist steps back to see the entire thing. When Rachel Lovinger wrote about “Content Modelling,” the elephant made a reappearance as a yet-to-be-assembled, stylized elephant doll. The unflappable elephant has also been the mascot of product development at the hands of a team trying to construct it from user research, strutted its stuff as curated content, enjoyed the diplomatic guidance of a ringmaster, and been impersonated by a snake to tell us that busting silos is helped by a better understanding of others’ discourse conventions. The delight in discovering Kevin’s visual rhetoric doesn’t end there. With doghouses, birdhouses, and fishbowls, Kevin speaks of environments for users and workers. With owls he represents the mobile experience and smartphones.
With a team arranging themselves to fit into a group photo, he makes the concept of responsive design easier to grasp. Not only has Kevin trained his hand and eye to produce the gestures, textures, and compositions that are uniquely his, but he has trained his mind to speak in a distinctive visual language—and he can do it on deadline. That is some serious mastery of the art.

—Rose Weisburd, Columns editor

(Source: A List Apart: The Full Feed)
Not too long ago, I had a few rough days in support of a client project. The client had a big content release, complete with a media embargo and the like. I woke up on the day of the launch, and things were bad. I was staring straight into a wall of red.
Thanks to the intrinsic complexity of software engineering, these situations happen—I’ve been through them before, and I’ll certainly be through them again. While the particulars change, there are two guiding principles I rely on when I find myself looking up that hopelessly tall cliff of red. You can’t be at the top of your game while stressed and nervous about the emergency, so unless there’s an obvious, quick-to-deploy resolution, you need to give yourself some cover to work. What that means will be unique to every situation, but as strange as it may sound, don’t dive into work on the be-all and end-all solution right off the bat. Take a few minutes to find a way to provide a bit of breathing room for you to build and implement the long-term solution in a stable, future-friendly way. Ideally, the cover you’re providing shouldn’t affect the users too much.
Consider beefing up your caching policies to lighten the load on your servers as much as possible. If there’s any functionality that is particularly taxing on your hardware and isn’t mission critical, disable it temporarily. Even if keeping the servers alive means pressing a button every 108 minutes like you’re Desmond from Lost, do it.
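At the application layer, that kind of temporary cover might look like the sketch below. This is a hypothetical helper, not something from the article, and the specific header values are illustrative: the idea is simply to flip a switch during an incident so that caches and CDNs hold onto responses far longer than usual while you work the real problem.

```python
# Hypothetical "emergency mode" cache helper (illustrative values only).
# Normal responses revalidate quickly; during an incident we let shared
# caches serve content for much longer, and even serve stale copies if
# the origin is erroring, to shed load off struggling servers.

def cache_headers(emergency: bool) -> dict:
    if emergency:
        return {
            "Cache-Control": (
                "public, max-age=3600, "
                "stale-while-revalidate=600, stale-if-error=86400"
            )
        }
    # Business as usual: short-lived caching with prompt revalidation.
    return {"Cache-Control": "public, max-age=60, must-revalidate"}
```

When the fire is out, the switch flips back and caches naturally drain to the normal, fresher policy; no user-visible feature has to be removed to buy that breathing room.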
After you’ve got some cover, work the problem slowly and deliberately. Think solutions through two or three times to be sure they’re the right course of action. With the pressure eased, you don’t have to rush through a cycle of building, deploying, and testing potential fixes. Rushing leads to oversight of important details, and typically, that cycle ends the first time a change fixes (or seemingly fixes) the issue, which can lead to sloppy code and weak foundations for the future. If the environment doesn’t allow you to ease the pressure enough to work slowly, go ahead and cycle your way to a hacky solution.
But don’t forget to come back and work the root issue, or else temporary fixes will pile up and eat away at your system’s architecture like a swarm of termites. Emergencies often require more thought and planning than everyday development, so be sure to give yourself the necessary time. Reactions alone may patch an issue, but thoughtfulness can solve it.
I want you to think about what you’re doing right now. I mean really think about it. As your eyes move across these lines and funnel information to your brain, you’re taking part in a conversation I started with you.
The conveyance of that conversation is the type you’re reading on this page, but you’re also filtering it through your experiences and past conversations. You’re putting these words into context. And whether you’re reading this book on paper, on a device, or at your desk, your environment shapes your experience too. Someone else reading these words may go through the same motions, but their interpretation is inevitably different from yours.
This is the most interesting thing about typography: it’s a chain reaction of time and place with you as the catalyst. The intention of a text depends on its presentation, but it needs you to give it meaning through reading. Type and typography wouldn’t exist without our need to express and record information. Sure, we have other ways to do those things, like speech or imagery, but type is efficient, flexible, portable, and translatable.
This is what makes typography not only an art of communication, but one of nuance and craft, because like all communication, its value falls somewhere on a spectrum between success and failure. The act of reading is beautifully complex, and yet, once we know how, it’s a kind of muscle memory. We rarely think about it.
But because reading is so intrinsic to every other thing about typography, it’s the best place for us to begin. We’ve all made something we wanted someone else to read, but have you ever thought about that person’s reading experience? Just as you’re my audience for this book, I want you to look at your audience too: your readers. One of design’s functions is to entice and delight. We need to welcome readers and convince them to sit with us. But what circumstances affect reading?

Readability

Just because something is legible doesn’t mean it’s readable.
Legibility means that text can be interpreted, but that’s like saying tree bark is edible. We’re aiming higher. Readability combines the emotional impact of a design (or lack thereof) with the amount of effort it presumably takes to read. You’ve heard of TL;DR (too long; didn’t read)? Length isn’t the only deterrent to reading; poor typography is one too. To paraphrase Stephen Coles, the term readability doesn’t ask simply, “Can you read it?” but “Do you want to read it?” Each decision you make could potentially hamper a reader’s understanding, causing them to bail and update their Facebook status instead.
Don’t let your design deter your readers or stand in the way of what they want to do: read. Once we bring readers in, what else can we do to keep their attention and help them understand our writing? Let’s take a brief look at what the reading experience is like and how design influences it.
The act of reading

When I first started designing websites, I assumed everyone read my work the same way I did. I spent countless hours crafting the right layout and type arrangements. I saw the work as a collection of the typographic considerations I made: the lovingly set headlines, the ample whitespace, the typographic rhythm (fig 1.1). I assumed everyone would see that too.
Fig 1.1: A humble bit of text. But what actually happens when someone reads it? It’s appealing to think everyone sees what we see, but reading is a much more nuanced experience.
It’s shaped by our surroundings (am I in a loud coffee shop or otherwise distracted?), our availability (am I busy with something else?), our needs (am I skimming for something specific?), and more. Reading is not only informed by what’s going on with us at that moment, but also governed by how our eyes and brains work to process information. What you see and what you’re experiencing as you read these words is quite different. As our eyes move across the text, our minds gobble up the type’s texture—the sum of the positive and negative spaces inside and around letters and words.
We don’t linger on those spaces and details; instead, our brains do the heavy lifting of parsing the text and assembling a mental picture of what we’re reading. Our eyes see the type and our brains see Don Quixote chasing a windmill. Or, at least, that’s what we hope. This is the ideal scenario, but it depends on our design choices.
Have you ever been completely absorbed in a book and lost in the passing pages? Good writing can do that, and good typography can grease the wheels. Without getting too scientific, let’s look at the physical process of reading.

Saccades and fixations

Reading isn’t linear.
Instead, our eyes perform a series of back and forth movements called saccades, or lightning-fast hops across a line of text (fig 1.2). Sometimes it’s a big hop; sometimes it’s a small hop.
Saccades help our eyes register a lot of information in a short span, and they happen many times over the course of a second. A saccade’s length depends on our proficiency as readers and our familiarity with the text’s topic. If I’m a scientist and reading, uh, science stuff, I may read it more quickly than a non-scientist, because I’m familiar with all those science-y words. Full disclosure: I’m not really a scientist.
I hope you couldn’t tell. Fig 1.2: Saccades are the leaps that happen in a split second as our eyes move across a line of text.
Between saccades, our eyes stop for a fraction of a second in what’s called a fixation (fig 1.3). During this brief pause we see a couple of characters clearly, and the rest of the text blurs out like ripples in a pond. Our brains assemble these fixations and decode the information at lightning speed. This all happens on reflex. Pretty neat, huh? Fig 1.3: Fixations are the brief moments of pause between saccades. The shapes of letters and the shapes they make when combined into words and sentences can significantly affect our ability to decipher text.
If we look at an average line of text and cover the top halves of the letters, it becomes very difficult to read. If we do the opposite and cover the bottom halves, we can still read the text without much effort (fig 1.4). Fig 1.4: Though the letters’ lower halves are covered, the text is still mostly legible, because much of the critical visual information is in the tops of letters. This is because letters generally carry more of their identifying features in their top halves.
The sum of each word’s letterforms creates the word shapes we recognize when reading. Once we start to subconsciously recognize letters and common words, we read faster. We become more proficient at reading under similar conditions, an idea best encapsulated by type designer Zuzana Licko: “Readers read best what they read most.” It’s not a hard and fast rule, but close. The more foreign the letterforms and information are to us, the more slowly we discern them.
If we traveled back in time to the Middle Ages with a book typeset in a super-awesome sci-fi font, the folks from the past might have difficulty with it. But here in the future, we’re adept at reading that stuff, all whilst flying around on hoverboards. For the same reason, we sometimes have trouble deciphering someone else’s handwriting: their letterforms and idiosyncrasies seem unusual to us. Yet we’re pretty fast at reading our own handwriting (fig 1.5). Fig 1.5: While you’re very familiar with your own handwriting, reading someone else’s (like mine!) can take some time to get used to. There have been many studies on the reading process, with only a bit of consensus. Reading acuity depends on several factors, starting with the task the reader intends to accomplish.
Some studies show that we read in word shapes—picture a chalk outline around an entire word—while others suggest we decode things letter by letter. Most findings agree that ease of reading relies on the visual feel and precision of the text’s setting (how much effort it takes to discern one letterform from another), combined with the reader’s own proficiency. Consider a passage set in all capital letters (fig 1.6).
You can become adept at reading almost anything, but most of us aren’t accustomed to reading lots of text in all caps. Compared to the normal sentence-case text, the all-caps text feels pretty impenetrable.
That’s because the capital letters are blocky and don’t create much contrast between themselves and the whitespace around them. The resulting word shapes are basically plain rectangles (fig 1.7). Fig 1.6: Running text in all caps can be hard to read quickly when we’re used to sentence case. Fig 1.7: Our ability to recognize words is affected by the shapes they form.
All-caps text forms blocky shapes with little distinction, while mixed-case text forms irregular shapes that help us better identify each word. Realizing that the choices we make in typefaces and typesetting have such an impact on the reader was eye-opening for me. Small things like the size and spacing of type can add up to great advantages for readers. When they don’t notice those choices, we’ve done our job. We’ve gotten out of their way and helped them get closer to the information.
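The word-shape idea can be made concrete with a toy sketch (invented for illustration, not from the book): classify each letter as tall (capitals and ascenders), x-height, or descending, and compare the profiles of a mixed-case word and its all-caps counterpart.

```python
# Toy sketch of "word shape": ascenders (b, d, f, h, k, l, t) rise above the
# x-height, descenders (g, j, p, q, y) drop below it, and capitals are
# uniformly tall. An all-caps word profiles as a plain rectangle, while a
# mixed-case word forms an irregular, more recognizable outline.

ASCENDERS = set("bdfhklt")
DESCENDERS = set("gjpqy")

def word_profile(word: str) -> str:
    profile = []
    for ch in word:
        if ch.isupper() or ch in ASCENDERS:
            profile.append("T")  # tall
        elif ch in DESCENDERS:
            profile.append("d")  # drops below the baseline
        else:
            profile.append("x")  # sits at x-height
    return "".join(profile)

print(word_profile("Reading"))  # irregular outline
print(word_profile("READING"))  # uniform block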
Stacking the deck

Typography on screen differs from print in a few key ways. Readers deal with two reading environments: the physical space (and its lighting) and the device. A reader may spend a sunny day at the park reading on their phone. Or perhaps they’re in a dim room reading subtitles off their TV ten feet away. As designers, we have no control over any of this, and that can be frustrating. As much as I would love to go over to every reader’s computer and fix their contrast and brightness settings, this is the hand we’ve been dealt.
The best solution to unknown unknowns is to make our typography perform as well as it can in all situations, regardless of screen size, connection, or potential lunar eclipse. We’ll look at some methods for making typography as sturdy as possible later in this book. It’s up to us to keep the reading experience unencumbered. At the core of typography is our audience, our readers. As we look at the building blocks of typography, I want you to keep those readers in mind. Reading is something we do every day, but we can easily take it for granted.
Slapping words on a page won’t ensure good communication, just as mashing your hands across a piano won’t make for a pleasant composition. The experience of reading and the effectiveness of our message are determined by both what we say and how we say it. Typography is the primary tool we use as designers and visual communicators to speak.
“Just put it up on a server somewhere.” “Just add a favorite button to the right side of the item.” “Just add [insert complex option here] to the settings screen.” Usage of the word “just” points to a lot of assumptions being made. A few months ago, Brad Frost shared some thoughts on how the word applies to knowledge. “Just” makes me feel like an idiot. “Just” presumes I come from a specific background, studied certain courses in university, am fluent in certain technologies, and have read all the right books, articles, and resources. He points out that learning is never as easy as it is made to seem, and he’s right. But there is a direct correlation between the amount of knowledge you’ve acquired and the danger of the word “just.” The more you know, the bigger the problems you solve, and the bigger the assumptions are that are hiding behind the word.
Take the comment, “Just put it up on a server somewhere.” How many times have we heard that? But taking a side project running locally and deploying it on real servers requires time, money, and hard work. Some tiny piece of software somewhere will probably be the wrong version, and will need to be addressed.
The system built locally probably isn’t built to scale perfectly. “Just” implies that all of the thinking behind a feature or system has been done. Even worse, it implies that all of the decisions that will have to be made in the course of development have already been discovered—and that’s never the case. Things change when something moves from concept to reality. As Dave Wiskus said on a recent episode of Debug, “everything changes when fingers hit glass.” The favorite button may look fine on the right side, visually, but it might be in a really tough spot to touch.
What about when favoriting isn’t the only action to be taken? What happens to the favorite button then? Even once favoriting is built and in testing, it should be put through its paces again. In use, does favoriting provide enough value to warrant its existence? After all, “once that feature’s out there, you’re stuck with it.” When you hear the word “just” being thrown around, dig deep into that statement and find all of the assumptions made within it. Zoom out and think slow.
Your product lives and dies by the decisions discovered between ideation and creation, so don’t just put it up on a server somewhere.
The stream—that great glut of ideas, opinions, updates, and ephemera that pours through us every day—is the dominant way we organize content. It makes sense; the stream’s popularity springs from the days of the early social web, when a huge number of users posted all types of content on unpredictable schedules. The simplest way to show updates to new readers focused on reverse chronology and small, discrete chunks, as sorting by newness called for content quick to both produce and digest. This approach saw wide adoption in blogs, social networks, notification systems, etc., and ever since we’ve flitted from one stream to another like sugar-starved hummingbirds. Problem is, the stream’s emphasis on the new above all else imposes a short lifespan on content.
Like papers piled on your desk, the stream makes it easy to find the last thing you’ve added, while anything older than a day effectively disappears. Solely relying on reverse-chronology turns our websites into graveyards, where things pile up atop each other until they fossilize. We need to start treating our websites as gardens, as places worthy of cultivation and renewal, where new things can bloom from the old.

The stream, in print

The stream’s focus on the now isn’t novel, anyway.
Old-school modes of publishing like newspapers and magazines shared a similar disposability: periodic updates went out to subscribers and were then thrown away. No one was expected to hang onto them for long. Over the centuries with print, however, we came up with a number of ways to preserve and showcase older material. Newspapers put out annual indexes cataloguing everything they print ordered by subject and frequency. Magazines get rebound into larger, more substantial anthologies. Publishers frequently reach into their back catalogue and reprint books with new forewords or even chapters.
These acts serve two purposes: to maintain widespread and cheap access to material that has gone out of print, and to ensure that material is still relevant and useful today. But we haven’t yet developed patterns for slowing down on the web.
In some ways, access is simpler. As long as the servers stay up, content remains a link away from interested readers. But that same ease of access makes the problem of outdated or redundant content more pronounced. Someone looking at an old magazine article also holds the entire issue it was printed with. With an online article, someone can land directly on the piece with little indication of who it’s by, what it’s for, and whether it’s gone out of date. Providing sufficient context for content already out there is a vital factor to consider and design for. You don’t need to be a writer to help fix this.
Solutions can come from many fields, from targeted writing and design tweaks to more overarching changes in content strategy and information architecture. Your own websites are good places to start. Here are some high-level guidelines, ordered by the amount of effort they’ll take. Your site will demand its own unique set of approaches, though, so recombine and reinvent as needed.
Reframe

Emma is a travel photographer. She keeps a blog, and many years ago she wrote a series about visiting Tibet. Back then, she was required to travel with a guided tour. That’s no longer the case, as visitors only need to obtain a permit. The most straightforward thing to do is to look through past content and identify what’s outdated: pieces you’ve written, projects you worked on, things you like. The goal is triage: sorting things into what needs attention and what’s still fine.
Once you’ve done that, find a way to signal their outdated status. Perhaps you have a design template for “archived” content that has a different background color, more strongly emphasizes when it was written, or adds a sentence or two at the top of your content that explains why it’s outdated. If entire groups of content need mothballing, see whether it makes sense to pull them into separate areas. (Over time, you may have to overhaul the way your entire site is organized—a complicated task we’ll address below.) Emma adds a tag to her posts about her guided tour and configures the site’s template to show a small yellow notification at the top telling visitors that her information is from 2008 and may be irrelevant. She also adds a link on each post pointing to a site that explains the new visa process and ways to obtain Tibetan permits. On the flip side, separate the pieces that you’re particularly proud of.
Your “best-of” material is probably getting scattered by the reverse-chronology organization of your website, so list all of them in a prominent place for people visiting for the first time.

Recontextualize

I hope that was easy!
The next step is to look for old content you feel differently about today. When Emma first started traveling, she hated to fly. She hated waiting in line, hated sitting in cramped seats, and especially hated the food.
There are many early blog posts venting about this. Maybe what you wrote needs additional nuance or more details.
Or maybe you’ve changed since then. Explain why—lead readers down the learning path you took. It’s a chance for you to reflect on the delta. Now that she’s busier and has to frequently make back-to-back trips for clients, she finds that planes are the best time for her to edit photos from the last trip, catch up on email, and have some space for reflection. So she writes about how she fills up her flying time now, leaving more time when she’s at her destination to shoot and relax. Or expand on earlier ideas.
What started as a rambling post you began at midnight can turn into a series or an entire side project. Or, if something you wrote provokes a big response online, you could gather those links at the bottom of your piece.
It’s a service to your new readers to collect connected pieces together, so that they don’t have to hunt around to find them all.

Revise and reorganize

Hopefully that takes care of most of your problematic content. But for content so dire you’re embarrassed to even look at it, much less have other people read it, consider more extreme measures: the act of culling, revising, and rewriting. Looking back: maybe you were completely wrong about something, and you would now argue the opposite. Or you’re shocked to find code you wrote one rushed Friday afternoon—well, set aside some time to start from the ground up and do it right.
Emma started her website years ago as a typical reverse-chron blog, but has started to work on a redesign based around the concepts of LOCATIONS and TRIPS. Appearing as separate items in the navigation, they act as different ways for readers to approach and make sense of her work. The locations present an at-a-glance view of where she’s been and how well-traveled she is. The trips (labeled Antarctica: November 2012, Bangkok: Fall 2013, Ghana: early 2014, etc.) retain the advantages of reverse-chronology by giving people updates on what she’s done recently, but these names are more flexible and easier to explain than dates and timestamps on their own. Someone landing directly on a post from a trip two years ago can easily get to the other posts from that trip, but they would be lost if the entries were only timestamped. If the original structure no longer matches the reality of what’s there, it’s also the best case for redesigning and reorganizing your website. Now is the time to consider your content as a whole.
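Emma’s two views over the same content can be sketched as simple data transformations (all names and sample posts below are invented for this sketch): reverse chronology answers “what’s new?”, while grouping by trip gives someone who lands on a single old post a way to find everything else from that journey.

```python
# Toy sketch: the same posts, presented two ways. (Sample data invented.)

posts = [
    {"title": "Penguins at dawn", "trip": "Antarctica: November 2012", "date": "2012-11-08"},
    {"title": "Street food tour", "trip": "Bangkok: Fall 2013", "date": "2013-10-02"},
    {"title": "Floating markets", "trip": "Bangkok: Fall 2013", "date": "2013-10-05"},
]

def reverse_chron(posts):
    # The classic stream: newest first.
    return sorted(posts, key=lambda p: p["date"], reverse=True)

def by_trip(posts):
    # The garden: posts clustered under human-readable trip names,
    # so one post leads a reader to the rest of the same trip.
    trips = {}
    for p in posts:
        trips.setdefault(p["trip"], []).append(p)
    return trips
```

Neither view replaces the other; the navigation simply offers both, and the trip names carry more meaning for a visitor than bare timestamps ever could.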
Think about how you’d explain your website to someone you’re having lunch with. Are you a writer, photographer, artist, musician, cook? What sorts of topics does your site talk about? What do you want people to see first? How do they go deeper on the things they find interesting? This gets rather existential, but it’s important to ask yourself.
Remove

If it’s really, truly foul, you can throw it out. (You officially have permission.) Not everything needs to live online forever, but throwing things out doesn’t have to be your first option when you get embarrassed by the past. Deploying the internet equivalent of space lasers does, I must stress, come with some responsibility.
Other sites can be affected by changes in your links: if you’re consolidating or moving content, it’s important to set up redirects for affected URLs to the new pages. If someone links to a tutorial you wrote, it may be better to archive it and link to more updated information, rather than outright deleting it.

Conclusion

Everything we’ve done so far applies to more than personal websites, of course.
Businesses have to maintain scores of announcements, documentation, and customer support. Much of it changes greatly over time, and many need help looking at things from a user’s perspective. Content strategy has been leading the charge on this, from developing content models and relationships, to communicating with empathy in touchy situations, to working out content standards.
Newspapers and magazines relentlessly publish new pieces and sweep the old away from public view. Are there opportunities to highlight material from their archives? What about content that can always stay interesting? How can selections be best brought together to generate new connections and meaning?
Museums and libraries, as they step into their digital shoes, will have to think about building places online for histories and archives for the long term. Are there new roles and practices that bridge the old world with the networked, digital one? How do they preserve entirely new categories of things for the public? No one has all the answers.
But these are questions that come from leaving the stream and approaching content from the long view. These are problems that the shapers and caretakers of the web are uniquely positioned to think about and solve. As a community, we take pride in being makers and craftsmen. But for years, we’ve neglected the disciplines of stewardship—the invisible and unglamorous work of collecting, restoring, safekeeping, and preservation. Maybe the answer isn’t to post more, to add more and more streams. Let’s return to our existing content and make it more durable and useful. You don’t even have to pick up a shovel.
(Source: A List Apart: The Full Feed.) When I recently read Geoff Dimasi’s excellent article, I thought: this is great—values-based business decisions in an efficient fashion. But I had another thought, too: where, in that equation, is the money? If I’m honest with myself, I’ve always felt that on some level it’s wrong to be profitable. That making money on top of your costs somehow equates to bilking your clients.
I know, awesome trait for a business owner, right? Because here’s the thing: a business can’t last forever skating on the edge of viability. And that’s what not being profitable means.
This is a lesson I had to learn with Bearded the hard way. Several times.
Shall we have a little bit of story time? “Yes, Matt Griffin,” you say, “let’s!” Well OK, then. At Bearded, our philosophy from the beginning was to focus on doing great web work for clients we believed in. The hope was that all the sweat and care we put into those projects and relationships would show, and that profit would naturally follow quality. For four years we worked our tails off on project after project, and as we did so, we lived pretty much hand-to-mouth. On several occasions we were within weeks and a couple of thousand bucks from going out of business. I would wake up in the night in a panic, and start calculating when bills went out and checks would come in, down to the day.
I loved the work and clients, but the other parts of the business were frankly pretty miserable. Then one day, I went to the other partners at Bearded and told them I’d had it. In the immortal words of Lethal Weapon’s Sergeant Murtaugh, I was getting too old for this shit. I told them I could put in one more year, and if we weren’t profitable by the end of it I was out, and we should all go get well-paid jobs somewhere else. That decision lit a fire under us to pay attention to the money side of things, change our process, and effectively do whatever it took to save the best jobs we’ve ever had. By the end of the next quarter, we had three months of overhead in the bank and were on our way to the first profitable year of our business, with a 50 percent growth in revenue over the previous year and raises for everyone. All without compromising our values or changing the kinds of projects we were doing.
This did not happen on its own. It happened because we started designing the money side of our business the way we design everything else we care about. We stopped neglecting our business, and started taking care. “So specifically,” you ask, “what did you do to turn things around? I am interested in these things!” Very good, then, let’s take a look. Now it’s time for a breakdown Besides my arguably weird natural aversion to profit, there are plenty of other motivations not to examine the books.
Perhaps math and numbers are scary to you. Maybe finances just seem really boring (they’re no CSS pseudo-selectors, amiright?). Or maybe it’s that when we don’t pay attention to a thing, it’s easier to pretend that it’s not there. But in most cases, the unknown is far scarier than fact.
When it comes down to it, your business’s finances are made up of two things: money in and money out. Money in is revenue. Money out is overhead. And the difference?
That’s profit (or lack thereof). Let’s take a look at the two major components of that equation. Overhead Overheels First let’s roll up our sleeves and calculate your overhead.
Overhead includes loads of stuff like: Staff salaries Health insurance Rent Utilities Equipment costs Office supplies Snacks, meals, and beverages Service fees (hosting, web services, etc.) In other words: it’s all the money you pay out to do your work. You can assess these items over whatever period makes sense to you: daily, weekly, annually, or even by project. For Bearded, we asked our bookkeeper to generate a monthly budget in Quicken based on an average of the last six months of actual costs that we have, broken down by type. This was super helpful in seeing where our money goes. Not surprisingly, most of it was paying staff and covering their benefits.
Once we had that number it was easy to derive whatever variations were useful to us. The most commonly used number in our arsenal is weekly overhead. That number tells us how much we cost every week, and how much average revenue needs to come in each week before we break even. Everything old is revenue again So how do we bring in that money? You may be using pricing structures that are fixed-fee, hourly, weekly, monthly, or value-based.
But at the end of the day you can always divide the revenue gained by the time you spent, and arrive at a period-based rate for the project (whether monthly, weekly, hourly, or project length). This number is crucial in determining profitability, because it lines up so well with the overhead number we already determined. Remember: money in minus money out is profit. And that’s the number we need to get to a point where it safely sustains the business.
If we wanted to express this idea mathematically, it might look something like this: (Rate × Time spent × Number of People) - (Salaries + Expenses) = Profit Here’s an example: Let’s say that our ten-person business costs $25,000 a week to run. That means each person, on average, needs to do work that earns $2,500 per week for us to break even. If our hourly rate is $100 per hour, that means each person needs to bill 25 hours per week just to maintain the business. If everyone works 30 billable hours per week, the business brings in $30,000—a profit of 20 percent of that week’s overhead. In other words, it takes five good weeks to get one extra week of overhead in the bank. That’s not a super great system, is it? How many quality billable hours can a person really do in a week—30?
And is it likely that all ten people will be able to do that many billable hours each week? After all, there are plenty of non-billable tasks involved in running a business. Not only that, but there will be dry periods in the work cycle—gaps between projects, not to mention vacations! We won’t all be able to work full time every week of the year. Seems like this particular scenario has us pretty well breaking even, if we’re lucky.
So what can we do to get the balance a little more sustainable? Well, everyone could just work more hours. Doing 60-hour weeks every week would certainly take care of things. But how long can real human beings keep that up? We can lower our overhead by cutting costs. But seeing as most of our costs are paying salaries, that seems like an unlikely place to make a big impact.
To truly be more profitable, the business needs to bring in more revenue per hour of effort expended by staff. That means higher rates.
Let’s look at a new example: Our ten-person business still costs $25,000 a week. Our break-even is still at $2,500 per week per person.
Now let’s set our hourly rate at $150 per hour. This means that each person has to work just under 17 billable hours per week for the business to break even.
If everyone bills 30 hours in a week, the business will now bring in $45,000—or $20,000 in profit. That’s 80 percent of a week’s overhead. That scenario seems a whole lot more sustainable—a good week now pays for itself, and brings in 80 percent of the next week’s overhead. With that kind of ratio we could, like a hungry bear before hibernation, start saving up to protect ourselves from less prosperous times in the future. Nature metaphors aside, once we know how these parts work, we can figure out any one component by setting the others and running the numbers.
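To make the arithmetic concrete, here is a small sketch of the profit equation, run against both of the hypothetical scenarios above (the rates, hours, and team size are the article’s example figures, not real data):

```python
def weekly_profit(rate, billable_hours, people, overhead):
    """(Rate x Time spent x Number of People) minus weekly overhead = Profit."""
    revenue = rate * billable_hours * people
    return revenue - overhead

OVERHEAD = 25_000  # ten-person shop costing $25,000 a week
PEOPLE = 10

# Scenario 1: $100/hour, 30 billable hours per person
p1 = weekly_profit(100, 30, PEOPLE, OVERHEAD)
print(p1, p1 / OVERHEAD)  # $5,000 profit, 20% of a week's overhead

# Scenario 2: $150/hour, same 30 billable hours
p2 = weekly_profit(150, 30, PEOPLE, OVERHEAD)
print(p2, p2 / OVERHEAD)  # $20,000 profit, 80% of a week's overhead

# Break-even billable hours per person per week at each rate
for rate in (100, 150):
    print(rate, OVERHEAD / (rate * PEOPLE))  # 25.0 hours at $100, ~16.7 at $150
```

Running the numbers this way makes the lever obvious: overhead and hours barely moved between the two scenarios, but the higher rate quadrupled weekly profit.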
In other words, we don’t just have to see how a specific hourly rate changes profit. We can go the other way, too. Working for a living or living to work One way to determine your system is to start with desired salaries and reasonable work hours for your culture, and work backwards to your hourly rate.
Then you can start thinking about pricing systems (yes, even fixed price or value-based systems) that let you achieve that effective rate. Maybe time is the most important factor for you.
How much can everyone work? How much does everyone want to work? How much must you then charge for that time to end up with salaries you can be content with? This is, in part, a lifestyle question. At Bearded, we sat down not too long ago and did an exercise adapted from an IA exercise we learned from Kevin M. We all contributed potential qualities that were important to our business—things like “high quality of life,” “high quality of work,” “profitable,” “flexible,” “clients who do good in the world,” “efficient,” and “collaborative.” As a group we ordered those qualities by importance, and decided we’d let those priorities guide us for the next year, at which point we’d reassess. That exercise really helped us make decisions about things like what rate we needed to charge, how many hours a week we wanted to work, as well as more squishy topics like what kinds of clients we wanted to work for and what kind of work we wanted to do.
Though finances can seem like purely quantitative math, that sort of qualitative exercise ended up significantly informing how we plugged numbers into the profit equation. Pricing: Where the rubber meets the road Figuring out the basics of overhead, revenue, and profit is instrumental in giving you an understanding of the mechanics of your business. It lets you plan knowledgeably for your future.
It allows you to make plans and set goals for the growth and maintenance of your business. But once you know what you want to charge there’s another question—how do you charge it? There are plenty of different pricing methods out there (time unit-based, deliverable-based, time period-based, value-based, and combinations of these). They all have their own potential pros and cons for profitability. They also create different motivations for clients and vendors, which in turn greatly affect your working process, day-to-day interactions, and project outcomes. But that, my friends, is a topic for our next column.
Stay tuned for part two of my little series on the money side of running a web business: pricing! “I don’t like it”—the most dreaded of all design feedback from your client/boss/co-worker. This isn’t so much a matter of your ego being damaged; it’s just not useful or constructive criticism. In order to do better, we need feedback grounded in an understanding of user needs.
And we need to be sure it’s not coming solely from the client’s aesthetic preferences, which may be impeccable but may not be effective for the product. Aesthetics are a matter of taste. Design is not just aesthetics.
I’m always saying it, but it’s worth repeating: there are aesthetic decisions in design, but they are meant to contribute to the design as a whole. The design as a whole is created for an audience, and with goals in mind, so objectivity is required and should be encouraged. Is the client offering an opinion based on her own taste, trying to reflect the taste of the intended audience, or trying to solve a perceived problem for the user? Don’t take “I don’t like it” at face value and try to respond to it without more communication.
How do we elicit better feedback? To elicit the type of feedback we want from clients, we should encourage open-ended critiques that explain the reasons behind the negative feedback, critiques that make good use of conjunctions like “because.” “I don’t like it because” is already becoming more valuable feedback.
Designer: Why don’t you like the new contact form design? Client: I don’t like it because the text is too big. We need clients to understand that they may not be the target audience. Sometimes this can be hard for anyone close to a product to understand. We may be one of the users of the products we’re designing, but the product is probably not being designed solely for users like us. The product has a specific audience, with specific goals. Whether that audience can achieve their goals with our product is the primary factor in its success.
Once we’ve re-established the importance of the end user, we can then reframe the feedback by asking the question, “how might the users respond?” Designer: Do you think the users will find the text too big? Client: They’d rather see everything without having to scroll. Designer: The text will have to be very small if we try to fit it all into the top of the page. It might be hard to read.
Client: That’s fine. All of our users are young people, so their eyesight is good. Throughout the design process, we need to check our hidden assumptions about our users. We should also ensure any feedback we get isn’t based upon an unfounded assumption. If the client says the users won’t like it, ask why.
Uncover the assumption—maybe it’s worth testing with real users? Designer: Can we be certain that all your users are young people? And that all young people have good eyesight? We might risk losing potential customers unless the site is easy for everyone to read. How do we best separate out assumptions from actual knowledge?
Any sweeping generalizations about users, particularly those that assume users all share common traits, are likely to need testing. A thorough base of user research, with evidence to fall back on, will give you a much better chance at spotting these assumptions.
The design conversation As designers, we can’t expect other people to know the right language to describe exactly why they think something doesn’t work. We need to know the right questions that prompt a client to give constructive criticism and valuable feedback. I’ve written before on how we can pre-empt problems by explaining our design decisions when we share our work, but it’s impossible to cover every minute detail and the relationships between them. If a client can’t articulate why they don’t like the design as a whole, break the design into components and try to narrow down which part isn’t working for them. Designer: Which bit of text looks particularly big to you?
Client: The form labels. When you’ve zeroed in on a component, elicit some possible reasons that it might not be effective.
Designer: Is it because the size of the form labels leaves less space for the other elements, forcing the users to scroll more? Client: Yes. We need to make the text smaller. Reining it in Aesthetics are very much subject to taste. You know what colors you like to wear, and the people you find attractive, and you don’t expect everyone else to share those same tastes. Nishant wrote a fantastic column about how Good Taste Doesn’t Matter and summarized it best when he said: good and virtuous taste, by its very nature, is exclusionary; it only exists relative to shallow, dull tastes. And if good design is about finding the most appropriate solution to the problem at hand, you don’t want to start out with a solution set that has already excluded a majority of the possibilities compliments of the unicorn that is good taste.
Designer: But if we make the text smaller, we’ll make it harder to read. Most web pages require scrolling, so that shouldn’t be a problem for the user. Do you think the form is too long, and that it might put users off from filling it in? Client: Yes, I want people to find it easy to contact us. Designer: How about we take out all the form fields, except the email address and the message fields, as that’s all the information we really need? Client: Yes, that’ll make the form much shorter. If you’re making suggestions, don’t let a client say yes to your first one.
These suggestions aren’t meant as an easy-out, allowing them to quickly get something changed to fit their taste. This is an opportunity to brainstorm potential alternatives on the spot. Working collaboratively is the important part here, so don’t just go away to work out the first alternative by yourself. If you can work out between you which solution is most likely to be successful, the client will be more committed to the iteration.
You’ll both have ownership, and you’ll both understand why you’ve decided to make it that way. I call kids between ages 4 and 6 the “muddy middle,” because they’re stuck right in between the cute, cuddly preschool children and the savvy, sophisticated elementary-schoolers. They’re too old for games designed for toddlers, but they can’t quite read yet, so they struggle with sites and apps geared toward older kids. Unfortunately, you rarely see a digital product designed specifically for this age group, because they’re hard to pin down, but these little guys are full of ideas, knowledge, creativity, and charisma. Like the 2–4s, these children are still in the preoperational stage, but they present their own set of design challenges based on where they are cognitively, physically, and emotionally. Who are they?
Table 5.1 shows some key characteristics that shape the behavior and attitudes of 4–6-year-olds and how these might impact your design decisions. You’ll find that 4–6-year-olds have learned “the rules” for how to behave, how to communicate, and how to play. Now they’re looking for ways to bend and break these rules.
They understand limitations—angry parents, broken toys, and sad friends have taught them well—but they still take every opportunity to test these limitations. Digital environments provide a perfect place for these active kids to challenge the status quo and learn more about the world around them. 4–6-year-olds:

Are empathetic: they’re beginning to see things from other perspectives. You’ll want to make interactions feel more “social,” even if the kids aren’t actually communicating with others.

Have an intense curiosity about the world: they’re very interested in learning new ideas, activities, and skills, but may become frustrated when that learning takes longer than they would like. You’ll want to set attainable goals for the tasks and activities you create, and provide context-based help and support so kids have an easier time processing information.

Are easily sidetracked: they sometimes have trouble following through on a task or activity. You’ll want to keep activities simple, short, and rewarding, and provide feedback and encouragement after milestones.

Have wild imaginations: they prefer to create on their own rather than following strict instructions or step-by-step directions. You’ll want to make “rules” for play/engagement as basic as possible and allow for a lot of invention, self-expression, and storytelling.

Are developing increased memory function: they can recall complex sequences of events just by watching someone perform them. You’ll want to include multi-step activities and games, with more than one main goal (for example, touch the red stars and green apples to get points of different values).
Table 5.1: Considerations for 4–6-year-olds Make it social When you think of social design for adults, you may think of experiences that let users communicate and interact with others. The same is true of social design for kids, but in this case, “others” doesn’t have to mean other kids or even other humans. It means that kids need to feel like part of the experience, and they need to be able to observe and understand the interactions of characters in the experience, as players and contributors. Kids at this age understand that individual differences, feelings, and ideas are important and exciting. Showcasing these differences within the experience and directly communicating with users allows this social aspect to come through and provide additional depth and context to interactions.
Sometimes, making something feel social is as easy as presenting it in the first person. When characters, elements, and instructions speak directly to kids, it makes it easier for them to empathize and immerse themselves in the experience. Let’s take a look at an example from Seussville. The designers of this highly engaging site keep the uniqueness of Dr.
Seuss’s characters vibrantly alive in their lovely character chooser. Every character (and I do mean every) from every Dr. Seuss book glides by on whimsical conveyor belts, letting the user pick one to play with (see Figure 5.1). This character chooser provides a strong social experience for kids, because it allows them to “meet” and build relationships with the individual characters. Kids can control the viewer, from a first-person perspective, to see the visual differences among the characters, as well as personality details that make the characters unique, much like how they’d go about meeting people in real life (without the conveyor belt, of course). When users choose a character, they are shown a quote, a book list, and details about the character on the pull-down screen to the right. On the left side of the screen, a list of games and activities featuring the character magically appears.
FIGURE 5.1: Seussville presents a first-person perspective to kids. FIGURE 5.2: Seussville feels social, even though kids don’t interact with other humans. This social experience is carried through across most of the games on the site. For example, when users pick the “Horton Hears a Tune” game from Horton the elephant’s list of activities, they can compose their own melody on the groovy organ-like instrument under the supportive eyes of Horton himself. Then, in true social fashion, they can save their tune and share it with family and friends.
FIGURE 5.3: “Horton Hears a Tune” lets kids compose music and share it. Make learning part of the game As a designer, you know that providing help when and where your users need it works better than forcing them to leave the task they’re trying to complete to get help. This is especially true for 4–6-year-olds, who have a strong curiosity for why things are the way they are and want to know everything right away. Unlike the “school stinks” mentality of earlier generations, today’s kids are fascinated with learning and want to soak up as much information as possible. This new attitude could be because learning is more dynamic, more hands-on, and more inventive than it’s been in the past, or because computers, tablets, and other digital teaching tools make learning fun. However, younger kids still lack patience when learning takes longer than they’d like.
You’ll want to provide short, manageable instructions to make learning fast, easy, and pleasurable, and to incorporate learning into the experience itself. The Dinosaur Chess app does a great job with structured teaching, as well as on-the-spot assistance to help kids learn how to play chess (see Figure 5.4). Upon launching the app, children get to choose what they want to do. The great thing about Dinosaur Chess is that it’s not just all about chess—kids can take lessons, check their overall progress, and even participate in a “dino fight!” One perk is how the app links the activities via a treasure-hunt-style map on the menu screen. It gently recommends a progression through the activities (which older kids will follow), but is subtle enough to allow exploration.
This feature is great for kids who like to break the rules, because it establishes a flow, yet invites users to deviate from it in a subtle yet effective way. FIGURE 5.4: Dinosaur Chess offers many opportunities for learning. When users select the “learn” option, they are taken to a screen where an avuncular dinosaur (who, for some reason, is Scottish) talks kids through the mechanics of chess in a non-intimidating way. Since these kids are still learning to read, the designers used voice-overs instead of text, which works really well here.
The lessons are broken up into short, manageable chunks—essential for learning via listening—which let the 4–6s learn a little at a time and progress when they are ready. The children can also try out various moves after learning them, which is particularly effective with younger users who learn by seeing and doing (see Figure 5.5). If this app were designed for an adult audience, the lessons would be a little longer and would probably include text explanations in addition to the audio, since a combination of listening and reading works best for grown-ups.
However, the brief audio segments coupled with animated examples are perfect for younger users’ short attention spans and desire to learn as much as quickly as possible. FIGURE 5.5: Dinosaur Chess teaches kids how to play chess in short, informational chunks. My favorite aspect of Dinosaur Chess is its guided playing. At any point during the game, kids can press the “?” button for help.
Instead of popping a layer, which many sites and apps do (even those designed for a younger audience), Dinosaur Chess uses subtle animation and voice-overs to show the users what their next moves should be, as shown in Figure 5.6. FIGURE 5.6: Dinosaur Chess uses animation and voice-overs to provide contextual help. Give feedback and reinforcement As anyone who has dealt with this age group knows, 4–6-year-olds have short attention spans. This is particularly true of the younger ones, because kids ages 6 and up are able to pay attention for longer periods of time and absorb more information in a single session.
What’s interesting (and challenging) about these younger ones is that they get frustrated at themselves for not being able to focus, and then they channel that frustration onto the experience. A common response to this from designers is: “Well, I’ll make my app/game/site super fun and interesting so that kids will want to play longer.” That’s not going to happen. A better approach is to identify opportunities within the experience to provide feedback, in order to encourage kids to continue.
Here are some ways to keep children focused on a particular activity:

Limit distractions. With a child audience, designers tend to want to make everything on the screen do something, but if you want your 4–6s to complete a task (for example, finish a puzzle or play a game), then remove extra functionality.

Break activities into manageable components. As when you’re designing for 2–4s, it’s best to break activities for 4–6s into manageable components. The components can be a bit bigger than ones you might design for a younger audience, but many clear, simple steps are better than fewer, longer ones. While adult users prefer to complete as few steps as possible, and scroll down to finish a task on a screen, 4–6s like finishing a step and moving to a new screen.

Make it rewarding. Provide feedback after each piece of an activity is completed, which will help your users stay motivated to continue. If you have the time and budget, use a combination of feedback mechanisms, to keep an element of surprise and discovery in the task-completion process.
Keep it free-form The 4–6 age bracket gravitates toward activities that are open and free-form, with simple, basic rules (and lots of opportunities to deviate from the rules). This changes pretty dramatically when kids hit age 7 or so. At that point, they become quite focused on staying within boundaries and need a certain level of structure in order to feel comfortable. However, these younger kids like to break the rules and test limits, and digital environments are the perfect places to do this. Zoopz.com has a great mosaic-maker tool, which lets kids enhance existing mosaic designs or create their own from scratch (see Figures 5.7 and 5.8). FIGURE 5.7: An existing mosaic design from Zoopz.com, which lets kids experiment and test limits.
FIGURE 5.8: Zoopz.com mosaic-creator enables kids to create their own cool designs. The nice thing about Zoopz is that it requires little to no explanation in order to make mosaics—kids can jump right in and start playing.
This feature is important, as younger ones will get frustrated if they need to listen to detailed instructions before getting started and will likely move on to something else before the instructions are complete. Typically, 4- and 5-year-olds will leave websites and close apps that they can’t immediately figure out. Older kids will hang around and pay attention to directions if the perceived reward is high enough, but young ones abandon the site right away. So if your game allows for free exploration, make sure that it’s really free and doesn’t require lots of information in order to play. An important thing to note about open exploration/creation: If you’re designing something with a “takeaway,” as Zoopz is, make sure that kids can either print or save their creations. The only thing kids like better than playing by their own rules is showing their work to others. Zoopz misses an opportunity here, because it doesn’t offer the ability for kids to share their work, or print it out to show to friends and family.
This feature becomes even more important as kids get older. We’ll talk at length about sharing, saving, and storing in Chapter 6, “Kids 6–8: The Big Kids.” Keep it challenging The worst insult from a child between the ages of 4 and 5 is to call something “babyish.” They’re part of the big-kid crowd now, and the last thing they want is to feel like they’re using a site or playing a game that’s meant for younger kids. Unfortunately, it’s hard to pin down exactly what “babyish” means, because the definition changes from kid to kid, but in my experience, children call something “babyish” when it’s not difficult or challenging enough for them. Since kids show increased memory function (and more sophisticated motor skills) starting at around age 4, adding multiple steps to games and activities helps keep them on their toes. As designers, we instinctively want to make stuff that users can master immediately. If you’re designing for elementary-school kids, you’ll want to move away from that mindset. While it’s true that children need to be able to easily figure out the objectives of a game or app right away, they don’t necessarily have to do it perfectly the first time.
Instead, build in easier layers early on so that kids can complete them quickly, but throw in some extras that might be a little harder for them. For example, if you’re designing a game where kids have to shoot at flying objects, send in a super-fast projectile they have to catch to win extra points or add a harder “bonus round.” Kids will be less likely to call something “babyish” if it takes them several tries to master.
And they’ll appreciate the vote of confidence you’re giving to their memory and agility. Parents are users, too When adding complexity to your game or app, you’ll still need to make the basic premise simple and clear. A little parental intervention is sometimes necessary, in order to explain rules and demonstrate interactions, but when parents or siblings have to become very involved in game mechanics, it’s frustrating for all parties. Try not to place too much emphasis on “winning” and keep the perceived “rewards” small and unexciting, if you have them at all.
Kids tend to ask parents to step in and help with the trickier parts if the reward for winning is really high. While I believe that a parent should be in the room when kids are online and should check on kids frequently when they’re using a device, too much involvement takes away some autonomy from the kids and prevents them from learning as much as they could and should.

Chapter checklist

Here’s a checklist for designing for 4–6-year-olds. Does your design cover the following areas?

Feel “social”?
Break up instructions and progression into manageable chunks?
Provide immediate positive feedback after each small milestone?
Allow for invention and self-expression?
Include multi-step activities to leverage improved memory function?

(Source: A List Apart: The Full Feed.)

Jason Santa Maria recently shared some thoughts about pacing content, and my developer brain couldn’t help but think about how I’d go about building the examples he talked about. The one foolproof way to achieve heavily art-directed layouts like those is to write the HTML by hand. The problem is that content managers are not always developers, and the code can get complex pretty quickly. That’s why we use content management systems—to give content managers easier and more powerful control over content.
There’s a constant tension between that type of longform, art-directed content and content management systems, though. It’s tough to wrangle such unique layouts and styles into a standardized CMS that scales over time. For a while, the best we could do was a series of custom fields and a big WYSIWYG editor for the body copy. While great for content entry, WYSIWYG editors lack the control developers need to output the clean, semantic HTML behind the great experiences and beautiful layouts we’re tasked with building. This tension leaves developers like myself looking for different ways to manage content. My attention recently has been focused on Craft, a new CMS that is just over a year old.
Craft’s solution for longform content is the Matrix field. With Matrix, developers can define custom fields for content entry, and can write custom templates (using Twig, in Craft’s case) to render that content.
A Matrix field is made up of blocks, and each block type is made up of fields—anything from text inputs, to rich text, dropdowns, images, tables, and more. Instead of fighting with a WYSIWYG editor, content managers choose block types to add to the longform content area, fill out the provided fields, and the content is rendered beautifully using the handcrafted HTML written by developers.

I use the Matrix field to drive longform content on my own site, and you can see how much flexibility it gives me to create interesting layouts filled with images with captions, quotes with citations, and more. To pull back the curtain a bit: three block types are used in my blog post Unsung Success—an image block, a quote block, and a text block. Notice that the text block still uses a WYSIWYG editor for text formatting—they’re still good for some things!

The Matrix field is endlessly customizable, and provides the level of flexibility, control, and power needed to achieve well-paced, art-directed longform content like the examples Jason shared.
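As a rough sketch of how this works in practice, a Twig template for such a Matrix field can loop over the blocks and render each block type with its own handcrafted markup. The field handle (articleBody) and all block and field handles below are hypothetical examples, not taken from the article:

```twig
{# Render each block of a Matrix field (handle assumed: articleBody). #}
{# Block type and field handles here are illustrative only. #}
{% for block in entry.articleBody %}
  {% switch block.type %}

    {% case "text" %}
      {# Plain text block: rich-text body rendered as-is #}
      <div class="prose">{{ block.body }}</div>

    {% case "image" %}
      {# Image block: asset field plus a caption field #}
      {% set image = block.photo.first() %}
      {% if image %}
        <figure>
          <img src="{{ image.url }}" alt="{{ block.caption }}">
          <figcaption>{{ block.caption }}</figcaption>
        </figure>
      {% endif %}

    {% case "quote" %}
      {# Quote block: quote text and a citation #}
      <blockquote>
        <p>{{ block.quoteText }}</p>
        <cite>{{ block.citation }}</cite>
      </blockquote>

  {% endswitch %}
{% endfor %}
```

Because each block type gets its own branch in the template, developers keep full control of the output HTML, while content managers only ever fill in fields.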
This is a huge first step beyond WYSIWYG editors and custom fields, and as we see more beautifully designed longform pieces, our tools will only get better.

The outrage being directed at Facebook right now centers on its experiment in manipulating the emotions of 689,003 users in 2012.
Regardless of where you stand on the issue, there’s no denying the phantasmagorical irony: we’re upset (and sad) about how Facebook affects our emotions because we learned, via someone on Facebook, about a study in which Facebook affected our emotions. Maybe that, too, was to be expected.
One of the motivations for Facebook’s controversial study was to debunk the notion that seeing our friends’ happy posts in our news feeds actually makes us sadder. And according to a post by Adam Kramer, the primary author of the study, it did exactly that: “We found the exact opposite to what was then the conventional wisdom: Seeing a certain kind of emotion (positive) encourages it rather than suppresses it.” But how profound is this effect on users’ overall enjoyment while they’re using Facebook? That remains unknown, and in my experience, it’s not much at all. We already know that social media has a profound effect on our emotions. I’ve personally struggled with the emotional rollercoaster for years now. My Achilles’ heel used to be Twitter, back when I was a heavy user.
I even quit the service for a whole year to regain my bearings. And while the hiatus turned out to be very positive, I didn’t quite get to the bottom of what inevitably turns me off about Twitter. And then, of course, there was Facebook. Facebook affected my mood so dramatically that I’d stopped using it entirely for years, until a few months ago. I used to refer to Facebook as “the place my Instagram pictures go to die.” This was partly in jest, partly serious.
My Instagram account is dedicated to my dog, and it’s hard to not notice that a picture or video that can get a few hundred likes, spur over a hundred comments, and bring so much joy to both me and my followers is often met with dead silence or, worse, scorn on Facebook (and honestly, on Twitter as well). There are many reasons for this, several that I covered in one of my prior columns, The REAL Real Problem with Facebook.
But there is one above all: Not everyone is interested in pictures of my dog. *Blasphemy!* OK, so this isn’t really news, and it’s hardly blasphemous.
It’s understandable that people wouldn’t want to see images of someone else’s dog every day. But then why the disparity between how enthusiastically my content is received on Instagram as opposed to Facebook (or even Twitter)?
Therein lies the key to the puzzle. It’s really quite simple: people follow me on Instagram specifically for pictures of my Weimaraner (yes, it’s a notoriously difficult-to-pronounce dog breed). I never intended to turn my Instagram account into a dog account. It just happened.
And in the process I met loads of Weimaraner (and dog) people from around the world (some of whom, true story, I’ve subsequently met in real life). I now honor an informal contract to post only pictures of my dog. And what happens when I break that contract and post the occasional picture of something else?
I’m rewarded with crickets in terms of engagement. What escaped me back when I quit Twitter, or when I silently shunned Facebook, was that the negativity or positivity of the posts wasn’t even relevant to the compounding effect of the social network on my emotional well-being. What was more to blame was the lack of engagement: the lack of feeling a connection.
Online, as in the rest of life, we want to meet, engage, and be engaged by others who share our passions and interests. And when that doesn’t happen, well, it can be a bummer. Over the past few months I’ve joined numerous groups related to my interests on Facebook (yes, including a Weimaraner group).
The result is that my Facebook news feed is now flooded with content I enjoy far more. I’ve essentially hacked my Facebook world to feel a lot more like my Instagram world—more focused on my interests and pastimes. Sharing and talking with folks who care about the same things has made Facebooking infinitely more enjoyable. In an unexpected way, I think it has also helped me understand the mid-conversation exclamations I receive from some people about how much they love Pinterest. One would think that Pinterest would be the ideal social network for most of us, especially me. After all, on Pinterest you can follow someone’s Weimaraner board, and dodge all their gardening, baby, culinary, and political content.
What’s not to like? Well, clearly something, because like loads of people, I’ve never quite gotten into Pinterest. I have some theories why that’s the case, but my disinterest is beside the point.
What seems clear to me is that Pinterest is really onto something. We need a social network that acknowledges that we all have facets, and that it’s OK for us to pick and choose each other based on our interests. In my experience, how happy a social network makes you relates less to the tone of the posts than to how closely the content caters to your interests. So, if you’re looking to maximize your happiness on social networks, here’s the short-term solution: fill your account with content that’s interesting to you.
Like or follow your favorite sports teams, TV shows, clubs, non-profits, news organizations, web design magazines, and anything else you’re into. In other words, make your feeds about things you genuinely like, happy or sad, instead of about your real-world social obligations. And that may also mean muting or unfollowing the people filling your feed with posts about their gardens, babies, food, or politics. Or, god forbid, their dogs.