
Why You Should Stop Installing Your WebDev Environment Locally

Have you heard of Docker but thought that it’s only for system administrators and other Linux geeks? Or have you looked into it and felt a bit intimidated by the jargon? Or are you silently suffering with a messy development environment that seems to break all of the time in various mysterious ways? Then read on. By the end of this article, you should have a basic understanding of Docker and have it working on your computer!

The first part of this article gives a bit of background to help you understand the concepts behind Docker through some metaphors. But if you just want to get started with the tutorial, skip to the “Time to Play!” section.

A Brief History Of The Shipping Industry

Break-Bulk Shipping

The loading and unloading of individual goods in barrels, sacks and wooden crates from land transporters to ship, and back again on arrival, used to be slow and cumbersome. Nevertheless, this process, referred to as break-bulk shipping, was the only known way to transport goods via ship up until the second half of the 20th century.

Needless to say, this process was very labor-intensive. A ship could easily spend more time at port than at sea, as dockworkers moved cargo into and out of tight spaces below decks. There was also high risk of accident, loss and theft.

Queen’s Wharf, Port Adelaide, before 1927 (Image: State Library of South Australia)

The Introduction of Shipping Containers

Fast-forward to 26 April 1956, when Malcolm McLean’s converted World War II tanker, the Ideal X, made its maiden voyage from Port Newark to Houston. She had a reinforced deck carrying 58 metal container boxes, as well as 15,000 tons of bulk petroleum.

By the time the container ship docked at the port of Houston six days later, the company was already taking orders to ship goods back to Port Newark in containers. McLean’s enterprise later became known as Sea-Land Services, a company that changed the face of shipping forever.

Many famous inland ports (including London’s Docklands) were completely shut down, as ever-larger container ships had to use open (and usually new) seaside ports.

Malcolm McLean at Port Newark, 1957 (Image: Maersk Line)

But Why Would a Developer Care About Shipping Containers?

In the 1950s, Harvard University economist Benjamin Chinitz predicted that containerization would benefit New York by allowing it to ship its industrial goods to the southern United States more cheaply than other regions, such as the Midwest, could. But what actually happened is that importing such goods from abroad became cheaper, wiping out the state’s dominant apparel industry, which was meant to be the beneficiary.

While I obviously can’t foresee the future, it looks like a new wave of containerization is about to transform software, particularly web development, with some potentially major consequences.

But what are the main characteristics of shipping containers?

  • They abstract what’s inside with a tough corrugated steel shell, the private and protected contents of which are known only to the creators.
  • They provide a standardized interface for efficient handling and stacking throughout the delivery chain.
  • They make it easy for anyone to scale their operations quickly using this existing standardized infrastructure.

Hmm, some familiar keywords in there, right?

What Is The Problem With Local Development Environments?

Even if you install dependencies only for projects that you have to actively work on, after a few new projects, things will start to become a mess — a mess that is difficult to clean up and even harder to restore if you need to work on some old project again.

And if you don’t just pay the bills with a few projects but also want to contribute to open-source libraries — which you totally should, and which often means compiling them — then things can get totally out of hand.

Queen’s Wharf, Port Adelaide, before 1927, with runtimes all over the place

What if we could have a software Malcolm, locking each project’s mess into the digital equivalent of corrugated steel boxes?

Malcolm McLean at Port Newark, 1957, with runtimes neatly in containers

Approaches for Multi-Project Development Environments

A few years ago, the best solution around was to use package managers (RubyGems and Bundler, npm, Maven, etc.) with local project-specific dependency bundles, instead of installing libraries on a global operating-system level. Together with runtime version switchers (Ruby Version Manager, rbenv, nvm, JSelect, etc.), these provided temporary relief, but there were still differences between environments on different developer machines, often resulting in broken builds and generally weird behavior. Of course, you were almost guaranteed to have to set up every single project again after every OS update and to remember quirky workarounds for legacy projects (if the dependencies were even still available).

Then, virtualization started becoming more mainstream (and open source) on the desktop, after years of success on servers. So, with the help of the likes of VirtualBox and Vagrant, it became possible to run an entire operating system in a virtualized environment, independent of the host system. This way, the environments for all development computers and even production servers could be identical. While this is a powerful and versatile solution, the downside is a big hit on the resources of the host machine.

Taking the sealed-box approach from full-fat virtualization, yet sharing the kernel and some low-level components from the host operating system, containerization offers the best of both worlds. Because no actual virtualization is happening on a container level (just a few fences are drawn up and torn down for isolation), start-up is pretty much instant, and the direct access to the CPU and other hardware components eliminates performance overhead.

Enter Docker’s Containers

Docker started out being built on an implementation called Linux Containers (LXC), which has been around for quite a while, so containers aren’t a totally new concept. That being said, Docker’s makers later decided to create a layer that depends less on Linux distribution-specific components: first Libcontainer and now runC. Quite a lot of detail is involved in how these container implementations actually work. From a user’s perspective, though, it’s enough to know that a container runs whatever is inside it with limited and prioritized resources, its own networking and a mounted file system: a practically complete but securely fenced child operating system inside the host.

Docker itself is open-source and offers a large ecosystem of tools and contributors on top of the basic container technology, addressing several levels of the problem.

Docker Engine

The Docker Engine, or just Docker, is the core and deals with the containers themselves. It bundles together the barebones Unix-style utilities for handling various aspects of the containers, and it adds several convenience functions around them.

With Docker, a nice REST API is exposed on the containers, which makes it possible for both command-line and GUI tools (locally) and deployment scripts (remotely) to interact with them. It also makes it very simple to define images with a single Dockerfile, which enables one to build layered, incremental images quickly, including downloading and compiling all of the dependencies.

Because Docker is Linux-based, if you want to use it on Windows or a Mac, it’ll need virtualization after all, which is typically done with the open-source VirtualBox and the tiny Boot2Docker image (although there are some promising up-and-comers, like xhyve). Of course, you can have as many containers in that one box as you’d like; inside it, containers work the same way as on Linux, sharing the resources of one host machine.

Time To Play!

Installing Docker Toolbox

The simplest way to get started is to download the official Docker Toolbox. This is an installer that puts all Docker-related tools in place. It also makes sure VirtualBox is installed (in case you’re not using Linux and don’t already have it) and sets up the Boot2Docker image.

Once the installation process is complete, start Kitematic (Docker’s GUI), an app that will automatically create a Docker host environment (named default) for you. If everything has gone well, you should see a list of downloadable Docker images on the right.

Kitematic screenshot with a cornucopia of Docker images

Cloning the Tutorial Repository

Now we are ready to Dockerize a small React, Sass and Node.js app, compiled with Gulp, which should be just enough to see something beyond “Hello World.”

To get started with this, clone the tutorial’s repository from GitHub.

Note: Make sure your working copy is under /Users/[username] (Mac OS X) or C:\Users\[username] (Windows). Otherwise, mounting source-code folders won’t work later — these folders are automatically mapped by Docker.
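If you want to double-check your working copy’s location, here is a small shell sketch (my addition, not part of the tutorial) that classifies a path the way Docker Toolbox’s automatic folder mapping does:

```shell
# Sketch (not from the tutorial): Docker Toolbox only auto-mounts folders
# under /Users (Mac OS X) or C:\Users (which Git Bash presents as /c/Users),
# so anything outside those trees won't be visible inside containers.
check_mountable() {
  case "$1" in
    /Users/*|/c/Users/*) echo "mountable" ;;
    *) echo "not-mountable" ;;
  esac
}

check_mountable "/Users/jane/code/docker-tutorial"   # mountable
check_mountable "/opt/code/docker-tutorial"          # not-mountable
```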

If you haven’t done so yet, crack open a terminal shell and go to the folder where you checked out the repository. All of the commands below will have to be executed there; they start with > to indicate the prompt, which you shouldn’t paste.

Getting the Docker Host Ready

Because we’re going to use the default Docker host, you don’t have to create one. If you want to do it (later), you can use docker-machine create.

However, we need to tell Docker that we want to use this default host. So, go ahead and type this:

> eval "$(docker-machine env default)"

You’ll need to do this for every new shell session, or else put it in your .profile or .bashrc file. The reason for this is that Docker can work with multiple hosts locally, remote hosts like AWS, and swarms, so it can’t safely assume where you want to work.
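If you go the profile route, a guarded version keeps the same profile usable on machines without Docker installed (the `command -v` guard is my addition, not from the article):

```shell
# In ~/.profile or ~/.bashrc: point the docker CLI at the "default" host,
# but only when docker-machine is actually installed.
if command -v docker-machine >/dev/null 2>&1; then
  eval "$(docker-machine env default)"
fi
```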

To verify that everything has gone well, type the following:

> docker-machine ls

This should return all of the Docker hosts you have set up, and you should see default set as active. So, something like this:

NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
default   *        virtualbox   Running   tcp://           v1.9.1

Writing the Dockerfile

This is where things get really interesting! The Dockerfile is essentially a step-by-step recipe for Docker to compile an image and create a container environment exactly the way you need it to be. For the sake of this tutorial, we’ll need to set up a simple Node.js environment so that we can install libraries using npm and compile everything using Gulp.

So, open Dockerfile in your favorite text editor, and let’s get started!

The first line takes the official Node version 5 image from Docker Hub, prebuilt on top of Debian Jessie, as a starting point:

FROM node:5

Because Docker images can be pushed to Docker Hub and shared with other people, putting your name and email address in there is always a good practice. So, go ahead and edit the second line:

MAINTAINER Your Name <your.name@example.com>
You’re on your own now — good luck!

Just kidding. I left the rest empty so that we can fill it in together step by step.

To tell Node.js that this is a development image, let’s set the NODE_ENV environment variable to development:

ENV NODE_ENV=development

Then, we need to set the base folder inside the image for Docker to put files in. So, let’s put this:

WORKDIR /usr/local/src

Now, we’re ready to start copying files from the local file system to the image:

COPY package.json /usr/local/src/package.json

Having put package.json in there, let’s install the dependencies:

RUN npm install

Now, we can copy our source files and start compiling with Gulp:

COPY gulpfile.js /usr/local/src/gulpfile.js
COPY .babelrc /usr/local/src/.babelrc
COPY src /usr/local/src/src
RUN npm run compile

In case you’re wondering why there are two COPY and RUN sections: Docker caches the result of each line as an intermediate image. So, if we don’t change the contents of package.json, Docker will just take the image from the previous run, with all of the npm dependencies already installed, making the build incomparably faster than if it had to start from scratch every time.
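To make the cache idea concrete, here is a toy shell sketch of the principle: a step is reused only while its instruction and the checksum of its input file stay the same. This only mimics the idea; it is not how Docker is actually implemented.

```shell
# Toy model of Docker's layer cache (illustrative sketch only):
# cache key = instruction text + checksum of the copied file.
layer_key() {
  printf '%s %s' "$1" "$(cksum < "$2")"
}

echo '{"name": "docker-tutorial"}' > /tmp/package.json

before=$(layer_key "COPY package.json" /tmp/package.json)
after=$(layer_key "COPY package.json" /tmp/package.json)

# Unchanged file, unchanged instruction: the keys match, so the expensive
# `npm install` layer would be reused instead of rebuilt.
if [ "$before" = "$after" ]; then
  echo "cache hit: reuse the layer with npm dependencies installed"
fi
```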

By default, Docker containers are completely locked down. So, we need to open up the ports for the Node.js server and the BrowserSync UI:

EXPOSE 8877 3001
Finally, all Docker images need a default command to be executed automatically when running the container. In our case, let’s start a development Node.js server:

CMD ["babel-node", "src/server"]
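For reference, assembling the steps above, the finished Dockerfile should look roughly like this (the MAINTAINER values are placeholders and the EXPOSE line is my assumption based on the ports mapped later; the rest is taken directly from the steps above):

```dockerfile
FROM node:5
MAINTAINER Your Name <your.name@example.com>

ENV NODE_ENV=development
WORKDIR /usr/local/src

# Dependency layer: re-run only when package.json changes
COPY package.json /usr/local/src/package.json
RUN npm install

# Source layer: changes here reuse the cached npm install above
COPY gulpfile.js /usr/local/src/gulpfile.js
COPY .babelrc /usr/local/src/.babelrc
COPY src /usr/local/src/src
RUN npm run compile

# Assumed from the ports the run command maps later
EXPOSE 8877 3001

CMD ["babel-node", "src/server"]
```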

Building the Image

Before you can use it in a container, you have to build your image. Type the following (making sure to include the . at the end):

> docker build -t node-tutorial .

Here, the -t parameter gives a name to the image; so, we can refer to it later without having to use the generated UUID hash.

This might take a while because Docker needs to download the Node.js image and all of the npm dependencies. If everything has gone well, something like this should be at the end of the output:

Step 12 : CMD babel-node src/server
 ---> Running in c5fc0a3a5940
 ---> ee02b5ac9bf4
Removing intermediate container c5fc0a3a5940
Successfully built ee02b5ac9bf4

Run the Container

At this point, you’re ready to run the container! So, let’s try this mouthful of a command — I’ll explain later what each parameter means:

> docker run -p 8877:8877 -p 3001:3001 --name node-tut -v $(pwd)/src:/usr/local/src/src --sig-proxy=false node-tutorial npm run browsersync

You should see the shell output inside Docker, with the Gulp compilation kicking off. And in a few seconds, the BrowserSync proxy should start up. So, if everything has gone well, things should be settling in, with something like these as the last few lines:

[14:08:59] Finished 'watch' after 70 ms
[14:08:59] [nodemon] child pid: 28
[14:08:59] [nodemon] watching 4 files
[BS] [info] Proxying: http://localhost:8878
[BS] Access URLs:
       Local: http://localhost:8877
          UI: http://localhost:3001
 UI External:
[BS] Reloading Browsers…
docker-tutorial 1.0.0 up and running on 8878

If that’s the case, you’ve just passed another big milestone and are ready to see the results in a browser!

Let’s exit Docker’s shell session by pressing Ctrl + C. In your own shell, type this:

> docker-machine ip default

This should return the IP address of the Docker host’s virtual machine. Knowing that, let’s open our favorite development browser and paste in this address, followed by port 8877; so, something like http://<your-docker-ip>:8877.

Application winning in Firefox

If you’re particularly adventurous, try editing any application-related file under src; in a moment, you should see the page reload. Congratulations! You have a relatively complex Node.js environment running in Docker!

With this moment of triumph, let’s look back and see what we did with this long docker run command. The basic anatomy looks like this:

docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
These were our OPTIONS:

  • -p hostPort:containerPort
    This maps the container ports to the host so that you can expose and access the web server ports.
  • -v hostDir:containerDir
    This mounts the local files and folders so that your local edits get pushed to the container without requiring a rebuild.
  • --name containerName
    This assigns a name to your container so that you don’t have to use the UUID hash.
  • --sig-proxy=false
    This lets us exit from Docker’s shell without killing the running process inside (by not proxying the SIGTERM — hence, the name).

Finally, the name of the IMAGE is node-tutorial, and the COMMAND + ARG… are npm + run browsersync.
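As an aside, if you’d rather not retype that long run command, the same options map naturally onto a Docker Compose file. Compose isn’t covered in this tutorial, so treat this as a hedged sketch; the service name is my invention:

```yaml
# Sketch only: rough docker-compose equivalent of the `docker run` command above
node-tut:
  image: node-tutorial
  command: npm run browsersync
  ports:
    - "8877:8877"   # app server
    - "3001:3001"   # BrowserSync UI
  volumes:
    - ./src:/usr/local/src/src
```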

I Want More!

While finishing the tutorial above should cover the basics of getting started with Docker for development, there’s much more to it. I’ve gathered some tips and pointers to get you started on the journey. There’s also a good (but rather long) guide to best practices in Docker’s documentation.

Images vs. Containers

Perhaps one of the most important things to understand is the difference between images and containers. Images are read-only and come in layers. In the tutorial above, the layers are: the Debian Jessie base Linux image → the official Node.js 5 image → our customizations in the Dockerfile. These layers can be modified only by rebuilding the images; this way, you’ll always know what you’re getting, and sharing images on Docker Hub by building, pushing and pulling them becomes a manageable process. Containers, then, are the “machines” that run these images, adding a writable file system layer on top. So, in the tutorial, the container is what enables volumes to be mounted and, with that, a way to keep pushing and executing code inside without having to rebuild the entire image every time.

Because of this immutable nature of images, don’t expect files to stick around in a container when you restart it. This is why it’s important to use file system volumes for your code and, once you get to a more advanced level and want to deploy to production, volume containers for databases, logs, etc.

Docker Machine

Docker Machine helps with basic management of the Docker container host (picking the virtual machine, starting and stopping it). Once you have several containers across (or even within) projects, or you want to deploy and manage them on a cloud provider, it also gives you some simple commands to help with that.

Docker Hub and Registry

Docker Hub is the place to go for community-maintained containers. These range from simple base Linux distribution flavors with the bare minimum (Alpine is a great lean image, for example) all the way up to complete application stacks (for WordPress, Minecraft, etc.), ready to be started up and used.

As well as automatic builds, you also get webhooks. So, integrating your continuous integration or deployment system shouldn’t be a problem.

Kitematic

While I’m reasonably comfortable with the terminal shell, I’m also the first to admit that, for example, unless I need to do something really complex with Git (like fixing a messed-up commit history), I won’t bother doing it manually and will happily do my commits, diffs and interactive rebases in the SourceTree app.

If you want to avoid the command line, that’s totally fine. Docker recently bought the similarly open-source Kitematic, a simple but decent UI for driving Docker. It’s still in its infancy but already lets you monitor logs; start, stop and restart containers; and view settings such as volumes, ports, etc.

Docker in Production or Not?

It is worth making clear that you don’t need to commit to using Docker in production in order to use it to manage your development environment. You can stick to whatever way you’re doing it now. In fact, in this case, even on the same team, people can use Docker or a local environment. It doesn’t matter because the application code and repository will be the same except for the added Dockerfile.

That being said, if you do want to go to production, one important principle of containers is that they should run only one process and do one thing. Linking multiple containers into a network is not that difficult, but it is definitely a different approach and concept from working with virtual machines. If you want to see a working example of a few containers operating in tandem, we have open-sourced our website code behind ustwo.com.
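To make the one-process-per-container idea concrete, here is a hedged sketch of how an app and its database might be split into two linked containers (the service and image names are illustrative assumptions, not taken from the ustwo codebase):

```yaml
# Sketch only: two single-purpose containers linked together
web:
  image: node-tutorial
  command: npm run browsersync
  ports:
    - "8877:8877"
  links:
    - db        # reachable from the web container under the host name "db"
db:
  image: postgres:9.5
```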

More Than Just Servers and Daemons

Just to give you an idea of how versatile containers can be, you can even use them instead of locally installed desktop tools. For example, you can convert an image with ImageMagick without having to worry about installing it and all of its dependencies. So, if you have open.png in the current folder and want a .jpg version, you can just do this:

> docker run --rm -v $(pwd):$(pwd) jess/imagemagick convert $(pwd)/open.png $(pwd)/open.jpg

This will pull the tiny ImageMagick image, mount the current folder under the same path inside the container, do the conversion, sync the file back, exit, and remove itself when finished.

For more inspiration, check out the blog post “Docker Containers on the Desktop” on the brilliant Jessie Frazelle’s blog.

Beyond Docker

While the Linux-based Docker is the current star in the space, the Open Container Initiative is taking the containerization idea forward with a cross-platform standard. It’s backed by a big industry alliance, including Microsoft, which is promising to make it work natively on Windows. Apple is notably absent at this point, but let’s hope it is just taking its time.

The Future

Containerization will start a revolution in open source similar to what Git did, by making it much simpler to take any code and start compiling it right away.

You can quote me on that. I’m fairly confident that we’ll see more and more open-source projects — especially complex ones — embrace containerization for dependency management. All it takes is for someone to add a Dockerfile, after all. And in most cases, people may still choose not to use it and go with their existing local setup instead.

This will dramatically lower the barrier to entry for newcomers to a project, enabling potentially an order of magnitude more people to get involved, just like Git and GitHub did.

Takeaways

  • Containerize any non-trivial development environment, especially legacy ones.
  • Containerize your open-source projects to remove the barrier to entry for new contributors.
  • Containerize other people’s open-source projects, instead of setting up the environment on your local machine, so that you solve it for everyone.


Daniel started his web design and development journey with Photoshop 3.0 and Flash 5, focusing on UI and motion design. After working with several different languages and platforms over the years, he decided to settle with the open and (now) powerful web stack. He works at an independent international studio called ustwo, creating beautiful, functional and relevant digital products.

  1. Shannon Young, April 20, 2016 9:23 am

    Might want to clarify the second $ in command:

    $ docker run -p 8877:8877 -p 3001:3001 --name node-tut -v $(pwd)/src:/usr/local/src/src --sig-proxy=false node-tutorial npm run browsersync

    is intended.

    I copied them as two separate lines.

    Great tutorial though!

    •

      Hey Shannon,

      That’s confusing indeed, based on your feedback I just updated the article to use > instead of $ to symbolise the prompt!

      Thanks for your comment!

  2.

    Or you could simply use Buddy for docker-based builds and deliveries and forget about manually setting up the environment.

    Great and exhaustive piece Daniel, I’d never expect going as far back as the 1950s in an article like this – cool way to introduce new users.

  3.

    I just hate command line.

  4.

    You can spin up a docker SSD VPS at Digital Ocean and test out docker for free.

  5.

    Seriously not enjoying the over-complication of web development workflow these days – simplify your workflow by installing MORE software that complicates the hell out of it! I’m sure Docker is great but so is my local dev environment…

    • Robert Dundon, April 20, 2016 1:36 pm

      I already have a few VMs that run on VirtualBox. I usually run them “headless” (meaning no window/GUI), and connect to them via HTTP/FTP, etc.

      One (minor) inconvenience is having to update the packages of the OS. Another is that I’d like to have more contained environments, so I can be more brave without conflicting with other projects and encapsulate projects a little better. I might look into Docker for the latter.

      But I agree it’s not for everyone, and it’s not a good idea to just use tools “just because”, aside from trying out and learning, etc.

      Websites can be as complex or simple as needed. We don’t need shipping containers for packing a minivan, but they can be useful for shipping cargo from overseas.

    • Daniel Demmel, April 20, 2016 5:23 pm

      Hey Daniel,

      Your local dev environment is great until you start working on many different projects and have more and more legacy ones, at which point your heart will sink when a client asks you if you can change one tiny thing on that old website and you have no idea where to even start…

      But of course everyone’s way of working is different, so without knowing yours I might be totally wrong and you’ll be just fine.

      That said, I don’t think containers are overcomplicating things, especially if you also have to be responsible for deploying and maintaining your code in different server environments.

      •

        Well, yes and no. If you’re deploying just a small to medium-sized WP site, something like a separate VM for each of them looks a lot like total overkill.

        But then, if it’s something more complex, or you have to test out specific extensions, parts, plugins, you name it, of your CMS or framework in different environments – just think of e-commerce plugin development – then this might just be the way to roll.

        Still, I prefer the “there is XAMPP and a bit of VirtualHosting” style waaay more.
        Cause it’s so damn simple: Got a working OS, no matter which flavour you like, working internet connection, and off you go downloading & installing XAMPP (or a similar kind of LAMP-flavour). Next on, set up your favorite IDE (in my case, that’s Geany – being cross-platform just enhances the fun), and tadaa – all you need right at hand :)

        Though sometimes you cannot avoid VM scenarios – but for this, see above ;)

        cu, w0lf.

  6.

    I get the following error: Error parsing reference: “/src:/usr/local/src/src” is not a valid repository/tag…

    If I remove the -v $(pwd)… part it works, but it doesn’t refresh the browser. And if I make changes inside src, they won’t show up in the browser, even if I restart the container.

    • Daniel Demmel, April 20, 2016 4:47 pm

      Hey Dan,

      That’s strange, which operating system and shell do you use?

  7.

    A very comprehensive piece Daniel, especially when outlining all the steps needed to Dockerize. Here are two other resources that might help with the considerations needed to run Docker in production: and a quick how-to video for getting started with Docker:

  8. James Nadeau, April 20, 2016 1:39 pm

    I’d really recommend committing your dependencies to your repository and replacing npm install with npm rebuild. Not only does it speed up an un-cacheable build because you don’t have to download everything, it’s safer in the long run in case a package goes away or npm can’t be reached.

    • Daniel Demmel, April 20, 2016 4:51 pm

      Hey James,

      That’s cool, didn’t know about npm rebuild!

      The great thing with containers though is that once you build them, all the dependencies are baked in, so you don’t need to worry about them.

      But it’s definitely a plus if you don’t have to download all the packages every time you build!

  9. Nicolas Martel, April 20, 2016 2:21 pm

    Am I doing something wrong here?

    nmartel@l-2246:~/repos/examples/docker-tutorial$ docker run -p 8877:8877 -p 3001:3001 --name node-tutorial -v /home/nmartel/repos/examples/docker-tutorial:/usr/local/src/src --sig-proxy=false node-tutorial npm run browsersync
    npm info it worked if it ends with ok
    npm info using npm@3.8.3
    npm info using node@v5.10.1
    npm info lifecycle docker-tutorial@1.0.0~prebrowsersync: docker-tutorial@1.0.0
    npm info lifecycle docker-tutorial@1.0.0~browsersync: docker-tutorial@1.0.0

    > docker-tutorial@1.0.0 browsersync /usr/local/src
    > gulp watch --browsersync --dev

    [13:16:48] [Gulp flags] production: false | build: false | watch: true | syncbrowser: true
    [13:16:49] Using gulpfile /usr/local/src/gulpfile.js
    [13:16:49] Starting 'clean'…
    [13:16:49] Starting 'sass'…
    [13:16:49] Starting 'babelify'…
    [13:16:49] 'babelify' errored after 19 ms
    [13:16:49] Error: Cannot find module './src/app/app.jsx'

    Can’t find the main file.

    • Daniel Demmel, April 20, 2016 5:11 pm

      Hey Nicolas,

      Hmm, not sure what’s going on here…

      Did the build step finish successfully beforehand?

      You have the src folder and that file on your local machine, right?

      • Nicolas Martel, April 21, 2016 5:56 pm

        The docker build step was done successfully.

        It worked for me without the:

        docker run -p 8877:8877 -p 3001:3001 --name node-tut -v src:/usr/local/src/src --sig-proxy=false node-tutorial npm run browsersync

        Putting $(pwd) or absolute url for the host-src wasn’t doing it for my setup.

        The one thing left is being able to edit a file under src and make the browsersync update on the docker build. Haven’t been able to make that work.

        • Daniel Demmel, April 21, 2016 7:24 pm

          Ah I see!

          What if you try with `-v ./src:/usr/local/src/src`, so:

          docker run -p 8877:8877 -p 3001:3001 --name node-tut -v ./src:/usr/local/src/src --sig-proxy=false node-tutorial npm run browsersync

          If the local reference works this should do!

          • Daniel Demmel, April 22, 2016 10:42 am

            Actually hang on, that won’t work, relative paths are not supported after all…

            But you could try the backtick substitution instead of the dollar one, so this:

            > docker run -p 8877:8877 -p 3001:3001 --name node-tut -v `pwd`/src:/usr/local/src/src --sig-proxy=false node-tutorial npm run browsersync

  10. lurker above, April 20, 2016 5:55 pm

    “don’t expect files to stick around in a container when you restart it.”

    I think this is muddled — you can restart a previously-stopped container with “docker start “, and all your (container) changes are of course preserved as you would expect.

    If you “docker run “, you are simply firing up a new container based on , nothing to do with the previously stopped container.

    You don’t talk about container management at all (docker ps, etc), so that’s why I think there is some confusion.

    Let me know if I missed anything, thanks!

    • Daniel Demmel, April 21, 2016 9:24 am

      You’re right of course about docker start / stop / ps / etc, but since the article is already very long I had to make some tough cuts :(

      That said, I think it’s conceptually useful to always assume that nothing can persist in a container and use volumes for data and rebuild the image from scratch if you need to install a package for example, as that’ll make your life much easier once you get to deploy to a cluster.

  11. 23

    Michael Canfield

    April 20, 2016 7:45 pm

    Thank you! I’ve been meaning to figure out how to offload local dev tasks to docker and make the host just a dumb file editor. This seems like a really powerful concept. Lots of questions below! Point me elsewhere if you’re in the know.

    So, on actively developed projects our team has historically grabbed the latest versions of dependencies instead of shrink-wrapping or checking them into source. We handle breaking changes as they come, which isn’t “too” often. I know that’s a highly debatable topic, and I personally have been bitten by old projects not running due to dependency hell… With that discussion aside, on large teams it would be nice to enforce that the host is just “for editing source files” and “everything else is containerized”, in order to prevent “not on my machine” discussions (which happen a lot!)

    Have you any thoughts on how to implement such an enforcement for a large distributed team? Either strictly or loosely.

    After reading this I’m currently thinking of a few ideas:

    1. Loosely enforce it in the dev workflow on the host by expecting devs to execute an npm script like `npm run project`, which will create an updated Docker image when dependencies have changed, launch it, and dev continues coding on the host with files syncing as described. Downside: building the Docker image pretty often.
    2. Strictly enforce that build with Git hooks (although those are semi-opt-in). Same downside.
    3. For speed’s sake, a continuous deployment pipeline creates an updated dev-env Docker image and the host pulls it down with the source (e.g. via Git LFS integration, etc.). Advantage: only network time is needed, and compute resources for the build are offloaded. Maybe there isn’t a huge advantage to offloading the build; need some experience. Also, having tracked dev Docker images might not be that useful. So perhaps just the build pipeline piece is sufficient, with another way of fetching the image (that becomes a cross-platform issue… until Windows gets Bash this summer!).

    I’m also curious about how to take advantage of editor plugins that kick off gulp/build/test tasks (Sublime and VS Code come to mind). You’d need some middleware that pipes those commands through to the Docker image? And what about those Docker-image gulp tasks that need to run something host-side, like Protractor controlling host browsers from within the Docker image…? A containerized Xvfb could handle that for automated testing’s sake, but for writing tests you need to do things non-headless! It seems a containerized dev environment starts to fall apart at these integration points without additional tooling. Increasing complexity. Any answers out there? Thanks again!

    • 24

      Daniel Demmel

      April 21, 2016 9:48 am

      Hey Michael,

      Very good questions! :)

      To answer the first half I can share a workflow which we’ve been using for a few projects lately. I’ll have to start with a caveat that we’re not using Docker Compose yet and instead have Makefiles as this gives much more flexibility and control over things like differences between environments (dev / CI / staging / production) and makes these super long Docker commands more manageable.

      So the way we handle this is by baking the dependencies into pre-built, versioned images. When developer A needs to install a new package or wants to do an update, she edits the Dockerfile and/or the package manifest, does a build and makes sure everything works. Then she increments the version number for the image in the appropriate Make task file (we have one for each image) and pushes the pre-built image to Docker Hub and the source code changes to Git. When developer B pulls the source and starts the containers, the latest version of the image will automatically be pulled for him, as the image name is tagged with the version.

      This method keeps everyone on precisely the same environment, but still makes it trivial to update when needed. It’s also very convenient to deploy these pre-built images to a server then.
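
      As a rough sketch of that versioned-image workflow (the image name, version and task names here are invented for illustration, not our actual setup):

      ```make
      IMAGE   := myteam/web-app
      VERSION := 1.4.0

      # Developer A bumps VERSION after changing the Dockerfile or the
      # package manifest, then builds and pushes; developer B's containers
      # pull the new image automatically because it is referenced by tag.
      build:
      	docker build -t $(IMAGE):$(VERSION) .

      push: build
      	docker push $(IMAGE):$(VERSION)
      ```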

      If you have a CI and a bit more formal release train based on Git tags for example, have a look at what we did on

      As for the second part, you can either have these running inside Docker and watching the file system, or you can trigger stuff inside using `docker exec`. We successfully had things like BrowserSync running inside Docker in proxy mode, as it’s doing stuff in browsers on the host using websockets so no one notices the difference :)

      • 25

        Great stuff! Thank you for the detailed response. What do you mean by “proxy mode”? I assume you’re still manually starting the host browser and pointing it at the containerized process URL, at which point websockets take over. That makes sense. I was referring to the (magical) scenarios where a container chromedriver launches a host browser (security issue?) or the host editor runs a command-palette gulp command on the container (it’s not aware of the container?). I guess the answer is “containerize all the things!”, which is fine with me, but convincing a large team to do that is difficult. But since the argument is to truly standardize the dev environment, browsers would be part of that. The struggle is giving developers that final autonomy over their editor. Taking that away… mankind has no law to fit such a crime. Convincing and training them to containerize is probably the most sane thing to do. And ultimately the most beneficial, by enabling more stable and composable workflows for one and all.

        Also, I wanted to let you guys know I’m really impressed with this post and the GitHub repo. I’m frequently irritated by the majority of blogs that dangle a tantalizing solution, but prove to be a weekend hack session that is salted with “you wouldn’t do this in production though!”-isms. I appreciate their time and enthusiasm, but dishing out paltry nuggets of insight without the sustenance doesn’t serve the developer community all that well. You guys taking your time and going that extra step to thoroughly document an in-the-trenches dev workflow and in-prod service is a breath of fresh air. Thank you!

        • 26

          Yes, you’re right, what I meant is that BrowserSync runs a proxy inside the container so when you (manually) hit the app URL in your local browser, it’ll be controlled / refreshed using the websocket library.

          If you want to run tests with ChromeDriver (or any other browser which runs on Linux), those should run inside the container too. If you need to, you can even peek at the rendered output using VNC, see for example:

          As for editor commands, these would need to be redone to do `docker exec container gulp build` (or even better `docker exec container npm run build`) instead of `gulp build`. But this should be possible with a repo-level config file for most editors, so only one person needs to figure it out for everyone in the team to work (with the same editor).

          And finally, thank you so much for the kind words! The motivation behind this article was the total revolution of how we work, and is indeed something which we’ve been battle testing for a while first, instead of this being just an over-excited experiment with the latest hype tool :)

          • 27

            Hah, I have no idea how my googling didn’t include a simple “selenium docker” search :facepalm: Wow, thank you. Good details on the editor config solution. The concepts here definitely scratch the entire workflow itch I’ve been whining about. Maybe a main article edit with some of this in the addendum section would prove useful for others? In any case thanks again! Until your next article, cya!

  12. 28

    Docker sounds awesome! Maybe with this guide I might run it someday. Always interested in new and better. Other than running multiple hardware setups, VirtualBox is the only other option.

  13. 29

    Matt Campbell

    April 21, 2016 3:15 am

    >… once you get to a more advanced level and want to deploy to production, volume containers

    Volume containers?

    • 30

      Daniel Demmel

      April 21, 2016 3:45 pm

      Hey Matt,

      Right, so in order to reliably persist data with Docker, you should use volume containers / data volumes. These will create shared folders similar to what I was showing in the article to share the source code between the container and the host, but in this case it will be between two (or more) containers.

      The official documentation is huge, so a better way is reading this post by Alex Collins on his blog and this post on Medium by Raman Gupta to get a good understanding!

  14. 31

    Very good writing.

    One suggestion for your image, when you map the host `src` to `/usr/local/src/src` you’re overwriting the previous `COPY src /usr/local/src/src`. So you could simply remove that `COPY` and use `ENTRYPOINT` instead of `RUN` for your `npm run compile`.

    Also, volumes from the host can be specified with relative paths, so it might be simpler to do `-v ./src:/usr/local/src/src` instead of `-v $(pwd)/src:/usr/local/src/src`.

    I think the only thing you missed in this tutorial is to talk about `docker-compose`. It’s a great tool worth mentioning.

    Thanks for sharing your insights.


    • 32

      Daniel Demmel

      April 21, 2016 4:27 pm

      Hey Fatore,

      Ah, very good points, the second one especially could have made things much simpler in the tutorial! I’ll see if I can get these edited in.

      Also good one on Docker Compose, even though I tend to use Makefiles to tie containers together in a project and to prevent typing these long commands by hand, Compose is good to get started with existing tutorials and documentation.

      Thanks a lot for your comment!

    • 33

      Daniel Demmel

      April 22, 2016 10:41 am

      Hi again Fatore,

      I just tested, and it seems relative paths are not supported in the volume flag after all.

      Shame :(

  15. 34

    Overall, the idea of Docker sounds great, but in real life Docker is unusable in the development process. I’ve spent two weeks trying to set up a basic PHP dev environment without success. What I needed was just PHP 5, MySQL, Apache, SFTP, Git and the possibility to preserve changes made to files and to the DB by allowing containers to access the local file system. After reading the whole documentation, going through dozens of tutorials and many days of trying, I still wasn’t able to achieve my goal. As I said at the beginning, the idea of Docker is great, but it’s too difficult to use for a single developer; maybe when you have the help of a well-qualified system administrator things are different.

    • 35

      Daniel Demmel

      April 21, 2016 4:10 pm

      Hey Khamyll,

      It’s of course not unusable, but (as you experienced) very different conceptually, so to get a slightly more complicated setup like the one you mentioned running, you’ll need to understand a few moving parts.

      In case you want to give it another go, here are some pointers based on what you wrote (hope this will also help others):

      - You’ll need at least three containers: one for the MySQL engine, one data volume for your database (see my reply to Matt Campbell’s question), and another for Apache, PHP and your source code.
      - The database will never be written to your local file system; instead it will live in the data volume, which you can dump / back up / restore / etc.
      - You won’t need Git and SFTP, as the source code will be pushed into the Apache / PHP container from your development computer’s file system using a volume, as I was showing in the article.
      - You’ll need to connect the MySQL and Apache / PHP containers with a network so they can talk to each other (using `docker run --net`), whereas the data volume will need to be connected to MySQL as a volume (using `docker run --volumes-from`).

      I had a quick look and this tutorial for getting WordPress and PHPMyAdmin running using Docker Compose is good and simple. It’ll get you all the way there except the data volume, but persisting the data is the last thing you should worry about.
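
      For reference, here is a Docker Compose sketch of roughly that setup (image tags, the password and paths are placeholders, and the data volume is simplified to a named volume):

      ```yaml
      version: "2"
      services:
        web:
          image: php:5.6-apache
          ports:
            - "8080:80"
          volumes:
            - ./:/var/www/html        # your PHP source, straight from the host
          links:
            - db
        db:
          image: mysql:5.5
          environment:
            MYSQL_ROOT_PASSWORD: secret
          volumes:
            - db-data:/var/lib/mysql  # the database lives in a volume, not on the host
      volumes:
        db-data:
      ```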

      Let me know if this helps!

  16. 36

    Daniel, thank you for a great tutorial. I’m currently going through dependency hell with Nuget at my current employer, so containerization is something I want to explore. Unfortunately, I’m running into an issue in the early stages of the tutorial. The issue I’m having centers around gulp. I have gulp installed globally and I ran npm install in the docker-tutorial directory. See the pasted error message below. I put asterisks to the right of the lines that caught my attention. Thanks again and in advance.

    sh: 1: gulp: not found ****************
    npm info lifecycle docker-tutorial@1.0.0~compile: Failed to exec compile script
    npm ERR! Linux 4.1.19-boot2docker
    npm ERR! argv “/usr/local/bin/node” “/usr/local/bin/npm” “run” “compile”
    npm ERR! node v5.10.1
    npm ERR! npm v3.8.3
    npm ERR! file sh
    npm ERR! code ELIFECYCLE
    npm ERR! errno ENOENT
    npm ERR! syscall spawn
    npm ERR! docker-tutorial@1.0.0 compile: `gulp build`
    npm ERR! spawn ENOENT
    npm ERR!
    npm ERR! Failed at the docker-tutorial@1.0.0 compile script ‘gulp build’. *****************
    npm ERR! Make sure you have the latest version of node.js and npm installed.
    npm ERR! If you do, this is most likely a problem with the docker-tutorial package,
    npm ERR! not with npm itself.
    npm ERR! Tell the author that this fails on your system:
    npm ERR! gulp build
    npm ERR! You can get information on how to open an issue for this project with:
    npm ERR! npm bugs docker-tutorial
    npm ERR! Or if that isn’t available, you can get their info via:
    npm ERR! npm owner ls docker-tutorial
    npm ERR! There is likely additional logging output above.

    npm ERR! Please include the following file with any support request:
    npm ERR! /usr/local/src/npm-debug.log

    • 37


      You don’t actually have to have Gulp or anything else other than Docker installed.

      When you do `docker build`, all of these steps, like `npm install`, will happen as part of building the image, and then when you do `docker run` it’ll run `gulp build` inside the container.
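
      In other words, something like this is what the image build does (the base image and paths here are guesses based on your error log, not necessarily the tutorial’s exact Dockerfile):

      ```dockerfile
      FROM node:5

      WORKDIR /usr/local/src

      # npm install runs inside the image, which is what puts gulp on the
      # PATH for npm scripts -- no global install on the host is needed.
      COPY package.json .
      RUN npm install

      COPY src ./src
      RUN npm run compile    # runs "gulp build" via the package.json script
      ```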

      I might be misunderstanding you, so please tell me which step / command from the tutorial you got stuck with!

      • 38

        The error came at this step: docker build -t node-tutorial .

        I received the same error before and after installing gulp globally. I already had npm installed as a part of my VS2015 setup.

        I’m attempting to do this tutorial on Windows 10 64-bit if that helps.

        • 39


          The only thing I can think of right now is that maybe something’s missing from your Dockerfile. I pulled this tutorial out of a boilerplate I created earlier; if you have a few minutes, can you see whether you can build that?

          • 40

            Hi Daniel,
            Thanks for the response. I’m past the issue we were discussing and I immediately ran into another:
            $ docker run -p 8877:8877 -p 3001:3001 --name node-tut -v $(pwd)/src:/usr/local/src/src --sig-proxy=false node-tutorial npm run browsersync
            C:\Program Files\Docker Toolbox\docker.exe: An error occurred trying to connect: Post http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.23/containers/create?name=node-tut: open //./pipe/docker_engine: The system cannot find the file specified..
            See ‘C:\Program Files\Docker Toolbox\docker.exe run --help’.

  17. 41

    Neville Franks

    April 26, 2016 11:41 am

    Great article, Daniel. I’ve been working with Docker for the last few weeks and am certainly impressed with what it offers. One thing I’m yet to understand, though, is why the container needs its own Linux OS when it is already running on Linux (not VirtualBox).

    Also, readers need to be aware of how big containers can get. The move to microcontainers using Alpine Linux etc. is good; however, it doesn’t yet have support for packages I need. For example, I hit a brick wall today because Alpine Linux’s MongoDB port doesn’t include mongorestore and mongodump, so there’s no easy way to back up and restore DBs. And then there are issues running something like MongoDB in Docker on VirtualBox (Windows/Mac), as the database can’t reside back on the host because of limitations with VirtualBox.

    So as much as I want to like Docker and may well use it in the future, I’ve had to drop it for the project I was hoping to use it for, which is a pity.

    Finally there really is a lot to learn to get serious with Docker and folks need to be aware of that.

    • 42

      Hey Neville,

      You’re totally right: on Linux the setup is much simpler, with no need to use VMs or Docker Machine. I intentionally left Linux out of the article to save adding several asides / variations, focusing on the majority of front-end developers using Mac or Windows. That said, once the new Docker Native (using built-in hypervisors) is out of beta, all three platforms will behave the same way, so I’m planning to update the article.

      On image size: since version 1.10, Docker images share layers globally, so if you use the same (bigger) distro as a starting point, it will only be downloaded once and reused, I think.

      For databases, you should use volume containers / data volumes instead of the file system; see my response to Matt Campbell above.

      There is a learning curve, true, but it’s a good investment if you want to understand the DevOps side of things a bit more and organically learn how to deploy and maintain your stack.

  18. 43

    Who is this article for? Is it only for rockstar developers building the next Facebook?

    Perhaps I only work on really simple websites, but I can’t imagine ever needing something like this. I personally know half a dozen guys doing the same work as me and I’m confident none of them would use anything remotely this complicated. Certainly none of them are “contributing to open-source libraries”.

    Your response to Daniel that said “your heart will sink when a client asks you if you can change one tiny thing on that old website and you have no idea where to even start…” is in my experience a complete exaggeration. I constantly have people asking that exact question. I simply open up the folder where I store a local copy of their site, and make the change, and upload it.

    • 44

      Hey Tim,

      This tutorial is for you if any of these is true: you work in a big team; you work with different technologies over time; you build microservices; you need to work with legacy code or software versions; and so on.

      It’s not that useful if you use exactly the same tech for every project, don’t have to worry about deploying to different environments, work alone, and aren’t constantly drawn to shiny new programming languages.

      So if you’re building simple websites, focusing on theming and writing straight HTML / JS / CSS, then it’s a bit of overkill. But you don’t need to be a rockstar developer to need a bit of Sass compilation here or to play a bit with Node.js there, all without littering your computer with difficult-to-remove components and all their dependencies.

      And as a final word, if you can spare the time (or even better convince your client / employer to get some paid time for), contributing back to the open source projects which you rely on to make you productive is a very rewarding thing to do and is what keeps them going. You don’t need to solve the toughest core problems, but even a bit of documentation can make a big difference for a lot of people using them.

  19. 45

    J. J. Cuningham

    April 27, 2016 9:48 am

    What about Vagrant? Does anybody use this dev environment? It seems easier to set up.

  20. 46
    • 47


      Wrote this reply to your previous comment offline, just noticed you found the article!

      Vagrant is a great alternative and is what I used before Docker. It’s more flexible in that it creates full-distro VMs, enabling you to make your environment exactly the same as your server so that deployment is a cinch; but (re)building, starting and stopping these full VMs takes absolute ages.

      So Vagrant is great if you use a stack all in one VM, and not that great if you are building microservices or just want to wrap tools / scripts into containers to prevent your computer from becoming a battlefield of abandoned installations :)

  21. 48

    howard tyler

    May 20, 2016 7:33 am

    This is a great article!
