Quick post before I forget and have to search for this again later. VMware publishes VMware Tools packages for RHEL, SLES and Ubuntu distributions for its ESX software. Yet RHEL guests (can't say for other distros) will show up as 'third-party/unsupported' when you look at the Tools' version for a given guest. Wait, what?

I really don't get it. These are their own packages. You can get them from their repo (http://packages.vmware.com/tools/esx/).

Anyway, it's easy to fix. Turns out there's a setting you can change on the guest that will force the tools to push their version to the ESX host.

Simply edit /etc/vmware-tools/tools.conf, and make it look like this:

[user@host]:~ git:(master) ✗ $ cat /etc/vmware-tools/tools.conf
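The file contents didn't survive here. Based on VMware's documentation for its operating-system-specific packages, the setting that controls version reporting is `disable-tools-version` — a minimal sketch of what the file should contain:

```ini
# /etc/vmware-tools/tools.conf
# false = report the Tools version up to the ESX host
[vmtools]
disable-tools-version = false
```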

Restart the vmware-tools service and voilà!

If there's one thing I don't like, it's physical media. They're impractical and environmentally unfriendly. They also tend to break, when I don't just plain lose them. Over the last few years, I've gradually stopped using them.

That's one of the reasons why all the movies and TV shows I watch come from the Internet. These days, it's even faster to download a movie than to go to the store and grab it. All that in 1080p, of course. I remember watching characters slowly appear on my monitor back in the 90s, accessing BBSes over a 14.4k modem. Now I can stream 1080p content on demand. Talk about progress.

So a little over two years ago, I bought a Boxee Box to replace my PS3, mainly due to the mess Sony made by removing the 'Other OS' boot option and then introducing Cinavia. On top of that, the PS3 supported a very limited set of formats.

Boxee was great. In the first months, there were a lot of updates, and it could play pretty much anything you threw at it. The only problem I had was the poor quality of the Netflix app: the video quality was terrible. But more on that later. Overall, I was a happy camper.

Time went on, and the updates stopped. The little Boxee started to age. More and more often, it would hang on me and need a hard reset. Not a big hassle, but still annoying.

And then one day, we started talking about the Roku at the office. I started to read about it. I started to want one, and I was curious to see if the Netflix experience would be better on it.

I finally gave in on Friday and stopped at my local electronics store to grab one.

So I decided to write a comparison of it versus the Boxee.

Installation
Well, just like the Boxee's, the Roku's installation was dead simple. Connect the network and HDMI cables, and that's it. That said, the Roku needs a Roku account in order to be used, and that requires a valid credit card. I'm guessing that's to simplify the purchase of channels, kinda like the App Store or Google Play. I read some reviews online from people who were put off by that. Personally, I don't mind.

In any case, the Boxee's was simpler to set up, and thus wins this round.

NOTE: The Roku is so tiny compared to the Boxee. It’s like 10 times smaller.

Configuration
Both devices are straightforward to configure, but the Boxee has more parameters, especially at the network level. You can't manually configure DNS servers on the Roku, which is something a lot of people use to bypass Netflix's region restrictions and access American content.

For that, the Boxee wins this round too.

User interface
Well, in this category, the Roku wins hands down. I much prefer its UI. There's more consistency across channels (apps) than on the Boxee. Often on the Boxee, an 'app' will just open the Boxee browser (it's really awful) and let you browse to your content.

On the Roku, all channels I’ve used have the same polished UI. For that, the Roku wins this round.

Channels (apps)

Well, if I relied solely on those for content, on either device, I wouldn't watch much. Most of them have crippled content in Canada compared to the US. It's sad really, because it kinda forces my hand towards piracy, which is a much better experience. I'd be willing to pay if there was a really good service offering the current seasons of the shows I watch. The media cartels really don't get it. Their loss, I guess.

In any case, the apps are much better on the Roku, as I mentioned previously. YouTube's is really much nicer. Netflix's new design is great too. A nice upgrade.

Roku wins this round.

Crunchyroll
Let's take a minute to talk about Crunchyroll. It's a site that streams anime. These guys get it. They understand the new media reality. For free, you can stream at decent quality, but for a $6/month fee, you get full HD streams. No dicking around: an hour after a show airs, it's online for you to watch.

If only more sites like that existed. I'll gladly support these guys just because they're cool. And because I like anime :)

Netflix
Netflix deserves its own category. It was well known on the Boxee forums that the video quality of Netflix on the Boxee just plain sucked. Add to that the poor content selection in Canada, and that's why I never bothered to register past the free monthly trial.

I know I could easily circumvent that restriction. It's well documented and easy to do. But I also believe in voting with my money. If I agree to pay for a service that's abusing me and treating me poorly, I'm shooting myself in the foot.

That said, I decided to give Netflix another try, mainly out of curiosity, to see if the video quality was better on the Roku. Holy Jeebus! It's like night and day. I get real HD, and the SD content is not bad either.

It’s to the point that I’m considering signing up. I’m kinda torn on this one.

In any case, Roku wins this round.

Streaming local content

Well, this one is easy. The Boxee is the better experience here, simple as that. The Boxee can connect to NFS, AFS and SMB/CIFS shares to read content. The Roku can't; it relies on DLNA.

After reviewing the available options, I decided to give Plex a shot, mainly because it can run on my NAS. Well, that didn't end well. My NAS is way too slow to run such a CPU-intensive app. Plex can transcode on the fly the video formats the Roku doesn't support. Not sure I'm going to stick with it either, as all the clients need to be purchased. I might test other setups using a different DLNA server.

In any case, Boxee wins this round for being simpler in that regard.

Games
Not going to say much about this, except that the Roku 3 has a motion sensor in the remote, kinda like the Wii, so you can play games with it. You get one of the Angry Birds games for free. To be honest, I don't care much about that. I'm a purist who likes to game on PCs ;)

Since the Boxee doesn't have games, that makes it a de facto victory for the Roku in this round.

The remote
Both remotes use direct Wi-Fi, which means you don't need to point them at the device to use them. Kinda neat. The Boxee's also has a QWERTY keyboard on one of its sides, but I always found it clunky.

I much prefer the Roku's. Its form fits better in my hand.

Oh, and one more thing, you can install a Roku app on your iOS or Android devices to get an enhanced remote.

Roku wins again.

Conclusion
All in all, the Boxee was great years ago, but the Roku is clearly the better device in 2014. Not sure what I'll do with my old Boxee. I thought about building a bedroom setup when I replace my TV, moving the old TV in there with the Boxee. But the Roku is so affordable, I might just get a new one.

In many web infrastructures, you have a centralized share that serves data to a farm of web servers. The protocol used is irrelevant to this post. One problem is that when that share becomes unavailable, the web servers can't know about it: they can't differentiate between writing a file to a local disk and writing to the shared drive.

Why does it matter?

You might end up with data inconsistencies. While the share is down, the web servers keep writing, but instead of the writes landing on the share, they land on the local file system under the mount point. So when the ops janitor gets around to remounting the share, you'll end up with missing files.

I had to deal with such a case this week. After resyncing the data to get back to a consistent state, I started looking for a permanent solution.

I first started by changing the ownership of the directory used as a mount point to root, and changing its permissions to 000. That solves part of the problem: it stops any daemon running as anyone except root.

[root@localhost ~]# mkdir test
[root@localhost ~]# chmod 000 test
[root@localhost ~]# touch test/foo
[root@localhost ~]# mkdir test/bar
[root@localhost ~]# ll test/
total 4
drwxr-xr-x 2 root root 4096 Dec 12 15:38 bar
-rw-r--r-- 1 root root    0 Dec 12 15:38 foo

But what if your daemon is running as root?

On Linux systems, the almighty root can write anywhere. One way to prevent that would be SELinux, but that seemed overkill, and it doesn't work on all distros. The solution is to use chattr, with a bit of voodoo. You can see the solution below.

[root@localhost ~]# touch test/.immutable
[root@localhost ~]# chattr +i test/.immutable
[root@localhost ~]# chattr +i test/
[root@localhost ~]# mkdir test/foobar
mkdir: cannot create directory `test/foobar': Permission denied
[root@localhost ~]# touch test/barfoo
touch: cannot touch `test/barfoo': Permission denied

Cool trick.


I’d like to thank Simon Plourde, a coworker of mine, who taught me this technique today.

I'm a geek. As such, I like to play with different tools and technologies, even when I'm not at work. I find working with computers relaxing. It's my lifelong passion, really.

That explains why I use Pingdom for uptime, performance and real user monitoring on my blog. On top of that, I also use Cloudflare to cache my site worldwide.

Overkill? Over-engineered? Yes and yes! But both are free, and their geek cool factor is around awesome.

That said, Pingdom connects every few seconds to monitor uptime and performance, which pollutes my access logs. Well, used to pollute them. There's a way to prevent Nginx from logging requests coming from Pingdom (or any similar service).

I added the following in my vhost configuration:

server {
  location / {
    if ($http_user_agent ~ "Pingdom") {
      access_log off;
    }
  }
}
After restarting Nginx, I tailed the access log file again, and no more log pollution!
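Note that `if` inside a `location` block is generally discouraged in Nginx. Since Nginx 1.7.0, the `access_log` directive accepts an `if=` parameter, so the same filtering can be done with a `map` instead — a sketch, assuming the stock combined log format:

```nginx
# in the http {} block: $loggable is 0 for Pingdom probes, 1 for everyone else
map $http_user_agent $loggable {
    ~Pingdom 0;
    default  1;
}

server {
    # only log requests where $loggable evaluates to a non-zero value
    access_log /var/log/nginx/access.log combined if=$loggable;
}
```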

I've been managing UNIX systems for nearly 15 years now. Over the years, I've picked up habits around some basic commands, and never revisited them, assuming nothing had changed in those 15 years. One such command is the venerable tar.

It's been around for quite some time: it was added to the POSIX standard in 1988, so I'm guessing it's even older. Back then, SMP machines weren't as commonplace as today, so it made sense to write single-threaded applications. tar is one of them, and as such, it's slower than a multi-threaded alternative.

How I got to looking into speeding up compression with tar was through a task I'm working on at the office. Due to business requirements, we need to be able to bring back a MySQL instance and lose less than 15 minutes of data. To do that, we use the excellent tool from Percona: XtraBackup. It allows incremental backups when using the InnoDB engine. It's pretty awesome.

Even then, I still needed to improve performance, to bring down the time it takes to back up and archive said backup. With a database of over 100GB, compression was now the bottleneck.

After a bit of googling, I found this 'next-gen' compression tool: pigz. It's a gzip replacement capable of using multiple CPUs to speed up compression.

Some numbers

So, having found this new tool, I decided to benchmark both techniques. You'll find the results below.

tar and gzip

[dbadmin@dbserver backup]$ time tar czpf test.tar.gz full/

real  11m48.164s
user  7m21.513s
sys   0m17.688s

tar and pigz

[dbadmin@dbserver backup]$ time tar -c full/ | pigz > test2.tar.gz

real  5m28.211s
user  8m5.384s
sys   0m16.658s

As you can see, on this VM with 2 vCPUs, pigz cut the compression time in half. Adding 2 more vCPUs would bring that time down again.

In the first installment of my Docker series, we learned how to create a RHEL/CentOS base image. The steps described in that post can most likely be applied to any other Linux distribution; you just need to find a tool similar to febootstrap to create the image.

Running the examples is kinda neat. For a whole minute. So what now? Well, when I decided to investigate Docker, my goal was to be able to run my blog (and some other sites I host for friends) in a container. All that content is served by Nginx, so the first logical step is to get Nginx running in our base image.

The Dockerfile
That file is similar to a Gemfile, Vagrantfile or Berksfile. It's the centerpiece of the Docker container build process. Sure, you could docker run -i -t foo /bin/bash, customize your container, then commit it. But that's so manual, so 90s. With a configuration file, you know you can reproduce the same container over and over again, with no possible deviation. It'll always be the same. Kinda important when you run software stacks.

I'll take you through a minimal Dockerfile configuration that gets Nginx running in a container. I'll only explain the basics; please refer to the official Dockerfile documentation for a complete overview.

FROM
This should be the first entry in your Dockerfile. It tells Docker which base image to use to build the new image.

FROM centos

MAINTAINER
Shameless publicity if you publish them :)

MAINTAINER Jean-Francois Theroux <[email protected]>

ADD
I maintain a custom build of Nginx + ModSecurity for my needs. I need to fetch the YUM repository configuration file in order for YUM to install the proper RPM.

ADD http://static.theroux.ca/repository/failshell.repo /etc/yum.repos.d/

Basically, ADD allows you to fetch files and copy them somewhere in your container.

RUN
Used to customize your container. You can stack as many RUN instructions as you want in your Dockerfile.

NOTE: At this point, I’m not sure yet how I’ll integrate Chef with Docker. Not much of a fan of shell type configurations. The only negative point of Docker as far as I’m concerned.

RUN yum -y install nginx

Note on foreground vs background

According to the Docker way, your container should run only one service. That's the whole point of using containers, after all. So instead of backgrounding your service, you should leave it running in the foreground. You basically run one command; that's the sole purpose of your container. A very simple-minded container :)

So, we need to tell Nginx to do just that!

RUN echo "daemon off;" >> /etc/nginx/nginx.conf

EXPOSE
Exposes ports from your container to the outside.

EXPOSE 80

NOTE: That's the container-internal port. The outside world will use a different one, assigned dynamically by Docker.

CMD
The heart and soul of your container. What it does. Its life's goal!

CMD /usr/sbin/nginx -c /etc/nginx/nginx.conf

Complete file

FROM centos
MAINTAINER Jean-Francois Theroux <[email protected]>

# Configure my repo to use my custom Nginx with modsec
ADD http://static.theroux.ca/repository/failshell.repo /etc/yum.repos.d/

# install deps
RUN yum -y install nginx

# tell Nginx to stay foregrounded
RUN echo "daemon off;" >> /etc/nginx/nginx.conf

# expose HTTP
EXPOSE 80

# Run
CMD /usr/sbin/nginx -c /etc/nginx/nginx.conf

Build the container

Ok so now what? I have a configured Dockerfile. But will it work?

Let’s build it to see.

[failshell@banshee]:nginx git:(master) ✗ $ docker build -t nginx:base .
Uploading context 10240 bytes
Step 1 : FROM centos
 ---> 5df8a8c8477a
Step 2 : MAINTAINER Jean-Francois Theroux <[email protected]>
 ---> Using cache
 ---> 52df3c0f5244
Step 3 : ADD http://static.theroux.ca/repository/failshell.repo /etc/yum.repos.d/
 ---> a4fef1c7065a
Step 4 : RUN yum -y install nginx
 ---> Running in 7048bfe64f8e
Loaded plugins: fastestmirror
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package nginx.x86_64 0:1.4.1-5.el6.modsec will be installed
--> Processing Dependency: libaprutil-1.so.0()(64bit) for package: nginx-1.4.1-5.el6.modsec.x86_64
--> Processing Dependency: libapr-1.so.0()(64bit) for package: nginx-1.4.1-5.el6.modsec.x86_64
--> Running transaction check
---> Package apr.x86_64 0:1.3.9-5.el6_2 will be installed
---> Package apr-util.x86_64 0:1.3.9-3.el6_0.1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

 Package         Arch          Version                   Repository        Size
================================================================================
Installing:
 nginx           x86_64        1.4.1-5.el6.modsec        failshell        482 k
Installing for dependencies:
 apr             x86_64        1.3.9-5.el6_2             base             123 k
 apr-util        x86_64        1.3.9-3.el6_0.1           base              87 k

Transaction Summary
Install       3 Package(s)

Total download size: 692 k
Installed size: 1.9 M
Downloading Packages:
Total                                           915 kB/s | 692 kB     00:00
warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID c105b9de: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
Importing GPG key 0xC105B9DE:
 Userid : CentOS-6 Key (CentOS 6 Official Signing Key) <[email protected]>
 Package: centos-release-6-4.el6.centos.10.x86_64 (@febootstrap/$releasever)
 From   : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : apr-1.3.9-5.el6_2.x86_64                                     1/3
  Installing : apr-util-1.3.9-3.el6_0.1.x86_64                              2/3
  Installing : nginx-1.4.1-5.el6.modsec.x86_64                              3/3

Thanks for using NGINX!

Check out our community web site:
* http://nginx.org/en/support.html

If you have questions about commercial support for NGINX please visit:
* http://www.nginx.com/support.html

  Verifying  : nginx-1.4.1-5.el6.modsec.x86_64                              1/3
  Verifying  : apr-util-1.3.9-3.el6_0.1.x86_64                              2/3
  Verifying  : apr-1.3.9-5.el6_2.x86_64                                     3/3
Installed:
  nginx.x86_64 0:1.4.1-5.el6.modsec

Dependency Installed:
  apr.x86_64 0:1.3.9-5.el6_2          apr-util.x86_64 0:1.3.9-3.el6_0.1

 ---> 301e8ca0a501
Step 5 : RUN echo "daemon off;" >> /etc/nginx/nginx.conf
 ---> Running in d4f524f94080
 ---> 7a811663f5be
Step 6 : EXPOSE 80
 ---> Running in 5ad4136d0b77
 ---> ba0b898d2763
Step 7 : CMD /usr/sbin/nginx -c /etc/nginx/nginx.conf
 ---> Running in 57ddfad34428
 ---> a2e31e8f510e
Successfully built a2e31e8f510e

We should now see our new image in the list of available images.

[failshell@banshee]:nginx git:(master) ✗ $ docker images
REPOSITORY          TAG                 ID                  CREATED              SIZE
nginx               base                a2e31e8f510e        About a minute ago   12.29 kB (virtual 371.3 MB)
centos              latest              5df8a8c8477a        5 days ago           316.8 MB (virtual 316.8 MB)

We’re now ready to try to run our container :)

Running your container

[failshell@banshee]:nginx git:(master) ✗ $ docker run -d nginx:base

When run with -d, Docker prints the new container's ID. A new ID is generated each time you run a container.

Let’s see if it’s really running.

[failshell@banshee]:nginx git:(master) ✗ $ docker ps
ID                  IMAGE               COMMAND                CREATED             STATUS              PORTS
f98324f6bb5f        nginx:base          /bin/sh -c /usr/sbin   49 seconds ago      Up 49 seconds       49161->80

The ID matches. Our container is up and running. We can also see which publicly exposed port our service is running on.

Let’s curl it.

[failshell@banshee]:nginx git:(master) ✗ $ curl http://localhost:49161
<!DOCTYPE html>
<title>Welcome to nginx!</title>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>

That wraps it up. The main thing to remember is that the application you run inside your container should run in the foreground, and you only want to run one application per container.

I’m now ready to Docker all the things :)

That said, I still have a few things to look into before I can seriously consider using Docker. Mainly, how to integrate Chef (Berkshelf? Vagrant?) with Docker.

There's been a lot of buzz lately around the Go programming language. A lot of cool tools have been written in it. One in particular has a lot of buzz of its own: Docker. Like many others, I'm really interested in Docker, because it has the potential to help me with a recurring issue: making software deployments as painless as possible.

Most deployment issues have been solved, for me at least, by using a CM tool like Chef. Problem is, sometimes you need to roll back a deployment, and even with a CM tool, that's gonna be tricky.

With Docker, you simply replace a container, and you are good to go.

So as I explore that new technology, I plan on posting my findings and experience with it here.

In this first installment, I will explore building a RHEL/CentOS Docker base image.

What is Docker?

Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.

Requirements
  • a RHEL/CentOS VM
  • a kernel supporting AUFS
  • Docker
  • febootstrap (available in the EPEL repository)
  • IP Forwarding enabled (otherwise, networking in containers won’t work)
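On that last point: on RHEL-family systems, IP forwarding is typically enabled through /etc/sysctl.conf and applied with `sysctl -p` — a minimal fragment (the comment is mine):

```
# /etc/sysctl.conf -- containers get no outbound networking without this
net.ipv4.ip_forward = 1
```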

Getting our environment ready

I've written a Chef cookbook (use >= 0.1.6) that takes care of setting up a Docker-ready server. I recommend using it to get up and running quickly. Otherwise, make sure you meet the above requirements before moving on.

Building the image

We'll be using febootstrap to create a RHEL/CentOS image in a fakeroot. It mimics the behavior of debootstrap, and it's very useful for creating images for LXC/OpenVZ.

[root@banshee ~]# febootstrap -i iputils -i vim-minimal -i iproute -i bash -i coreutils -i yum centos centos http://centos.mirror.iweb.ca/6.4/os/x86_64/ -u http://centos.mirror.iweb.ca/6.4/updates/x86_64/
febootstrap                                                                                                | 3.7 kB     00:00
febootstrap/primary_db                                                                                     | 4.4 MB     00:03
febootstrap-updates                                                                                        | 3.4 kB     00:00
febootstrap-updates/primary_db                                                                             | 4.4 MB     00:02
Setting up Install Process
Resolving Dependencies

NOTE: Make sure you run febootstrap as root. Otherwise, you're going to walk into a world of pain with permissions in your container.

Now that we have our image in our fakeroot, we need to import it into Docker.

[root@banshee ~]# cd centos/
[root@banshee centos]# tar -c . | docker import - centos

Our newly created image is now available and ready for use.

[root@banshee centos]# docker images
REPOSITORY          TAG                 ID                  CREATED             SIZE
centos              latest              5df8a8c8477a        38 seconds ago      316.8 MB (virtual 316.8 MB)

Testing our new image

Let’s see if it works.

[root@banshee centos]# docker run centos /bin/ping google.com -c 1
PING google.com ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=60 time=10.7 ms

--- google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 13ms
rtt min/avg/max/mdev = 10.742/10.742/10.742/0.000 ms
[root@banshee centos]# docker run -t -i centos /bin/bash
bash-4.1# uname -r
bash-4.1# hostname

That’s it.

In the next installment, I will explain how to customize our new base image to run a website using Nginx.