The Shining Path of Least Resistance

LeastResistance.Net

Archive for October, 2016

Habitat and AWS’ Elastic Container Service

Posted by mattray on October 26, 2016

I was fortunate to present to the AWS Sydney North User Group on the topic “Build Better Containers for ECS with Habitat” (slides here). I was new to using Amazon’s Elastic Container Service (ECS), so I figured I’d document my findings as I went.

Habitat

If you’re unfamiliar with Habitat, go read through the introduction and try out the tutorial. It’s a new project with the ambitious goal of providing

Application automation that enables modern application teams to build, deploy, and run any application in any environment – from traditional data-centers to containerized microservices.

I’m not going to dive too far into Habitat, but it makes building portable applications and exporting them to Docker really easy. For demonstration purposes, I reused the National Parks demo from this recent Chef blog post.

The National Parks plan Bill used in his blog post was published to GitHub and the MongoDB plan was published to the Habitat depot, so I could build my own Docker containers with them. Assuming you have Habitat already installed and your docker-machine running, you can build Docker images for the Mongo database and the National Parks Tomcat application (after checking it out from GitHub).

  $ git clone git@github.com:billmeyer/national-parks-plan.git
  $ cd national-parks-plan
  $ hab studio enter

And once you’re in the studio

  [1][default:/src:0]# build
  ...
  [2][default:/src:0]# hab pkg export docker mattray/national-parks
  ...
  [3][default:/src:0]# hab pkg export docker billmeyer/mongodb

After exiting the studio, your docker images are ready.

  $ docker images
  REPOSITORY                                                                 TAG                    IMAGE ID            CREATED             SIZE
  billmeyer/mongodb                                                          3.2.6-20160824195527   dc1e785cb432        8 seconds ago       301 MB
  billmeyer/mongodb                                                          latest                 dc1e785cb432        8 seconds ago       301 MB
  mattray/national-parks                                                     0.1.3-20161026234736   bdf5dc7b7465        32 seconds ago      708.5 MB
  mattray/national-parks                                                     latest                 bdf5dc7b7465        32 seconds ago      708.5 MB
  habitat-docker-registry.bintray.io/studio                                  0.11.0                 7ebd429888ef        12 days ago         293.4 MB

EC2 Container Registry

Once we’ve got our containers built locally, it’s time to move them to Amazon’s EC2 Container Registry (ECR), their private Docker registry. I’m not going to go into the specifics of configuring your AWS developer setup, but you’ll need the aws and ecs-cli tools installed. First we’ll need to log in to ECR

  $ aws ecr get-login

and use its output to log in to our new ECR Docker registry.

  $ docker login ...
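If you’d rather not copy and paste the login command, you can evaluate it directly (a small convenience with the 2016-era AWS CLI, where get-login prints a complete docker login command; substitute your own region):

  $ eval $(aws ecr get-login --region ap-southeast-2)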

Habitat put the containers into namespaces of billmeyer/mongodb and mattray/national-parks, so we’ll need to create these within ECR.

  $ aws ecr create-repository --repository-name billmeyer/mongodb
  ...
  $ aws ecr create-repository --repository-name mattray/national-parks
  ...

Once we have these we’ll tag and push our images to ECR (note you’ll need your aws_account_id and region).

  $ docker tag billmeyer/mongodb:latest aws_account_id.dkr.ecr.ap-southeast-2.amazonaws.com/billmeyer/mongodb:latest
  $ docker tag mattray/national-parks:latest aws_account_id.dkr.ecr.ap-southeast-2.amazonaws.com/mattray/national-parks:latest
  $ docker push aws_account_id.dkr.ecr.ap-southeast-2.amazonaws.com/billmeyer/mongodb:latest
  The push refers to a repository [aws_account_id.dkr.ecr.ap-southeast-2.amazonaws.com/billmeyer/mongodb]
  2922e8bbae38: Pushed
  latest: digest: sha256:105add47da75fb85ba605a0bdf58a4877705c80d656955b55792005267365a11 size: 5920
  $ docker push aws_account_id.dkr.ecr.ap-southeast-2.amazonaws.com/mattray/national-parks:latest
  The push refers to a repository [aws_account_id.dkr.ecr.ap-southeast-2.amazonaws.com/mattray/national-parks]
  b3ddf26b58dc: Pushed
  latest: digest: sha256:9a74d0cddd5688e126d328527d63c5225d2ce320da67cadfc73fdf92f2fd1dcf size: 6676

EC2 Container Service

Now that our Docker images are pushed to ECR, let’s run them on Amazon’s ECS. First we’ll need to set up our ecs-cli tooling:

  $ ecs-cli configure --cluster hab-demo

which creates a ~/.ecs/config file; you may need to add your AWS credentials and region to it. With that in place, we can provision EC2 instances to host our containers.

  $ ecs-cli up --keypair mattray-apac --capability-iam --size 2 --instance-type t2.medium --port 22

This creates a CloudFormation stack with two t2.medium ECS-optimized Amazon Linux hosts with SSH access open. If you have an existing VPC, you could add the cluster to it and attach a security group opening up any additional ports you may need. For this demo I went into the AWS console and opened inbound port 8080.
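If you prefer the CLI to the console, the same inbound rule can be added to the cluster’s security group (just a sketch — sg-xxxxxxxx is a placeholder for whatever security group the CloudFormation stack created):

  $ aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
      --protocol tcp --port 8080 --cidr 0.0.0.0/0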

The National Parks application is described in a Docker Compose file, np-demo.yml:

  version: '2'
  services:
    mongo:
      image: aws_account_id.dkr.ecr.ap-southeast-2.amazonaws.com/billmeyer/mongodb:latest
      hostname: "mongodb"
    national-parks:
      image: aws_account_id.dkr.ecr.ap-southeast-2.amazonaws.com/mattray/national-parks:latest
      ports:
        - "8080:8080"
      links:
        - mongo
      command: --peer mongodb --bind database:mongodb.default

We define the mongo and national-parks services and use the Docker images from ECR. The Docker Compose documentation indicates that links should create /etc/hosts entries, but this does not currently appear to work with ECS, so we set hostname: "mongodb" instead; that lets Habitat automatically peer with this node and connect the National Parks Tomcat application to Mongo. links still manages the deployment order of the containers, so it’s worth keeping. We launch our ECS task with

  $ ecs-cli compose --file np-demo.yml -p np-demo up
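Before hunting for the host, you can confirm both containers came up from the CLI (assuming your ecs-cli has the ps subcommand, which also shows the port mappings):

  $ ecs-cli ps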

From the AWS console, find the public IP address of the ECS host in the cluster and connect to it at http://ecs-host-ip:8080/national-parks

[Screenshot: the National Parks application running on ECS]
You can also SSH to this host to run docker commands locally (e.g. docker logs) for debugging purposes.

Posted in aws, chef, docker

Building Chef on the BeagleBone Black

Posted by mattray on October 20, 2016

I wanted to get Chef running on my BeagleBone Black running Debian, using the full-stack Omnibus builder Chef uses for their packages. While ARM is not a supported platform, the open source community had already done a lot of work getting it ready. The first step was to get the build toolchain in place, so I followed the instructions from https://github.com/chef/omnibus-toolchain. I had to make one small fix (already merged), but here’s how I got omnibus-toolchain installed:

sudo apt-get install autoconf binutils-doc bison build-essential flex gettext ncurses-dev libssl-dev libreadline-dev zlib1g-dev git libffi6 libffi-dev ruby ruby-dev
sudo gem install bundler
git clone https://github.com/chef/omnibus-toolchain.git
cd omnibus-toolchain
sudo bundle install --without development
sudo bundle exec omnibus build omnibus-toolchain
sudo FORCE_UNSAFE_CONFIGURE=1 bundle exec omnibus build omnibus-toolchain

Note the FORCE_UNSAFE_CONFIGURE=1; there was a bug in gtar that I didn’t debug.
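Once the build finished, installing the resulting toolchain package makes it available locally for the later build steps (a sketch only — the exact filename will differ, and pkg/ is the default omnibus output directory):

sudo dpkg -i pkg/omnibus-toolchain_*_armhf.deb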

Luckily Carl Perry already had an ARMHF Chef 12.8.1 build available for bootstrapping.

After installing the package locally

dpkg -i /tmp/chef_12.8.1%2B20160319051316-1_armhf.deb
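A quick sanity check that the bootstrap package landed on the PATH:

chef-client --version    # should report Chef: 12.8.1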

I did a chef-client run remotely

$ knife bootstrap 192.168.0.11 -x debian --sudo -N beaglebone
Creating new client for beaglebone
Creating new node for beaglebone
Connecting to 192.168.0.11
192.168.0.11 -----> Existing Chef installation detected
192.168.0.11 Starting the first Chef Client run...
192.168.0.11 Starting Chef Client, version 12.8.1
192.168.0.11 resolving cookbooks for run list: []
192.168.0.11 Synchronizing Cookbooks:
192.168.0.11 Installing Cookbook Gems:
192.168.0.11 Compiling Cookbooks...
192.168.0.11 [2016-10-13T07:53:21+11:00] WARN: Node beaglebone has an empty run list.
192.168.0.11 Converging 0 resources
192.168.0.11
192.168.0.11 Running handlers:
192.168.0.11 Running handlers complete
192.168.0.11 Chef Client finished, 0/0 resources updated in 14 seconds

The next step was to get the omnibus cookbook in place to use my machine as a builder. After sorting through the dependencies and getting it uploaded, I had to make one small change to disable grabbing the omnibus-toolchain, because I had already built it locally. Once that was in place, it was a matter of sudoing to the omnibus user, downloading the Chef source, and running

. load-omnibus-toolchain.sh
cd chef/omnibus
bundle install --without development
bundle exec omnibus build chef -l debug

And approximately 3 hours later I had a new chef_12.15.27+20161013214455-1_armhf.deb which worked great once installed.

root@beaglebone:/home/omnibus/chef-12.15.27/omnibus/pkg# dpkg -i chef_12.15.27+20161013214455-1_armhf.deb
Selecting previously unselected package chef.
(Reading database ... 82288 files and directories currently installed.)
Preparing to unpack chef_12.15.27+20161013214455-1_armhf.deb ...
Unpacking chef (12.15.27+20161013214455-1) ...
Setting up chef (12.15.27+20161013214455-1) ...
Thank you for installing Chef!

I’ll continue to refine the build process and follow along with new releases of Chef. Now I can move on to the next, more important piece, which is actually using the box. Feel free to download it: chef_12.15.27+20161013214455-1_armhf.deb

Posted in chef, linux

Installing Debian 8.6 on a BeagleBone Black

Posted by mattray on October 14, 2016

I’ve finally had a practical reason to get the BeagleBone Black out of the drawer and start using it as a home server (more on that later). It’s a nice, quiet little machine with 512 MB of RAM and a 1 GHz ARM CPU. I followed the instructions from https://beagleboard.org/getting-started to connect to it via the serial port over USB, which allowed me to connect to the web server on the included OS. It turns out I didn’t really need to do this; all I needed to do was flash my microSD card and install Debian on it.

For more in-depth Linux notes, I referred to http://elinux.org/Beagleboard:BeagleBoneBlack

I downloaded the latest Debian stable “Jessie” build for ARMHF from here. That image turned out to be a bit bloated with X and desktop tools, so I switched to the “IOT” image. I flashed the image onto a 32 GB microSD card with Etcher for OS X, which was quite painless.
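If you’d rather not use Etcher, dd from OS X works too (a sketch only — the image filename and /dev/disk2 are placeholders; check the real device with diskutil list before writing):

diskutil unmountDisk /dev/disk2
sudo dd if=bone-debian-8.6-iot-armhf.img of=/dev/rdisk2 bs=1m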

Debian on the BeagleBone Black

Next I popped the microSD card into the BeagleBone and rebooted into Debian. I was able to connect to the serial console over USB with instructions from here. For my instance, the command was

screen /dev/tty.usbmodem1413

I changed the debian user’s password away from the default and plugged in a network cable. Once it was on the network I could SSH to it; I probably wouldn’t have needed the serial console at all if I’d just looked up its IP address on the router.
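If you do skip the serial console, a quick ping scan of the LAN is one way to find the board (assuming a 192.168.0.0/24 network like mine and nmap on your workstation):

nmap -sn 192.168.0.0/24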

I copied over my SSH key so I wouldn’t need to use my password when logging in.

scp ~/.ssh/id_rsa.pub debian@192.168.0.11:~/.ssh/authorized_keys

Next I did an apt-get update; apt-get upgrade to get the latest bits and then shut it down.

I plugged it directly into the router and powered it via the USB port, since it’s meant to be an externally accessible bastion box.

Final touches

I also needed to make sure the whole microSD card was being used, so I followed these instructions:

cd /opt/scripts/tools/
git pull
sudo ./grow_partition.sh
sudo reboot

I checked the list of timezones and set mine to Sydney.

timedatectl list-timezones
sudo timedatectl set-timezone Australia/Sydney

and finally

apt-get install emacs-nox

Now it was ready to use.

Posted in geekery, linux, Uncategorized