The Shining Path of Least Resistance

LeastResistance.Net

Archive for the ‘chef’ Category

Chef 14 ARM on the BeagleBone Black – UPDATED 14.5.33

Posted by mattray on April 4, 2018

Chef 14 is now available and there are a few minor updates from the Chef 13 ARM build. Looking forward to the “official” ARM builds, but for now here’s what needed to be updated.

Ruby 2.5.0

Chef 14 is built with Ruby 2.5.0, so the first step was to build the latest Ruby. I already had an omnibus user and rbenv installed, so it was simply:

sudo su - omnibus
bash
export PATH="$HOME/.rbenv/bin:$PATH"
eval "$(rbenv init -)"
rbenv install 2.5.0
rbenv global 2.5.0
gem install bundler
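
Before moving on, it’s worth a quick sanity check that rbenv is now serving the new Ruby (my addition, not from the original post):

ruby -v    # should report ruby 2.5.0
gem --version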

omnibus-toolchain

With Ruby 2.5.0 installed, I decided to update the omnibus-toolchain to the current release.

git clone https://github.com/chef/omnibus-toolchain.git
cd omnibus-toolchain
bundle install --without development
bundle exec omnibus build omnibus-toolchain
dpkg -i pkg/omnibus-toolchain*deb
cd

Chef 14.0.190

With omnibus-toolchain built with Ruby 2.5.0, I could now build Chef 14.

export PATH="$HOME/.rbenv/bin:/opt/omnibus-toolchain/bin:$PATH"
wget https://github.com/chef/chef/archive/v14.0.190.tar.gz
tar -xzvf v14.0.190.tar.gz
cd chef-14.0.190/omnibus/
bundle install --without development
bundle exec omnibus build chef -l debug

A few hours later I had my ARM Chef client .deb built. As the root user I installed it and ran it:

dpkg -i chef_14.0.190+20180403233706-1_armhf.deb
...
Thank you for installing Chef!
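
As a quick smoke test (my assumption of the obvious follow-up, not part of the original output), ask the freshly installed client for its version:

chef-client --version    # should print something like "Chef: 14.0.190"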

You may download the chef_14.0.190+20180403233706-1_armhf.deb or follow these instructions to build your own package for your platform.

UPDATE

Chef 14.1.1 is now available, so I’ve built new 14.1.1 ARM packages, updating the above to Ruby 2.5.1, the latest omnibus-toolchain and the Chef 14.1.1 source.

chef_14.1.1_armhf.deb

UPDATE 2

Chef 14.4.56 is now the latest stable build, so I’ve built new ARM packages. Ruby 2.5.1, omnibus-toolchain 1.1.90 and Chef 14.4.56 from source.

chef_14.4.56_armhf.deb

UPDATE 3

Chef 14.5.33 is the latest stable build, so new ARM packages are available. Ruby 2.5.1, omnibus-toolchain 1.1.90 and Chef 14.5.33 from source.

chef_14.5.33_armhf.deb


Habitat and AWS’ Elastic Container Service

Posted by mattray on October 26, 2016

I was fortunate to present to the AWS Sydney North User Group on the topic “Build Better Containers for ECS with Habitat” (slides here). I was new to using Amazon’s Elastic Container Service (ECS), so I figured I’d document my findings as I went.

Habitat

If you’re unfamiliar with Habitat, go read through the introduction and try out the tutorial. It’s a new project with the ambitious goal of providing

Application automation that enables modern application teams to build, deploy, and run any application in any environment – from traditional data-centers to containerized microservices.

I’m not going to dive too far into Habitat, but it makes building portable applications and exporting them to Docker really easy. For demonstration purposes, I reused the National Parks demo from this recent Chef blog post.

The National Parks plan Bill used in his blog post was published to GitHub and the MongoDB plan was published to the Habitat depot, so I could build my own Docker containers from them. Assuming you have Habitat installed and your docker-machine running, you can build Docker images for the Mongo database and the National Parks Tomcat application (after checking it out from GitHub).

  $ git clone git@github.com:billmeyer/national-parks-plan.git
  $ cd national-parks-plan
  $ hab studio enter

And once you’re in the studio:

  [1][default:/src:0]# build
  ...
  [2][default:/src:0]# hab pkg export docker mattray/national-parks
  ...
  [3][default:/src:0]# hab pkg export docker billmeyer/mongodb

After exiting the studio, your docker images are ready.

  $ docker images
  REPOSITORY                                                                 TAG                    IMAGE ID            CREATED             SIZE
  billmeyer/mongodb                                                          3.2.6-20160824195527   dc1e785cb432        8 seconds ago       301 MB
  billmeyer/mongodb                                                          latest                 dc1e785cb432        8 seconds ago       301 MB
  mattray/national-parks                                                     0.1.3-20161026234736   bdf5dc7b7465        32 seconds ago      708.5 MB
  mattray/national-parks                                                     latest                 bdf5dc7b7465        32 seconds ago      708.5 MB
  habitat-docker-registry.bintray.io/studio                                  0.11.0                 7ebd429888ef        12 days ago         293.4 MB

EC2 Container Registry

Once we’ve got our containers built locally, it’s time to move them to Amazon’s EC2 Container Registry (ECR), their private Docker registry. I’m not going to go into the specifics of configuring your AWS developer setup, but you’ll need the aws and ecs-cli tools installed. First we’ll need to log in to the ECR registry

  $ aws ecr get-login

and use this output to log in to our new ECR Docker registry.

  $ docker login ...
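
A common shortcut here (a sketch assuming the classic aws ecr get-login behavior of printing a complete docker login command) is to evaluate the output directly:

  $ eval $(aws ecr get-login)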

Habitat put the containers into the billmeyer/mongodb and mattray/national-parks namespaces, so we’ll need to create these within ECR.

  $ aws ecr create-repository --repository-name billmeyer/mongodb
  ...
  $ aws ecr create-repository --repository-name mattray/national-parks
  ...

Once we have these, we’ll tag and push our images to ECR (note you’ll need to substitute your own aws_account_id and region).

  $ docker tag billmeyer/mongodb:latest aws_account_id.dkr.ecr.ap-southeast-2.amazonaws.com/billmeyer/mongodb:latest
  $ docker tag mattray/national-parks:latest aws_account_id.dkr.ecr.ap-southeast-2.amazonaws.com/mattray/national-parks:latest
  $ docker push aws_account_id.dkr.ecr.ap-southeast-2.amazonaws.com/billmeyer/mongodb:latest
  The push refers to a repository [aws_account_id.dkr.ecr.ap-southeast-2.amazonaws.com/billmeyer/mongodb]
  2922e8bbae38: Pushed
  latest: digest: sha256:105add47da75fb85ba605a0bdf58a4877705c80d656955b55792005267365a11 size: 5920
  $ docker push aws_account_id.dkr.ecr.ap-southeast-2.amazonaws.com/mattray/national-parks:latest
  The push refers to a repository [aws_account_id.dkr.ecr.ap-southeast-2.amazonaws.com/mattray/national-parks]
  b3ddf26b58dc: Pushed
  latest: digest: sha256:9a74d0cddd5688e126d328527d63c5225d2ce320da67cadfc73fdf92f2fd1dcf size: 6676

EC2 Container Service

Now that our docker images are pushed to ECR, let’s run them on Amazon’s ECS. First we’ll need to set up our ecs-cli tooling:

  $ ecs-cli configure --cluster hab-demo

which creates a ~/.ecs/config file that may need your credentials.
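
If you’d rather not keep credentials in that file, the aws and ecs-cli tools also honor the standard AWS credential environment variables (a hedged sketch; the values are placeholders and the region variable is my assumption):

  $ export AWS_ACCESS_KEY_ID=AKIA...
  $ export AWS_SECRET_ACCESS_KEY=...
  $ export AWS_DEFAULT_REGION=ap-southeast-2

With that in place, we can provision EC2 instances to host our containers.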

  $ ecs-cli up --keypair mattray-apac --capability-iam --size 2 --instance-type t2.medium --port 22

This creates a CloudFormation stack with 2 t2.medium ECS-optimized Amazon Linux hosts with SSH access open. If you have an existing VPC, you could add the cluster to it and attach a security group opening up any additional ports you may need. For this demo I went into the AWS console and opened inbound port 8080.
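
If you prefer the CLI to the console for that last step, the equivalent call is a security group ingress rule (a sketch; the group id is a placeholder for the one the CloudFormation stack created):

  $ aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 8080 --cidr 0.0.0.0/0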

The National Parks application is described in a Docker Compose file, np-demo.yml:

  version: '2'
  services:
    mongo:
      image: aws_account_id.dkr.ecr.ap-southeast-2.amazonaws.com/billmeyer/mongodb:latest
      hostname: "mongodb"
    national-parks:
      image: aws_account_id.dkr.ecr.ap-southeast-2.amazonaws.com/mattray/national-parks:latest
      ports:
        - "8080:8080"
      links:
        - mongo
      command: --peer mongodb --bind database:mongodb.default

We have the mongo and national-parks services and use the Docker images from the ECR. The Docker Compose documentation indicates that links should create /etc/hosts entries, but this does not appear to work with ECS at the moment, so we assign hostname: "mongodb" so that Habitat can automatically peer to this node and connect the National Parks Tomcat application to Mongo. links still manages the deployment order of the containers, so it’s worth using. We launch our ECS task with

  $ ecs-cli compose --file np-demo.yml -p np-demo up

From the AWS console, find the public IP address of the ECS host in the cluster and connect to it at http://ecs-host-ip:8080/national-parks
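
If you’d rather stay on the command line, ecs-cli can list the running containers with their ports, and curl can confirm the app is answering (a sketch; ecs-host-ip is a placeholder):

  $ ecs-cli ps
  $ curl -I http://ecs-host-ip:8080/national-parks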

You can also SSH to this host to run docker commands locally (e.g. docker logs) for debugging purposes.


Building Chef on the BeagleBone Black

Posted by mattray on October 20, 2016

I wanted to get Chef running on my BeagleBone Black running Debian, using the full-stack Omnibus builder Chef uses for its packages. While ARM is not a supported platform, the open source community had already done a lot of work getting it ready. The first step was to get the build toolchain in place, so I followed the instructions from https://github.com/chef/omnibus-toolchain. I had to make one small fix (already merged), but here’s how I got omnibus-toolchain installed:

sudo apt-get install autoconf binutils-doc bison build-essential flex gettext ncurses-dev libssl-dev libreadline-dev zlib1g-dev git libffi6 libffi-dev ruby ruby-dev
sudo gem install bundler
git clone https://github.com/chef/omnibus-toolchain.git
cd omnibus-toolchain
sudo bundle install --without development
sudo FORCE_UNSAFE_CONFIGURE=1 bundle exec omnibus build omnibus-toolchain

Note the FORCE_UNSAFE_CONFIGURE=1; there was a bug in gtar that I didn’t debug.

Luckily Carl Perry already had an ARMHF Chef 12.8.1 build available for bootstrapping.

After installing the package locally

dpkg -i /tmp/chef_12.8.1%2B20160319051316-1_armhf.deb

I did a chef-client run remotely:

$ knife bootstrap 192.168.0.11 -x debian --sudo -N beaglebone
Creating new client for beaglebone
Creating new node for beaglebone
Connecting to 192.168.0.11
192.168.0.11 -----> Existing Chef installation detected
192.168.0.11 Starting the first Chef Client run...
192.168.0.11 Starting Chef Client, version 12.8.1
192.168.0.11 resolving cookbooks for run list: []
192.168.0.11 Synchronizing Cookbooks:
192.168.0.11 Installing Cookbook Gems:
192.168.0.11 Compiling Cookbooks...
192.168.0.11 [2016-10-13T07:53:21+11:00] WARN: Node beaglebone has an empty run list.
192.168.0.11 Converging 0 resources
192.168.0.11
192.168.0.11 Running handlers:
192.168.0.11 Running handlers complete
192.168.0.11 Chef Client finished, 0/0 resources updated in 14 seconds

The next step was to get the omnibus cookbook in place to use my machine as a builder. After sorting through the dependencies and getting it uploaded, I had to make one small change to disable fetching the omnibus-toolchain, since I had already built it locally. Once that was in place, it was a matter of sudoing to the omnibus user, downloading the Chef source and running:

. load-omnibus-toolchain.sh
cd chef/omnibus
bundle install --without development
bundle exec omnibus build chef -l debug

And approximately 3 hours later I had a new chef_12.15.27+20161013214455-1_armhf.deb, which worked great once installed.

root@beaglebone:/home/omnibus/chef-12.15.27/omnibus/pkg# dpkg -i chef_12.15.27+20161013214455-1_armhf.deb
Selecting previously unselected package chef.
(Reading database ... 82288 files and directories currently installed.)
Preparing to unpack chef_12.15.27+20161013214455-1_armhf.deb ...
Unpacking chef (12.15.27+20161013214455-1) ...
Setting up chef (12.15.27+20161013214455-1) ...
Thank you for installing Chef!

I’ll continue to refine the build process and follow along with new releases of Chef. Now I can move on to the next, more important piece, which is actually using the box. Feel free to download it: chef_12.15.27+20161013214455-1_armhf.deb


ZoneMinder Chef Cookbook

Posted by mattray on February 20, 2014

D-Link 930L

A recent rash of burglaries in my neighborhood encouraged me to set up a security camera for my front door. I’d recently heard the FLOSS Weekly episode on ZoneMinder, so I figured I would check it out. The wiki listed the D-Link 930L as a working option, and it was about $40 on Amazon. It is wifi-connected and does 640×480 video, so it’s a pretty good basic solution. I plugged it in, set it up and everything “just worked”. Rather than subscribe to D-Link’s cloud service, I configured ZoneMinder to record video when motion was detected. The Android app lets me watch the video live from anywhere, and I’ve hooked it up to my Roku as well.

I published a Chef cookbook for installing and configuring ZoneMinder, following the configuration guide. The monitor configuration is stored in ZoneMinder’s database and I didn’t feel like spending the time to automate that, so the cookbook is pretty basic; the monitor setup was done in the web UI. The code for the cookbook is at https://github.com/mattray/zoneminder-cookbook
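
A minimal sketch of putting the cookbook to use (assuming the cookbook’s default recipe; the node name is hypothetical):

knife cookbook upload zoneminder
knife node run_list add camera-box 'recipe[zoneminder]'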

Here are screenshots of the configuration screens:

Monitor: General Settings

Monitor: Source

ZoneMinder: Options


Spiceweasel 2.0

Posted by mattray on January 14, 2013

One year after the release of Spiceweasel 1.0, Spiceweasel 2.0 is now available! For those of you unfamiliar with Spiceweasel, it is a command-line tool for deploying Chef infrastructure from a manifest file. The JSON or YAML manifest is a simplified representation of the Chef repository contents that you may use to recreate and share how to build the application or infrastructure. This file is validated to ensure that all the components listed are present and the correct versions are available, and it can (and should) be managed in version control with the rest of your repository.

There are minor updates to the manifest syntax in 2.0, with a focus on clarity. Spiceweasel has 3 major new features since the 1.0 release:

Execute
Spiceweasel now has the ability to directly execute the knife commands, creating (or deleting or rebuilding) the infrastructure described in the manifest when you use the -e/--execute flag.

Extract
Rather than write your manifest from scratch, Spiceweasel may generate the knife commands or manifests for you. Running spiceweasel --extractlocal generates the knife commands required to upload all the existing cookbooks, roles, environments and data bags found in your Chef repository with validation. spiceweasel --extractjson or spiceweasel --extractyaml will generate the JSON or YAML manifest for your Chef repository, which may be captured and then tracked in version control.
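
Together, Extract and Execute make for a handy round trip (a sketch; the manifest filename is arbitrary):

spiceweasel --extractyaml > my-infrastructure.yml
spiceweasel --execute my-infrastructure.yml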

Clusters
Clusters are not a type supported by Chef; they are a logical construct added by Spiceweasel to enable managing sets of infrastructure together. The clusters section is a special case of nodes: each member of the named cluster in the manifest is tagged to ensure that the entire cluster may be created in sync (refresh and delete coming soon). The node syntax is the same as that under nodes; the only addition is the cluster name.

clusters:
- mycloud:
  - openstack 1:
      run_list: role[db]
      options: -S mray -i ~/.ssh/local.pem -Ia0b27dd9-e008-4a13-8cda-a2c74a28b7b1 -f 3
  - openstack 2:
      run_list: role[web] recipe[mysql::client]
      options: -S mray -i ~/.ssh/local.pem -Ia0b27dd9-e008-4a13-8cda-a2c74a28b7b1 -f 2
- amazon:
  - ec2 1:
      run_list: role[db]
      options: -S mray -i ~/.ssh/mray.pem -x ubuntu -I ami-8af0f326 -f m1.medium
  - ec2 2:
      run_list: role[web] recipe[mysql::client]
      options: -S mray -i ~/.ssh/mray.pem -x ubuntu -I ami-7000f019 -f m1.small

This would generate the following knife commands:
knife openstack server create -S mray -i ~/.ssh/local.pem -Ia0b27dd9-e008-4a13-8cda-a2c74a28b7b1 -f 3 -j '{"tags":["mycloud+roledb"]}' -r 'role[db]'
knife openstack server create -S mray -i ~/.ssh/local.pem -Ia0b27dd9-e008-4a13-8cda-a2c74a28b7b1 -f 2 -j '{"tags":["mycloud+rolewebrecipemysqlclient"]}' -r 'role[web],recipe[mysql::client]'
knife openstack server create -S mray -i ~/.ssh/local.pem -Ia0b27dd9-e008-4a13-8cda-a2c74a28b7b1 -f 2 -j '{"tags":["mycloud+rolewebrecipemysqlclient"]}' -r 'role[web],recipe[mysql::client]'
knife ec2 server create -S mray -i ~/.ssh/mray.pem -x ubuntu -I ami-8af0f326 -f m1.medium -j '{"tags":["amazon+roledb"]}' -r 'role[db]'
knife ec2 server create -S mray -i ~/.ssh/mray.pem -x ubuntu -I ami-7000f019 -f m1.small -j '{"tags":["amazon+rolewebrecipemysqlclient"]}' -r 'role[web],recipe[mysql::client]'
knife ec2 server create -S mray -i ~/.ssh/mray.pem -x ubuntu -I ami-7000f019 -f m1.small -j '{"tags":["amazon+rolewebrecipemysqlclient"]}' -r 'role[web],recipe[mysql::client]'

What’s important to note here is the use of tags to identify cluster members; we’ll be coming back to this in future releases.

Roadmap
As always the code is available at https://github.com/mattray/spiceweasel with a detailed changelog. The code has been heavily refactored into discrete libraries and the next few releases should add additional cluster features and integration with Berkshelf and Librarian.

Feature suggestions and patches are always welcome, thanks for using Spiceweasel!


Chef for OpenStack Status 11/2

Posted by mattray on November 2, 2012

Getting back into the swing of semi-regular updates. Last week was the Chef Developer Summit, with lots of great conversations and quite a few people interested in OpenStack. This week was mostly catching up, trying to clean up a few Essex leftovers before moving to Folsom.

  • I bumped the Essex versions to 2012.1.0 to sync with the OpenStack versioning, per feedback at the OpenStack Summit. I tagged everything for Essex and merged to master.
  • Added an ‘lxc’ role to enable using LXC. Just an attribute and it just worked, so awesome.
  • Added placeholder cookbooks for quantum, cinder and ceilometer. These have the suffix “-cookbook” in my GitHub; there was some discussion about wanting to rename the cookbook repos for the other 5 projects. Anyone feel strongly?
  • Updated all the Community cookbook dependencies and retested (apt, erlang, database, ntp, apache2, mysql, rabbitmq, openssh).
  • Released a new version of pxe_dust which enforces assigning the PXE-booted NIC as eth0.
  • Trying to coordinate Chef support for the bare-metal provisioning tool Razor, ping me if you’re interested.
  • Canceled the NYC Chef for OpenStack Hack Day and NYC Chef Meetup.
  • Preparing for the Opscode/DreamHost webinar “Automating OpenStack and Ceph at DreamHost with Private Chef”.

Next week I’ll be in Chicago presenting at the CME Group Technology Conference, ping me if you’re in Chicago and want to catch up. My OpenStack goals are to merge in the outstanding pull requests and resync with the latest Folsom work from rcbops, hopefully merging in some more branches.


Getting Started with the Chef for OpenStack docs

Posted by mattray on October 23, 2012

I have primarily been focused on documentation lately; http://github.com/mattray/openstack-chef-docs is the repository. Since there is so much interaction between the various components, prerequisites and cookbooks, I felt a unified document format would best serve our needs. The various markdown READMEs and documentation are slowly migrating to this single repository so everything can be kept up to date in a single location and link to the various components.

The docs are in reStructuredText and use Sphinx, which is compatible with the http://docs.openstack.org source docs. The license matches the OpenStack documentation’s Apache v2 and Creative Commons Attribution ShareAlike 3.0 licenses. Opscode has standardized on this format for our own documentation, and in the near future it will be merged upstream with the official Opscode documentation.
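
For reference, building a Sphinx project locally usually comes down to a couple of commands (a generic sketch, assuming the repository ships the conventional Sphinx Makefile):

git clone http://github.com/mattray/openstack-chef-docs
cd openstack-chef-docs
pip install sphinx
make html    # by Sphinx convention the rendered docs land in _build/html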

The evolving document is currently broken into these 6 components:
* Architecture – overview of the architecture for Chef for OpenStack.
* Prerequisites – the hardware, network and operating system requirements.
* Installation – how to install Chef for OpenStack.
* Example Deployment – example configuration of a small test lab.
* Knife-OpenStack – using the OpenStack plugin for Knife for provisioning and managing instances.
* Additional Resources – additional useful information and links related to Chef for OpenStack.

The docs are just getting started, with lots of placeholders, but I’m actively writing. Please feel free to send corrections and additional details to help fill things out. There will be a permanent URL for the docs online soon; here is a temporary link:
http://15.185.230.54/


Chef for OpenStack Status 10/22

Posted by mattray on October 23, 2012

I’ve decided to start cross-posting my status emails for the Chef for OpenStack project to help spread the word. The Chef for OpenStack mailing list is here; please join: http://groups.google.com/group/opscode-chef-openstack

I apologize for the lack of updates, but I come bearing lots of news. For a quick summary of the state of Chef for OpenStack, check out the deck from my presentation at the OpenStack Summit.

Speaking of the OpenStack Summit, it was quite productive despite my not getting to attend enough sessions due to meetings and booth duty. On Monday there was a session on “Upstreaming Chef Cookbooks”, which was essentially a meetup of folks working on Chef for OpenStack. We compared notes, and there is quite a lot of work being done in the various branches maintained outside the Opscode one; I’m looking forward to merging as much of the work as possible. On Tuesday I gave my general Chef for OpenStack presentation, and later that day we had a “DevOps Panel” with an engaging discussion of the various issues facing deployers of OpenStack. I’ll link up videos as they become available.

Some short-term takeaways from the Summit were that there is a tremendous amount of development effort I was unaware of, and that the pace is about to pick up substantially. DreamHost and AT&T have a number of patches to be merged, and work has already started on Folsom by several folks. The general consensus was to move the focus to Folsom now that it’s out; the cookbooks have been tagged and the repos have all been merged back to master. The ‘essex’ branches are working and have been pushed to the Community site for direct download, and they remain available if you want to continue development.

There were so many great discussions and ideas shared; I’m really looking forward to the work ahead. I’ll try to post more frequently, so the level of engagement will continue to get better.


New Chef BitTorrent Cookbook

Posted by mattray on January 9, 2012

BitTorrent is a well-established protocol and tool for peer-to-peer distribution of files. It is frequently used in large-scale infrastructures for distributing content in a highly efficient and exceptionally fast manner. I decided to write a general-purpose Chef bittorrent cookbook providing BitTorrent resources via Lightweight Resource Providers (LWRPs). While there was already a very useful transmission cookbook for downloading torrent files, I wanted to create LWRPs and a set of recipes that made it simple to seed and peer a file with minimal interaction.

Even though there are a tremendous number of BitTorrent applications available, I had 2 requirements: trackerless seeding and the ability to be easily daemonized. After researching quite a few clients, I found that aria2 had the required features and it turned out to work quite well. Trackerless torrents proved to be a poorly supported and/or documented feature for most tools; the key to using them with aria2 was understanding that the seeding node needs to expose the distributed hash table (DHT) on a single port for ease of use, and that this must be included in the creation of the torrent itself.

TESTING

In testing and benchmarking with a 4.2 gigabyte file (CentOS 6.2 DVD 1) on EC2 with 11 m1.smalls (1 seed and 10 peers), there were a number of interesting results. The chef-client run averaged about 8 and a half minutes (with download speeds around 11.4MiB/s) for the 10 nodes, and this stayed fairly constant when moving to 20 nodes.

A separate test was done distributing the file with Apache as well. Not surprisingly, the more nodes that were added, the slower the downloads became, to the point where some failed because of timeouts. Apache could probably be configured to handle the scenario better, but avoiding a single source is exactly why we use a peer-to-peer solution. The average chef-client run was about 20 minutes for 10 nodes, twice as slow as the same test with 3 nodes.

There are definite bottlenecks on EC2, either at the filesystem or the network level, which is to be expected in a virtualized environment. File allocations on some machines take an order of magnitude longer than on others, and some nodes are extremely slow at networking. With the larger test case of 20 nodes, some were even faster than with 10 nodes, while a few outliers were exceptionally slow (as seen with any large sample of EC2 nodes). On my gigabit network with physical nodes, depending on the downloading drive (SSD or RAID-0 drives), I doubled these speeds with just 5 nodes. This would indicate that the drives or filesystems are the bottleneck.

TRACKERLESS TORRENTS WITH DHT

The first use case I wanted to get working was trackerless torrents. To create the torrent we use the mktorrent package. For trackerless seeding from 10.0.0.10, we used the following command:

mktorrent -d -c "Generated with Chef" -a node://10.0.0.10:6881 -o test0.torrent mybigfile.tgz

To run aria2 as a trackerless seeder in the foreground on 10.0.0.10, it is important to identify the DHT and listening ports (UDP and TCP respectively).

aria2c -V --summary-interval=0 --seed-ratio=0.0 --dht-file-path=/tmp/dht.dat --dht-listen-port 6881 --listen-port 6881 -d /tmp/ test0.torrent

To run aria2 as a peer of a trackerless torrent seeded from 10.0.0.10, you have to specify the “--dht-entry-point”.

aria2c -V --summary-interval=0 --seed-ratio=0.0 --dht-file-path=/tmp/dht.dat --dht-listen-port 6881 --listen-port 6881 --dht-entry-point=10.0.0.10:6881 -d /tmp/ test0.torrent

This technique works, but has the limiting factor of needing to transfer the torrent file between machines. This is solved in the bittorrent cookbook by storing the torrent file in a data bag (future versions of the cookbook may support magnet URIs to remove the need for a file completely).

LIGHTWEIGHT RESOURCE PROVIDERS

Once I had identified the commands that worked for these operations, they needed to be encapsulated in a bittorrent cookbook with LWRPs for creating torrents, seeding and peering the files.

bittorrent_torrent: Creates a .torrent file for sharing a local file or directory via the [BitTorrent protocol](http://en.wikipedia.org/wiki/BitTorrent).

bittorrent_seed: Share a local file via the [BitTorrent protocol](http://en.wikipedia.org/wiki/BitTorrent).

bittorrent_peer: Downloads the file or files specified by a torrent via the [BitTorrent protocol](http://en.wikipedia.org/wiki/BitTorrent). Update notifications are triggered when a blocking download completes and on the initiation of seeding. There are also options on whether to block on the download and whether to continue seeding after download.

RECIPES

The recipes provide an easy way to use BitTorrent to share and download files simply by passing the path and filename via the ['bittorrent']['file'] and ['bittorrent']['path'] attributes. There are recipes to seed, to peer, and to stop the seeding and peering; a usage sketch follows the list below.

bittorrent::seed: given the ['bittorrent']['file'] and ['bittorrent']['path'] attributes, it will create a .torrent file for the file(s) to be distributed, store it in the `bittorrent` data bag and start seeding the distribution of the file(s).

bittorrent::peer: given the ['bittorrent']['file'] and ['bittorrent']['path'] attributes, it will look for a torrent in the `bittorrent` data bag that provides that file. If one exists, it will download the file and continue seeding depending on the value of the ['bittorrent']['seed'] attribute (false by default).

bittorrent::stop: stops either the seeding or peering of a file.
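
A minimal sketch of wiring the recipes up from the command line (hostnames are hypothetical; -j passes the attributes described above as first-boot JSON):

knife bootstrap seeder.example.com -x ubuntu --sudo -r 'recipe[bittorrent::seed]' -j '{"bittorrent":{"file":"mybigfile.tgz","path":"/tmp"}}'
knife bootstrap peer1.example.com -x ubuntu --sudo -r 'recipe[bittorrent::peer]' -j '{"bittorrent":{"file":"mybigfile.tgz","path":"/tmp"}}'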

FUTURE

I plan on continuing development on this cookbook as it gets used in production environments and would appreciate any feedback or patches. Right now it is Ubuntu-only, but adding support for RHEL/CentOS is on the short-term roadmap and requires finding sources for the `mktorrent` and `aria2` packages. Using magnet URIs instead of a torrent file would probably be more efficient as well, since it would remove the distribution of the torrent file and allow the use of search to specify multiple seeders to prime the DHT.


Spiceweasel 1.0

Posted by mattray on January 3, 2012

One of the more useful things I’ve written since I’ve been at Opscode (over a year now) is a tool called Spiceweasel. Spiceweasel processes a simple YAML (or JSON) manifest that describes how to deploy Chef-managed infrastructure with the command-line tool knife. It is fairly simple, but it fills a useful niche by making it easy to document your infrastructure’s dependencies, and how to deploy it, in a file that may be managed with version control. Spiceweasel also attempts to validate that the dependencies you list in your manifest all exist in the repository and that all of their dependencies are included as well.

Examples

The https://github.com/mattray/ravel-repo repository provides a working example of bootstrapping a Chef repository with Spiceweasel. The examples directory in GitHub is slowly gaining more examples based on the Chef Quick Starts.

Given the example YAML file example.yml:

cookbooks:
- apache2:
- apt:
 - 1.2.0
- mysql:

environments:
- development:
- qa:
- production:

roles:
- base:
- iisserver:
- monitoring:
- webserver:

data bags:
- users:
 - alice
 - bob
 - chuck
- data:
 - "*"
- passwords:
 - secret secret_key
 - mysql
 - rabbitmq

nodes:
- serverA:
 - role[base]
 - -i ~/.ssh/mray.pem -x user --sudo -d ubuntu10.04-gems
- serverB serverC:
 - role[base]
 - -i ~/.ssh/mray.pem -x user --sudo -d ubuntu10.04-gems -E production
- ec2 4:
 - role[webserver] recipe[mysql::client]
 - -S mray -i ~/.ssh/mray.pem -x ubuntu -G default -I ami-7000f019 -f m1.small
- rackspace 3:
 - recipe[mysql],role[monitoring]
 - --image 49 --flavor 2
- windows_winrm winboxA:
 - role[base],role[iisserver]
 - -x Administrator -P 'super_secret_password'
- windows_ssh winboxB winboxC:
 - role[base],role[iisserver]
 - -x Administrator -P 'super_secret_password'

Spiceweasel generates the following knife commands:

knife cookbook upload apache2
knife cookbook upload apt
knife cookbook upload mysql
knife environment from file development.rb
knife environment from file qa.rb
knife environment from file production.rb
knife role from file base.rb
knife role from file iisserver.rb
knife role from file monitoring.rb
knife role from file webserver.rb
knife data bag create users
knife data bag from file users alice.json
knife data bag from file users bob.json
knife data bag from file users chuck.json
knife data bag create data
knife data bag create passwords
knife data bag from file passwords mysql.json --secret-file secret_key
knife data bag from file passwords rabbitmq.json --secret-file secret_key
knife bootstrap serverA -i ~/.ssh/mray.pem -x user --sudo -d ubuntu10.04-gems -r 'role[base]'
knife bootstrap serverB -i ~/.ssh/mray.pem -x user --sudo -d ubuntu10.04-gems -E production -r 'role[base]'
knife bootstrap serverC -i ~/.ssh/mray.pem -x user --sudo -d ubuntu10.04-gems -E production -r 'role[base]'
knife ec2 server create -S mray -i ~/.ssh/mray.pem -x ubuntu -G default -I ami-7000f019 -f m1.small -r 'role[webserver],recipe[mysql::client]'
knife ec2 server create -S mray -i ~/.ssh/mray.pem -x ubuntu -G default -I ami-7000f019 -f m1.small -r 'role[webserver],recipe[mysql::client]'
knife ec2 server create -S mray -i ~/.ssh/mray.pem -x ubuntu -G default -I ami-7000f019 -f m1.small -r 'role[webserver],recipe[mysql::client]'
knife ec2 server create -S mray -i ~/.ssh/mray.pem -x ubuntu -G default -I ami-7000f019 -f m1.small -r 'role[webserver],recipe[mysql::client]'
knife rackspace server create --image 49 --flavor 2 -r 'recipe[mysql],role[monitoring]'
knife rackspace server create --image 49 --flavor 2 -r 'recipe[mysql],role[monitoring]'
knife rackspace server create --image 49 --flavor 2 -r 'recipe[mysql],role[monitoring]'
knife bootstrap windows winrm winboxA -x Administrator -P 'super_secret_password' -r 'role[base],role[iisserver]'
knife bootstrap windows ssh winboxB -x Administrator -P 'super_secret_password' -r 'role[base],role[iisserver]'
knife bootstrap windows ssh winboxC -x Administrator -P 'super_secret_password' -r 'role[base],role[iisserver]'

Cookbooks

The `cookbooks` section of the manifest currently supports `knife cookbook upload FOO` where `FOO` is the name of the cookbook in the `cookbooks` directory. The default behavior is to download the cookbook as a tarball, untar it and remove the tarball. The `--siteinstall` option will allow for use of `knife cookbook site install` with the cookbook and the creation of a vendor branch if git is the underlying version control. If a version is passed, it is validated against the existing cookbook `metadata.rb` and must match the `metadata.rb` string exactly. Validation is also done to ensure that the dependencies listed in the cookbooks’ metadata exist.

Environments

The `environments` section of the manifest currently supports `knife environment from file FOO` where `FOO` is the name of the environment file ending in `.rb` or `.json` in the `environments` directory. Validation is done to ensure the filename matches the environment and that any cookbooks referenced are listed in the manifest.

Roles

The `roles` section of the manifest currently supports `knife role from file FOO` where `FOO` is the name of the role file ending in `.rb` or `.json` in the `roles` directory. Validation is done to ensure the filename matches the role name and that any cookbooks or roles referenced are listed in the manifest.

Data Bags

The `data bags` section of the manifest currently creates the data bags listed with `knife data bag create FOO` where `FOO` is the name of the data bag. Individual items may be added to the data bag as part of a JSON or YAML sequence; the assumption is made that they are `.json` files in the proper `data_bags/FOO` directory. You may also pass a wildcard as an entry to load all matching data bags (i.e. `*`). Encrypted data bags are supported by listing `secret filename` as the first item (where `filename` is the secret key to be used). Validation is done to ensure the JSON is properly formatted, the id matches, and any secret keys are in the correct locations.

Nodes

The `nodes` section of the manifest bootstraps a node for each entry, where an entry is either a hostname or a provider and count. There is a shortcut syntax for bulk-creating nodes with various providers, where the line starts with the provider and ends with the number of nodes to be provisioned. Windows nodes need to specify either `windows_winrm` or `windows_ssh` depending on the protocol used, followed by the name of the node(s). Each node requires 2 items after it in a sequence. You may also use the `--parallel` flag from the command line, allowing provider commands to run simultaneously for faster deployment.

The first item after the node is the run_list and the second is the CLI options used. The run_list may be space- or comma-delimited. Validation is performed on the run_list components to ensure that only cookbooks and roles listed in the manifest are used. Validation on the options ensures that any environments referenced are also listed. You may specify multiple nodes to have the same configuration by listing them separated by a space.

Status and Roadmap

Spiceweasel has now hit version 1.0; it’s fairly complete and there are no open issues. I’ll continue to track changes in Chef and fix any issues that arise. If I start feeling ambitious (or someone sends patches), I may turn it into a knife plugin. I’ve also considered having Spiceweasel “extract” existing infrastructure by parsing a list of nodes and documenting their cookbooks, environments, roles and run lists. I’ll write another post soon about using it in the “real world”.
