The Shining Path of Least Resistance



New Chef BitTorrent Cookbook

Posted by mattray on January 9, 2012

BitTorrent is a well-established protocol and toolset for peer-to-peer distribution of files. It is frequently used in large-scale infrastructures for distributing content in a highly efficient and exceptionally fast manner. I decided to write a general-purpose Chef bittorrent cookbook providing BitTorrent resources via a Lightweight Resource Provider (LWRP). While there was already a very useful transmission cookbook for downloading torrent files, I wanted to create LWRPs and a set of recipes that made it simple to seed and peer a file with minimal interaction.

Even though there are a tremendous number of BitTorrent applications available, I had two requirements: trackerless seeding and the ability to be easily daemonized. After researching quite a few clients, I found that aria2 had the required features, and it turned out to work quite well. Trackerless torrents proved to be a poorly supported and/or documented feature in most tools; the key to using them with aria2 was understanding that the seeding node needs to expose the distributed hash table (DHT) on a single port for ease of use, and that this needs to be accounted for in the creation of the torrent itself.


In testing and benchmarking with a 4.2-gigabyte file (CentOS 6.2 DVD 1) on EC2 with 11 m1.smalls (1 seed and 10 peers), there were a number of interesting results. The chef-client run averaged about eight and a half minutes (with download speeds around 11.4 MiB/s) for the 10 nodes, and this stayed fairly constant when moving to 20 nodes.

A separate test was done distributing the file with Apache as well. Not surprisingly, the more nodes that were added, the slower the downloads became, to the point where some failed because of timeouts. Apache could probably be configured to handle the scenario better, but avoiding a single source is exactly why we use a peer-to-peer solution. The average chef-client run was about 20 minutes for 10 nodes, twice as slow as the same test with 3 nodes.

There are definite bottlenecks on EC2, either in the filesystem or at the network level, which is to be expected in a virtualized environment. File allocation on some machines takes an order of magnitude longer than on others, and some nodes are extremely slow at networking. With the larger test case of 20 nodes, some were even faster than with 10 nodes, while a few outliers were exceptionally slow (as seen with any large sample of EC2 nodes). On my gigabit network with physical nodes, depending on the downloading drive (SSD or RAID-0 drives), I doubled these speeds with just 5 nodes. This would indicate that the drives or filesystems are the bottleneck.


The first use case I wanted to get working was trackerless torrents. To create the torrent we use the mktorrent package. For trackerless seeding, we used the following command:

mktorrent -d -c "Generated with Chef" -a node:// -o test0.torrent mybigfile.tgz

To run aria2 as a trackerless seeder in the foreground, it is important to identify the DHT and listening ports (UDP and TCP respectively).

aria2c -V --summary-interval=0 --seed-ratio=0.0 --dht-file-path=/tmp/dht.dat --dht-listen-port 6881 --listen-port 6881 -d /tmp/ test0.torrent

To run aria2 as a peer of a trackerless torrent, you have to specify the `--dht-entry-point` (the host and port of the seeding node's DHT).

aria2c -V --summary-interval=0 --seed-ratio=0.0 --dht-file-path=/tmp/dht.dat --dht-listen-port 6881 --listen-port 6881 --dht-entry-point= -d /tmp/ test0.torrent

This technique works, but has the limiting factor of needing to transfer the torrent file between machines. This is solved in the bittorrent cookbook by storing the torrent file in a data bag (future versions of the cookbook may support magnet URIs to remove the need for a file completely).
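Since data bag items are JSON, the binary torrent data has to be made JSON-safe somehow; base64-encoding is one plausible approach (the encoding the cookbook actually uses is not shown here, so treat the item layout below as an assumption). A minimal plain-Ruby sketch of the round trip:

```ruby
require 'base64'
require 'json'

# Stand-in bytes for the contents of a .torrent file.
torrent = "d8:announce0:4:infod6:lengthi42eee"

# Base64-encode the bytes so they survive JSON serialization;
# the item id and key names here are hypothetical.
item = { 'id' => 'test0', 'torrent' => Base64.strict_encode64(torrent) }
json = JSON.generate(item)

# A peer would reverse the process before handing the file to aria2.
decoded = Base64.strict_decode64(JSON.parse(json)['torrent'])
```

The base64 step costs roughly a third more space in the data bag, which matters little for torrent files since they are small regardless of payload size.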


Once I had identified the commands that worked for these operations, they needed to be encapsulated in a bittorrent cookbook with LWRPs for creating torrents, seeding and peering the files.

bittorrent_torrent: Given a local file or directory, creates a .torrent file for sharing it via the BitTorrent protocol.

bittorrent_seed: Shares a local file via the BitTorrent protocol.

bittorrent_peer: Downloads the file or files specified by a torrent via the BitTorrent protocol. Update notifications are triggered when a blocking download completes and when seeding is initiated. There are also options for whether to block on the download and whether to continue seeding after the download.
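In a recipe, the three resources might be combined along the following lines; the parameter names below are illustrative guesses, not the cookbook's confirmed API:

```ruby
# Hypothetical LWRP usage; attribute names are assumptions, not the documented API.
bittorrent_torrent '/data/mybigfile.tgz' do
  action :create
end

bittorrent_seed '/data/mybigfile.tgz' do
  action :start
end

bittorrent_peer 'mybigfile.tgz' do
  path '/data'
  blocking true   # wait for the download to complete
  seed false      # stop sharing once the download finishes
  action :download
end
```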


The recipes provide an easy way to use BitTorrent to share and download files simply by passing the path and filename via the node['bittorrent']['file'] and node['bittorrent']['path'] attributes. There are recipes to seed, to peer, and to stop the seeding and peering.

bittorrent::seed: given the node['bittorrent']['file'] and node['bittorrent']['path'] attributes, it will create a .torrent file for the file(s) to be distributed, store it in the `bittorrent` data bag and start seeding the distribution of the file(s).

bittorrent::peer: given the node['bittorrent']['file'] and node['bittorrent']['path'] attributes, it will look for a torrent in the `bittorrent` data bag that provides that file. If one exists, it will download the file and continue seeding depending on the value of the node['bittorrent']['seed'] attribute (false by default).

bittorrent::stop: stops either the seeding or peering of a file.
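Tying this together, a node could be turned into a peer entirely through a role; this sketch uses made-up file and path values:

```ruby
# roles/torrent_peer.rb -- hypothetical role; the file and path values are examples
name "torrent_peer"
description "Fetch mybigfile.tgz over BitTorrent"
run_list "recipe[bittorrent::peer]"
default_attributes(
  "bittorrent" => {
    "file" => "mybigfile.tgz",
    "path" => "/data",
    "seed" => true   # keep seeding after the download completes
  }
)
```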


I plan on continuing development on this cookbook as it gets used in production environments and would appreciate any feedback or patches. Right now it is Ubuntu-only, but adding support for RHEL/CentOS is on the short-term roadmap and requires finding sources for the `mktorrent` and `aria2` packages. Using magnet URIs instead of a torrent file would probably be more efficient as well, since it would remove the distribution of the torrent file and allow the use of search to specify multiple seeders to prime the DHT.

Posted in chef, opschef, opscode

Spiceweasel 1.0

Posted by mattray on January 3, 2012

One of the more useful things I’ve written since I’ve been at Opscode (over a year now) is a tool called Spiceweasel. Spiceweasel processes a simple YAML (or JSON) manifest that describes how to deploy Chef-managed infrastructure with the command-line tool knife. It is fairly simple, but it fills a useful niche by making it easy to document your infrastructure’s dependencies and how to deploy in a file that may be managed with version control. Spiceweasel also attempts to validate that the dependencies you list in your YAML manifest all exist in the repository and that all of their dependencies are included as well.


There is an example repository which provides a working example for bootstrapping a Chef repository with Spiceweasel. The examples directory in GitHub is slowly getting more examples based on the Chef Quick Starts.

Given the example YAML file example.yml:

cookbooks:
- apache2:
- apt:
 - 1.2.0
- mysql:

environments:
- development:
- qa:
- production:

roles:
- base:
- iisserver:
- monitoring:
- webserver:

data bags:
- users:
 - alice
 - bob
 - chuck
- data:
 - *
- passwords:
 - secret secret_key
 - mysql
 - rabbitmq

nodes:
- serverA:
 - role[base]
 - -i ~/.ssh/mray.pem -x user --sudo -d ubuntu10.04-gems
- serverB serverC:
 - role[base]
 - -i ~/.ssh/mray.pem -x user --sudo -d ubuntu10.04-gems -E production
- ec2 4:
 - role[webserver] recipe[mysql::client]
 - -S mray -i ~/.ssh/mray.pem -x ubuntu -G default -I ami-7000f019 -f m1.small
- rackspace 3:
 - recipe[mysql],role[monitoring]
 - --image 49 --flavor 2
- windows_winrm winboxA:
 - role[base],role[iisserver]
 - -x Administrator -P 'super_secret_password'
- windows_ssh winboxB winboxC:
 - role[base],role[iisserver]
 - -x Administrator -P 'super_secret_password'

Spiceweasel generates the following knife commands:

knife cookbook upload apache2
knife cookbook upload apt
knife cookbook upload mysql
knife environment from file development.rb
knife environment from file qa.rb
knife environment from file production.rb
knife role from file base.rb
knife role from file iisserver.rb
knife role from file monitoring.rb
knife role from file webserver.rb
knife data bag create users
knife data bag from file users alice.json
knife data bag from file users bob.json
knife data bag from file users chuck.json
knife data bag create data
knife data bag create passwords
knife data bag from file passwords mysql.json --secret-file secret_key
knife data bag from file passwords rabbitmq.json --secret-file secret_key
knife bootstrap serverA -i ~/.ssh/mray.pem -x user --sudo -d ubuntu10.04-gems -r 'role[base]'
knife bootstrap serverB -i ~/.ssh/mray.pem -x user --sudo -d ubuntu10.04-gems -E production -r 'role[base]'
knife bootstrap serverC -i ~/.ssh/mray.pem -x user --sudo -d ubuntu10.04-gems -E production -r 'role[base]'
knife ec2 server create -S mray -i ~/.ssh/mray.pem -x ubuntu -G default -I ami-7000f019 -f m1.small -r 'role[webserver],recipe[mysql::client]'
knife ec2 server create -S mray -i ~/.ssh/mray.pem -x ubuntu -G default -I ami-7000f019 -f m1.small -r 'role[webserver],recipe[mysql::client]'
knife ec2 server create -S mray -i ~/.ssh/mray.pem -x ubuntu -G default -I ami-7000f019 -f m1.small -r 'role[webserver],recipe[mysql::client]'
knife ec2 server create -S mray -i ~/.ssh/mray.pem -x ubuntu -G default -I ami-7000f019 -f m1.small -r 'role[webserver],recipe[mysql::client]'
knife rackspace server create --image 49 --flavor 2 -r 'recipe[mysql],role[monitoring]'
knife rackspace server create --image 49 --flavor 2 -r 'recipe[mysql],role[monitoring]'
knife rackspace server create --image 49 --flavor 2 -r 'recipe[mysql],role[monitoring]'
knife bootstrap windows winrm winboxA -x Administrator -P 'super_secret_password' -r 'role[base],role[iisserver]'
knife bootstrap windows ssh winboxB -x Administrator -P 'super_secret_password' -r 'role[base],role[iisserver]'
knife bootstrap windows ssh winboxC -x Administrator -P 'super_secret_password' -r 'role[base],role[iisserver]'


Cookbooks

The `cookbooks` section of the manifest currently supports `knife cookbook upload FOO`, where `FOO` is the name of the cookbook in the `cookbooks` directory. The default behavior is to download the cookbook as a tarball, untar it and remove the tarball. The `--siteinstall` option will allow for use of `knife cookbook site install` with the cookbook and the creation of a vendor branch if git is the underlying version control. If a version is passed, it is validated against the existing cookbook `metadata.rb` and it must match the `metadata.rb` string exactly. Validation is also done to ensure that the dependencies listed in the metadata for the cookbooks exist.


Environments

The `environments` section of the manifest currently supports `knife environment from file FOO`, where `FOO` is the name of the environment file ending in `.rb` or `.json` in the `environments` directory. Validation is done to ensure the filename matches the environment name and that any cookbooks referenced are listed in the manifest.


Roles

The `roles` section of the manifest currently supports `knife role from file FOO`, where `FOO` is the name of the role file ending in `.rb` or `.json` in the `roles` directory. Validation is done to ensure the filename matches the role name and that any cookbooks or roles referenced are listed in the manifest.

Data Bags

The `data bags` section of the manifest creates the data bags listed with `knife data bag create FOO`, where `FOO` is the name of the data bag. Individual items may be added to the data bag as part of a JSON or YAML sequence; the assumption is made that they are `.json` files in the proper `data_bags/FOO` directory. You may also pass a wildcard as an entry to load all matching data bag items (e.g. `*`). Encrypted data bags are supported by listing `secret filename` as the first item (where `filename` is the secret key to be used). Validation is done to ensure the JSON is properly formatted, the id matches and any secret keys are in the correct locations.


Nodes

The `nodes` section of the manifest bootstraps a node for each entry, where the entry is either a hostname or a provider and count. There is a shortcut syntax for bulk-creating nodes with various providers, where the line starts with the provider and ends with the number of nodes to be provisioned. Windows nodes need to specify either `windows_winrm` or `windows_ssh` depending on the protocol used, followed by the name of the node(s). Each node requires two items after it in a sequence. You may also use the `--parallel` flag from the command line, allowing provider commands to run simultaneously for faster deployment.

The first item after the node is the run_list and the second is the set of CLI options used. The run_list may be space- or comma-delimited. Validation is performed on the run_list components to ensure that only cookbooks and roles listed in the manifest are used; validation on the options ensures that any environments referenced are also listed. You may specify multiple nodes sharing the same configuration by listing them separated by spaces.

Status and Roadmap

Spiceweasel has now hit version 1.0; it’s fairly complete and there are no open issues. I’ll continue to track changes in Chef and fix any issues that arise. If I start feeling ambitious (or someone sends patches), I may turn it into a knife plugin. I’ve also considered having Spiceweasel “extract” existing infrastructure by parsing a list of nodes and documenting their cookbooks, environments, roles and run lists. I’ll write another post soon about using it in the “real world”.

Posted in chef, opschef, opscode, spiceweasel