Magical Flowers

I’ve spent a hilarious number of hours in Minecraft, doing everything from simple mining to building enormous factories that make even the mightiest of servers creak under their load.

Really though, it’s made me deeply appreciate magical flowers.

The Game

If you haven’t played Minecraft and aren’t familiar with the basic concept, here’s a really well written overview of what the game is. At its heart it’s a sandbox to play in. Personally, I like taking on engineering and resource management challenges and automating large machines.

The Mods

There are many mods that can be loaded onto the basic Minecraft game to add new mechanics, blocks and abilities, or to fundamentally change the way the game is played. The two that I’ll be talking about today are Open Computers and Botania.

Open Computers

OpenComputers is a mod that adds computers and robots into the game, which can be programmed in Lua 5.3. It takes ideas from several other mods, such as ComputerCraft, StevesCarts and Modular Powersuits, to create something new and interesting.


Botania

Botania is a tech mod themed around natural magic. The main concept is to create magical flowers and devices utilizing the power of the earth, in the form of Mana.

Interface is a visual language

Botania and Open Computers are not attempts at feature replication. Instead they are distinct approaches to a similar set of problems that a Minecraft player may encounter. I don’t think one is superior in some absolute, objective way.

Instead I believe that Botania represents better design. Mana (the primary form of magical energy used in Botania) is a new resource that feels like an extension of the game. Movement through space and time remains important. Your skill at navigating terrain, building new structures, gathering resources and manipulating the environment remains the primary method of achieving your goals. The mod builds on what you already know and keeps you grounded in a progression towards greater goals and powers.

From the Botania design guidelines, it’s clear that this is not accidental.

Botania follows some design rules:

  • No pipes, wires or equivalent; they are boring and overdone
  • Abstain from using GUIs in favour of in-world interaction
  • Refrain from showing numbers to the player, to minimize the incentive to minmax
  • Provide pleasing visuals utilizing only two different particle effects
  • Utilize or provide renewable resources where possible

That extension of the Botania controls and resources into the broader Minecraft play environment makes it easier to design, test and execute many of its functions while keeping the player within the same context that they’re already fluent in. I don’t consider that fluency trivial. If you’re cracking open the Lexica Botania for the first time you’ve already mastered most of the basic commands you’ll need to use the mod.

In contrast, the interface paradigm of Open Computers is closer in execution to the experience many of us have at the terminal. It is a design that offers a great deal of flexibility at the cost of ease of use, but it isn’t necessarily a more useful tool. For the uninitiated, a blinking cursor on a black screen is an opaque and seemingly hostile interface. For the initiated, it can be an invitation to great power. Still, using this mod is a fundamental break in your conception of space and gameplay, and, if you’re not already comfortable with a scripting language, a major break in your fluency. Instead of being an augmented player with new powers and a grasp of some new mechanics, you are a player in the grip of readmes and mysterious runtime errors.

While you might think that an automation and engineering inclined person such as myself would take more readily to Open Computers, it’s Botania that I prefer to reach for. Botania is a new set of words that I can use in my problem solving; Open Computers is an entirely different language.

Language is a shared experience

I believe that breaking a user’s paradigm is an expensive moment. The competency that they’ve been developing is broken and they feel like a beginner once again. Once a user has developed fluency in the basics of an interface we have a language in common to expand on. Do so. When other developers start to extend your interface they should have guidelines provided to them that explain the design language clearly and show how it can be extended.

When I’m thinking about user interfaces, I want to try and bring magical flowers to the user. I think this extends beyond visual interfaces and into the design of APIs, data structures, code and essentially anything that a person will put their brain in front of. These are all interfaces and they should be beautiful. Not because beauty has a separate merit but rather that a design that serves its purpose expresses beauty.


I laid out an issue with a Ruby script that I tried to use in this post, and I’d like to talk more about my approach to these sorts of problems.

This is a Class Of Problems

This is a pattern I keep running into:

for Problem in Problems:
    with One Tool Try to solve this Problem
        Installing One Tool requires Other Tools
            Installing Other Tools causes Problems
            Append Problem to Problems
    Make Tea Instead

The Lesson: Never Install Locally

I keep hoping that things will “just work” and this has almost never been true with these kinds of tools. When you walk towards the edge of Userland and into Development, there’s a steep cliff waiting for you. Package managers make this a softer descent into madness but you’re still falling and eventually you’ll find yourself face to face with the same pile of skulls. I don’t want to be down here with the skulls.

Local installations often carry invisible parameters that cannot be accounted for: a .file that didn’t make it into version control, a library version that didn’t make it into the dependency manager, or just Windows character encoding nonsense. There are a thousand kinds of pain like this, and almost all of them come down to one statement: “Well, it works on my machine!”

It does not work on my machine.

A Container For Everything and Everything In A Container

While working on a different project I’ve gotten into a few habits that have saved me a tremendous amount of work, and I try to carry them with me into new projects:

  • I create Dockerfiles for every service needed to develop and deploy the site
  • I check my Dockerfiles into version control
  • I document build and deployment notes for the containers
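As a sketch, one of those Dockerfiles might be as small as this. It’s for a hypothetical Ruby service; the base image tag, the work directory and the Gemfile names are my own assumptions, not from any particular project:

```dockerfile
# Hypothetical toolbox image for a Ruby service.
# ruby:2.3 and /usr/src/app are placeholder choices.
FROM ruby:2.3

WORKDIR /usr/src/app

# Bake the dependencies into the image so every workstation
# resolves the same gem versions.
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Drop into a shell by default; the project itself is mounted
# in at run time rather than copied into the image.
CMD ["bash"]
```

Checked into version control next to the code, this is the “copy of my machine” that I wish more projects shipped.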

I’ve already seen some pretty great benefits to this strategy. Chief among them is that I have a portable environment so work performed on any workstation is done with the same tooling. Everything runs on my machine because “my machine” is the same machine everywhere I go.

I want more open source projects to post their runtime environment. If it ran on your machine, please give me a copy of your machine so I can run it.


I can’t rely on everything having its environment posted with the code, so I need a toolkit that can be rapidly adapted to fit these conditions. Whether I’m deploying the kind of data migration tools I’m attempting here or developing my own tools, I think these habits will stand me in good stead. All the pieces are there in Docker, and there’s good precedent in approaches like the Anaconda Distribution. I’m going to dedicate some time to formalizing the approach and hopefully saving others some of the time I’ve sunk into setting up a containerized development environment.

What does a toolkit look like?

There’s a really great post on the Red Hat dev blog calling out all of the best practices in container creation, mostly as they apply in production. In fact, there are a huge number of extremely well written examples of best practices in production. That’s not what I’m on about.

A lean container makes the most sense in production, but in an early toolbox it can be more frustrating than it’s worth. So many times I’ve tried to $ nano somefile to update just one ENV variable, only to be rebuffed by a too-lean environment. If your tools are getting in the way of productivity, it’s time to consider whether that’s really a tool you want to use.

General Purpose

Overall, the design of a toolbox container should be broad. It does not attempt the laser-sharp focus of a production container. It is instead happy to be the Swiss Army Knife: a practical tool for running applications that you may only ever run once, and a place to try Weird Ideas to check their viability.

In many ways it acts as a shield against the constant coral-reef growth of dependencies on your local machine and so it must be easier to use than your own machine. It should be the first tool you reach for.

Maybe Not For You:

These are not tools for working in an established ecosystem. Like all tools they are not Universal in their applicability to All Problems. They are designed for solving a particular class of problems that I’ve run into across the huge spread of languages, frameworks and applications out in the world. If you work on a production system and you’re not developing with it then… maybe don’t do that? Read the next section.

It’s Not Production:

Really. Not at all. Not in any way should a toolbox container be shipped. Paint the thing with a giant yellow and black striped band around it that just says “Not For Production”.

Realistically they should be built consistently with “dev” tags and naming conventions. If you use semantic versioning keep any and all toolbox containers below a version 1.

Batteries Included:

Borrowing liberally from the Pythonic gospel, I believe that toolbox containers should come ready to do meaningful work. They should include the tools you need to get started and make it easy for you to change your mind later. A toolbox container should be able to run pretty much anything in its language or framework with minimal changes from the defaults. When selecting for a language, ensure that the package manager for that language is included, if one is available. If there’s no package manager for the language you want to use, please return to your time machine. Beyond that, target 80% of your use cases and write a toolbox container version that meets those needs.


I think that containers should strive to be lean. Even early in development when you’re oscillating more broadly between approaches you will be served by some restraint. I don’t think that a toolbox container must always run only one process. Sometimes it makes sense to use it to run a single service. Sometimes it needs to be many things in order to achieve progress. This must be weighed against the priorities of your work. This is the other side of the Batteries Included scale. You must achieve the correct balance between Not Enough Features and Too Many Features.


Like All Things Container, a toolbox container should be ready to explode at any given time for any or no reason at all. You must let go of the container. It will be reborn. If you need to persist data between containers use volume mapping. It’s easier to manage rapidly moving data structures in the early stages and it helps to remember that The Container Is Not For Storage.

It Still Goes In Version Control

Everything goes in version control. If you’ve made a particularly useful container and you find yourself reusing it, it’s nice to be able to clone it instead of rewriting it. Going to run a test more than once? Make a new container to run that test. Keep in mind that adding layers on top of an existing Dockerfile is extremely inexpensive in time and compute. Iterate away, but always check in your changes.

If you can check yourself into version control then maybe do that too. Then tell us how you did it.

My Toolbox

I’m still working on the right way to build and document these containers. Part of being new at things is not having a ton of code to show yet. I’ll update and publicize this when it’s a more stable project.

Tumblr To Hugo

Here’s my Problem

While building this website to host my blog I thought it might be nice to pull a bunch of my older illustrations off of Tumblr and rehost them in a section here. I’m using Hugo to build this site and so I’ll need to do some conversions to the source files so they’ll render correctly.

From the Hugo docs site I cloned the tumblr-to-hugo repo locally, and after some initial reading I made a run that gave me an error message:

$ ruby t2h.rb APIKEY
/usr/local/Cellar/ruby/2.0.0/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:55:in `require': cannot load such file -- pry (LoadError)
        from /usr/local/Cellar/ruby/2.0.0/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:55:in `require'
        from t2h.rb:2:in `<main>'

Alright, that’s not great. What version of Ruby is installed?

ruby -v

I get back

ruby 2.0.0p648 (2015-12-16 revision 53162) [universal.x86_64-darwin16]

That’s pretty out of date, so let’s update Ruby:

brew install ruby

Then run the command again

$ ruby t2h.rb APIKEY

The same error message comes back, this time with a different revision

/usr/local/Cellar/ruby/2.4.1_1/lib/ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require': cannot load such file -- pry (LoadError)
        from /usr/local/Cellar/ruby/2.4.1_1/lib/ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require'
        from t2h.rb:2:in `<main>'

Well, that didn’t solve anything. Looking at the Gemfile, I see that it calls for a version of Ruby that isn’t the latest and isn’t the older one I had installed.

source ''
ruby '2.3.2'

I’m assuming that there’s something similar to Python’s virtualenv for Ruby. A bit of googling later I’m installing rbenv. Cool.

$ brew update
$ brew upgrade rbenv ruby-build

Following the rbenv instructions for macOS to add it to my path.

$ rbenv init
# Load rbenv automatically by appending
# the following to ~/.bash_profile:

eval "$(rbenv init -)"
$ atom ~/.bash_profile

I prefer using Atom over nano, vi or emacs. Whatever, fight me.

Now to source the shell.

$ source ~/.bash_profile

Next to install the version of Ruby I’ll need to get this tool to run

$ rbenv install 2.3.2

Then the whole thing falls apart

ruby-build: use openssl from homebrew
Downloading ruby-2.3.2.tar.bz2...
Installing ruby-2.3.2...
ruby-build: use readline from homebrew

BUILD FAILED (OS X 10.12.3 using ruby-build 20170201)

Inspect or clean up the working tree at /var/folders/29/151mkpxs5tx3h4vz2qptg2xm0000gn/T/ruby-build.20170402120157.22586
Results logged to /var/folders/29/151mkpxs5tx3h4vz2qptg2xm0000gn/T/ruby-build.20170402120157.22586.log

Last 10 log lines:
compiling init.c
compiling constants.c
linking shared-object psych.bundle
installing default psych libraries
linking shared-object zlib.bundle
linking shared-object socket.bundle
linking shared-object tcltklib.bundle
installing default tcltklib libraries
linking shared-object ripper.bundle
make: *** [build-ext] Error 2

The whole log is available on gist.

Ten minutes of searching and I find the ruby-build ticket for this issue.

From there I found the associated language ticket

Seems like this is a known issue with this Ruby version and Homebrew. I’ve wandered into a language issue when all I want to do is make some API calls. An even bigger issue for me is that I’ve strayed far from my initial goal. I’m reading up on clang and LLVM, wondering if I need to do a bunch more Homebrew homework and maybe roll back a library or two for this compilation. This is not what I set out to do this morning.

Solving My Problem

I could dig in with both hands and resolve the underlying issue with my local installation. Jacob recommended rvm as an alternative to rbenv. While that might clear the particular roadblock of getting rbenv installed and get the script to run, it adds another piece of unfamiliar software to the mix. Instead I’m reaching for one of my favorite and more familiar tools: a Docker container.

A Container for Tumblr-To-Hugo

To start off with I’ll need to identify a starting point for this container. I know it’s Ruby, so I’ll start with a read through the Docker Hub Ruby pages.

There I find this

Run a single Ruby script

For many simple, single file projects, you may find it inconvenient to write a complete Dockerfile. In such cases, you can run a Ruby script by using the Ruby Docker image directly: $ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp ruby:2.1 ruby your-daemon-or-script.rb

Perfect, this sounds like me and I do want to run this one script.

If you’re not used to using Docker the commands for running containers can be pretty opaque. Here’s a breakdown of what each section is doing in this command string:

$ docker run \ # The docker run command itself
-it \ # Sets the container to interactive mode and allocates a tty to it
--rm \ # Removes the container automatically when it exits
--name my-running-script \ # Gives the container a name. Otherwise it gets a hex string as an ID.
-v "$PWD":/usr/src/myapp \ # Maps your current directory on the host into the container at /usr/src/myapp
-w /usr/src/myapp \ # Changes the working directory of the container to this directory
ruby:2.1 \ # Specifies the image used to create this container
ruby your-daemon-or-script.rb # The command passed to the container

More information on Docker run commands can be found here

Adapting it to the script I’m trying to run I’ll rewrite the command. I’ll need to ensure the correct Ruby version is called and that I add the runtime arguments for the script.

$ docker run -it --rm --name tumblr-to-docker-to-hugo -v "$PWD":/usr/src/myapp -w /usr/src/myapp ruby:2.3.2 ruby t2h.rb APIKEY

That run failed with an error

/usr/local/lib/ruby/site_ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require': cannot load such file -- httparty (LoadError)
        from /usr/local/lib/ruby/site_ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
        from t2h.rb:1:in `<main>'

Looks like I need to install the dependencies first, so I strip out the script command and bring up a shell.

$ docker run -it --rm --name tumblr-to-docker-to-hugo -v "$PWD":/usr/src/myapp -w /usr/src/myapp ruby:2.3.2 bash

Then the dependencies

bundle install

Then try again

ruby t2h.rb APIKEY

Great, victory is mine: the posts are exported to a new directory called blog on my host machine. They have the expected TOML front matter and should port to Hugo without much more trouble.

├── 2013-03-30-in-the-stream-being-a-person-not-a-personality.html
├── 2013-04-11-draw-friends-live.html
├── 2013-04-26-how-to-dress-people-up-in-stuff.html
├── 2013-05-09-brittany-vs-room.html
├── 2013-06-12-looking-for-roommate.html
├── 2013-09-10-cartozia-tales.html
├── 2013-09-21-soft-show-6-good-humor-man.html
├── 2013-11-21-sexual-harassment-in-comics-next-steps.html
├── 2015-03-16-the-beyond-anthology.html
└── 2015-06-26-all-hail-the-matriarchs.html

Oh this is just the text posts. Looking at the source code it makes sense:

class Tumblr
  TUMBLR_API_KEY = ARGV[0].freeze
  BASE_URL = "#{TUMBLR_DOMAIN}/posts/text?api_key=#{TUMBLR_API_KEY}"

The BASE_URL is only set up to pull text posts. Having read the README, I didn’t expect it to skip all of the illustration posts, given the wording:

When you run this, it will create a file per post you have on Tumblr, with the proper title and a path that makes sense (/blog/the-title-of-the-post) and also create a CSV file with the original URL and the new path on Hugo, to help you setup the redirections.

It doesn’t explicitly call out images, sounds or video. I assumed that it would be able to handle the media types that Tumblr does. That was incorrect.
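If I do come back to this, one route might be to build one API URL per post type instead of hardcoding text. Here’s a rough Ruby sketch of the idea; the POST_TYPES list and the url_for helper are my own hypothetical names, and the URL shape just mirrors the BASE_URL fragment above rather than the full Tumblr API:

```ruby
# Hypothetical extension of the script above: build one URL per
# Tumblr post type instead of hardcoding "text". POST_TYPES and
# url_for are invented names, not part of tumblr-to-hugo.
POST_TYPES = %w[text photo audio video].freeze

def url_for(domain, type, api_key)
  "#{domain}/posts/#{type}?api_key=#{api_key}"
end

POST_TYPES.each do |type|
  puts url_for("example.tumblr.com", type, "APIKEY")
end
```

Each post type would still need its own frontmatter handling, which is probably why the script sticks to text posts.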

Never Gonna Give You Up

I haven’t solved my initial problem of pulling images down from Tumblr, but I have learned some important things.

  • Homebrew and Ruby don’t always play nice
  • Containers can help mitigate dependency issues
  • A short README may mean: go read the source instead

I’ll update this when I’ve found a way to handle pulling image posts from Tumblr to Hugo.


Thanks for exploring my website. This is where I experiment, play, and explore new concepts in software engineering. I’m a software developer with a focus on operations. DevOps is my passion because it focuses on communication and collaboration.

Every project is different, and my unique skill set allows me to help you with whatever kindles your imagination. It’s not just about code, after all: it’s about making something that’s unique, meaningful, and functional. My background is in supply chain management, business process automation, design, and accounting. That means that, in addition to designing and building tools that will work for you, I excel at project management. I already understand how businesses operate, and people too: don’t worry, I speak human!

I like conversations about philosophy, engineering, design, psychology and technology.

I am available for hire at your company right now. If you have something specific in mind, please contact me. I’m always looking for new ideas and I love talking tech.

Here’s my contact information. I hope to hear from you soon.