Central Park series: data-driven artwork

In this small series, I worked with the AxiDraw pen plotter and Processing to imagine the new growth happening in early spring in Central Park.

From the warmth of my room, I researched March weather data in Central Park as well as geotagged posts of spring flowers, outsourcing my reference-gathering to the data of others.

I made some simple plots of the weather data using Processing, comparing humidity and temperature by date and plotting the points as small circles.

Cloud cover was additionally represented by adding rings around the circles: one extra ring if cloud cover was between 30% and 60%, two if it was anything over 60%.
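For the curious, the ring rule above is simple enough to sketch in a few lines of code. This is a toy version in plain Java rather than my actual Processing sketch, and the function name and sample values are placeholders:

```java
// Toy sketch of the cloud-cover ring rule described above. Plain Java,
// not the original Processing code; names and values are placeholders.
public class CloudRings {

    // How many extra rings to draw around a data point's circle.
    static int extraRings(double cloudCoverPercent) {
        if (cloudCoverPercent > 60) return 2;   // heavy cover: two extra rings
        if (cloudCoverPercent >= 30) return 1;  // partial cover: one extra ring
        return 0;                               // mostly clear: plain circle
    }

    public static void main(String[] args) {
        double[] samples = {15, 45, 80};
        for (double c : samples) {
            System.out.println(c + "% cloud cover -> " + extraRings(c) + " extra ring(s)");
        }
    }
}
```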

With the AxiDraw, I could then draw the plots accurately and precisely, using a waterproof artist’s pen directly on watercolor paper.

I drew lines to create dimensional relationships between the data points, and then “grew” the flowers I’d been researching from home out of the spaces between the data points.

The data formed a “garden” in which to situate the flowers, and the flowers’ pattern depended on the garden’s form, just as the season’s anticipated growth depends on the weather.

The data forms the flowers and structures their layout, just as the forces that data represents shape our gardens year over year.

Aleataxonomy series

“Aleataxonomy”, my shorthand combining “aleatoric” (created through random action) and “taxonomy” (referring to the classification of the elements I use in drawings), is a recent series based specifically on limiting drawings to randomly generated numbers of elements in sequence. The project takes on the problem of the end condition, or “when is the drawing done?”, that I’ve been dealing with since working on series like Schema (one of the first that departed from my historic end condition of “whenever the pattern reaches the edge”). My hypothesis was that even with an artificial end condition, these drawings wouldn’t end up looking unfinished, because of the conscious and subconscious aesthetic evaluation I unavoidably engage in as I draw.

Procedurally, I would first figure out what types of patterns would work for this process, break their individual procedures down into steps and ranges, and then write a script in Processing that would generate random values (within those ranges) each time it ran.

For some of these processes I made intermediate sketches to determine how I should balance out the ranges of values before I could write the randomizing script.

Once I had a solid idea of the steps and ranges of values, I wrote scripts that would choose a random number from a range for each step and, critically, read the choices back to me as clear step-by-step instructions. Much aleatoric art relies on physical randomization tools like dice, but with text output and random value generation available in Processing, I could define custom ranges over whatever values I wanted.

Once I had the readout, my task was to follow the instructions to the letter (often using tally marks on scratch paper to record how many of each mark I’d completed).
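A minimal version of that kind of readout script might look like the following. This is plain Java standing in for my Processing code, and the steps, ranges, and wording here are invented for illustration:

```java
import java.util.Random;

// Toy version of the "dice roll" readout script described above. The steps,
// ranges, and instruction wording are placeholders, not the original series.
public class Aleataxonomy {
    static final Random rng = new Random();

    // Pick a random count in [lo, hi], inclusive, for one step of the drawing.
    static int roll(int lo, int hi) {
        return lo + rng.nextInt(hi - lo + 1);
    }

    public static void main(String[] args) {
        // Each step reads its choice back as a plain instruction to follow.
        System.out.println("Step 1: draw " + roll(150, 250) + " small houses.");
        System.out.println("Step 2: connect them with " + roll(10, 30) + " roads.");
        System.out.println("Step 3: add " + roll(0, 5) + " ponds.");
    }
}
```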

– Example of a first step. The result of the Processing “dice roll” is at the bottom of the black rectangle below the code. (For those following along at home, the code that requests that printout is println())

– Process for a full drawing

There are many directions I can take this kind of process. One option I wanted to pursue was using the exact same values to make multiple drawings. Would they be indistinguishable, completely different, or somewhere in between?

One of the unexpected upshots of this project was learning just how many houses or lines can fit in a small space without my realizing it. Even at a scale like 6”x8”, with large counts like 150–250, it turns out that’s barely enough “house” marks to make a space look inhabited.

In general, I’m glad I indulged this impetus to take programmatic drawing to a bit of an extreme. It’s another tool I can use to determine when a piece is “finished”; it allowed me to explore aesthetic balances that I might not have chosen to work with before; and in a way it gives me more confidence in the persistence of my artistic style.

Fathom presentation

I connected with Fathom Information Design during the Eyeo festival and they invited me to give a presentation during their Friday social hour in mid-July about my work and its connection to infovis and computational artmaking. I drew up a slideshow about my recent forays into Processing and how they relate to the “Organic Algorithm” series I’ve been developing.
Handwritten “algorithm” for generating a hand-drawn map

Organic Algorithm Animation
Procedurally-created hand-drawn map process

Map of a local pub

Visual notes from Eyeo during a presentation by Ben Fry, co-creator of Processing and head of Fathom

The best part of the presentation was that while I was presenting my visual notes, I was being visualized myself! Rachel Harris, one of the attending Fathom employees, drew an amazing map of what I’d talked about; see more of her work on her Instagram here.

Style transfer maps

I had a great time at the recent Machine Learning for Artists hack day at Bocoup. I didn’t actually accomplish much myself—I generally resorted to drawing in my sketchbook—but I learned a lot from the projects presented, and was fascinated to see my drawings used for style transfer, a project K. Adam White and Kawandeep Virdee, among others, took on during the event. New drawings of mine were created without my having to do anything! It was magical.
We ended up with this…
by combining these two

Style transfer is a machine learning process whereby a content image is transformed using the style of a second, style image. If you’ve seen the Google Deep Dream project, with its hallucinatory puppyslugs in pop colors, it’s related to that.
Google Maps transformed by my abstract map art

More recently, the Prisma app has been super popular for Instagram posts and uses a similar technology—it gives you a range of artworks that you can use to transform the style of your photos. When I gave it an image of my map art, it even added extra roads to it:
Using this local style transfer process is like being able to transform a photo Prisma-style into ANY type of artwork. So of course I got mappified :)
I’m excited to see what else can be done with this process, and especially if it’ll affect my drawing in any way. I’m already thinking about doing a fractal drawing process where I make progressively bigger drawings with the aid of these machine-hallucinated map details. Stay tuned!

EYEO Festival visual notes

I recently attended Eyeo Festival for the second time, in early June. This time, instead of my separate sketch and written journals, I took notes all in my multimedia sketchbook. This meant that for almost every talk I attended, I ended up with at least one page full of important quotes and memorable visuals. Some highlights:
Alexis Lloyd on the history of robots and androids in our culture and our relationship to them. The video of her talk is up here!
Tega Brain on her amazing IoT-type art projects, which she called “post-scary media arts”. She’s an inspiration. View the talk here.
Patricio Gonzalez Vivo on synchronicity and his AMAZING projects. Check out the talk here.
Ben Fry, head of Fathom Information Design and co-creator of the fundamental computational drawing tool Processing, telling it like it is about “data visualization… and its hip cousin, ‘data-vis!’”. See the whole talk here.
Gene Kogan presented so much fascinating info about style transfer and machine learning that I literally wrote off the page; he’s the only presenter I needed more than one page for. Somehow I still managed to get some visual representations of his slides in too! They help me remember the presentation, since I’m mostly a visual thinker. The video isn’t up on the Eyeo channel yet, but hopefully soon!
I’m not sure if you can tell yet, but Eyeo was -extremely- inspirational. I’m in love with all the projects here, and I’m so glad I have my notes to remind me what I want to strive for. I’ve already had a hand in some style transfer experiments using my maps—keep an eye out for a post on that soon!

Processing sketches

One of my many goals this year has been to learn more about scripting and procedurally generating graphics, for which I’ve been using the language Processing. I don’t have a lot of prior experience with code, though, and that makes things a bit difficult!

It’s a totally different way of thinking. Processes that seem simple enough to me—like drawing a series of city blocks that are different shapes but all have the same width streets between them, for instance—are surprisingly difficult to put into code. And at the same time, graphics that would be difficult or just tiresome for me to execute physically, like filling the screen with parallel lines or drawing the same shape over and over in different places, are particularly simple.
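To show the “easy for code” side of that trade-off, here is a minimal parallel-lines sketch. It’s plain Java rather than Processing, and the canvas size and spacing are placeholder values; in an actual Processing sketch, each computed position would become a line() call:

```java
// Minimal sketch of the "fill the screen with parallel lines" idea.
// Plain Java stand-in; canvas dimensions and spacing are placeholders.
public class ParallelLines {
    static final int WIDTH = 400, HEIGHT = 300, SPACING = 12;

    // Compute the y-coordinates of evenly spaced horizontal lines.
    static int[] lineYs() {
        int count = HEIGHT / SPACING + 1;
        int[] ys = new int[count];
        for (int i = 0; i < count; i++) {
            ys[i] = i * SPACING;  // in Processing: line(0, ys[i], WIDTH, ys[i]);
        }
        return ys;
    }

    public static void main(String[] args) {
        System.out.println(lineYs().length + " lines, " + SPACING + "px apart");
    }
}
```

A few lines of loop produce a screenful of marks that would take me much longer by hand, which is exactly the asymmetry I mean.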
Much of the learning curve has just been differentiating between the different parts of code and learning their names so I can understand instructions, even before learning the actual functions.

That all said, I’ve been having a pretty good time coming up with little visual programs. Below are some short animations that I’ve made so far. I’m particularly proud of the fact that I came up with and wrote all the code for each one myself. It’s possible to copy others’ code, but I find that that often causes more problems than it solves—and it’s more satisfying to know I coded something from scratch.