Learning to grow with crystals and ML
As I’ve been growing sugar crystals I’ve photographed them at every size and from every angle, so that I can build a machine learning model in Runway ML. The video above is a work in progress, a first test, as I figure out whether and how it’s interesting.
I recently met Jenny Rhee, STS scholar and author of The Robotic Imaginary: The Human and the Price of Dehumanized Labor, who has been researching artists working with AI and machine learning. We talked about learning how to feed the machine learning model. While making the ‘Seeding Things’ series, and ‘Gathering Downstream’, I’ve developed a way of photographing every stage of something growing, so that the ML model knows all the possible growth stages, can reproduce versions of them, and can appear to grow in the animations I produce with it. It’s really a mutual learning process. I know that if I take all the photographs in interesting light, that will get picked up. For a lot of these images I used a small phone camera macro lens, to get good close-up focus, and mostly took photographs within a glass jar on a windowsill with the morning light behind it. To create the movement I want, it doesn’t need carefully composed images; it needs all the possible angles and stages of growth so that it knows all of the options. (At least this is how I think of it.)
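The training itself happens inside the Runway app, but the sorting-and-coverage step behind it can be sketched as a small script. This is only an illustration: the filename scheme (`stageNN_angleNN.jpg`) is my own assumption, a way of checking that every growth stage has at least some photographs before feeding the set to the model.

```python
import re
from collections import defaultdict

def group_by_stage(filenames):
    """Group photo filenames by growth stage.

    Assumes a hypothetical naming scheme like 'stage03_angle12.jpg',
    where the stage number records how far the crystal has grown.
    """
    stages = defaultdict(list)
    for name in filenames:
        match = re.match(r"stage(\d+)_angle(\d+)", name)
        if match:
            stages[int(match.group(1))].append(name)
    return dict(stages)

def missing_stages(stages, expected):
    """Report which growth stages have no photographs yet."""
    return sorted(set(range(expected)) - set(stages))

# Example: stage 1 has no photographs, so it shows up as a gap.
photos = ["stage00_angle01.jpg", "stage00_angle02.jpg",
          "stage02_angle01.jpg"]
print(missing_stages(group_by_stage(photos), expected=3))  # → [1]
```

The point of a check like this is the same as the photographing habit it describes: the model can only interpolate between stages it has actually seen, so gaps in the set become gaps in the apparent growth.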
When Ruskin writes about rock that looks like sugar, he suggests that if you saw a chunk of it in a sugar bowl, it would look just the same crystalline form. A material that could stand in for another, could seem to be another. In the same section of Modern Painters IV he also talks about the way that minerals from rocks become soil and help things to grow, a nourishing presence.
In ‘The Perception of the Environment’ (2000) Tim Ingold discusses ‘How to see from everywhere at once’. Following James Gibson, he discusses how we understand what we see in front of us, including the parts of it that are not necessarily visible, because we have been observing that thing, or similar things, over the course of our lives. In a way, what I’m doing for the ML model is trying to capture seeing while moving, and seeing over time, in a series of images. Perhaps building an ecological visual perception for machine learning.