Semantic Granola
Isolating clusters of meaning in AI through visual ambiguation
I have been investigating new approaches in my image-making practice by incorporating renders of NeRFs (neural radiance fields) into the process. The current name I use for the technique is “nerf-coring” because it reminds me of how scientists take core samples to look into the stuff that is hidden in matter. The process produces views into an immaterial space that remains in a latent state until it is looked at.
Generative AI is a double-edged sword: it can be manipulated to act as a parasitic hijacker of visual lexicons, but it can also provide great insight into the representation of ideas with its capacity to dissect and recombine information. The above image came out of the “Stable Diffusion Reimagine” algorithm: an input image is provided and variations of that image are procedurally created, with the goal that these variations convey the same informational payload. In order to translate the original image into the semantic abstraction needed to get the job done, this process relies on a corpus of references from which associations are derived and used as templates to guide the cloning process. In this case, it is apparent that the algorithm isn’t quite able to parse the unusual input. It does preserve some of the semantic clusters (blue-ness, streakiness, New York-ness, tearing) and is even able to associate them with high-rise buildings not present in the original but, similar to the fate of the hapless scientist played by Jeff Goldblum in “The Fly”, those clusters end up being reassembled in a different way.
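For anyone curious how such a variation pass can be reproduced outside the Reimagine web tool, here is a minimal sketch using the open stable-diffusion-2-1-unclip checkpoint (the model Reimagine is publicly described as being built on) through the diffusers library. The input file name is a placeholder for one of my nerf-coring renders, and the details are an assumption about one possible setup rather than a description of my exact workflow.

```python
# Minimal sketch: image variations in the spirit of "Stable Diffusion Reimagine",
# using the stable-diffusion-2-1-unclip checkpoint via the diffusers library.
# The input file name below is a placeholder.
import torch
from PIL import Image
from diffusers import StableUnCLIPImg2ImgPipeline

# The unCLIP variant conditions on a CLIP image embedding instead of a text prompt,
# so the model re-draws the picture from its own semantic reading of it.
pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

source = Image.open("nerf_core_render.png").convert("RGB")

# Each call samples a new variation that keeps the embedding (the "semantic clusters")
# while letting the pixels be reassembled differently.
variation = pipe(source).images[0]
variation.save("reimagined.png")
```

Running the sampler repeatedly on the same source makes the point of the essay visible: the blue-ness and the streaks persist across outputs, while their arrangement does not.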
A chameleon may look like the thing it is standing on, but its appearance is not a manifestation of the same process.