A year ago, I created a small demo of animating guilloches as two-dimensional graphics on an HTML5 canvas. In this post I revisit these beautiful and elegant patterns as 3D constructs that resonate with sounds from the physical world.
This tool converts a calendar year between the Gregorian Common Era (CE) คริสต์ศักราช (ค.ศ.), Buddhist Era (BE) พุทธศักราช (พ.ศ.), Jula Sakarat (JS) จุลศักราช (จ.ศ.), and Ratanakosin Sakarat (RS) รัตนโกสินทร์ศก (ร.ศ.) calendars.
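The arithmetic behind such a converter can be sketched with the commonly cited fixed offsets (BE = CE + 543, JS = CE − 638, RS = CE − 1781). This is an illustrative sketch, not the tool's actual code, and it ignores the differing new-year dates of each calendar:

```python
# Hypothetical sketch of the era arithmetic, using the commonly cited
# fixed offsets. Real conversions must account for each calendar's
# new-year date, so years near era boundaries can be off by one.
ERA_OFFSETS = {
    "CE": 0,      # Gregorian Common Era
    "BE": 543,    # Buddhist Era: BE = CE + 543
    "JS": -638,   # Jula Sakarat: JS = CE - 638
    "RS": -1781,  # Ratanakosin Sakarat: RS = CE - 1781
}

def convert_year(year, from_era, to_era):
    """Convert a year between eras by normalizing through the Common Era."""
    ce = year - ERA_OFFSETS[from_era]
    return ce + ERA_OFFSETS[to_era]
```

For example, `convert_year(2024, "CE", "BE")` gives 2567, and `convert_year(2567, "BE", "CE")` gives 2024 back.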
This article outlines the process for porting Andrew Trask’s (aka IAmTrask) 11-line neural network from Numpy (Python) to Torch (Lua).
I’ve documented my progress here for those who are interested in learning about Torch and Numpy and their differences. As I started from scratch, I hope this can prove useful to others who get stuck or need guidance.
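For reference, the Numpy starting point (Trask's 11-line network, lightly annotated here from memory, so treat it as a close approximation of his published code) is a tiny 3-4-1 sigmoid network trained by plain gradient descent on a toy XOR-style dataset; the Lua port maps each of these Numpy calls onto the corresponding torch.Tensor operations:

```python
# Trask's 11-line network (annotated): a 3-4-1 sigmoid net trained
# with full-batch gradient descent on an XOR-style toy dataset.
import numpy as np

np.random.seed(1)                                  # reproducible weights
X = np.array([[0,0,1],[0,1,1],[1,0,1],[1,1,1]])    # inputs (4 examples)
y = np.array([[0,1,1,0]]).T                        # targets (XOR of cols 0,1)
syn0 = 2*np.random.random((3,4)) - 1               # layer-0 weights in [-1,1)
syn1 = 2*np.random.random((4,1)) - 1               # layer-1 weights in [-1,1)
for j in range(60000):
    l1 = 1/(1+np.exp(-(np.dot(X,syn0))))           # hidden sigmoid activations
    l2 = 1/(1+np.exp(-(np.dot(l1,syn1))))          # output sigmoid activations
    l2_delta = (y - l2)*(l2*(1-l2))                # output error * sigmoid'
    l1_delta = l2_delta.dot(syn1.T) * (l1*(1-l1))  # error backpropagated to l1
    syn1 += l1.T.dot(l2_delta)                     # weight updates
    syn0 += X.T.dot(l1_delta)
```

After training, the output layer `l2` sits close to the targets `[0, 1, 1, 0]`.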
Having recently read a blog post on guilloches, I became intrigued, and the post inspired me to recreate them. They are beautiful patterns, and the starting formula for drawing a rosette looked very simple to replicate on an HTML5 canvas. The 30-minute project quickly grew into a several-hour excursion into the beauty of animated guilloches.
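The rosette formula can be illustrated as nested circular motion, a point on a small circle of radius r rolling around a large circle of radius R, traced by a pen at offset p. This is a common parametrization for such curves, not necessarily the exact one from the blog post or the demo, and the parameter values below are hypothetical:

```python
# Illustrative guilloche rosette generator (hypothetical parameters,
# not the demo's exact formula). Each sample is one point the canvas
# path would pass through.
import math

def rosette_points(R=50.0, r=7.0, p=35.0, steps=5000, turns=7):
    """Sample (x, y) points of a rosette traced by a pen at offset p
    on a circle of radius r rolling around a circle of radius R."""
    points = []
    for i in range(steps):
        t = 2 * math.pi * turns * i / steps
        x = (R + r) * math.cos(t) + (r + p) * math.cos(t * (R + r) / r)
        y = (R + r) * math.sin(t) + (r + p) * math.sin(t * (R + r) / r)
        points.append((x, y))
    return points
```

Animating the pattern then amounts to redrawing the path while slowly varying R, r, or p per frame.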
The mechanisms for storing data on the client are inadequate and unprepared for the next generation of web applications. A new solution for persistent state management on the client is needed, one based on well-understood foundations long prevalent on the desktop and server.
After some hammock-driven development, Harissa is mature enough to release some results. Originally intended for entire videos, the process turned out to be better suited to remixes of just a few frames, usually of an identical source image.
I am slowly working on a side project that turns each frame of a video into a mishmash of circles. I have an early version running that manually takes a video, splits it into frames, remixes each frame into the circle mishmash, and recomposes the video from the new remixed frames. The project is called ‘Harissa’. The name comes from an Armenian dish made from chicken and a local type of wheat, cooked for a long time until it becomes a thick porridge. It is a fitting name because fully rendering a video is a slow process, and the result is an interesting mishmash of the original.
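The split and recompose steps around the remix can be sketched as a pair of ffmpeg invocations. This is a hypothetical sketch (file names, frame rate, and the ffmpeg dependency are my assumptions, and the circle-remix step itself is Harissa's own code, only stubbed here):

```python
# Hypothetical sketch of the manual pipeline: explode a video into
# frames, remix each frame (project-specific, not shown), then
# reassemble. Assumes ffmpeg is installed and on the PATH.
import subprocess

def split_cmd(video, frames_dir, fps=24):
    """ffmpeg command that explodes a video into numbered PNG frames."""
    return ["ffmpeg", "-i", video, "-vf", f"fps={fps}",
            f"{frames_dir}/frame%05d.png"]

def recompose_cmd(frames_dir, output, fps=24):
    """ffmpeg command that reassembles the remixed frames into a video."""
    return ["ffmpeg", "-framerate", str(fps),
            "-i", f"{frames_dir}/frame%05d.png", output]

def run(cmd):
    """Execute one pipeline step, raising on a non-zero exit status."""
    subprocess.run(cmd, check=True)
```

The slow part is not ffmpeg but the per-frame remix in between, which is why a full video takes so long to render.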
It is hard to write a program that invents original art. Two of the main reasons software cannot create original, expressive art are lack of context and lack of experience.
Software lacks the ability to derive a human-like context from its surroundings. Trivial examples include not knowing whether a flower is beautiful, or whether satire is funny. Software also does not know how to learn this context: it cannot experience its surroundings the way an observer does, and therefore cannot relate to the subject or connect with the observer in any meaningful way.
I’ve finally open sourced Anagramica (http://anagramica.com/).
The code is now available under the MIT license at https://github.com/binarymax/anagramica
I’m not entirely sure why I never open sourced it in the first place. After 25 years of coding, I’ve only recently become active in opening my code for others to see and use. I have a cathartic story to tell about a previous project, one I’ve never told anyone about and silently open sourced this past winter.
I’ve been known to debate a subject I like to call ‘The Idea is Art’. I argue that whatever imagery we can conceive of in our minds can be considered art, even when it lacks a physical manifestation.
‘What is art’ has been debated ad infinitum, and some like to draw the line and say that something is not art if it cannot be expressed, as art is, by definition, expression.