As an ML researcher/artist at Onformative, Berlin, I'm working on a 3D digital sculpting project in Unity3D.
We use reinforcement learning, available in Unity3D via its ML-Agents machine learning library, to train a sculpting agent to creatively carve complex shapes out of a block.
The agent uses a combination of engineered sensors and visual input to build a good estimate of its partially observable environment.
The final version will have the agent using different tool shapes, each resulting in a different aesthetic.
Synchronization in dynamical systems: Fireflies
A long-term goal is a music code library/toolkit of algorithms that simulate complex systems; the current work is a first step.
Firefly synchronization is a biological phenomenon in which the emission of flashes starts off randomly and, over the course of a night, synchronizes: firing correlates over time, culminating in synchrony.
This phenomenon has been studied by biologists and mathematicians, resulting in multiple papers with mathematical models describing it.
The paper I used for reference is Firefly-inspired Heartbeat Synchronization in Overlay Networks.
The algorithm described there is the Ermentrout algorithm, which adjusts the firing frequency of each fly depending on the firing behavior of its neighbors.
Implementing it in Python gave quite a nice result.
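To give a feel for how such a simulation is structured, here is a toy pulse-coupled sketch (the actual Ermentrout rule adapts each fly's firing frequency; for simplicity this version nudges phases forward instead, and all parameter values are illustrative):

```python
import random

def simulate(n=50, seconds=20.0, dt=0.001, nudge=0.01, seed=1):
    """Toy pulse-coupled firefly model. Each fly's phase advances at its
    own natural rate; when the phase reaches 1.0 the fly flashes, and
    every fly that sees a flash jumps slightly toward its own flash."""
    rng = random.Random(seed)
    freq = [rng.uniform(0.95, 1.05) for _ in range(n)]  # cycles per second
    phase = [rng.random() for _ in range(n)]
    flashes = []                                        # (time_sec, fly_id)
    for step in range(int(seconds / dt)):
        t = step * dt
        fired = set()
        for i in range(n):
            phase[i] += freq[i] * dt
            if phase[i] >= 1.0:                         # flash and reset
                phase[i] -= 1.0
                fired.add(i)
                flashes.append((t, i))
        if fired:
            for i in range(n):
                if i not in fired:                      # observers jump forward
                    phase[i] = min(phase[i] + nudge, 1.0)
    return flashes
```

Plotting the returned `(time, fly_id)` pairs as dots gives exactly the kind of raster graph shown below.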
A result of a synchronization run with 800 fireflies is plotted on the graph below: each blue dot is a flash event from a firefly (ID on the y-axis) at a time in seconds (x-axis).
As you can see, the simulation converges to synchronous firing within ~10 seconds.
Firing events are transformed into MIDI signals and sent to Ableton with Python's mido MIDI library.
Below is an example audio file made by synchronizing while also lowering the natural firing frequency over the course of the simulation, which results in an interesting path toward a slow synchronization at the end.
The code can be found on my GitHub.
The biggest critique of using deep neural networks as models in science is that we don't know exactly how they compute; they're called black-box models for that reason.
Understanding a black box like the brain with another black box is not useful.
However, various attempts have been made to understand them better, for example by presenting images and visualizing the representations in different layers.
Another approach is to synthesize input images by using gradient ascent to maximize the classification score of a selected class, filter, or neuron.
This is called activation maximization.
To play with that, I trained a CNN on CIFAR-10 and applied activation maximization to it.
My results are visualizations that maximize the classification score for specific CIFAR-10 classes.
I did get some good ones that looked like the various categories I was trying to visualize.
However, after playing a bit more with it I got some unexpected results, displayed on the right.
A lot of the work is playing with different regularization techniques applied to the updates of the synthetic input image.
Stronger natural-image priors are necessary for this gradient-ascent technique to produce better visualizations, and playing around means adjusting the strength of the available regularizers.
L2 regularization helped a lot, but I found that higher degrees of Gaussian regularization (used in this paper to correlate pixels more with one another) gave a really cool aesthetic to the output. It's all about finding the balance.
My results were a bit overdone on the regularization end.
I was quite surprised that the results didn't contain any strong features of the categories they represent, yet they're rather beautiful.
Another way to describe this is making adversarial examples pretty.
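The basic loop described above can be sketched as follows (a generic, framework-agnostic illustration, not my original code; `score_grad` stands in for whatever library computes d(score)/d(image) for the trained CNN, and all hyperparameters are arbitrary):

```python
import numpy as np

def gaussian_blur(img, sigma, radius=2):
    """Separable Gaussian blur over the two spatial axes."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    for axis in (0, 1):
        img = np.apply_along_axis(
            lambda v: np.convolve(v, k, mode="same"), axis, img)
    return img

def activation_maximization(score_grad, shape=(32, 32, 3), steps=200,
                            lr=1.0, l2=0.01, blur_every=4, blur_sigma=0.5,
                            seed=0):
    """Gradient-ascend a synthetic input image to maximize a class score,
    with L2 decay and periodic Gaussian blur as weak natural-image priors."""
    rng = np.random.default_rng(seed)
    img = rng.normal(0, 0.1, shape)          # start from small noise
    for step in range(steps):
        img += lr * score_grad(img)          # ascend the class score
        img *= (1.0 - l2)                    # L2 decay pulls pixels to zero
        if blur_every and step % blur_every == 0:
            img = gaussian_blur(img, blur_sigma)  # correlate nearby pixels
    return img
```

Turning `l2`, `blur_every`, and `blur_sigma` up or down is exactly the balancing act between recognizable features and the smoother, over-regularized aesthetic.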
Multi-spectral vision and image recognition on iOS
At Plant Vision we developed a demo to show our capabilities with infrared detection on the iPhone and with embedding neural networks in an iOS app for image recognition.
It was put together with the FLIR SDK, TensorFlow Hub for transfer learning to custom categories, and Core ML for embedding the model, all glued together in Swift and Objective-C.
For two semesters I did neuroscience/machine-learning research at MIT's DiCarlo lab, which specializes in a computational-model approach to studying the brain's visual system. I was supervised by Prof. J. DiCarlo, Dr. P. Bashivan, and Dr. J. Kubilius on two projects:
One project was about using neural architecture search to find more brain-like models (Teacher Guided Architecture Search). The idea is to use neural architecture search to construct convolutional neural network architectures closer to the brain's 'architecture'.
This can be done by comparing the representations of visual input at various depths in the brain and in the model using representational dissimilarity matrices (RDMs).
Putting this similarity into the objective function steers the search toward sampling more brain-like models.
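The RDM comparison itself is simple to sketch in NumPy (a standard RSA-style score, not the lab's exact pipeline; feature matrices are stimuli × units):

```python
import numpy as np

def rdm(features):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the feature vectors of every pair of stimuli."""
    return 1.0 - np.corrcoef(features)

def rdm_similarity(feats_a, feats_b):
    """Compare two RDMs (e.g. a brain area vs. a model layer) by
    correlating their upper triangles."""
    a, b = rdm(feats_a), rdm(feats_b)
    iu = np.triu_indices_from(a, k=1)       # unique off-diagonal pairs
    return np.corrcoef(a[iu], b[iu])[0, 1]
```

A score like this, computed between model layers and recorded neural data, is what gets added to the search objective.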
Here I learned and researched a great deal about reinforcement learning by testing and implementing state-of-the-art reinforcement learning optimization algorithms (PPO and REINFORCE) to make the search more efficient.
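The core REINFORCE update is compact enough to show on a toy problem (an illustration of the policy-gradient idea over a single discrete architecture choice, not the lab's search code; the moving-average baseline is one common variance-reduction choice):

```python
import numpy as np

def reinforce_search(reward_fn, n_choices, steps=2000, lr=0.1, seed=0):
    """Learn a softmax distribution over discrete choices: sample a choice,
    observe its reward, and nudge its log-probability in proportion to
    (reward - baseline)."""
    rng = np.random.default_rng(seed)
    logits = np.zeros(n_choices)
    baseline = 0.0
    for _ in range(steps):
        p = np.exp(logits - logits.max())
        p /= p.sum()
        a = rng.choice(n_choices, p=p)
        r = reward_fn(a)
        baseline = 0.9 * baseline + 0.1 * r   # moving-average baseline
        grad = -p
        grad[a] += 1.0                        # d log p(a) / d logits
        logits += lr * (r - baseline) * grad
    return logits
```

In actual architecture search, `reward_fn` is the expensive part: train the sampled architecture and return its validation score (plus, in the teacher-guided setting, the brain-similarity term).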
The other project used neural architecture search to find and analyse recurrent, efficient cells for object recognition.
This project was designed to steer the search toward more parameter-sparse models by rewarding parameter sparsity in the optimizer's objective function.
The model space we searched consisted of cells of recurrent models (RNNs) for image recognition; common implementations of such cells are LSTM and GRU cells.
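For a rough sense of the parameter counts involved, the standard gate formulas make the LSTM/GRU gap concrete (the log penalty and `beta` below are illustrative choices, not the project's exact objective):

```python
import math

def lstm_params(input_size, hidden_size):
    """Standard LSTM cell: 4 gates, each with input weights,
    recurrent weights, and a bias."""
    return 4 * (hidden_size * (input_size + hidden_size) + hidden_size)

def gru_params(input_size, hidden_size):
    """Standard GRU cell: 3 gates with the same per-gate shape."""
    return 3 * (hidden_size * (input_size + hidden_size) + hidden_size)

def sparsity_aware_reward(accuracy, n_params, beta=0.05):
    """Illustrative search objective: reward accuracy but penalize
    parameter count, steering the search toward sparser cells."""
    return accuracy - beta * math.log10(n_params)
```

Under such an objective, two cells with equal accuracy are ranked by size, so the search pressure favors leaner recurrent cells.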