Firefly synchronization is a biological phenomenon in which flash emission starts off randomly and, over the course of a night, synchronizes: firing correlates over time, culminating in synchrony.

This phenomenon has been studied by biologists and mathematicians, resulting in multiple papers with mathematical models describing it. The paper I used for reference is called Firefly-inspired Heartbeat Synchronization in Overlay Networks. The algorithm described there is the Ermentrout algorithm, which adjusts the firing frequency of each firefly depending on the firing behavior of its neighbors. Implementing it in Python gave quite a nice result. A synchronization run with 800 fireflies is plotted on the graph below. Each blue dot is a flash event from a firefly ID (y-axis) at a time in seconds (x-axis). As you can see, the simulation converges to synchronous firing within ~10 seconds.
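The core loop can be sketched roughly as follows. This is a loose simplification of the idea, not the paper's exact update rule: every firefly advances its phase at its own frequency, flashes when the phase wraps past 1, and nudges its (bounded) frequency whenever it sees another firefly flash. All names and constants here are my own choices.

```python
import random

def simulate(n=50, steps=2000, dt=0.01, f_low=0.9, f_high=1.1, eps=0.1):
    """Loose sketch of frequency-adjusting firefly synchronization.
    Each firefly advances its phase at its own frequency and flashes
    when the phase wraps past 1. On seeing a flash, every other firefly
    nudges its frequency (kept within [f_low, f_high]) up or down
    depending on where it is in its own cycle."""
    phase = [random.random() for _ in range(n)]
    freq = [random.uniform(f_low, f_high) for _ in range(n)]
    flashes = []  # (time in seconds, firefly id)
    for step in range(steps):
        t = step * dt
        flashed = []
        for i in range(n):
            phase[i] += freq[i] * dt
            if phase[i] >= 1.0:
                phase[i] -= 1.0
                flashed.append(i)
                flashes.append((t, i))
        for i in flashed:
            for j in range(n):
                if j != i:
                    # speed up if the flash arrived early in j's cycle,
                    # slow down if it arrived late (simplified rule)
                    nudge = eps * (0.5 - phase[j]) * dt
                    freq[j] = min(f_high, max(f_low, freq[j] + nudge))
    return flashes
```

The returned event list is exactly what gets scattered in the plot: one (time, id) point per flash.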

Firing events are converted to MIDI signals and sent into Ableton with Python's mido library. Below is an example audio file made from a synchronization run in which the natural firing frequency is also lowered over the course of the simulation, which results in a pretty interesting path toward a slow synchronization at the end.
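A minimal sketch of that MIDI bridge, assuming flash events come in as (time, id) tuples. The note mapping and function names here are my own, not the project's actual code; mido's `open_output` and `Message` calls are the real API, but the port name depends on your setup (e.g. a virtual MIDI port routed into Ableton).

```python
def flash_to_note(fly_id, base_note=48, note_range=24):
    """Fold a firefly id into a MIDI note number (an arbitrary mapping)."""
    return base_note + (fly_id % note_range)

def send_flashes(flashes, port_name=None):
    """Stream flash events as note_on/note_off messages via mido.
    `flashes` is a list of (time_in_seconds, firefly_id) tuples."""
    import time
    import mido  # needs a backend, e.g. pip install mido python-rtmidi
    out = mido.open_output(port_name)  # None -> system default port
    start = time.monotonic()
    for t, fly_id in sorted(flashes):
        # wait until this flash's timestamp before sending
        delay = t - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        note = flash_to_note(fly_id)
        out.send(mido.Message('note_on', note=note, velocity=90))
        out.send(mido.Message('note_off', note=note))
    out.close()
```

With 800 fireflies the ids wrap around the note range, so several fireflies share a pitch; that's a deliberate simplification here.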

The next step is making the fireflies reactive to other inputs (e.g. from other Ableton MIDI instruments). Converting the Python code into a JavaScript Max for Live device is also planned. The code can be found on my GitHub.

My results are visualizations made by maximizing the class score for specific classes of CIFAR-10. I did get some good ones that looked like the categories I was trying to visualize. However, after playing a bit more with it I got some unexpected results, displayed on the right.

A lot of the work is playing with different regularization techniques applied to the updates of the synthetic input image. Stronger natural-image priors are necessary to produce better visualizations with this gradient-ascent technique. Playing around means adjusting the strength of the available regularizations. L2 regularization helped a lot, but I found that higher degrees of Gaussian regularization (used in this paper to correlate pixels more with one another) gave a really cool aesthetic to the output. It's all about finding the balance; my results were a bit overdone on the regularization end.
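The update loop behind this is small. Below is a NumPy sketch under my own naming: the network is abstracted into a `grad_fn` that returns the gradient of the class score with respect to the image, L2 regularization is applied as pixel decay, and a cheap blur (standing in for a proper Gaussian filter) is applied every few steps to correlate neighbouring pixels.

```python
import numpy as np

def blur(img, passes=1):
    """Cheap separable 3x3 blur standing in for a Gaussian filter."""
    for _ in range(passes):
        img = (np.roll(img, 1, 0) + np.roll(img, -1, 0) + 2 * img) / 4
        img = (np.roll(img, 1, 1) + np.roll(img, -1, 1) + 2 * img) / 4
    return img

def visualize_class(grad_fn, shape=(32, 32, 3), steps=200, lr=1.0,
                    l2=1e-3, blur_every=10, blur_passes=1, seed=0):
    """Gradient-ascent class visualization with the two regularizers
    discussed above. `grad_fn(img)` must return d(class score)/d(img)
    for your network -- it is a stand-in here, since the real model
    isn't shown. L2 decay pulls pixels toward grey; periodic blurring
    correlates neighbouring pixels."""
    rng = np.random.default_rng(seed)
    img = rng.normal(0, 0.1, shape)
    for step in range(steps):
        img += lr * grad_fn(img)   # ascend the class score
        img *= (1 - l2)            # L2 regularization as pixel decay
        if blur_every and step % blur_every == 0:
            img = blur(img, blur_passes)
    return img
```

Turning `l2`, `blur_every`, and `blur_passes` up or down is exactly the balancing act described above.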

I was quite surprised that the results didn't contain any strong features of the category they represented, yet they're rather beautiful. Another way to describe this is making adversarial examples pretty.

This was all put together with the FLIR SDK, TensorFlow Hub for transfer learning to custom categories, and Core ML for embedding. Everything was glued together in Swift and Objective-C.

One project was about using Neural Architecture Search to find more brain-like models (Teacher Guided Architecture Search). The idea is to use Neural Architecture Search to construct convolutional neural network architectures closer to the brain’s 'architecture'. This can be done by comparing the representations of visual input at various depths in the brain and in the model using representational dissimilarity matrices. Putting this similarity in the objective function steers the search toward sampling more brain-like models. Here I learned and researched a ton about reinforcement learning by testing and implementing state-of-the-art reinforcement learning optimisation algorithms (PPO and REINFORCE) to make the search more efficient.
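To illustrate the RL side, here is a minimal REINFORCE controller in NumPy. It is a toy stand-in for the actual search (which ran PPO and REINFORCE over real architectures): each architecture "slot" (e.g. an operation choice per layer) is a categorical decision, logits are pushed toward choices that earn a high reward, and a moving-average baseline reduces variance. In the project the reward would include the RDM-based similarity to brain representations; here `reward_fn` is left abstract.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def reinforce_search(reward_fn, n_choices, n_slots, iters=1500, lr=0.2, seed=0):
    """Minimal REINFORCE controller for architecture search.
    Samples one choice per slot from a per-slot softmax, scores the
    sampled architecture with reward_fn, and updates logits with the
    policy-gradient estimator (advantage = reward - moving baseline)."""
    rng = np.random.default_rng(seed)
    logits = np.zeros((n_slots, n_choices))
    baseline = 0.0
    for _ in range(iters):
        probs = np.array([softmax(row) for row in logits])
        arch = [rng.choice(n_choices, p=p) for p in probs]
        r = reward_fn(arch)                 # e.g. accuracy + RDM similarity
        baseline = 0.9 * baseline + 0.1 * r
        adv = r - baseline
        for s, a in enumerate(arch):
            grad = -probs[s]                # d log p(a) / d logits
            grad[a] += 1.0                  # = onehot(a) - probs
            logits[s] += lr * adv * grad
    return logits
```

With a separable reward the controller's logits concentrate on the highest-scoring choice in every slot, which is the basic mechanism the search relies on.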

The other project used Neural Architecture Search to find and analyse recurrent, efficient cells for object recognition. This project was designed to steer the search toward more parameter-sparse models by rewarding parameter sparsity in the optimizer's objective function. The model space we searched over consisted of cells of recurrent models (RNNs) for image recognition; common implementations of these cells are LSTM and GRU cells.
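Such a sparsity reward can be as simple as subtracting a size penalty from accuracy. This is my own toy formulation of the idea, not the project's exact objective:

```python
def sparsity_reward(accuracy, n_params, target_params=1_000_000, alpha=0.5):
    """Toy multi-objective NAS reward: keep validation accuracy high
    while penalising models that exceed a parameter budget.
    alpha trades accuracy off against model size."""
    over_budget = max(0.0, (n_params - target_params) / target_params)
    return accuracy - alpha * over_budget
```

A cell at twice the budget with the same accuracy then scores `alpha` lower, which steers the controller toward parameter-sparse cells without forbidding large ones outright.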