In a recent newsletter from Ello I found a very interesting piece of art created by Laura Fluture that inspired me to challenge myself.
The post contains blinking images, so be careful if you have any problems with that (the GIFs are quite big, ~15MB).
Here is the original image:
I asked myself: “Can I implement this on iOS with code alone?” I already know how to generate structures, so I decided to use a real image and focus on the effects and animations. Technically easy, you may say? If you look at this as a sequence of predefined images, then yes; but if you need to apply the effect to a random structure at runtime, it becomes a good challenge.
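To give a sense of what “generating the structure at runtime” involves, here is a minimal sketch of the geometry for a flat-top hexagon tiling. This is my own illustrative code, not the original implementation: the function names and the choice of a flat-top orientation are assumptions.

```swift
import Foundation

// Compute center points for a flat-top hexagon grid covering a width x height
// canvas. `size` is the hexagon's circumradius (distance from center to corner).
func hexagonCenters(width: Double, height: Double, size: Double) -> [(x: Double, y: Double)] {
    let horizontalStep = 1.5 * size         // distance between column centers
    let verticalStep = sqrt(3.0) * size     // distance between row centers
    var centers: [(x: Double, y: Double)] = []
    var column = 0
    var x = 0.0
    while x <= width {
        // every other column is shifted down by half a row, producing the tiling
        let yOffset = (column % 2 == 0) ? 0.0 : verticalStep / 2
        var y = yOffset
        while y <= height {
            centers.append((x: x, y: y))
            y += verticalStep
        }
        column += 1
        x += horizontalStep
    }
    return centers
}

// Corner points of a flat-top hexagon around a given center,
// ready to be turned into a path and drawn.
func hexagonCorners(cx: Double, cy: Double, size: Double) -> [(x: Double, y: Double)] {
    (0..<6).map { i in
        let angle = Double(i) * .pi / 3  // 0°, 60°, 120°, ... for a flat-top hexagon
        return (x: cx + size * cos(angle), y: cy + size * sin(angle))
    }
}
```

With the centers and corners in hand, each hexagon can be rendered as a filled path and animated independently.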
Here is the result:
Everything (hexagons, animations, filtering, etc.) is generated at runtime. The same algorithm can be applied to live video.
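One step such a pipeline needs is picking a flat fill colour for each hexagon from the source image. A simple way to do that is to average the pixels belonging to each cell; the sketch below assumes pixels are already available as an RGB array and assigns each pixel to its nearest hexagon center (which, for a regular hexagonal grid, is the containing cell). The types and function names are mine, for illustration only.

```swift
import Foundation

struct RGB { var r: Double; var g: Double; var b: Double }

// Average the pixels that fall inside each hexagonal cell, so every hexagon
// can be filled with one flat colour sampled from the source image.
// `pixels` is a row-major array of per-pixel colours; `centers` are the
// hexagon centers of the grid.
func averageColors(pixels: [RGB], width: Int, height: Int,
                   centers: [(x: Double, y: Double)]) -> [RGB] {
    var sums = [RGB](repeating: RGB(r: 0, g: 0, b: 0), count: centers.count)
    var counts = [Int](repeating: 0, count: centers.count)
    for y in 0..<height {
        for x in 0..<width {
            // nearest center == containing hexagon for a regular hex grid
            var best = 0
            var bestDist = Double.infinity
            for (i, c) in centers.enumerated() {
                let d = pow(Double(x) - c.x, 2) + pow(Double(y) - c.y, 2)
                if d < bestDist { bestDist = d; best = i }
            }
            let p = pixels[y * width + x]
            sums[best].r += p.r; sums[best].g += p.g; sums[best].b += p.b
            counts[best] += 1
        }
    }
    return zip(sums, counts).map { s, n in
        n == 0 ? RGB(r: 0, g: 0, b: 0)
               : RGB(r: s.r / Double(n), g: s.g / Double(n), b: s.b / Double(n))
    }
}
```

The brute-force nearest-center search is fine for a one-off image; for live video you would replace it with a direct pixel-to-cell index computation.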
When I got the result above, I was really happy that I was able to reproduce a similar view. I didn't get some of the things the artist made, e.g. the blinking effect. I played with colours and animation frames, but with no success. It was quite hard to figure out how the colours are applied to the images, how many colour groups are in the image, how many of the generated hexagons should be coloured, and so on. So I was happy with the technical part of the work and not so happy with the artistic part. Anyway, I decided to try again later and opened Facebook… and soon after, the App Store.
Prism. Now. Available. Offline.
While I was colouring hexagons, those guys implemented a neural network on the iPhone… It may also be that the current level of neural networks' understanding of images is good enough that the model doesn't need to learn anything further on the device, only run inference, and that is a reason why it can run on an iPhone.
It's hard to grasp, but each of these solutions sits on the frontier of software development, has many more applications than just filtering photos for social network albums, and can run on such a small device… wow. Just a year ago this wasn't possible, and a lot of people were 100% sure you needed big servers to process data that fast.