Side Projects and Hobbies


Side projects allow me to play with a wide range of concepts and ideas without subjecting them to the same degree of professional scrutiny as my research directions. My projects span a variety of areas: mathematics, automation, investing, game design, and more. I'd be happy to discuss any of them in more detail if you're interested!

You can find more half-baked projects on my GitHub.

Disclaimer: Most of the projects presented here are treated as hobbies and are not necessarily representative of my analytical, creative, or coding skills. Their purpose is to showcase some things I'm excited about and help me look super cool and interesting 😎



Artwork Generation via Neural Style Transfer

Neural Style Transfer is a charming technique, introduced in the paper A Neural Algorithm of Artistic Style, that uses a pretrained image classifier to stylize one image in the artistic fashion of another and thereby create new artwork. In addition to being an entertaining concept to play with, Neural Style Transfer is also tightly connected to questions of robustness and adversarial attacks, which is another reason I'm so interested in the topic. While the basic concept by itself is not groundbreaking and one can easily find a number of available implementations, I was never satisfied with the way people set up the optimization objective: it always felt hacky and sloppy (which is not necessarily a bad thing in a very conventional ML-engineering way) and lacked mathematical rigor and attention to technical detail. This is why I created my own implementation, which is more satisfactory to me from an optimization viewpoint and can be used both to generate stylized images and to create white-box adversarial examples. If you play with it and create something cool, please share it with me! Disclaimer: it's a pretty old project and the code is no longer up to my standards, but I plan to come back to it at some point.
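To make the objective concrete, here is a minimal pure-Python sketch of the Gram-matrix style loss from the original paper (not my implementation): feature maps are flattened to shape (channels, positions), and the Gram matrix captures channel-to-channel correlations while discarding spatial layout. The function names and the list-of-lists representation are my own illustrative choices.

```python
def gram_matrix(features):
    """features: C rows, each a list of H*W activations for one channel."""
    c = len(features)
    return [[sum(a * b for a, b in zip(features[i], features[j]))
             for j in range(c)] for i in range(c)]

def style_loss(feat_generated, feat_style):
    """Normalized squared difference between the two Gram matrices,
    as in "A Neural Algorithm of Artistic Style": E = sum(G - A)^2 / (4 C^2 M^2)."""
    g, a = gram_matrix(feat_generated), gram_matrix(feat_style)
    c = len(g)                    # number of channels
    m = len(feat_generated[0])    # number of spatial positions
    diff = sum((g[i][j] - a[i][j]) ** 2 for i in range(c) for j in range(c))
    return diff / (4 * c ** 2 * m ** 2)
```

In a full pipeline, this loss is computed on feature maps from several layers of the pretrained classifier and combined with a content loss on a deeper layer; the stylized image is obtained by optimizing the pixels directly.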




Reinforcement Learning Personalization Challenge

One of the things I quickly realized while working on reinforcement learning for personalization tasks is that it is very non-trivial to control what the agent learns and to ensure that the important aspects of the environment are properly captured. This is especially pronounced in non-tabular settings, where complete exploration of the state-action space is not feasible and an agent must therefore not only learn to appropriately parameterize states and actions but also generalize to the unseen parts of the state-action space. If we additionally restrict the number of agent-environment interactions (which is in line with practical use cases of RL systems), we suddenly end up with an immensely complicated problem that might not even be satisfactorily solvable with most existing approaches and hence calls for the development of novel techniques. To let people experience this phenomenon first-hand, I have created a synthetic personalization environment that anyone can attempt to solve in my Reinforcement Learning Personalization Challenge. In this challenge you are given a contextual bandit environment, and your goal is to train an agent that achieves sufficiently good performance while still providing a non-trivial distribution over the action space. Despite this seemingly simple statement, the structure of the reward signal is based on behavioral preferencing principles and is quite hard to learn with conventional approaches. Give it a go and let me know what you think!
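For readers new to contextual bandits, here is a minimal sketch of the kind of tabular epsilon-greedy baseline one might try first. The toy environment below is a stand-in I made up, not the actual challenge environment, and this tabular approach is exactly the kind that breaks down in the non-tabular, interaction-limited setting described above.

```python
import random

def epsilon_greedy(q_values, epsilon, rng):
    """Pick a random action with probability epsilon, else the greedy one."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)

def run_bandit(n_contexts=3, n_actions=4, steps=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Hidden mean reward per (context, action) pair -- the thing to learn.
    means = [[rng.random() for _ in range(n_actions)] for _ in range(n_contexts)]
    q = [[0.0] * n_actions for _ in range(n_contexts)]
    counts = [[0] * n_actions for _ in range(n_contexts)]
    total = 0.0
    for _ in range(steps):
        ctx = rng.randrange(n_contexts)          # observe a context
        a = epsilon_greedy(q[ctx], epsilon, rng)  # choose an action
        reward = means[ctx][a] + rng.gauss(0, 0.1)
        counts[ctx][a] += 1
        q[ctx][a] += (reward - q[ctx][a]) / counts[ctx][a]  # incremental mean
        total += reward
    return total / steps  # average per-step reward
```

Note that a purely greedy policy would also fail the challenge's requirement of a non-trivial distribution over actions, which is part of what makes the problem interesting.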




Shallow Network Approximation Challenge

As one of the curious outcomes of my early research on neural network approximation, I stumbled upon the counterintuitive and poorly documented phenomenon that conventional approaches to training shallow neural networks perform quite poorly when the input data is very low-dimensional. This fact is somewhat documented in the papers Dying ReLU and Initialization and Trainability of ReLU Networks, though I personally don't agree with the authors' findings. Instead, I believe that the primary reason for such bad performance is tied to the concept of the blessing of dimensionality and is best explained in the amazing paper Deep ReLU Networks Have Surprisingly Few Activation Patterns. But regardless of the underlying justification, I wanted to let people experience this phenomenon for themselves and set up a simple problem, the Shallow Network Approximation Challenge, where this issue can be observed. Feel free to play with it and let me know if you can get a non-trivial approximation. Psst: I actually know how to create a network that perfectly solves this challenge, but it's not trained with a conventional approach; ask me how if you're curious.
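To illustrate one facet of the problem, here is a small sketch (with made-up weights) of why low-dimensional input is hostile to shallow ReLU networks. For one-dimensional input, a shallow network f(x) = sum_i v_i * relu(w_i * x + b_i) is piecewise linear with at most one breakpoint per unit, at x = -b_i / w_i; a unit whose pre-activation is negative over the whole data range is "dead", contributing nothing and receiving zero gradient, so training cannot revive it.

```python
def dead_units(w, b, xs):
    """Indices of hidden units whose pre-activation w[i]*x + b[i] is
    non-positive for every input in xs, i.e. units the ReLU silences."""
    return [i for i in range(len(w))
            if all(w[i] * x + b[i] <= 0 for x in xs)]

xs = [x / 10 for x in range(-10, 11)]  # data range [-1, 1]
w = [1.0, -1.0, 0.5]
b = [-2.0, -2.0, 0.1]
print(dead_units(w, b, xs))  # [0, 1] -- two of three units are dead on [-1, 1]
```

In higher dimensions, each unit's active region is a half-space that intersects the data far more easily, which is one intuition behind the blessing-of-dimensionality explanation mentioned above.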



Mathematical Literature Overview

In the fast-paced stream of ML development it is critically important to stay up-to-date on the literature that is continuously being added to arXiv. While there are many ways to solve this problem, such as the Arxiv Sanity Preserver or ResearchGate and Google Scholar recommendations, I find it helpful to keep an eye on all new submissions, even though it is occasionally time consuming. The reason for my view is twofold: first, it is important to stay open to new ideas that might currently lie outside the field of your immediate research; and second, I don't like the idea of fully relying on personalized recommendation services to provide me with a list of papers to read. After all, my interests change and expand all the time, and if you only read the most relevant publications, you will surely miss a lot of amazing things you didn't even know existed! And while arXiv provides subscription services for monitoring new submissions, I find their mailings extremely hard to read, so that solution does not work for me personally. As a workaround, I wrote a script that retrieves and parses tracked submissions and formats them into an HTML file that can then be sent via email as a regular newsletter. Even though the script can be tweaked to better match your preferences, it will likely surface a large number of submissions (about 100/day for me). As a result, this approach to discovering new literature requires quite a bit of time and will surely not work for everyone, but for me it's totally worth it, and I've been relying on it ever since I wrote it back in 2019!
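As a rough illustration of the parse-and-format step (this is a sketch of the idea, not my actual script), the arXiv API serves Atom XML, which the standard library can parse directly. The sample feed below stands in for a live API response, and the fetching and emailing steps are omitted.

```python
import xml.etree.ElementTree as ET
from html import escape

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom namespace used by the arXiv API

SAMPLE_FEED = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <id>http://arxiv.org/abs/0000.00000v1</id>
    <title>A Placeholder Title</title>
    <summary>A placeholder abstract.</summary>
  </entry>
</feed>"""

def feed_to_html(xml_text):
    """Turn an Atom feed into a simple HTML list of linked titles + abstracts."""
    root = ET.fromstring(xml_text)
    items = []
    for entry in root.iter(ATOM + "entry"):
        link = entry.findtext(ATOM + "id", "").strip()
        title = entry.findtext(ATOM + "title", "").strip()
        abstract = entry.findtext(ATOM + "summary", "").strip()
        items.append('<li><a href="{}">{}</a><p>{}</p></li>'.format(
            escape(link, quote=True), escape(title), escape(abstract)))
    return "<ul>\n{}\n</ul>".format("\n".join(items))

print(feed_to_html(SAMPLE_FEED))
```

The resulting HTML can then be dropped into an email body by any standard mailing library.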




Financial Investment and Algorithmic Trading

Investing is by far my favorite way of losing money. Of all its relevant aspects, I am most intrigued by the overwhelming stochasticity of the financial markets, and I am very curious about various approaches to quantifying and analyzing it in order to understand the underlying asset price movements (and preferably make money in the process). For the past several years I've been inventing and testing various investment strategies in the context of stock and crypto markets: from naive predetermined rules to fully-automated RL agents. Despite numerous attempts, none of my strategies has been able to turn a profit in the long run, but I learn something new every time and still have a bunch of ideas to explore. Unfortunately, I cannot publicly release any of my code due to sensitive information, but I'd be happy to provide recommendations if you're interested in the field and looking for a place to start!
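For a flavor of the "naive predetermined rules" end of the spectrum, here is an illustrative backtest skeleton for a moving-average crossover on a made-up price series. To be clear, this is a generic textbook rule sketched by way of example, not one of the strategies described above, and it ignores fees, slippage, and everything else that makes real backtesting hard.

```python
def sma(prices, window):
    """Simple moving average over the last `window` prices."""
    return sum(prices[-window:]) / window

def backtest_crossover(prices, fast=3, slow=5):
    """Hold the asset while the fast SMA is above the slow SMA;
    returns the final portfolio value starting from 1.0 in cash."""
    cash, units = 1.0, 0.0
    for t in range(slow, len(prices)):
        hist = prices[:t]                       # only past data -- no lookahead
        signal = sma(hist, fast) > sma(hist, slow)
        price = prices[t]
        if signal and units == 0.0:             # enter position
            units, cash = cash / price, 0.0
        elif not signal and units > 0.0:        # exit position
            cash, units = units * price, 0.0
    return cash + units * prices[-1]

prices = [10, 11, 12, 11, 10, 9, 10, 12, 14, 13, 15]
print(backtest_crossover(prices))
```

Swapping in real historical data and a transaction-cost model is the obvious next step, and also where naive rules like this one usually stop looking profitable.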




Automation and Workflow Optimization

In order to enhance my workflow as well as my general quality of life, I am always looking for ways to automate mundane things with various technological solutions. One of the best things I've done toward this goal happened in 2019, when I set up a small home server (and I mean small: I'm still using a Raspberry Pi 3B+ and it works great for all my needs!) that runs a bunch of custom scripts and keeps tabs on various processes that I don't want to monitor myself. It turns out that configuring and maintaining the server is a pretty straightforward process (though I'm still learning), and I wish I'd realized it much sooner. The second best thing is a comprehensive notification system that I've assembled and configured mostly with custom bots for Telegram, a lightweight multi-platform messenger with a nice Python API that makes bot implementation and deployment a breeze. I use Telegram bots for a wide range of tasks: controlling code execution on different machines, monitoring stock and crypto price movements, remote data logging, and more. If you know of any easy-to-implement technological solutions that have simplified some aspect of your life, please share them with me; I'm always looking for new ideas!
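To show how little is needed for basic notifications, here is a minimal sketch using the Telegram Bot API's sendMessage method over plain HTTP (no bot framework required). The "BOT_TOKEN" and chat id are placeholders you would obtain from @BotFather and your own chat; the builder function only constructs the request so it can be inspected without network access.

```python
import json
import urllib.request

API_BASE = "https://api.telegram.org"

def build_notification(token, chat_id, text):
    """Build (but do not send) a sendMessage request for the Bot API."""
    url = "{}/bot{}/sendMessage".format(API_BASE, token)
    payload = json.dumps({"chat_id": chat_id, "text": text}).encode()
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})

def notify(token, chat_id, text):
    """Actually send the message (requires network and a real token)."""
    with urllib.request.urlopen(build_notification(token, chat_id, text)) as r:
        return json.load(r)

req = build_notification("BOT_TOKEN", 123456, "job finished")
print(req.full_url)  # https://api.telegram.org/botBOT_TOKEN/sendMessage
```

Wrapping `notify` in a try/except and calling it at the end of a long-running script is already enough to get a ping on your phone when a job finishes or crashes.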




Game Development

Of all the different types of media, I believe that video games have the most potential due to their ability to combine narrative visualization, audio ambiance, and interactive storytelling with engaging mechanics. I especially appreciate challenging systems that are amusing to untangle, analyze, and master, and I have several concepts that I hope to develop and shape into a playable product. Game development is one of the areas I consistently come back to and would like to gain more hands-on experience with. Unfortunately, even though it is easier than ever to make games nowadays, it is still an incredibly time-consuming process, and I currently cannot afford such a time commitment. The closest I've ever gotten to creating a game is a mod for Mount and Blade: Warband called Kitten Warfare that I developed with a friend during 2015–2017 while pursuing my PhD (I have a couple of mods for other games that I started but never finished; ask me about them if you're interested). While I'm not actively pursuing more game development, I still try to stay in the field by watching relevant gamedev talks and throwing things together in Unity or Godot from time to time.