My name is Igor and I am a student on the Sound and Music for Interactive Games master's course at Leeds Beckett University.
My passion for video games and music started at the age of seven, when I first beat the Pokémon League on my Game Boy Color: I kept replaying it just to hear the amazing champion's theme in the match against Blue (if you don't know what I am talking about).
Over the years I have played many different games, but my favourites are (Japanese) action RPGs, like Xenoblade, Zelda or Monster Hunter.
At 18 I started playing bass in different bands, which made me consider making music not just a hobby but my full-time job. So, after high school, I attended a BTEC National Diploma in Popular Music in my hometown of Verona, followed by a Higher National Diploma in Music. In the meantime, I played bass in various bands, taught the instrument at a music school, and worked as a freelance sound engineer.
I began thinking about merging these two passions when the first Global Game Jam (a worldwide hackathon in which teams have 48 hours to make a game from a given theme) was hosted in Milan.
There I made music and sound for games, which can be heard on my SoundCloud (https://soundcloud.com/igor-dallavanzi).
In 2015 I moved to Leeds for a Bachelor's degree in Music Production, attracted by the fact that one of its optional modules was Game Audio.
There I deepened my knowledge of this field, thanks both to the Game Audio module and to my final year project.
For my project, I worked with a team from the university's Video Game Design course, creating for them an adaptive and partly generative music system for a horror game, using FMOD and Unreal Engine 4. The system can be seen in action here and here, while more information can be found in my online journal for the project.
For the Game Audio module, I implemented sounds and music for a level entirely inside Unreal Engine 4, featuring a generative wind system and a music system that guides the player through the level and decides how to combine different layers of music to avoid repetition. It can be seen in action here.
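The layer-combination idea can be sketched in a few lines of Python. This is purely illustrative: the real system was built inside Unreal Engine 4, and the function and layer names here are my own hypothetical choices.

```python
import itertools
import random


def pick_layer_combination(layers, previous=None, rng=random):
    """Pick a non-empty subset of music layers to play, avoiding an
    immediate repeat of the previous combination (illustrative sketch)."""
    # Enumerate every non-empty combination of the available layers.
    combos = [frozenset(c)
              for r in range(1, len(layers) + 1)
              for c in itertools.combinations(layers, r)]
    # Exclude the combination that just played, so two consecutive
    # passages never sound identical (fall back if it's the only option).
    choices = [c for c in combos if c != previous] or combos
    return rng.choice(choices)


# Example: cycle through combinations of three hypothetical layers.
layers = ["pads", "percussion", "melody"]
current = pick_layer_combination(layers)
current = pick_layer_combination(layers, previous=current)
```

In a real engine the chosen set would drive the volume of each looping stem rather than being returned as data, but the selection logic is the part that prevents repetition.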
Another system I built in Unreal Engine tracks how many enemies are attacking the player and, based on that, waits a shorter or longer time before introducing more engaging music (more enemies attacking means less waiting), so that the score follows the challenge the player is facing. Also, when only one enemy is left, the music becomes more intense and the game waits for a suitable musical moment before letting the player destroy it, for better synchronisation between the score and the game events.
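As a rough illustration of that logic in Python (the actual system lives in Unreal Engine 4; the class name, base delay, and beat tolerance below are hypothetical values of my own, not the real ones):

```python
class CombatMusicDirector:
    """Illustrative sketch of the combat-music logic described above."""

    BASE_DELAY = 8.0  # seconds to wait before escalating with one attacker

    def escalation_delay(self, attackers):
        """More enemies attacking -> shorter wait before the more
        engaging music layer comes in."""
        if attackers <= 0:
            return float("inf")  # no combat: never escalate
        return self.BASE_DELAY / attackers

    def allow_final_blow(self, beat_phase, tolerance=0.1):
        """With one enemy left, only let the killing blow land near a
        beat (beat_phase in [0, 1)), so the kill syncs with the music."""
        return beat_phase < tolerance or beat_phase > 1.0 - tolerance


director = CombatMusicDirector()
director.escalation_delay(4)   # four attackers: escalate after 2 seconds
director.allow_final_blow(0.5)  # mid-beat: hold the kill a moment longer
```

The inverse relationship between attacker count and waiting time is the core of the idea; everything else (layer crossfades, beat tracking) would come from the engine's audio system.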
I am now trying to deepen my knowledge and skills in generative and procedural music: I want to create musical systems that understand what is going on inside a video game and respond with appropriate music, like a band watching the player and improvising around their experience. To achieve that, I recently started studying Max 7, a powerful visual programming language for music and multimedia.
I would also like to extend these systems to installations: imagine, for example, a room where the music is built and changes according to the number of people inside it and their movements.
In this blog I will write about my progress and experiences in this field, and I hope you will find something useful!