Soundtheory – Gullfoss
Written by Ricardo Waddell Lewis on 17/03/2019
Everything just got clearer
Gullfoss is an easy-to-use tool for everyone from the amateur musician to the professional mastering engineer.
Its clean user interface offers a set of basic parameters that can be adjusted to improve the clarity, detail, spatiality, and balance of a mix or recording in a matter of seconds.
Gullfoss is an intelligent equalizer that listens to a signal and decides how to prepare the audio so that your brain can get the most information out of it. The realtime analysis of Gullfoss uses Soundtheory’s computational auditory perception model to understand which audible elements are competing for your attention. Gullfoss allows for quick and precise fixes that would otherwise be unsolvable or would require significant time and experience to resolve.
Gullfoss is even capable of fixing balancing issues between different sound elements without access to the individual tracks. The internal auditory model allows Gullfoss to make objective decisions about the perceived sound. As a result, mixes processed with Gullfoss will generally translate more consistently between different listening situations.
Gullfoss, enabled by new patent-pending equalizer technology, processes audio with unrivaled sound quality. The equalizer can change its frequency response more than 300 times per second without introducing audible artifacts or degrading signal quality. Together with Soundtheory's highly advanced computational auditory perception model, Gullfoss is the first and only product of its kind.
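Soundtheory does not disclose how its filters change so quickly without artifacts, but the general problem is well known in DSP: stepping a gain or filter coefficient abruptly produces audible clicks ("zipper noise"), so rapid updates have to be interpolated per sample. The sketch below is purely illustrative and is not Soundtheory's method; it compares applying a stepped gain directly against easing it through a one-pole smoother.

```python
import numpy as np

# Illustrative only, NOT Soundtheory's algorithm: why a rapidly updated
# gain must be interpolated per sample to avoid clicks.
fs = 44100
t = np.arange(4410) / fs                 # 0.1 s test buffer
signal = np.cos(2 * np.pi * 100 * t)     # 100 Hz cosine test tone

# Target gain drops from 1.0 to 0.2 halfway through the buffer.
target = np.where(np.arange(len(t)) < len(t) // 2, 1.0, 0.2)

# Naive: apply the stepped gain directly (audible click at the step).
naive = target * signal

# Smoothed: ease the gain toward its target with a one-pole filter.
smoothed = np.empty_like(signal)
g = target[0]
for i, x in enumerate(signal):
    g += 0.001 * (target[i] - g)         # exponential approach to target
    smoothed[i] = g * x

# The largest sample-to-sample jump is far smaller after smoothing.
print(np.max(np.abs(np.diff(naive))), np.max(np.abs(np.diff(smoothed))))
```

A real dynamic equalizer interpolates whole filter responses rather than a single gain, but the principle is the same: the update rate can be very high as long as the transition between states is continuous.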
It’s no secret that the world of DAWs is evolving quickly with more sophisticated software packages and plugins coming out all the time. It’s also quite clear that many aim to replace existing physical gear or make the whole recording and production workflow faster.
Gullfoss from Soundtheory aims to resolve EQ issues in a mix dynamically, using proprietary algorithms (or, in their words, "unique computational auditory perception technology") to enhance a mix on the fly, solving frequency issues as the music progresses. Used wisely, this could speed up the whole production process quite significantly.
In a world where competition for streaming real estate is fierce, a quick and effective workflow is essential. Let's be honest: major bands in the 70s and 80s could afford months in the studio perfecting their albums, because when those albums came out they were the talk of the town for months or even years. In today's fast-food-like consumption of media, putting out smaller pieces of content more often seems to be the name of the game, so getting to results faster allows for more output with the same effort.
This is a summary of what the controls aim to achieve:
Tame and Recover: pretty straightforward controls. Tame quiets down signal components that may be dominating others, while Recover brings up components that are masked by the dominant ones or not clear enough in the mix.
Bias: sometimes signal components can be borderline. If the algorithm is unsure whether the component should be tamed or recovered, it will use this control to bias its decision. Positive values bias towards recovering, while negative values will give preference to taming.
Brighten: there is a more detailed explanation in Soundtheory's documentation, but essentially this lets you tweak the overall brightness, since the algorithm is genre-agnostic and preferences vary.
Boost: the same idea as Brighten, but for lower frequencies. These two controls let you adjust the final result to your preferences.
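Gullfoss's actual decision logic is proprietary, but the interplay of these four controls can be sketched in the abstract. Everything below is hypothetical: the function name, the "dominance" estimate, the borderline threshold, and the ±12 dB range are all invented for illustration and are not Soundtheory's algorithm.

```python
def band_gain_db(dominance, tame, recover, bias, threshold=0.1):
    """Hypothetical per-band decision, NOT Soundtheory's algorithm.

    dominance: how much this band perceptually dominates the rest of
               the signal, in -1..+1 (positive = masking others,
               negative = being masked).
    tame, recover: 0..1 amounts; bias: -1..+1 tie-breaker.
    Returns an illustrative gain in dB."""
    if abs(dominance) < threshold:
        # Borderline band: positive bias prefers recovering (treat it
        # as masked), negative bias prefers taming (treat it as
        # dominant), matching the Bias control's described behaviour.
        dominance = -bias * threshold
    if dominance > 0:
        return -tame * dominance * 12.0      # quiet down a dominant band
    return recover * -dominance * 12.0       # lift a masked band
```

In this toy model, a clearly dominant band is attenuated in proportion to Tame, a clearly masked band is lifted in proportion to Recover, and Bias only matters in the grey zone between the two.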
For the testing we chose to evaluate how the plugin acts on guitar-modelling sounds, using Line 6 Helix Native. Guitar amplifier modelling has been around for quite a while and has advanced significantly. Like any technology it has its quirks: put many digital effects in sequence and unnatural artefacts can emerge. The frequency response of its output can also be a bit irregular, depending on how the modelled amplifier is set and which cabinet simulation or IR (Impulse Response) is used. Finally, we know that multiple guitars in a mix can have quite a battle for frequency and amplitude real estate.
The Helix Native we utilised had a preset consisting of all the elements of a typical guitar signal chain: compression, overdrive, power amplifier and cabinet. We’ve chosen to turn off any modulation or reverb effects to give the plugin the chance to treat a raw overdriven or distorted guitar signal.
The first experiment was to go from the Helix Native straight into the Gullfoss plugin on a few selected guitar tracks, for example one for rhythm and another for solo guitar. This allowed us to run two instances of the plugin on two different tracks and hear the result individually and combined in a mix, with each instance receiving context from its own track only, not the whole song.
The results were quite interesting. By adjusting both the Tame and Recover controls we could see and hear some improvements in the overall EQ. This followed the music dynamically, without any noticeable artefacts, which was quite impressive. It is important to note that in both cases the guitar was the only signal delivered to the plugin, so whatever frequency clashes it was trying to resolve were the result of an unbalanced frequency response coming out of the amp modeller (or my sloppy playing), not of instruments fighting for frequency real estate in a mix. The pictures are static, but the results are quite dynamic and change as the song progresses.
Plugin Disabled Guitars Only
Plugin Enabled Guitars Only
We also tried routing all guitars to an aux bus and applying the plugin there instead of on the individual tracks. The results were quite similar, perhaps slightly better, as the plugin now has knowledge of both rhythm and solo guitars at the same time.
Plugin Enabled on Aux Bus Guitars Only
The other alternative we tried was placing Gullfoss at the end of the mix of the same song with the same guitar tracks, while turning off the instances on the individual guitar tracks. That way it receives context from both guitars as well as the drums and bass.
Plugin Enabled on Stereo Out
The results here were even more impressive; with full context of the song this plugin really shines. The overall brightness and clarity improved significantly, and all it took was tweaking two knobs: we left the Brighten and Boost controls alone for most of the testing and didn't use Bias much. Doing this by hand would require mad automation skills and acute auditory perception. Listen to the comparison and you should hear the difference in the clarity of the guitar sound and the overall balance of the mix, even considering that this was just a rough 'pick up your guitar and play' kind of experiment.
There is no doubt that this plugin could save hours of work and make workflows for final mixing and even mastering far simpler. It allows inexperienced mastering engineers or mixing engineers to fix issues they didn’t even know they had. It saves time for professionals by automating something that would take much longer to do manually.
In the tests we ran we noticed that context matters a great deal, so on a busier mix you really see the value of having this plugin in your chain. Since these tests were heavily geared towards guitar sounds, we recommend checking out other reviews as well as the examples on Soundtheory's website, which include outputs from other types of instruments and full mixes.
Breaking down the sound
Soundtheory emerged in 2016 after nearly 14 years of fundamental research. For more than a decade, we have worked on an alternative approach to signal processing inspired by quantum theory and mathematical methods such as non-commutative algebra, differential geometry, and information theory. Sound, theory.
We discovered deep insights into how the human brain processes sound. This research spiraled into the development of new and unique methods for realtime audio processing. We are particularly proud of our highly advanced human auditory perception model which allows us to analyze sound the same way a human would perceive it.
Our technology is entirely different from anything that has come before. So it’s worth mentioning that Gullfoss is not using artificial intelligence (AI), neural networks, Fletcher-Munson curves, traditional DSP methods, or machine learning algorithms. Instead, Gullfoss is the first in a line of products that employ our computational auditory perception technology. Watch out for what the future brings because we have only just started.
What is Gullfoss?
Gullfoss is a famous Icelandic waterfall. It is one of the most beautiful in the world, and with an unforgettable name. The inspiration for using the name of a waterfall for our plugin comes from one of the questions we asked ourselves while developing our theory. Why do waterfalls sound so pleasing? To answer, one could argue that a waterfall generates close to pink noise. However, that leads to a similar question. Why does pink noise sound pleasing? Both waterfalls and pink noise come near to maximizing the amount of information perceived by your brain. They give your brain more of what it wants. Gullfoss the software is all about organizing the information in the signal so that your brain finds the result more pleasing.
Just like a waterfall.
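The pink-noise comparison is easy to demonstrate in code: pink noise has power falling off as 1/f, which means roughly equal energy per octave. One standard way to approximate it (the function name here is ours, and this is just one of several common methods) is to shape white noise in the frequency domain:

```python
import numpy as np

def pink_noise(n, fs=44100, seed=0):
    """Generate n samples of approximately pink (1/f power) noise
    by shaping white noise in the frequency domain."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # 1/sqrt(f) amplitude scaling gives 1/f power; leave the DC bin alone.
    scale = np.ones_like(freqs)
    scale[1:] = 1.0 / np.sqrt(freqs[1:])
    pink = np.fft.irfft(spectrum * scale, n=n)
    return pink / np.max(np.abs(pink))   # normalize to [-1, 1]
```

Play the result next to white noise and the difference is immediate: the harsh high end recedes and the sound becomes much closer to the "waterfall" character the name alludes to.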
Limited time offer
Soundtheory - Gullfoss, available for macOS and Windows
- intelligent equalizer
- realtime analysis
- quick and precise fixes