
What is loudness?

Yannik Brehm · September 2, 2025
A speaker cone

At first glance, the concept of "loudness" may seem simple. You turn a knob, the music gets louder, and the crowd reacts. But behind that intuitive experience lies a complex journey from physical force to human perception. While this knowledge may seem useful only to audio engineers, understanding these concepts is what empowers a DJ to deliver a truly seamless and professional set.

The Physics of Sound: From Air Pressure to Decibels

At its core, sound is a physical event. When a speaker cone pushes forward, it compresses the air molecules in front of it, creating a wave of high pressure. As it pulls back, it creates a wave of low pressure. The resulting continuous vibration travels through the air and into our ears. This change in air pressure is called sound pressure, and it's the raw, physical component of sound. It's measured in a unit called the pascal, and the human ear is incredibly sensitive to it. We can detect the faint whisper of 20 micropascals just as we can feel the force of a 20-pascal scream: a millionfold difference in pressure that our ears have to handle.

Loudness scale in pascals (Pa) and decibels (dB)

The logarithmic decibel scale is much more convenient for handling loudness

Because the range of sound pressure we can hear is so immense, using Pascals to talk about it is impractical. This is where the decibel scale for sound pressure level (dB SPL) comes in. Instead of a linear scale, decibels use a logarithmic one, which is a much better match for how our brains actually perceive changes in volume. The threshold of hearing (20 micropascals) is set as the baseline, or 0 dB SPL. From there, every tenfold increase in sound pressure equals a 20 dB jump. This makes the decibel a far more manageable and intuitive way to measure the intensity of a sound.
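
To make the math concrete, here is a minimal Python sketch of the standard pascals-to-dB-SPL conversion; the function name is our own choice for illustration:

```python
import math

# Reference pressure: the threshold of hearing, 20 micropascals.
P_REF = 20e-6

def pascals_to_db_spl(pressure_pa: float) -> float:
    """Convert sound pressure in pascals to sound pressure level in dB SPL."""
    return 20 * math.log10(pressure_pa / P_REF)

print(pascals_to_db_spl(20e-6))   # threshold of hearing ->   0.0 dB SPL
print(pascals_to_db_spl(200e-6))  # tenfold pressure     ->  20.0 dB SPL
print(pascals_to_db_spl(20.0))    # a 20-pascal scream   -> 120.0 dB SPL
```

Note how the millionfold pressure range from the previous section collapses into a tidy 0 to 120 dB SPL.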

Human Perception: How Our Ears and Brain Create Sound

During all of this, we need to keep in mind that a wave of air pressure doesn't become "sound" until our brain says it is. First, a sound wave hits our eardrum, causing it to vibrate. These vibrations are then amplified by three tiny bones in the middle ear before being transferred to the cochlea. The cochlea is the real translator here; it converts these mechanical vibrations into electrical impulses that are sent to the brain. It's only when the brain interprets these signals that we experience the sensation of sound. This distinction is key: a physicist might define sound as the movement of air, but for us, it's a perceptual experience.

Anatomy of the human ear

Sound travels through various areas in the human ear until it reaches the nervous system to be "perceived"

This brings us to this article's most crucial concept: loudness. Loudness is not the same as a simple sound pressure decibel level. While dB SPL measures physical sound pressure, loudness is the perceived intensity of a sound, and it's heavily influenced by the frequency of the incoming wave. Our ears are most sensitive to frequencies in the mid-range (between 2,000 and 5,000 Hz). This is famously illustrated by the equal-loudness contours (or Fletcher-Munson curves), which show that a low-frequency bass tone needs a much higher decibel level to be perceived as just as loud as a high-frequency cymbal.
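
The exact equal-loudness contours are published as measured curves, but you can get a rough feel for this frequency dependence from the classic A-weighting curve, a standardized approximation of the ear's sensitivity at moderate levels. A minimal sketch using the analog A-weighting formula from IEC 61672:

```python
import math

def a_weighting_db(f: float) -> float:
    """Approximate A-weighting in dB for a frequency f in Hz (IEC 61672)."""
    ra = (12194**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194**2)
    )
    return 20 * math.log10(ra) + 2.0  # offset so the curve is ~0 dB at 1 kHz

for freq in (50, 100, 1000, 3000, 10000):
    print(f"{freq:>5} Hz: {a_weighting_db(freq):+6.1f} dB")
```

A 50 Hz bass tone comes out roughly 30 dB below a 1 kHz tone of equal sound pressure, which is exactly why bass-heavy material needs so much more physical level to feel equally loud.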

Digital Audio and LUFS: Measuring Perceived Loudness

So if dB SPL levels don't tell the whole story, how can we translate this complex perception into a reliable metric, especially in the digital world where we manage our music? For this, the LUFS (Loudness Units relative to Full Scale) metric was introduced. The "Full Scale" part is the key: it clarifies that this is a digital measurement, relative to the maximum possible level in a digital audio file before clipping occurs.
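
To illustrate what "relative to full scale" means: in a digital file, sample values are bounded, and levels are expressed as negative numbers below that ceiling. A minimal sketch, assuming samples normalized to the range -1.0 to 1.0:

```python
import math

def peak_dbfs(samples: list[float]) -> float:
    """Peak level in dBFS for samples normalized to -1.0..1.0."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak)

print(peak_dbfs([0.0, 0.5, -1.0]))  #  0.0 dBFS: touches full scale (clipping point)
print(peak_dbfs([0.1, -0.1]))       # -20.0 dBFS: well below the ceiling
```

The same logic is why LUFS readings are negative: 0 is the digital ceiling, and real-world program material sits somewhere below it.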

LUFS is computed by a sophisticated algorithm that analyzes the digital audio signal itself to predict how loud it will be perceived. We will go into more detail about the technical background of LUFS measurements in a future article, but at a glance, it models key aspects of human hearing, such as the equal-loudness contours and the duration of sounds. It then produces a single number that accurately reflects perceived loudness. This is the metric that streaming platforms like Spotify and YouTube use for their loudness normalization. For example, Spotify aims for an integrated loudness of around -14 LUFS, and it will automatically turn down tracks that are louder to ensure a consistent experience. For DJs, understanding LUFS is the key to preparing a library where every track, regardless of its original mastering, can be mixed smoothly, without jarring jumps in perceived volume.
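
As a practical illustration (this is not WaveAlign's internal implementation), the open-source pyloudnorm library implements the ITU-R BS.1770 measurement that LUFS is based on. Here is a sketch of measuring a track and computing the gain needed to hit a -14 LUFS target; the file name is a placeholder:

```python
import soundfile as sf     # third-party: audio file I/O
import pyloudnorm as pyln  # third-party: ITU-R BS.1770 loudness meter

data, rate = sf.read("track.wav")  # placeholder path to any audio file
meter = pyln.Meter(rate)           # BS.1770 meter (K-weighting + gating)
loudness = meter.integrated_loudness(data)

target = -14.0  # e.g. Spotify's default normalization reference
gain_db = target - loudness
print(f"Integrated loudness: {loudness:.1f} LUFS, gain to target: {gain_db:+.1f} dB")

# Applying the gain turns a louder-than-target track down, just as a
# streaming platform's normalization would.
normalized = data * (10 ** (gain_db / 20))
```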


Yannik Brehm

Yannik is a multifaceted professional whose career merges audio engineering, software development, and music production. Currently working as a Senior Audio System Engineer specializing in automotive audio solutions and embedded development, he leverages a Master's degree in Media Informatics and experience in professional real-time audio software development for companies like Sennheiser. His technical expertise is complemented by practical knowledge and critical listening skills gained through years of producing, mixing and mastering electronic music.

Ready for perfectly balanced audio?

Join the WaveAlign beta today and take control of your sound.
