November 13th – 17th, 9:30 – 16:00

Venue: Notam in Oslo
Price: NOK 5,000 for academic employees in 50% positions or more; free for all others.
Thibaut Carpentier is one of the main developers of Spat, a toolkit for sound spatialization in Max. The workshop will combine practical work, guidance, and lectures.

Day 1: theoretical background on spatial/immersive audio; introduction to the Spat “philosophy”
Day 1 and 2: (hands-on) introduction to the Spat framework; basic panning examples
Day 2 and 3: reverberation, sound radiation; more advanced examples
Day 3: focus on Ambisonics and/or binaural techniques (applications to head-tracking, VR, etc.)
Day 4: spotlight on Panoramix, a new derivative of Spat that focuses on mixing and post-production
Day 4: presentation of recent work on spatial audio at Ircam (research projects and artistic productions)
Day 5: authoring tools for spatial sound (automation, digital audio workstations, algorithmic composition, remote control, integration within OpenMusic, object-based audio, etc.)
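As a flavour of the "basic panning examples" covered on days 1 and 2, the sketch below implements a constant-power stereo pan in Python. This is an illustration of the general principle only, not Spat code; Spat's panning algorithms (VBAP, Ambisonics, binaural, etc.) are far more sophisticated.

```python
import math

def constant_power_pan(sample, azimuth):
    """Pan a mono sample between two loudspeakers.

    azimuth: -1.0 (hard left) .. +1.0 (hard right).
    The sine/cosine (constant-power) law keeps the summed power
    left**2 + right**2 constant, so perceived loudness stays
    roughly steady as the source moves across the stereo field.
    """
    theta = (azimuth + 1.0) * math.pi / 4.0  # map azimuth to 0..pi/2
    left = sample * math.cos(theta)
    right = sample * math.sin(theta)
    return left, right

# A centred source (azimuth 0) sends equal gain (~0.707) to both channels.
l, r = constant_power_pan(1.0, 0.0)
```

In a real-time context this gain pair would be recomputed whenever the source moves and applied per audio block, which is essentially what a panning module does internally.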
The course is fully booked.

Bio: Thibaut Carpentier is an R&D engineer at IRCAM (Institut de Recherche et de Coordination Acoustique/Musique), Paris.
After studying acoustics at the Ecole Centrale and signal processing at the ENST Telecom Paris, he joined the CNRS (French National Centre for Scientific Research) and the Acoustics & Cognition Research Group in 2008.
His research is focused on spatial audio, room acoustics, artificial reverberation, analysis/synthesis of sound fields and compositional tools for spatial sound.
In recent years, he has been responsible for the development of Ircam Spat and has contributed to the conception and implementation of a 350-loudspeaker array for holophonic sound reproduction in Ircam’s concert hall.
Besides his research activities, he has been involved in numerous creative projects as a computer music designer, sound engineer, or scientific consultant for several multimedia artists and ensembles.

About IRCAM Spat:
Spat is a real-time spatial audio processor that allows composers, sound artists, performers, and sound engineers to control the localization of sound sources in 3D auditory spaces. In addition, Spat provides a powerful reverberation engine that can be applied to real and virtual auditory spaces.
The processor receives sounds from instrumental or synthetic sources, adds spatialization effects in real-time, and outputs signals for reproduction on an electroacoustic system (loudspeakers or headphones).
Its modular signal-processing architecture is designed for computational efficiency and configurability. In particular, this allows straightforward adaptation to various multichannel output formats and reproduction setups, over loudspeakers or headphones, while the control interface provides direct access to perceptually relevant parameters for specifying distance and reverberation effects, irrespective of the chosen reproduction format.
Another distinctive feature of Spat is its room-effect control interface based on perceptual criteria, which lets the user specify the character of a room intuitively, without resorting to acoustic or architectural vocabulary.
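To make the idea of an artificial reverberation engine concrete, here is a minimal sketch of a classic Schroeder reverberator (parallel comb filters feeding allpass filters in series) in Python. This is a textbook structure chosen for illustration; it is not Spat's reverberation algorithm, and the delay and gain values below are arbitrary assumptions.

```python
def comb(signal, delay, feedback):
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay]."""
    out = [0.0] * len(signal)
    for n, x in enumerate(signal):
        out[n] = x + (feedback * out[n - delay] if n >= delay else 0.0)
    return out

def allpass(signal, delay, gain):
    """Schroeder allpass: y[n] = -gain*x[n] + x[n-delay] + gain*y[n-delay]."""
    out = [0.0] * len(signal)
    for n, x in enumerate(signal):
        delayed_x = signal[n - delay] if n >= delay else 0.0
        delayed_y = out[n - delay] if n >= delay else 0.0
        out[n] = -gain * x + delayed_x + gain * delayed_y
    return out

def schroeder_reverb(dry, mix=0.3):
    """Four parallel combs (dense echoes), then two allpasses (diffusion)."""
    combed = [comb(dry, d, g) for d, g in
              [(1116, 0.84), (1188, 0.83), (1277, 0.82), (1356, 0.81)]]
    wet = [sum(c[n] for c in combed) / 4.0 for n in range(len(dry))]
    for d, g in [(225, 0.7), (556, 0.7)]:
        wet = allpass(wet, d, g)
    # Blend the untouched (dry) and reverberated (wet) signals.
    return [d * (1 - mix) + w * mix for d, w in zip(dry, wet)]

# Feeding an impulse through the reverb yields a decaying echo tail.
impulse = [1.0] + [0.0] * 2000
tail = schroeder_reverb(impulse)
```

Spat exposes this kind of processing through perceptual controls (e.g. warmth, presence, envelopment) rather than the raw filter coefficients shown here.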
