CATT TUCT overview

Latest update: v1.1a:2 with CATT-Acoustic v9.0c:2, Apr 17, 2013.
TUCT was originally released with CATT-Acoustic v8.0j on March 29, 2010.

Overview of the new CATT prediction/auralization software TUCT:

TUCT stands for The Universal Cone Tracer. It predicts echograms and room impulse responses and offers several internal algorithms, ranging from basic to advanced, chosen according to the room case. In particular it offers very good ways to predict and auralize open cases (outdoor arenas etc.), which traditionally have often necessitated building a faked closed model for most algorithms to work well, especially for auralization, as well as cases where flutter echoes may be important. Prediction and auralization in big indoor venues with high absorption will also benefit. The core algorithms are based on various levels and combinations of actual and random diffuse ray split-up and are general, so that as the algorithms are further refined and computer speed increases, additional levels of actual split-up can be incorporated.
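
The diffuse ray split-up mentioned above can be illustrated with a minimal sketch (hypothetical code, not CATT's actual implementation): at each surface hit, the scattering coefficient decides whether a ray continues specularly or is scattered according to Lambert's cosine law.

```python
import math
import random

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _normalize(v):
    m = math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2])
    return (v[0] / m, v[1] / m, v[2] / m)

def reflect(d, n, scattering, rng=random.random):
    """Outgoing direction for a ray `d` hitting a surface with unit
    normal `n`, given a scattering coefficient in 0..1.

    With probability `scattering` the ray is scattered diffusely
    (cosine-weighted about the normal, Lambert's law); otherwise it
    is reflected specularly: d' = d - 2 (d . n) n.
    """
    if rng() < scattering:
        # Cosine-weighted direction in the hemisphere around n.
        u, v = rng(), rng()
        r, phi = math.sqrt(u), 2.0 * math.pi * v
        z = math.sqrt(1.0 - u)  # cos(theta)
        # Build an orthonormal tangent frame around the normal.
        a = (1.0, 0.0, 0.0) if abs(n[0]) < 0.9 else (0.0, 1.0, 0.0)
        t1 = _normalize(_cross(n, a))
        t2 = _cross(n, t1)
        return tuple(r * math.cos(phi) * t1[i] +
                     r * math.sin(phi) * t2[i] + z * n[i]
                     for i in range(3))
    dot = d[0] * n[0] + d[1] * n[1] + d[2] * n[2]
    return (d[0] - 2.0 * dot * n[0],
            d[1] - 2.0 * dot * n[1],
            d[2] - 2.0 * dot * n[2])
```

With scattering 0 the call is purely specular; with scattering 1 it is purely diffuse, which is the basic mechanism behind "actual and random diffuse ray split-up".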

TUCT
relies on the geometry modeling, view/check and library handling (absorption, source directivity, HRTFs, headphones) of the CATT-Acoustic main program (CATT-A), which exports a file (.CAG) containing the necessary data and runs TUCT. Everything previously learned regarding geometry modeling in CATT-Acoustic, as well as old models, can thus be used directly, but from v9 all prediction and auralization is instead performed by TUCT in a simpler, more general and more flexible way.

TUCT
is a near-total rewrite from scratch but also includes some parts of CATT-Acoustic v8, such as Pixel rendering, an Image source model and Time trace, adapted and extended to work with TUCT as separate tools in a more flexible and integrated way. Most parts of the CATT-Acoustic v8 post-processing have no direct correspondence in TUCT; they are simply not needed. The few remaining utilities, still useful but not very often used, are kept in the stripped-down CATT-A v9.

Major differences between TUCT and the previous CATT-Acoustic v8 prediction/auralization:
  • prediction algorithms are more general and do not rely on reflection-growth extrapolation as the RTC did
  • auralization is based on the full reverb tail instead of recreating it in post-processing
  • no separate post-processing stage for auralization required, impulse responses are available for evaluation and convolution/listening directly after prediction
  • displays measures and graphs for both an energy echogram and a pressure impulse response, giving an indirect indication of the prediction reliability at low frequencies
  • the pressure impulse response made it possible to include a comprehensive treatment of early diffraction using a secondary edge-source method based on a discrete Huygens interpretation of Biot-Tolstoy (from v1.1a). This method has few limitations in principle when used in room acoustics with finite edges, especially as compared with infinite-screen formulas. The documentation includes a 20+ page whitepaper on how diffraction has been implemented and why a screen formula has not been used.
  • no separate multiple source addition or auralization required, multiple source impulse responses are available for evaluation and convolution/listening directly after prediction
  • no separate convolution utility necessary, just click Play/Convolve for immediate listening, even for multiple sources like in a PA system
  • for multiple-source auralization (with different sounds), MultiVolver VST and MultiVolver WCP (offline version) can be used; TUCT saves a settings file for easy integration
  • no separate relative calibration required to auralize positions within a room with relative levels preserved
  • can run in multiple instances (examples: one instance can perform an audience area mapping while another performs impulse response prediction for the same model, or one instance predicts one room and a second instance another room while a third is used to view old results)
  • uses multiple processor cores for all major processing functions, number of threads automatically selected or specified
  • many types of calculation results can be viewed in parallel (it can e.g. be useful to view the results of an early-part Image source model together with the results of a full calculation to identify the main early reflections)
  • direct or reflected sound can be mapped on all walls and/or audience surfaces at an interactively selected resolution via the new Surface rendering (similar to Pixel rendering but at a variable absolute spatial resolution allowing free model rotation and rescaling)
  • after a calculation, the effects on STI of changing background noise, overall level/EQ and STI type can be studied interactively, including the effect on map statistics; a separately calculated noise map can be used as background noise.
  • very few result items have to be decided on before prediction, old results can be recalled, displayed and analyzed again in new ways (in most cases also if new measures or analysis/display features have been added after the original calculation)
  • simple sequence processing for running several calculations in sequence
  • no use of PLT-files (in v9 a new PL9-format is used by the main program for geometry/view check and directivity graphics; TUCT can export to PL9 for presentations or side-by-side comparisons).
  • several selectable mapping color palettes.
  • flexible structure for adding future functionality and measures
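
As an aside on the interactive STI/background-noise feature listed above, the effect of steady noise on a single transmission-index value can be sketched as follows. This is a simplified, single-value illustration of the IEC 60268-16 procedure, not TUCT's implementation; the full STI combines 14 modulation frequencies in each of 7 octave bands and includes masking terms.

```python
import math

def ti_with_noise(m, signal_db, noise_db):
    """Transmission index for one modulation index `m` (0..1), reduced
    by steady background noise (simplified from IEC 60268-16).

    Noise lowers the effective modulation:
        m' = m / (1 + 10**(-(Ls - Ln) / 10))
    which is converted to an apparent SNR, clipped to +/-15 dB and
    mapped linearly onto a 0..1 transmission index.
    """
    snr = signal_db - noise_db
    m_eff = m / (1.0 + 10.0 ** (-snr / 10.0))
    apparent = 10.0 * math.log10(m_eff / (1.0 - m_eff))
    apparent = max(-15.0, min(15.0, apparent))
    return (apparent + 15.0) / 30.0
```

Because the noise term enters only in this post-processing step, the effect of a changed noise level can be re-evaluated without redoing the room prediction, which is what makes the interactive adjustment cheap.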

Licensing:
  • for v8 users, v9 is treated as an update and, like v8, has separate Prediction (with demo auralization) and Full auralization versions


A basic TUCT screen showing main windows (further open windows as icons below)

 



The Main:Actions window

is an interface to the main prediction options; while processing, it indicates processing steps, estimated processing times
and % CPU used:




Predict SxR predicts echograms and impulse responses for each selected Source x Receiver (SxR) combination and utilizes multiple CPU cores. Three main prediction algorithms are offered, ranging from basic to advanced; the choice depends on the room type and on the level of detail and auralization quality desired. Higher-order B-format (2nd or 3rd) can be selected for external decoding, and 5-ch mic setups can be used for ITU 5-ch surround. Diffraction can optionally be included for early sound. Special test options allow any combination of direct sound, 1st-order specular reflections and diffraction. One of two internal methods is automatically selected to create the impulse responses, depending on the case:
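
A common textbook way to obtain a pressure-like impulse response from an energy echogram (not necessarily either of TUCT's two internal methods) is to shape a random-sign noise carrier by the square root of the energy envelope:

```python
import math
import random

def energy_to_pressure(energy, rng=None):
    """Shape a random-sign carrier by the square root of an energy
    echogram, preserving energy sample by sample: h[n]**2 == E[n].
    (A generic textbook construction; TUCT's internal methods differ
    in the details.)
    """
    rng = rng or random.Random(0)  # seeded for reproducibility
    return [math.sqrt(e) * rng.choice((-1.0, 1.0)) for e in energy]
```

The resulting pressure sequence has the same energy decay as the echogram but a realistic oscillating waveform, which is what convolution-based listening requires.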

   

Map measures predicts measures over a defined audience-area receiver map and utilizes multiple CPU cores.
Optionally, STI and U50 background noise can be calculated using an SPL noise map from actual noise sources.
Special test options allow any combination of direct sound, 1st-order specular reflections and diffraction:
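
The mapped measures are defined directly from the echogram; for example D50 (Deutlichkeit) and C80 (Clarity) can be computed from a list of reflection arrivals like this (illustrative sketch; arrival times relative to the direct sound):

```python
import math

def d50(reflections):
    """D50 (Deutlichkeit): fraction of the total energy arriving
    within 50 ms of the direct sound.

    `reflections` is an iterable of (arrival_time_s, energy) pairs.
    """
    early = sum(e for t, e in reflections if t <= 0.050)
    return early / sum(e for _, e in reflections)

def c80(reflections):
    """Clarity C80 in dB: 10*log10(energy up to 80 ms / energy after)."""
    early = sum(e for t, e in reflections if t <= 0.080)
    late = sum(e for t, e in reflections if t > 0.080)
    return 10.0 * math.log10(early / late)
```

Because such measures are simple functionals of the stored echograms, they can be recomputed over the whole receiver map after prediction, which is why few result items have to be decided on beforehand.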

 
 

Map direct sound predicts direct sound SPL over a defined audience-area receiver map, creates
source-delay and closest-source maps, and utilizes multiple CPU cores:
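
Direct-sound SPL and source-delay values like those mapped here follow from the free-field inverse-square law; a minimal sketch for an omnidirectional point source (directivity and air absorption omitted):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, air at roughly 20 degrees C

def direct_sound(source, receiver, lw_db):
    """Free-field direct sound from an omnidirectional point source
    of sound power level `lw_db`.

    Returns (spl_db, delay_s):
        Lp = Lw - 10*log10(4*pi*r**2)   (about Lw - 20*log10(r) - 11)
        delay = r / c
    """
    r = math.dist(source, receiver)
    spl = lw_db - 10.0 * math.log10(4.0 * math.pi * r * r)
    return spl, r / SPEED_OF_SOUND
```

Evaluating this for every map cell and every source gives the SPL, source-delay and closest-source maps; directional sources would additionally weight Lw by the directivity toward each receiver.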



For all the above, the latest prediction settings are stored on a per-project basis, so when a new version of the same model is
made the previously used settings are preselected.

Predict SxR and Map measures can be run via a Sequence (batch) processing:




The Main:Show 3D window

Displays the room model with many options such as face coloring and switching on/off 3D elements and save/load of
named global or project-specific 3D-views:



Detection and display of ray-leaks due to model errors (gaps or warped planes):



Displays results from direct sound mapping, with mouse-over value readout and an auto-scale option:

Displays results from audience-area mapping of measures (SPL, STI, D50, etc. plus SPL(t)), with mouse-over value readout
and auto-scale option:

The Main:Show 2D window

Displays the room model in plan, side and end views with mouse zooming/panning and an optional 2D grid. The Plan/Section Cut z checkbox enables a movable z (height) cut-plane, making it easier to see source, receiver and audience details in a room with many ceiling details (see the heavy horizontal red line in the figures below):







Zoomed and without z-cut:


Displays predicted measures for a selected Source x Receiver (SxR) or Source sum x R (*xR) combination:
 
 


Displays echogram-related results for a selected Source x Receiver (SxR) or Source sum x R (*xR) combination;
optionally only the energy echogram E can be displayed (red curves below):

Displays impulse response (IR) related results for a selected Source x Receiver (SxR) or Source sum x R (*xR) combination:








Rotating mic: the rotating mic direction is indicated in the 3D display; with the Sector mic, the mic sector outline is
indicated on the walls, making it easy to identify reflections:

Main:Impulse Response Detail

Displays detailed IR-related results for a selected Source x Receiver (SxR) or Source sum x R (*xR) combination:




 


The following displays are independent of the Main: windows and can be viewed in parallel.

Pixel rendering

Displays direct or reflected sound at walls and/or audience surfaces at every visible pixel:

 
Surface rendering

Displays direct or reflected sound at all walls and/or audience surfaces at a selected spatial resolution; since the resolution is not pixel-based, the model can be rotated after calculation. Mouse-over value readout and auto-scale option:




Image source model (for specular reflection detail):

Time trace as a function of time:
 
 
 

The following are tools independent of the room model loaded:


WAV-file player



Walkthrough convolver

The Walkthrough convolver uses SIM-file impulse responses that contain all necessary information.
For normal auralization, SIM files are no longer used, but they can be exported:
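
The convolution underlying Play/Convolve and the Walkthrough convolver is ordinary FIR filtering of anechoic source material with a predicted impulse response; a direct-form sketch (practical convolvers use FFT-based partitioned convolution for speed):

```python
def convolve(h, x):
    """Direct-form convolution of impulse response h with signal x.
    Output length is len(h) + len(x) - 1."""
    y = [0.0] * (len(h) + len(x) - 1)
    for i, hi in enumerate(h):
        for j, xj in enumerate(x):
            y[i + j] += hi * xj
    return y
```

For multiple sources with different sounds, each source's signal is convolved with its own impulse response and the results are summed per output channel.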



Overall features










Copyright © CATT 1998-2013
All products mentioned are trademarks of their respective owners.
Document last updated: