if you could pick a standard format for a purpose what would it be and why?
e.g. flac for lossless audio because…
(yes you can add new categories)
summary:
- photos .jxl
- open domain image data .exr
- videos .av1
- lossless audio .flac
- lossy audio .opus
- subtitles srt/ass
- fonts .otf
- container mkv (doesn't contain .jxl)
- plain text utf-8 (many also say markup but disagree on the implementation)
- documents .odt
- archive files (this one is causing a bloodbath so i picked randomly) .tar.zst
- configuration files toml
- typesetting typst
- interchange format .ora
- models .gltf / .glb
- daw session files .dawproject
- otdr measurement results .xml
Why? What reason could there possibly be to store frequencies as high as 96 kHz? The limit of human hearing is 20 kHz, which is why 44.1 and 48 kHz sample rates are used.
That is not what 96 kHz means. It doesn't just mean it can store frequencies up to that point; it means there are 96,000 samples every second, so you capture more detail in the waveform.
Having said that, I'll give anyone £1m if they can tell the difference between 48 kHz and 96 kHz. 96 kHz and 192 kHz should absolutely be used for capture but are absolutely not needed for playback.
This is a misconception about how waves are reconstructed. Each sample is a single point in time, but the sampling theorem says that if you have a bunch of discrete samples, equally spaced in time, there is one and only one continuous solution that hits those samples exactly, provided the original signal did not contain any frequencies above Nyquist (half the sampling rate). Sampling any higher than that gives you no further useful information. There is still only one solution.
tldr: the reconstructed signal is a continuous analog signal, not a stair-step looking thing
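If you'd rather see that with numbers than take it on faith, here's a minimal numpy sketch of the sinc (Whittaker-Shannon) reconstruction the theorem describes; the sample rate, test frequencies and window length are just illustrative picks, not anything from the comments above:

```python
import numpy as np

fs = 48_000                                   # sample rate (Hz)
duration = 0.01                               # 10 ms window, arbitrary
t_samples = np.arange(0, duration, 1 / fs)    # the discrete sample times
t_fine = np.linspace(0, duration, 5_000)      # a dense "continuous" time grid

def signal(t):
    # band-limited test signal: both components sit well below Nyquist (24 kHz)
    return np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 17_000 * t)

x = signal(t_samples)                         # what a 48 kHz recording stores

# Whittaker-Shannon reconstruction: one shifted sinc per sample, summed up.
# This is the "one and only one continuous solution" described above.
recon = np.sinc((t_fine[:, None] - t_samples[None, :]) * fs) @ x

# Compare against the true signal away from the window edges, where the
# finite sum of sincs closely approximates the infinite one.
interior = (t_fine > 0.002) & (t_fine < 0.008)
err = np.max(np.abs(recon[interior] - signal(t_fine[interior])))
print(f"max reconstruction error: {err:.1e}")
# The small residual comes from truncating the sinc sum to a 10 ms window,
# not from the sampling itself: the curve through the samples is smooth,
# not a stair step.
```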
On top of that, 20 kHz is very much a theoretical upper limit.
Most people, be it due to aging (which affects all of us) or due to behaviour (some way more than others), can't hear that far up anyway. Most people would be surprised how high up even e.g. 17 kHz is. It sounds a lot closer to very high-pitched “hissing” or “shimmer” than to anything that's considered “tonal”.
So yeah, saying “oh no, let me have my precious 30 kHz” really is questionable.
At least when it comes to listening to finished music files. The validity of higher sampling rates during various stages of the audio production process is a different, far less questionable topic,
because if you use a 40 kHz sample rate to “draw” a 10 kHz wave, the wave will have only four “pixels” per cycle, so all the high frequencies have very low fidelity.
As long as the audio frequency is less than half the sample rate, there is exactly one band-limited wave that can pass through all four of those points, so the signal is perfectly reconstructed. This video provides a great visualization of it: https://www.youtube.com/watch?v=cIQ9IXSUzuM
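For the same reason, here's a rough numeric version of the “four pixels” case, using the same sinc-reconstruction idea as the sketch above (the phase and window length are arbitrary choices): a 10 kHz tone sampled at 40 kHz, i.e. four samples per cycle, comes back as a smooth sine.

```python
import numpy as np

fs, f0 = 40_000, 10_000                        # sample rate and tone frequency
t_samples = np.arange(800) / fs                # 20 ms worth of samples
x = np.sin(2 * np.pi * f0 * t_samples + 0.3)   # arbitrary phase, so the samples
                                               # don't land only on zeros/peaks

# Same sinc reconstruction as before, evaluated away from the window edges.
t_fine = np.linspace(0.005, 0.015, 2_000)
recon = np.sinc((t_fine[:, None] - t_samples[None, :]) * fs) @ x

err = np.max(np.abs(recon - np.sin(2 * np.pi * f0 * t_fine + 0.3)))
print(f"max error: {err:.1e}")
# Small, and it shrinks further as more samples are included in the sum:
# four samples per cycle lose nothing for a tone below Nyquist (20 kHz).
```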