Strictly speaking, as soon as an analog signal is quantized into digital samples there is loss, both in the amplitude domain (a value of infinite precision is turned into a value that must fit in a specific number of bits, hence of finite precision) and in the time domain (digitization samples the analog input at specific time intervals, whilst the analog input itself is a continuous wave).
That said, whether that is noticeable if the sampling rate and bits per sample are high enough is a whole different thing.
Ultra-high-frequency sounds might be missing or mangled at a 44.1 kHz sampling rate (a pretty standard one, used in CDs), but that should only be noticeable to people who can hear sounds above 22.05 kHz, who are rare since people usually only hear sounds up to around 20 kHz, and the older the person, the worse it gets. Maybe a sharp ear can spot the quantization error at 24 bits, even though it's minuscule (1/2^24 of the sampling range, assuming a linear distribution of values), but it's quite unlikely.
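To put some numbers on how minuscule that is, here's a quick back-of-the-envelope sketch in plain Python (the 6.02·N + 1.76 dB figure is the textbook signal-to-quantization-noise ratio for a full-scale sine wave, nothing fancier):

```python
# Rough size of the quantization error at common bit depths.
# 6.02*N + 1.76 dB is the textbook SNR for a full-scale sine wave
# quantized uniformly with N bits.
for bits in (16, 24):
    step = 1 / 2**bits           # quantization step as a fraction of full scale
    snr_db = 6.02 * bits + 1.76  # theoretical signal-to-quantization-noise ratio
    print(f"{bits}-bit: step = 1/{2**bits} of full scale, SNR ~ {snr_db:.0f} dB")

# 16-bit: step = 1/65536 of full scale, SNR ~ 98 dB
# 24-bit: step = 1/16777216 of full scale, SNR ~ 146 dB
```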
That said, some kinds of trickery and processing used to make “more sound” (in the sense of how most people perceive sound quality, rather than strictly measured in Physics terms) fit into fewer bits or fewer samples per second in a way that most people don’t notice might still be noticeable to some people.
Remember most of what we use now is anchored in work done way back when every byte counted, so a lot of the choices were dictated by things like “fit an LP as unencoded audio files - quite literally plain PCM, same as in Wav files - on the available data space of a CD”, so it’s not going to be ultra-high quality fit for the people at the upper end of human sound perception.
All this to say that FLAC-encoded audio files do have losses versus analog, not because of the encoding itself but because Analog to Digital conversion is by its own nature a process where precision is lost, even if done without any extra audio or data handling that might distort the audio samples even further. Plus, generally, the whole thing is done at sampling rates and data precisions fit for the average human rather than for people at the upper end of the sound perception range.
When we talk about lossless in the audio encoding world, we aren’t comparing directly with the analog wave, as there will always be loss when storing an analog signal in a digital machine. Lossless formats are compared to pure PCM, which is the uncompressed way of representing a waveform in bits.
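A minimal sketch of what that comparison means in practice, assuming the third-party soundfile and numpy packages and a hypothetical 16-bit “master.wav” lying around:

```python
# Sketch: "lossless" means the PCM samples survive an encode/decode
# round trip bit for bit. Assumes the third-party `soundfile` package
# and a hypothetical 16-bit "master.wav" in the working directory.
import numpy as np
import soundfile as sf

pcm, rate = sf.read("master.wav", dtype="int16")      # original PCM samples
sf.write("copy.flac", pcm, rate, subtype="PCM_16")    # encode to FLAC
decoded, _ = sf.read("copy.flac", dtype="int16")      # decode it back

print("bit-identical:", np.array_equal(pcm, decoded)) # True for a lossless codec
```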
With audio, every step you take to transform it, capture it, move it or store it, even while working with the analog waveform, degrades it. Even by picking it up with a microphone you’re already degrading the waveform. However, generally, the official release CD or WebDL is considered the original, lossless master. Everything that manages to keep that exact waveform is lossless (FLAC, AIFF, WAV, ALAC…), and everything that distorts it further is considered lossy (MP3, AAC, OPUS…).
Additionally, a “bad transcode” (a transcode that involves lossy formats somewhere that isn’t the last step) is also considered lossy, for obvious reasons. Transcoding FLAC to MP3 to WAV just stores the exact waveform the MP3 step produced, as that is the lowest common denominator, even though the audio ends up stored as WAV.
Transcoding between lossy formats also loses more data, even if the final lossy format can store more bits or is more accurate than the original. This is one of the main problems with lossy codecs. MP3 192kbps to MP3 320kbps will lose information, just like MP3 to AAC. That’s why, normally, we use a lossless file and transcode it to every lossy format (FLAC to MP3, then FLAC to AAC…). This way you’re not losing more than what the lossy format already loses.
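Here's a rough sketch of that rule using the ffmpeg command line (assumed to be installed; all the filenames are hypothetical):

```python
# Sketch of the "always transcode from the lossless master" rule,
# using the ffmpeg CLI (assumed installed); filenames are hypothetical.
import subprocess

def transcode(src, dst, *extra):
    # -y overwrites the output file if it already exists
    subprocess.run(["ffmpeg", "-y", "-i", src, *extra, dst], check=True)

# Good: every lossy file comes straight from the lossless master.
transcode("master.flac", "release_320.mp3", "-b:a", "320k")
transcode("master.flac", "release.m4a", "-c:a", "aac", "-b:a", "256k")

# Bad: chaining lossy-to-lossy stacks a second generation of loss on top
# of the first, even though 320 kbps "can hold" more than 192 kbps.
transcode("release_192.mp3", "worse_320.mp3", "-b:a", "320k")
```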
My point being that, contrary to the misunderstanding (or maybe just mis-explanation) of many here, even a digital audio format which is technically named “lossless” still has losses compared to the analog original, and there is no way around it (you can reduce the losses with a higher sampling rate and more bits per sample, but never eliminate them, because the conversion to digital is a quantization of an infinite-precision input).
“Losslessness” in a digital audio stream is about the integrity of the digital data itself, not about the digital audio stream being a perfect reproduction of the original soundwaves. With my mobile phone I can produce at home a 16-bit PCM @ 44.1 kHz (same quality as a CD) recording of the ambient sounds, and if I store it as an uncompressed raw PCM file (or a Wav file, which is the same data plus some headers for ease of use) it’s technically deemed “lossless”, whilst being a shit reproduction of the ambient sounds at my place, because the capture process distorted the signal (shitty small microphone) and lost information (the quantization by the ADC in the mobile phone, even if it’s a good one, which is doubtful).
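If you want to see how thin the line between raw PCM and a Wav file is, here's a minimal sketch using only Python's standard library (the second of silence just stands in for whatever my phone's ADC would actually capture):

```python
# Sketch: a WAV file is just raw PCM samples plus a small header.
# Standard library only; the second of silence is a stand-in for
# whatever the phone's ADC actually captured.
import wave

RATE = 44100                      # samples per second (CD rate)
pcm = b"\x00\x00" * RATE          # 1 s of 16-bit mono silence, little-endian

# Raw PCM: nothing but the samples. A player has to be told the
# rate / bit depth / channel count out of band.
with open("ambient.raw", "wb") as f:
    f.write(pcm)

# WAV: the same bytes, plus a header describing rate, width and channels.
with wave.open("ambient.wav", "wb") as w:
    w.setnchannels(1)             # mono
    w.setsampwidth(2)             # 16-bit = 2 bytes per sample
    w.setframerate(RATE)
    w.writeframes(pcm)
```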
So maybe, just maybe, some “audiophiles” do notice the difference. I don’t really know for sure, but I certainly won’t dismiss their point about the imperfect results of the end-to-end process with the argument that, because after digitization the digital audio data has been kept stored in a lossless format like FLAC or even raw PCM, the whole thing is lossless.
One of my backgrounds is Digital Systems in Electronics Engineering, which means I also got to learn (way back in the days of CDs) how the whole process works end to end and why. So most of the comments here claiming that the full end-to-end audio capture and reproduction process (which is what a non-techie “audiophile” would be commenting about) is not lossy because the digital audio data handling is “lossless” just sound to me like the Dunning-Kruger effect in action.
People here are being confidently incorrect about the confident incorrectness of some guy on the Internet, which is pretty ironic.
PS: Note that with high enough sampling rates and bits per sample you can make it so precise that the quantization error is smaller than the actual noise in the original analog input, which is de facto equivalent to no losses in the amplitude domain, and push the sampling so far into the high frequencies that no human could possibly hear what’s lost in the time domain. If the resulting data is stored in a lossless format you could claim that the end-to-end process is lossless (well, ish: the capture of the audio into an analog signal has its own distortions and errors, as does the reproduction at the other end), but that’s something quite different from claiming that merely because the audio data is stored in a “lossless” format it yields a signal as good as the original.
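For the curious, here's a rough sketch of that comparison (q/√12 is the standard RMS of uniform quantization noise; the -60 dBFS analog noise floor is just an assumed example figure, not a measurement):

```python
# Sketch of the "quantization error below the analog noise floor" argument.
# q/sqrt(12) is the standard RMS of uniformly distributed quantization noise;
# the -60 dBFS analog noise floor is an assumed example value.
import math

def quant_noise_dbfs(bits):
    q = 1 / 2**bits              # step size, with full scale = 1.0
    rms = q / math.sqrt(12)      # RMS of the uniformly distributed error
    return 20 * math.log10(rms)

analog_noise_floor_dbfs = -60    # assumed noise already present in the analog input

for bits in (16, 24):
    qn = quant_noise_dbfs(bits)
    buried = qn < analog_noise_floor_dbfs
    print(f"{bits}-bit quantization noise: {qn:.1f} dBFS "
          f"({'below' if buried else 'above'} the analog noise floor)")
```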
What I meant is: yeah, you are right about that, but no, lossless formats aren’t called lossless because they don’t lose anything compared to the original; they’re called lossless because, after compressing and decompressing, you get the exact same file you initially compressed.
Another commenter on this post explained it really well.