What was your last RTFM adventure? Tinker this, read that, make something smoother! Or explodier.
As for me, I wanted to see how many videos I could run at once. (Answer: 60 frames per second or 60 frames per second?)
With my sights on GPUizing some ethically sourced motion pictures, I RTFW, graphed, and slapped on environment variables and flags like Lego bricks. I got the Intel VAAPI thingamabob to jaunt along (and found that it butterized my mpv videos):
$ pacman -S blahblahblahblahblahtfm
$ mpv --show-profile=fast
Profile fast:
scale=bilinear
dscale=bilinear
dither=no
correct-downscaling=no
linear-downscaling=no
sigmoid-upscaling=no
hdr-compute-peak=no
allow-delayed-peak-detect=yes
$ mpv --hwdec=auto --profile=fast graphwar-god-4KEDIT.mp4
# fucking silk
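(For anyone who wants to reproduce the butter, here's a rough sketch of the sanity check and config I'd land on; hedged: vainfo comes from the libva-utils package, and hwdec=auto-safe assumes a reasonably recent mpv.)
$ vainfo | grep -i h264        # vainfo lives in libva-utils; should list VAProfileH264* decode entrypoints if the Intel driver is working
$ cat ~/.config/mpv/mpv.conf   # persist the flags so every launch gets hwdec + the fast profile
hwdec=auto-safe
profile=fast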
But there was no pleasure without pain: Mr. Maxwell F. N. 940MX (the N stands for Nvidia) played hooky. So I employed the longest envvars ever
$ NVD_LOG=1 VDPAU_TRACE=2 VDPAU_NVIDIA_DEBUG=3 NVD_BACKEND=direct NVD_GPU=nvidia LIBVA_DRIVER_NAME=nvidia VDPAU_DRIVER=nvidia prime-run vdpauinfo
GPU at BusId 0x1 doesn't have a supported video decoder
Error creating VDPAU device: 1
# stfu
to try translating Nvidia VDPAU to VAAPI – of course, at this point I realized I had RTFM'd backwards and should've just used VDPAU directly. So I did.
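(For the record, the "just VDPAU" attempt looked roughly like this; a sketch, assuming prime-run from nvidia-prime and mpv's vdpau hwdec:)
$ VDPAU_DRIVER=nvidia prime-run mpv --hwdec=vdpau graphwar-god-4KEDIT.mp4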
Juice was still not acquired.
Finally, after some voracious DuckDuckGoing (quacking?), I was blessed with the freeing knowledge that even though post-Kepler cards are supposed to support H.264 decoding, Nvidia is full of lies…
______
< fudj >
------
\ ‘^----^‘
\ (◕(‘人‘)◕)
( 8 ) ô
( 8 )_______( )
( 8 8 )
(_________________)
|| ||
(|| (||
And then, right before posting this, a gut feeling: I can't read.
$ lspci | grep -i nvidia
... NVIDIA Corporation GM108M [GeForce 940MX] (rev a2)
# ArchWiki says that GM108 isn't supported.
# Facepalm
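(The pragmatic ending, sketched out: with no video engine on the GM108, the iGPU keeps the decoding job. This assumes the iHD intel-media-driver is installed; swap in i965 if that's what your setup uses.)
$ LIBVA_DRIVER_NAME=iHD mpv --hwdec=vaapi --profile=fast graphwar-god-4KEDIT.mp4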
SO. What was your last RTFM adventure?
I was trying to write a custom Strategy for an ObjectMapper in Java. Foolishly, I decided to ask ChatGPT about it and got instructions suggesting an implementation that was the inverse of how Strategies actually work. I was stuck for an afternoon.
Then in the evening I read the docs and put it together in half an hour from scratch. Lesson learned about the stochastic parrots.
Hah, stochastic parrots.
Makes me wonder. For every laziness I’ve indulged with the vector guessers, I’ve seen an exact counterweight.
| matrix scrombulator | webpage (2007-2014) |
| --- | --- |
| Here’s random code. Pray it works. | Free ancient code at man 3 getifaddrs. |
| How does this API work? (when the API has below 10 million sample lines of code) | Incredibly concise documentation worth spending 2 minutes on, or HTML text without margin lines worth spending 20 minutes on |
| Maybe this is what’s causing your bug. Investigate a, b, and c. Conclusion sentence. | Footnote in ArchWiki / archetypal 2009 StackOverflow duplicate |
| Here’s the main idea of X… you need to take into account a combination of facets to ensure safety. | Angry blog post about X that’s oddly technical (now you see both sides) |
One you can invoke more often (throw ChatGPT configs against the wall until it doesn’t error); the other you can invoke more deeply. So I can’t help but wonder – when we cancel out all the terms – whether the timesaving sum is positive or negative. ¯\_(ツ)_/¯