Alphane Moon
That there is no perfect defense. There is no protection. Being alive means being exposed; it’s the nature of life to be hazardous—it’s the stuff of living.
To be fair, I have zero experience with real-time upscaling. I only do archival work, often with low source quality: DVD, VHS rips, low-bitrate 720p videos, and older low-bitrate vertical smartphone videos.
Getting good quality upscaled video can be a big pain, and even on a beefy computer (5800X, 3080, 64 GB RAM) it is nowhere near real-time; more difficult source material can require multiple trial runs (with different models and/or configs).
That being said, 1080p to 4K does tend to be relatively easy (it helps that 1080p source material is usually of high quality). However, I rarely bother with 1080p to 4K, as I find the results somewhat unimpressive and not worth the effort (compared to, say, a full film-to-4K transfer as on a UHD Blu-ray).
The article doesn’t really go into any detail; I suspect the result is more cosmetic in nature. This is in contrast to some results I’ve had that literally look like magic, or like something out of the movies (the “enhance” scene in the original Blade Runner).
If you can run “lite real-time upscaling” on the NPU, it is a decent feature and use case (even if I wouldn’t use it).
For video games? I don’t think the NPUs can run DLSS or FSR. Not to mention the iGPU is going to struggle.
Video content upscaling is out of the question. My 3080 (240 tensor TOPS and 30 FP32 TFLOPS) takes about 15-25 minutes to upscale ~5-7 minutes of SD content to HD. The CPU can also get hammered since you’re also encoding the output file.
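To put rough numbers on that gap, here is a back-of-the-envelope sketch; the ~30 fps source frame rate and the midpoints of my ranges above are assumptions:

```python
# Rough estimate of how far from real-time my 3080 upscaling runs are.
# The ~30 fps source and the midpoints of the ranges above are assumptions.

source_minutes = 6          # ~5-7 min of SD content
processing_minutes = 20     # ~15-25 min wall-clock on the 3080
source_fps = 30             # typical frame rate for SD content

frames = source_minutes * 60 * source_fps           # 10,800 frames
achieved_fps = frames / (processing_minutes * 60)   # ~9 frames/s processed
slowdown = processing_minutes / source_minutes      # ~3.3x slower than real-time

print(f"~{achieved_fps:.0f} fps processed vs {source_fps} fps needed "
      f"(~{slowdown:.1f}x slower than real-time)")
```

So even a 240 tensor-TOPS discrete GPU comes out roughly 3x short of real-time on this workload.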
The Neural Processing Unit (NPU) provides integrated AI capabilities and can perform up to 11 TOPS, making a total compute performance of 34 TOPS after including 5 TOPS from the CPU and 18 TOPS from the iGPU.
Seems like a strange statement. Is there any software that can leverage this NPU, let alone use the NPU/iGPU/CPU in combination? At any rate, there are definitely ML use cases where your CPU will be busy processing workloads from the ML processes themselves.
From the article:
A single nuclear-diamond battery containing 1 gram [0.04 ounce] of carbon-14 could deliver 15 joules of electricity per day. For comparison, a standard alkaline AA battery, which weighs about 20 grams [0.7 ounces], has an energy-storage rating of 700 joules per gram. It delivers more power than the nuclear-diamond battery would in the short term, but it would be exhausted within 24 hours.
It seems that even a 100 gram nuclear-diamond battery would not be able to sustain a modern smartphone.
My calculations might be off, but it seems even a highly optimized, low-power smartphone (say 10 watt-hours over 24 hours of regular use) would need roughly 25x lower power consumption to work with a 100 g nuclear-diamond battery. And you would likely still need an additional battery of some sort (which would need to be replaced) to handle peaks (I don’t think modern smartphones can function with peaks capped at ~420 mW, which is roughly today’s average draw).
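For transparency, here is the arithmetic behind the ~25x figure; the 15 joules per gram per day comes from the article, while the 10 Wh/day phone budget is my own assumption:

```python
# Back-of-the-envelope: 100 g nuclear-diamond battery vs a smartphone.
# 15 J per day per gram of carbon-14 is from the article; the 10 Wh/day
# phone budget is an assumption for a highly optimized, low-power phone.

joules_per_gram_per_day = 15
battery_grams = 100
phone_wh_per_day = 10

battery_j_per_day = joules_per_gram_per_day * battery_grams  # 1,500 J/day
phone_j_per_day = phone_wh_per_day * 3600                    # 36,000 J/day

shortfall = phone_j_per_day / battery_j_per_day              # ~24x too little energy
battery_mw = battery_j_per_day / 86_400 * 1000               # ~17 mW continuous output
phone_avg_mw = phone_j_per_day / 86_400 * 1000               # ~417 mW average draw

print(f"~{shortfall:.0f}x shortfall; battery supplies ~{battery_mw:.0f} mW "
      f"continuously vs ~{phone_avg_mw:.0f} mW average phone draw")
```

The ~420 mW figure is just the phone’s average draw over the day; actual peaks are far higher, hence the need for some kind of buffer battery.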
It’s interesting that while analysts are making predictions about the competitive dynamics between AMD/Intel/Nvidia/custom CSP silicon, few seem to entertain the possibility of a glut in AI compute hardware due to a lack of revenue generation from “AI” services. Admittedly, maybe this is not the type of article for this sort of thing.
Just goes to show that Russians have had (until recently) relatively easy access to independent information within a few clicks on their smartphones.
I will also note that many reliable news organizations (BBC, DW) started their Russian-language YouTube news programs as far back as 2010. This is also true for well-regarded local independent news outlets (TV Dozhd).