
apigban

apigban@lemmy.dbzer0.com
0 posts • 23 comments

hypervisor: Proxmox

VMs: RHEL 9.2


What kind of thumbnails are you seeing?

I see a good-looking guy pointing at a black box.


Hey, yeah, no stress!

Just let me know if you'd want someone to brainstorm with.


Let me know if you need some remote troubleshooting; if schedules permit, we can do screen shares.


I had this issue when I used Kubernetes; SATA SSDs can't keep up. I'm not sure what the Evo 980 is or what it's rated for, but I would suggest shutting down all container I/O and doing a benchmark using fio.
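As a starting point, here is a minimal fio job file for random 4k writes, which is roughly the pattern container workloads hammer a disk with. The paths, sizes, and runtime are placeholders to adjust for your setup; run it with `fio jobfile.fio` while the containers are stopped.

```ini
; minimal random-write benchmark (assumed values; tune to your disk)
[global]
ioengine=libaio
direct=1
time_based=1
runtime=60

[randwrite-4k]
rw=randwrite
bs=4k
iodepth=32
size=1g
directory=/tmp/fio-test
```

Compare the reported IOPS and latency against what your containers demand at peak.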

My current setup uses Proxmox, spinning rust configured in RAID 5 on a NAS, and a Jellyfin container.

All Jellyfin container transcoding and cache is dumped on a WD 750 NVMe, while all media are stored on the NAS (max bandwidth is 150 MB/s).

You can monitor the I/O using iostat once you've done a benchmark.


I'd check for high I/O wait, especially if all of your VMs are on HDDs.

One of the solutions I had for this issue was to have multiple DNS servers. I solved it by buying a Raspberry Pi Zero W and running a second small instance of Pi-hole there. I made sure that the Pi Zero W is plugged into a separate circuit in my home.
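The redundant-DNS setup above boils down to handing clients two resolvers. A sketch of the client side, assuming the main Pi-hole sits at 192.168.1.2 and the Pi Zero W at 192.168.1.3 (both addresses made up for illustration):

```conf
# /etc/resolv.conf on a client, or the equivalent DHCP DNS options
# first entry: primary Pi-hole (placeholder address)
nameserver 192.168.1.2
# second entry: Pi Zero W fallback on the separate circuit (placeholder)
nameserver 192.168.1.3
```

In practice you'd push both addresses via your DHCP server so every client gets the fallback automatically.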


The person you are replying to either lacks comprehension or maybe just wants to be argumentative and doesn't want to comprehend.


I didn't have a problem with network ports (I use a switch). What I should've considered during purchasing was the number of drives (SATA ports) and the PCIe features (bifurcation, version, number of NVMe slots).

I need high IOPS for my research now, and I'm stuck with RAID 0 commodity SSDs in 3 ports.
