I’ve pirated every video converter known to man (UniConverter, WinX, VideoProc, Aiseesoft, Tipard, etc.) and even tried open-source tools like FFmpeg and HandBrake, and I can’t get hardware acceleration to work, unless I just don’t understand how it’s supposed to work. I have a Radeon RX 470 graphics card and plenty of processing power.
For example, when I convert a video to HEVC without acceleration, I get around 100 FPS and a 2-3 minute render time, but all my CPU cores max out at 100%.
However, when I turn on acceleration or use the AMD HEVC encoder (in FFmpeg or HandBrake), the frame rate drops to 10-15 FPS, the CPU barely goes over 10%, and the GPU jumps to 100%, which is fine, but then it tells me it’ll take about 20 minutes to render a 20-minute TV episode!?!?
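In FFmpeg terms, the two runs I’m comparing boil down to roughly this (typed from memory, so the exact flags and filenames are approximate; hevc_amf is the AMD hardware encoder, libx265 the software one):

    # CPU (software) encode: all cores maxed, ~100 FPS
    ffmpeg -i episode.mkv -c:v libx265 -preset medium -crf 23 -c:a copy out_cpu.mkv

    # GPU (AMD AMF) encode: GPU pegged, but only ~10-15 FPS
    ffmpeg -i episode.mkv -c:v hevc_amf -quality balanced -c:a copy out_gpu.mkv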
This is driving me crazy. Can someone provide some insight on this? I’d be forever grateful. Thanks!
If you don’t have effective cooling, maybe, but I’ve never heard of any reason to keep core utilization under any specific percentage. Are your temps an issue?
No, not so far. No crashes or anything like that. Someone somewhere just told me a good range for video rendering was 65-75% core usage.
I can think of no logical explanation for that. Maybe if you wanted to run a CPU encode and use the system at the same time. But given how many cores systems have these days, percentages don’t mean much. As long as you leave a few cores free, you’ll still be able to use the system.
If you don’t care about that, let it go to 100%.
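If you do want to leave cores free, one way (assuming ffmpeg with libx265; filenames here are placeholders) is to cap the encoder’s thread pool instead of watching a percentage:

    # pools=4 caps x265's worker threads at 4 cores, leaving the rest free
    ffmpeg -i input.mkv -c:v libx265 -x265-params pools=4 -crf 23 -c:a copy output.mkv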
Even then, you’d still want 100%, just with the encode running at a lower priority.
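For example (filenames are placeholders), on Linux/macOS that’s just nice, and on Windows start /low does the same thing:

    # Linux/macOS: lowest CPU priority; the encode soaks up idle cycles only
    nice -n 19 ffmpeg -i input.mkv -c:v libx265 -crf 23 -c:a copy output.mkv

    # Windows (cmd): roughly equivalent
    start "" /low /b /wait ffmpeg -i input.mkv -c:v libx265 -crf 23 -c:a copy output.mkv

The encoder still hits 100% whenever the machine is otherwise idle, but foreground apps win the CPU the moment they need it.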
That’s bullshit. There’s no reason to limit yourself to, or target, any specific non-maximum CPU utilization. That would only make sense as a workaround for hardware faults or cooling issues, never as a general guideline.
A good range for CPU utilization is 100%. Same for memory. Anything less and you’re wasting your computer: the components draw power and age regardless, so you might as well get work out of them.