cross-posted from: https://beehaw.org/post/16899034
Invidious, an alternative front end for watching YouTube in the browser without using YouTube directly (more private): https://inv.nadeko.net/watch?v=h9Z4oGN89MU
I recommend watching or listening to this video at 1.4x playback speed. If you can’t set the speed to that value, then at least watch at 1.25x.
Description:
Graphics cards can run some of the most incredible video games, but how many calculations do they perform every single second? Some of the most advanced graphics cards perform 36 trillion calculations or more every single second. But how can a single device manage tens of trillions of calculations? In this video, we explore the architecture of the 3090 graphics card and its GA102 GPU chip.
Note: We chose to feature the 30 series of GPUs because, to create accurate 3D models, we had to tear down a 3090 GPU rather destructively. We typically select a slightly older model because we’re able to find broken components on eBay. If you’re wondering, the 4090 can perform 82.58 trillion calculations a second, and we’re sure the 5090 will perform even more.
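If you want to sanity-check those headline numbers, they fall out of a simple formula: CUDA cores × boost clock × 2 (each core can retire one fused multiply-add per cycle, which counts as two floating-point operations). A quick back-of-the-envelope sketch using the publicly listed specs (exact boost clocks vary by board):

```python
# Rough check of the headline FP32 figures:
# peak FLOPS ≈ CUDA cores × boost clock × 2 (an FMA counts as 2 operations).

def peak_fp32_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    """Peak single-precision throughput in trillions of operations per second."""
    return cuda_cores * boost_clock_ghz * 2 / 1000  # cores × GHz × 2 gives GFLOPS; /1000 -> TFLOPS

print(f"RTX 3090: {peak_fp32_tflops(10496, 1.695):.2f} TFLOPS")  # ~35.6, the "36 trillion"
print(f"RTX 4090: {peak_fp32_tflops(16384, 2.52):.2f} TFLOPS")   # ~82.6, matches the 82.58 quoted above
```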
Table of Contents:
00:00 - How many calculations do Graphics Cards Perform?
02:15 - The Difference between GPUs and CPUs?
04:56 - GPU GA102 Architecture
06:59 - GPU GA102 Manufacturing
08:48 - CUDA Core Design
11:09 - Graphics Cards Components
12:04 - Graphics Memory GDDR6X GDDR7
15:11 - All about Micron
16:51 - Single Instruction Multiple Data Architecture
17:49 - Why GPUs run Video Game Graphics, Object Transformations
20:53 - Thread Architecture
23:31 - Help Branch Education Out!
24:29 - Bitcoin Mining
26:50 - Tensor Cores
27:58 - Outro
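For a feel of what the "Single Instruction Multiple Data Architecture" and "Object Transformations" chapters cover: the same 4x4 matrix multiply is applied to every vertex of a model independently, which is exactly the kind of uniform, data-parallel work a GPU spreads across thousands of cores. A minimal illustrative sketch in plain Python (the GPU does this in hardware, not with a loop like this):

```python
# Illustrative only: one instruction sequence (a 4x4 matrix-vector multiply)
# applied to many data elements (vertices). A GPU runs thousands of these
# transformations in parallel instead of looping over them one by one.

def transform(vertex, matrix):
    """Apply a 4x4 transformation matrix to one vertex in homogeneous coordinates."""
    x, y, z, w = vertex
    return tuple(
        matrix[row][0] * x + matrix[row][1] * y + matrix[row][2] * z + matrix[row][3] * w
        for row in range(4)
    )

# Translate every vertex by (1, 2, 3): same operation, many data elements.
translate = [
    [1, 0, 0, 1],
    [0, 1, 0, 2],
    [0, 0, 1, 3],
    [0, 0, 0, 1],
]
vertices = [(0, 0, 0, 1), (1, 0, 0, 1), (0, 1, 0, 1)]
print([transform(v, translate) for v in vertices])
```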
Saving this for when I have time to watch it. The inner workings of computers are electricity and magic to me
We squeeze this rock, and bam, you can watch porn on the bus!
I think you’d like this channel: https://m.youtube.com/@CoreDumpped