Micron is bringing DDR5 memory in unusual capacities – PROHARDVER! Comments
> Wow! I can even see the point of this.
> 16GB is sometimes too little these days, while 32GB is wasteful overkill in a gaming PC.
This is even more true for a 12-channel server/workstation: to get the full memory bandwidth, every memory channel needs a DDR5 DIMM populated… but that is still very expensive, and it doesn't hurt to optimize the TCO.
“Non-binary DDR5 is finally coming to save your wallet
Need a New Year’s resolution? How about stop paying for memory you don’t need”
Say your workload benefits from having 3GB/thread. Using a 96-core AMD Epyc 4-based system with one DIMM per channel, you’d need at least 576GB of memory. However, 32GB DIMMs would leave you 192GB short, while 64GB DIMMs would leave you with just as much in surplus. You could drop down to 10 channels and get closer to your target, but then you’re going to take a hit to memory bandwidth and pay extra for the privilege. And this problem only gets worse as you scale up.
In a two-DIMM-per-channel configuration — something we’ll note AMD doesn’t support on Epyc 4 at launch — you could use mixed-capacity DIMMs to zero in on the ideal memory-to-core ratio, but as Drake points out, this isn’t a perfect solution.
“Maybe the system has to down clock that two-DIMM-per-channel solution, so it can’t run the maximum data rate. Or maybe there’s a performance implication of having uneven ranks in each channel,” he said.
By comparison, 48GB DIMMs will almost certainly cost less, while allowing you to hit your ideal memory-to-core ratio without sacrificing on bandwidth. And as we’ve talked about in the past, memory bandwidth matters a lot, as chipmakers continue to push the core counts of their chips ever higher.
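The article's sizing argument is easy to check with back-of-the-envelope arithmetic. A minimal sketch (assuming the 96-core part runs two SMT threads per core, so 192 threads at 3GB/thread, across 12 channels with one DIMM each):

```python
# Sizing example from the article: 96-core Epyc, SMT on -> 192 threads,
# target 3 GB per thread, 12 memory channels, one DIMM per channel.
THREADS = 96 * 2
TARGET_GB = THREADS * 3          # 576 GB ideal capacity
CHANNELS = 12

for dimm_gb in (32, 48, 64):     # binary vs. non-binary DIMM capacities
    total = dimm_gb * CHANNELS
    delta = total - TARGET_GB
    print(f"{dimm_gb} GB DIMMs -> {total} GB total ({delta:+d} GB vs. target)")
# 32 GB DIMMs -> 384 GB total (-192 GB vs. target)
# 48 GB DIMMs -> 576 GB total (+0 GB vs. target)
# 64 GB DIMMs -> 768 GB total (+192 GB vs. target)
```

Only the non-binary 48GB DIMM lands exactly on the target while keeping all 12 channels populated; the binary capacities miss by 192GB in either direction, exactly as the article states.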
The calculus is going to look different depending on your needs, but at the end of the day, non-binary memory offers greater flexibility for balancing cost, capacity, and bandwidth.
And there aren’t really any downsides to using non-binary DIMMs, Drake said, adding that, in certain situations, they may actually perform better.
What about CXL?
Of course non-binary memory isn’t the only way to get around the memory-core ratio problem.
“Technologies such as non-binary capacities are helpful, but so is the move to CXL memory — shared system memory — and on-chip high-bandwidth memory,” Lam said.
With the launch of AMD’s Epyc 4 processors last fall and Intel’s upcoming Sapphire Rapids processors next month, customers will soon have another option for adding memory capacity and bandwidth to their systems. Samsung and Astera Labs have both shown off memory-expansion modules, and Marvell plans to offer controllers for similar products in the future.
SK hynix has one as well:
The initial offerings of this product are set to be 48 gigabyte (GB) and 96GB modules for supply to cloud data centers. It is also expected to power high-performance servers for big-data processing such as artificial intelligence (AI) and machine learning, as well as Metaverse applications, among others.
Kevin (Jongwon) Noh, President and Chief Marketing Officer at SK hynix, said, “In line with the release of 24Gb DDR5, SK hynix is closely engaging with a number of customers that provide cloud services. We will continue to strengthen our leadership in the growing DDR5 market by introducing advanced technologies and developing products with ESG-awareness.””