Carbonite

Press Release: ADATA Unveils XPG SPECTRIX D50 DDR4

Gouhan

Forum Addict
VIP Supporter
TheOverClocker.com
Joined
Apr 22, 2010
Messages
2,771
Reaction score
595
Points
3,935
ADATA, a leading manufacturer of high-performance DRAM modules, NAND Flash products, gaming products, and mobile accessories, today announces the launch of the XPG SPECTRIX D50 DDR4 RGB memory module. Reaching performance of up to 4800MHz, sporting a maximum capacity of 32GB, and featuring an elegant geometric design, the XPG SPECTRIX D50 offers immense performance and minimalist styling gamers, overclockers, and PC enthusiasts will appreciate.
Read More
 

[email protected]

CUD has me already.
Joined
Nov 21, 2017
Messages
547
Reaction score
306
Points
1,295
Location
Durbanville
About time, so when is 5000MHz coming?
Wonder how many kidneys we are going to be selling to purchase this though 😂.
 

Gouhan
DIMMs that are way out of the JEDEC specification.

JEDEC specifications are low, always have been. Think of them as the most basic standard/setting to which all DIMMs must at least adhere. The specification itself, in terms of performance etc., is largely irrelevant. Both AMD and Intel certify and spec DRAM frequencies higher than the JEDEC spec.
 

Gouhan
About time, so when is 5000MHz coming?
Wonder how many kidneys we are going to be selling to purchase this though 😂.
Towards the end of any DDR DRAM generation, the frequency gets much higher. That doesn't necessarily mean better outright performance though, as the latency increases disproportionately. Performance is also compromised by the DRAM IC layout and PCB design (that which allows the frequency to be so high). DDR5 is here in 2021, so it's expected we would start seeing these high-speed numbers more regularly, just as we started to see DDR3 3000/3100MHz (Avexir, remember this brand?) the year before DDR4 came to market.
Z490 has already shown some boards going to 5,200MHz @ C14-26-26-XX! (You won't be able to buy such a kit, but it is possible.)
 

Eon

VIP
VIP Supporter
Joined
Nov 12, 2014
Messages
545
Reaction score
327
Points
1,615
Why is the ddr on gfx cards always so much higher than for stand alone pc ram?
 

Gouhan
Why is the ddr on gfx cards always so much higher than for stand alone pc ram?
GDDR memory uses quad data rate to achieve its frequency, whereas desktop DDR memory uses double data rate (the DD in DDR originally came from this).
For both GDDR and DDR, you will notice the actual clock frequency (in Hz/MHz/GHz etc.) is relatively low. For instance, DDR4 3200 actually operates at 1,600MHz. This is the SDR (Single Data Rate) and actual clock speed. It becomes 3,200MT/s (MegaTransfers/s, which is what we often refer to as MHz even though it isn't; CPU-Z reports this correctly, showing only the SDR rate) because data is read/written on both the rising and falling edge of the square wave.

For GDDR, a second square wave that's out of phase with the first is used, which allows two falling and two rising edges in a single clock cycle.
So instead of two opportunities for data transfer per cycle as with desktop DDR, we have four; hence the quad data rate nature of GDDR. It isn't called GQDR, because it's still fundamentally double data rate. At the base level GDDR and DDR are the same: 1,600MHz that becomes DDR4 3200MT/s in your PC would become GDDR 6,400MT/s on your GPU.
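As a rough sketch of the arithmetic above (the function name is mine, not any real API), the same real clock yields different effective rates depending on transfers per cycle:

```python
# Effective transfer rate from the actual (SDR) clock.
# DDR transfers on both clock edges (x2); per the description above,
# GDDR's second out-of-phase clock gives four transfers per cycle (x4).

def effective_mts(sdr_clock_mhz, transfers_per_cycle):
    """Effective rate in MT/s from the real clock in MHz."""
    return sdr_clock_mhz * transfers_per_cycle

print(effective_mts(1600, 2))  # desktop DDR4 "3200" -> 3200
print(effective_mts(1600, 4))  # GDDR on a GPU       -> 6400
```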

The reason we don't, or didn't, use GDDR as main memory is that this quad data rate demands significantly different data line control with tighter tolerances, so it's more difficult and costly to manufacture. Moreover, because of the complexity of clocking what are essentially two waves this way every cycle, GDDR has much higher latency: for example, instead of C16-18-18-36 for DDR4 3200, it would be C40-66-66-100 for GDDR 6400.
The reason these latencies aren't as important with GDDR is partially related to trace distance (GPU memory sits a lot closer to the GPU than the DIMM sockets do to the CPU). More important, though, is that GPUs are inherently parallel processors, which hides latency penalties quite well. As in general computing, the most obvious way to hide latency penalties is to extract as much parallelism as possible.
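To see why the bigger CAS numbers aren't as scary as they look, you can convert cycles to nanoseconds against the real clock. This is a sketch using the illustrative timings from the post, not any product's datasheet:

```python
# First-word CAS latency in nanoseconds: cycles divided by the
# real (SDR) clock. Integer-first ordering keeps the results exact.

def cas_ns(cas_cycles, sdr_clock_mhz):
    """CAS latency in ns, given cycles and the real clock in MHz."""
    return cas_cycles * 1000 / sdr_clock_mhz

print(cas_ns(16, 1600))  # DDR4 3200 C16   -> 10.0 ns
print(cas_ns(40, 1600))  # "GDDR 6400" C40 -> 25.0 ns
```

So the GDDR example is slower in absolute time too, but by a factor the GPU's parallelism can absorb, not the factor the raw cycle counts suggest.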

For CPUs, where there are plenty of serialised operations, it would end up hurting performance, and the only remedy would be larger caches from L1 all the way up. Low-level caches are costly, increase die area, add complexity to chip design, and in terms of computational performance/mm^2 are wasteful.

We will have a DDR5 platform for desktop within 12 months. It doesn't switch to QDR, but it effectively doubles bandwidth at the same clock frequency in a dual-channel-like manner, with the SDR rate increased from 1.6GHz to 3.2GHz as opposed to the 0.8GHz to 1.6GHz limits of DDR4. (The same DDR4 2400 becomes DDR5 4800 with a comparatively small sacrifice in timings.)
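The bandwidth doubling works out neatly if you multiply the transfer rate by the 8 bytes a 64-bit module moves per beat. A minimal sketch (helper name is mine):

```python
# Peak theoretical bandwidth of a 64-bit memory module:
# MT/s x 8 bytes per transfer, reported in GB/s.

def peak_gbs(mts, bus_bytes=8):
    """Peak bandwidth in GB/s for a given transfer rate in MT/s."""
    return mts * bus_bytes / 1000

print(peak_gbs(2400))  # DDR4 2400 -> 19.2 GB/s
print(peak_gbs(4800))  # DDR5 4800 -> 38.4 GB/s
```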

Hope that helps a bit.
 
