Thanks for the replies guys
Turing: Memory Compression Iterated
As I stated at the start of this section, the key to improving GPUs is to tackle the problem from two directions: increase the available memory bandwidth, and then decrease how much of it you use. For the latter, NVIDIA has employed a number of tricks over the years. Perhaps the most potent of which (that they’re willing to talk about, at least) being their memory compression technology.
The cornerstone of memory compression is what is called delta color compression. DCC is a per-buffer/per-frame compression method that breaks down a frame into tiles, and then looks at the differences between neighboring pixels – their deltas. By utilizing a large pattern library, NVIDIA is able to try different patterns to describe these deltas in as little data as possible, ultimately conserving bandwidth throughout the GPU, not only reducing DRAM bandwidth needs, but also L2 bandwidth needs and texture unit bandwidth needs (in the case of reading back a compressed render target).
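NVIDIA's actual pattern library and tile formats are proprietary, but the core idea – store one anchor pixel and then cheap deltas between neighbours – can be sketched in a few lines of Python. The tile size and bit accounting below are made up for illustration; the point is just that smooth tiles compress well while random data (like the simulation data discussed further down) barely compresses at all:

```python
def delta_bits(tile):
    """Rough bit cost of a tile under toy delta encoding:
    store the first pixel at full 8-bit precision, then each
    subsequent pixel as a signed delta from its predecessor."""
    bits = 8  # anchor pixel, full precision
    for prev, cur in zip(tile, tile[1:]):
        delta = cur - prev
        # sign bit plus enough bits for the delta's magnitude
        bits += 1 + max(abs(delta), 1).bit_length()
    return bits

raw = 8 * 16  # uncompressed cost: 16 pixels at 8 bits each

smooth = list(range(100, 116))   # gradient tile: every delta is 1
noisy = [0, 255] * 8             # worst case: huge deltas every pixel

print(delta_bits(smooth), "bits vs", raw, "raw")  # big saving
print(delta_bits(noisy), "bits vs", raw, "raw")   # delta cost exceeds raw
```

Real hardware keeps a per-tile flag and simply falls back to storing the tile uncompressed when no pattern beats the raw size, which is why compression can only ever help and never hurt bandwidth.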
The NVIDIA Turing GPU Architecture Deep Dive: Prelude to GeForce RTX
www.anandtech.com
Not sure how that helps your sims, since if the data is randomised there will be limited compression and you'd still use all of the available VRAM.
Thanks for this. I was told by my supervisor that, from his experience, the more RAM the better. We're engineers, so we know basically nothing about this kind of stuff, except that GPU acceleration does decrease solution time in Ansys Fluent.
Interestingly, my RTX 2080 Ti has Samsung GDDR6 memory... The only downside is the two-year warranty, but apart from that, having owned a Palit before, it was very good.
Also could be due to GDDR6 being expensive at the moment, as there is currently only one supplier; not sure when Samsung is pushing out their GDDR6 VRAM...
I need the GPU for my master's thesis, so unfortunately it has to come out of my own pocket and my budget is quite limited. Thanks for the advice, will keep it in mind when working. It's a pretty creative loophole 🆒. To do certain flow visualizations in any CFD program you need a GPU, so I could spin it to the company that way.

Well then I'd say try to get a 1080, as it has far more RAM, and GDDR5X VRAM at that.
If you’re using it in really VRAM-intensive situations, then it makes sense to get the fastest RAM, and the largest amount of it, that you can.
If it’s for business purposes, you can claim back VAT and save 15% on a brand new card so a 2070 or 2080 would be great.
If you’re just kicking around at home on your own projects, I’d still try to get work to buy it for you, then pay them back and save some cash. It’s a little bit of admin for them, but it’ll help you out a ton.
Does it have to be Nvidia or can your software work with AMD? Vega might be really helpful if you can use it as it uses HBM memory.