Tech Talk – Nvidia’s RTX 20xx series and ray tracing

At Gamescom 2018, Nvidia hosted a press conference to unveil its next-generation graphics technology. The announcement revealed, among others, the GeForce RTX 2070 ($500 USD), the RTX 2080 ($700), and the high-end RTX 2080 Ti ($1,000), all of which will release on September 20, 2018.

Nvidia also showed off the latest iteration of its ray-tracing rendering technology, branded RTX Ray Tracing, with various trailers from Star Wars and Battlefield V demonstrating a substantial jump in both visual appeal and the realism that ray tracing brings.

There are both rendered scene trailers, such as Battlefield V and Star Wars, available on YouTube (links below), and a real-time demo of Shadow of the Tomb Raider.

There was some skepticism on various Facebook outlets, however. Some believe the demos should be running at 60 fps at 4K (four times the pixel count of 1080p), if not at “a higher fps to cleanse any anomalies like sparkling, or ants”. Other notable comments mentioned the entry prices and the likelihood of crypto-currency demand inflating market prices: $500/£399/€459 as a starting price for an entry-level card in the range genuinely sounds steep!
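For scale, 4K UHD carries exactly four times the pixels of 1080p, which is what the “1080 x 4” shorthand refers to. A quick sketch of the arithmetic:

```python
# Pixel counts for 1080p vs 4K UHD (standard published resolutions).
w1080, h1080 = 1920, 1080
w4k, h4k = 3840, 2160

pixels_1080p = w1080 * h1080      # 2,073,600 pixels
pixels_4k = w4k * h4k             # 8,294,400 pixels

print(pixels_4k // pixels_1080p)  # 4 – exactly four times the pixels

# At a 60 fps target, the per-second pixel budget at 4K:
print(pixels_4k * 60)             # 497,664,000 pixels per second
```

Every one of those pixels needs at least one ray (and typically several) per frame, which is why the 4K/60 target is so demanding for ray tracing.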

Following the announcement, the RTX 20xx series came under scrutiny after a benchmark of the Shadow of the Tomb Raider demo could not sustain ray tracing at the optimum target of 4K at 60 fps. Many reasonable suggestions implied that the advertised 50% speed increase over the 1080 Ti is an exaggeration of capability, and that even with optimum settings a maximum of around 30% with optimisation would be a realistic expectation.
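To illustrate why the gap between a 50% and a 30% uplift matters for that 60 fps target, here is a rough sketch; note the 1080 Ti baseline framerate below is a hypothetical figure chosen for illustration, not a measured benchmark:

```python
# Hypothetical illustration: a 50% vs 30% uplift over a 1080 Ti baseline
# and whether each clears a 60 fps target. The baseline is an assumed
# value, not a measured result.
baseline_fps = 45.0  # assumed 1080 Ti framerate at some fixed 4K workload

advertised = baseline_fps * 1.50  # the claimed ~50% uplift -> 67.5 fps
realistic = baseline_fps * 1.30   # the ~30% estimate       -> 58.5 fps

print(f"advertised: {advertised:.1f} fps")  # clears 60 fps
print(f"realistic:  {realistic:.1f} fps")   # falls just short of 60 fps
```

The point is that a workload sitting anywhere near the 60 fps threshold can tip to either side of it depending on which uplift figure turns out to be true.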

My thoughts: given this card’s specification and its predecessor’s, this is a case of GDDR5 vs GDDR6 and of how ray tracing is performed and implemented. If the rays are calculated using general GPU resources, the workload consumes gigabits of memory bandwidth.
The significant bandwidth difference between GDDR5 and GDDR6 is then swallowed by those software calculations, inviting a lot of scrutiny that could have been avoided with an additional dedicated core performing the calculations without loss of framerate or bandwidth hogging.
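The bandwidth difference in question can be put into numbers. Peak memory bandwidth follows from the per-pin data rate and the bus width; the figures below are the published specs for the two flagship cards (strictly, the 1080 Ti uses GDDR5X rather than plain GDDR5):

```python
# Peak bandwidth = per-pin data rate (Gb/s) x bus width (bits) / 8 bits per byte.
# Published specs: GTX 1080 Ti uses 11 Gb/s GDDR5X on a 352-bit bus;
# RTX 2080 Ti uses 14 Gb/s GDDR6, also on a 352-bit bus.
def peak_bandwidth_gb_s(data_rate_gb_per_pin: float, bus_width_bits: int) -> float:
    """Return peak memory bandwidth in GB/s."""
    return data_rate_gb_per_pin * bus_width_bits / 8

gtx_1080_ti = peak_bandwidth_gb_s(11, 352)  # 484.0 GB/s
rtx_2080_ti = peak_bandwidth_gb_s(14, 352)  # 616.0 GB/s

print(gtx_1080_ti, rtx_2080_ti)
print(f"uplift: {(rtx_2080_ti / gtx_1080_ti - 1) * 100:.0f}%")  # ~27% more bandwidth
```

A roughly 27% bandwidth uplift is substantial, but if ray traversal is done in software on the shader cores, that headroom is consumed by the ray-tracing workload itself rather than translating into raw framerate.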