Nvidia Launches New GPU Named After Ada Lovelace | RTX 6000
Tech News

In addition to the promised generational improvements, the high-end workstation GPU promises to deliver further speed gains by changing how viewports and scenes are rendered.
In honor of the English mathematician widely credited as the first computer programmer, Nvidia has introduced the Nvidia RTX 6000, a high-end workstation GPU built on the brand-new Ada Lovelace architecture.

The Nvidia RTX 6000 GPU promises to make significant improvements in real-time rendering, graphics, AI, and computation, including engineering simulation. It is said to perform up to two to four times better than the Nvidia RTX A6000 of the previous generation. It is distinct from the Turing-based Nvidia Quadro RTX 6000 from 2018.

The Nvidia RTX 6000 is a dual-slot PCIe Gen 4 graphics card with 48 GB of GDDR6 memory with error-correcting code (ECC) and a maximum power consumption of 300 W, making it fully compatible with workstations using the newest Intel and AMD CPUs.

For streaming multiple concurrent XR sessions utilizing Nvidia CloudXR, it boasts three times the video encoding performance of the Nvidia RTX A6000 and supports Nvidia virtual GPU software for numerous high-performance virtual workstation instances.

The Nvidia RTX 6000 comes equipped with third-generation RT Cores for ray tracing, fourth-generation Tensor Cores for AI computation, and next-generation CUDA cores for graphics and simulation.


Deep Learning Super Sampling Technology

 


With the release of the new “Ada Lovelace” Nvidia RTX 6000, Nvidia DLSS has entered its third generation.

By rendering frames at a reduced resolution and then using the GPU’s ‘AI’ Tensor cores to forecast what a high-res frame might look like, DLSS leverages deep learning-based upscaling algorithms.

DLSS 2 used Nvidia’s ‘Ampere’ GPUs from the previous generation to forecast, pixel-by-pixel, what a high-resolution current frame would look like from a low-resolution current frame and a high-resolution prior frame.
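The DLSS 2 idea described above can be illustrated with a minimal, purely conceptual sketch. This is not Nvidia's actual algorithm: real DLSS uses a trained neural network plus motion vectors on Tensor cores, while the function names and the simple blend below are illustrative assumptions only.

```python
import numpy as np

def upscale_nearest(frame, factor):
    """Nearest-neighbour upscale: a crude stand-in for the learned upscaler."""
    return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

def dlss2_style_frame(low_res_current, high_res_previous, factor=2, blend=0.5):
    """Toy DLSS-2-style reconstruction: predict a high-res current frame
    from a low-res current frame and a high-res previous frame.
    Real DLSS replaces this fixed blend with a neural network."""
    guess = upscale_nearest(low_res_current, factor).astype(float)
    return blend * guess + (1.0 - blend) * high_res_previous

low = np.array([[0.0, 1.0], [1.0, 0.0]])  # 2x2 low-resolution render
prev = np.ones((4, 4))                    # previous 4x4 high-res frame
out = dlss2_style_frame(low, prev)
print(out.shape)  # (4, 4)
```

The point of the sketch is the data flow: the GPU only pays for a 2x2 render, and the 4x4 output is predicted from that render plus the previous high-resolution frame.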

Nvidia CEO Jensen Huang made no mention of the potential advantages of DLSS 3 for commercial 3D applications.

While DLSS 2 was mostly employed in GPU-limited visualization programs such as Enscape and Autodesk VRED, we are curious whether DLSS 3 could provide significant performance benefits for 3D CAD, which is frequently CPU-limited.


Shader Execution Reordering Technique

 

GPUs are most effective when performing similar tasks at the same time, according to Nvidia. Rays, however, bounce in diverse directions and cross a variety of surfaces when using ray tracing.

Huang claims that this may cause various threads to process various shaders or access memory that is challenging to coalesce or cache.

The Nvidia RTX 6000 dynamically rearranges its workload using Shader Execution Reordering (SER), so related shaders are handled together.

Nvidia claims that SER can speed up ray tracing by two to three times and enhance frame rates by up to 25%. Which software programs will make use of this technology? Nvidia did not say.
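The reordering idea behind SER can be sketched in a few lines. This is a conceptual illustration, not Nvidia's hardware scheduler: the shader names, payloads, and plain sort below are assumptions standing in for the on-chip reordering that groups threads hitting the same shader.

```python
from itertools import groupby

# Hypothetical ray-hit records: (shader_id, payload). After a bounce, hits
# arrive in an incoherent order, so neighbouring threads would otherwise
# run different shaders and fetch unrelated memory.
hits = [("glass", 0), ("metal", 1), ("glass", 2), ("diffuse", 3),
        ("metal", 4), ("glass", 5)]

def reorder_by_shader(hits):
    """SER-style reordering sketch: sort hits by shader id so that
    work using the same shader is scheduled together."""
    return sorted(hits, key=lambda h: h[0])

reordered = reorder_by_shader(hits)
batches = {shader: [payload for _, payload in group]
           for shader, group in groupby(reordered, key=lambda h: h[0])}
print(batches)  # {'diffuse': [3], 'glass': [0, 2, 5], 'metal': [1, 4]}
```

After reordering, each batch runs one shader over coherent data, which is the divergence-and-cache problem Huang describes that SER is meant to solve.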


Simulation Environment For Engineering

 

The primary focus of the Nvidia RTX 6000 introduction was on graphics-heavy workflows. However, Nvidia also spent some time on engineering simulation, notably with Ansys software for computational fluid dynamics.

Designers and engineers will be able to continue pushing the limits of engineering simulations thanks to the new Nvidia Ada Lovelace architecture, according to Dipankar Choudhury, an Ansys fellow and HPC Centre of Excellence lead.

“The RTX 6000 GPU’s larger L2 cache, significant boost in core count, improved reliability, and higher memory bandwidth will deliver outstanding performance benefits for the entire Ansys application portfolio,” he said.

In Ansys Discovery, Nvidia demonstrated an automobile model prepared for a wind tunnel analysis with flow inlets, pressure outlets, and wall boundary conditions.

It demonstrated how the Nvidia RTX 6000 can enable the exploration of numerous design options in real time, showing that when the flow inlet velocity is modified, the outcomes can be seen right away.

Nvidia also emphasized the advantages of having 48 GB of RAM, noting that users of the Nvidia RTX 6000 can enhance the accuracy of the solver to execute more precise simulations while still getting results in almost real-time.
