Beyond GPU Memory Limits with Unified Memory on Pascal
Modern computer architectures have a hierarchy of memories of varying size and performance. GPU architectures are approaching a terabyte per second of memory bandwidth that, coupled with high-throughput computational cores, creates an ideal system for data-intensive tasks. However, fast memory is expensive. Modern applications striving to solve larger and larger problems can be limited by GPU memory capacity. Since the capacity of GPU memory is significantly lower than system memory, it creates a barrier for developers accustomed to programming just one memory space. With the legacy GPU programming model there is no easy way to "just run" your application when you're oversubscribing GPU memory. Even if your dataset is only slightly larger than the available capacity, you would still need to manage the active working set in GPU memory. Unified Memory is a much more intelligent memory management system that simplifies GPU development by providing a single memory space directly accessible by all GPUs and CPUs in the system, with automatic page migration for data locality.
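To make the single-memory-space idea concrete, here is a minimal sketch of that model using `cudaMallocManaged()`. The `scale` kernel, the array size, and the launch configuration are illustrative choices, not from the original post; the point is that one pointer is valid in both CPU code and GPU kernels.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel: doubles each element in place.
__global__ void scale(float *data, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) data[i] *= 2.0f;
}

int main() {
  const int n = 1 << 20;
  float *data;
  // One allocation, one pointer, accessible from both CPU and GPU.
  cudaMallocManaged(&data, n * sizeof(float));

  for (int i = 0; i < n; i++) data[i] = 1.0f;  // CPU writes
  scale<<<(n + 255) / 256, 256>>>(data, n);    // GPU reads and writes
  cudaDeviceSynchronize();                     // wait before the CPU touches data again
  printf("data[0] = %f\n", data[0]);           // CPU reads the result

  cudaFree(data);
  return 0;
}
```

Note that there is no `cudaMemcpy` anywhere: the system migrates pages between CPU and GPU memory automatically as each processor touches them.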
Page migration allows the accessing processor to benefit from L2 caching and the lower latency of local memory. Moreover, migrating pages to GPU memory ensures GPU kernels take advantage of the very high bandwidth of GPU memory (e.g. 720 GB/s on a Tesla P100). And page migration is completely invisible to the developer: the system automatically manages all data movement for you. Sounds great, right? With the Pascal GPU architecture Unified Memory is even more powerful, thanks to Pascal's larger virtual memory address space and Page Migration Engine, which enable true virtual memory demand paging. It's also worth noting that manually managing memory movement is error-prone, which hurts productivity and delays the day when you can finally run your whole code on the GPU to see those great speedups that others are bragging about. Developers can spend hours debugging their code because of memory coherency issues. Unified Memory brings huge benefits for developer productivity. In this post I will show you how Pascal can enable applications to run out-of-the-box with larger memory footprints and achieve great baseline performance.
For a moment you can completely forget about GPU memory limitations while developing your code. Unified Memory was introduced in 2014 with CUDA 6 and the Kepler architecture. This relatively new programming model allowed GPU applications to use a single pointer in both CPU functions and GPU kernels, which greatly simplified memory management. CUDA 8 and the Pascal architecture significantly improve Unified Memory functionality by adding 49-bit virtual addressing and on-demand page migration. The large 49-bit virtual addresses are sufficient to enable GPUs to access the entire system memory plus the memory of all GPUs in the system. The Page Migration Engine allows GPU threads to fault on non-resident memory accesses, so the system can migrate pages from anywhere in the system to the GPU's memory on demand for efficient processing. In other words, Unified Memory transparently enables out-of-core computations for any code that uses Unified Memory for allocations (e.g. `cudaMallocManaged()`). It "just works" without any modifications to the application.
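Here is a sketch of what that out-of-core behavior looks like under the assumptions above: on Pascal, a managed allocation can exceed physical GPU memory, and pages are faulted in (and evicted) as the kernel touches them. The `touch` kernel and the 1.5x oversubscription factor are hypothetical choices for illustration.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void touch(char *p, size_t n) {
  size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
  // Grid-stride loop so a modest grid covers the whole allocation.
  for (; i < n; i += (size_t)gridDim.x * blockDim.x) p[i] = 1;
}

int main() {
  size_t free_bytes, total_bytes;
  cudaMemGetInfo(&free_bytes, &total_bytes);

  // Deliberately oversubscribe: allocate 1.5x the physical GPU memory.
  size_t n = total_bytes + total_bytes / 2;
  char *p;
  if (cudaMallocManaged(&p, n) != cudaSuccess) return 1;

  // On Pascal the kernel faults non-resident pages into GPU memory on
  // demand; the driver evicts pages back to system memory as needed.
  touch<<<1024, 256>>>(p, n);
  cudaDeviceSynchronize();
  printf("Touched %zu bytes on a GPU with %zu bytes of memory\n", n, total_bytes);

  cudaFree(p);
  return 0;
}
```

On pre-Pascal GPUs this allocation would simply fail, since Unified Memory there cannot exceed physical GPU memory; demand paging is what makes oversubscription possible.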
CUDA 8 also adds new ways to optimize data locality by providing hints to the runtime, so it is still possible to take full control over data migrations. These days it's hard to find a high-performance workstation with just one GPU. Two-, four- and eight-GPU systems are becoming common in workstations as well as large supercomputers. The NVIDIA DGX-1 is one example of a high-performance integrated system for deep learning with eight Tesla P100 GPUs. If you thought it was difficult to manually manage data between one CPU and one GPU, now you have eight GPU memory spaces to juggle. Unified Memory is crucial for such systems, and it enables more seamless code development on multi-GPU nodes. Whenever a particular GPU touches data managed by Unified Memory, this data can migrate to the local memory of that processor, or the driver can establish direct access over the available interconnect (PCIe or NVLink).
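A brief sketch of those CUDA 8 hint APIs follows. It assumes a managed buffer `p` of `n` bytes and device 0; the helper function `tune_locality` is hypothetical, but `cudaMemAdvise` and `cudaMemPrefetchAsync` are the real entry points the runtime provides for this.

```cuda
#include <cuda_runtime.h>

// Hypothetical helper: pin a managed buffer's pages to a GPU while keeping
// them directly readable from the CPU, then prefetch them in bulk.
void tune_locality(float *p, size_t n, cudaStream_t stream) {
  int device = 0;  // assumed target GPU

  // Prefer keeping the physical pages on this GPU, avoiding ping-ponging
  // when multiple processors touch the same data.
  cudaMemAdvise(p, n, cudaMemAdviseSetPreferredLocation, device);

  // Let the CPU access the pages in place over the interconnect instead of
  // migrating them back on every CPU touch.
  cudaMemAdvise(p, n, cudaMemAdviseSetAccessedBy, cudaCpuDeviceId);

  // Migrate everything up front rather than faulting page by page.
  cudaMemPrefetchAsync(p, n, device, stream);
}
```

These hints do not change correctness, only placement and migration policy, so you can layer them onto working Unified Memory code as a tuning step.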