Build AI-Powered Games with NVIDIA DLSS 4.5, RTX, and Unreal Engine 5

Today, game developers can begin integrating NVIDIA DLSS 4.5 with Dynamic Multi Frame Generation, Multi Frame Generation 6X, and the second-generation transformer model for NVIDIA Super Resolution.  

In this post, we’ll go over new technologies and resources to share with our game-developer community, including:

- A new NVIDIA TensorRT for RTX plugin for Unreal Engine's Neural Network Engine (NNE)
- NVIDIA Kimodo for easier motion generation
- A guide to using ComfyUI to help produce pre-production assets
- More than a dozen new sessions from GDC and GTC now available on YouTube
- Our April "Level Up with NVIDIA" webinar, highlighting path-traced hair in Unreal Engine 5.7

Integrate DLSS 4.5 Dynamic Multi Frame Generation 

At CES 2026, we introduced DLSS 4.5, extending its AI-driven rendering pipeline with a second-generation transformer model for Super Resolution to deliver another major upgrade to image quality. DLSS 4.5 also introduced Dynamic Multi Frame Generation and an updated 6X Multi Frame Generation mode, enabling significantly higher frame rates while maintaining responsiveness. 

The release built on the rapid adoption of DLSS 4: more than 250 games and applications already support Multi Frame Generation, making it one of NVIDIA's fastest-adopted gaming technologies. Across all versions, DLSS is now available in more than 700 games and apps.

Video 1. Bryan Catanzaro of NVIDIA walking through DLSS 4.5, its new features, and more 

The DLSS 4.5 SDK with Dynamic Multi Frame Generation and Multi Frame Generation 6X is now available to developers, as well as the second-generation transformer model for Super Resolution. Built on Streamline, the SDK offers a consistent integration path across DLSS features, allowing developers to selectively adopt capabilities like Ray Reconstruction or Dynamic Multi Frame Generation. Updated APIs, documentation, and sample code help reduce integration time and make it easier to bring DLSS into both new and existing projects.
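As a rough illustration of what an N-x Multi Frame Generation mode means for presented frame rate, the sketch below models the idealized math only; it ignores generation cost and frame pacing, so real-world results will be lower, and the function name is ours, not part of the SDK:

```python
def effective_fps(rendered_fps: float, mfg_mode: int) -> float:
    """Idealized presented frame rate for an N-x Multi Frame Generation mode.

    In an N-x mode, each traditionally rendered frame is accompanied by
    N - 1 AI-generated frames, so N frames are presented per rendered
    frame. This ignores generation overhead and frame pacing.
    """
    return rendered_fps * mfg_mode

# A game rendering 40 fps in the 6X mode would present up to 240 fps.
print(effective_fps(40, 6))  # -> 240
```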

Accelerating AI Workloads in Unreal Engine’s NNE with TensorRT for RTX

The TensorRT for RTX plugin provides a runtime for Unreal Engine’s NNE, enabling efficient deployment of AI models directly within real-time applications. By leveraging RTX GPUs across desktops, laptops, and workstations, TensorRT for RTX accelerates workloads such as rendering, language, speech, and animation while maintaining strong performance on consumer hardware.

In practice, developers can see 1.5x performance improvements compared to DirectML-based approaches, making it easier to integrate responsive AI-driven features into games and interactive experiences. Access the plugin today.

NVIDIA Kimodo for motion generation

Image shows a person running and jumping via an NVIDIA Kimodo prompt
Figure 1. NVIDIA Kimodo generates motion for digital humans with detailed text prompts

NVIDIA Kimodo is a research project exploring a new approach to generating high-quality human motion for interactive applications. Built as a kinematic motion generation model, Kimodo can synthesize realistic 3D character animation from simple inputs such as text, keyframes, or trajectory constraints. Trained on a large dataset of high-quality 3D motion capture data, it is designed to produce natural, physically plausible motion while remaining responsive to developer input and control.

The image shows how joint constraints combined with text input fine-tune and direct the desired animation results.
Figure 2. The use of joint constraints to art-direct the look of the motion generated from the text input, allowing more precise control of the generated performance

For game developers, Kimodo highlights a path toward more scalable animation workflows. Instead of relying entirely on authored or captured animation clips, developers can generate motion data to prototype behaviors, create variations, or fill gaps between animations. 

A person walking, with motion tracked on a timeline
Figure 3. NVIDIA Kimodo provides fine-grained control over motion by adding constraints to tracks on the timeline

This can help reduce iteration time and expand the range of character movement in a project, while maintaining consistency with existing animation systems. Learn more about Kimodo.
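Kimodo is a research model without a public API, but the "fill gaps between animations" idea above can be sketched in plain Python: given two keyframe poses (here just flat lists of joint values), generate eased in-between frames. This is a generic interpolation sketch for intuition, not Kimodo's actual method:

```python
def fill_gap(pose_a, pose_b, num_frames):
    """Generate num_frames in-between poses from pose_a to pose_b.

    Poses are flat lists of joint values. Uses smoothstep easing so the
    motion accelerates out of pose_a and decelerates into pose_b.
    """
    frames = []
    for i in range(1, num_frames + 1):
        t = i / (num_frames + 1)      # linear parameter in (0, 1)
        t = t * t * (3.0 - 2.0 * t)   # smoothstep easing
        frames.append([a + (b - a) * t for a, b in zip(pose_a, pose_b)])
    return frames

# Five in-between frames between two 3-joint poses
tween = fill_gap([0.0, 0.0, 0.0], [1.0, 2.0, -1.0], 5)
```

A learned model like Kimodo would produce far richer motion than this, of course; the point is only the workflow shape: sparse constraints in, dense per-frame pose data out.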

A guide to using ComfyUI to produce pre-production assets

Game development teams today produce more pre-production assets, in more formats, than ever before. Generative AI can accelerate that work—compressing tasks that once took hours of manual effort into automated, repeatable pipelines while maintaining creative control.

ComfyUI is an open-source, node-based workflow platform that runs locally on all NVIDIA RTX GPU platforms. It connects image generation, video synthesis, 3D object generation and language models into pipelines that teams own, customize, and extend — without cloud dependencies or data leaving the workstation.
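For readers unfamiliar with ComfyUI's node graphs: a workflow is ultimately a JSON graph in which each node declares a class_type and wires its inputs to other nodes' output slots. A minimal text-to-image graph in ComfyUI's API (prompt) format might look like the following; the node IDs, checkpoint filename, prompts, and parameter values are placeholders:

```json
{
  "1": {"class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "example_model.safetensors"}},
  "2": {"class_type": "CLIPTextEncode",
        "inputs": {"clip": ["1", 1], "text": "concept art, ruined castle"}},
  "3": {"class_type": "CLIPTextEncode",
        "inputs": {"clip": ["1", 1], "text": "blurry, low quality"}},
  "4": {"class_type": "EmptyLatentImage",
        "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
  "5": {"class_type": "KSampler",
        "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                   "latent_image": ["4", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                   "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
  "6": {"class_type": "VAEDecode",
        "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
  "7": {"class_type": "SaveImage",
        "inputs": {"images": ["6", 0], "filename_prefix": "pre_production"}}
}
```

References like ["1", 0] mean "output slot 0 of node 1"; the checkpoint loader exposes the model, CLIP, and VAE on slots 0, 1, and 2, which is what the sampler, text encoders, and decoder consume.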

ComfyUI workflow visualization showing a node graph for image inpainting. The flow connects an input image, a mask editor, a Qwen Image Edit model, and an output image node to demonstrate object removal and background fill.
Figure 4. Example of ComfyUI workflow to remove object from a photo

We’ve put together a guide that walks creators through three production-ready workflows from the GenAI Creator Toolkit, adapted from the NVIDIA GTC 2026 Deep Learning Institute course “Create Generative AI Workflows for Design and Visualization in ComfyUI.” Each workflow is standalone, runs on any NVIDIA RTX GPU with 16 GB or more of VRAM, and works on both Windows and Linux. 

Check out the latest sessions in RTX neural rendering and AI 

John Spitzer on stage presenting at GDC 2026
Figure 5. John Spitzer of NVIDIA presenting his Driving Innovations and RTX Advances talk at GDC

At the GDC Festival of Gaming and GTC 2026, we hosted more than a dozen sessions highlighting how RTX neural rendering and AI are defining the next era of gaming. Some standout sessions include: 

- Driving Innovation and RTX Advances with John Spitzer, VP of Developer and Performance Technology
- The Future of Path Tracing | Best Practices, Optimizations & Future Standards
- What's New in RTX for Unreal Engine 5
- Real-Time Path Tracing in RE ENGINE for Resident Evil Requiem and PRAGMATA
- Supercharging Godot Development: Rapid Path Tracing Integration with Cursor

Access our GDC sessions and GTC sessions on YouTube. 

Path traced hair in Unreal Engine 5.7 

Image shows digital hair rendered in Unreal Engine 5
Figure 6. RTX Hair Technology enabled on a MetaHuman hair groom asset in the NVIDIA RTX Branch of Unreal Engine 5

Watch a recording of our Level Up with NVIDIA webinar focused on Path Traced Hair in Unreal Engine 5.7. The webinar highlighted the latest updates in the NVIDIA RTX Branch of Unreal Engine 5.7.2, including new features such as path-traced hair, along with opportunities for optimization and image quality improvements when using RTX technologies.

We also covered a recap of what the team saw at GDC, along with a recap of the NVIDIA “State of RTX Rendering in Unreal Engine 5” presentation. 

Resources for game developers

See our full list of game developer resources here, and follow us to stay up to date with the latest NVIDIA game development news:

- Join the NVIDIA Developer Program (select gaming as your industry)
- Follow us on social: X, LinkedIn, Facebook, and YouTube
- Join our Discord community
