Dataset Viewer
Auto-converted to Parquet
Columns (string lengths):
- en_url: 47–153
- en_title: 12–118
- en_content: 16–19.4k
- zh_url: 53–159
- zh_title: 4–70
- zh_content: 16–10.5k
https://developer.nvidia.com/blog/ai-for-climate-energy-and-ecosystem-resilience-at-nvidia-gtc-2025/
AI for Climate, Energy, and Ecosystem Resilience at NVIDIA GTC 2025
From mitigating climate change to improving disaster response and environmental monitoring, AI is reshaping how we tackle critical global challenges. Advancements in fast, high-resolution climate forecasting, real-time monitoring, and digital twins are equipping scientists, policy-makers, and industry leaders with data-driven tools to understand, plan for, and respond to a warming planet. At NVIDIA GTC 2025 , running March 17–21, thought leaders, scientists, developers, and innovators will highlight how AI is helping shape a more sustainable and resilient future. The following sessions showcase AI’s role in climate forecasting, disaster mitigation, and conservation efforts, helping communities adapt to an increasingly unpredictable world. Strengthening extreme weather predictions and disaster response As global warming intensifies, and extreme weather events become more severe and frequent, communities need faster and more precise natural disaster predictions and response strategies. AI is improving flood, wildfire, and hurricane modeling, enabling earlier warnings and more effective mitigation efforts. Using large-scale weather simulations, satellite data analysis, and real-time predictive insights, AI is helping emergency responders and decision-makers minimize damage, improve disaster resilience, and keep communities safe. Sessions Transform Natural Catastrophe Risk Simulations with Advanced Computational Tools AXA, AWS, and NVIDIA use Earth-2 simulations to model extreme weather events at unprecedented scale and precision. These tools help insurers, urban planners, and governments prepare for disasters by improving risk estimation and resilience planning, reducing the financial and societal impact of natural disasters. Boosting Earth System Model Outputs Using Exascale Climate Emulators Researchers at KAUST and Saint Louis University use exascale climate emulators powered by NVIDIA GPUs to accelerate and refine earth system model outputs. Achieving ultra-high spatial resolution (3.5 km), these models enable more accurate storm and climate simulations, improving extreme weather predictions, and helping emergency responders, insurers, and policymakers improve disaster response planning and climate resilience efforts. Harnessing AI for Advanced Flood Risk Modelling and Mitigation Strategies JBA Risk Management uses AI-driven weather models, including Spherical Fourier Neural Operators, to simulate storm seasons with greater accuracy. Using NVIDIA NIM, these models improve flood risk assessment, disaster response planning, and infrastructure investment decisions, all while reducing computational costs. Near-Real-Time Fire Detection Leveraging Edge AI in Space: Transforming Thermal Earth Observation with NVIDIA Wildfires require rapid response to minimize devastation. OroraTech’s use of NVIDIA Jetson technology onboard CubeSats delivers fire detection within 60 seconds, providing first responders with critical real-time data to deploy resources more effectively. Autonomous Systems and Remote Sensing for Better Earth Data Autonomous aircraft equipped with AI are revolutionizing environmental monitoring by collecting high-resolution data at scale. MIT researchers showcase how these low-cost, long-range systems gather critical data for precision agriculture, disaster response, and environmental assessments, providing actionable insights at scale. 
Boosting climate forecasting for energy and infrastructure planning Accurate, long-term climate forecasting is essential for guiding infrastructure investments, optimizing energy grids, and enhancing sustainability planning. AI-powered models make climate projections faster and more precise, guiding solar energy integration, climate-resilient infrastructure development, and sustainability strategies. These high-resolution, data-driven insights help city planners and decision-makers anticipate evolving conditions for a more resilient future. Sessions The Role of AI and Accelerated Computing in Understanding and Mitigating Urban Climate Change Researchers are using AI, digital twins, and accelerated computing to model rising temperatures, air pollution, and extreme weather in cities. This session explores how generative AI and machine learning analyze satellite data, IoT sensors, and social insights to create high-resolution simulations of urban heat islands and pollution patterns. Experts will discuss how these insights help guide climate-resilient infrastructure, energy efficiency, and targeted adaptation strategies while addressing challenges like computational efficiency and equitable access to AI-driven solutions. Enhancing Photovoltaic Power Predictions with High-Resolution Weather Forecasting from NVIDIA Earth-2 NVIDIA Earth-2 is revolutionizing solar energy forecasting with ultra-precise, AI-driven weather predictions. In collaboration with GCL and Peking University, researchers show how time series models and high-resolution weather data improve photovoltaic power forecasts, enhancing grid stability, and improving renewable energy planning for power providers and policymakers. Applying AI Weather Models with NVIDIA Earth-2 NVIDIA Earth-2 AI-powered forecasting models generate high-resolution weather predictions at a fraction of the cost and time of traditional numerical models. This training lab explores how AI-driven downscaling techniques improve forecasting accuracy for industries such as energy and agriculture, providing more accurate forecasting and better decision-making across critical sectors. Advancing AI-driven environmental monitoring and conservation AI is transforming environmental monitoring, conservation, and ecosystem management with advanced digital twin technology and autonomous systems. From high-resolution coral reef modeling to large-scale ecosystem assessments, these innovations provide scientists and conservationists with critical insights to guide conservation strategies and protect biodiversity. Session Exploring Earth’s Oceans: Using Digital Twins to Drive Digital Ocean Collaboration Oceans regulate climate and support biodiversity, but their complexity makes them challenging to study. MITRE uses NVIDIA Omniverse to create digital twins of ocean systems, enabling real-time simulations and predictive modeling. These tools foster collaboration among scientists, policymakers, and educators to improve marine resource management, drive conservation efforts, and bolster climate resilience. In-person posters Photo-Realistic 3D Digital Twin to Enhance Understanding of the Great Barrier Reef ​​AI-powered 3D digital twins are advancing how researchers model and monitor coral reef ecosystems. Using Reef-NeRF and Reef-3DGS, scientists can create highly detailed reconstructions to track coral health, measure structural changes, and assess the impacts of climate change. 
These tools provide conservationists and policymakers with critical data to inform reef recovery strategies and improve long-term conservation efforts Mangrove Simulation Predicts Carbon Sequestration Solutions Mangrove forests are a key solution to carbon capture and climate mitigation, but effective restoration requires precise monitoring and management. ID Water Co., Ltd. is using AI-powered irrigation automation and GPU-driven carbon sink modeling to improve mangrove reforestation efforts. These models improve survival rates, optimize carbon sequestration, and address verification challenges, making large-scale restoration more feasible and impactful. Revolutionizing Antarctic Flora Monitoring with AI and Drones AI-powered drones and hyperspectral imaging are enabling high-precision mapping of Antarctic vegetation. Using NVIDIA GPUs, researchers can detect moss and lichen with over 99% accuracy, providing key insights into climate-driven ecosystem changes while reducing the need for invasive field surveys in this fragile ecosystem. Join our global community of developers, scientists, business leaders, and innovators at NVIDIA GTC 2025 to discover how AI drives solutions to our most complex challenges. From NVIDIA CEO Jensen Huang’s must-see keynote to over 900 sessions, 300+ exhibits, hands-on technical training, and exclusive networking events, GTC offers a firsthand look at AI’s real-world impact. The session catalog is open—start building your agenda today.
https://developer.nvidia.com/zh-cn/blog/ai-for-climate-energy-and-ecosystem-resilience-at-nvidia-gtc-2025/
NVIDIA GTC 2025 上的人工智能促进气候、能源和生态系统复原力
从减缓气候变化到改进灾害响应和环境监测,AI 正在重塑我们应对重大全球挑战的方式。快速、高分辨率的气候预报、实时监控和数字孪生技术的进步为科学家、政策制定者和行业领导者提供了数据驱动的工具,帮助他们了解、规划和应对一个变暖的星球。 在 3 月 17 日至 21 日举行的 NVIDIA GTC 2025 大会上,思想领袖、科学家、开发者和创新者将重点介绍 AI 如何帮助塑造更具可持续性和韧性的未来。以下会议展示了 AI 在气候预测、灾难缓解和保护工作中发挥的作用,帮助社区适应日益不可预测的世界。 加强极端天气预测和灾害响应 随着全球变暖加剧,极端天气事件变得更加严重和频繁,社区需要更快、更精确的自然灾害预测和响应策略。AI 正在改进洪水、野火和飓风建模,从而实现更早的警报和更有效的缓解措施。借助大规模天气模拟、卫星数据分析和实时预测性见解,AI 正在帮助应急响应人员和决策者尽可能减少损失、提高抗灾能力,并确保社区安全。 会议 借助高级计算工具转变自然灾害风险模拟 AXA、AWS 和 NVIDIA 使用 Earth-2 模拟以前所未有的规模和精度对极端天气事件进行建模。这些工具通过改进风险估计和恢复能力规划,减少自然灾害的金融和社会影响,帮助保险公司、城市规划人员和政府做好灾害准备。 使用百亿亿级 (Exascale) 气候模拟器提升地球系统模型的输出 KAUST 和圣路易斯大学的研究人员使用由 NVIDIA GPUs 提供支持的百亿亿级 (Exascale) 气候模拟器来加速和优化地球系统模型的输出。这些模型可实现超高的空间分辨率 (3.5 公里),从而能够更准确地模拟风暴和气候,改进极端天气预测,并帮助应急响应人员、保险公司和政策制定者改进灾害应对规划和气候弹性工作。 将 AI 用于高级洪水风险建模和缓解策略 JBA Risk Management 使用 AI 驱动的天气模型 (包括 Spherical Fourier Neural Operators) 更准确地模拟风暴季。借助 NVIDIA NIM,这些模型可改善洪水风险评估、灾害应对规划和基础设施投资决策,同时降低计算成本。 在太空中利用边缘 AI 进行近乎实时的火灾检测:借助 NVIDIA 改变热地球观测方式 野火需要快速响应,以尽可能减少破坏。OroraTech 在 CubeSats 上使用 NVIDIA Jetson 技术,可在 60 秒内完成火灾检测,从而为急救人员提供关键的实时数据,以便更有效地部署资源。 利用自主系统和遥感获取更好的地球数据 配备 AI 的自主飞机正在大规模收集高分辨率数据,从而彻底改变环境监测。麻省理工学院的研究人员展示了这些低成本的远程系统如何为精准农业、灾害响应和环境评估收集关键数据,并大规模提供可行的见解。 提升气候预测能力以加强能源和基础设施规划 准确的长期气候预测对于指导基础设施投资、优化电网和增强可持续发展规划至关重要。AI 驱动的模型能够更快、更精确地进行气候预测,为太阳能集成、气候弹性基础设施开发和可持续发展策略提供指导。这些由数据驱动的高分辨率见解可帮助城市规划人员和决策者预测不断变化的条件,打造更具弹性的未来。 会议 AI 和加速计算在了解和减缓城市气候变化方面的作用 研究人员正在利用 AI、数字孪生和加速计算对城市中的气温升高、空气污染和极端天气进行建模。此会议将探讨生成式 AI 和机器学习如何分析卫星数据、物联网传感器和社会见解,以创建城市热岛和污染模式的高分辨率模拟。专家们将讨论这些见解如何帮助指导适应气候变化的基础设施、能效和有针对性的适应战略,同时应对计算效率和公平获取 AI 驱动的解决方案等挑战。 借助 NVIDIA Earth-2 的高分辨率天气预报增强太阳能发电预测 NVIDIA Earth-2 通过 AI 驱动的超精确天气预测,正在彻底改变太阳能预测。研究人员与 GCL 和北京大学合作,展示了时间序列模型和高分辨率天气数据如何改善太阳能发电预测、增强电网稳定性,以及如何改善电力供应商和政策制定者的可再生能源规划。 将 AI 天气模型与 NVIDIA Earth-2AI 驱动的预测模型结合使用 ,生成高分辨率天气预测,所需的成本和时间远低于传统数值模型。此训练实验室将探讨 AI 驱动的降比例技术如何提高能源和农业等行业的预测准确性,从而为关键领域提供更准确的预测和更好的决策。 推进 AI 驱动的环境监测和保护 AI 正在利用先进的数字孪生技术和自主系统,改变环境监测、保护和生态系统管理。从高分辨率珊瑚礁建模到大规模生态系统评估,这些创新为科学家和自然保护主义者提供了重要见解,以指导保护策略和保护生物多样性。 会议 探索地球的海洋:使用数字孪生推动数字海洋协作海洋调节气候并支持生物多样性 ,但其复杂性使研究这些海洋具有挑战性。MITRE 使用 NVIDIA Omniverse 创建海洋系统的数字孪生,实现实时模拟和预测建模。这些工具促进了科学家、政策制定者和教育工作者之间的协作,以改善海洋资源管理、推动保护工作,并增强气候恢复能力。 线下海报 逼真的 3D 数字孪生增强对大堡礁的理解 AI 驱动的 3D 数字孪生正在推进研究人员建模和监测珊瑚礁生态系统的方式。借助 Reef-NeRF 和 Reef-3DGS,科学家可以创建高度精细的重建模型,以追踪珊瑚健康状况、测量结构变化并评估气候变化的影响。这些工具为环保人士和政策制定者提供关键数据,以便制定珊瑚礁恢复策略并改进长期保护工作 Mangrove Simulation 预测碳封存解决方案 红树林是碳捕获和气候减缓的关键解决方案,但有效的恢复需要精确的监控和管理。ID Water Co.,Ltd.正在使用由 AI 提供动力支持的喷洒自动化和 GPU 驱动的碳汇建模来改进红树林再造工作。这些模型可提高存活率、优化碳封存并解决验证难题,从而提高大规模修复的可行性和成效。 借助 AI 和无人机革新南极植物监测 AI 赋能的无人机和高光谱成像技术可实现对南极植被的高精度绘图。借助 NVIDIA GPU,研究人员可以以超过 99%的准确率检测和,从而对气候驱动的生态系统变化提供关键见解,同时减少在这个脆弱的生态系统中进行侵入性实地调查的需求。 在 NVIDIA GTC 2025 大会上,加入由开发者、科学家、业务领袖和创新者组成的全球社区,了解 AI 如何为我们面临的复杂挑战提供解决方案。 从 NVIDIA 首席执行官 Jensen Huang 不容错过的主题演讲 ,到 900 多场会议、300 多场展览、实操技术培训和独家交流活动,GTC 让您亲身体验 AI 对现实世界的影响。 会议目录 现已开放,请立即开始构建您的议程。
https://developer.nvidia.com/blog/automating-gpu-kernel-generation-with-deepseek-r1-and-inference-time-scaling/
Automating GPU Kernel Generation with DeepSeek-R1 and Inference Time Scaling
As AI models extend their capabilities to solve more sophisticated challenges, a new scaling law known as test-time scaling or inference-time scaling is emerging. Also known as AI reasoning or long-thinking, this technique improves model performance by allocating additional computational resources during inference to evaluate multiple possible outcomes and then selecting the best one. This enables AI to strategize and systematically solve complex problems in a similar fashion to how humans dissect complex problems and solve them individually to arrive at a final solution. In this post, we talk about an experiment done by NVIDIA engineers who used one of the newest open-source models, the DeepSeek-R1 model, together with additional computing power during inference to solve a complex problem. The experiment was to automatically generate GPU attention kernels that were numerically correct and optimized for different flavors of attention without any explicit programming. The results turned out to be better than the optimized kernels developed by skilled engineers in some cases. The need for optimized attention kernels and associated challenges Attention is a key concept that revolutionized the development of large language models (LLMs). It’s a powerful mechanism that enables AI models to focus selectively on the most relevant parts of the input when performing tasks. By focusing on important information, the attention operation helps the models make better predictions and find hidden patterns in the data. The computational complexity of the attention operation grows quadratically in relation to the input sequence length. This motivates the need for an optimized lower-level implementation (that is, a GPU kernel) to prevent runtime errors arising from simple implementations (for example, out-of-memory errors) and for computational efficiency. There are multiple variants of attention (causal, relative positional embeddings, alibi, and so on), and engineers often must use a combination of these variants for a given task. Multi-modal models (for example, vision transformers) introduce an additional layer of challenges, as they require specialized attention mechanisms (such as Spatial Neighborhood Attention) for maintaining the spatio-temporal information often encountered in computer vision, video generation models, and so on. Figure 1. Neighborhood attention on 2D inputs Creating an optimized GPU kernel for attention takes a lot of skill and time, even for experienced software engineers. Recent LLMs like DeepSeek-R1 have shown a lot of promise in code generation tasks, but they still face challenges creating optimized code on the first try. This makes it necessary to use other strategies at inference time to generate optimized code. The following prompt is sample user input for a relative positional embeddings attention kernel. Please write a GPU attention kernel to support relative position encodings. Implement the relative positional encoding on the fly within the kernel. The complete code should be returned, including the necessary modifications. Use the following function to compute the relative positional encoding: def relative_positional(score, b, h, q_idx, kv_idx): return score + (q_idx - kv_idx) When implementing the kernel, keep in mind that a constant scaling factor 1.44269504 should be applied to the relative positional encoding due to qk_scale = sm_scale * 1.44269504. 
The PyTorch reference does not need to scale the relative positional encoding, but in the GPU kernel, use: qk = qk * qk_scale + rel_pos * 1.44269504 Please provide the complete updated kernel code that incorporates these changes, ensuring that the relative positional encoding is applied efficiently within the kernel operations. LLMs can occasionally produce hallucinated code or mix syntax from different languages or frameworks, causing immediate code errors or inefficiencies. Computing the optimal GPU thread mapping is also a non-trivial and challenging task, often requiring iterative refinement to achieve a correct and efficient kernel. Inference-time scaling for generating optimized GPU kernels To get the best results with optimized attention kernels, NVIDIA engineers created a new workflow that includes a special verifier along with the DeepSeek-R1 model during inference in a closed-loop fashion for a predetermined duration. Figure 2. Inference-time scaling with DeepSeek-R1 on the NVIDIA Hopper platform The workflow is first initialized by a manual prompt, and the DeepSeek-R1 model generates the GPU code (that is, the kernel) in the first pass. The verifier runs on an NVIDIA H100 GPU. It analyzes the generated kernel and creates new prompts that are provided as input to the DeepSeek-R1 model. This closed-loop approach improves the code generation process by guiding it in a different way each time. The team found that letting this process continue for 15 minutes resulted in an improved attention kernel. Figure 3. Performance of automatically generated optimized attention kernels with flex attention This workflow produced numerically correct kernels for 100% of Level-1 problems and 96% of Level-2 problems, as tested by Stanford’s KernelBench benchmark. The Level-1 solving rate in KernelBench refers to the numerical-correctness metric used to evaluate the ability of LLMs to generate efficient GPU kernels for specific computational tasks. This test is part of a series of challenges to test the latest LLMs’ abilities in GPU programming. Figure 4 shows how the inference-time budget affects the agent’s solving rate. Allocating more than 10 minutes per problem in the Level-1 category enables the workflow to produce numerically correct code for most of the 100 problems. Figure 4. Inference-time scaling results in optimized GPU kernels Optimized GPU kernels on DeepSeek-R1 These results show how you can use the latest DeepSeek-R1 model to generate better GPU kernels by using more computing power during inference time. This is still a new research area with early results on a promising approach that automatically generates effective attention kernels. While we are off to a good start, more work is needed to generate better results consistently for a wider variety of problems. We’re excited about the recent developments in DeepSeek-R1 and its potential. For more information or to get started, see the DeepSeek-R1 NIM microservice, now available on build.nvidia.com.
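The post describes this generate-and-verify loop only at a high level. Purely as an illustrative sketch (not NVIDIA's actual implementation), the closed-loop workflow could be wired together in Python roughly as follows. Here `generate_kernel` and `verify_kernel` are hypothetical stand-ins for the DeepSeek-R1 call and the H100-side correctness check, while the PyTorch reference uses `flex_attention` with the same `relative_positional` score modifier shown in the sample prompt.

```python
import time
import torch
from torch.nn.attention.flex_attention import flex_attention


def relative_positional(score, b, h, q_idx, kv_idx):
    # Score modifier from the sample prompt: relative positional encoding on the fly.
    return score + (q_idx - kv_idx)


def reference_attention(q, k, v):
    # PyTorch reference output the verifier can compare candidate kernels against.
    return flex_attention(q, k, v, score_mod=relative_positional)


def generate_kernel(prompt: str) -> str:
    # Hypothetical call to a DeepSeek-R1 endpoint that returns candidate kernel source.
    raise NotImplementedError


def verify_kernel(kernel_src: str, q, k, v) -> tuple[bool, str]:
    # Hypothetical verifier: compile and run the candidate on the GPU, compare its
    # output with reference_attention(q, k, v), and return (passed, feedback).
    raise NotImplementedError


def closed_loop(initial_prompt: str, budget_seconds: float = 15 * 60):
    q, k, v = (torch.randn(1, 8, 1024, 64, device="cuda") for _ in range(3))
    prompt, best = initial_prompt, None
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline:
        candidate = generate_kernel(prompt)             # DeepSeek-R1 generation pass
        passed, feedback = verify_kernel(candidate, q, k, v)
        if passed:
            best = candidate                            # keep the latest passing kernel
        prompt = initial_prompt + "\n\n" + feedback     # steer the next generation
    return best
```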
https://developer.nvidia.com/zh-cn/blog/automating-gpu-kernel-generation-with-deepseek-r1-and-inference-time-scaling/
使用 DeepSeek-R1 和推理时间缩放实现自动化 GPU 内核生成
随着 AI 模型扩展其功能以解决更复杂的挑战,一种称为“ 测试时扩展 ”或“ 推理时扩展 ”的新扩展法则正在出现。该技术也称为 AI 推理 或 长时思考 技术,通过在推理过程中分配额外的计算资源来评估多种可能的结果,然后选择最佳结果 (neural network),从而提高模型性能。这使得 AI 能够以类似于人类剖析复杂问题并单独解决这些问题以达成最终解决方案的方式,制定战略并系统化地解决复杂问题。 在本文中,我们将介绍 NVIDIA 工程师完成的一项实验,他们在推理过程中使用最新的开源模型之一 DeepSeek-R1 模型以及额外的计算能力来解决复杂的问题。该实验旨在自动生成 GPU 注意力内核,这些内核在数值上是正确的,并针对不同的注意力类型进行了优化,而无需任何显式编程。 事实证明,在某些情况下,最终结果优于由技术精湛的工程师开发的优化内核。 优化注意力内核的需求和相关挑战 注意力是一个关键概念,彻底改变了大语言模型(LLM)的发展。这是一种功能强大的机制,可让 AI 模型在执行任务时,有选择地专注于与输入内容最相关的部分。通过聚焦于重要信息,注意力运算可帮助模型做出更好的预测,并找到数据中隐藏的模式。 相对于输入序列长度,注意力运算的计算复杂性呈正交增长。这促使我们需要开发经过优化的低级实现 (即 GPU 内核),以防止简单实现产生的运行时错误 (例如内存不足的错误),并提高计算效率。 注意力有多种变体 (因果关系、相对位置嵌入、不在场证明等),工程师通常必须将这些变体的组合用于给定任务。 多模态模型 (例如,vision transformers) 带来了额外的一层挑战,因为它们需要专门的注意力机制 (Spatial Neighborhood Attention) 来维护计算机视觉、视频生成模型等领域中经常遇到的时空信息。 图 1. 邻域注意力在 2D 输入上的应用 创建经优化的 GPU 内核以供关注需要大量技能和时间,即使对于经验丰富的软件工程师而言也是如此。 最近的 LLMs(如 DeepSeek-R1)在代码生成任务方面表现出了很大的前景,但在第一次尝试创建优化代码时,它们仍然面临挑战。这使得有必要在推理时使用其他策略来生成优化的代码。 以下提示是用户输入相对位置嵌入注意力核函数的示例。 Please write a GPU attention kernel to support relative position encodings. Implement the relative positional encoding on the fly within the kernel. The complete code should be returned, including the necessary modifications. Use the following function to compute the relative positional encoding: def relative_positional(score, b, h, q_idx, kv_idx): return score + (q_idx - kv_idx) When implementing the kernel, keep in mind that a constant scaling factor 1.44269504 should be applied to the relative positional encoding due to qk_scale = sm_scale * 1.44269504. The PyTorch reference does not need to scale the relative positional encoding, but in the GPU kernel, use: qk = qk * qk_scale + rel_pos * 1.44269504 Please provide the complete updated kernel code that incorporates these changes, ensuring that the relative positional encoding is applied efficiently within the kernel operations. LLM 偶尔会产生来自不同语言或框架的幻影代码或混合语法,从而立即导致代码错误或效率低下。计算最佳 GPU 线程映射也并非易事,也是一项具有挑战性的任务,通常需要进行迭代优化才能实现正确高效的内核。 用于生成经过优化的 GPU 内核的推理时间扩展 为了通过优化的注意力内核获得最佳结果,NVIDIA 工程师创建了一个新的工作流程,其中包括一个特殊的验证器以及 DeepSeek-R1 模型,在预先设定的时间内以闭环方式进行推理。 图 2、在 NVIDIA Hopper 平台上使用 DeepSeek-R1 实现推理时间扩展 该工作流程首先通过手动提示进行初始化,然后 DeepSeek-R1 模型会在首次通道中生成 GPU 代码(即核函数)。该验证器在 NVIDIA H100 GPU 上运行。它会分析生成的核函数并创建新的提示,这些提示以 input 的形式提供给 DeepSeek-R1 模型。 这种闭环方法每次都以不同的方式指导代码生成过程,从而改进代码生成过程。该团队发现,让此过程持续 15 分钟可以改进注意力核函数。 图 3、具有 Flex Attention 的自动生成优化注意力内核的性能 此工作流程为 100%的 1 级问题和 96%的 2 级问题生成了数值正确的内核,测试对象为 斯坦福大学的 KernelBench 基准测试。* KernelBench 中的 1 级求解率是指用于评估 LLM 为特定计算任务生成高效 GPU 内核的能力的数字正确指标。本次测试属于一系列挑战,旨在测试最新 LLM 在 GPU 编程中的能力。 图 4 显示了推理时间预算如何影响智能体的求解率。在 Level-1 类别中为每个问题分配超过 10 分钟的时间,使工作流程能够为 100 个问题中的大多数生成正确的数字代码。 图 4、在优化的 GPU 内核中实现推理 – 时间扩展 DeepSeek-R1 上经过优化的 GPU 内核 这些结果展示了如何使用最新的 DeepSeek-R1 模型,通过在推理期间使用更强大的计算能力来提供更好的 GPU 内核。这仍然是一个新的研究领域,在自动生成有效注意力内核的前景良好的方法方面取得了早期成果。 虽然我们已经有了一个良好的开端,但我们需要做更多的工作,以便为更广泛的问题持续提供更好的结果。我们对 DeepSeek-R1 的最新进展及其潜力感到兴奋。 如需了解更多信息或入门,请参阅 DeepSeek-R1 NIM 微服务 (现已在 build.nvidia.com 上提供)。
https://developer.nvidia.com/blog/ai-foundation-model-enhances-cancer-diagnosis-and-tailors-treatment/
AI Foundation Model Enhances Cancer Diagnosis and Tailors Treatment
A new study and AI model from researchers at Stanford University are streamlining cancer diagnostics, treatment planning, and prognosis prediction. Named MUSK (Multimodal transformer with Unified maSKed modeling), the research aims to advance precision oncology, tailoring treatment plans to each patient based on their unique medical data. “Multimodal foundation models are a new frontier in medical AI research,” said Ruijiang Li, an associate professor of radiation oncology and study senior author. “Recently, vision–language foundation models have been developed for medicine, particularly in the field of pathology. However, existing studies use off-the-shelf foundation models that require paired image–text data for pretraining. Despite extensive efforts that led to the curation of 1M pathology image–text pairs, it’s still insufficient to fully capture the diversity of the entire disease spectrum.” Oncologists rely on many data sources when considering a patient’s condition and planning optimal treatments. However, integrating and interpreting complex medical data remains difficult for doctors and AI models. The study, recently published in Nature, highlights how MUSK could help doctors make more accurate and informed decisions while also solving this long-standing challenge in medical AI. Using deep learning, MUSK processes clinical text data (such as doctor’s notes) and pathology images (like histology slides) to identify patterns that may not be immediately obvious to doctors, leading to better clinical insights. To do so, it uses a two-step multimodal transformer model. First, it learns from large amounts of unpaired data, pulling useful features from the text and images. Then it fine-tunes its understanding of the data by linking paired image–text data, which helps it recognize different types of cancer, predict biomarkers, and suggest effective treatment options. The researchers pretrained the AI model on one of the biggest datasets in the field, using 50M pathology images from 11,577 patients with 33 tumor types and 1B pathology-related text data. According to Jinxi Xiang, study lead author and postdoctoral scholar in radiation physics, the pretraining was conducted over 10 days using 64 NVIDIA V100 Tensor Core GPUs across eight nodes, enabling MUSK to process vast amounts of pathology images and clinical text efficiently. A secondary pretraining phase and ablation studies used NVIDIA A100 80 GB Tensor Core GPUs. The researchers also used NVIDIA RTX A6000 GPUs for evaluating downstream tasks. The framework was accelerated with the NVIDIA CUDA and NVIDIA cuDNN libraries for optimized performance. When tested on 23 pathology benchmarks, MUSK outperformed existing AI models in several key areas. It excelled at matching pathology images with the corresponding medical text, making it more effective at gathering relevant patient information. It also interpreted pathology-related questions, such as identifying a cancerous area or predicting biomarker presence, with 73% accuracy. Figure 1. An example of the visual question-answering MUSK can perform It improved detection and classification for cancer subtypes including breast, lung, and colorectal cancer by up to 10%, which could help with early diagnosis and treatment planning. It also detected breast cancer biomarkers with an AUC (a measure of model accuracy) of 83%. 
Additionally, MUSK reliably predicted cancer survival outcomes 75% of the time, and which lung and gastro-esophageal cancers would respond to immunotherapy with 77% accuracy. This is a significant improvement over standard clinical biomarkers with an accuracy of only 60-65%. “One striking finding is that AI models that integrate multi-modal data consistently outperform those based on imaging or text data alone, highlighting the power of a multimodal approach,” Li said. “The true value of MUSK lies in its ability to leverage large-scale unpaired image and text data for pretraining, which is a substantial increase over existing models that require paired data.” A core strength of the research is that it can adapt across different clinical settings with little training. This could improve efficiency in oncology workflows and help doctors diagnose cancer faster while tailoring treatments for better patient outcomes. Their future work will focus on validating the model in multi-institution cohorts of patients from diverse populations and for high-stakes applications such as treatment decision-making. The researchers note that prospective validation in clinical trials will be required for regulatory approval. “We are also working on an extension of the MUSK approach to digital pathology to other types of data such as radiology images and genomic data,” said Li. The researchers’ work, including installation instructions, model weights, evaluation code, and sample data is available on GitHub .
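The post describes the training recipe only in outline; the published code is in the GitHub repository linked above. Purely as a schematic sketch of that two-step idea (masked pretraining on unpaired data, then linking paired image–text data), and not MUSK's actual implementation, the two phases could be organized as below, where the model's loss methods and the data iterators are hypothetical placeholders.

```python
def pretrain_on_unpaired(model, image_batches, text_batches, optimizer):
    # Step 1: unified masked modeling on unpaired pathology images and clinical text,
    # so the model learns useful features without needing image-text pairs.
    for images, tokens in zip(image_batches, text_batches):
        loss = model.masked_image_loss(images) + model.masked_text_loss(tokens)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()


def align_on_paired(model, paired_batches, optimizer):
    # Step 2: fine-tune by linking paired image-text data so both modalities share a
    # representation useful for subtyping, biomarker prediction, and prognosis.
    for images, tokens in paired_batches:
        loss = model.image_text_alignment_loss(images, tokens)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```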
https://developer.nvidia.com/zh-cn/blog/ai-foundation-model-enhances-cancer-diagnosis-and-tailors-treatment/
AI 基础模型增强癌症诊断并实现个性化治疗
斯坦福大学研究人员的一项新研究和 AI 模型正在简化癌症诊断、治疗规划和预后预测。这项名为 MUSK (Multimodal transformer with Unified maSKed modeling) 的研究旨在提高精准肿瘤学,根据每位患者独特的医疗数据为其定制治疗计划。 “多模态基础模型是医学 AI 研究的新领域,”放射肿瘤学副教授兼研究高级作者 Ruijiang LI 说。“最近,我们为医学领域开发了视觉语言基础模型,尤其是在病理学领域。但是,现有研究使用的现有基础模型需要配对的图像 – 文本数据进行预训练。尽管我们付出了大量努力,最终打造出 1M 病理图像文本对,但它仍然不足以完全捕捉整个疾病谱系的多样性。” 在考虑患者状况和规划最佳治疗方案时,肿瘤科医生依靠多种数据源。然而,医生和 AI 模型仍然难以集成和解释复杂的医疗数据。该研究最近发表在 Nature 杂志上,重点介绍了 MUSK 如何帮助医生做出更准确、更明智的决定,同时解决医学 AI 领域的长期挑战。 借助深度学习,MUSK 处理临床文本数据(如医生的笔记)和病理学图像(如组织学幻灯片),以识别医生可能无法立即发现的模式,从而获得更好的临床见解。 为此,它使用了两步多模态 transformer 模型。首先,它从大量未配对的数据中学习,从有用的文本和图像中提取特征。然后,它通过关联配对的图像-文本数据来微调对数据的理解,这有助于识别不同类型的癌症、预测生物标志物,并提出有效的治疗方案。 研究人员基于该领域最大的数据集之一预训练了 AI 模型,使用了来自 11,577 名患者的 50M 病理学图像,其中有 33 种肿瘤类型和 1B 病理学相关文本数据。 据辐射物理学研究主要作者兼博士后学者 Jinxi Xiang 称,预训练在 8 个节点上使用 64 个 NVIDIA V100 Tensor Core GPUs 进行了 10 天以上,使 MUSK 能够高效处理大量病理学图像和临床文本。二级预训练阶段和消融研究使用 NVIDIA A100 80GB Tensor Core GPUs 。研究人员还使用 NVIDIA RTX A6000 GPUs 评估下游任务。该框架通过 NVIDIA CUDA 和 NVIDIA cuDNN 库进行加速,以优化性能。 在 23 项病理学基准测试中,MUSK 在多个关键领域的表现优于现有 AI 模型。它擅长将病理学图像与相关的医学文本进行匹配,从而更有效地收集相关的患者信息。它还能解读与病理学相关的问题,例如识别癌变区域或预测生物标志物的存在,准确率高达 73%。 图 1. 例如,视觉问答 MUSK 可以执行 它将乳腺癌、肺癌和结直肠癌等癌症亚型的检测和分类能力提高了 10%,这有助于早期诊断和治疗规划。它还检测到乳腺癌生物标志物,AUC(用于衡量模型准确性的指标)为 83%。 此外,MUSK 有 75%的时间能够可靠预测癌症生存期结果,以及哪些肺癌和胃食道癌会对免疫治疗做出反应,准确率为 77%。与准确率仅为 60-65%的标准临床生物标志物相比,这是一个显著的改进。 “一个惊人的发现是,集成多模态数据的 AI 模型的性能始终优于仅基于图像或文本数据的 AI 模型,这凸显了多模态方法的强大功能,”Li 说。“MUSK 的真正价值在于它能够利用大规模的未配对图像和文本数据进行预训练,与需要配对数据的现有模型相比,这是一个巨大的提升。” 这项研究的一个核心优势是,它可以在几乎没有训练的情况下适应不同的临床环境。这可以提高肿瘤学工作流程的效率,并帮助医生更快地诊断癌症,同时定制治疗方案以改善患者的治疗效果。 他们未来的工作重点将是在来自不同人群的多机构患者群体中验证该模型,以及用于治疗决策等高风险应用。研究人员指出,临床试验中的前瞻性验证需要获得监管机构的批准。 “我们还致力于将 MUSK 方法扩展到数字病理学,包括放射学图像和基因组数据等其他类型的数据,”Li 说。 研究人员的工作(包括安装说明、模型权重、评估代码和样本数据) 可在 GitHub 上获取。
https://developer.nvidia.com/blog/cuda-toolkit-12-8-delivers-nvidia-blackwell-support/
CUDA Toolkit Now Available for NVIDIA Blackwell
The latest release of the CUDA Toolkit , version 12.8, continues to push accelerated computing performance in data sciences, AI, scientific computing, and computer graphics and simulation, using the latest NVIDIA CPUs and GPUs. This post highlights some of the new features and enhancements included with this release: NVIDIA Blackwell architecture support CUDA Graphs conditional nodes enhancements Blackwell CUTLASS kernels for large language models (LLMs) NVIDIA Nsight Developer Tools updates Math libraries updates cudaStreamGetDevice Compiler updates Accelerated Python updates Feature-complete architectures NVIDIA Blackwell architecture support CUDA Toolkit 12.8 is the first version of the Toolkit to support the NVIDIA Blackwell architecture across the entire suite of Developer Tools including performance tools and profilers, libraries, and compilers. Built with 208 billion transistors—more than 2.5x the number of transistors in NVIDIA Hopper GPUs—Blackwell is the largest GPU ever built. Key Blackwell capabilities supported include: Second-generation Transformer Engine through custom Tensor Core technology: Accelerates inference and training for LLMs and mixture-of-experts (MoE) models. Decompression: Accelerates performance on data analytics and data science pipelines using the latest compression formats such as LZ4, Snappy, and Deflate. Network interconnect: NVLink and NVLink Switches accelerate inter-GPU communications performance for trillion-parameter and multitrillion-parameter AI models. To learn more about the leading innovations in Blackwell, see the NVIDIA Blackwell Architecture Technical Brief . 2x faster CUDA Graphs with runtime kernel selection for lower latency inference With Blackwell, CUDA Graphs APIs continue to be the most efficient way to launch repeated invocations of sequences of GPU operations. CUDA Toolkit 12.8 introduces more enhancements to CUDA Graphs, including additional conditional node types. In many applications, having dynamic control over the execution of work in CUDA Graphs can increase performance and flexibility of graph launches. For example, an algorithm that involves iterating over a series of operations many times until the result converges below a certain threshold can now run wholly on the GPU without needing CPU control management, reducing overhead by as much as 2x. CUDA Toolkit 12.8 improves APIs for runtime control of conditional graph nodes. Conditional nodes contain segments of a graph that can execute, or be skipped, based on a condition to evaluate as the graph is running. Such segments can be evaluated once (an IF node), or repeatedly in a loop (a WHILE node). CUDA 12.8 adds support for two new types of conditional graph nodes: IF/ELSE combined nodes and SWITCH nodes. With the Blackwell architecture, we’ve improved LLM performance to benefit all reasoning models, including DeepSeek-R1. CUDA Graphs enhanced SWITCH and IF/ELSE support delivers 2x more performance for runtime kernel selection versus going back to the CPU for launch decision-making. Training : By reducing CPU dependency for kernel selection, training workloads sustain even more GPU Tensor Core throughput, resulting in higher Model FLOPs Utilization (MFU). This improves performance using the same GPU infrastructure, reducing time and cost to train. Inference : For next-generation reasoning models that make use of test-time compute, a high token generation rate is critical as each inference request can generate a vast number of tokens per query. 
CUDA 12.8 new stream API enables fewer calls back to the host CPU, reducing the time between one kernel finishing and the next one starting, increasing token generation rate. This results in more tokens generated in fixed time budget, helping models reason more and increasing intelligence. To learn more, see Dynamic Control Flow in CUDA Graphs with Conditional Nodes . Blackwell CUTLASS kernels for LLMs CUTLASS , since its 2017 debut, has been instrumental for researchers and developers implementing high-performance CUDA kernels on NVIDIA GPUs. By providing developers with comprehensive tools to design custom operations, such as GEMMs and Convolutions, CUTLASS has been critical for the development of hardware-aware algorithms, powering breakthroughs like FlashAttention that helped spark modern AI. With the release of CUTLASS 3.8—which supports CUDA 12.8—NVIDIA is extending support to the Blackwell architecture, enabling developers to harness next-generation Tensor Cores with support for all new data types. This includes new narrow precision MX formats and the NVIDIA-developed FP4 format, which increase compute throughput. Figure 1 shows CUTLASS can achieve up to 98% relative peak performance for Tensor Core operations. Figure 1. Blackwell CUTLASS GEMM performance relative to expected peak, delivering up to 98% of Blackwell peak performance For DeepSeek-V3 and DeepSeek-R1, grouped GEMMs make up a large portion of the MoE compute required during inference. These operations enable different matrix sizes, scaling factors, and fusions to be grouped and parallelized in a single persistent-kernel launch. With CUTLASS, on Blackwell with FP4, Grouped GEMM kernel performance increases by up to 5x over H200 with FP16. Figure 2. CUTLASS Grouped GEMM performance for MoE inference used in DeepSeek delivers up to 5x more performance on Blackwell compared to Hopper at various precisions NVIDIA Nsight Developer Tools NVIDIA Nsight Compute 2025.1 is the first official release with support for the Blackwell architecture. Updates include visualization of Blackwell Hardware Tensor Memory in the memory chart as well as Tensor Core performance data. Figure 3. Tensor Memory traffic in the Nsight Compute memory chart It also comes with several improvements to the increasingly popular range profiling feature. Users can now collect source-level metrics, including Instructions Executed and memory access information, inside profiled ranges. This update also enables Guided Analysis rules evaluation for ranges. This built-in expertise for identifying performance issues is a key component of NVIDIA Nsight Compute. This release reports kernel stack sizes and adds custom tooltips to help users understand their workload performance. This release of Compute Sanitizer, an automatic correctness checking tool, adds support for Python call stacks to accurately locate kernel correctness issues when kernels are launched through Python applications. Additionally, new Tensor Core MMA guardrails for Blackwell can report errors related to Tensor Core programming. These are enabled by adding the PTXAS flag -g-tmem-access-check when compiling programs. Examples of common errors include access to unallocated tensor memory, invalid addresses, and invalid allocator usage. Math libraries updates With CUDA Toolkit 12.8, we have several new library enhancements that leverage the new Blackwell architecture and help accelerate applications in AI, data sciences, graphics and simulation, and high-performance scientific computing. 
New features cuBLAS APIs were extended to support microscaled 4-bit and 8-bit floating point mixed-precision tensor core accelerated matrix multiplication for compute capability 10.0 (Blackwell) and higher. Introduced initial support for CUDA in Graphics (CIG) on Windows x64 for NVIDIA Ampere GPU architecture and Blackwell GeForce-class GPUs. CIG contexts are now autodetected, and cuBLAS selects kernels that comply with CIG shared memory usage limits. cuSOLVER now supports zsytrf/zsytrs, a complex symmetric direct solver without pivoting. nvJPEG now provides support for the Tegra architecture. NPP now provides support for the DRIVE Thor architecture. cudaStreamGetDevice Applications often use CUDA streams to provide ordered access to GPU resources. An instance of a CUDA stream is associated with a fixed CUDA device. In applications that address multiple devices, there are scenarios where getting a handle to the underlying device for a given stream is useful to tailor the application to device characteristics. Previously, the CUDA API did not provide a mechanism for retrieving the device associated with a CUDA stream; developers had to track this themselves. The addition of the cudaStreamGetDevice CUDA API to retrieve the device associated with a CUDA stream can simplify applications. Compiler updates New compiler updates include the following: The CUDA Toolkit 12.8 release introduces support for GCC 14 as a host-side compiler. The default high-level optimizer is now based on LLVM 18 for the Blackwell architecture. nvdisasm now supports emitting JSON formatted SASS disassembly. Accelerated Python updates The following two beta releases are now available for Python users: CUDA Python has released an early prototype of a new idiomatic object model called cuda.core and moved the CUDA binding to a submodule, cuda.bindings . For more information, see the documentation in the NVIDIA/cuda-python GitHub repo. CUDA Core Compute Libraries (CCCL) has released early prototypes of Python for parallel and cooperative algorithms, enabling you to use thread-level parallelism with user-defined types and functions from pure Python code. Learn more about CCCL . Additionally, the CuPy team is releasing a new version with Blackwell patches validated for general availability. Feature-complete architectures With the CUDA Toolkit 12.8 release, we now consider the Maxwell, Pascal, and Volta architectures to be feature-complete and support for them will be frozen in an upcoming release. This means that, in future releases, no new features will be added to the driver to enable new CUDA Toolkit functionality supporting Maxwell, Pascal, and Volta architectures. End users will be able to run existing software stacks and applications on Maxwell, Pascal, and Volta architectures using the supported upcoming LTS driver branch through its lifecycle. Starting with release 12.8, developers running offline compilers targeting these architectures will output a warning message when using nvcc , nvrtc , and nvjitlink . In the next major CUDA Toolkit release, offline compilation support for the Maxwell, Pascal, and Volta architectures will be removed from the compilers. The upcoming LTS driver for production application execution and JIT compilation of Maxwell, Pascal, and Volta applications will be supported for the normal 3-year LTS support window. For more details, read the CUDA Toolkit 12.8 Release Notes . Summary The CUDA Toolkit 12.8 release provides full feature support for the NVIDIA Blackwell architecture. 
This release continues to provide enhanced support for the newest NVIDIA GPUs, accelerated libraries, compilers, and Developer Tools, whether you’re developing applications in C++ or Python. Want more information? Check out the CUDA documentation , browse the latest NVIDIA Deep Learning Institute (DLI) offerings, and visit the NGC catalog . Ask questions and join the conversation in the CUDA Developer Forums. Acknowledgments Thanks to the following NVIDIA contributors: Stephen Jones, Jackson Marusarz, Becca Zandstein, Andy Terrel, Ashraf Eassa, Matt Nicely, and Mridula Prakash.
https://developer.nvidia.com/zh-cn/blog/cuda-toolkit-12-8-delivers-nvidia-blackwell-support/
CUDA 工具包现已支持 NVIDIA Blackwell 架构
CUDA 工具包 的最新版本 (版本 12.8) 使用最新的 NVIDIA CPU 和 GPU,持续提升数据科学、AI、科学计算以及计算机图形和模拟领域的加速计算性能。本文重点介绍了此版本包含的一些新功能和增强功能: NVIDIA Blackwell 架构支持 CUDA 图形处理条件节点增强功能 用于大语言模型(LLMs)的 Blackwell CUTLASS 内核 NVIDIA Nsight 开发者工具更新 数学库更新 cudaStreamGetDevice 编译器更新 加速 Python 更新 功能齐全的架构 NVIDIA Blackwell 架构支持 CUDA 工具包 12.8 是该工具包的第一个版本,在整个开发者工具套件 (包括性能工具和分析器、库和编译器) 中支持 NVIDIA Blackwell 架构。Blackwell 由 208 亿个晶体管构建而成,是 NVIDIA Hopper GPU 中晶体管数量的 2.5 倍以上,是迄今为止最大的 GPU。 Blackwell 支持的主要功能包括:Key Blackwell 采用自定义 Tensor Core 技术的第二代 Transformer 引擎:加速 LLM 和 mixture-of-experts (MoE) 模型的推理和训练。 解压缩: 使用 LZ4、Snappy 和 Deflate 等最新压缩格式,加速数据分析和数据科学工作流的性能。 网络互连:NVLink 和 NVLink Switches 加速万亿参数和数万亿参数 AI 模型的 GPU 间通信性能。 如需详细了解 NVIDIA Blackwell 的领先创新,请参阅 NVIDIA Blackwell 架构技术概览。 使用运行时核选择将 CUDA Graphs 速度提升 2 倍,从而降低延迟推理 借助 Blackwell,CUDA Graphs APIs 仍然是启动 GPU 操作序列重复调用的最高效方式。CUDA Toolkit 12.8 为 CUDA Graphs 引入了更多增强功能,包括其他 条件节点类型 。 在许多应用程序中,对 CUDA Graphs 中工作的执行进行动态控制可以提高图形启动的性能和灵活性。例如,一种算法需要多次迭代一系列运算,直到结果收到某个值以下,现在这种算法无需进行 CPU 控制管理即可完全在 GPU 上运行,从而将开销降低高达 2 倍。CUDA Toolkit 12.8 改进了用于条件图形节点运行时控制的 API。 条件节点包含图形的片段,这些片段可以在图形运行时根据要评估的条件执行或跳过。此类片段可以评估一次 (IF 节点),也可以在循环中重复评估 (WHILE 节点)。CUDA 12.8 增加了对两种新型条件图形节点的支持:IF/ELSE 组合节点和 SWITCH 节点。 借助 Blackwell 架构,我们改进了 LLM 性能,使包括 DeepSeek-R1 在内的所有推理模型受益。与返回 CPU 进行启动决策相比,CUDA Graphs 增强的 SWITCH 和 IF/ELSE 支持可将运行时内核选择的性能提高 2 倍。 训练:通过减少内核选择对 CPU 的依赖,训练工作负载可维持更多的 GPU Tensor Core 吞吐量,从而提高模型 FLOPS 利用率(MFU)。这提高了使用相同的 GPU 基础架构的性能,减少了训练时间和成本。 推理:对于使用测试时计算的新一代推理模型 ,高令牌生成速率至关重要,因为每个推理请求都可以在每个查询中生成大量令牌。CUDA 12.8 新流 API 可减少对主机 CPU 的调用,从而缩短一次内核处理与下一次启动之间的时间,从而提高令牌生成率。这会在固定时间预算内生成更多 token,帮助模型推理更多并提高智能。 如需了解详情, 请参阅使用条件节点的 CUDA 图形中的动态控制流。 适用于 LLMs 的 Blackwell CUTLASS 内核 自 2017 年首次推出以来, CUTLASS 一直在推动研究人员和开发者在 NVIDIA GPUs 上实施高性能 CUDA 核函数。通过为开发者提供全面的工具来设计自定义操作 (例如 GEMMs 和 Convolutions),CUTLASS 在开发硬件感知算法方面发挥了至关重要的作用,推动了 FlashAttention 等帮助激发现代 AI 的突破。 随着支持 CUDA 12.8 的 CUTLASS 3.8 的发布,NVIDIA 将扩展对 Blackwell 架构的支持,使开发者能够利用新一代 Tensor Core 来支持所有新的数据类型。这包括新的窄精度 MX 格式和 NVIDIA 开发的 FP4 格式,可提高计算吞吐量。图 1 显示,对于 Tensor Core 运算,CUTLASS 可实现高达 98% 的相对峰值性能。 图 1. 
Blackwell CUTLASS GEMM 性能相对于预期峰值,可提供高达 98% 的 Blackwell 峰值性能 对于 DeepSeek-V3 和 DeepSeek-R1,分组的 GEMM 在推理期间所需的 MoE 计算中占很大比例。这些运算支持在单个持久性核函数启动中对不同的矩阵大小、缩放系数和融合进行分组和并行化。借助 CUTLASS,在 Blackwell 以 FP4,Grouped GEMM 内核性能增加高达 5 倍,相比使用 FP16 的 H200。 图 2、与 Hopper 相比,DeepSeek 中使用的用于 MoE 推理的 CUTLASS 分组 GEMM 性能在 Blackwell 上在各种精度下的性能提升高达 5 倍 NVIDIA Nsight 开发者工具 NVIDIA Nsight Compute 2025.1 是首个支持 Blackwell 架构的官方版本。更新包括显存图表中 Blackwell 硬件 Tensor 内存的可视化,以及 Tensor Core 性能数据。 图 3、Nsight Compute 内存图中的 Tensor 内存流量 它还对日益流行的范围分析功能进行了多项改进。用户现在可以在已分析的范围内收集源级指标,包括已执行指令和内存访问信息。此更新还启用了针对范围的引导分析规则评估。这种用于识别性能问题的内置专业知识是 NVIDIA Nsight Compute 的关键组件。此版本报告了内核堆栈大小,并添加了自定义工具提示,以帮助用户了解其工作负载性能。 此版本的 Compute Sanitizer 是一款自动正确性检查工具,增加了对 Python 调用堆栈的支持,可在通过 Python 应用启动内核时准确定位内核正确性问题。此外,用于 Blackwell 的新 Tensor Core MMA 护栏可以报告与 Tensor Core 编程相关的错误。在编译程序时,可以通过添加 PTXAS 标志 -g-tmem-access-check 来启用这些功能。常见错误的示例包括访问未分配的 tensor 内存、无效的地址以及使用无效的分配器。 数学库更新 借助 CUDA 工具包 12.8,我们获得了一些新的增强功能库,这些增强功能利用了新的 Blackwell 架构,并有助于加速 AI、数据科学、图形和仿真以及高性能科学计算领域的应用程序。 新功能 cuBLAS API 经过扩展,支持微缩 4 位和 8 位浮点混合精度张量核心加速矩阵乘法,可实现 10.0(Blackwell)及更高版本的计算能力。 为 Windows x64 上的 NVIDIA Ampere GPU 架构和 Blackwell GeForce 级 GPU 引入了对 CUDA in Graphics (CIG) 的初步支持。现在,系统会自动检测 CIG 上下文,并且 cuBLAS 会选择符合 CIG 共享内存使用限制的内核。 cuSOLVER 现在支持 zsytrf/zsytrs,这是一款无需旋转的复杂对称直接求解器。 nvJPEG 现在支持 Tegra 架构。 NPP 现在为 DRIVE Thor 架构提供支持。 cudaStreamGetDevice 应用程序通常使用 CUDA 流提供对 GPU 资源的有序访问。CUDA 流实例与固定的 CUDA 设备相关联。在用于处理多台设备的应用中,在某些情况下,为给定流获取底层设备的句柄有助于根据设备特性定制应用。 以前,CUDA API 没有提供检索与 CUDA 流关联的设备的机制;开发者必须自行追踪。添加 cudaStreamGetDevice CUDA API 以检索与 CUDA 流关联的设备,可以简化应用。 编译器更新 新的编译器更新包括以下内容: CUDA 工具包 12.8 版本引入了对作为主机端编译器的 GCC 14 的支持。 现在,Blackwell 架构的默认高级优化器基于 LLVM 18 nvdisasm 现在支持发射 JSON 格式的 SASS 反汇编。 加速 Python 更新 以下两个测试版现已面向 Python 用户提供: CUDA Python 已发布名为 cuda.core 的新惯用对象模型的早期原型,并将 CUDA 绑定移至子模块 cuda.bindings 。有关更多信息,请参阅 NVIDIA/cuda-python GitHub 存储库中的文档。 CUDA 核心计算库 ( CCCL ) 已发布用于并行和协作算法的早期 Python 原型,使您能够使用线程级并行性以及来自纯 Python 代码的用户定义类型和函数。详细了解 CCCL。 此外,CuPy 团队还将发布新版本,其中的 Blackwell 补丁经过验证,现已全面推出。 功能齐全的架构 在 CUDA 工具包 12.8 版本中,我们现在认为 Maxwell、Pascal 和 Volta 架构功能齐全,并且即将发布的版本将冻结对这些架构的支持。 这意味着,在未来的版本中,不会向驱动添加任何新功能来启用支持 Maxwell、Pascal 和 Volta 架构的新 CUDA 工具包功能。最终用户将能够在其生命周期中使用受支持的即将推出的 LTS 驱动分支,在 Maxwell、Pascal 和 Volta 架构上运行现有的软件堆栈和应用。 从版本 12.8 开始,开发者在运行针对这些架构的离线编译器时,将在使用 nvcc 、 nvrtc 和 nvjitlink 时输出警告消息。 在下一个主要 CUDA 工具包版本中,将从编译器中删除对 Maxwell、Pascal 和 Volta 架构的离线编译支持。即将推出的用于生产应用程序执行的 LTS 驱动以及 Maxwell、Pascal 和 Volta 应用程序的 JIT 编译将在正常的 3 年期 LTS 支持窗口期内获得支持。 如需了解更多详情,请参阅 CUDA Toolkit 12.8 版本说明 。 总结 CUDA 工具包 12.8 版本为 NVIDIA Blackwell 架构提供完整的功能支持。无论您是使用 C++ 还是 Python 开发应用程序,此版本都将继续为最新的 NVIDIA GPU、加速库、编译器和开发者工具提供增强支持。 想要了解更多信息?查看 CUDA 文档 ,浏览最新的 NVIDIA Deep Learning Institute (DLI) 产品 ,并访问 NGC 目录 。在 CUDA Developer Forums 中提出问题并加入对话。 致谢 感谢以下 NVIDIA 贡献者:Stephen Jones、Jackson Marusarz、Becca Zandstein、Andy Terrel、Ashraf Eassa、Matt Nicely 和 Mridula Prakash。
https://developer.nvidia.com/blog/recent-posts/
Recent posts
No content found
https://developer.nvidia.com/zh-cn/blog/recent-posts/
最近文章
No content found
https://developer.nvidia.com/blog/high-performance-remote-io-with-nvidia-kvikio/
High-Performance Remote IO With NVIDIA KvikIO
Workloads processing large amounts of data, especially those running on the cloud, will often use an object storage service (S3, Google Cloud Storage, Azure Blob Storage, etc.) as the data source. Object storage services can store and serve massive amounts of data, but getting the best performance can require tailoring your workload to how remote object stores behave. This post is for RAPIDS users who want to read or write data to object storage as quickly as possible so that IO doesn’t bottleneck your workload. Some of your knowledge about how local file systems behave translates to remote object stores, but they are fundamentally different. Probably the biggest difference between the two, at least for data analysis workloads, is that read and write operations on object storage have higher and more variable latency . Every storage service has their own set of best practices and performance guidelines ( AWS , Azure ). Here, we’ll give some general guidelines that are focused on data analysis workloads. Location Placing your compute nodes near the storage service (ideally, in the same cloud region) will give you the fastest and most reliable network between the machines running your workload and the machines serving the data. And, at the end of the day, the transfer will be limited by the speed of light so minimizing the physical distance doesn’t hurt. File format “Cloud-native” file formats have been developed to work well with object storage. These file formats typically provide fast, easy access to metadata (which includes both high-level information like the column names or data types, and lower-level information like where in the file specific data subsets are located). Apache Parquet , Zarr , and Cloud Optimized GeoTIFF are some examples of cloud-native file formats for various types of data. Because object storage services typically support range requests , clients (like cuDF ) can read the metadata and then download just the data you actually need. For example, cuDF can read just a few columns out of a Parquet file with many columns, or a Zarr client can read a single chunk out of a large n-dimensional array. These reads are done in just a few HTTP requests, and without needing to download a bunch of extraneous data that just gets filtered out. File size Because every read operation requires (at least) one HTTP request, we’d prefer to amortize the overhead from each HTTP request over a reasonably large number of bytes. If you control the data-writing process, you’ll want to ensure that the files are large enough for your downstream processing tasks to get good performance. The optimal value depends on your workload, but somewhere in the dozens to low-hundreds of MBs is common for parquet files (see below for some specific examples). That said, you’ll need to be careful with how file size interacts with the next tool in our kit: concurrency. Concurrency Using concurrency to download multiple blobs (or multiple pieces of a single blob) at the same time is essential to getting good performance out of a remote storage service. Since it’s a remote service, your process is going to spend some time (perhaps a lot of time) waiting around doing nothing. This waiting spans the time between when the HTTP request is sent and the response received. During this time, we wait for the network to carry the request, the storage service to process it and send the response, and the network to carry the (possibly large) response. 
While parts of that request/response cycle scale with the amount of data involved, other parts are just fixed overhead. Object storage services are designed to handle many concurrent requests. We can combine that with the fact that each request involves some time waiting around doing nothing, and make many concurrent requests to raise our overall throughput. In Python, this would typically be done using a thread pool: pool = concurrent.futures.ThreadPoolExecutor() futures = pool.map(request_chunk, chunks) Or with asyncio: tasks = [request_chunk_async(chunk) for chunk in chunks] await asyncio.gather(*tasks) We’re able to have a lot of reads waiting around doing nothing at the same time, which improves our throughput. Because each thread/task is mostly doing nothing, it’s OK to have more threads/tasks than your machine has cores. Given enough concurrent requests, you will eventually saturate your storage service, which has requests-per-second and bandwidth targets it tries to meet. But those targets are high; you’ll typically need many machines to saturate the storage service and should achieve very high throughput. Libraries Everything above applies to essentially any library doing remote IO from an object storage service. In the RAPIDS context, NVIDIA KvikIO is notable because it automatically chunks large requests into multiple smaller ones and makes those requests concurrently, it can read efficiently into host or device memory (especially if GPU Direct Storage is enabled), and it’s fast. As mentioned in the RAPIDS 24.12 release announcement, KvikIO can achieve impressive throughput when reading from S3. Let’s take a look at some benchmarks to see how it does. Benchmarks When you read a file, KvikIO splits that read into smaller reads of kvikio.defaults.task_size bytes. It makes those read requests in parallel using a thread pool with kvikio.defaults.num_threads workers. These can be controlled using the environment variables KVIKIO_TASK_SIZE and KVIKIO_NTHREADS, or through Python with: with kvikio.defaults.set_num_threads(num_threads), kvikio.defaults.set_task_size(size): ... See Runtime Settings for more. This chart shows the throughput, in megabits per second, of reading a 1 GB blob from S3 to a g4dn EC2 instance in the same region for various sizes of the thread pool (higher is better). Figure 1. From a benchmark reading a 1 GB file from S3 to a g4dn.xlarge EC2 instance, which has a published bandwidth of up to 25 Gbps. This is the throughput of kvikio.RemoteFile.read for various values of kvikio.defaults.num_threads and a task size of 16 MiB. Throughput increases as we add more threads and parallelize the reads, up to a point. Fewer threads (fewer than four) achieve lower throughput and take longer to read the file. More threads (64, 128, 256) achieve higher throughput by parallelizing the requests to the storage service, which serves them in parallel. There are diminishing and even negative returns as we hit the limits of the storage service, network, or other bottlenecks in our system. With remote IO, each thread spends a relatively long time idle waiting for the response, so a higher number of threads (relative to your number of cores) might be appropriate for your workload. We see that the throughput is highest between 64 and 128 threads in this case. As shown in the next figure, the task size also affects the maximum throughput. Figure 2. From a benchmark reading a 1 GB file from S3 to a g4dn.xlarge EC2 instance, which has a published bandwidth of up to 25 Gbps. 
This shows a heatmap of the throughput of kvikio.RemoteFile.read. The horizontal axis shows various task sizes, while the vertical axis shows various thread counts. As long as the task size isn’t too small (around or below 4 MiB) or too large (around or above 128 MiB), we get around 10 Gbps of throughput. With too small a task size, the overhead of making many HTTP requests reduces throughput. With too large a task size, we don’t get enough concurrency to maximize throughput. KvikIO achieves higher throughput on this workload when compared with boto3, the AWS SDK for Python, even when boto3 is used in a thread pool to execute requests concurrently. Figure 3. From a benchmark reading a 1 GB file from S3 to a g4dn.xlarge EC2 instance, which has a published bandwidth of up to 25 Gbps. The KvikIO benchmark used 64 threads and a 16 MiB task size. The boto3 benchmark used a ThreadPool to read many 4 MB chunks in parallel, a chunk size that a parameter search showed to be the fastest for boto3. As a slightly more realistic workload, though still one focused solely on IO, we compare the performance of reading a batch of 360 Parquet files, each about 128 MB. This was run on an AWS g4dn.12xlarge instance, which has 4 NVIDIA T4 GPUs and 48 vCPUs. Figure 4. From a benchmark reading a Parquet dataset from S3 to a g4dn.12xlarge EC2 instance, which has a published bandwidth of up to 50 Gbps. The dataset had 360 Apache Parquet files of about 128 MB each, for a total of about 46 GB. The Dask cluster had 4 workers. These results use cuDF 25.04, which will include an optimization to read Parquet footers in parallel. With KvikIO enabled, the four Dask worker processes are able to collectively achieve almost 20 Gbps of throughput from S3 to this single node. Conclusion As RAPIDS accelerates other parts of your workload, IO can become a bottleneck. If you’re using object storage and are tired of waiting around for your data to load, try out some of the recommendations from this post. Let us know how things work with KvikIO on GitHub. You can also join over 3,500 members on the RAPIDS Slack community to talk GPU-accelerated data processing.
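As a concrete starting point, the settings that performed well in the benchmarks above can be applied before reading a cloud-native file with cuDF. The following is a minimal sketch, assuming a placeholder S3 path and column names, and assuming KvikIO-backed remote IO is enabled in your RAPIDS environment:

```python
import os

# Roughly the sweet spot from the benchmarks above: ~64 threads and 16 MiB tasks.
# Treat these as starting points and tune them for your own workload and instance.
os.environ["KVIKIO_NTHREADS"] = "64"
os.environ["KVIKIO_TASK_SIZE"] = str(16 * 1024 * 1024)

import cudf

# Cloud-native formats plus range requests: read only the columns you need,
# straight from object storage (the bucket and column names are placeholders).
df = cudf.read_parquet(
    "s3://my-bucket/dataset/part-0000.parquet",
    columns=["timestamp", "value"],
)
print(df.head())
```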
https://developer.nvidia.com/zh-cn/blog/high-performance-remote-io-with-nvidia-kvikio/
借助 NVIDIA KvikIO 实现高性能远程 IO
处理大量数据的工作负载 (尤其是在云端运行的工作负载) 通常会使用对象存储服务 (S3、Google Cloud Storage、Azure Blob Storage 等) 作为数据源。对象存储服务可以存储和提供海量数据,但要想获得最佳性能,可能需要根据远程对象存储的行为方式调整工作负载。本文适用于希望尽快将数据读或写到对象存储,以便 IO 不会限制工作负载的 RAPIDS 用户。 您对本地文件系统行为方式的一些了解可转换为远程对象存储,但它们本质上是不同的。这两者之间的最大区别 (至少对于数据分析工作负载而言) 可能在于,对象存储上的读取和写入操作具有越来越高的可变延迟。每个存储服务 (AWS、Azure) 都有自己的一套最佳实践和性能指南。在这里,我们将提供一些专注于数据分析工作负载的一般指南。 地址 将计算节点放置在存储服务附近 (理想情况下,应位于同一云区域),可在运行工作负载的计算机和为数据提供服务的计算机之间提供速度最快、最可靠的网络。在一天结束时,传输将受到光速的限制,因此最大限度地减少物理距离不会造成伤害。 文件格式 “云原生”文件格式的开发能够很好地与对象存储配合使用。这些文件格式通常可让用户快速轻松地访问元数据 (元数据包括列名称或数据类型等高级信息,以及文件特定数据子集所在位置等低级信息)。 Apache Parquet 、 Zarr 和 Cloud Optimized GeoTIFF 是适用于各种类型数据的云原生文件格式的一些示例。 由于对象存储服务通常支持范围请求,因此客户端 (如 cuDF ) 可以读取元数据,然后只下载您实际需要的数据。例如,cuDF 只能从包含多列的 Parquet 文件中读取几列数据,或者 Zarr 客户端可以从大型 n 维数组中读取单个 chunk。这些读取只需通过几次 HTTP 请求即可完成,而且无需下载一堆刚刚被过滤掉的不相干数据。 文件大小 由于每个读取操作都需要 (至少) 一个 HTTP 请求,因此我们倾向于在合理数量的字节数上分担每个 HTTP 请求的用度。如果您控制数据写入过程,则需要确保文件足够大,以便下游处理任务获得良好性能。最佳值取决于您的工作负载,但 parquet 文件的大小通常介于数十 MB 到数百 MB 之间 (请参阅下文,了解一些特定示例)。 也就是说,您需要注意文件大小与 Kit 中的下一个工具:并发的交互方式。 并发 使用并发同时下载多个 blobs (或单个 blob 的多个部分) 对于从远程存储服务中获得良好性能至关重要。由于这是一项远程服务,您的流程将花费一些时间 (可能会花费大量时间) 四处等待,不执行任何操作。此等待时间为 HTTP 请求被发送到响应被接收之间的时间。在此期间,我们会等待网络执行请求,等待存储服务处理并发送响应,等待网络执行响应 (可能较大)。虽然该请求/响应周期的一部分会随所涉及的数据量而扩展,但其他部分只是固定的开销。 对象存储服务旨在处理许多并发请求。我们可以将这一点与每个请求都涉及一些时间来等待不执行任何操作的事实相结合,以发出许多并发请求来提高整体吞吐量。在 Python 中,这通常使用线程池完成: pool = concurrent.futures.ThreadPoolExecutor() futures = pool.map(request_chunk, chunks) 或使用 异步 : tasks = [request_chunk_async(chunk) for chunk in chunks] await asyncio.gather(*tasks) 我们能够让大量读取 同时 不执行任何操作,从而提高吞吐量。由于每个线程/任务通常不执行任何任务,因此拥有比计算机核心数更多的线程/任务也是可以的。如果并发请求数量足够多,您最终会使存储服务饱和,而存储服务试图满足一些每秒请求数和带宽目标数。但这些目标很高;您通常需要多台机器使存储服务饱和,并且应该实现非常高的吞吐量。 库 上述内容基本上适用于从对象存储服务执行远程 IO 的任何库。在 RAPIDS 环境中, NVIDIA KvikIO 值得注意,因为 它会自动将大型请求分块为多个较小的请求,并并发发出这些请求。 它可以高效读取主机或设备内存,尤其是启用 GPU Direct Storage 时。 速度很快。 正如 RADIDS 24.12 发布公告中提到的那样,从 S3 读取数据时,KvikIO 可以实现惊人的吞吐量。我们来看看一些基准测试,看看效果如何。 基准测试 当您读取文件时,KvikIO 会将读取的文件拆分成较小的 kvikio.defaults.task_size 字节读取。它使用具有 kvikio.defaults.num_threads 工作线程的线程池并行执行这些读取请求。可以使用环境变量 KVIKIO_TASK_SIZE 和 KVIKIO_NTHREADS 控制这些内容,也可以通过 Python 使用: with kvikio.defaults.set_num_threads(num_threads), kvikio.defaults.set_task_size(size): ... 
详情请参阅 Runtime Settings 。 此图表显示了在同一区域内,针对不同大小的线程池,从 S3 到 g4dn EC2 实例读取 1 GB Blob 的吞吐量 (以 Mbps 为单位) (越高越好)。 图 1、从 S3 读取 1 GB 文件的基准测试,到具有高达 25 Gbps 已发布带宽的 g4dn.xlarge EC2 实例。这是 kvikio.RemoteFile.read 的吞吐量,适用于各种值的 kvikio.defaults.num _threads 和 16 MiB 的任务。随着我们添加更多线程并对读取进行并行化,吞吐量会增加到一定程度。 线程越少 (少于 4 个),吞吐量越低,读取文件的时间越长。更多线程 (64、128、256) 通过将请求并行化到以并行方式提供服务的存储服务,实现更高的吞吐量。当我们遇到系统中存储服务、网络或其他瓶颈的限制时,会出现递减甚至负回报的情况。 借助远程 IO,每个线程都会在相对较长的时间内等待响应,因此对于您的工作负载,可能适合使用更多线程 (相对于核心数量而言)。我们看到,在本例中,吞吐量最高,介于 64 到 128 个线程之间。 如下图所示,任务大小也会影响最大吞吐量。 图 2、从 S3 读取 1 GB 文件的基准测试,到具有高达 25 Gbps 已发布带宽的 g4dn.xlarge EC2 实例 。这显示了 kvikio.RemoteFile.read 吞吐量的热图。水平轴显示各种任务大小的吞吐量,而垂直轴显示各种线程数量。 只要任务大小不是太小(大约或低于 4 MiB)或太大(大约或超过 128 MiB),吞吐量就会达到 10 Gbps 左右。由于任务规模过小,发出许多 HTTP 请求会降低吞吐量。由于任务规模过大,我们无法获得足够的并发能力来最大限度地提高吞吐量。 与 boto3 (适用于 Python 的 AWS SDK) 相比,即使在线程池中使用 boto3 并发执行请求,KvikIO 也能实现更高的吞吐量。 图 3、从从 S3 读取 1 GB 的基准测试,到具有高达 25 Gbps 已发布带宽的 g4dn.xlarge EC2 实例。KvikIO 基准测试使用 64 个线程和 16 MiB 任务大小。Boto3 基准测试使用 ThreadPool 并行读取许多 4 MB 字节的块,而参数搜索表明,对于 Boto3 而言,这是最快的块大小。 对于略为逼真的工作负载 (尽管仍然仅有一个工作负载专注于 IO),我们比较了读取一批 360 个 parquet 文件 (每个文件约 128 MB) 的性能。这在 AWS g4dn.12xlarge 实例上运行,该实例包含 4 个 NVIDIA T4 GPU 和 48 个 vCPUs。 图 4、从读取 S3 中的 Parquet 数据集的基准测试,到具有高达 50 Gbps 已发布带宽的 g4dn.12xlarge EC2 实例。该数据集包含 360 个 Apache Parquet 文件,每个文件约 128 MB,总计约 46 GB。Dask 集群有 4 个工作者。这些结果使用 cuDF 25.04,其中包括并行读取 Parquet 文件页脚的优化。 启用 KvikIO 后,四个 Dask 工作进程能够共同实现从 S3 到此单个节点的近 20 Gbps 吞吐量。 结束语 随着 RAPIDS 加速工作负载的其他部分,IO 可能会成为瓶颈。如果您使用的是对象存储,并且已经疲于等待数据加载,请尝试本博文中的一些建议。让我们了解如何在 Github 上使用 KvikIO。您还可以与 RAPIDS Slack 社区的 3,500 多名成员一起讨论 GPU 加速的数据处理。
https://developer.nvidia.com/blog/latest-multimodal-addition-to-microsoft-phi-slms-trained-on-nvidia-gpus/
Latest Multimodal Addition to Microsoft Phi SLMs Trained on NVIDIA GPUs
Large language models (LLMs) have permeated every industry and changed the potential of technology. However, due to their massive size, they are not practical given the resource constraints that many companies face. The rise of small language models (SLMs) bridges quality and cost by creating models with a smaller resource footprint. SLMs are a subset of language models that tend to focus on specific domains and are built with simpler neural architectures. As models grow to mimic how humans perceive the world around them, they must also accept multiple forms of multimodal data. Microsoft has announced a new generation of open SLMs in the Phi family with two new additions: Phi-4-mini Phi-4-multimodal Phi-4-multimodal is the first multimodal model in the family, accepting text, audio, and image data inputs. These models are small enough for on-device deployment. This release builds on top of the December 2024 research-only release of the Phi-4 14B parameter SLM and enables commercial use for the two new smaller models. The new models are available on the Azure AI Foundry , Microsoft's cloud AI platform for designing, customizing, and managing AI applications and agents. You can test out each member of the Phi family through the NVIDIA API Catalog , which is the first sandbox environment to support every modality and tool-calling for Phi-4-multimodal . Use the preview NIM microservice to integrate the model into your applications today. Why invest in SLMs? SLMs enable generative AI capabilities in memory- and compute-constrained environments. For example, SLMs can be deployed directly on smartphones and several consumer-grade devices. On-device deployment can facilitate privacy and compliance for use cases that must adhere to regulatory requirements. Other benefits of SLMs include lower latency due to inherently faster inference compared to an LLM of similar quality. SLMs also tend to perform better on specialized tasks correlated to their training data. However, to supplement generalization and adaptability to different tasks, you can use retrieval-augmented generation (RAG) or native function calling to build performant agentic systems. Phi-4-multimodal Phi-4-multimodal has 5.6B parameters and accepts audio, image, and text inputs for reasoning. This enables it to support use cases such as automated speech recognition (ASR), multimodal summarization, translation, OCR, and visual reasoning. This model was trained on 512 NVIDIA A100-80GB GPUs over 21 days. Figure 1 shows how you can preview your image data and ask Phi-4-multimodal visual QA questions in the NVIDIA API Catalog. You can also see how to adjust parameters such as token limits, temperature, and sampling values. You can generate sample code in Python, JavaScript, and Bash to help you integrate the model more easily into your applications. Figure 1. Visual QA demo in NVIDIA API Catalog You can also demo tool calling with a set of prebuilt agents. Figure 2 shows a tool that retrieves live weather data. Figure 2. Tool-calling demo in NVIDIA API Catalog Phi-4-mini Phi-4-mini is a text-only, dense, decoder-only Transformer model with 3.8B parameters that is optimized for chat. It includes a long-form context window of 128K tokens. This model was trained on 1024 NVIDIA A100 80GB GPUs over 14 days. For both models, the training data is intentionally focused on high-quality educational data and code, which results in a textbook-like quality to the models. Text, speech, and vision benchmark data can be found in the model cards.
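As a quick way to try the models programmatically, the sketch below calls a Phi-4 model through the NVIDIA API Catalog. It assumes the catalog's usual OpenAI-compatible endpoint and a hypothetical model identifier; check the model page on build.nvidia.com for the exact endpoint, model name, and request format before running it.

```python
# Minimal sketch of calling a Phi-4 model via the NVIDIA API Catalog.
# Endpoint URL and model identifier below are assumptions; verify them on build.nvidia.com.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed OpenAI-compatible endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # API key generated on build.nvidia.com
)

completion = client.chat.completions.create(
    model="microsoft/phi-4-mini-instruct",           # assumed model identifier
    messages=[{"role": "user", "content": "Summarize why small language models matter."}],
    temperature=0.2,
    max_tokens=128,
)
print(completion.choices[0].message.content)
```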
Advancing community models NVIDIA is an active contributor to the open-source ecosystem and has released several hundred projects under open-source licenses. NVIDIA is committed to optimizing community software and open models such as Phi which promotes AI transparency and lets users broadly share work in AI safety and resilience. Using the NVIDIA NeMo platform , these open models can be customized on proprietary data to be highly tuned and efficient for diverse AI workflows across any industry. NVIDIA and Microsoft have a long standing partnership which includes several collaborations driving innovation on GPUs on Azure, integrations and optimizations for PC developers using NVIDIA RTX GPUs, and many more, including research spanning generative AI to healthcare and life sciences. Get started today Bring your data and try out Phi-4 on the NVIDIA-accelerated platform at build.nvidia.com/microsoft . On the first multi-modal sandbox for Phi-4-multimodal, you can try out text, image, and audio as well as sample tool calling to see how this model will work for you in production.
https://developer.nvidia.com/zh-cn/blog/latest-multimodal-addition-to-microsoft-phi-slms-trained-on-nvidia-gpus/
在 NVIDIA GPU 上训练的 Microsoft Phi SLM 的多模态最新进展
大语言模型(LLMs)已渗透到各行各业,并改变了技术潜力。但是,由于规模庞大,它们对于许多公司目前面临的资源限制来说并不切实际。 小语言模型 (SLMs)的兴起通过创建资源占用更小的模型,将质量和成本联系起来。SLMs 是语言模型的一个子集,这些模型倾向于专注于特定领域,并使用更简单的神经架构构建。随着模型的发展模仿人类感知周围环境的方式,模型必须接受多种形式的多模态数据。 Microsoft 宣布在 Phi 系列中 推出新一代开放式 SLM ,并新增两项功能: Phi-4-mini Phi-4-multimodal Phi-4-multimodal 是第一个加入该系列的多模态模型,接受文本、音频和图像数据输入。 这些模型足够小,可以在设备上部署。此版本基于 2024 年 12 月发布的 Phi-4 14B 参数 SLM 的研究版本构建而成,可用于两个新的较小模型的商业用途。 这些新模型可在 Microsoft 的云 AI 平台 Azure AI Foundry 上使用,用于设计、定制和管理 AI 应用和代理。 您可以通过 NVIDIA API Catalog 测试 Phi 系列的每个成员,这是第一个支持 Phi-4 多模态 的每种模式和工具调用的沙盒环境。立即使用预览 NIM 微服务将模型集成到您的应用中。 为何投资 SLM? SLMs 可在内存和计算受限环境中实现生成式 AI 功能。例如,SLMs 可以直接部署在智能手机和多台消费级设备上。对于必须遵守监管要求的用例,设备端部署可以促进隐私和合规性。 SLM 的其他优势包括降低延迟,因为与质量相似的 LLM 相比,其本身的推理速度更快。SLM 在处理与其训练数据相关的专业任务时往往表现得更好。但是,为了补充对不同任务的泛化和适应性,您可以使用检索增强生成(RAG)或原生函数调用来构建高性能代理系统。 Phi-4-multimodal Phi-4-multimodal 具有 5.6B 个参数,接受音频、图像和文本推理。这使其能够支持自动语音识别 (ASR)、多模态摘要、翻译、OCR 和视觉推理等用例。该模型在 512 个 NVIDIA A100-80GB GPUs 上进行了为期 21 天的训练。 事实证明,该模型在 ASR 方面表现出色,因为它在 Huggingface OpenASR 排行榜上排名第一 ,单词错误率为 6.14%。 词错误率 (WER) 是量化语音识别性能的常用计算方法。WER 计算不正确转录的单词 (替换、插入和删除) 与正确文本相比所占的百分比。 图 1 展示了如何在 NVIDIA API Catalog 中预览图像数据并询问 Phi-4 多模态视觉问答。您还可以了解如何调整参数,例如令牌限制、温度和采样值。您可以使用 Python、JavaScript 和 Bash 生成示例代码,以帮助您更轻松地将模型集成到应用中。 图 1、NVIDIA API Catalog 中的可视化问答演示 您还可以使用一组预构建代理演示工具调用。图 2 显示了用于检索实时天气数据的工具。 图 2、NVIDIA API Catalog 中的工具调用演示 Phi-4-mini Phi-4-mini 是一个仅文本、密集、仅解码器的 Transformer 模型,具有 3.8B 个参数,并针对聊天进行了优化。它包含一个包含 128K 个令牌的长形式上下文窗口。该模型在 1024 个 NVIDIA A100 80GB GPUs 上进行了为期 14 天的训练。 对于这两个模型,训练数据有意地集中在高质量的教育数据和代码上,从而使模型获得类似于教科书的质量。您可以在模型卡中找到文本、语音和视觉基准测试数据。 推进社区模式 NVIDIA 是开源生态系统的积极贡献者,已根据开源许可发布了数百个项目。NVIDIA 致力于优化社区软件和 open-source licenses 中的项目,如 Phi,它促进了 AI 透明度,并让用户广泛分享在 AI 安全性和弹性方面的工作。 借助 NVIDIA NeMo 平台,这些开放模型可以根据专有数据进行定制,以便针对各行各业的各种 AI 工作流进行高度调整并提高效率。 NVIDIA 和 Microsoft 有着长期的合作伙伴关系,其中包括推动 Azure 上 GPU 创新的多项合作、为使用 NVIDIA RTX GPU 的 PC 开发者提供的集成和优化,等等,包括从生成式 AI 到医疗健康和生命科学的研究。 立即开始使用 请访问 build.nvidia.com/microsoft ,带上您的数据并在 NVIDIA 加速平台上试用 Phi-4。 在 Phi-4 多模态的第一个多模态沙盒中,您可以尝试使用文本、图像、音频以及示例工具调用,以了解此模型在生产环境中的工作原理。
https://developer.nvidia.com/blog/building-a-simple-vlm-based-multimodal-information-retrieval-system-with-nvidia-nim/
Building a Simple VLM-Based Multimodal Information Retrieval System with NVIDIA NIM
In today’s data-driven world, the ability to retrieve accurate information from even modest amounts of data is vital for developers seeking streamlined, effective solutions for quick deployments, prototyping, or experimentation. One of the key challenges in information retrieval is managing the diverse modalities in unstructured datasets, including text, PDFs, images, tables, audio, video, and so on. Multimodal AI models address this challenge by simultaneously processing multiple data modalities, generating cohesive and comprehensive output in different forms. NVIDIA NIM microservices simplify the secure and reliable deployment of AI foundation models for language, computer vision , speech, biology, and more. NIM microservices can be deployed on NVIDIA-accelerated infrastructure anywhere and expose industry-standard APIs for fast integration with applications and popular AI development frameworks, including LangChain and LlamaIndex. This post helps you get started with building a vision language model (VLM) based, multimodal, information retrieval system capable of answering complex queries involving text, images, and tables. We walk you through deploying an application using LangGraph, the state-of-the-art llama-3.2-90b-vision-instruct VLM, the optimized mistral-small-24B-instruct large language model (LLM), and NVIDIA NIM for deployment. This method of building simple information retrieval systems offers several advantages over traditional ones. The latest VLM NIM microservice enables enhanced contextual understanding by processing lengthy, complex visual documents without sacrificing coherence. The integration of LangChain’s tool calling enables the system to create tools, dynamically select and use external tools, and improve the precision of data extraction and interpretation from various sources. This system is good for enterprise applications because it generates structured outputs, ensuring consistency and reliability in responses. For more information about the implementation steps of this system, see the /NVIDIA/GenerativeAIExamples GitHub repo. A simple HTML multimodal retrieval pipeline The system consists of the following pipelines: Document ingestion and preprocessing: Runs a VLM on the images and translates them into text. Question-answering: Enables the user to ask questions of the system. Both pipelines integrate NVIDIA NIM and LangGraph to process and understand text, images, complex visualizations, and tables effectively. Data ingestion and preprocessing pipeline This stage parses documents to process text, images, and tables separately. Tables are first converted into images, and images are processed by the NVIDIA-hosted NIM microservice API endpoint for the llama-3.2-90b-vision-instruct VLM to generate descriptive text. Next, in the document reconstruction step, the descriptive text is merged with the original text of the document, then summarized by an LLM with long context modeling capability. In this implementation, llama-3.2-90b-vision-instruct is also used as the LLM, although other LLMs such as mistral-small-24b-instruct can also be deployed. Finally, the complete text, summaries, images, and their descriptions are stored in a NoSQL database, along with unique document identifiers. Figure 1. 
Data ingestion and preprocessing pipeline LLMs with long context modeling can process entire documents without fragmentation, enhancing comprehension of the document in a single pass, and capturing relationships and nuances across longer spans of text, leading to more accurate information retrieval. In contrast, traditional models may handle inputs of up to a few thousand tokens, requiring lengthy documents to be split into smaller chunks to fit within the model’s context window. This chunking process can disrupt coherence and context, making it more difficult to accurately retrieve and rank relevant information. However, long context modeling presents challenges related to scalability and cost, which must be considered when trading off with higher accuracy. QA pipeline All document summaries and their identifiers are compiled into a large prompt. When a query is sent, a LLM with long context modeling (mistral-small-24b-instruct in this case) processes the question, evaluates the relevance of each summary to the query, and returns the identifiers of the most relevant documents. Figure 2. Question-answering pipeline Next, the most relevant documents are fed into an LLM with long context (mistral-small-24b-instruct). The model generates an answer to the query based on the textual content. If the model identifies that an image may contain pertinent information based on its descriptive text, an additional step is triggered: the original image and the user’s question are sent to the VLM (llama-3.2-90b-vision-instruct), which can provide an answer based on the actual visual content. Finally, the system combines both textual and visual insights to deliver a comprehensive answer. Structured outputs ensure that the data returned by the model conforms to a predefined format, making it easier to extract specific information and perform subsequent operations. In contrast, unstructured or variable outputs can introduce ambiguities and difficulties in parsing the model’s responses, hindering automation and integration with other systems. Generating structured data from models typically requires carefully designed prompts to guide the model into responding in a particular format, such as JSON. However, ensuring consistent adherence to this structure can be challenging due to the models’ natural tendency to generate free-form text. NVIDIA NIM now natively supports capabilities for generating structured outputs . This means that you can rely on built-in functionalities to ensure that the model’s responses are consistently formatted, reducing the need for complex prompt engineering. Integrating NVIDIA NIM with LangGraph NVIDIA NIM offers seamless compatibility with popular frameworks and the latest AI models for your applications. The implementation of the pipeline integrates NVIDIA NIM with LangGraph , a framework to build agentic applications to determine the control flow, which has been widely adopted by the developer community. To orchestrate the workflow of this pipeline, the graph mainly consists of two nodes: Assistant node: Serves as an agent responsible for managing the logic and decision-making process. It interacts with the user’s inputs and invokes the necessary tools. Tools node: A collection of tools that perform specific tasks required by the assistant. Figure 3. Use LangGraph to build an agent for the pipeline Assistant node The assistant node is a primary agent that operates according to the workflow outlined in Figure 3. 
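As a rough sketch of how this two-node graph could be wired with LangGraph, the snippet below registers three stub tools, binds them to the LLM, and connects the assistant and tools nodes. The stub tool bodies are illustrative placeholders under the names used in this post, not the implementation from the GitHub repo.

```python
# Minimal sketch of the assistant/tools graph wiring with LangGraph.
# The tool bodies below are placeholders, not the repo's actual implementation.
from langchain_core.tools import tool
from langchain_nvidia_ai_endpoints import ChatNVIDIA
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition

@tool
def find_best_document_id(collection_name: str, question: str) -> str:
    """Return the id of the document most relevant to the question (stub)."""
    return "doc-0"  # placeholder logic

@tool
def query_document(document_id: str, question: str) -> str:
    """Answer the question from the document's text and image descriptions (stub)."""
    return "answer or image_hash"  # placeholder logic

@tool
def query_image(image_hash: str, question: str) -> str:
    """Answer the question from the actual image via a VLM (stub)."""
    return "answer from image"  # placeholder logic

tools = [find_best_document_id, query_document, query_image]
llm = ChatNVIDIA(model="mistralai/mistral-small-24b-instruct").bind_tools(tools)

def assistant(state: MessagesState) -> dict:
    # The assistant node decides whether to answer directly or call one of the tools.
    return {"messages": [llm.invoke(state["messages"])]}

builder = StateGraph(MessagesState)
builder.add_node("assistant", assistant)
builder.add_node("tools", ToolNode(tools))
builder.add_edge(START, "assistant")
builder.add_conditional_edges("assistant", tools_condition)  # route to tools or finish
builder.add_edge("tools", "assistant")
graph = builder.compile()
```

Calling graph.invoke({"messages": [("user", question)]}) would then drive the assistant/tools loop sketched in Figure 3.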
The code of the main agent can be found in the /NVIDIA/GenerativeAIExamples GitHub repo. Here are the agent inputs: Collection_name : The set of documents on which to search. Question : The user’s question. document_id : (Optional) If provided, the agent skips the document ranking phase. This is the agent process: Document selection : If document_id is not provided, the agent invokes the find_best_document_id tool, which identifies the most relevant document for the user’s question within the specified collection. Question answering : With document_id , the agent uses the query_document tool. This tool attempts to answer the question using the LLM (mistral-small-24b-instruct) based on the text and image descriptions within the document. Image analysis (if necessary): If the query_document tool indicates that the answer might be in an image (by returning an image_hash value), the agent invokes the query_image tool. This tool retrieves the actual image and uses a VLM to analyze the image and find the answer. Tools node We implemented three key tools for the agent to perform its tasks. Find_best_document_id : Identify the most relevant document for the user’s question when document_id is not provided. For more information, see the /NVIDIA/GenerativeAIExamples GitHub repo. query_document : Search for an answer within the specified document. If the answer may be in an image, it provides details to query the image. For more information, see the /NVIDIA/GenerativeAIExamples GitHub repo. query_image : Analyze the actual image using a VLM when the answer might be within the image content. For more information, see the /NVIDIA/GenerativeAIExamples . Binding external tools with models Tool calling is a feature that enables language models to integrate and interact with external tools or functions based on the prompts that they receive. This mechanism enables a model to decide which tools to use and how to use them to accomplish specific tasks. Tool binding empowers models to extend their capabilities dynamically, selecting appropriate tools during execution to provide more accurate, context-aware responses. Binding external tools is particularly crucial in agentic frameworks, where agents must choose the appropriate tools and provide the necessary arguments to perform tasks effectively. The benefits of binding external tools include the following: Extended capabilities : Models can perform complex operations such as calculations, data retrieval, or API calls, which go beyond mere text generation. Dynamic tool selection : The model can assess in real time which tools are most suitable for the task, improving efficiency and relevance. Seamless integration : NVIDIA NIM supports the integration of external tools, such as LangChain and LangGraph, with open community models such as Llama 3.3. You can adopt these advanced features without making significant changes to your existing systems. In this implementation, use LangChain’s @tool decorator to create three tools, then use the .bind_tools method to bind the tools with models. Defining structured outputs with Pydantic By defining the output schema with Pydantic and guiding an LLM NIM microservice such as mistral-small-24b-instruct through precise prompts, you ensure that the responses are consistent, reliable, and easily consumable by other components within the system. This approach is essential when integrating the LLM into automated workflows and agent-based frameworks such as LangGraph. 
Define the structure The process begins by defining the structure of the output that you expect from the LLM using Pydantic. This guarantees that the data returned by the model is consistent and can be easily parsed for downstream processing . from typing import List, Optional from pydantic import BaseModel, Field class Document(BaseModel): """ Represents a document with an identifier and its summary. """ id: str = Field(..., description="Hash identifier of the document") summary: str = Field(..., description="The summary of the document as is") class BestDocuments(BaseModel): """ Contains a list of the best documents to answer the question and their summaries. """ documents: List[Document] = Field(..., description="List of best documents") class Answer(BaseModel): """ Represents the answer to the user's question. """ answer: str = Field(..., description="Answer to the question posed by the user") Next, instruct the LLM to generate outputs that align with the defined Pydantic structures. This is achieved by incorporating specific instructions within the prompt and using LangChain’s with_structured_output method. Define the prompt The prompt_document_expert contains detailed instructions for the LLM, specifying the expected input format (Markdown with document summaries) and the required output format (JSON matching the BestDocuments schema). from langchain.chat_models import ChatNVIDIA from langchain.prompts import ChatPromptTemplate # Initialize the LLM with desired parameters llm = ChatNVIDIA(model="mistralai/mistral-small-24b-instruct ", temperature=0, max_tokens=3000) # Define the prompt template for the document expert prompt_document_expert = ChatPromptTemplate.from_messages( [ ( "system", f""" # Extract Best Document Identifier from list of summaries, based on a question coming from the user. You are an expert in getting insights of a document, based on its summaries and you are able to figure the best matches to the question in terms of the summary of the document. Provide no more than 3 of these documents. ## Format of the Input - The input is a markdown file containing second level headers (##) with the chapter index in the form ## Document <document_id> where document_id is an integer pointing to the index of the document. After the document heading there is the summary of the document which is relevant to understand the content of the document. ## Format of the output - The output is going to be the list of the best documents indices and a few of the corresponding summaries that help to answer the question coming from the user. ## Content - Here is the input you can work on: {{documents_context}} """, ), ( "human", "Can you tell me what are the most relevant document ids for this question: {question}" ), ("human", "Tip: Make sure to answer in the correct format"), ] ) Prepare context The get_context function prepares the input data by retrieving document summaries and formatting them appropriately. def get_context(input_data: dict) -> dict: collection_name = input_data.get("collection_name") question = input_data.get("question") documents_context = get_document_summaries_markdown(collection_name) # print(context) return {"documents_context": documents_context, "collection_name": collection_name, "question": question} Bind the structured output The llm.with_structured_output(BestDocuments) method instructs the LLM to produce output conforming to the BestDocuments Pydantic model. 
This method internally handles the parsing and validation of the LLM’s response, ensuring that the output matches the expected structure. LangChain’s with_structured_output method simplifies the process of binding the model to produce structured outputs. It abstracts the complexity of parsing and validating the LLM’s responses, enabling you to focus on defining the desired output structure and the prompt instructions. Finally, create a chain to process the input and generate the structured output: chain_document_expert = ( RunnableLambda(get_context) | prompt_document_expert | llm.with_structured_output(BestDocuments) | (lambda x: x.dict()) ) End-to-end tool in action To get started with the multimodal retrieval system, clone the /NVIDIA/GenerativeAIExamples GitHub repo and follow the Quick Start guide to set up the service. When it’s up and running, open your web browser and navigate to http://localhost:7860 to access the system through the Gradio user interface. For example, explore how the system processes queries on the NVIDIA Technical Blog. Ask a question about a bar chart showing the NVIDIA H100 GPU performance from one of the posts. The Select Question field is for evaluation purposes, with the Ground Truth Answer field value provided by a human. Figure 4. Agent multi-document evaluation This system generates an accurate answer based on the bar chart and also displays the relevant image for reference, such as the chart showing RetinaNet achieving 54%. This ensures precise answers while enabling users to visually verify the referenced data. Figure 5. Agent result with source graph for verification Video 1. How to Insert HTML Documents into a Multimodal Retriever Collection Using NVIDIA NIM Video 2. How to Search Text and Images Within a Multimodal Retriever Collection Using NVIDIA NIM Challenges and solutions As data volumes increase, so does the complexity of processing and retrieving relevant information. Handling large datasets efficiently is essential to maintaining performance and ensuring user satisfaction. In this information retrieval system, the sheer amount of document summaries can exceed the context window of even long-context models, making it challenging to process all summaries in a single prompt. Processing large volumes of data also demands considerable computational resources, which can result in higher costs and increased latency. Optimizing resource utilization is crucial to delivering fast and accurate responses while minimizing unnecessary expenses. Hierarchical document reranking solution To address scalability challenges, we implemented a hierarchical approach in the initial document reranking phase. Instead of processing all document summaries simultaneously, we divided them into manageable batches that fit within the model’s context window. The process involves multiple stages: Batch processing : Summaries are grouped into batches that the model can handle without exceeding the prompt size limitations. Intermediate reranking : The model evaluates each batch separately, ranking the documents within each group. Selection of top candidates : The most relevant documents from each batch are selected to proceed to the next stage. Final reranking : The top candidates from all batches are combined and re-evaluated to identify the most relevant document. Considering both scalability and cost concerns, this hierarchical approach ensures that all documents are considered without exceeding the model’s capacity. 
It not only improves scalability, but also boosts efficiency by narrowing down the candidate documents systematically until the most relevant one is identified. Future prospects with smaller models Using language models, especially those with long-context capabilities, involves processing a large number of tokens, which can incur significant costs. Each token processed adds to the overall expense, making cost management a critical consideration when deploying these systems at scale. The concern about cost is indeed valid. However, the landscape of language models is rapidly evolving, with smaller models becoming increasingly capable and efficient. As these advancements continue, these smaller models may offer similar performance at a fraction of the cost. Conclusion This post discussed the implementation of a simple multimodal information retrieval pipeline that uses NVIDIA NIM and LangGraph. The pipeline offers several advantages over existing information retrieval methods: Enhanced comprehension of documents A multimodal model to extract information from images, tables, and text Seamless integration of external tools Generation of consistent and structured output Using NVIDIA NIM and LangGraph, you can build on this work and customize it to suit specific needs. To get started, you can find source code in the /NVIDIA/GenerativeAIExamples GitHub repo. NVIDIA NIM also offers access to more models optimized for NVIDIA GPUs. You can explore NVIDIA NeMo , a scalable generative AI framework designed for researchers and PyTorch developers working on LLMs, multimodal models, and more. If you are working with a large corpora of enterprise data and are looking to develop enterprise-ready, real-time multilingual and cross-lingual information retrieval systems to generate context-aware responses, learn more about NVIDIA NeMo Retriever .
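To make the hierarchical reranking strategy described above more concrete, here is a schematic Python sketch. The rank_batch callable is a placeholder for the LLM call that scores a batch of summaries, and the batch and candidate sizes are illustrative values, not the ones used in the reference implementation.

```python
# Schematic sketch of hierarchical document reranking.
# rank_batch stands in for the LLM call that orders a batch of summaries by relevance.
from typing import Callable

def hierarchical_rerank(
    summaries: dict[str, str],                                # document_id -> summary
    rank_batch: Callable[[dict[str, str], str], list[str]],   # returns ids, best first
    question: str,
    batch_size: int = 50,        # sized so each batch fits the model's context window
    keep_per_batch: int = 3,     # top candidates promoted from each batch
) -> list[str]:
    ids = list(summaries)
    # Stage 1: intermediate reranking within context-window-sized batches.
    finalists: list[str] = []
    for i in range(0, len(ids), batch_size):
        batch = {doc_id: summaries[doc_id] for doc_id in ids[i : i + batch_size]}
        finalists.extend(rank_batch(batch, question)[:keep_per_batch])
    # Stage 2: final reranking over the combined top candidates from all batches.
    final_batch = {doc_id: summaries[doc_id] for doc_id in finalists}
    return rank_batch(final_batch, question)
```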
https://developer.nvidia.com/zh-cn/blog/building-a-simple-vlm-based-multimodal-information-retrieval-system-with-nvidia-nim/
使用 NVIDIA NIM 构建基于 VLM 的简单多模态信息检索系统
在当今数据驱动的世界中,即使是从少量数据中检索准确信息的能力,对于寻求精简、有效的快速部署、原型设计或实验解决方案的开发者来说也至关重要。信息检索领域的主要挑战之一是管理非结构化数据集中的各种模式,包括文本、PDF、图像、表格、音频、视频等。 多模态 AI 模型通过同时处理多个数据模式来应对这一挑战,以不同的形式生成连贯一致的全面输出。 NVIDIA NIM 微服务可简化 AI 基础模型 在语言、 计算机视觉 、语音、生物学等领域的安全可靠部署。 NIM 微服务可随时随地部署在 NVIDIA 加速基础设施上,并提供行业标准 API,以快速集成应用和热门 AI 开发框架 (包括 LangChain 和 LlamaIndex)。 本文将帮助您着手构建基于 视觉语言模型 (VLM)的多模态信息检索系统,该系统能够回答涉及文本、图像和表格的复杂查询。我们将引导您使用 LangGraph 部署应用程序、先进的 llama-3.2-90b-vision-instruct VLM、经过优化的 mistral-small-24B-instruct 大语言模型(LLM),以及用于部署的 NVIDIA NIM。 与传统方法相比,这种构建简单信息检索系统的方法具有许多优势。最新的 VLM NIM 微服务可在不牺牲一致性的情况下处理冗长而复杂的视觉文档,从而增强上下文理解。通过集成 LangChain 的工具调用 ,系统能够创建工具,动态选择和使用外部工具,并提高从各种来源提取和解释数据的精度。 此系统适用于企业应用,因为它生成结构化输出,确保响应的一致性和可靠性。有关此系统的实施步骤的更多信息,请参阅 /NVIDIA/GenerativeAIExamples Github 仓库。 简单的 HTML 多模态检索工作流 该系统由以下管道组成: 文档提取和预处理:在图像上运行 VLM 并将其转换为文本。 问答:允许用户提出系统问题。 这两个工作流均集成了 NVIDIA NIM 和 LangGraph,可有效处理和理解文本、图像、复杂的可视化效果和表格。 数据提取和预处理 pipeline 此阶段会解析文档,分别处理文本、图像和表格。首先将表格转换为图像,然后由 NVIDIA 托管的 NIM 微服务 API 端点为 llama-3.2-90b-vision-instruct VLM 处理图像,以生成描述性文本。 接下来,在文档重建步骤中,描述性文本将与文档的原始文本合并,然后由具有长上下文建模功能的 LLM 进行汇总。在此实施中,还可将 Llama-3.2-90b-vision-instruct 用作 LLM,不过也可部署其他 LLM(例如 mistral-small-24b-instruct)。 最后,完整的文本、摘要、图像及其说明将存储在 NoSQL 数据库中,以及唯一的文档标识符。 图 1. 数据提取和预处理管道 采用长上下文建模的 LLMs 可以处理整个文档,而不会出现碎片,从而在单个通道中增强对文档的理解,并捕获更长的文本跨度中的关系和细微差别,从而实现更准确的信息检索。 相比之下,传统模型可能会处理多达数千个 tokens 的输入,因此需要将冗长的文档拆分成较小的块,以适应模型的上下文窗口。这种分块过程会破坏一致性和上下文,使准确检索相关信息并对其进行排名变得更加困难。 但是,长上下文建模会带来与可扩展性和成本相关的挑战,在以更高的准确性进行权衡时必须考虑这些挑战。 QA 管道 所有文档摘要及其标识符都会编译成一个大型提示。发送查询时,使用长上下文建模(本例中为 mistral-small-24b-instruct)的 LLM 会处理问题,评估每个摘要与查询的相关性,并返回最相关文档的标识符。 图 2、问题回答管道 接下来,将最相关的文档输入到具有长上下文 (mistral-small-24b-instruct) 的 LLM 中。模型会根据文本内容生成查询答案。如果模型根据描述性文本识别出图像可能包含相关信息,则会触发另一个步骤:将原始图像和用户的问题发送至 VLM (llama-3.2-90b-vision-instruct),VLM 可以根据实际视觉内容提供答案。 最后,该系统将文本和视觉见解相结合,以提供全面的答案。 结构化输出可确保模型返回的数据符合预定义的格式,从而更轻松地提取特定信息并执行后续运算。相比之下,非结构化或可变输出会在解析模型的响应时引入模糊性和困难,从而阻碍自动化以及与其他系统的集成。 从模型生成结构化数据通常需要精心设计的提示,以指导模型以特定格式(例如 JSON)做出响应。但是,由于模型自然倾向于生成自由格式文本,因此确保一致性遵循此结构可能具有挑战性。 NVIDIA NIM 现在原生支持生成结构化输出的功能。这意味着,您可以依靠内置功能来确保模型的响应格式保持一致,从而减少对复杂提示工程的需求。 将 NVIDIA NIM 与 LangChain 集成 NVIDIA NIM 为您的应用提供与热门框架和最新 AI 模型的无缝兼容性。该流程的实施将 NVIDIA NIM 与 LangChain 相集成。LangChain 是一个用于构建代理应用以确定控制流的框架,已被开发者社区广泛采用。为编排此工作流的工作流,该图形主要由两个节点组成: 助理节点:充当负责管理逻辑和决策过程的代理。它与用户的输入进行交互,并调用必要的工具。 工具节点:用于执行助手所需特定任务的工具集合。 图 3、使用 LangGraph 为管道构建代理 助理节点 助手节点是根据图 3 中概述的工作流程运行的主代理。主代理的代码可在 /NVIDIA/GenerativeAIExamples GitHub repo 中找到。 智能体输入如下: Collection_name :要在其上搜索的文档集。 Question :用户的问题。 document_id :(可选) 如果提供,代理会跳过文档排名阶段。 这是智能体流程: 文档选择 :如果未提供 document_id ,代理会调用 find_best_document_id 工具,该工具可在指定集合中识别与用户问题最相关的文档。 问题回答:对于 document_id ,代理使用 query_document 工具。此工具会尝试使用 LLM (mistral-small-24b-instruct) 根据文档中的文本和图像描述来回答此问题。 图像分析 (如有必要):如果 query_document 工具表明答案可能在图像中 (通过返回 image_hash 值),代理会调用 query_image 工具。此工具会检索实际图像,并使用 VLM 分析图像并寻找答案。 工具节点 我们为智能体实施了三个关键工具来执行任务。 Find_best_document_id :在未提供 document_id 时,确定与用户问题最相关的文档。有关更多信息,请参阅 /NVIDIA/GenerativeAIExamples Github 存储库。 query_document :在指定文档中搜索答案。如果答案可能在图像中,则会提供查询图像所需的详细信息。有关更多信息,请参阅 /NVIDIA/GenerativeAIExamples GitHub 存储库。 query_image :当答案可能在图像内容中时,使用 VLM 分析实际图像。有关更多信息,请参阅/ NVIDIA/GenerativeAIExamples 。 将外部工具与模型绑定 工具调用是一项功能,可让语言模型根据收到的提示集成外部工具或函数并与之交互。此机制使模型能够决定使用哪些工具以及如何使用这些工具来完成特定任务。 工具绑定使模型能够动态扩展其功能,在执行期间选择合适的工具,以提供更准确的上下文感知响应。 绑定外部工具在代理框架中尤为重要,在这种框架中,代理必须选择合适的工具并提供有效执行任务所需的参数。绑定外部工具的优势包括: 扩展功能 :模型可以执行计算、数据检索或 API 调用等复杂操作,而不仅仅是文本生成。 动态工具选择 :模型可以实时评估哪些工具最适合任务,从而提高效率和相关性。 无缝集成:NVIDIA NIM 支持将 LangChain 和 LangGraph 等外部工具与 Llama 3.3 等开放式社区模型集成。您可以采用这些高级功能,而无需对现有系统进行重大更改。 在此实现中,使用 LangChain 的 @tool 装饰器创建三个工具,然后使用 .bind_tools 
方法将这些工具与模型绑定。 使用 PyTorch 定义结构化输出 通过使用 Pydantic 定义输出模式,并通过精确的提示引导 LLM NIM 微服务 (例如 mistral-small-24b-instruct) ,您可以确保响应一致、可靠,并且易于被系统中的其他组件使用。当将 LLM 集成到自动化工作流和基于代理的框架 (例如 LangChain) 时,这种方法至关重要。 定义结构 首先,使用 Pydantic 定义 LLM 的预期输出结构。这可确保模型返回的数据保持一致,并可轻松解析以进行下游处理。 from typing import List, Optional from pydantic import BaseModel, Field class Document(BaseModel): """ Represents a document with an identifier and its summary. """ id: str = Field(..., description="Hash identifier of the document") summary: str = Field(..., description="The summary of the document as is") class BestDocuments(BaseModel): """ Contains a list of the best documents to answer the question and their summaries. """ documents: List[Document] = Field(..., description="List of best documents") class Answer(BaseModel): """ Represents the answer to the user's question. """ answer: str = Field(..., description="Answer to the question posed by the user") 接下来,指示 LLM 生成与定义的 Pydantic 结构保持一致的输出。这是通过在提示符中加入特定指令并使用 LangChain 的 with_structured_output 方法实现的。 定义提示 prompt_document_expert 包含 LLM 的详细说明,可指定预期的输入格式 (带有文档摘要的 Markdown) 和所需的输出格式 (与 BestDocuments 架构匹配的 JSON)。 from langchain.chat_models import ChatNVIDIA from langchain.prompts import ChatPromptTemplate # Initialize the LLM with desired parameters llm = ChatNVIDIA(model="mistralai/mistral-small-24b-instruct ", temperature=0, max_tokens=3000) # Define the prompt template for the document expert prompt_document_expert = ChatPromptTemplate.from_messages( [ ( "system", f""" # Extract Best Document Identifier from list of summaries, based on a question coming from the user. You are an expert in getting insights of a document, based on its summaries and you are able to figure the best matches to the question in terms of the summary of the document. Provide no more than 3 of these documents. ## Format of the Input - The input is a markdown file containing second level headers (##) with the chapter index in the form ## Document <document_id> where document_id is an integer pointing to the index of the document. After the document heading there is the summary of the document which is relevant to understand the content of the document. ## Format of the output - The output is going to be the list of the best documents indices and a few of the corresponding summaries that help to answer the question coming from the user. 
## Content - Here is the input you can work on: {{documents_context}} """, ), ( "human", "Can you tell me what are the most relevant document ids for this question: {question}" ), ("human", "Tip: Make sure to answer in the correct format"), ] ) 准备上下文 get_context 函数通过检索文档摘要并对其进行适当格式化来准备输入数据。 def get_context(input_data: dict) -> dict: collection_name = input_data.get("collection_name") question = input_data.get("question") documents_context = get_document_summaries_markdown(collection_name) # print(context) return {"documents_context": documents_context, "collection_name": collection_name, "question": question} 绑定结构化输出 llm.with_structured_output(BestDocuments) 方法指示 LLM 生成符合 BestDocuments Pydantic 模型的输出。此方法在内部处理 LLM 响应的解析和验证,确保输出与预期结构相匹配。 LangChain 的 with_structured_output 方法简化了绑定模型以生成结构化输出的过程。它抽象化了解析和验证 LLM 响应的复杂性,使您能够专注于定义所需的输出结构和提示指令。 最后,创建一个链来处理输入并生成结构化输出: chain_document_expert = ( RunnableLambda(get_context) | prompt_document_expert | llm.with_structured_output(BestDocuments) | (lambda x: x.dict()) ) 端到端工具的实际应用 要开始使用多模态检索系统,请克隆 /NVIDIA/GenerativeAIExamples GitHub 存储库,然后按照快速入门指南设置服务。在服务启动并运行时,打开 Web 浏览器并导航至 http://localhost:7860 ,通过 Gradio 用户界面访问系统。 例如,在 NVIDIA 技术博客上探索系统如何处理查询。在其中一篇博文中,您可以询问有关显示 NVIDIA H100 GPU 性能的条形图的问题。“ Select Question ” 字段用于评估,真值答案字段值由人类提供。 图 4、Agent 多文档评估 该系统会根据条形图生成准确的答案,并显示相关图像以供参考,例如图表显示 RetinaNet 达到了 54%。这可确保准确的答案,同时使用户能够以直观方式验证引用数据。 图 5、Agent 结果与用于验证的源图形 视频1. 如何使用 NVIDIA NIM 将 HTML 文档插入多模态检索器集合 视频2. 如何使用 NVIDIA NIM 在多模态检索器集合中搜索文本和图像 挑战和解决方案 随着数据量的增加,处理和检索相关信息的复杂性也随之增加。高效处理大型数据集对于保持性能和确保用户满意度至关重要。在此信息检索系统中,文档摘要的数量甚至可能超过长上下文模型的上下文窗口,这使得在单个提示中处理所有摘要具有挑战性。 处理大量数据还需要大量计算资源,这可能会导致成本增加和延迟增加。优化资源利用率对于提供快速准确的响应,同时最大限度地减少不必要的支出至关重要。 分层文档重新排序解决方案 为应对可扩展性挑战,我们在初始文档重新排序阶段实施了分层方法。我们不会同时处理所有文档摘要,而是将其分为可管理的批量,以适应模型的上下文窗口。此过程涉及多个阶段: 批量处理 :将摘要分组为模型可以处理的批量,且不会超过提示大小限制。 中级重新排序 :模型分别评估每个批次,对每个组中的文档进行排序。 选择最优秀的候选文档 :从每个批次中选择最相关的文档,以进入下一阶段。 最终重新排名 :系统会对所有批次中排名靠前的候选文档进行合并和重新评估,以确定相关性最高的文档。 考虑到可扩展性和成本问题,这种分层方法可确保在不超出模型容量的情况下考虑所有文档。它不仅提高了可扩展性,而且还通过系统缩小候选文档的范围来提高效率,直到识别出最相关的文档。 小型模型的未来前景 使用语言模型,尤其是具有长上下文功能的语言模型,涉及处理大量 token,而这可能会产生巨大的成本。处理的每个 token 都会增加总支出,因此在大规模部署这些系统时,成本管理是一个重要考虑因素。 对成本的担心确实是站得住脚的。然而,语言模型的格局正在迅速演变,小型模型的功能和效率也在不断提升。随着这些进步的继续,这些较小的模型可能以远低于成本提供相似的性能。 结束语 本文讨论了如何使用 NVIDIA NIM 和 LangChain 实现简单的多模态信息检索工作流。与现有的信息检索方法相比,Pipeline 具有以下优势: 增强对文档的理解 用于从图像、表格和文本中提取信息的多模态模型 无缝集成外部工具 生成一致的结构化输出 借助 NVIDIA NIM 和 LangGraph,您可以在此基础上进行构建并对其进行定制,以满足特定需求。首先,您可以在 /NVIDIA/GenerativeAIExamples GitHub repo 中找到源代码。 NVIDIA NIM 还支持访问更多针对 NVIDIA GPU 优化的模型。您可以探索 NVIDIA NeMo ,这是一个可扩展的生成式 AI 框架,专为研究 LLM、多模态模型等的研究人员和 PyTorch 开发者而设计。 如果您正在处理大型企业数据语料库,并希望开发企业就绪的实时多语种和跨语言信息检索系统来生成上下文感知响应,请详细了解 NVIDIA NeMo Retriever 。
https://developer.nvidia.com/blog/optimizing-qwen2-5-coder-throughput-with-nvidia-tensorrt-llm-lookahead-decoding/
Optimizing Qwen2.5-Coder Throughput with NVIDIA TensorRT-LLM Lookahead Decoding
Large language models (LLMs) that specialize in coding have been steadily adopted into developer workflows. From pair programming to self-improving AI agents , these models assist developers with various tasks, including enhancing code, fixing bugs, generating tests, and writing documentation. To promote the development of open-source LLMs, the Qwen team recently released Qwen2.5-Coder, a family of advanced LLMs for code generation, reasoning, and fixing across popular programming languages. This post explores the benefits of inference optimizations for Qwen2.5-Coder models supported in NVIDIA TensorRT-LLM , and the ease of deployment with NVIDIA NIM for transformative potential and coding efficiency. Qwen2.5-Coder models The Qwen2.5-Coder models have achieved state-of-the-art performance across popular academic benchmarks. NVIDIA TensorRT-LLM has optimized three popular models from the Qwen2.5-Coder family—the 1.5B, 7B, and 32B versions—for high throughput and low latency. TensorRT-LLM is a library for fast, efficient LLM inference and includes optimizations such as dynamic inflight batching , KV caching , KV cache reuse , and several speculative decoding techniques, among others. These optimizations help deliver performance improvements for the Qwen2.5-Coder models on popular programming languages such as Python, C++, Java, Bash, Javascript, TypeScript, and Go, reaching a wider range of developers. This post explores the lookahead decoding optimization and the performance boost it helps achieve. Without any additional training or need for additional draft models, developers can leverage the TensorRT-LLM high-level API to speed up Qwen2.5-Coder inference to generate multiline autocode completion. Lookahead decoding Lookahead decoding is a speculative decoding technique that addresses the slow autoregressive nature of LLMs. Each autoregressive decoding step only generates one token at a time, not leveraging the massive parallel processing power of NVIDIA GPUs, leading to low GPU utilization and lower throughput. We’ve previously discussed the throughput boost achievable with draft target speculative decoding , and here we discuss the benefits of leveraging TensorRT-LLM lookahead decoding implementation using the Qwen2.5-Coder models as an example. Unlike the single-token generation in autoregressive decoding, lookahead decoding generates multiple tokens simultaneously, adequately utilizing the parallel processing capabilities of the GPU, leveraging computation (FLOPs) for latency reduction. Moreover, lookahead decoding doesn’t require a separate draft model that’s needed for draft target speculative decoding. Each decoding step is divided into two parallel branches, the lookahead branch and the verification branch. Using the Jacobi iteration method , a classic nonlinear systems solver, the lookhead branch performs parallel decoding for future tokens by generating n-grams. The verification branch selects and verifies the promising n-gram candidates generated by the lookahead branch. The lookahead algorithm is configured using three key parameters: window size (W), n-gram size (N), and verification set size (G). Window size (W): Represents the lookahead window size, which determines how many future tokens the algorithm attempts to predict in each step. Larger window size enables the model to look further, helping generate more tokens in a single pass. This effectively improves throughput performance while utilizing GPU computation FLOPs efficiently. 
N-gram size (N): Represents the size of the n-grams used in the lookahead process. For example, a 5-gram is a contiguous sequence of 5 future tokens. Together with the window size, it creates a fixed-sized, 2D window for the lookahead branch to generate n-grams from the Jacobi iteration trajectory. Verification set size (G): Represents the maximum number of speculations or candidate n-grams that the algorithm considers in each step for verification. It balances the trade-off between computation efficiency and exploring more possibilities. Figure 1. Lookahead decoding workflow with (W, N, G) = (5, 3, 2). Image credit: Break the Sequential Dependency of LLM Inference Using Lookahead Decoding Lookahead performance greatly depends on the base model, hardware, batch size, sequence length, and the dataset. It is recommended to profile various configurations to find the best (W, N, G) configuration given the setup. Optimal (W, N, G) tuple configuration enables lookahead decoding to deliver improved throughput performance without the need for any additional training, fine-tuning or draft models. Through our experiments on (W, N, G) configuration values sweep, we achieve 3.6x and 1.6x throughput speedups for Qwen2.5-Coder 7B Instruct and Qwen2.5-Coder 32B Instruct models, respectively. These speedups are measured in throughput (tokens/second) compared to baseline (no lookahead speculative decoding) on NVIDIA H100 Tensor Core GPUs , as shown in Figure 2. Figure 2. Qwen2.5-Coder models throughput boost on NVIDIA DGX H100 with TensorRT-LLM lookahead decoding Data measured on 01/30/2025. Inference throughput (output tokens/second) speedups of Qwen2.5-Coder 7B Instruct and Qwen2.5-Coder 32B Instruct models. DGX H100, TP=1 | (W, N, G) = (8, 8, 8) | Qwen2.5-Coder 7B Instruct, TP=2 | (W, N, G) = (15, 15, 15) | Qwen2.5-Coder-32B-Instruct, batch size=1, TensorRT-LLM version 0.15.0​. Similar throughput speedups are achieved on NVIDIA H200 Tensor Core GPUs . With their higher memory bandwidth, they also help raise the baseline throughput performance leading to slightly lower speedups as compared to H100 GPUs (Figure 3). Figure 3. Qwen2.5-Coder models throughput boost on NVIDIA DGX H200 with TensorRT-LLM lookahead decoding Data measured on 01/30/2025. Inference throughput (output tokens/second) speedups of Qwen2.5-Coder 7B Instruct and Qwen2.5-Coder 32B Instruct models. DGX H200, TP=1 | (W, N, G) = (8, 8, 8) | Qwen2.5-Coder 7B Instruct, TP=2 | (W, N, G) = (15, 15, 15) | Qwen2.5-Coder 32B Instruct, batch size=1, TensorRT-LLM version 0.15.0​. Steps to run lookahead decoding with TensorRT-LLM To reproduce these performance gains using lookahead speculative decoding within TensorRT-LLM, follow the steps below. # Install TensorRT-LLM. (Commands below are for Linux. Refer to TensorRT-LLM docs for Windows) sudo apt-get -y install libopenmpi-dev && pip3 install --upgrade setuptools && pip3 install tensorrt_llm --extra-index-url https://pypi.nvidia.com Then run lookahead decoding in TensorRT-LLM using the high-level API. # Command for Qwen2.5-Coder-7B-Instruct from tensorrt_llm import LLM, SamplingParams from tensorrt_llm.llmapi import (LLM, BuildConfig, KvCacheConfig, LookaheadDecodingConfig, SamplingParams) def main(): """The end user can customize the build configuration with the build_config class. # Max draft length is based on (W,N,G) values and calculated as: (W + G -1) * (N-1) + ( N<=1 ? 
0: N-2)""" build_config = BuildConfig(max_batch_size = 128, max_input_len = 2048, max_seq_len = 4096, max_num_tokens = 16384, max_draft_len = 111) build_config.plugin_config.reduce_fusion = True build_config.plugin_config.use_paged_context_fmha = True build_config.plugin_config.multiple_profiles = True # The configuration for lookahead decoding lookahead_config = LookaheadDecodingConfig(max_window_size=8, max_ngram_size=8, max_verification_set_size=8) kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.4) llm = LLM(model="Qwen/Qwen2.5-Coder-7B-Instruct", kv_cache_config=kv_cache_config, build_config=build_config, speculative_config=lookahead_config) prompt = """Write a C++ program to find the nth Fibonacci number using recursion. Now we define a sequence of numbers in which each number is the sum of the three preceding ones. The first three numbers are 0, -1, -1. Write a program to find the nth number.""" sampling_params = SamplingParams(lookahead_config=lookahead_config) output = llm.generate(prompt, sampling_params=sampling_params) print(output) if __name__ == '__main__': main() Summary Lookahead speculative decoding enables throughput boost on LLMs without any additional training, fine-tuning, or draft models. We presented benchmarked performance improvements on Qwen2.5-Coder models. Visit build.nvidia.com to try the Qwen2.5-Coder models optimized with NVIDIA TensorRT-LLM for free. Qwen2.5-Coder models optimized with TensorRT-LLM have also been packaged as downloadable NVIDIA NIM microservices for ease of deployment. Acknowledgments We would like to thank Liwei Ma, Fanrong Li, Nikita Korobov, and Martin Marciniszyn Mehringer  for their efforts in supporting this post.
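As a quick sanity check on the max_draft_len formula quoted in the build configuration comment above, the small helper below computes it from the (W, N, G) lookahead values; for the (8, 8, 8) configuration used in the example it returns 111, matching the value passed to BuildConfig.

```python
# Compute max_draft_len from the (W, N, G) lookahead configuration,
# following the formula in the code comment above.
def lookahead_max_draft_len(window_size: int, ngram_size: int, verification_set_size: int) -> int:
    w, n, g = window_size, ngram_size, verification_set_size
    return (w + g - 1) * (n - 1) + (0 if n <= 1 else n - 2)

assert lookahead_max_draft_len(8, 8, 8) == 111  # matches max_draft_len in the example above
```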
https://developer.nvidia.com/zh-cn/blog/optimizing-qwen2-5-coder-throughput-with-nvidia-tensorrt-llm-lookahead-decoding/
使用 NVIDIA TensorRT-LLM 前瞻性解码优化 Qwen2.5-Coder 吞吐量
专注于编码的 大语言模型(LLMs) 已稳步应用于开发者工作流程。从配对编程到自我改进的 AI 智能体 ,这些模型可帮助开发者完成各种任务,包括增强代码、修复错误、生成测试和编写文档。 为促进开源 LLM 的开发,Qwen 团队最近发布了 Qwen2.5-Coder,这是一系列先进的 LLM,用于跨热门编程语言的代码生成、推理和修复。本文将探讨针对 NVIDIA TensorRT-LLM 支持 的 Qwen2.5-Coder 模型进行推理优化的优势,以及借助 NVIDIA NIM 轻松部署以提升变革潜力和编码效率的好处。 Qwen2.5-Coder 模型 Qwen2.5-Coder 模型在热门的学术基准测试中取得了出色的性能。 NVIDIA TensorRT-LLM 已对 Qwen2.5-Coder 系列的三种热门模型 (1.5B、7B 和 32B 版本) 进行优化,以实现高吞吐量和低延迟。TensorRT-LLM 是一个用于快速、高效 LLM 推理的库,包含动态机上 批处理 、 KV 缓存 、 KV 缓存重复使用 和几种预测性解码技术等优化功能。 这些优化有助于提高 Qwen2.5-Coder 模型在 Python、C++、Java、Bash、Javascript、TypeScript 和 Go 等热门编程语言中的性能,从而使更多开发者受益。本文将探讨 lookahead decoding 优化的前瞻性及其有助于实现的性能提升。开发者无需进行任何额外训练,也无需额外的草图模型,即可利用 TensorRT-LLM 高级 API 加速 Qwen2.5-Coder 推理,以生成多行自动代码完成。 解码前景展望 解码前瞻是一种预测性解码技术,可解决 LLMs 缓慢自回归的问题。每个自回归解码步骤一次仅生成一个 token,无法利用 NVIDIA GPUs 强大的并行处理能力,导致 GPU 利用率低、吞吐量低。我们之前讨论过通过草稿目标预测解码可以实现的吞吐量提升,在这里,我们讨论了以 Qwen2.5-Coder 模型为例,利用 TensorRT-LLM lookahead decoding 实现的优势。 与自回归解码中的单令牌生成不同,前瞻性解码可同时生成多个令牌,充分利用 GPU 的并行处理能力,利用计算(FLOPs)降低延迟。此外,对于草稿目标预测性解码,前瞻性解码不需要使用单独的草稿模型。 每个解码步骤分为两个并行分支,即 lookahead 分支和验证分支。通过使用经典的非线性系统求解器 Jacobi 迭代法 ,lookahead 分支通过生成 n-grams 来对未来的 tokens 执行并行解码。验证分支选择并验证由 lookahead 分支生成的有前景的 n-gram 候选项。 前瞻性算法使用三个关键参数进行配置:窗口大小(W),n-gram 大小(N)和验证集大小(G)。 窗口大小 (W):表示前瞻性窗口大小,它决定了算法在每个步骤中尝试预测的未来令牌数量。窗口大小越大,模型的视野越广,一次传递就能生成更多 token。这可有效提高吞吐量性能,同时高效利用 GPU 计算 FLOPs。 N-gram size (N):表示前瞻性流程中使用的 N – gram 的大小。例如,5-gram 是由 5 个未来令牌组成的连续序列。它与窗口大小一起为前瞻性分支创建了一个大小固定的 2D 窗口,以便从 Jacobi 迭代轨迹生成 n-gram。 验证集大小 (G):表示算法在每个验证步骤中考虑的推测或候选 n-gram 的最大数量。它平衡了计算效率与探索更多可能性之间的权衡。 图 1、使用 (W,N,G) = (5,3,2) 展望解码工作流程。图片来源: Break the Sequential Dependency of LLM Inference Using Lookahead Decoding 未来的性能很大程度上取决于基础模型、硬件、批量大小、序列长度和数据集。建议分析各种配置,以找到给定设置的最佳 (W,N,G) 配置。最佳 (W,N,G) 元组配置支持 lookahead 解码前瞻性,无需任何其他训练、fine-tuning 或 draft 模型,即可提供更高的吞吐量性能。 通过对 (W,N,G) 配置值扫描的实验,我们分别为 Qwen2.5-Coder 7B Instruct 和 Qwen2.5-Coder 32B Instruct 模型实现了 3.6 倍和 1.6 倍的吞吐量加速。这些加速是通过 NVIDIA H100 Tensor Core GPUs 上的吞吐量 (tokens/second) 与基线 (无 lookahead speculative decoding) 的比较进行测量的,如 Figure 2 所示。 图 2、借助 TensorRT-LLM 超前解码,Qwen2.5-Coder 模型可提升 NVIDIA DGX H100 上的吞吐量 数据测量日期:2025 年 1 月 30 日。Qwen2.5-Coder 7B Instruct 和 Qwen2.5-Coder 32B Instruct 模型的推理吞吐量(输出令牌/秒)加速。DGX H100,TP=1 | (W,N,G)= (8,8,8)| Qwen2.5-Coder 7B Instruct,TP=2 | (W,N,G)= (15,15,15)| Qwen2.5-Coder-32B-Instruct,批量大小=1,TensorRT-LLM 版本 0.15.0。 NVIDIA H200 Tensor Core GPU 也实现了类似的吞吐量加速。凭借更高的显存带宽,它们还有助于提高基准吞吐量性能,从而使速度略低于 H100 GPU (图 3)。 图 3、Qwen2.5-Coder 模型在 NVIDIA DGX H200 上通过 TensorRT-LLM 超前解码实现吞吐量提升 数据测量日期:2025 年 1 月 30 日。Qwen2.5-Coder 7B Instruct 和 Qwen2.5-Coder 32B Instruct 模型的推理吞吐量(输出令牌/秒)加速。DGX H200,TP=1 | (W,N,G)= (8,8,8)| Qwen2.5-Coder 7B Instruct,TP=2 | (W,N,G)= (15,15,15)| Qwen2.5-Coder 32B Instruct,批量大小=1,TensorRT-LLM 版本 0.15.0。 使用 TensorRT-LLM 进行解码的前瞻性运行步骤 要在 TensorRT-LLM 中使用预测性解码重现这些性能提升,请执行以下步骤。 # Install TensorRT-LLM. (Commands below are for Linux. Refer to TensorRT-LLM docs for Windows) sudo apt-get -y install libopenmpi-dev && pip3 install --upgrade setuptools && pip3 install tensorrt_llm --extra-index-url https://pypi.nvidia.com 然后,使用高级 API 在 TensorRT-LLM 中运行 lookahead decoding。 # Command for Qwen2.5-Coder-7B-Instruct from tensorrt_llm import LLM, SamplingParams from tensorrt_llm.llmapi import (LLM, BuildConfig, KvCacheConfig, LookaheadDecodingConfig, SamplingParams) def main(): """The end user can customize the build configuration with the build_config class. # Max draft length is based on (W,N,G) values and calculated as: (W + G -1) * (N-1) + ( N<=1 ? 
0: N-2)""" build_config = BuildConfig(max_batch_size = 128, max_input_len = 2048, max_seq_len = 4096, max_num_tokens = 16384, max_draft_len = 111) build_config.plugin_config.reduce_fusion = True build_config.plugin_config.use_paged_context_fmha = True build_config.plugin_config.multiple_profiles = True # The configuration for lookahead decoding lookahead_config = LookaheadDecodingConfig(max_window_size=8, max_ngram_size=8, max_verification_set_size=8) kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.4) llm = LLM(model="Qwen/Qwen2.5-Coder-7B-Instruct", kv_cache_config=kv_cache_config, build_config=build_config, speculative_config=lookahead_config) prompt = """Write a C++ program to find the nth Fibonacci number using recursion. Now we define a sequence of numbers in which each number is the sum of the three preceding ones. The first three numbers are 0, -1, -1. Write a program to find the nth number.""" sampling_params = SamplingParams(lookahead_config=lookahead_config) output = llm.generate(prompt, sampling_params=sampling_params) print(output) if __name__ == '__main__': main() 总结 前瞻性预测解码可提高 LLMs 的吞吐量,而无需任何其他训练、微调或草稿模型。我们展示了 Qwen2.5-Coder 模型的基准性能改进。 访问 build.nvidia.com,免费试用通过 NVIDIA TensorRT-LLM 优化的 Qwen2.5-Coder 模型。 为便于部署, 我们还将通过 TensorRT-LLM 优化的 Qwen2.5-Coder 模型打包为可下载的 NVIDIA NIM 微服务。 致谢 在此, 我们要感谢马立伟、李凡融、Nikita Korobov 和 Martin Marciniszyn Mehringer 为支持这篇博文所付出的努力 。
https://developer.nvidia.com/blog/optimize-ai-inference-performance-with-nvidia-full-stack-solutions/
Optimize AI Inference Performance with NVIDIA Full-Stack Solutions
The explosion of AI-driven applications has placed unprecedented demands on both developers, who must balance delivering cutting-edge performance with managing operational complexity and cost, and AI infrastructure. NVIDIA is empowering developers with full-stack innovations—spanning chips, systems, and software—that redefine what’s possible in AI inference , making it faster, more efficient, and more scalable than ever before. Easily deploy high-throughput, low-latency inference Six years ago, NVIDIA set out to create an AI inference server specifically designed for developers building high-throughput, latency-critical production applications. At the time, many developers were grappling with custom, framework-specific servers that increased complexity, drove up operational costs, and struggled to meet stringent service-level agreements for latency and throughput. To address this, NVIDIA developed the NVIDIA Triton Inference Server , an open-source platform capable of serving models from any AI framework. By consolidating framework-specific inference servers, Triton streamlined AI inference deployment and increased AI prediction capacity. This approach has made Triton one of the most widely adopted NVIDIA open-source projects , now used by hundreds of leading organizations to deploy production AI models efficiently. In addition to Triton, NVIDIA offers a broad ecosystem of AI inference solutions. For developers seeking powerful, customizable tools, NVIDIA TensorRT provides a high-performance deep learning inference library with APIs that enable fine-grained optimizations. NVIDIA NIM microservices provide a flexible framework for deploying AI models across the cloud, data centers, or workstations. Optimizations for AI inference workloads Inference is a full-stack problem today, requiring high-performance infrastructure and efficient software to make effective use of that infrastructure. In addition, inference workloads continue to become more challenging, as model sizes continue to grow and latency constraints tighten, all while the number of users leveraging these AI services also continues to increase. And with the introduction of inference time scaling, a new paradigm for scaling model intelligence, more compute is being applied during inference to enhance model performance. These trends mean that it’s important to continue advancing delivered inference performance, even on the same underlying hardware platform. By combining established methods like model parallelism, mixed-precision training, pruning, quantization, and data preprocessing optimization with cutting-edge advancements in inference technologies, developers can achieve remarkable gains in speed, scalability, and cost-effectiveness. The TensorRT-LLM library incorporates many state-of-the-art features that accelerate inference performance for large language models (LLMs) , which are outlined below. Prefill and KV cache optimizations Key-value (KV) cache early reuse : By reusing system prompts across users, the KV Cache Early Reuse feature accelerates time-to-first-token (TTFT) by up to 5x. Flexible KV block sizing and efficient eviction protocols ensure seamless memory management, enabling faster response times even in multi-user environments. Chunked prefill : For smarter deployment, chunked prefill divides the prefill phase into smaller tasks, enhancing GPU utilization and reducing latency. This innovation simplifies deployment and ensures consistent performance, even with fluctuating user demands. 
Supercharging multiturn interactions : The NVIDIA GH200 Superchip architecture enables efficient KV cache offloading, improving TTFT by up to 2x in multiturn interactions with Llama models while maintaining high throughput. Decoding optimization Multiblock attention for long sequences : Addressing the challenge of long input sequences, TensorRT-LLM multiblock attention maximizes GPU utilization by distributing tasks across streaming multiprocessors (SMs). This technique improves system throughput by more than 3x, enabling support for larger context lengths without additional hardware costs. Speculative decoding for accelerated throughput : Leveraging a smaller draft model alongside a larger target model, speculative decoding enables up to a 3.6x improvement in inference throughput. This approach ensures high-speed, high-accuracy generation of model outputs, streamlining workflows for large-scale AI applications. Speculative decoding with Medusa: The Medusa speculative decoding algorithm is available as part of TensorRT-LLM optimizations. By predicting multiple subsequent tokens simultaneously, Medusa boosts throughput for Llama 3.1 models by up to 1.9x on the NVIDIA HGX H200 platform. This innovation enables faster responses for applications that rely on LLMs, such as customer support and content creation. Multi-GPU inference MultiShot communication protocol : Traditional Ring AllReduce operations can become a bottleneck in multi-GPU scenarios. TensorRT-LLM MultiShot, powered by NVSwitch , reduces communication steps to just two, irrespective of GPU count. This innovation boosts AllReduce speeds by up to 3x, making low-latency inference scalable and efficient. Pipeline parallelism for high-concurrency efficiency : Parallelism techniques require that GPUs be able to transfer data quickly and efficiently, necessitating a robust GPU-to-GPU interconnect fabric for maximum performance. Pipeline parallelism on NVIDIA H200 Tensor Core GPUs achieved a 1.5x throughput increase for Llama 3.1 405B and demonstrated their versatility with a 1.2x speedup for Llama 2 70B in MLPerf Inference benchmarks. MLPerf Inference is a suite of industry-standard inference performance benchmarks developed by the MLCommons consortium. Large NVLink domains: The NVIDIA GH200 NVL32 system, powered by 32 NVIDIA GH200 Grace Hopper Superchips connected using the NVLink Switch system, and with TensorRT-LLM improvements, delivers up to 3x faster TTFT for Llama models. With up to 127 petaflops of AI compute, this next-generation architecture sets the stage for unprecedented real-time responsiveness in AI applications. Quantization and lower-precision compute NVIDIA TensorRT Model Optimizer for precision and performance: The NVIDIA custom FP8 quantization recipe in the NVIDIA TensorRT Model Optimizer delivers up to 1.44x higher throughput without sacrificing accuracy. These optimizations enable more cost-effective deployment by reducing latency and hardware requirements for demanding workloads. End-to-end full-stack optimization: NVIDIA TensorRT libraries and FP8 Tensor Core innovations ensure high performance across a wide range of devices, from data center GPUs to edge systems. NVIDIA has optimized the Llama 3.2 collection of models for great performance, demonstrating how full-stack software can adaptively unlock efficiency across diverse AI deployment environments. 
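To make the speculative decoding idea from the decoding optimizations above more tangible, here is a toy, framework-agnostic sketch of draft-target generation with greedy verification. It is only an illustration of the concept, not the TensorRT-LLM implementation, and the draft_next and target_next callables are placeholders for "sample the next token" calls on a small draft model and a large target model.

```python
# Toy illustration of draft-target speculative decoding (not the TensorRT-LLM implementation).
from typing import Callable, List

def speculative_decode(
    prompt: List[int],
    draft_next: Callable[[List[int]], int],   # cheap draft model
    target_next: Callable[[List[int]], int],  # accurate target model
    num_draft_tokens: int = 4,
    max_new_tokens: int = 64,
) -> List[int]:
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new_tokens:
        # 1. The draft model proposes several tokens autoregressively (cheap).
        draft: List[int] = []
        for _ in range(num_draft_tokens):
            draft.append(draft_next(tokens + draft))
        # 2. The target model verifies the proposals; in a real system this check
        #    happens for all draft positions in a single parallel forward pass.
        accepted = 0
        for i in range(num_draft_tokens):
            if target_next(tokens + draft[:i]) == draft[i]:
                accepted += 1
            else:
                break
        tokens.extend(draft[:accepted])
        # 3. The target model always contributes at least one token per step,
        #    so throughput never falls below plain autoregressive decoding.
        tokens.append(target_next(tokens))
    return tokens
```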
With these features, as well as many others within Triton and TensorRT-LLM, developers can now deploy LLMs that are not only faster and more efficient but also capable of handling a wider range of tasks and user demands. This opens new opportunities for businesses to enhance customer service, automate complex processes, and gain deeper insights from their data. Evaluating inference performance Delivering world-class inference performance takes a full technology stack (chips, systems, and software), all contributing to boosting throughput, reducing energy consumption per token, and minimizing costs. One key measure of inference performance is MLPerf Inference. The benchmark measures inference throughput under standardized conditions, with results subject to extensive peer review. It is regularly updated to reflect new advances in AI, ensuring that organizations can rely on these results to evaluate platform performance. In the latest round of MLPerf Inference, NVIDIA Blackwell made its debut , delivering up to 4x more performance than the NVIDIA H100 Tensor Core GPU on the Llama 2 70B benchmark. This achievement was the result of the many architectural innovations at the heart of the Blackwell GPU, including the second-generation Transformer Engine with FP4 Tensor Cores and ultrafast HBM3e GPU memory that delivers 8 TB/s of memory bandwidth per GPU. In addition, many aspects of the NVIDIA software stack, including NVIDIA TensorRT-LLM, were re-engineered to make use of new capabilities in Blackwell, such as support for FP4 precision, while continuing to meet the rigorous accuracy target of the benchmark. The NVIDIA H200 Tensor Core GPU, available now from server makers and cloud service providers, also achieved outstanding results on every benchmark in the data center category. This includes the newly added Mixtral 8x7B mixture-of-experts (MoE) LLM, as well as the Llama 2 70B LLM and Stable Diffusion XL text-to-image tests. As a result of continued software improvements, the Hopper architecture delivered up to 27% more inference performance compared to the prior round. NVIDIA Triton Inference Server, running on a system with eight H200 GPUs, achieved virtually identical performance to the NVIDIA bare-metal submission on the Llama 2 70B benchmark in MLPerf Inference v4.1. This shows that enterprises no longer need to choose between a feature-rich, production-grade AI inference server and peak throughput performance; both can be achieved simultaneously with NVIDIA Triton. The future of AI inference: Emerging trends and technologies The landscape of AI inference is rapidly evolving, driven by a series of groundbreaking advancements and emerging technologies. Models continue to get smarter, as increases in compute at data center scale enable pretraining larger models. The introduction of sparse mixture-of-experts model architectures, such as GPT-MoE 1.8T, will also help boost model intelligence while improving compute efficiency. These larger models, whether dense or sparse, will require that GPUs individually become much more capable. The NVIDIA Blackwell architecture is set to fuel next-generation generative AI inference. Each Blackwell GPU features a second-generation Transformer Engine and fifth-generation Tensor Cores utilizing FP4. Lower-precision data formats help to increase computational throughput and reduce memory requirements.
To ensure that these lower-precision formats deliver significant performance benefits while maintaining high accuracy, an incredible amount of software craftsmanship is needed. At the same time, to serve the most demanding models at brisk, real-time rates, many of the most capable GPUs will need to work in concert to generate responses. The NVIDIA GB200 NVL72 rack-scale solution creates a 72-GPU NVLink domain that acts as a single massive GPU. For GPT-MoE 1.8T real-time inference, it provides up to a 30x improvement in throughput compared to the prior-generation Hopper GPU.

In addition, the emergence of a new scaling law—test-time compute—is providing yet another way to improve response quality and accuracy for even more complex tasks. This paradigm, first introduced with the OpenAI o1 model, enables models to “reason” by generating many intermediate tokens before outputting the final result. Reasoning models are particularly helpful in domains such as complex mathematics and generating computer code, and this approach is set to fuel a new wave of breakthroughs requiring more computational performance at inference time.

The path to artificial general intelligence will rely on continued breakthroughs in data center compute performance. Pretraining, post-training, and test-time scaling all depend on state-of-the-art infrastructure running expertly crafted software. The NVIDIA platform is evolving rapidly, with a brisk one-year innovation rhythm, to enable the ecosystem to continue pushing the frontiers of AI.

Get started

Check out How to Get Started with AI Inference, learn more about the NVIDIA AI Inference platform, and stay informed about the latest AI inference performance updates. Watch a demo on how to quickly deploy NVIDIA NIM microservices or read A Simple Guide to Deploying Generative AI with NVIDIA NIM. Optimizations from the TensorRT, TensorRT-LLM, and TensorRT Model Optimizer libraries are combined and available through production-ready deployments using NVIDIA NIM microservices.
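As a quick way to try the NIM deployment path referenced above, NIM microservices expose an OpenAI-compatible HTTP API, so a standard OpenAI client can talk to them. The snippet below is a hedged sketch: the local base URL, placeholder API key, and example model identifier are assumptions that depend on which NIM container you deploy and how you launch it.

# Query a running NIM microservice through its OpenAI-compatible API.
# Assumptions: a NIM container is already serving on localhost:8000 and the
# model name below matches the deployed NIM; substitute your own values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # assumed local NIM endpoint
    api_key="not-used-for-local-nim",      # placeholder; hosted endpoints need a real key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",    # example model id; use your deployed NIM's id
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize why KV cache reuse lowers time-to-first-token."},
    ],
    max_tokens=128,
)
print(response.choices[0].message.content)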
https://developer.nvidia.com/zh-cn/blog/optimize-ai-inference-performance-with-nvidia-full-stack-solutions/
借助 NVIDIA 全栈解决方案提升 AI 推理性能
AI 驱动的应用的爆炸式发展对开发者提出了前所未有的要求,他们必须在提供先进的性能与管理运营复杂性和成本以及 AI 基础设施之间取得平衡。 NVIDIA 正在为开发者提供涵盖芯片、系统和软件的全栈创新,重新定义 AI 推理 的可能性,使其比以往更快、更高效、更具可扩展性。 轻松部署高吞吐量、低延迟推理 六年前,NVIDIA 着手打造 AI 推理服务器,专为构建高吞吐量、延迟关键型生产应用的开发者而设计。当时,许多开发者都在努力使用定制的、特定于框架的服务器,这些服务器增加了复杂性,增加了运营成本,并且难以满足严格的服务水平协议(service-level agreements)关于延迟和吞吐量的要求。 为解决这一问题,NVIDIA 开发了 NVIDIA Triton Inference Server ,这是一个开源平台,能够为来自任何 AI 框架的模型提供服务。通过整合特定于框架的推理服务器,Triton 简化了 AI 推理部署,并提高了 AI 预测能力。这种方法使 Triton 成为广泛采用的 NVIDIA 开源项目之一,现已被数百家领先的组织用于高效部署生产级 AI 模型。 除 Triton 外,NVIDIA 还提供广泛的 AI 推理解决方案生态系统。对于寻求功能强大的可定制工具的开发者, NVIDIA TensorRT 提供了一个高性能深度学习推理库,其 API 可实现细粒度优化。 NVIDIA NIM 微服务提供了一个灵活的框架,用于在云端、数据中心或工作站中部署 AI 模型。 针对 AI 推理工作负载进行优化 推理是当今的全栈问题,需要高性能基础架构和高效软件来有效利用该基础架构。此外,随着模型大小不断增长和延迟限制日益严格,推理工作负载的挑战性也越来越高,同时利用这些 AI 服务的用户数量也在不断增加。随着推理时间扩展(一种扩展模型智能的新范式)的引入,推理过程中应用了更多的计算来增强模型性能。 这些趋势意味着,即使在相同的底层硬件平台上,继续提高交付的推理性能也很重要。通过将模型并行、混合精度训练、剪枝、量化和数据预处理优化等成熟方法与推理技术的前沿进步相结合,开发者可以在速度、可扩展性和成本效益方面实现显著提升。 TensorRT-LLM 库包含许多先进功能,可加速 大语言模型(LLMs) 的推理性能,如下所述。 预填充和 KV 缓存优化 键值 (KV) 缓存提早复用 :通过在不同用户中重复使用系统提示,KV 缓存提早复用功能可将首个令牌 (TTFT) 的时间缩短高达 5 倍。灵活的 KV 块大小和高效的驱逐协议可确保无缝管理内存,即使在多用户环境中也能缩短响应时间。 分块预填充 :为实现更智能的部署,分块预填充可将预填充阶段划分为较小的任务,从而提高 GPU 利用率并降低延迟。这项创新可简化部署,并确保一致的性能,即使在用户需求波动的情况下也是如此。 强效助力多圈交互 :NVIDIA GH200 超级芯片架构可实现高效的 KV 缓存卸载,在与 Llama 模型进行多圈交互时,将 TTFT 性能提升高达 2 倍,同时保持高吞吐量。 解码优化 长序列的 多块注意力 :TensorRT-LLM 多块注意力通过在流多处理器 (SM) 中分配任务,更大限度地提高 GPU 利用率,从而解决长输入序列的挑战。此技术可将系统吞吐量提高 3 倍以上,从而在不增加硬件成本的情况下支持更大的上下文长度。 用于加速吞吐量的推理吞吐量:通过利用较小的草稿模型和较大的目标模型,推理吞吐量可将推理吞吐量提升高达 3.6 倍。这种方法可确保高速、高精度地生成模型输出,简化大规模 AI 应用的工作流。 使用 Medusa 进行推理解码 :Medusa 推理解码算法可作为 TensorRT-LLM 优化的一部分提供。通过同时预测多个后续令牌,Medusa 在 NVIDIA HGX H200 平台上将 Llama 3.1 模型的吞吐量提高了 1.9 倍。这项创新可加快客户支持和内容创建等依赖 LLM 的应用的响应速度。 多 GPU 推理 MultiShot 通信协议 :传统的 Ring AllReduce 操作可能会成为多 GPU 场景中的瓶颈。TensorRT-LLM MultiShot 由 NVSwitch 提供支持,无论 GPU 数量如何,都可以将通信步骤减少到两个。这项创新将 AllReduce 速度提升高达 3 倍,使低延迟推理具有可扩展性并十分高效。 实现高并发效率的工作流并行:并行技术要求 GPU 能够快速高效地传输数据,因此需要强大的 GPU 到 GPU 互连结构来实现出色性能。 NVIDIA H200 Tensor Core GPU 上的工作流并行将 Llama 3.1 405B 的吞吐量提高了 1.5 倍,并在 MLPerf Inference 基准测试中证明了其通用性,将 Llama 2 70B 的速度提高了 1.2 倍。MLPerf Inference 是一套行业标准推理性能基准测试,由 MLCommons 联盟开发。 大型 NVLink 域 :NVIDIA GH200 NVL32 系统由通过 NVLink Switch 系统连接的 32 个 NVIDIA GH200 Grace Hopper 超级芯片提供支持,并进行了 TensorRT-LLM 改进,可为 Llama 模型提供高达 3 倍的 TTFT 速度。凭借高达 127 Petaflops 的 AI 计算能力,此新一代架构为 AI 应用实现出色的实时响应速度奠定了基础。 量化和低精度计算 用于提高精度和性能的 NVIDIA TensorRT 模型优化器 :NVIDIA TensorRT 模型优化器中的 NVIDIA 定制 FP8 量化方法可在不牺牲准确性的情况下将吞吐量提高 1.44 倍。这些优化可降低高要求工作负载的延迟和硬件需求,从而实现更具成本效益的部署。 端到端全栈优化 :NVIDIA TensorRT 库和 FP8 Tensor Core 创新技术可确保从数据中心 GPU 到边缘系统等各种设备实现高性能。NVIDIA 优化了 Llama 3.2 模型集合,以实现出色性能,展示了全栈软件如何在不同的 AI 部署环境中灵活释放效率。 借助这些功能以及 Triton 和 TensorRT-LLM 中的许多其他功能,开发者现在可以部署更快速、更高效的 LLM,并且能够处理更广泛的任务和用户需求。这为企业增强客户服务、实现复杂流程自动化以及从数据中获得更深入见解带来了新机遇。 评估推理性能 实现出色的推理性能需要完整的技术堆栈(芯片、系统和软件),所有这些都有助于提高吞吐量、降低每个令牌的能耗并更大限度地降低成本。 MLPerf Inference 是衡量推理性能的一个关键指标。该基准测试用于测量标准化条件下的推理吞吐量,并对结果进行广泛的同行评审。基准测试会定期更新,以反映 AI 领域的新进展,确保企业组织可以依靠这些结果来评估平台性能。 在最新一轮 MLPerf Inference 中, NVIDIA Blackwell 首次亮相 ,在 Llama 2 70B 基准测试中,其性能比 NVIDIA H100 Tensor Core GPU 高 4 倍。这一成就得益于 Blackwell GPU 核心的众多架构创新,包括采用 FP4 Tensor Cores 的第二代 Transformer Engine 和可为每个 GPU 提供 8 TB/s 的 HBM3e GPU 内存带宽。 此外,对 NVIDIA 软件堆栈的许多方面 (包括 NVIDIA TensorRT-LLM) 进行了重新设计,以利用 Blackwell 中的新功能 (例如对 FP4 精度的支持),同时继续满足基准测试的严格准确性目标。 服务器制造商和云服务提供商现已推出的 NVIDIA H200 Tensor Core GPU 在数据中心类别的每项基准测试中都取得了出色的成绩。其中包括新增的 Mixtral 8x7B 多专家模型 (MoE) LLM,以及 Llama 2 70B LLM 和 Stable Diffusion XL 文本转图像测试。得益于软件的持续改进,Hopper 架构可提供高达 27% 的推理性能。 与 MLPerf Inference v4.1 中 Llama 2 70B 基准测试中的 NVIDIA 裸机提交相比 ,在配备 8 个 H200 GPU 的系统上运行的 NVIDIA Triton Inference Server 
实现了几乎相同的性能。这表明企业不再需要在功能丰富的生产级 AI 推理服务器和峰值吞吐量性能之间做出选择,而 NVIDIA Triton 可以同时实现这两种性能。 AI 推理的未来:新兴趋势和技术 在一系列突破性进展和新兴技术的推动下,AI 推理的格局正在迅速发展。随着数据中心规模的计算能力增加,模型将继续变得更加智能。引入稀疏的多专家模型架构 (例如 GPT-MoE 1.8T) 也将有助于提高模型智能,同时提高计算效率。这些更大型的模型,无论是密集模型还是稀疏模型,都需要 GPU 单独变得更加强大。NVIDIA Blackwell 架构将为新一代生成式 AI 推理提供动力支持。 每个 Blackwell GPU 均配备第二代 Transformer Engine 和第五代 Tensor Cores,利用 FP4。低精度数据格式有助于提高计算吞吐量并降低内存需求。为了确保它们能够在保持高精度的同时提供显著的性能优势,我们需要大量的软件技术。 与此同时,为了以快速、实时的速率为要求严苛的模型提供服务,许多功能非常强大的 GPU 需要协同工作以生成响应。 NVIDIA GB200 NVL72 机架级解决方案创建了一个 72-GPU NVLink 域,可充当单个大型 GPU。对于 GPT-MoE 1.8T 实时推理,与上一代 Hopper GPU 相比,其吞吐量提高了 30 倍。 此外,新的扩展定律(测试时计算) 的出现为提高更复杂任务的响应质量和准确性提供了另一种方法。这种新范式首先在 OpenAI o1 模型中引入,使模型能够在输出最终结果之前通过生成许多中间令牌来“推理”。推理模型在复杂数学和生成计算机代码等领域尤为有用。这种新范式将起新一轮突破浪潮,需要在推理期间实现更高的计算性能。 通往人工通用智能的道路将依赖于数据中心计算性能的持续突破。预训练、后训练和测试时扩展都依赖于运行专家精心编写的软件的最先进的基础架构。NVIDIA 平台发展迅速,一年内创新节奏轻快,使生态系统能够继续推动人工智能的前沿发展。 开始使用 查看如何开始使用 AI 推理 ,了解更多关于 NVIDIA AI 推理平台 的信息,并随时了解 最新的 AI 推理性能更新 。 观看演示,了解如何快速部署 NVIDIA NIM 微服务,或阅读《使用 NVIDIA NIM 部署生成式 AI 的简单指南》。TensorRT、TensorRT-LLM 和 TensorRT Model Optimizer 库中的优化经过组合,可通过使用 NVIDIA NIM 微服务的生产就绪型部署获得。
https://developer.nvidia.com/blog/nvidia-tensorrt-llm-now-supports-recurrent-drafting-for-optimizing-llm-inference/
NVIDIA TensorRT-LLM Now Supports Recurrent Drafting for Optimizing LLM Inference
Recurrent drafting (referred to as ReDrafter) is a novel speculative decoding technique for large language model (LLM) inference, developed and open-sourced by Apple and now available with NVIDIA TensorRT-LLM. ReDrafter helps developers significantly boost LLM workload performance on NVIDIA GPUs.

NVIDIA TensorRT-LLM is a library for optimizing LLM inference. It provides an easy-to-use Python API to define LLMs and build NVIDIA TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. Optimizations include custom attention kernels, inflight batching, paged KV caching, quantization (FP8, INT4 AWQ, INT8 SmoothQuant), and much more.

Speculative decoding is a technique that accelerates LLM inference by generating multiple tokens in parallel. It uses smaller “draft” modules to predict future tokens, which are then verified by the main model. This method maintains output quality while significantly reducing response times, especially during low-traffic periods, by better utilizing available resources for low-latency inference.

ReDrafter employs recurrent neural network (RNN)-based sampling, referred to as drafting, combined with the tree-style attention previously used in techniques like Medusa. It predicts and verifies draft tokens from multiple possible paths, which improves accuracy and allows more than one token to be accepted in each iteration of the decoder. NVIDIA collaborated with Apple to add support for this technique in TensorRT-LLM, making it accessible to the broader developer community.

The integration of ReDrafter into TensorRT-LLM expanded its reach, unlocked new optimization potential, and improved on previous methods such as Medusa. For Medusa, path acceptance and token sampling happen in the TensorRT-LLM runtime, which introduces overhead: the engine must process all possible future paths without knowing which will be accepted, and most of them are ultimately discarded. To reduce such overhead, ReDrafter requires that tokens be validated and the best path accepted before drafting future tokens for the next iteration.

To further minimize overhead, TensorRT-LLM has been updated to incorporate the drafting and validation logic inside a single engine, rather than relying on the runtime or separate engines. This approach gives TensorRT-LLM kernel selection and scheduling more freedom to optimize the network for maximum performance.

To better illustrate the ReDrafter improvements, Figure 1 highlights the key differences between its implementation and that of Medusa in TensorRT-LLM. Most of the components related to speculative decoding are handled in-engine for ReDrafter, which significantly simplifies the runtime changes ReDrafter needs.

Figure 1. Comparison of Medusa (left) and ReDrafter (right) implementations in NVIDIA TensorRT-LLM

The following sections delve into some of the changes that help enable ReDrafter in TensorRT-LLM.

Inflight-batching compatible engine

Inflight batching (IFB) is a strategy that significantly improves throughput by batching context-phase and generation-phase requests together. Speculative decoding, coupled with IFB, introduces more complexity into the pipeline because context-phase requests must be handled differently from generation-phase requests, which require draft token validation. Since ReDrafter moves the validation logic inside the model definition, the engine needs that logic during validation as well.
Similar to the attention plugin, the batch is split into two smaller batches: one for context requests and another for generation requests. Each smaller batch then enters its computational workflow, and at the end they are combined back into a single batch for drafting.

Figure 2. ReDrafter’s computational workflow for an inflight-batching compatible TensorRT-LLM engine

Note that this approach requires that all operators on either path support empty tensors, which can occur if a batch consists of all context requests or all generation requests. This capability adds flexibility to TensorRT-LLM APIs, enabling the definition of more complicated models in the future.

Implementing in-engine validation and drafting

To validate and draft inside the engine, TensorRT-LLM is updated with support for numerous new operations so that PyTorch code can be easily translated into a definition of the TensorRT-LLM model. The following PyTorch code excerpt is from Apple’s PyTorch implementation of ReDrafter. The TensorRT-LLM implementation is almost a straightforward line-by-line mapping of the PyTorch version.

PyTorch

def unpack(
    packed_tensor: torch.Tensor,
    unpacker: torch.Tensor,
) -> torch.Tensor:
    assert len(packed_tensor.shape) == 3
    last_dim_size = packed_tensor.shape[2]
    batch_size, beam_width, beam_length = unpacker.shape
    unpacked_data_indices = unpacker.view(
        batch_size, beam_width * beam_length, 1).expand(
        -1, -1, last_dim_size
    )
    unpacked_tensor = torch.gather(
        packed_tensor, 1, unpacked_data_indices).reshape(
        batch_size, beam_width, beam_length, -1
    )
    return unpacked_tensor

TensorRT-LLM

def _unpack_beams(
    x: Tensor,
    indices: Tensor,
    num_beams: int,
    beam_length: int
) -> Tensor:
    assert x.rank() == 3
    d0 = shape(x, 0, INT_DTYPE_STR)
    dl = shape(x, -1, INT_DTYPE_STR)
    indices = view(
        indices, [-1, num_beams * beam_length, 1], False)
    res_shape = concat([d0, num_beams, beam_length, dl])
    res = view(gather_nd(x, indices), res_shape, False)
    return res

This, of course, is a very simple example. For a more complex example, see the beam search implementation. With the new functionalities added for ReDrafter, it might be possible to improve the Medusa implementation in TensorRT-LLM to further increase its performance.

ReDrafter performance in TensorRT-LLM

As benchmarked by Apple, ReDrafter with TensorRT-LLM can provide up to a 2.7x throughput improvement over the base LLM on NVIDIA H100 GPUs with TP8 (tensor parallelism across eight GPUs). Note that the performance improvement of any speculative decoding technique can be heavily impacted by many factors, including:

GPU utilization: Speculative decoding is commonly used for low-traffic scenarios, where GPU resources are typically underutilized due to small batch sizes.

Average acceptance rate: The latency of each decoding step increases because speculative decoding must perform extra computation, a significant portion of which is ultimately wasted after validation. As a result, to see any performance benefit from speculative decoding, the average acceptance rate must be high enough to pay for that extra latency. This rate is affected by the number of beams, their lengths, and the quality of the beam search itself (which is impacted by the training data).

Task: It is easier to predict future tokens for some tasks (code completion, for example), which leads to a higher acceptance rate and thus improved performance.
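Because the average acceptance rate listed above is the deciding factor for any speculative decoding scheme, the following toy Python sketch shows the generic draft-and-verify loop and how accepted draft tokens translate into extra output per target-model step. It illustrates the mechanics only: the stub models, greedy exact-match acceptance, and single-beam drafting are simplifying assumptions, not Apple’s ReDrafter algorithm or TensorRT-LLM code.

# Toy draft-and-verify loop illustrating why the acceptance rate governs
# speculative decoding speedups. The "models" are stand-in functions; real
# systems compare token distributions rather than requiring exact greedy matches.
import random

def draft_model(prefix, k=4):
    # Propose k cheap draft tokens (here: random integers).
    return [random.randint(0, 9) for _ in range(k)]

def target_model_next(prefix):
    # The expensive model's "true" next token (here: a deterministic toy rule).
    return (sum(prefix) + len(prefix)) % 10

def speculative_step(prefix, k=4):
    """Verify k draft tokens against the target model; accept the matching prefix."""
    drafts = draft_model(prefix, k)
    accepted = []
    for tok in drafts:
        if tok == target_model_next(prefix + accepted):
            accepted.append(tok)          # draft token matches: accepted essentially for free
        else:
            break                         # first mismatch: stop accepting drafts
    # In a real system the k drafts are verified in one parallel target-model forward
    # pass; this toy calls the stub per token for clarity. The target model always
    # contributes one more token, so each step emits len(accepted) + 1 tokens.
    accepted.append(target_model_next(prefix + accepted))
    return accepted

prefix, produced = [1, 2, 3], 0
for _ in range(100):
    new_tokens = speculative_step(prefix)
    prefix += new_tokens
    produced += len(new_tokens)
print(f"avg tokens per target-model step: {produced / 100:.2f}")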
Summary

This collaboration between NVIDIA and Apple has made TensorRT-LLM more powerful and more flexible, enabling the LLM community to create more sophisticated models and easily deploy them with TensorRT-LLM to achieve unparalleled performance on NVIDIA GPUs. These new features open exciting possibilities, and we eagerly anticipate the next generation of advanced models from the community that leverage TensorRT-LLM capabilities, driving further improvements in LLM workloads.

Explore NVIDIA TensorRT-LLM to unlock the full potential of your models on NVIDIA GPUs.
https://developer.nvidia.com/zh-cn/blog/nvidia-tensorrt-llm-now-supports-recurrent-drafting-for-optimizing-llm-inference/
NVIDIA TensorRT-LLM 现支持 Recurrent Drafting,实现 LLM 推理优化
Recurrent Drafting (简称 ReDrafter) 是苹果公司为大语言模型 (LLM) 推理开发并开源的一种新型推测解码技术,该技术现在可与 NVIDIA TensorRT-LLM 一起使用。ReDrafter 帮助开发者大幅提升了 NVIDIA GPU 上的 LLM 工作负载性能。 NVIDIA TensorRT-LLM 是一个 LLM 推理优化库,提供了一个易于使用的 Python API 来定义 LLM 和构建 NVIDIA TensorRT 引擎,这些引擎具有顶尖的优化功能,可在 GPU 上高效执行推理。优化功能包括自定义 Attention Kernel、Inflight Batching、Paged KV Caching、量化技术 (FP8、INT4 AWQ、INT8 SmoothQuant) 等。 推测解码 (Speculative decoding) 是一种通过并行生成多个 token 来加速 LLM 推理的技术。它使用较小的“draft”模块预测未来的 token,然后由主模型进行验证。该方法通过更好地利用可用资源实现低延迟推理,在保持输出质量的同时大大缩短了响应时间,尤其是在低流量时段。 ReDrafter 运用基于循环神经网络 (RNN) 的采样 (称为 Drafting ) 并结合之前在 Medusa 等其他技术中使用的树状注意力,预测和验证来自多个可能路径的 draft token 以提高准确性,并在解码器的每次迭代中接受一个以上 token。NVIDIA 与苹果公司合作,在 TensorRT-LLM 中添加了对该技术的支持,使更加广泛的开发者社区能够使用该技术。 ReDrafter 与 TensorRT-LLM 的集成扩大了该技术的覆盖范围,解锁了新的优化潜力,并改进了 Medusa 等先前的方法。Medusa 的路径接受和 token 采样发生在 TensorRT-LLM 运行时,需要在接受路径未知的情况下处理所有可能的未来路径,而且其中大部分路径最终都会被丢弃,这就给引擎内部带来了一些开销。为了减少这种开销,ReDrafter 要求在 drafting 下一次迭代的未来 token 之前,先验证 token 并接受最佳路径。 为了进一步减少开销,TensorRT-LLM 更新后在单个引擎中整合了 drafting 和验证逻辑,不再依赖运行时或单独的引擎。这种方法为 TensorRT-LLM 内核选择和调度提供了更大的自由度,通过优化网络实现了性能的最大化。 为了更好地说明 ReDrafter 的改进,图 1 展示了 TensorRT-LLM 中 ReDrafter 实现与 Medusa 实现的主要区别。大多数与推测解码相关的组件都在 ReDrafter 的引擎内完成,这大大简化了 ReDrafter 所需的运行时更改。 图 1. NVIDIA TensorRT-LLM 中 Medusa(左)和 ReDrafter(右)实现的比较 下面将深入探讨有助于在 TensorRT-LLM 中启用 ReDrafter 的一些变化。 兼容 Inflight-batching 批处理的引擎 Inflight-batching (IFB) 是一种通过批量处理上下文阶段和生成阶段请求,来显著提高吞吐量的策略。鉴于上下文阶段请求与生成阶段请求的处理方式不同(生成阶段请求需要 draft token 验证),因此结合 IFB 的推测解码会给管线带来更大的复杂性。ReDrafter 将验证逻辑移至模型定义内部,因此引擎在验证过程中也需要该逻辑。与注意力插件类似,该批处理被分成两个较小的批处理:一个用于上下文请求,另一个用于生成请求。然后,每个较小的批处理进入计算工作流,最后再合并成一个批处理进行 drafting 流程。 图 2. ReDrafter 兼容 TensorRT-LLM 引擎的 Inflight-batching 批处理计算工作流 请注意,这种方法要求任一路径上的所有运算符都支持空张量。如果一个批处理由所有上下文请求或所有生成请求组成,就可能出现空张量。该功能增加了 TensorRT-LLM API 的灵活性,使未来定义更复杂的模型成为可能。 实现引擎内验证和 Drafting 为了在引擎内进行验证和 draft,TensorRT-LLM 更新时加入了对许多新操作的支持,这样 PyTorch 代码就可以轻松地转化成一个 TensorRT-LLM 模型的定义。 以下 PyTorch 代码摘录是苹果公司的 PyTorch 实现的 ReDrafter 。TensorRT-LLM 实现几乎就是 PyTorch 版本的直接逐行映射。 PyTorch def unpack( packed_tensor: torch.Tensor, unpacker: torch.Tensor, ) -> torch.Tensor: assert len(packed_tensor.shape) == 3 last_dim_size = packed_tensor.shape[2] batch_size, beam_width, beam_length = unpacker.shape unpacked_data_indices = unpacker.view( batch_size, beam_width * beam_length, 1).expand( -1, -1, last_dim_size ) unpacked_tensor = torch.gather( packed_tensor, 1, unpacked_data_indices).reshape( batch_size, beam_width, beam_length, -1 ) return unpacked_tensor TensorRT-LLM def _unpack_beams( x: Tensor, indices: Tensor, num_beams: int, beam_length: int ) -> Tensor: assert x.rank() == 3 d0 = shape(x, 0, INT_DTYPE_STR) dl = shape(x, -1, INT_DTYPE_STR) indices = view( indices, [-1, num_beams * beam_length, 1], False) res_shape = concat([d0, num_beams, beam_length, dl]) res = view(gather_nd(x, indices), res_shape, False) return res 当然,这只是一个非常简单的例子。如要了解更复杂的示例,请参见 束搜索实现 。借助为 ReDrafter 添加的新功能,就可以改进 TensorRT-LLM 中的 Medusa 实现,从而进一步提高其性能。 ReDrafter 在 TensorRT-LLM 中的性能 根据 苹果公司的基准测试 ,在采用 TP8(Tensor Parallelism with 8 GPUs,8 卡 GPU 张量并行) 的 NVIDIA GPU 上使用 TensorRT-LLM 的 ReDrafter 最多可将吞吐量提高至基础 LLM 的 2.7 倍。 请注意,任何推测解码技术的性能提升幅度都会受到诸多因素的大幅影响,包括: GPU 利用率: 推测解码通常用于低流量场景,由于批量较小,GPU 资源的利用率通常较低。 平均接受率: 由于推测解码必须执行额外的计算,而其中很大一部分计算最终会在验证后被浪费,因此每个解码步骤的延迟都会增加。所以要想通过推测解码获得任何性能上的优势,平均接受率必须高到足以弥补增加的延迟。这受到束数量、束长度和束搜索本身质量(受训练数据影响)的影响。 任务: 在某些任务(例如代码完成)中预测未来的 token 更容易,使得接受率更高,性能也会因此而提升。 总结 NVIDIA 与苹果公司的合作让 TensorRT-LLM 变得更加强大和灵活,使 LLM 社区能够创造出更加复杂的模型并通过 TensorRT-LLM 轻松部署,从而在 NVIDIA GPU 上实现无与伦比的性能。这些新特性带来了令人兴奋的可能性,我们热切期待着社区使用 TensorRT-LLM 功能开发出新一代先进模型,进一步改进 LLM 工作负载。 探索 NVIDIA 
TensorRT-LLM ,在 NVIDIA GPU 上充分释放模型潜能。

Data Loading: zh-cn (Simplified Chinese), 65 articles in total, human-verified quality.
