en_url | en_title | en_content | jp_url | jp_title | jp_content |
---|---|---|---|---|---|
https://developer.nvidia.com/blog/three-building-blocks-for-creating-ai-virtual-assistants-for-customer-service-with-an-nvidia-nim-agent-blueprint/ | Three Building Blocks for Creating AI Virtual Assistants for Customer Service with an NVIDIA AI Blueprint | In today's fast-paced business environment, providing exceptional customer service is no longer just a nice-to-have; it's a necessity. Whether addressing technical issues, resolving billing questions, or providing service updates, customers expect quick, accurate, and personalized responses at their convenience. However, achieving this level of service comes with significant challenges.
Legacy approaches, such as static scripts or manual processes, often fall short when it comes to delivering personalized and real-time support. Additionally, many customer service operations rely on sensitive and fragmented data, which is subject to strict data governance and privacy regulations. With the rise of generative AI, companies aim to revolutionize customer service by enhancing operational efficiency, cutting costs, and maximizing ROI.
Integrating AI into existing systems presents challenges related to transparency, accuracy, and security, which can impede adoption and disrupt workflows. To overcome these hurdles, companies are leveraging generative AI-powered virtual assistants to manage a wide range of tasks, ultimately improving response times and freeing up resources.
This post outlines how developers can use the
NVIDIA AI Blueprint for AI virtual assistants
to scale operations with generative AI. By leveraging this information, including sample code, businesses can meet the growing demands for exceptional customer service while ensuring data integrity and governance. Whether improving existing systems or creating new ones, this blueprint empowers teams to meet customer needs with efficient and meaningful interactions.
Smarter AI virtual assistants with an AI query engine using retrieval-augmented generation
When building an AI virtual assistant, it's important to align with the unique use case requirements, institutional knowledge, and needs of the organization. Traditional bots, however, often rely on rigid frameworks and outdated methods that struggle to meet the evolving demands of today's customer service landscape.
Across every industry, AI-based assistants can be transformational. For example, telecommunications companies and most retail and service providers can use AI virtual assistants to enhance customer experience by offering support 24 hours a day, 7 days a week, handling a wide range of customer queries in multiple languages, and providing dynamic, personalized interactions that streamline troubleshooting and account management. This helps reduce wait times and ensures consistent service across diverse customer needs.
Another example is within the healthcare insurance payor industry, where ensuring a positive member experience is critical. Virtual assistants enhance this experience by providing personalized support to members, addressing their claims, coverage inquiries, benefits, and payment issues, all while ensuring compliance with healthcare regulations. This also helps reduce the administrative burden on healthcare workers.
With the NVIDIA AI platform, organizations can create an AI query engine that uses
retrieval-augmented generation (RAG)
to connect AI applications to enterprise data. The AI virtual assistant blueprint enables developers to quickly get started building solutions that provide enhanced customer experiences. It is built using the following
NVIDIA NIM
microservices:
NVIDIA NIM for LLM:
Brings the power of state-of-the-art large language models (LLMs) to applications, providing unmatched natural language processing with remarkable efficiency.
Llama 3.1 70B Instruct NIM
:
Powers complex conversations with superior contextual understanding, reasoning, and text generation.
NVIDIA NeMo
Retriever NIM:
This collection provides easy access to state-of-the-art models that serve as foundational building blocks for RAG pipelines. These pipelines, when integrated into virtual assistant solutions, enable seamless access to enterprise data, unlocking institutional knowledge via fast, accurate, and scalable answers.
NeMo
Retriever Embedding NIM
:
Boosts text question-answering retrieval performance, providing high-quality embeddings for the downstream virtual assistant.
NeMo
Retriever Reranking NIM
:
Enhances the retrieval performance further with a fine-tuned reranker, finding the most relevant passages to provide as context when querying an LLM.
The blueprint is designed to integrate seamlessly with existing customer service applications without breaking information security mandates. Thanks to the portability of NVIDIA NIM, organizations can integrate data wherever it resides. By bringing generative AI to the data, this architecture enables AI virtual assistants to provide more personalized experiences tailored to each customer by leveraging their unique profiles, user interaction histories, and other relevant data.
A blueprint is a starting point that can be customized for an enterpriseâs unique use case. For example, integrate other NIM microservices, such as the
Nemotron 4 Hindi 4B Instruct
, to enable an AI virtual assistant to communicate in the local language. Other microservices can enable additional capabilities such as synthetic data generation and model fine-tuning to better align with your specific use case requirements. Give the AI virtual assistant a humanlike interface when connected to the digital human AI Blueprint.
With a RAG backend built on proprietary data (both company data and user profiles with their specific details), the AI virtual assistant can engage in highly contextual conversations, addressing the specifics of each customer's needs in real time. Additionally, the solution operates securely within your existing governance frameworks, ensuring compliance with privacy and security protocols, especially when working with sensitive data.
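To make this concrete, the following minimal sketch shows one way an assistant service might ground a reply in retrieved enterprise data and a customer profile by calling the LLM NIM's OpenAI-compatible endpoint. The endpoint URL, served model name, and the `retrieve` helper are illustrative assumptions, not the blueprint's actual code.

```python
from openai import OpenAI

llm = OpenAI(base_url="http://localhost:8000/v1", api_key="none")  # assumed local LLM NIM

def answer(question, customer_profile, retrieve):
    # retrieve() stands in for the RAG pipeline (embedding + reranking microservices).
    context = "\n".join(retrieve(question))
    system = (
        "You are a customer service assistant.\n"
        f"Customer profile: {customer_profile}\n"
        f"Relevant knowledge:\n{context}\n"
        "Answer using only this information."
    )
    resp = llm.chat.completions.create(
        model="meta/llama-3.1-70b-instruct",  # assumed served model name
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

# Example usage with a trivial retriever:
print(answer("How long do refunds take?",
             {"name": "Jane", "plan": "Premium"},
             lambda q: ["Refunds are processed within 5 business days."]))
```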
Three building blocks for creating your own AI virtual assistant
As a developer, you can build your own AI virtual assistant that retrieves the most relevant and up-to-date information, in real time, with ever-improving humanlike responses. Figure 1 shows the AI virtual assistant architecture diagram which includes three functional components.
Figure 1. The NVIDIA AI Blueprint for AI virtual assistants
1. Data ingestion and retrieval pipeline
Pipeline administrators use the ingestion pipeline to load structured and unstructured data into the databases. Examples of structured data include customer profiles, order history, and order status. Unstructured data includes product manuals, the product catalog, and supporting material such as FAQ documents.
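As a rough illustration of this ingestion step, the sketch below chunks unstructured documents, embeds each chunk through an embedding NIM's OpenAI-compatible API, and collects the vectors for later retrieval. The endpoint, embedding model name, file paths, and the in-memory index are assumptions; the blueprint uses its own ingestion service and a real vector database.

```python
from pathlib import Path
from openai import OpenAI

embedder = OpenAI(base_url="http://localhost:8001/v1", api_key="none")  # assumed embedding NIM

def chunk(text, size=800, overlap=100):
    # Naive fixed-size chunking; production pipelines usually split on document structure.
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]

index = []  # stand-in for a vector database such as Milvus or pgvector
for path in Path("docs").glob("*.txt"):  # e.g., product manuals and FAQ documents
    for piece in chunk(path.read_text()):
        emb = embedder.embeddings.create(
            model="nvidia/nv-embedqa-e5-v5",       # assumed embedding model name
            input=[piece],
            extra_body={"input_type": "passage"},  # NVIDIA embedding NIMs expect an input type
        ).data[0].embedding
        index.append({"source": path.name, "text": piece, "embedding": emb})

print(f"Ingested {len(index)} chunks")
```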
2. AI agent
The AI virtual assistant is the second functional component. Users interact with the virtual assistant through a user interface. An AI agent, implemented in the LangGraph agentic LLM programming framework, plans how to handle complex customer queries and solves recursively. The LangGraph agent uses the tool calling feature of the
Llama 3.1 70B Instruct NIM
to retrieve information from both the unstructured and structured data sources, then generates an accurate response.
The AI agent also uses short-term and long-term memory functions to enable multi-turn conversation history. The active conversation queries and responses are embedded so they can be retrieved later in the conversation as additional context. This allows more human-like interactions and eliminates the need for customers to repeat information they've already shared with the agent.
Finally, at the end of the conversation, the AI agent summarizes the discussion along with a sentiment determination and stores the conversation history in the structured database. Subsequent interactions from the same user can be retrieved as additional context in future conversations. Call summarization and conversation history retrieval can reduce call time and improve customer experience. Sentiment determination can provide valuable insights to the customer service administrator regarding the agentâs effectiveness.
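The following is a minimal sketch of this agent pattern using LangGraph's prebuilt ReAct agent and an OpenAI-compatible chat endpoint served by NIM. The endpoint URL, served model name, and the `lookup_order` tool are illustrative placeholders rather than the blueprint's actual tools.

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

llm = ChatOpenAI(
    base_url="http://localhost:8000/v1",      # assumed local Llama 3.1 70B Instruct NIM
    api_key="none",
    model="meta/llama-3.1-70b-instruct",
)

@tool
def lookup_order(order_id: str) -> str:
    """Return the status of an order from the structured database."""
    # Placeholder: the blueprint would query the structured data store here.
    return f"Order {order_id}: shipped, arriving in 2 days."

agent = create_react_agent(llm, tools=[lookup_order])
result = agent.invoke({"messages": [("user", "Where is my order 12345?")]})
print(result["messages"][-1].content)
```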
3. Operations pipeline
The customer operations pipeline is the third functional component of the overall solution. This pipeline provides important information and insight to the customer service operators. Administrators can use the operations pipeline to review chat history, user feedback, sentiment analysis data, and call summaries. The analytics microservice, which leverages the Llama 3.1 70B Instruct NIM, can be used to generate analytics such as average call time, time to resolution, and customer satisfaction. The analytics are also leveraged as user feedback to retrain the LLMs and improve accuracy.
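A simple version of the summarization and sentiment step might look like the sketch below, which asks the LLM NIM for a JSON summary of a finished conversation. The endpoint, model name, and the commented-out storage call are assumptions, and a production service would validate the model's output before storing it.

```python
import json
from openai import OpenAI

llm = OpenAI(base_url="http://localhost:8000/v1", api_key="none")  # assumed local LLM NIM

def summarize_call(transcript: str) -> dict:
    resp = llm.chat.completions.create(
        model="meta/llama-3.1-70b-instruct",  # assumed served model name
        messages=[{
            "role": "user",
            "content": "Summarize this support conversation in two sentences and label the "
                       "customer sentiment as positive, neutral, or negative. Reply as JSON "
                       f"with keys 'summary' and 'sentiment'.\n\n{transcript}",
        }],
    )
    # A production service should validate this; the model may not return strict JSON.
    return json.loads(resp.choices[0].message.content)

# record = summarize_call(transcript)
# operations_db.insert("call_history", record)  # hypothetical structured store
```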
You can find the complete example of how to get started with this Blueprint on the
NVIDIA AI Blueprint GitHub repository.
Get to production with NVIDIA partners
NVIDIA consulting partners are helping enterprises adopt world-class AI virtual assistants built using NVIDIA accelerated computing and
NVIDIA AI Enterprise software
, which includes NeMo, NIM microservices, and AI Blueprints.
Accenture
The Accenture AI Refinery
built on
NVIDIA AI Foundry
helps design autonomous, intent-driven customer interactions, enabling businesses to tailor the journey to the individual through innovative channels such as digital humans or interaction agents. Specific use cases can be tailored to meet the needs of each industry, for example, telco call centers, insurance policy advisors, pharmaceutical interactive agents or automotive dealer network agents.
Deloitte
Deloitte Frontline AI enhances the customer service experience with digital avatars and LLM agents built with NVIDIA AI Blueprints that are accelerated by NVIDIA technologies such as NVIDIA ACE, NVIDIA Omniverse, NVIDIA Riva, and NIM.
Wipro
Wipro Enterprise Generative AI (WeGA) Studio accelerates industry-specific use cases including contact center agents across healthcare, financial services, retail, and more.
Tech Mahindra
Tech Mahindra is leveraging the NVIDIA AI Blueprint for digital humans to build solutions for customer service. Using RAG and NVIDIA NeMo, the solution provides the ability for a trainee to stop an agent during a conversation by raising a hand to ask clarifying questions. The system is designed to connect with backend microservices and a refined learning management system, and it can be deployed across many industry use cases.
Infosys
Infosys Cortex
, part of
Infosys Topaz
, is an AI-driven customer engagement platform that integrates NVIDIA AI Blueprints with NVIDIA NeMo, Riva, and ACE technologies for generative AI, speech AI, and digital human capabilities. It delivers specialized, individualized, proactive, and on-demand assistance to every member of a customer service organization, playing a pivotal role in enhancing customer experience, improving operational efficiency, and reducing costs.
Tata Consultancy Services
The Tata Consultancy Services (TCS) virtual agent, powered by NVIDIA NIM and integrated with ServiceNow's IT Virtual Agent, is designed to optimize IT and HR support. This solution uses prompt tuning and RAG to improve response times and accuracy and to provide multi-turn conversational capabilities. Benefits include reduced service desk costs, fewer support tickets, enhanced knowledge utilization, faster deployment, and a better overall employee and customer experience.
Quantiphi
Quantiphi
is integrating NVIDIA AI Blueprints into its conversational AI solutions to enhance customer service with lifelike digital avatars. These state-of-the-art avatars, powered by NVIDIA Tokkio and ACE technologies,
NVIDIA NIM microservices
and
NVIDIA NeMo
, seamlessly integrate with existing enterprise applications, enhancing operations and customer experiences with increased realism. Fine-tuned NIM deployments for digital avatar workflows have proven to be highly cost-effective, reducing enterprise spending on tokens.
SoftServe
SoftServe Digital Concierge
, accelerated by NVIDIA AI Blueprints and NVIDIA NIM microservices, uses NVIDIA ACE, NVIDIA Riva, and the NVIDIA Audio2Face NIM microservice to deliver a highly realistic virtual assistant. Thanks to the Character Creator tool, it delivers speech and facial expressions with remarkable accuracy and lifelike detail.
With RAG capabilities from NVIDIA NeMo Retriever, SoftServe Digital Concierge can intelligently respond to customer queries by referencing context and delivering specific, up-to-date information. It simplifies complex queries into clear, concise answers and can also provide detailed explanations when needed.
EXL
EXL's Smart Agent Assist offering is a contact center AI solution leveraging NVIDIA Riva, NVIDIA NeMo, and NVIDIA NIM microservices. EXL plans to augment their solution using the NVIDIA AI Blueprint for AI virtual agents.
This week at
NVIDIA AI Summit India
, NVIDIA consulting partners announced a collaboration with NVIDIA to transform India into a Front Office for AI. Using NVIDIA technologies, these consulting giants can help customers tailor the customer service agent blueprint to build unique virtual assistants using their preferred AI model, including sovereign LLMs from India-based model makers, and run it in production efficiently on the infrastructure of their choice.
Get started
To try the blueprint for free, and to see system requirements, navigate to the
Blueprint Card
.
To start building applications using those microservices, visit the
NVIDIA API catalog
. To
sign in
, you'll be prompted to enter a personal or business email address to access different options for building with NIM. For more information, see the
NVIDIA NIM FAQ
.
This post was originally published on 10/23/2024. | https://developer.nvidia.com/ja-jp/blog/three-building-blocks-for-creating-ai-virtual-assistants-for-customer-service-with-an-nvidia-nim-agent-blueprint/ | NVIDIA AI Blueprint ã§ã«ã¹ã¿ã㌠ãµãŒãã¹åãã® AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ããäœæãã 3 ã€ã®æ§æèŠçŽ | Reading Time:
2
minutes
ä»æ¥ã®ããŸããããããžãã¹ç°å¢ã§ã¯ãåªããã«ã¹ã¿ã㌠ãµãŒãã¹ãæäŸããããšã¯ããã¯ãåã«ãããã°è¯ãããšãã§ã¯ãªãããå¿
èŠäžå¯æ¬ ãªããšãã§ããæè¡çãªåé¡ãžã®å¯Ÿå¿ãè«æ±ã«é¢ãã質åã®è§£æ±ºããµãŒãã¹ã®ææ°æ
å ±ã®æäŸãªã©ã顧客ã¯ãè¿
éãã€æ£ç¢ºã§ã顧客ã®éœåã«ã«ã¹ã¿ãã€ãºããã察å¿ãæåŸ
ããŠããŸãããããããã®ã¬ãã«ã®ãµãŒãã¹ãå®çŸããã«ã¯ã倧ããªèª²é¡ã䌎ããŸãã
ããŒãœãã©ã€ãºããããªã¢ã«ã¿ã€ã ã®ãµããŒããæäŸããã«ã¯ãå€ãã®å Žåãéçãªã¹ã¯ãªãããæäœæ¥ã«ããããã»ã¹ãšãã£ãåŸæ¥ã®ã¢ãããŒãã§ã¯äžååã§ããããã«ãå€ãã®ã«ã¹ã¿ã㌠ãµãŒãã¹æ¥åã§ã¯ãæ©å¯æ§ãé«ããã€æççãªããŒã¿ãåãæ±ãããšã«ãªããå³ããããŒã¿ç®¡çãšãã©ã€ãã·ãŒèŠå¶ã®å¯Ÿè±¡ãšãªããŸããçæ AI ã®å°é ã«ãããäŒæ¥ã¯éçšå¹çã®åäžãã³ã¹ãåæžãROI ã®æ倧åã«ãã£ãŠã«ã¹ã¿ã㌠ãµãŒãã¹ã«é©åœãèµ·ããããšãç®æããŠããŸãã
AI ãæ¢åã®ã·ã¹ãã ã«çµã¿èŸŒãéã«ã¯ãéææ§ã粟床ãã»ãã¥ãªãã£ã«é¢ãã課é¡ã«çŽé¢ããå°å
¥ã劚ããã¯ãŒã¯ãããŒãäžæãããããšããããããããŸãããããããããŒãã«ãå
æããããã«ãäŒæ¥ã¯çæ AI ã掻çšããããŒãã£ã« ã¢ã·ã¹ã¿ã³ããå©çšããŠå¹
åºãã¿ã¹ã¯ã管çããæçµçã«å¿çæéãççž®ããŠããªãœãŒã¹ã解æŸããŠããŸãã
ãã®æçš¿ã§ã¯ãéçºè
ãã
AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã« NVIDIA AI Blueprint
ã䜿çšããŠãçæ AI ã§æ¥åãæ¡åŒµããæ¹æ³ã«ã€ããŠèª¬æããŸãããµã³ãã« ã³ãŒããå«ããã®æ
å ±ã掻çšããããšã§ãäŒæ¥ã¯ãããŒã¿ã®æŽåæ§ãšããŒã¿ ã¬ããã³ã¹ã確ä¿ããªãããåªããã«ã¹ã¿ã㌠ãµãŒãã¹ãžã®é«ãŸãèŠæ±ã«å¿ããããšãã§ããŸããæ¢åã®ã·ã¹ãã ã®æ¹åãŸãã¯æ°ããã·ã¹ãã ã®æ§ç¯ã«ãããããããã® Blueprint ã«ãã£ãŠããŒã ã¯å¹ççã§æå³ã®ãããããšããéããŠé¡§å®¢ã®ããŒãºã«å¯Ÿå¿ããããšãã§ããŸãã
æ€çŽ¢æ¡åŒµçæ (RAG) ã䜿çšãã AI ã¯ãšãª ãšã³ãžã³ã«ããã¹ããŒã㪠AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ã
AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ããæ§ç¯ããå Žåãç¬èªã®ãŠãŒã¹ ã±ãŒã¹èŠä»¶ããã³çµç¹ã®ç¥èãããŒãºã«åãããŠèª¿æŽããããšãéèŠã§ããåŸæ¥ã®ãããã§ã¯ãå€ãã®å Žåãæè»æ§ã®ä¹ãããã¬ãŒã ã¯ãŒã¯ãšæ代é
ãã®ã¡ãœãããå©çšããŠãããä»æ¥ã®ã«ã¹ã¿ã㌠ãµãŒãã¹ã®ãããªåžžã«å€åãç¶ããèŠæ±ã«å¯Ÿå¿ã§ããŸããã
ããããæ¥çã§ãAI ããŒã¹ã®ã¢ã·ã¹ã¿ã³ããé©æ°çãªååšãšãªãåŸãŸããããšãã°ãéä¿¡äŒç€Ÿãå°å£²ããµãŒãã¹ ãããã€ããŒã®å€§å€æ°ã¯ãAI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã䜿çšããŠã24 æé 365 æ¥çšŒåãããµããŒããæäŸããªãããå€èšèªã§å¹
åºã顧客ã®åãåããã«å¯Ÿå¿ãããã©ãã«ã·ã¥ãŒãã£ã³ã°ãã¢ã«ãŠã³ã管çãåçåããããã€ãããã¯ã§ããŒãœãã©ã€ãºããããããšããæäŸããããšã§ã顧客äœéšãåäžããããšãã§ããŸããããã«ãããåŸ
ã¡æéãççž®ããããŸããŸãªé¡§å®¢ããŒãºã«å¯ŸããŠäžè²«ãããµãŒãã¹ãæäŸããããšãã§ããŸãã
ããã²ãšã€ã®äŸãšããŠãå»çä¿éºã®æ¯ææ¥çã§ã¯ãå å
¥è
ã«ãšã£ãŠæºè¶³åºŠã®é«ãäœéšã確å®ã«æäŸããããšãéèŠã§ããããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã¯ãå»çèŠå¶ã®éµå®ã確ä¿ããªãããå å
¥è
ã«ããŒãœãã©ã€ãºããããµããŒããæäŸããè«æ±ãè£åã«é¢ããåãåããã絊ä»éãæ¯æãã«é¢ããåé¡ã«å¯ŸåŠããããšã§ãããããäœéšãåäžããŠããŸããããã«ãããå»çåŸäºè
ã®ç®¡çäžã®è² æ
ã軜æžããããšãã§ããŸãã
NVIDIA AI ãã©ãããã©ãŒã ã䜿çšããããšã§ãäŒæ¥ã¯ã
æ€çŽ¢æ¡åŒµçæ (RAG)
ã䜿çšãã AI ã¯ãšãª ãšã³ãžã³ãäœæããAI ã¢ããªã±ãŒã·ã§ã³ãäŒæ¥ããŒã¿ã«æ¥ç¶ããããšãã§ããŸããAI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã® Blueprint ã«ãããéçºè
ã¯ãããæŽç·Žããã顧客äœéšãæäŸãããœãªã¥ãŒã·ã§ã³ãè¿
éã«æ§ç¯ãéå§ããããšãã§ããŸãããã® Blueprint ã¯ã以äžã®
NVIDIA NIM
ãã€ã¯ããµãŒãã¹ã䜿çšããŠæ§ç¯ãããŸãã
LLM åã NVIDIA NIM:
æå
端ã®å€§èŠæš¡èšèªã¢ãã« (LLM) ã®ãã¯ãŒãã¢ããªã±ãŒã·ã§ã³ã«åãå
¥ãã倧å¹
ã«å¹çåããŠãåè¶ããèªç¶èšèªåŠçãæäŸããŸãã
Llama 3.1 70B Instruct NIM
:
åªããæèç解ãæšè«ãããã¹ãçæã§è€éãªäŒè©±ãå¯èœã§ãã
NVIDIA NeMo
Retriever NIM:
RAG ãã€ãã©ã€ã³ã®åºç€ãšãªãæ§æèŠçŽ ã§ããæå
端ã¢ãã«ã«ç°¡åã«ã¢ã¯ã»ã¹ã§ããŸãããã® RAG ãã€ãã©ã€ã³ã«ãã£ãŠãããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã¯äŒæ¥ããŒã¿ãžã®ã·ãŒã ã¬ã¹ãªã¢ã¯ã»ã¹ãå¯èœã«ãªããè¿
éãã€æ£ç¢ºã§ã¹ã±ãŒã©ãã«ãªåçã§ãçµç¹ã®ç¥èã掻çšã§ããŸãã
NeMo
Retriever Embedding NIM
:
ããã¹ãã® QA æ€çŽ¢ã¿ã¹ã¯ã«ç¹åãããŠãããããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã¯ãã®é«å質ã®ããã¹ãåã蟌ã¿ãå©çšããŸãã
NeMo
Retriever Reranking NIM
:
ãã¡ã€ã³ãã¥ãŒãã³ã°ããããªã©ã³ãã³ã° ã¢ãã«ã§ãããåã蟌ã¿ã¢ãã«ãšäœµçšããããšã§æ€çŽ¢æ§èœãããã«åäžãããããšãã§ããŸããå
¥åæã«æãé¢é£æ§ã®é«ãæç« ãèŠä»ãåºããLLM ã«æèãšããŠæž¡ããŸãã
ãã® Blueprint ã¯ãæ
å ±ã»ãã¥ãªãã£ã«é¢ãã矩åã«åããããšãªããæ¢åã®ã«ã¹ã¿ã㌠ãµãŒãã¹ ã¢ããªã±ãŒã·ã§ã³ãšã·ãŒã ã¬ã¹ã«çµ±åã§ããããã«èšèšãããŠããŸããNVIDIA NIM ã®ç§»æ€æ§ã®ãããã§ãäŒæ¥ã¯ãããŒã¿ãã©ãã«ãã£ãŠãçµ±åããããšãã§ããŸããçæ AI ãããŒã¿ã«åãå
¥ããããšã§ãAI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã¯ã顧客åºæã®ãããã¡ã€ã«ããŠãŒã¶ãŒãšã®å¯Ÿè©±å±¥æŽããã®ä»ã®é¢é£ããŒã¿ãªã©ã掻çšããŠãå顧客ã«åãããããããŒãœãã©ã€ãºãããäœéšãæäŸã§ããããã«ãªããŸãã
Blueprint ã¯ãäŒæ¥ç¬èªã®ãŠãŒã¹ ã±ãŒã¹ã«åãããŠã«ã¹ã¿ãã€ãºãå¯èœãª âåå°â ã®ãããªãã®ã§ããããšãã°ã
Nemotron 4 Hindi 4B Instruct
ãªã©ä»ã® NIM ãã€ã¯ããµãŒãã¹ãçµ±åããã°ãAI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ããçŸå°ã®èšèªã§ã³ãã¥ãã±ãŒã·ã§ã³ã§ããããã«ãªããŸãããã®ä»ã®ãã€ã¯ããµãŒãã¹ã«ãããåæããŒã¿ã®çæãã¢ãã«ã®ãã¡ã€ã³ãã¥ãŒãã³ã°ãªã©ã®è¿œå æ©èœãå¯èœã«ãªããç¹å®ã®ãŠãŒã¹ ã±ãŒã¹èŠä»¶ã«é©åãããããšãã§ããŸãããŸããããžã¿ã« ãã¥ãŒãã³ AI Blueprint ã«æ¥ç¶ãããšãAI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã«äººéã®ãããªã€ã³ã¿ãŒãã§ã€ã¹ãæäŸãããŸãã
ç¬èªã®ããŒã¿ (äŒæ¥ããŠãŒã¶ãŒã®ãããã¡ã€ã«ãç¹å®ã®ããŒã¿) ãåãã RAG ããã¯ãšã³ããå®è£
ããããšã§ãAI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã¯ãæèã«æ²¿ã£ã察話ãè¡ãããªã¢ã«ã¿ã€ã ã§å顧客ã®ããŒãºã®ç¹å®äºé
ã«å¯Ÿå¿ããããšãã§ããŸããããã«ããã®ãœãªã¥ãŒã·ã§ã³ã¯ãã§ã«éçšããŠããã¬ããã³ã¹ ãã¬ãŒã ã¯ãŒã¯å
ã§å®å
šã«éçšãããç¹ã«æ©å¯ããŒã¿ãæ±ãéã«ã¯ããã©ã€ãã·ãŒãšã»ãã¥ãªã㣠ãããã³ã«ã®éµå®ãä¿èšŒããŸãã
ç¬èªã® AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ããæ§ç¯ãã 3 ã€ã®æ§æèŠçŽ
éçºè
ãšããŠãæãé¢é£æ§ã®é«ãææ°ã®æ
å ±ããªã¢ã«ã¿ã€ã ã§ååŸããåžžã«äººéã®ãããªå¿çãã§ããããæ¥ã
é²åããç¬èªã® AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ããæ§ç¯ã§ããŸããå³ 1 ã¯ã3 ã€ã®æ©èœã³ã³ããŒãã³ããå«ã AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã®ã¢ãŒããã¯ãã£å³ã§ãã
å³ 1. AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãåãã® NVIDIA AI Blueprint
1. ããŒã¿ã®åã蟌ã¿ãšæ€çŽ¢ãã€ãã©ã€ã³
ãã€ãã©ã€ã³ç®¡çè
ã¯ãåã蟌㿠(Ingest) ãã€ãã©ã€ã³ã䜿çšããŠãæ§é åããŒã¿ãéæ§é åããŒã¿ãããŒã¿ããŒã¹ã«èªã¿èŸŒãããšãã§ããŸããæ§é åããŒã¿ã®äŸãšããŠã顧客ãããã¡ã€ã«ã泚æå±¥æŽãçºéç¶æ³ãªã©ããããŸããéæ§é åããŒã¿ã«ã¯ã補åããã¥ã¢ã«ã補åã«ã¿ãã°ãFAQ ããã¥ã¡ã³ããªã©ã®ãµããŒãè³æãå«ãŸããŸãã
2. AI ãšãŒãžã§ã³ã
2 ã€ç®ã®æ©èœã³ã³ããŒãã³ã㯠AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ã ã§ãããŠãŒã¶ãŒã¯ããŠãŒã¶ãŒ ã€ã³ã¿ãŒãã§ã€ã¹ãä»ããŠããŒãã£ã« ã¢ã·ã¹ã¿ã³ããšå¯Ÿè©±ããŸãããšãŒãžã§ã³ãå LLM ããã°ã©ãã³ã° ãã¬ãŒã ã¯ãŒã¯ã§ãã LangGraph ã§å®è£
ããã AI ãšãŒãžã§ã³ããã顧客ããã®è€éãªåãåããã«å¯Ÿå¿ããæ¹æ³ãèšç»ãããã®åãåãããååž°çã«è§£æ±ºããŸããLangGraph ãšãŒãžã§ã³ãã¯
Llama3.1 70B Instruct NIM
ã®ããŒã«åŒã³åºãæ©èœã䜿çšããŠãéæ§é åããŒã¿ãšæ§é åããŒã¿ã®äž¡æ¹ããæ
å ±ãååŸããæ£ç¢ºãªå¿çãçæããŸãã
ãŸã AI ãšãŒãžã§ã³ãã«ãããçæã¡ã¢ãªãšé·æã¡ã¢ãªã®æ©èœã䜿çšããŠãã«ãã¿ãŒã³ã®å¯Ÿè©±å±¥æŽãå®çŸã§ããŸããã¢ã¯ãã£ããªäŒè©±ã«å¯Ÿããåãåãããå¿çãåã蟌ãŸããŠãããããäŒè©±ã®åŸåã§è¿œå ã®æèãšããŠæ€çŽ¢ãå©çšã§ããŸããããã«ããããã人éã«è¿ããããšããå¯èœã«ãªãã顧客ããã§ã«ãšãŒãžã§ã³ããšå
±æããæ
å ±ãç¹°ãè¿ãæäŸããå¿
èŠããªããªããŸãã
æçµçã«ãäŒè©±ã®æåŸã« AI ãšãŒãžã§ã³ããææ
ã®å€å®ãšãšãã«è°è«ãèŠçŽããæ§é åããŒã¿ããŒã¹ã«äŒè©±å±¥æŽãä¿åããŸãããŠãŒã¶ãŒãšã®å¯Ÿè©±ã¯ãä»åŸã®äŒè©±ã§è¿œå ã®æèãšããŠæ€çŽ¢ã§ããŸããé話ã®èŠçŽãšäŒè©±å±¥æŽãæ€çŽ¢ããããšã§ãé話æéãççž®ãã顧客äœéšãåäžãããããšãã§ããŸããææ
å€å®ã«ãã£ãŠããšãŒãžã§ã³ãã®æå¹æ§ã«é¢ãã貎éãªæŽå¯ãã«ã¹ã¿ã㌠ãµãŒãã¹ç®¡çè
ã«æäŸã§ããŸãã
3. éçšãã€ãã©ã€ã³
顧客éçšãã€ãã©ã€ã³ã¯ããœãªã¥ãŒã·ã§ã³å
šäœã® 3 ã€ç®ã®æ§æèŠçŽ ã§ãããã®ãã€ãã©ã€ã³ã¯ãã«ã¹ã¿ã㌠ãµãŒãã¹ ãªãã¬ãŒã¿ãŒã«éèŠãªæ
å ±ãšæŽå¯ãæäŸããŸãã管çè
ã¯ãéçšãã€ãã©ã€ã³ã䜿çšããŠããã£ããå±¥æŽããŠãŒã¶ãŒã®ãã£ãŒãããã¯ãææ
åæããŒã¿ãé話ã®èŠçŽã確èªããããšãã§ããŸããLlama 3.1 70B Instruct NIM ã掻çšããåæãã€ã¯ããµãŒãã¹ã䜿çšããŠãå¹³åé話æéã解決ãŸã§ã®æéã顧客æºè¶³åºŠãªã©ã®åæãçæã§ããŸãããŸãåæçµæã¯ããŠãŒã¶ãŒ ãã£ãŒãããã¯ãšããŠã掻çšãããLLM ã¢ãã«ãåãã¬ãŒãã³ã°ããŠç²ŸåºŠãåäžããŸãã
NVIDIA ããŒãããŒãšæ¬çªç°å¢ã«çæ
NVIDIA ã®ã³ã³ãµã«ãã£ã³ã° ããŒãããŒã¯ãåäŒæ¥ããNVIDIA ã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã°ãšãNeMoãNIM ãã€ã¯ããµãŒãã¹ãAI Blueprint ãå«ã
NVIDIA AI Enterprise ãœãããŠã§ã¢
ã§æ§ç¯ãããäžçæ°Žæºã® AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ããå°å
¥ã§ããããã«æ¯æŽããŠããŸãã
Accenture
NVIDIA AI Foundry
äžã«æ§ç¯ããã
Accenture AI Refinery
ã¯ãèªåŸçã§é¡§å®¢ã®æå³ã«æ²¿ã£ã察話ãèšèšããäŒæ¥ãããžã¿ã« ãã¥ãŒãã³ãã€ã³ã¿ã©ã¯ã·ã§ã³ ãšãŒãžã§ã³ããªã©ã®é©æ°çãªãã£ãã«ãéããŠãå人ã«åãããŠã«ã¹ã¿ãã€ãºã§ããããã«ããŸããç¹å®ã®ãŠãŒã¹ ã±ãŒã¹ã¯ãéä¿¡äŒç€Ÿã®ã³ãŒã« ã»ã³ã¿ãŒãä¿éºå¥çŽã®ã¢ããã€ã¶ãŒãå»è¬åã®ã€ã³ã¿ã©ã¯ãã£ã ãšãŒãžã§ã³ããèªåè»ãã£ãŒã©ãŒã®ãããã¯ãŒã¯ ãšãŒãžã§ã³ããªã©ãåæ¥çã®ããŒãºã«åãããŠã«ã¹ã¿ãã€ãºã§ããŸãã
Deloitte
Deloitte Frontline AI ã¯ãNVIDIA ACEãNVIDIA OmniverseãNVIDIA RivaãNIM ãªã©ã® NVIDIA ã®ãã¯ãããžã«ãã£ãŠå éããã NVIDIA AI Blueprint ãå©çšããŠæ§ç¯ãããããžã¿ã« ã¢ãã¿ãŒã LLM ãšãŒãžã§ã³ãã§ã«ã¹ã¿ã㌠ãµãŒãã¹äœéšãåäžããŠããŸãã
Wipro
Wipro Enterprise Generative AI (WeGA) Studio ã¯ããã«ã¹ã±ã¢ãéèãµãŒãã¹ãå°å£²ãªã©ã®ã³ã³ã¿ã¯ã ã»ã³ã¿ãŒã®ãšãŒãžã§ã³ããå«ãæ¥çåºæã®ãŠãŒã¹ ã±ãŒã¹ãå éããŠããŸãã
Tech Mahindra
Tech Mahindra ã¯ãããžã¿ã« ãã¥ãŒãã³åãã® NVIDIA AI Blueprint ã掻çšããŠãã«ã¹ã¿ã㌠ãµãŒãã¹åãã®ãœãªã¥ãŒã·ã§ã³ãæ§ç¯ããŠããŸããRAG ãš NVIDIA NeMo ã䜿çšãããã®ãœãªã¥ãŒã·ã§ã³ã¯ããã¬ãŒãã³ã°åè¬è
ããäŒè©±äžã«æãæããŠæ確ãªè³ªåãããããšã§ããšãŒãžã§ã³ããæ¢ããæ©èœãæäŸããŸãããã®ã·ã¹ãã ã¯ãå€ãã®æ¥çã®ãŠãŒã¹ ã±ãŒã¹ã§ãããã€ã§ããæŽç·ŽãããåŠç¿ç®¡çã·ã¹ãã ã§ãããã¯ãšã³ãã®ãã€ã¯ããµãŒãã¹ãšæ¥ç¶ããããã«èšèšãããŠããŸãã
Infosys
Infosys Topaz
ã®äžéšã§ãã
Infosys Cortex
ã¯ãAI ã掻çšãã顧客ãšã³ã²ãŒãžã¡ã³ã ãã©ãããã©ãŒã ã§ãããçæ AIãã¹ããŒã AIãããžã¿ã« ãã¥ãŒãã³æ©èœãå®çŸãã NVIDIA AI Blueprint ãš NVIDIA NeMoãRivaãACE æè¡ãçµ±åããã«ã¹ã¿ã㌠ãµãŒãã¹çµç¹ã®ããããã¡ã³ããŒã«å°éçã§å人ã«åãããããã¢ã¯ãã£ããã€ãªã³ããã³ãã®æ¯æŽãæäŸããããšã§ã顧客äœéšã®åäžãéçšå¹çã®æ¹åãã³ã¹ãåæžã«éèŠãªåœ¹å²ãæãããŸãã
Tata Consultancy Services
NVIDIA NIM ãæèŒã ServiceNow ã® IT ä»®æ³ãšãŒãžã§ã³ããšçµ±åããã Tata Consultancy Services (TCS) ã®ä»®æ³ãšãŒãžã§ã³ãã¯ãIT ãš HR ã®ãµããŒããæé©åããããã«èšèšãããŠããŸãããã®ãœãªã¥ãŒã·ã§ã³ã¯ãããã³ãã ãã¥ãŒãã³ã°ãš RAG ã䜿çšããŠãå¿çæéã粟床ãåäžããããã«ãã¿ãŒã³ã®äŒè©±æ©èœãæäŸããŸãããµãŒãã¹ ãã¹ã¯ã®ã³ã¹ãåæžããµããŒã ãã±ããã®æžå°ããã¬ããžæŽ»çšã®åŒ·åãããè¿
éãªãããã€ããããŠåŸæ¥å¡ãšé¡§å®¢ã®å
šäœçãªäœéšã®åäžãªã©ã®ã¡ãªããããããŸãã
Quantiphi
Quantiphi
ã¯ãNVIDIA AI Blueprint ã察話å AI ãœãªã¥ãŒã·ã§ã³ã«çµ±åãããªã¢ã«ãªããžã¿ã« ã¢ãã¿ãŒã§ã«ã¹ã¿ã㌠ãµãŒãã¹ã匷åããŠããŸããNVIDIA Tokkio ãš ACEã
NVIDIA NIM ãã€ã¯ããµãŒãã¹
ã
NVIDIA NeMo
ãæèŒããæå
端ã®ã¢ãã¿ãŒããæ¢åã®ãšã³ã¿ãŒãã©ã€ãº ã¢ããªã±ãŒã·ã§ã³ãšã·ãŒã ã¬ã¹ã«çµ±åãããªã¢ãªãã£ãé«ããªããéçšãšé¡§å®¢äœéšãåäžãããŸããããžã¿ã« ã¢ãã¿ãŒ ã¯ãŒã¯ãããŒã«ãã¡ã€ã³ãã¥ãŒãã³ã°ããã NIM ã®ãããã€ã¯ãè²»çšå¯Ÿå¹æãé«ããäŒæ¥ã®ããŒã¯ã³ã«å¯Ÿããæ¯åºãåæžããããšãå®èšŒãããŠããŸãã
SoftServe
SoftServe Digital Concierge
ã¯ãNVIDIA AI Blueprint ãš NVIDIA NIM ãã€ã¯ããµãŒãã¹ã«ãã£ãŠå éãããŠãããNVIDIA ACEãNVIDIA RivaãNVIDIA Audio2Face NIM ãã€ã¯ããµãŒãã¹ã䜿çšããŠãéåžžã«ãªã¢ã«ãªããŒãã£ã« ã¢ã·ã¹ã¿ã³ããæäŸããŸããCharacter Creator ããŒã«ã䜿çšããããšã§ãé³å£°ãé¡ã®è¡šæ
ãé©ãã»ã©æ£ç¢ºãã€ãªã¢ã«ã«è©³çŽ°ãåçŸã§ããŸãã
NVIDIA NeMo Retriever ã® RAG æ©èœã«ãããSoftServe Digital Concierge ã¯ãæèãåç
§ããç¹å®ã®ææ°æ
å ±ãæäŸããããšã§ã顧客ããã®åãåããã«ã€ã³ããªãžã§ã³ãã«å¯Ÿå¿ã§ããŸããè€éãªåãåãããç°¡çŽ åããæ確ã§ç°¡æœãªåçã«ãŸãšããå¿
èŠã«å¿ããŠè©³çŽ°ãªèª¬æãæäŸããããšãã§ããŸãã
EXL
EXL ã® Smart Agent Assist 補åã¯ãNVIDIA RivaãNVIDIA NeMoãNVIDIA NIM ãã€ã¯ããµãŒãã¹ã掻çšããã³ã³ã¿ã¯ã ã»ã³ã¿ãŒ AI ãœãªã¥ãŒã·ã§ã³ã§ããEXL ã¯ãAI ä»®æ³ãšãŒãžã§ã³ãåãã® NVIDIA AI Blueprint ã䜿çšããŠããœãªã¥ãŒã·ã§ã³ã匷åããäºå®ã§ãã
NVIDIA AI Summit India
ã§ãNVIDIA ã³ã³ãµã«ãã£ã³ã° ããŒãããŒããã€ã³ãã AI ã®ããã³ã ãªãã£ã¹ã«å€é©ããããã«ãNVIDIA ãšã®ã³ã©ãã¬ãŒã·ã§ã³ãçºè¡šããŸãããNVIDIA ãã¯ãããžã䜿çšããããšã§ããããã®ã³ã³ãµã«ãã£ã³ã°å€§æã¯ã顧客ãã«ã¹ã¿ã㌠ãµãŒãã¹ ãšãŒãžã§ã³ãã® Blueprint ãã«ã¹ã¿ãã€ãºãã奜ã¿ã® AI ã¢ãã« (ã€ã³ãã«æ ç¹ã眮ãã¢ãã« ã¡ãŒã«ãŒãæäŸãããœããªã³ LLM ãå«ã) ã䜿çšããŠç¬èªã®ããŒãã£ã« ã¢ã·ã¹ã¿ã³ããæ§ç¯ããåžæã®ã€ã³ãã©ã§å¹ççã«æ¬çªçšŒåã§ããããã«ããŸãã
ä»ããå§ãã
Blueprint ãç¡æã§è©Šããããã·ã¹ãã èŠä»¶ã確èªããã«ã¯ã
Blueprint ã«ãŒã
ããåç
§ãã ããããããã®ãã€ã¯ããµãŒãã¹ã䜿çšããŠã¢ããªã±ãŒã·ã§ã³ã®æ§ç¯ãå§ããã«ã¯ã
NVIDIA API ã«ã¿ãã°
ã«ã¢ã¯ã»ã¹ããŠãã ããã
ãµã€ã³ã€ã³
ããã«ã¯ãNIM ã§æ§ç¯ããããŸããŸãªãªãã·ã§ã³ã«ã¢ã¯ã»ã¹ãããããå人çšãŸãã¯ããžãã¹çšã®ã¡ãŒã« ã¢ãã¬ã¹ãå
¥åããå¿
èŠããããŸãã詳现ã«ã€ããŠã¯ã
NVIDIA NIM FAQ
ãã芧ãã ããã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
éèéšéåãã®å®å
šã§å¹ççãªããŒãã£ã« ã¢ã·ã¹ã¿ã³ã
GTC ã»ãã·ã§ã³:
çæ AI ã®èª²é¡ãžã®å¯Ÿå¿ãšå¯èœæ§ã®æŽ»çš: NVIDIA ã®ãšã³ã¿ãŒãã©ã€ãº ãããã€ããåŸãããæŽå¯
NGC ã³ã³ãããŒ:
retail-shopping-advisor-chatbot-service
NGC ã³ã³ãããŒ:
retail-shopping-advisor-frontend-service
ãŠã§ãããŒ:
éèãµãŒãã¹ ã³ã³ã¿ã¯ã ã»ã³ã¿ãŒåãã® AI é³å£°å¯Ÿå¿ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã®æ§ç¯ãšå°å
¥æ¹æ³
ãŠã§ãããŒ:
éä¿¡äŒæ¥ã察話å AI ã§é¡§å®¢äœéšãå€é©ããæ¹æ³ |
https://developer.nvidia.com/blog/hymba-hybrid-head-architecture-boosts-small-language-model-performance/ | Hymba Hybrid-Head Architecture Boosts Small Language Model Performance | Transformers, with their attention-based architecture, have become the dominant choice for language models (LMs) due to their strong performance, parallelization capabilities, and long-term recall through key-value (KV) caches. However, their quadratic computational cost and high memory demands pose efficiency challenges. In contrast, state space models (SSMs) like Mamba and Mamba-2 offer constant complexity and efficient hardware optimization but struggle with memory recall tasks, affecting their performance on general benchmarks.
NVIDIA researchers recently proposed
Hymba
, a family of small language models (SLMs) featuring a hybrid-head parallel architecture that integrates transformer attention mechanisms with SSMs to achieve both enhanced efficiency and improved performance. In Hymba, attention heads provide high-resolution recall, while SSM heads enable efficient context summarization.
The novel architecture of Hymba reveals several insights:
Overhead in attention:
Over 50% of attention computation can be replaced by cheaper SSM computation.
Local attention dominance:
Most global attention can be replaced by local attention without sacrificing performance on general and recall-intensive tasks, thanks to the global information summarized by SSM heads.
KV cache redundancy:
Key-value cache is highly correlated across heads and layers, so it can be shared across heads (group query attention) and layers (cross-layer KV cache sharing).
Softmax attention limitation:
Attention weights are constrained to sum to one, limiting sparsity and flexibility. We introduce learnable meta-tokens that are prepended to prompts, storing critical information and alleviating the "forced-to-attend" burden associated with attention mechanisms.
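To see why the KV cache redundancy matters, a quick back-of-the-envelope calculation (with illustrative numbers, not Hymba's exact configuration) shows how grouped-query attention and cross-layer sharing shrink the cache:

```python
def kv_cache_mb(layers, kv_heads, head_dim, seq_len, bytes_per_elem=2):
    # 2x for keys and values; 2 bytes per element assumes FP16/BF16 storage.
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem / 1e6

full   = kv_cache_mb(layers=32, kv_heads=32, head_dim=128, seq_len=8192)  # full multi-head KV
gqa    = kv_cache_mb(layers=32, kv_heads=8,  head_dim=128, seq_len=8192)  # grouped-query attention
shared = gqa / 2  # roughly halved again when consecutive layers share a KV cache

print(f"full: {full:.0f} MB, GQA: {gqa:.0f} MB, GQA + cross-layer sharing: {shared:.0f} MB")
```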
This post shows that Hymba 1.5B performs favorably against state-of-the-art open-source models of similar size, including Llama 3.2 1B, OpenELM 1B, Phi 1.5, SmolLM2 1.7B, Danube2 1.8B, and Qwen2.5 1.5B. Compared to Transformer models of similar size, Hymba also achieves higher throughput and requires 10x less memory to store cache.
Hymba 1.5B is released to the
Hugging Face
collection and
GitHub
.
Hymba 1.5B performance
Figure 1 compares Hymba 1.5B against sub-2B models (Llama 3.2 1B, OpenELM 1B, Phi 1.5, SmolLM2 1.7B, Danube2 1.8B, Qwen2.5 1.5B) in terms of average task accuracy, cache size (MB) relative to sequence length, and throughput (tok/sec).
Figure 1. Performance comparison of Hymba 1.5B Base against sub-2B models
In this set of experiments, the tasks include MMLU, ARC-C, ARC-E, PIQA, Hellaswag, Winogrande, and SQuAD-C. The throughput is measured on an NVIDIA A100 GPU with a sequence length of 8K and a batch size of 128 using PyTorch. For models that encountered out of memory (OOM) issues during throughput measurement, the batch size was halved until the OOM error was resolved, in order to measure the maximal achievable throughput without OOM.
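A sketch of that batch-size backoff loop is shown below, using Hugging Face `generate` as a stand-in for the actual benchmarking harness; the model ID, sequence lengths, and generation settings are assumptions for illustration.

```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "nvidia/Hymba-1.5B-Base"  # assumed Hugging Face model ID
tok = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.bfloat16, trust_remote_code=True).cuda()

batch, seq_len, gen_len = 128, 8192, 128
while batch >= 1:
    try:
        ids = torch.randint(0, tok.vocab_size, (batch, seq_len), device="cuda")
        torch.cuda.synchronize()
        start = time.time()
        model.generate(ids, max_new_tokens=gen_len, do_sample=False)
        torch.cuda.synchronize()
        print(f"{batch * gen_len / (time.time() - start):.1f} tok/sec at batch size {batch}")
        break
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()
        batch //= 2  # halve the batch size and retry
```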
Hymba model design
SSMs such as Mamba were introduced to address the quadratic complexity and large inference-time KV cache issues of transformers. However, due to their low-resolution memory, SSMs struggle with memory recall and performance. To overcome these limitations, we propose a road map for developing efficient and high-performing small LMs in Table 1.
| Configuration | Commonsense reasoning (%) ↑ | Recall (%) ↑ | Throughput (token/sec) ↑ | Cache size (MB) ↓ | Design reason |
|---|---|---|---|---|---|
| *Ablations on 300M model size and 100B training tokens* | | | | | |
| Transformer (Llama) | 44.08 | 39.98 | 721.1 | 414.7 | Accurate recall while inefficient |
| State-space models (Mamba) | 42.98 | 19.23 | 4720.8 | 1.9 | Efficient while inaccurate recall |
| A. + Attention heads (sequential) | 44.07 | 45.16 | 776.3 | 156.3 | Enhance recall capabilities |
| B. + Multi-head heads (parallel) | 45.19 | 49.90 | 876.7 | 148.2 | Better balance of two modules |
| C. + Local / global attention | 44.56 | 48.79 | 2399.7 | 41.2 | Boost compute/cache efficiency |
| D. + KV cache sharing | 45.16 | 48.04 | 2756.5 | 39.4 | Cache efficiency |
| E. + Meta-tokens | 45.59 | 51.79 | 2695.8 | 40.0 | Learned memory initialization |
| *Scaling to 1.5B model size and 1.5T training tokens* | | | | | |
| F. + Size / data | 60.56 | 64.15 | 664.1 | 78.6 | Further boost task performance |
| G. + Extended context length (2K→8K) | 60.64 | 68.79 | 664.1 | 78.6 | Improve multishot and recall tasks |
Table 1. Design road map of the Hymba model
Fused hybrid modules
According to the ablation study, fusing attention and SSM heads in parallel within a hybrid-head module outperforms stacking them sequentially. Hymba therefore fuses attention and SSM heads in parallel within a hybrid-head module, enabling both head types to process the same information simultaneously. This architecture improves reasoning and recall accuracy.
Figure 2. The hybrid-head module in Hymba
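To illustrate the idea (not Hymba's actual implementation), here is a conceptual PyTorch sketch of a parallel hybrid block: the same input flows through an attention branch and an SSM-like branch, and their normalized outputs are fused. The SSM branch below is a simple causal-convolution stand-in rather than a real Mamba head, causal masking of the attention branch is omitted for brevity, and the fusion rule is simplified relative to the paper.

```python
import torch
import torch.nn as nn

class ParallelHybridHead(nn.Module):
    def __init__(self, d_model, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Stand-in "SSM" branch: a causal depthwise conv followed by a learned gate.
        self.ssm_conv = nn.Conv1d(d_model, d_model, kernel_size=4,
                                  padding=3, groups=d_model)
        self.gate = nn.Linear(d_model, d_model)
        self.norm_attn = nn.LayerNorm(d_model)
        self.norm_ssm = nn.LayerNorm(d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x):                      # x: [batch, seq, d_model]
        a, _ = self.attn(x, x, x, need_weights=False)
        s = self.ssm_conv(x.transpose(1, 2))[..., : x.size(1)].transpose(1, 2)
        s = torch.sigmoid(self.gate(x)) * s
        # Normalize each branch before fusing so neither dominates by scale.
        return self.out(self.norm_attn(a) + self.norm_ssm(s))

y = ParallelHybridHead(64)(torch.randn(2, 16, 64))
print(y.shape)   # torch.Size([2, 16, 64])
```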
Efficiency and KV cache optimization
While attention heads improve task performance, they increase KV cache requirements and reduce throughput. To mitigate this, Hymba optimizes the hybrid-head module by combining local and global attention and employing cross-layer KV cache sharing. This improves throughput by 3x and reduces cache by almost 4x without sacrificing performance.
Figure 3. Hymba model architecture
Meta-tokens
Meta-tokens are a set of 128 pretrained embeddings prepended to inputs, functioning as a learned cache initialization that enhances focus on relevant information. These tokens serve a dual purpose:
Mitigating attention drain by acting as backstop tokens, redistributing attention effectively
Encapsulating compressed world knowledge
Figure 4. Interpretation of Hymba from the memory aspect
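Mechanically, the prepending step can be pictured with the minimal sketch below; the embedding dimension and initialization scale are arbitrary choices for illustration, while the count of 128 follows the text.

```python
import torch
import torch.nn as nn

class MetaTokenPrepender(nn.Module):
    def __init__(self, num_meta=128, d_model=512):
        super().__init__()
        # Learnable meta-token embeddings, trained along with the rest of the model.
        self.meta = nn.Parameter(torch.randn(num_meta, d_model) * 0.02)

    def forward(self, x):                       # x: [batch, seq, d_model]
        meta = self.meta.unsqueeze(0).expand(x.size(0), -1, -1)
        return torch.cat([meta, x], dim=1)      # [batch, 128 + seq, d_model]

out = MetaTokenPrepender()(torch.randn(2, 10, 512))
print(out.shape)   # torch.Size([2, 138, 512])
```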
Model analysis
This section presents an apples-to-apples comparison across different architectures under the same training settings. We then visualize the attention maps of SSM and Attention in different pretrained models. Finally, we perform head importance analysis for Hymba through pruning. All the analyses in this section help to illustrate how and why the design choices for Hymba are effective.
Apples-to-apples comparison
We performed an apples-to-apples comparison of Hymba, pure Mamba2, Mamba2 with FFN, Llama3 style, and Samba style (Mamba-FFN-Attn-FFN) architectures. All models have 1 billion parameters and are trained from scratch for 100 billion tokens from SmolLM-Corpus with exactly the same training recipe. All results are obtained through lm-evaluation-harness using a zero-shot setting on Hugging Face models. Hymba performs the best on commonsense reasoning as well as question answering and recall-intensive tasks.
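For reference, a comparable zero-shot evaluation can be reproduced with lm-evaluation-harness's Python entry point along the lines of the sketch below; the model ID and exact keyword arguments may vary across harness versions, so treat it as illustrative rather than the evaluation script used here.

```python
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=nvidia/Hymba-1.5B-Base,trust_remote_code=True",
    tasks=["hellaswag", "winogrande", "piqa", "arc_easy", "arc_challenge"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])
```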
Table 2 compares various model architectures on language modeling and recall-intensive and commonsense reasoning tasks, with Hymba achieving strong performance across metrics. Hymba demonstrates the lowest perplexity in language tasks (18.62 for Wiki and 10.38 for LMB) and solid results in recall-intensive tasks, particularly in SWDE (54.29) and SQuAD-C (44.71), leading to the highest average score in this category (49.50).
| Model | Language (PPL) ↓ | Recall intensive (%) ↑ | Commonsense reasoning (%) ↑ |
|---|---|---|---|
| Mamba2 | 15.88 | 43.34 | 52.52 |
| Mamba2 w/ FFN | 17.43 | 28.92 | 51.14 |
| Llama3 | 16.19 | 47.33 | 52.82 |
| Samba | 16.28 | 36.17 | 52.83 |
| Hymba | 14.5 | 49.5 | 54.57 |
Table 2. Comparison of architectures trained on 100 billion tokens under the same settings
In commonsense reasoning and question answering, Hymba outperforms other models in most tasks, such as SIQA (31.76) and TruthfulQA (31.64), with an average score of 54.57, slightly above Llama3 and Mamba2. Overall, Hymba stands out as a balanced model, excelling in both efficiency and task performance across diverse categories.
Attention map visualization
We further categorized elements in the attention map into four types:
Meta:
Attention scores from all real tokens to meta-tokens. This category reflects the modelâs preference for attending to meta-tokens. In attention maps, they are usually located in the first few columns (for example, 128 for Hymba) if a model has meta-tokens.
BOS:
Attention scores from all real tokens to the beginning-of-sequence token. In the attention map, they are usually located in the first column right after the meta-tokens.
Self:
Attention scores from all real tokens to themselves. In the attention map, they are usually located in the diagonal line.
Cross:
Attention scores from all real tokens to other real tokens. In the attention map, they are usually located in the off-diagonal area.
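One way to compute these four quantities from a single attention map is sketched below. The column layout ([meta tokens | real tokens] with BOS as the first real token) and the decision to count BOS-to-BOS under Self are bookkeeping assumptions for illustration and may differ from the paper's exact accounting.

```python
import torch

def attention_by_category(attn: torch.Tensor, num_meta: int = 128) -> dict:
    # attn: [num_real, num_meta + num_real]; rows are real tokens (BOS first),
    # columns are [meta tokens | real tokens], each row summing to 1 after softmax.
    real = attn[:, num_meta:]                    # scores onto real tokens
    meta = attn[:, :num_meta].sum()
    self_ = torch.diagonal(real).sum()           # includes BOS attending to itself
    bos = real[1:, 0].sum()                      # non-BOS real tokens -> BOS
    cross = real.sum() - self_ - bos
    total = attn.sum()
    return {"Meta": (meta / total).item(), "BOS": (bos / total).item(),
            "Self": (self_ / total).item(), "Cross": (cross / total).item()}

scores = torch.softmax(torch.randn(16, 128 + 16), dim=-1)
print(attention_by_category(scores))
```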
The attention pattern of Hymba is significantly different from that of vanilla Transformers. In vanilla Transformers, attention scores are more concentrated on BOS, which is consistent with the findings in Attention Sink. In addition, vanilla Transformers also have a higher proportion of Self attention scores. In Hymba, meta-tokens, attention heads, and SSM heads complement one another, leading to a more balanced distribution of attention scores across different types of tokens.
Specifically, meta-tokens offload the attention scores from BOS, enabling the model to focus more on the real tokens. SSM heads summarize the global context, which focuses more on current tokens (Self attention scores). Attention heads, on the other hand, pay less attention to Self and BOS tokens, and more attention to other tokens (that is, Cross attention scores). This suggests that the hybrid-head design of Hymba can effectively balance the attention distribution across different types of tokens, potentially leading to better performance.
Figure 5. Schematics of the attention map of Hymba as a combination of meta-tokens, sliding window attention, and Mamba contributions
Figure 6. Sum of the attention score from different categories in Llama 3.2 3B and Hymba 1.5B
Heads importance analysis
We analyzed the relative importance of attention and SSM heads in each layer by removing them and recording the final accuracy. Our analysis reveals the following:
The relative importance of attention/SSM heads in the same layer is input-adaptive and varies across tasks, suggesting that they can serve different roles when handling various inputs.
The SSM head in the first layer is critical for language modeling, and removing it causes a substantial accuracy drop to random guess levels.
Generally, removing one attention/SSM head results in an average accuracy drop of 0.24%/1.1% on Hellaswag, respectively.
Figure 7. The achieved accuracy, measured using 1K samples from Hellaswag, after removing the Attention or SSM heads in each layer
Model architecture and training best practices
This section outlines key architectural decisions and training methodologies for Hymba 1.5B Base and Hymba 1.5B Instruct.
Model architecture
Hybrid architecture:
Mamba is great at summarization and usually focuses more closely on the current token, while attention is more precise and acts as snapshot memory. Combining them in parallel merges these benefits, but standard sequential fusion does not. We chose a 5:1 parameter ratio between SSM and attention heads.
Sliding window attention:
Full attention heads are preserved in three layers (first, last, and middle), with sliding window attention heads used in the remaining 90% of layers.
Cross-layer KV cache sharing:
Implemented between every two consecutive attention layers. It is done in addition to GQA KV cache sharing between heads.
Meta-tokens:
These 128 tokens are learnable with no supervision, helping to avoid entropy collapse problems in large language models (LLMs) and mitigate the attention sink phenomenon. Additionally, the model stores general knowledge in these tokens.
Training best practices
Pretraining:
We opted for two-stage base model training. Stage 1 maintained a constant, large learning rate and used less filtered, large-scale corpus data. The learning rate was then continuously decayed to 1e-5 using high-quality data. This approach makes it possible to continue training and to resume Stage 1.
Instruction fine-tuning:
Instruct model tuning is performed in three stages. First, SFT-1 provides the model with strong reasoning abilities by training on code, math, function calling, role play, and other task-specific data. Second, SFT-2 teaches the model to follow human instructions. Finally, DPO is leveraged to align the model with human preferences and improve the modelâs safety.
Figure 8. Training pipeline adapted for the Hymba model family
Performance and efficiency evaluation
With only 1.5T pretraining tokens, the Hymba 1.5B model performs the best among all small LMs and achieves better throughput and cache efficiency than all transformer-based LMs.
For example, when benchmarking against the strongest baseline, Qwen2.5, which is pretrained on 13x more tokens, Hymba 1.5B achieves a 1.55% average accuracy improvement, 1.41x throughput, and 2.90x cache efficiency. Compared to the strongest small LM trained on fewer than 2T tokens, namely h2o-danube2, our method achieves a 5.41% average accuracy improvement, 2.45x throughput, and 6.23x cache efficiency.
| Model | # Params | Train tokens | Token per sec | Cache (MB) | MMLU 5-shot | ARC-E 0-shot | ARC-C 0-shot | PIQA 0-shot | Wino. 0-shot | Hella. 0-shot | SQuAD-C 1-shot | Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| OpenELM | 1.1B | 1.5T | 246 | 346 | 27.06 | 62.37 | 19.54 | 74.76 | 61.8 | 48.37 | 45.38 | 48.57 |
| Rene v0.1 | 1.3B | 1.5T | 800 | 113 | 32.94 | 67.05 | 31.06 | 76.49 | 62.75 | 51.16 | 48.36 | 52.83 |
| Phi 1.5 | 1.3B | 0.15T | 241 | 1573 | 42.56 | 76.18 | 44.71 | 76.56 | 72.85 | 48 | 30.09 | 55.85 |
| SmolLM | 1.7B | 1T | 238 | 1573 | 27.06 | 76.47 | 43.43 | 75.79 | 60.93 | 49.58 | 45.81 | 54.15 |
| Cosmo | 1.8B | .2T | 244 | 1573 | 26.1 | 62.42 | 32.94 | 71.76 | 55.8 | 42.9 | 38.51 | 47.2 |
| h2o-danube2 | 1.8B | 2T | 271 | 492 | 40.05 | 70.66 | 33.19 | 76.01 | 66.93 | 53.7 | 49.03 | 55.65 |
| Llama 3.2 1B | 1.2B | 9T | 535 | 262 | 32.12 | 65.53 | 31.39 | 74.43 | 60.69 | 47.72 | 40.18 | 50.29 |
| Qwen2.5 | 1.5B | 18T | 469 | 229 | 60.92 | 75.51 | 41.21 | 75.79 | 63.38 | 50.2 | 49.53 | 59.51 |
| AMD OLMo | 1.2B | 1.3T | 387 | 1049 | 26.93 | 65.91 | 31.57 | 74.92 | 61.64 | 47.3 | 33.71 | 48.85 |
| SmolLM2 | 1.7B | 11T | 238 | 1573 | 50.29 | 77.78 | 44.71 | 77.09 | 66.38 | 53.55 | 50.5 | 60.04 |
| Llama 3.2 3B | 3.0B | 9T | 191 | 918 | 56.03 | 74.54 | 42.32 | 76.66 | 69.85 | 55.29 | 43.46 | 59.74 |
| Hymba | 1.5B | 1.5T | 664 | 79 | 51.19 | 76.94 | 45.9 | 77.31 | 66.61 | 53.55 | 55.93 | 61.06 |
Table 3. Hymba 1.5B Base model results
Instructed models
The Hymba 1.5B Instruct model achieves the highest average performance across all tasks, outperforming the previous state-of-the-art model, Qwen 2.5 Instruct, by around 2%. Specifically, Hymba 1.5B surpasses all other models in GSM8K/GPQA/BFCLv2 with scores of 58.76/31.03/46.40, respectively. These results indicate the superiority of Hymba 1.5B, particularly in areas requiring complex reasoning capabilities.
| Model | # Params | MMLU ↑ | IFEval ↑ | GSM8K ↑ | GPQA ↑ | BFCLv2 ↑ | Avg. ↑ |
|---|---|---|---|---|---|---|---|
| SmolLM | 1.7B | 27.80 | 25.16 | 1.36 | 25.67 | -* | 20.00 |
| OpenELM | 1.1B | 25.65 | 6.25 | 56.03 | 21.62 | -* | 27.39 |
| Llama 3.2 | 1.2B | 44.41 | 58.92 | 42.99 | 24.11 | 20.27 | 38.14 |
| Qwen2.5 | 1.5B | 59.73 | 46.78 | 56.03 | 30.13 | 43.85 | 47.30 |
| SmolLM2 | 1.7B | 49.11 | 55.06 | 47.68 | 29.24 | 22.83 | 40.78 |
| Hymba 1.5B | 1.5B | 52.79 | 57.14 | 58.76 | 31.03 | 46.40 | 49.22 |
Table 4. Hymba 1.5B Instruct model results
Conclusion
The new Hymba family of small LMs features a hybrid-head architecture that combines the high-resolution recall capabilities of attention heads with the efficient context summarization of SSM heads. To further optimize the performance of Hymba, learnable meta-tokens are introduced to act as a learned cache for both attention and SSM heads, enhancing the modelâs focus on salient information. Through the road map of Hymba, comprehensive evaluations, and ablation studies, Hymba sets new state-of-the-art performance across a wide range of tasks, achieving superior results in both accuracy and efficiency. Additionally, this work provides valuable insights into the advantages of hybrid-head architectures, offering a promising direction for future research in efficient LMs.
Learn more about
Hymba 1.5B Base
and
Hymba 1.5B Instruct
.
Acknowledgments
This work would not have been possible without contributions from many people at NVIDIA, including Wonmin Byeon, Zijia Chen, Ameya Sunil Mahabaleshwarkar, Shih-Yang Liu, Matthijs Van Keirsbilck, Min-Hung Chen, Yoshi Suhara, Nikolaus Binder, Hanah Zhang, Maksim Khadkevich, Yingyan Celine Lin, Jan Kautz, Pavlo Molchanov, and Nathan Horrocks. | https://developer.nvidia.com/ja-jp/blog/hymba-hybrid-head-architecture-boosts-small-language-model-performance/ | Hymba ãã€ããªãã ããã ã¢ãŒããã¯ãã£ãå°èŠæš¡èšèªã¢ãã«ã®ããã©ãŒãã³ã¹ãåäž | Reading Time:
4
minutes
Transformer ã¯ããã® Attention ããŒã¹ã®ã¢ãŒããã¯ãã£ã«ããã匷åãªããã©ãŒãã³ã¹ã䞊ååèœåãããã³ KV (Key-Value) ãã£ãã·ã¥ãéããé·æèšæ¶ã®ãããã§ãèšèªã¢ãã« (LM) ã®äž»æµãšãªã£ãŠããŸããããããäºæ¬¡èšç®ã³ã¹ããšé«ãã¡ã¢ãªèŠæ±ã«ãããå¹çæ§ã«èª²é¡ãçããŠããŸããããã«å¯ŸããMamba ã Mamba-2 ã®ãããªç¶æ
空éã¢ãã« (SSMs) ã¯ãè€éããäžå®ã«ããŠå¹ççãªããŒããŠã§ã¢æé©åãæäŸããŸãããã¡ã¢ãªæ³èµ·ã¿ã¹ã¯ãèŠæã§ããã¯äžè¬çãªãã³ãããŒã¯ã§ã®ããã©ãŒãã³ã¹ã«åœ±é¿ãäžããŠããŸãã
NVIDIA ã®ç 究è
ã¯æè¿ãå¹çæ§ãšããã©ãŒãã³ã¹ã®äž¡æ¹ãåäžãããããã«ãTransformer ã® Attention ã¡ã«ããºã ã SSM ãšçµ±åãããã€ããªãã ããã䞊åã¢ãŒããã¯ãã£ãç¹åŸŽãšããå°èŠæš¡èšèªã¢ãã« (SLM) ãã¡ããªã§ãã
Hymba
ãææ¡ããŸãããHymba ã§ã¯ãAttention ããããé«è§£å床ã®èšæ¶èœåãæäŸããSSM ããããå¹ççãªã³ã³ããã¹ãã®èŠçŽãå¯èœã«ããŸãã
Hymba ã®æ°ããªã¢ãŒããã¯ãã£ã¯ãããã€ãã®æŽå¯ãæããã«ããŠããŸãã
Attention ã®ãªãŒããŒããã:
Attention èšç®ã® 50% 以äžããããå®äŸ¡ãª SSM èšç®ã«çœ®ãæããããšãã§ããŸãã
ããŒã«ã« Attention ã®åªäœæ§:
SSM ãããã«ããèŠçŽãããã°ããŒãã«æ
å ±ã®ãããã§ãäžè¬çãªã¿ã¹ã¯ãã¡ã¢ãªæ³èµ·ã«éäžããã¿ã¹ã¯ã®ããã©ãŒãã³ã¹ãç ç²ã«ããããšãªããã»ãšãã©ã®ã°ããŒãã« Attention ãããŒã«ã« Attention ã«çœ®ãæããããšãã§ããŸãã
KV ãã£ãã·ã¥åé·æ§:
Key-value ãã£ãã·ã¥ã¯ããããéãšã¬ã€ã€ãŒéã§é«ãçžé¢æ§ãããããããããé (GQA: Group Query Attention) ããã³ã¬ã€ã€ãŒé (Cross-layer KV ãã£ãã·ã¥å
±æ) ã§å
±æã§ããŸãã
Softmax ã® Attention ã®å¶é:
Attention ã¡ã«ããºã ã¯ãåèšã 1 ã«ãªãããã«å¶éãããŠãããçæ§ãšæè»æ§ã«å¶éããããŸããNVIDIA ã¯ãããã³ããã®å
é ã«åŠç¿å¯èœãªã¡ã¿ããŒã¯ã³ãå°å
¥ããéèŠãªæ
å ±ãæ ŒçŽããAttention ã¡ã«ããºã ã«é¢é£ããã匷å¶çã« Attention ãè¡ããè² æ
ã軜æžããŸãã
ãã®èšäºã§ã¯ãHymba 1.5B ãåæ§ã®èŠæš¡ã§ããæå
端ã®ãªãŒãã³ãœãŒã¹ ã¢ãã«ãLlama 3.2 1BãOpenELM 1BãPhi 1.5ãSmolLM2 1.7BãDanube2 1.8BãQwen2.5 1.5B ãªã©ãšæ¯èŒããŠãè¯å¥œãªããã©ãŒãã³ã¹ãçºæ®ããããšã瀺ãããŠããŸããåçã®ãµã€ãºã® Transformer ã¢ãã«ãšæ¯èŒãããšãHymba ã¯ããé«ãã¹ã«ãŒããããçºæ®ãããã£ãã·ã¥ãä¿åããããã«å¿
èŠãªã¡ã¢ãªã 10 åã® 1 ã§æžã¿ãŸãã
Hymba 1.5B ã¯
Hugging Face
ã³ã¬ã¯ã·ã§ã³ãš
GitHub
ã§å
¬éãããŠããŸãã
Hymba 1.5B ã®ããã©ãŒãã³ã¹
å³ 1 ã¯ãHymba 1.5B ãš 2B æªæºã®ã¢ãã« (Llama 3.2 1BãOpenELM 1BãPhi 1.5ãSmolLM2 1.7BãDanube2 1.8BãQwen2.5 1.5B) ããå¹³åã¿ã¹ã¯ç²ŸåºŠãã·ãŒã±ã³ã¹é·ã«å¯Ÿãããã£ãã·ã¥ ãµã€ãº (MB)ãã¹ã«ãŒããã (tok/sec) ã§æ¯èŒãããã®ã§ãã
å³ 1. Hymba 1.5B Base ãš 2B æªæºã®ã¢ãã«ã®ããã©ãŒãã³ã¹æ¯èŒ
ãã®äžé£ã®å®éšã«ã¯ãMMLUãARC-CãARC-EãPIQAãHellaswagãWinograndeãSQuAD-C ãªã©ã®ã¿ã¹ã¯ãå«ãŸããŠããŸããã¹ã«ãŒãããã¯ãã·ãŒã±ã³ã¹é· 8Kãããã ãµã€ãº 128 㧠PyTorch ã䜿çšã㊠NVIDIA A100 GPU ã§æž¬å®ããŸããã¹ã«ãŒããã枬å®äžã«ã¡ã¢ãªäžè¶³ (OOM: Out of Memory) åé¡ãçºçããã¢ãã«ã§ã¯ãOOM ã解決ããããŸã§ããã ãµã€ãºãååã«ããŠãOOM ãªãã§éæå¯èœãªæ倧ã¹ã«ãŒãããã枬å®ããŸããã
Hymba ã¢ãã«ã®ãã¶ã€ã³
Mamba ã®ãã㪠SSM ã¯ãTransformer ã®äºæ¬¡çãªè€éæ§ãšæšè«æã® KV ãã£ãã·ã¥ã倧ããåé¡ã«å¯ŸåŠããããã«å°å
¥ãããŸãããããããã¡ã¢ãªè§£å床ãäœãããã«ãSSM ã¯èšæ¶æ³èµ·ãšããã©ãŒãã³ã¹ã®ç¹ã§èŠæŠããŠããŸãããããã®å¶éãå
æããããã«ãè¡š 1 ã§å¹ççã§é«æ§èœãªå°èŠæš¡èšèªã¢ãã«ãéçºããããã®ããŒãããããææ¡ããŸãã
æ§æ
åžžèæšè« (%) â
ãªã³ãŒã« (%) â
ã¹ã«ãŒããã (token/sec) â
ãã£ãã·ã¥ ãµã€ãº (MB) â
èšèšçç±
300M ã¢ãã« ãµã€ãºãš 100B ãã¬ãŒãã³ã° ããŒã¯ã³ã®ã¢ãã¬ãŒã·ã§ã³
Transformer (Llama)
44.08
39.98
721.1
414.7
éå¹ççãªããæ£ç¢ºãªèšæ¶
ç¶æ
空éã¢ãã« (Mamba)
42.98
19.23
4720.8
1.9
å¹ççã ãäžæ£ç¢ºãªèšæ¶
A. + Attention ããã (é£ç¶)
44.07
45.16
776.3
156.3
èšæ¶èœåã匷å
B. + è€æ°ããã (䞊å)
45.19
49.90
876.7
148.2
2 ã€ã®ã¢ãžã¥ãŒã«ã®ãã©ã³ã¹ã®æ¹å
C. + ããŒã«ã« / ã°ããŒãã« Attention
44.56
48.79
2399.7
41.2
æŒç® / ãã£ãã·ã¥ã®å¹çãåäž
D. + KV ãã£ãã·ã¥å
±æ
45.16
48.04
2756.5
39.4
ãã£ãã·ã¥å¹çå
E. + ã¡ã¿ããŒã¯ã³
45.59
51.79
2695.8
40.0
åŠç¿ããèšæ¶ã®åæå
1.5B ã¢ãã« ãµã€ãºãš 1.5T ãã¬ãŒãã³ã° ããŒã¯ã³ãžã®ã¹ã±ãŒãªã³ã°
F. + ãµã€ãº / ããŒã¿
60.56
64.15
664.1
78.6
ã¿ã¹ã¯ ããã©ãŒãã³ã¹ã®ãããªãåäž
G. + ã³ã³ããã¹ãé·ã®æ¡åŒµ (2Kâ8K)
60.64
68.79
664.1
78.6
ãã«ãã·ã§ãããšãªã³ãŒã« ã¿ã¹ã¯ã®æ¹å
è¡š 1. Hymba ã¢ãã«ã®ãã¶ã€ã³ ããŒãããã
èååãã€ããªãã ã¢ãžã¥ãŒã«
ã¢ãã¬ãŒã·ã§ã³ç 究ã«ãããšããã€ããªãã ããã ã¢ãžã¥ãŒã«å
㧠Attention ãš SSM ãããã䞊åã«ããŠèåããã»ãããã·ãŒã±ã³ã·ã£ã«ã«ã¹ã¿ããã³ã°ããããåªããŠããããšãåãã£ãŠããŸããHymba ã¯ããã€ããªãã ããã ã¢ãžã¥ãŒã«å
㧠Attention ãš SSM ãããã䞊åã«èåãããäž¡ããããåæã«åãæ
å ±ãåŠçã§ããããã«ããŸãããã®ã¢ãŒããã¯ãã£ã¯ãæšè«ãšèšæ¶ã®æ£ç¢ºããé«ããŸãã
å³ 2. Hymba ã®ãã€ããªãã ããã ã¢ãžã¥ãŒã«
å¹çæ§ãš KV ãã£ãã·ã¥ã®æé©å
Attention ãããã¯ã¿ã¹ã¯ã®ããã©ãŒãã³ã¹ãåäžãããŸãããKV ãã£ãã·ã¥ã®èŠæ±ãå¢å€§ãããã¹ã«ãŒããããäœäžãããŸãããããç·©åããããã«ãHymba ã¯ããŒã«ã«ããã³ã°ããŒãã«ã® Attention ãçµã¿åããã Cross-layer KV ãã£ãã·ã¥å
±æãæ¡çšããããšã§ããã€ããªãã ããã ã¢ãžã¥ãŒã«ãæé©åããŸããããã«ãããããã©ãŒãã³ã¹ãç ç²ã«ããããšãªãã¹ã«ãŒãããã 3 ååäžãããã£ãã·ã¥ãã»ãŒ 4 åã® 1 ã«åæžãããŸãã
å³ 3. Hymba ã¢ãã«ã®ã¢ãŒããã¯ãã£
ã¡ã¿ããŒã¯ã³
å
¥åã®å
é ã«çœ®ããã 128 ã®äºååŠç¿æžã¿ã®åã蟌ã¿ã®ã»ããã§ãããåŠç¿æžã¿ãã£ãã·ã¥ã®åæåãšããŠæ©èœããé¢é£æ
å ±ãžã®æ³šæã匷åããŸãããã®ãããªããŒã¯ã³ã«ã¯ 2 ã€ã®ç®çããããŸãã
ããã¯ã¹ããã ããŒã¯ã³ãšããŠæ©èœããAttention ãå¹æçã«ååé
ããããšã§ Attention ã®æµåºã軜æžãã
å§çž®ãããäžçç¥èãã«ãã»ã«åãã
å³ 4. ã¡ã¢ãªã®åŽé¢ããèŠã Hymba ã®è§£é
ã¢ãã«è§£æ
ãã®ã»ã¯ã·ã§ã³ã§ã¯ãåäžã®ãã¬ãŒãã³ã°èšå®ã«ãããç°ãªãã¢ãŒããã¯ãã£ãæ¯èŒããæ¹æ³ã玹ä»ããŸãããããããSSM ãš Attention ã® Attention ããããç°ãªãåŠç¿æžã¿ã¢ãã«ã§å¯èŠåããæåŸã«ãåªå® (pruning) ãéã㊠Hymba ã®ãããéèŠåºŠåæãè¡ããŸãããã®ã»ã¯ã·ã§ã³ã®ãã¹ãŠã®åæã¯ãHymba ã®ãã¶ã€ã³ã«ãããéžæã®ä»çµã¿ãšããããå¹æçãªçç±ã説æããã®ã«åœ¹ç«ã¡ãŸãã
åäžæ¡ä»¶ã§ã®æ¯èŒ
HymbaãçŽç²ãª Mamba2ãMamba2 ãš FFNãLlama3 ã¹ã¿ã€ã«ãSamba ã¹ã¿ã€ã« (Mamba-FFN-Attn-FFN) ã®ã¢ãŒããã¯ãã£ãåäžæ¡ä»¶ã§æ¯èŒããŸããããã¹ãŠã®ã¢ãã«ã 10 åã®ãã©ã¡ãŒã¿ãŒã§ããŸã£ããåããã¬ãŒãã³ã° ã¬ã·ã㧠SmolLM-Corpus ãã 1,000 åããŒã¯ã³ããŒãããåŠç¿ããŠããŸãããã¹ãŠã®çµæã¯ãHugging Face ã¢ãã«ã§ãŒãã·ã§ããèšå®ã䜿çšã㊠lm-evaluation-harness ãéããŠååŸãããŠããŸããHymba ã¯ãåžžèæšè«ã ãã§ãªãã質åå¿çã¿ã¹ã¯ãèšæ¶æ³èµ·ã¿ã¹ã¯ã§ãæé«ã®ããã©ãŒãã³ã¹ãçºæ®ããŸãã
è¡š 2 ã¯ãèšèªã¢ããªã³ã°ã¿ã¹ã¯ãšèšæ¶æ³èµ·ã¿ã¹ã¯ããã³åžžèæšè«ã¿ã¹ã¯ã«é¢ããããŸããŸãªã¢ãã« ã¢ãŒããã¯ãã£ãæ¯èŒããŠãããHymba ã¯ãã¹ãŠã®è©äŸ¡åºæºã§åè¶ããããã©ãŒãã³ã¹ãéæããŠããŸããHymba ã¯ãèšèªã¢ããªã³ã°ã¿ã¹ã¯ã§æãäœã Perplexity ã瀺ã (Wiki 㧠18.62ãLMB 㧠10.38)ãç¹ã« SWDE (54.29) ãš SQuAD-C (44.71) ã®èšæ¶æ³èµ·ã¿ã¹ã¯ã«ãããŠå
å®ãªçµæã瀺ãããã®ã«ããŽãªã§æé«ã®å¹³åã¹ã³ã¢ (49.50) ãéæããŸããã
ã¢ãã«
èšèªã¢ããªã³ã° (PPL) â
èšæ¶æ³èµ·å (%) â
åžžèæšè« (%) â
Mamba2
15.88
43.34
52.52
Mamba2 ãš FFN
17.43
28.92
51.14
Llama3
16.19
47.33
52.82
Samba
16.28
36.17
52.83
Hymba
14.5
49.5
54.57
è¡š 2. åãèšå®ã§ 1,000 åããŒã¯ã³ã§åŠç¿ãããã¢ãŒããã¯ãã£ã®æ¯èŒ
åžžèæšè«ãšè³ªåå¿çã«ãããŠãHymba ã¯å¹³åã¹ã³ã¢ 54.57 ã§ã SIQA (31.76) ã TruthfulQA (31.64) ãªã©ã®ã»ãšãã©ã®ã¿ã¹ã¯ã§ãLlama3 ã Mamba2 ãããäžåã£ãŠããŸããå
šäœçã«ãHymba ã¯ãã©ã³ã¹ã®åããã¢ãã«ãšããŠéç«ã£ãŠãããå€æ§ãªã«ããŽãªã§å¹çæ§ãšã¿ã¹ã¯ ããã©ãŒãã³ã¹ã®äž¡æ¹ã§åªããŠããŸãã
Attention ãããã®å¯èŠå
ããã«ãAttention ãããã®èŠçŽ ã 4 ã€ã®ã¿ã€ãã«åé¡ããŸããã
Meta:
ãã¹ãŠã®å®ããŒã¯ã³ããã¡ã¿ããŒã¯ã³ãžã® Attention ã¹ã³ã¢ããã®ã«ããŽãªã¯ãã¢ãã«ãã¡ã¿ããŒã¯ã³ã« Attention ãåããåŸåãåæ ãããã®ã§ããAttention ãããã§ã¯ãéåžžãã¢ãã«ã«ã¡ã¿ããŒã¯ã³ãããå Žåãæåã®æ°å (äŸãã° Hymba ã®å Žå㯠128) ã«äœçœ®ããŠããŸãã
BOS:
ãã¹ãŠã®å®ããŒã¯ã³ããã»ã³ãã³ã¹ã®éå§ããŒã¯ã³ãŸã§ã® Attention ã¹ã³ã¢ãAttention ãããã§ã¯ãéåžžãã¡ã¿ããŒã¯ã³ã®çŽåŸã®æåã®åã«äœçœ®ããŸãã
Self:
ãã¹ãŠã®å®ããŒã¯ã³ããããèªèº«ãžã® Attention ã¹ã³ã¢ãAttention ãããã§ã¯ãéåžžã察è§ç·äžã«äœçœ®ããŠããŸãã
Cross:
ãã¹ãŠã®å®ããŒã¯ã³ããä»ã®å®ããŒã¯ã³ãžã® Attention ã¹ã³ã¢ãAttention ãããã§ã¯ãéåžžã察è§ç·å€ã®é åã«äœçœ®ããŠããŸãã
Hymba ã® Attention ãã¿ãŒã³ã¯ãvanilla (å å·¥ãããŠããªã) Transformer ã®ãããšã¯å€§ããç°ãªããŸããvanilla Transformer ã® Attention ã¹ã³ã¢ã¯ BOS ã«éäžããŠãããAttention Sink ã®çµæãšäžèŽããŠããŸããããã«ãvanilla Transformer ã¯ãSelf-Attention ã¹ã³ã¢ã®æ¯çãé«ããªã£ãŠããŸããHymba ã§ã¯ãã¡ã¿ããŒã¯ã³ãAttention ããããSSM ããããäºãã«è£å®ãåãããã«æ©èœããç°ãªãã¿ã€ãã®ããŒã¯ã³éã§ããããã©ã³ã¹ã®åãã Attention ã¹ã³ã¢ã®ååžãå®çŸããŠããŸãã
å
·äœçã«ã¯ãã¡ã¿ããŒã¯ã³ã BOS ããã® Attention ã¹ã³ã¢ããªãããŒãããããšã§ãã¢ãã«ãããå®éã®ããŒã¯ã³ã«éäžã§ããããã«ãªããŸããSSM ãããã¯ã°ããŒãã«ãªã³ã³ããã¹ããèŠçŽããçŸåšã®ããŒã¯ã³ (Self-Attention ã¹ã³ã¢) ã«ããéç¹ã眮ããŸããäžæ¹ãAttention ãããã¯ãSelf ãš BOS ããŒã¯ã³ã«å¯Ÿãã泚æãäœããä»ã®ããŒã¯ã³ (ããªãã¡ãCross Attention ã¹ã³ã¢) ãžã®æ³šæãé«ããªããŸããããã¯ãHymba ã®ãã€ããªãã ããã ãã¶ã€ã³ããç°ãªãã¿ã€ãã®ããŒã¯ã³éã® Attention ååžã®ãã©ã³ã¹ãå¹æçã«åãããšãã§ããããã©ãŒãã³ã¹ã®åäžã«ã€ãªããå¯èœæ§ãããããšã瀺åããŠããŸãã
å³ 5. ã¡ã¿ããŒã¯ã³ãSliding Window AttentionãMamba è²¢ç®ã®çµã¿åããã«ãã Hymba ã® Attention ãããã®æŠç¥å³
å³ 6. Llama 3.2 3B ãš Hymba 1.5B ã®ç°ãªãã«ããŽãªããã® Attention ã¹ã³ã¢ã®åèš
ãããéèŠåºŠåæ
åã¬ã€ã€ãŒã®Attention ãš SSM ãããã®çžå¯ŸçãªéèŠæ§ãåæããããã«ããããããåé€ããŠæçµçãªç²ŸåºŠãèšé²ããŸãããåæã®çµæã以äžã®ããšãæããã«ãªããŸããã
åãã¬ã€ã€ãŒã®Â Attention / SSM ãããã®çžå¯ŸçãªéèŠæ§ã¯å
¥åé©å¿ã§ãããã¿ã¹ã¯ã«ãã£ãŠç°ãªããŸããããã¯ãããŸããŸãªå
¥åã®åŠçã«ãããŠãç°ãªã圹å²ãæããå¯èœæ§ãããããšã瀺åããŠããŸãã
æåã®ã¬ã€ã€ãŒã® SSM ãããã¯èšèªã¢ããªã³ã°ã¿ã¹ã¯ã«äžå¯æ¬ ã§ããããåé€ãããšãã©ã³ãã æšæž¬ã¬ãã«ã«ãŸã§å€§å¹
ã«ç²ŸåºŠãäœäžããŸãã
äžè¬çã«ãAttention / SSM ãããã 1 ã€åé€ãããšãHellaswag ã§ã¯ããããå¹³å 0.24%/1.1% 粟床ãäœäžããŸãã
å³ 7. Hellaswag ã® 1K ãµã³ãã«ã䜿çšããŠæž¬å®ãããåã¬ã€ã€ãŒã® Attention ãŸã㯠SSM ããããåé€ããåŸã®éæ粟床
ã¢ãã« ã¢ãŒããã¯ãã£ãšåŠç¿ã®ãã¹ã ãã©ã¯ãã£ã¹
ãã®ã»ã¯ã·ã§ã³ã§ã¯ãHymba 1.5B Base ãš Hymba 1.5B Instruct ã®äž»èŠã¢ãŒããã¯ãã£äžã®æ±ºå®äºé
ãšåŠç¿æ¹æ³ã®æŠèŠã«ã€ããŠèª¬æããŸãã
ã¢ãã« ã¢ãŒããã¯ãã£
ãã€ããªãã ã¢ãŒããã¯ãã£:
Mamba ã¯èŠçŽã«åªããéåžžã¯çŸåšã®ããŒã¯ã³ã«ããéç¹ã眮ããŸããAttention ã¯ããæ£ç¢ºã§ã¹ãããã·ã§ãã ã¡ã¢ãªãšããŠæ©èœããŸããæšæºçãªã·ãŒã±ã³ã·ã£ã«èåã§ã¯ãªãã䞊åã«çµã¿åãããããšã§å©ç¹ãçµ±åããããšãã§ããŸããSSM ãš Attention ãããéã®ãã©ã¡ãŒã¿ãŒæ¯ã¯ 5:1 ãéžæããŸããã
Sliding Window Attention:
å®å
šãª Attention ããã㯠3 ã€ã®ã¬ã€ã€ãŒ (æåãæåŸãäžé) ã«ç¶æãããæ®ãã® 90% ã®ã¬ã€ã€ãŒã§ Sliding Window Attention ãããã䜿çšãããŸãã
Cross-layer KV ãã£ãã·ã¥å
±æ:
é£ç¶ãã 2 ã€ã® Attention ã¬ã€ã€ãŒéã«å®è£
ãããŸããããã¯ããããéã® GQA KV ãã£ãã·ã¥å
±æã«å ããŠè¡ãããŸãã
ã¡ã¿ããŒã¯ã³:
ãããã® 128 ããŒã¯ã³ã¯æåž«ãªãåŠç¿ãå¯èœã§ããã倧èŠæš¡èšèªã¢ãã« (LLM) ã«ããããšã³ããããŒåŽ©å£ã®åé¡ãåé¿ããAttention Sink çŸè±¡ãç·©åããã®ã«åœ¹ç«ã¡ãŸããããã«ãã¢ãã«ã¯ãããã®ããŒã¯ã³ã«äžè¬çãªç¥èãæ ŒçŽããŸãã
åŠç¿ã®ãã¹ã ãã©ã¯ãã£ã¹
äºååŠç¿:
2 段éã®ããŒã¹ã¢ãã«åŠç¿ãéžæããŸãããã¹ããŒãž 1 ã§ã¯ãäžå®ã®é«ãåŠç¿çãç¶æãããã£ã«ã¿ãªã³ã°ãããŠããªã倧èŠæš¡ãªã³ãŒãã¹ ããŒã¿ã®äœ¿çšããŸãããç¶ããŠãé«å質ã®ããŒã¿ãçšã㊠1e-5 ãŸã§ç¶ç¶çã«åŠç¿çãæžè¡°ãããŸããããã®ã¢ãããŒãã«ãããã¹ããŒãž 1 ã®ç¶ç¶çãªåŠç¿ãšåéãå¯èœã«ãªããŸãã
æ瀺ãã¡ã€ã³ãã¥ãŒãã³ã°:
æ瀺ã¢ãã«ã®èª¿æŽã¯ 3 ã€ã®æ®µéã§è¡ãããŸãããŸããSFT-1 ã¯ãã³ãŒããæ°åŠãé¢æ°åŒã³åºããããŒã« ãã¬ã€ããã®ä»ã®ã¿ã¹ã¯åºæã®ããŒã¿ã§åŠç¿ãå®æœãã匷åãªæšè«èœåãã¢ãã«ã«ä»äžããŸãã次ã«ãSFT-2 ã¯ã¢ãã«ã«äººéã®æ瀺ã«åŸãããšãæããŸããæåŸã«ãDPO ã掻çšããŠãã¢ãã«ã人éã®å¥œã¿ã«åãããã¢ãã«ã®å®å
šæ§ãé«ããŸãã
å³ 8. Hymba ã¢ãã« ãã¡ããªã«é©å¿ããåŠç¿ãã€ãã©ã€ã³
ããã©ãŒãã³ã¹ãšå¹çæ§ã®è©äŸ¡
1.5T ã®äºååŠç¿ããŒã¯ã³ã ãã§ãHymba 1.5B ã¢ãã«ã¯ãã¹ãŠã®å°èŠæš¡èšèªã¢ãã«ã®äžã§æé«ã®æ§èœãçºæ®ããTransformer ããŒã¹ã® LM ãããåªããã¹ã«ãŒããããšãã£ãã·ã¥å¹çãå®çŸããŸãã
äŸãã°ã13 å以äžã®ããŒã¯ã³æ°ã§äºååŠç¿ãããæã匷åãªããŒã¹ã©ã€ã³ã§ãã Qwen2.5 ã«å¯ŸããŠãã³ãããŒã¯ããå ŽåãHymba 1.5B ã¯å¹³å粟床ã 1.55%ãã¹ã«ãŒãããã 1.41 åããã£ãã·ã¥å¹çã 2.90 åã«åäžããŸãã2T æªæºã®ããŒã¯ã³ã§åŠç¿ãããæã匷åãªå°èŠæš¡èšèªã¢ãã«ãããªãã¡ h2o-danube2 ãšæ¯èŒãããšããã®æ¹æ³ã¯å¹³å粟床ã 5.41%ãã¹ã«ãŒãããã 2.45 åããã£ãã·ã¥å¹çã 6.23 åã«åäžããŠããŸãã
ã¢ãã«
ãã©ã¡ãŒã¿ãŒæ°
åŠç¿ããŒã¯ã³
ããŒã¯ã³
(1 ç§ããã)
ãã£ãã·ã¥
(MB)
MMLU 5-
shot
ARC-E 0-shot
ARC-C 0-shot
PIQA 0-shot
Wino. 0-shot
Hella. 0-shot
SQuAD -C
1-shot
å¹³å
OpenELM-1
1.1B
1.5T
246
346
27.06
62.37
19.54
74.76
61.8
48.37
45.38
48.57
Renev0.1
1.3B
1.5T
800
113
32.94
67.05
31.06
76.49
62.75
51.16
48.36
52.83
Phi1.5
1.3B
0.15T
241
1573
42.56
76.18
44.71
76.56
72.85
48
30.09
55.85
SmolLM
1.7B
1T
238
1573
27.06
76.47
43.43
75.79
60.93
49.58
45.81
54.15
Cosmo
1.8B
.2T
244
1573
26.1
62.42
32.94
71.76
55.8
42.9
38.51
47.2
h20dan-ube2
1.8B
2T
271
492
40.05
70.66
33.19
76.01
66.93
53.7
49.03
55.65
Llama 3.2 1B
1.2B
9T
535
262
32.12
65.53
31.39
74.43
60.69
47.72
40.18
50.29
Qwen2.5
1.5B
18T
469
229
60.92
75.51
41.21
75.79
63.38
50.2
49.53
59.51
AMDOLMo
1.2B
1.3T
387
1049
26.93
65.91
31.57
74.92
61.64
47.3
33.71
48.85
SmolLM2
1.7B
11T
238
1573
50.29
77.78
44.71
77.09
66.38
53.55
50.5
60.04
Llama3.2 3B
3.0B
9T
191
918
56.03
74.54
42.32
76.66
69.85
55.29
43.46
59.74
Hymba
1.5B
1.5T
664
79
51.19
76.94
45.9
77.31
66.61
53.55
55.93
61.06
è¡š 2. Hymba 1.5B ããŒã¹ ã¢ãã«ã®çµæ
æ瀺ã¢ãã«
Hymba 1.5B Instruct ã¢ãã«ã¯ãå
šã¿ã¹ã¯å¹³åã§æé«ã®ããã©ãŒãã³ã¹ãéæããçŽè¿ã®æé«æ§èœã¢ãã«ã§ãã Qwen 2.5 Instruct ãçŽ 2% äžåããŸãããç¹ã«ãHymba 1.5B 㯠GSM8K/GPQA/BFCLv2 ã§ããããã 58.76/31.03/46.40 ã®ã¹ã³ã¢ã§ä»ã®ãã¹ãŠã®ã¢ãã«ãäžåã£ãŠããŸãããããã®çµæã¯ãç¹ã«è€éãªæšè«èœåãå¿
èŠãšããåéã«ãããŠãHymba 1.5B ã®åªäœæ§ã瀺ããŠããŸãã
ã¢ãã«
ãã©ã¡ãŒã¿ãŒæ°
MMLU â
IFEval â
GSM8K â
GPQA â
BFCLv2 â
å¹³åâ
SmolLM
1.7B
27.80
25.16
1.36
25.67
-*
20.00
OpenELM
1.1B
25.65
6.25
56.03
21.62
-*
27.39
Llama 3.2
1.2B
44.41
58.92
42.99
24.11
20.27
38.14
Qwen2.5
1.5B
59.73
46.78
56.03
30.13
43.85
47.30
SmolLM2
1.7B
49.11
55.06
47.68
29.24
22.83
40.78
Hymba 1.5B
1.5B
52.79
57.14
58.76
31.03
46.40
49.22
è¡š 3. Hymba 1.5B Instruct ã¢ãã«ã®çµæ
ãŸãšã
æ°ãã Hymba ãã¡ããªã®å°èŠæš¡èšèªã¢ãã«ã¯ããã€ããªãã ããã ã¢ãŒããã¯ãã£ãæ¡çšããAttention ãããã®é«è§£åãªèšæ¶èœåãš SSM ãããã®å¹ççãªã³ã³ããã¹ãã®èŠçŽãçµã¿åãããŠããŸããHymba ã®ããã©ãŒãã³ã¹ãããã«æé©åããããã«ãåŠç¿å¯èœãªã¡ã¿ããŒã¯ã³ãå°å
¥ãããAttention ããããš SSM ãããã®äž¡æ¹ã§åŠç¿æžã¿ãã£ãã·ã¥ãšããŠæ©èœããé¡èãªæ
å ±ã«æ³šç®ããã¢ãã«ã®ç²ŸåºŠã匷åããŸãããHymba ã®ããŒãããããå
æ¬çãªè©äŸ¡ãã¢ãã¬ãŒã·ã§ã³ç 究ãéããŠãHymba ã¯å¹
åºãã¿ã¹ã¯ã«ããã£ãŠæ°ããªæå
端ã®ããã©ãŒãã³ã¹ã確ç«ããæ£ç¢ºããšå¹çæ§ã®äž¡é¢ã§åªããçµæãéæããŸãããããã«ããã®ç 究ã¯ããã€ããªãã ããã ã¢ãŒããã¯ãã£ã®å©ç¹ã«é¢ãã貎éãªæŽå¯ããããããå¹ççãªèšèªã¢ãã«ã®ä»åŸã®ç 究ã«ææãªæ¹åæ§ã瀺ããŠããŸãã
Hybma 1.5B Base
ãš
Hymba 1.5B Instruct
ã®è©³çŽ°ã¯ãã¡ããã芧ãã ããã
è¬èŸ
ãã®ææã¯ãWonmin ByeonãZijia ChenãAmeya Sunil MahabaleshwarkarãShih-Yang LiuãMatthijs Van KeirsbilckãMin-Hung ChenãYoshi SuharaãNikolaus BinderãHanah ZhangãMaksim KhadkevichãYingyan Celine LinãJan KautzãPavlo MolchanovãNathan Horrocks ãªã©ãNVIDIA ã®å€ãã®ã¡ã³ããŒã®è²¢ç®ãªãããŠã¯å®çŸããŸããã§ããã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
Optimizing Large Language Models: An Experimental Approach to Pruning and Fine-Tuning LLama2 7B (倧èŠæš¡èšèªã¢ãã«ã®æé©å: LLama2 7B ã®åªå®ãšãã¡ã€ã³ãã¥ãŒãã³ã°ã®å®éšçã¢ãããŒã)
GTC ã»ãã·ã§ã³:
Accelerating End-to-End Large Language Models System using a Unified Inference Architecture and FP8 (çµ±äžæšè«ã¢ãŒããã¯ãã£ãš FP8 ãçšãããšã³ãããŒãšã³ãã®å€§èŠæš¡èšèªã¢ãã« ã·ã¹ãã ã®é«éå)
NGC ã³ã³ãããŒ:
Llama-3.1-Nemotron-70B-Ins
truct
NGC ã³ã³ãããŒ:
Llama-3-Swallow-70B-Instruct-v0.1
SDK:
NeMo Megatron |
https://developer.nvidia.com/blog/deploying-fine-tuned-ai-models-with-nvidia-nim/ | Deploying Fine-Tuned AI Models with NVIDIA NIM | For organizations adapting AI foundation models with domain-specific data, the ability to rapidly create and deploy fine-tuned models is key to efficiently delivering value with enterprise generative AI applications.
NVIDIA NIM
offers prebuilt, performance-optimized inference microservices for the latest AI foundation models, including
seamless deployment
of models customized using parameter-efficient fine-tuning (PEFT).
In some cases, it's ideal to use methods like continual pretraining, DPO, supervised fine-tuning (SFT), or model merging, where the underlying model weights are adjusted directly in the training or customization process, unlike PEFT with low-rank adaptation (LoRA). In these cases, the inference software configuration for the model must be updated for optimal performance given the new weights.
Rather than burden you with this often lengthy process, NIM can automatically build a
TensorRT-LLM
inference engine performance optimized for the adjusted model and GPUs in your local environment, and then load it for running inference as part of a single-step model deployment process.
In this post, we explore how to rapidly deploy NIM microservices for models that have been customized through SFT by using locally built, performance-optimized TensorRT-LLM inference engines. We include all the necessary commands as well as some helpful options, so you can try it out on your own today.
Prerequisites
To run this tutorial, you need an NVIDIA-accelerated compute environment with access to 80 GB of GPU memory and which has
git-lfs
installed.
Before you can pull and deploy a NIM microservice in an NVIDIA-accelerated compute environment, you also need an NGC API key.
Navigate to the
Meta Llama 3 8B Instruct
model listing in the NVIDIA API Catalog.
Choose
Login
at the top right and follow the instructions.
When youâre logged in, choose
Build with this NIM
on the
model page
.
Choose
Self-Hosted API
and follow either option to access NIM microservices:
NVIDIA Developer Program membership with free access to NIM for research, development, and testing only.
The 90-day NVIDIA AI Enterprise license, which includes access to NVIDIA Enterprise Support.
After you provide the necessary details for your selected access method, copy your NGC API key and be ready to move forward with NIM. For more information, see
Launch NVIDIA NIM for LLMs
.
Getting started with NIM microservices
Provide your NGC CLI API key as an environment variable in your compute environment:
export NGC_API_KEY=<<YOUR API KEY HERE>>
You must also create a directory to be used as a cache during the optimization process, point NIM to it, and set its permissions:
export NIM_CACHE_PATH=/tmp/nim/.cache
mkdir -p $NIM_CACHE_PATH
chmod -R 777 $NIM_CACHE_PATH
To demonstrate locally built, optimized TensorRT-LLM inference engines for deploying fine-tuned models with NIM, you need a model that has undergone customization through SFT. For this tutorial, use the
NVIDIA OpenMath2-Llama3.1-8B
model, which is a customization of
Metaâs Llama-3.1-8B
using the
OpenMathInstruct-2
dataset.
The base model must be available as a downloadable NIM for LLMs. For more information about downloadable NIM microservices, see the
NIM Type: Run Anywhere filter
in the NVIDIA API Catalog.
All you need is the weights to this model, which can be obtained in several ways. For this post, clone the model repository using the following commands:
git lfs install
git clone https://huggingface.co/nvidia/OpenMath2-Llama3.1-8B
export MODEL_WEIGHT_PARENT_DIRECTORY=$PWD
Now that you have the model weights collected, move on to the next step: firing up the microservice.
Selecting from available performance profiles
Based on your selected model and hardware configuration, the most applicable inference performance profile available is automatically selected. There are two available performance profiles for local inference engine generation:
Latency:
Focused on delivering a NIM microservice that is optimized for latency.
Throughput:
Focused on delivering a NIM microservice that is optimized for batched throughput.
For more information about supported features, including available precision, see the
Support Matrix
topic in the NVIDIA NIM documentation.
Example using an SFT model
Create a locally built TensorRT-LLM inference engine for OpenMath2-Llama3.1-8B by running the following commands:
docker run -it --rm --gpus all \
--user $(id -u):$(id -g)\
--network=host \
--shm-size=32GB \
-e NGC_API_KEY \
-e NIM_FT_MODEL=/opt/weights/hf/OpenMath2-Llama3.1-8B \
-e NIM_SERVED_MODEL_NAME=OpenMath2-Llama3.1-8B \
-v $NIM_CACHE_PATH:/opt/nim/.cache \
-v $MODEL_WEIGHT_PARENT_DIRECTORY:/opt/weights/hf \
nvcr.io/nim/meta/llama-3.1-8b-instruct:1.3.0
The command is nearly identical to the typical command you'd use to deploy a NIM microservice. In this case, you've added the extra
NIM_FT_MODEL
parameter, which points to the OpenMath2-Llama3.1-8B model.
With that, NIM builds an optimized inference engine locally. To perform inference using this new NIM microservice, run the following Python code example:
from openai import OpenAI
client = OpenAI(
base_url = "http://localhost:8000/v1",
api_key = "none"
)
completion = client.chat.completions.create(
model="OpenMath2-Llama3.1-8B",
messages=[{"role":"user","content":"What is your name?"}],
temperature=0.2,
top_p=0.7,
max_tokens=100,
stream=True
)
for chunk in completion:
if chunk.choices[0].delta.content is not None:
print(chunk.choices[0].delta.content, end="")
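Before sending requests, it can also help to confirm that the service is ready and that the fine-tuned model is exposed under the expected name. A minimal sketch using the same OpenAI-compatible client (the endpoint and model name simply mirror the example above):
from openai import OpenAI

# Assumes the NIM microservice started above is listening on localhost:8000.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

# The OpenAI-compatible /v1/models endpoint lists what the service is serving;
# the fine-tuned model should appear under the name set via NIM_SERVED_MODEL_NAME.
served = [m.id for m in client.models.list().data]
print(served)
assert "OpenMath2-Llama3.1-8B" in served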
Video 1. How to Deploy Fine-Tuned AI Models
Building an optimized TensorRT-LLM engine with a custom performance profile
On
supported GPUs
, you can use a similar command to spin up your NIM microservice. Follow the
Model Profile
instructions to launch your microservice and determine which profiles are accessible for it.
export IMG_NAME="nvcr.io/nim/meta/llama-3.1-8b-instruct:1.3.0"
docker run --rm --gpus=all -e NGC_API_KEY=$NGC_API_KEY $IMG_NAME list-model-profiles
Assuming you're in an environment with two (or more) H100 GPUs, you should see the following profiles available:
tensorrt_llm-h100-bf16-tp2-pp1-throughput
tensorrt_llm-h100-bf16-tp2-pp1-latency
Re-run the command and provide an additional environment variable to specify the desired profile:
docker run --rm --gpus=all \
-e NGC_API_KEY \
-e NIM_FT_MODEL=/opt/weights/hf/OpenMath2-Llama3.1-8B \
-e NIM_SERVED_MODEL_NAME=OpenMath2-Llama3.1-8B \
-e NIM_MODEL_PROFILE=tensorrt_llm-h100-bf16-tp2-pp1-latency \
-v $NIM_CACHE_PATH:/opt/nim/.cache \
-v $MODEL_WEIGHT_PARENT_DIRECTORY:/opt/weights/hf \
$IMG_NAME
Now that you've relaunched your NIM microservice with the desired profile, use Python to interact with the model:
from openai import OpenAI
client = OpenAI(
base_url = "http://localhost:8000/v1",
api_key = "none"
)
completion = client.chat.completions.create(
model="OpenMath2-Llama3.1-8B",
messages=[{"role":"user","content":"What is your name?"}],
temperature=0.2,
top_p=0.7,
max_tokens=100,
stream=True
)
for chunk in completion:
if chunk.choices[0].delta.content is not None:
print(chunk.choices[0].delta.content, end="")
Conclusion
Whether you're using
PEFT
or SFT methods for model customization, NIM accelerates customized model deployment for high-performance inferencing in a few simple steps. With optimized TensorRT-LLM inference engines built automatically in your local environment, NIM is unlocking new possibilities for rapidly deploying accelerated AI inferencing anywhere.
Learn more and get started today by visiting the NVIDIA
API catalog
and checking out the
documentation
. To engage with NVIDIA and the NIM microservices community, see the NVIDIA
NIM developer forum
. | https://developer.nvidia.com/ja-jp/blog/deploying-fine-tuned-ai-models-with-nvidia-nim/ | NVIDIA NIM ã§ãã¡ã€ã³ãã¥ãŒãã³ã°ããã AI ã¢ãã«ã®ããã〠| Reading Time:
2
minutes
ãã¡ã€ã³åºæã®ããŒã¿ã§ AI åºç€ã¢ãã«ãé©å¿ãããŠããäŒæ¥ã«ãšã£ãŠããã¡ã€ã³ãã¥ãŒãã³ã°ãããã¢ãã«ãè¿
éã«äœæãããããã€ããèœåã¯ãäŒæ¥ã®çæ AI ã¢ããªã±ãŒã·ã§ã³ã§å¹ççã«äŸ¡å€ãæäŸããããã®éµãšãªããŸãã
NVIDIA NIM
ã¯ãParapeter-efficient Fine-tuning (PEFT) ãçšããŠã«ã¹ã¿ãã€ãºããã¢ãã«ã®
ã·ãŒã ã¬ã¹ãªãããã€
ãªã©ãææ°ã® AI åºç€ã¢ãã«åãã«ãã«ããããããã©ãŒãã³ã¹ãæé©åããæšè«ãã€ã¯ããµãŒãã¹ãæäŸããŸãã
å Žåã«ãã£ãŠã¯ãLow-rank Adaptation (LoRA) ã䜿çšãã PEFT ãšã¯ç°ãªããç¶ç¶äºååŠç¿ãDPOãæåž«ãããã¡ã€ã³ãã¥ãŒãã³ã° (SFT: Supervised Fine-tuning)ãã¢ãã« ããŒãžãªã©ã®ææ³ãå©çšããåºç€ãšãªãã¢ãã«ã®éã¿ããã¬ãŒãã³ã°ãã«ã¹ã¿ãã€ãºã®éçšã§çŽæ¥èª¿æŽããã®ãçæ³çã§ãããã®ãããªå Žåãæ°ããéã¿ãèæ
®ããæé©ãªããã©ãŒãã³ã¹ãå®çŸããã«ã¯ãã¢ãã«ã®æšè«ãœãããŠã§ã¢æ§æãæŽæ°ããå¿
èŠããããŸãã
ãã®é·æéãèŠããããã»ã¹ã«è² æ
ãå²ãã®ã§ã¯ãªããNIM ã¯ã調æŽãããã¢ãã«ãš GPU ã«åãããŠæé©åãã
TensorRT-LLM
æšè«ãšã³ãžã³ãããŒã«ã«ç°å¢ã§ãã«ãããããŒããããããåäžã¹ãããã®ã¢ãã« ããã〠ããã»ã¹ã®äžç°ãšããŠæšè«ãå®è¡ã§ããŸãã
ãã®æçš¿ã§ã¯ãããã©ãŒãã³ã¹ãæé©åãã TensorRT-LLM æšè«ãšã³ãžã³ãããŒã«ã«ã§ãã«ãããŠãSFT ã§ã«ã¹ã¿ãã€ãºãããã¢ãã«ã«å¯Ÿãã NIM ãã€ã¯ããµãŒãã¹ãè¿
éã«ãããã€ããæ¹æ³ã説æããŸããå¿
èŠãªã³ãã³ããšäŸ¿å©ãªãªãã·ã§ã³ãã玹ä»ããŸãã®ã§ãæ¯éä»ãããè©Šããã ããã
åææ¡ä»¶
ãã®ãã¥ãŒããªã¢ã«ãå®è¡ããã«ã¯ã80 GB ã® GPU ã¡ã¢ãªãæ〠NVIDIA ã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã°ç°å¢ãš
git-lfs
ã®ã€ã³ã¹ããŒã«ãå¿
èŠã§ãã
NVIDIA ã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã°ç°å¢ã§ãNIM ãã€ã¯ããµãŒãã¹ã pull ããŠãããã€ããã«ã¯ãNGC API ããŒãå¿
èŠã§ãã
NVIDIA API ã«ã¿ãã°ã®ã¢ãã«äžèŠ§ãã
Meta Llama 3 8B Instruct
ã«ç§»åããŸãã
å³äžã®
[Login]
ãéžæããæ瀺ã«åŸã£ãŠãã ããã
ãã°ã€ã³ãããã
ã¢ãã« ããŒãž
ã§
[Build with this NIM]
ãéžæããŸãã
[Self-Hosted API]
ãéžæããããããã®ãªãã·ã§ã³ã«åŸã£ãŠãNIM ãã€ã¯ããµãŒãã¹ãžã¢ã¯ã»ã¹ããŸãã
NVIDIA éçºè
ããã°ã©ã ã®ã¡ã³ããŒã§ããã°ãç 究ãéçºããã¹ãã«éã NIM ã«ç¡æã§ã¢ã¯ã»ã¹ããããšãã§ããŸãã
90 æ¥éã® NVIDIA AI Enterprise ã©ã€ã»ã³ã¹ã«ã¯ãNVIDIA Enterprise ãµããŒããžã®ã¢ã¯ã»ã¹ãå«ãŸããŠããŸãã
éžæããã¢ã¯ã»ã¹æ¹æ³ã«å¿
èŠãªè©³çŽ°æ
å ±ãæäŸããããNGC API ããŒãã³ããŒããŠãNIM ãé²ããæºåãããŸãã詳现ã«ã€ããŠã¯ã
Launch NVIDIA NIM for LLMs
ãåç
§ããŠãã ããã
NIM ãã€ã¯ããµãŒãã¹ãã¯ããã
å©çšäžã®ã³ã³ãã¥ãŒãã£ã³ã°ç°å¢ã®ç°å¢å€æ°ãšããŠãNGC API ããŒãæäŸããŸãã
export NGC_API_KEY=<<YOUR API KEY HERE>>
ãŸããæé©ååŠçäžã«ãã£ãã·ã¥ãšããŠäœ¿çšãããã£ã¬ã¯ããªãäœæããŠãããŒããã·ã§ã³ãå€æŽããŠãæå®ããå¿
èŠããããŸãã
export NIM_CACHE_PATH=/tmp/nim/.cache
mkdir -p $NIM_CACHE_PATH
chmod -R 777 $NIM_CACHE_PATH
NIM ã§ãã¡ã€ã³ãã¥ãŒãã³ã°ãããã¢ãã«ããããã€ããããã«ãæé©ãª TensorRT-LLM æšè«ãšã³ãžã³ãããŒã«ã«ã§ãã«ãããå®èšŒã«ã¯ãSFT ã«ãã£ãŠã«ã¹ã¿ãã€ãºããã¢ãã«ãå¿
èŠã§ãããã®ãã¥ãŒããªã¢ã«ã§ã¯ã
OpenMathInstruct-2
ããŒã¿ã»ããã䜿çšããŠã
Meta ã® Llama-3.1-8B
ãã«ã¹ã¿ãã€ãºãã
NVIDIA OpenMath2-Llama3.1-8B
ã¢ãã«ã䜿çšããŸãã
ããŒã¹ ã¢ãã«ã¯ãããŠã³ããŒãå¯èœãª NIM for LLMs ãšããŠå©çšå¯èœã§ãªããã°ãªããŸãããããŠã³ããŒãå¯èœãª NIM ãã€ã¯ããµãŒãã¹ã®è©³çŽ°ã«ã€ããŠã¯ãNVIDIA API ã«ã¿ãã°ã®ã
NIM Type: Run Anywhere filter
ããåç
§ããŠãã ããã
å¿
èŠãªã®ã¯ãã®ã¢ãã«ã®éã¿ã ãã§ãããã¯ããŸããŸãªæ¹æ³ããããŸãããã®æçš¿ã§ã¯ã以äžã®ã³ãã³ãã䜿çšããŠã¢ãã« ãªããžããªãã¯ããŒã³ããŸãã
git lfs install
git clone https://huggingface.co/nvidia/OpenMath2-Llama3.1-8B
export MODEL_WEIGHT_PARENT_DIRECTORY=$PWD
ããã§ã¢ãã«ã®éã¿ãåéã§ããã®ã§ã次ã®ã¹ãããã®ãã€ã¯ããµãŒãã¹ã®èµ·åã«é²ã¿ãŸãã
å©çšå¯èœãªããã©ãŒãã³ã¹ ãããã¡ã€ã«ããéžæãã
éžæããã¢ãã«ãšããŒããŠã§ã¢ã®æ§æã«åºã¥ããŠãå©çšå¯èœãªãã®ã®äžããæãé©åãªæšè«ããã©ãŒãã³ã¹ ãããã¡ã€ã«ãèªåçã«éžæãããŸããããŒã«ã«æšè«ãšã³ãžã³ã®çæã«ã¯ã以äžã® 2 ã€ã®ããã©ãŒãã³ã¹ ãããã¡ã€ã«ãå©çšã§ããŸãã
ã¬ã€ãã³ã·:
ã¬ã€ãã³ã·ã«æé©åããã NIM ãã€ã¯ããµãŒãã¹ã®æäŸã«éç¹ã眮ããŸãã
ã¹ã«ãŒããã:
ããã ã¹ã«ãŒãããã«æé©åããã NIM ãã€ã¯ããµãŒãã¹ã®æäŸã«éç¹ã眮ããŸãã
å©çšå¯èœãªç²ŸåºŠãªã©ããµããŒãæ©èœã®è©³çŽ°ã«ã€ããŠã¯ãNVIDIA NIM ããã¥ã¡ã³ãã®
ãµããŒãæ
å ±
ã®ãããã¯ãåç
§ããŠãã ããã
SFT ã¢ãã«ã䜿çšããäŸ
以äžã®ã³ãã³ããå®è¡ããŠãããŒã«ã«ç°å¢ã§ãã«ããã OpenMath2-Llama3.1-8B çšã® TensorRT-LLM æšè«ãšã³ãžã³ãäœæããŸãã
docker run -it --rm --gpus all \
--user $(id -u):$(id -g)\
--network=host \
--shm-size=32GB \
-e NGC_API_KEY \
-e NIM_FT_MODEL=/opt/weights/hf/OpenMath2-Llama3.1-8B \
-e NIM_SERVED_MODEL_NAME=OpenMath2-Llama3.1-8B \
-v $NIM_CACHE_PATH:/opt/nim/.cache \
-v $MODEL_WEIGHT_PARENT_DIRECTORY:/opt/weights/hf \
nvcr.io/nim/meta/llama-3.1-8b-instruct:1.3.0
ãã®ã³ãã³ãã¯ãNIM ãã€ã¯ããµãŒãã¹ããããã€ããããã«äœ¿çšããå
žåçãªã³ãã³ããšã»ãŒåãã§ãããã®å Žåãè¿œå ã® NIM_FT_MODEL ãã©ã¡ãŒã¿ãŒãè¿œå ããOpenMath2-Llama3.1-8B ã¢ãã«ãæããŠããŸãã
ããã«ãããNIM ã¯æé©åãããæšè«ãšã³ãžã³ãããŒã«ã«ç°å¢ã§ãã«ãããŸãããã®æ°ãã NIM ãã€ã¯ããµãŒãã¹ã䜿çšããŠæšè«ãè¡ãã«ã¯ã以äžã® Python ã³ãŒã ãµã³ãã«ãå®è¡ããŸãã
from openai import OpenAI
client = OpenAI(
base_url = "http://localhost:8000/v1",
api_key = "none"
)
completion = client.chat.completions.create(
model="OpenMath2-Llama3.1-8B",
messages=[{"role":"user","content":"What is your name?"}],
temperature=0.2,
top_p=0.7,
max_tokens=100,
stream=True
)
for chunk in completion:
if chunk.choices[0].delta.content is not None:
print(chunk.choices[0].delta.content, end="")
åç» 1. ãã¡ã€ã³ãã¥ãŒãã³ã°ããã AI ã¢ãã«ããããã€ããæ¹æ³
ã«ã¹ã¿ã ããã©ãŒãã³ã¹ ãããã¡ã€ã«ã§æé©åããã TensorRT-LLM ãšã³ãžã³ã®ãã«ã
ãµããŒããããŠãã GPU
ãªããåæ§ã®ã³ãã³ãã䜿çšããŠãNIM ãã€ã¯ããµãŒãã¹ãèµ·åã§ããŸãã
ã¢ãã« ãããã¡ã€ã«
ã®æé ã«åŸã£ãŠãã€ã¯ããµãŒãã¹ãèµ·åããã©ã®ãããã¡ã€ã«ã«ã¢ã¯ã»ã¹ã§ãããã確èªããŸãã
export IMG_NAME="nvcr.io/nim/meta/llama-3.1-8b-instruct:1.3.0"
docker run --rm --gpus=all -e NGC_API_KEY=$NGC_API_KEY $IMG_NAME list-model-profiles
H100 GPU ã䜿çšããŠãããšä»®å®ãããšã以äžã®ãããã¡ã€ã«ãå©çšå¯èœã§ããããšãããããŸãã
tensorrt_llm-h100-bf16-tp2-pp1-latency
tensorrt_llm-h100-bf16-tp1-pp1-throughput
ã³ãã³ããåå®è¡ããç®çã®ãããã¡ã€ã«ãæå®ããç°å¢å€æ°ãè¿œå ããŸãã
docker run --rm --gpus=all \
--user $(id -u):$(id -g)\
--network=host \
--shm-size=32GB \
-e NGC_API_KEY \
-e NIM_FT_MODEL=/opt/weights/hf/OpenMath2-Llama3.1-8B \
-e NIM_SERVED_MODEL_NAME=OpenMath2-Llama3.1-8B \
-e NIM_MODEL_PROFILE=tensorrt_llm-h100-bf16-tp2-pp1-latency \
-v $NIM_CACHE_PATH:/opt/nim/.cache \
-v $MODEL_WEIGHT_PARENT_DIRECTORY:/opt/weights/hf \
$IMG_NAME
ç®çã®ãããã¡ã€ã«ã§ NIM ãã€ã¯ããµãŒãã¹ãåèµ·åããã®ã§ãPython ã䜿çšããŠã¢ãã«ãšããåãããŸãã
from openai import OpenAI
client = OpenAI(
base_url = "http://localhost:8000/v1",
api_key = "none"
)
completion = client.chat.completions.create(
model="llama-3.1-8b-instruct",
messages=[{"role":"user","content":"What is your name?"}],
temperature=0.2,
top_p=0.7,
max_tokens=100,
stream=True
)
for chunk in completion:
if chunk.choices[0].delta.content is not None:
print(chunk.choices[0].delta.content, end="")
ãŸãšã
ã¢ãã«ã®ã«ã¹ã¿ãã€ãºã«
PEFT
ãŸã㯠SFT ã䜿çšããŠããå Žåã§ããNIM ã¯ãé«æ§èœãªæšè«ã®ããã«ã«ã¹ã¿ãã€ãºãããã¢ãã«ã®ãããã€ãããããªã¹ãããã§ç°¡åã«é«éåããŸããæé©åããã TensorRT-LLM æšè«ãšã³ãžã³ãããŒã«ã«ç°å¢ã§èªåçã«ãã«ãããããšã§ãNIM ã¯ãé«éåããã AI æšè«ãã©ãã«ã§ãè¿
éã«ãããã€ã§ããããæ°ããªå¯èœæ§ãåŒãåºããŠããŸãã詳现ã«ã€ããŠã¯ã
NVIDIA API ã«ã¿ãã°
ã«ã¢ã¯ã»ã¹ããNVIDIA NIM ããã¥ã¡ã³ãã®
ãã¡ã€ã³ãã¥ãŒãã³ã°ãããã¢ãã«ã®ãµããŒã
ãã芧ãã ããã
NVIDIA NIM éçºè
ãã©ãŒã©ã
ã§ã¯ãNVIDIA ããã³ NIM ãã€ã¯ããµãŒãã¹ ã³ãã¥ããã£ãšã®äº€æµããããšãã§ããŸãã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
Kubernetes çš Oracle ã³ã³ãã㌠ãšã³ãžã³ã䜿çšãã OCI ã® NVIDIA Nemotron LLM ã®ãã¡ã€ã³ãã¥ãŒãã³ã°ãšããã〠(Oracle æäŸ)
GTC ã»ãã·ã§ã³:
äŒæ¥ãå é: 次äžä»£ AI ãããã€ãå®çŸããããŒã«ãšãã¯ããã¯
GTC ã»ãã·ã§ã³:
NVIDIA NeMo ã«ããå€æ§ãªèšèªã§ã®åºç€ãšãªã倧èŠæš¡èšèªã¢ãã«ã®ã«ã¹ã¿ãã€ãº
NGC ã³ã³ãããŒ:
Phind-CodeLlama-34B-v2-Instruct
NGC ã³ã³ãããŒ:
Phi-3-Mini-4K-Instruct
NGC ã³ã³ãããŒ:
Mistral-NeMo-Minitron-8B-Instruct |
https://developer.nvidia.com/blog/mastering-llm-techniques-data-preprocessing/ | Mastering LLM Techniques: Data Preprocessing | The advent of
large language models (LLMs)
marks a significant shift in how industries leverage AI to enhance operations and services. By automating routine tasks and streamlining processes, LLMs free up human resources for more strategic endeavors, thus improving overall efficiency and productivity.
Training and
customizing LLMs
for high accuracy is fraught with challenges, primarily due to their dependency on high-quality data. Poor data quality and inadequate volume can significantly reduce model accuracy, making dataset preparation a critical task for AI developers.
Datasets frequently contain duplicate documents, personally identifiable information (PII), and formatting issues. Some datasets even house toxic or harmful information that poses risks to users. Training models on these datasets without proper processing can result in higher training time and lower model quality. Another significant challenge is the scarcity of data. Model builders are running out of publicly available data to train on, prompting many to turn to third-party vendors or generate synthetic data using advanced LLMs.
In this post, we will describe data processing techniques and best practices for optimizing LLM performance by improving data quality for training. We will introduce
NVIDIA NeMo Curator
and how it addresses these challenges, demonstrating real-world data processing use cases for LLMs.
Text processing pipelines and best practices
Dealing with the preprocessing of large data is nontrivial, especially when the dataset consists of mainly web-scraped data which is likely to contain large amounts of ill-formatted, low-quality data.
Figure 1. Text processing pipelines that can be built using NeMo Curator
Figure 1 shows a comprehensive text processing pipeline, including the following steps at a high-level:
Download the dataset from the source and extract to a desirable format such as JSONL.
Apply preliminary text cleaning, such as Unicode fixing and language separation.
Apply both standard and custom-defined filters to the dataset based on specific quality criteria.
Perform various levels of deduplication (exact, fuzzy, and semantic).
Selectively apply advanced quality filtering, including model-based quality filtering, PII redaction, distributed data classification, and task decontamination.
Blend curated datasets from multiple sources to form a unified dataset.
The sections below dive deeper into each of these stages.
Download and extract text
The initial step in data curation involves downloading and preparing datasets from various common sources such as Common Crawl, specialized collections such as arXiv and PubMed, or private on-premises datasets, each potentially containing terabytes of data.
This crucial phase requires careful consideration of storage formats and extraction methods, as publicly hosted datasets often come in compressed formats (for example, .warc.gz, tar.gz, or zip files) that need to be converted to more manageable formats (such as .jsonl or .parquet) for further processing.
Preliminary text cleaning
Unicode fixing and language identification represent crucial early steps in the data curation pipeline, particularly when dealing with large-scale web-scraped text corpora. This phase addresses two fundamental challenges: improperly decoded Unicode characters, and the presence of multiple languages within the dataset.
Unicode formatting issues often arise from incorrect character encoding or multiple encoding/decoding cycles. Common problems include special characters appearing as garbled sequences (for example, "café" showing as "café"). Language identification and separation are equally important, especially for curators who are interested in building monolingual datasets. Moreover, some data curation steps, such as heuristic filtering and model-based quality classifiers, are language-specific.
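As an illustration of this step, the following sketch repairs mojibake and tags the document language. It assumes the ftfy and langdetect packages purely for demonstration; NeMo Curator provides its own modules for Unicode fixing and language identification.
import ftfy
from langdetect import detect

raw = "The cafÃ© serves crÃ¨me brÃ»lÃ©e."   # mojibake from a bad decode
fixed = ftfy.fix_text(raw)                 # -> "The café serves crème brûlée."
language = detect(fixed)                   # -> "en"
print(fixed, language)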
This preliminary preprocessing step ensures clean, properly encoded text in identified languages, forming the foundation for all subsequent curation steps.
Heuristic filtering
Heuristic filtering employs rule-based metrics and statistical measures to identify and remove low-quality content.
The process typically evaluates multiple quality dimensions, such as document length, repetition patterns, punctuation distribution, and structural integrity of the text. Common heuristic filters include:
Word count filter:
Filters out snippets that are too brief to be meaningful or suspiciously long.
Boilerplate string filter:
Identifies and removes text containing excessive boilerplate content.
N-gram repetition filter:
Identifies repeated phrases at different lengths and removes documents with excessive repetition that might indicate low-quality or artificially generated content.
For heuristic filtering, the best practice is to implement a cascading approach. This enables more nuanced quality control while maintaining transparency in the filtering process. For improved performance, batch filtering can be implemented to process multiple documents simultaneously, significantly reducing computation time when dealing with large-scale datasets.
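A minimal sketch of such a cascade in plain Python is shown below; the thresholds and boilerplate phrases are arbitrary placeholders rather than recommended settings.
from collections import Counter

BOILERPLATE = ("terms of service", "all rights reserved", "click here")

def passes_word_count(text, lo=50, hi=100_000):
    n = len(text.split())
    return lo <= n <= hi

def passes_boilerplate(text, max_hits=2):
    t = text.lower()
    return sum(t.count(p) for p in BOILERPLATE) <= max_hits

def passes_ngram_repetition(text, n=3, max_ratio=0.2):
    words = text.split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return True
    top_count = Counter(ngrams).most_common(1)[0][1]
    return top_count / len(ngrams) <= max_ratio

def heuristic_cascade(docs):
    # Cheapest checks run first; a document must pass every stage to survive.
    for doc in docs:
        if passes_word_count(doc) and passes_boilerplate(doc) and passes_ngram_repetition(doc):
            yield doc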
Deduplication
Deduplication is essential for improving model training efficiency, reducing computational costs, and ensuring data diversity. It helps prevent models from overfitting to repeated content and improves generalization. The process can be implemented through three main approaches: exact, fuzzy, and semantic deduplication. These form a comprehensive strategy for handling different types of duplicates in large-scale datasets, from identical copies to conceptually similar content.
Exact deduplication
Exact deduplication focuses on identifying and removing completely identical documents. This method generates hash signatures for each document and groups documents by their hashes into buckets, keeping only one document per bucket. While this method is computationally efficient, fast and reliable, itâs limited to detecting perfectly matching content and may miss semantically equivalent documents with minor variations.
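The idea can be sketched in a few lines of plain Python; the MD5 hash over lightly normalized text is an illustrative choice, not a prescribed one.
import hashlib

def exact_dedup(docs):
    seen = set()
    unique = []
    for doc in docs:
        # Hash lightly normalized text; identical documents land in the same bucket.
        digest = hashlib.md5(doc.strip().lower().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique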
Fuzzy deduplication
Fuzzy deduplication addresses near-duplicate content using MinHash signatures and Locality-Sensitive Hashing (LSH) to identify similar documents.
The process involves the following steps:
Compute MinHash signatures for documents.
Use LSH to group similar documents into buckets. One document might belong to one or more buckets.
Compute Jaccard similarity between documents within the same buckets.
Based on the Jaccard similarity, transform the similarity matrix to a graph and identify connected components in the graph.
Documents within a connected component are considered fuzzy duplicates.
Remove identified duplicates from the dataset.
This method is particularly valuable for identifying content with minor modifications, detecting partial document overlaps, and finding documents with different formatting but similar content. It strikes a balance between computational efficiency and duplicate detection capability.
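A compact sketch of this flow, assuming the datasketch package (NeMo Curator ships its own GPU-accelerated implementation); this simplified version keeps the first document seen in each near-duplicate group and skips the explicit Jaccard graph and connected-components step.
from datasketch import MinHash, MinHashLSH

def signature(text, num_perm=128):
    m = MinHash(num_perm=num_perm)
    for token in set(text.lower().split()):
        m.update(token.encode("utf-8"))
    return m

def fuzzy_dedup(docs, threshold=0.8, num_perm=128):
    lsh = MinHashLSH(threshold=threshold, num_perm=num_perm)
    keep = []
    for i, doc in enumerate(docs):
        m = signature(doc, num_perm)
        # If the LSH index already holds a similar signature, treat this doc as a near-duplicate.
        if not lsh.query(m):
            lsh.insert(str(i), m)
            keep.append(doc)
    return keep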
Semantic deduplication
Semantic deduplication represents the most sophisticated approach, employing advanced embedding models to capture semantic meaning combined with clustering techniques to group semantically similar content. Research has shown that semantic deduplication can effectively reduce dataset size while maintaining or improving model performance. It's especially valuable for identifying paraphrased content, translated versions of the same material, and conceptually identical information.
Semantic deduplication consists of the following steps:
Each data point is embedded using a pretrained model.
The embeddings are clustered into k clusters using k-means clustering.
Within each cluster, pairwise cosine similarities are computed.
Data pairs with cosine similarity above a threshold are considered semantic duplicates.
From each group of semantic duplicates within a cluster, one representative datapoint is kept and the rest are removed.
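The following sketch mirrors those steps with off-the-shelf tools; the embedding model name, cluster count, and similarity threshold are placeholders, and it assumes sentence-transformers and scikit-learn are installed.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def semantic_dedup(docs, n_clusters=8, threshold=0.9):
    # 1) Embed every document with a pretrained model (placeholder model name).
    emb = SentenceTransformer("all-MiniLM-L6-v2").encode(docs)
    # 2) Cluster the embeddings with k-means.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(emb)
    keep = []
    for c in set(labels):
        idx = np.where(labels == c)[0]
        # 3) Pairwise cosine similarities within the cluster.
        sim = cosine_similarity(emb[idx])
        kept_local = []
        for j, doc_id in enumerate(idx):
            # 4)-5) Keep a document only if it is not a near-duplicate of one already kept.
            if all(sim[j, k] < threshold for k in kept_local):
                kept_local.append(j)
                keep.append(docs[doc_id])
    return keep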
Model-based quality filtering
Model-based quality filtering employs various types of models to evaluate and filter content based on quality metrics. The choice of model type significantly impacts both the effectiveness of filtering and the computational resources required, making it crucial to select the appropriate model for specific use cases.
Different types of models that can be used for quality filtering include:
N-gram based classifiers:
The simplest approach uses n-gram based bag-of-words classifiers like fastText, which excel in efficiency and practicality, as they require minimal training data (100,000 to 1,000,000 samples).
BERT-style classifiers:
BERT-style classifiers represent a middle-ground approach, offering better quality assessment through Transformer-based architectures. They can capture more complex linguistic patterns and contextual relationships, making them effective for quality assessment.
LLMs:
LLMs provide the most sophisticated quality assessment capabilities, leveraging their extensive knowledge to evaluate text quality. While they offer superior understanding of content quality, they have significant computational requirements, so they are best suited for smaller-scale applications, such as fine-tuning datasets.
Reward models:
Reward models represent a specialized category designed specifically for evaluating conversational data quality. These models can assess multiple quality dimensions simultaneously but similar to LLMs, they have significant computational requirements.
The optimal selection of quality filtering models should consider both the dataset scale and available computational resources. For large-scale pretraining datasets, combining lightweight models for initial filtering with advanced models for final quality assessment often provides the best balance of efficiency and effectiveness. For smaller, specialized datasets where quality is crucial, using models like LLMs or reward models becomes more feasible and beneficial.
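As a concrete example at the lightweight end of this spectrum, the sketch below trains and applies a fastText quality classifier; the training file path, label name, and probability threshold are illustrative assumptions.
import fasttext

# Each line of the (assumed) training file looks like: "__label__high <document text>".
model = fasttext.train_supervised(input="quality_train.txt", wordNgrams=2)

def is_high_quality(text, min_prob=0.9):
    labels, probs = model.predict(text.replace("\n", " "))  # predict() rejects newlines
    return labels[0] == "__label__high" and probs[0] >= min_prob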
PII redaction
Personally Identifiable Information (PII) redaction involves identifying and removing sensitive information from datasets to protect individual privacy and ensure compliance with data protection regulations.
This process is particularly important when dealing with datasets that contain personal information, from direct identifiers like names and social security numbers to indirect identifiers that could be used to identify individuals when combined with other data.
Modern PII redaction employs various techniques to protect sensitive information, including:
Replacing sensitive information with symbols (for example, XXX-XX-1234 for U.S. Social Security Numbers) while maintaining data format and structure.
Substituting sensitive data with non-sensitive equivalents that maintain referential integrity for analysis purposes.
Eliminating sensitive information when its presence is not necessary for downstream tasks.
Overall, PII redaction helps maintain data privacy, comply with regulations, and build trust with users while preserving the utility of their datasets for training and analysis purposes.
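A minimal regex-based sketch of the first two techniques; production pipelines usually rely on NER-based PII detectors, so these patterns are illustrative only.
import re

SSN = re.compile(r"\b\d{3}-\d{2}-(\d{4})\b")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def redact(text):
    text = SSN.sub(lambda m: "XXX-XX-" + m.group(1), text)  # keep the format, mask the digits
    text = EMAIL.sub("[EMAIL]", text)
    return text

print(redact("Reach Jane at jane@example.com, SSN 123-45-6789."))
# -> Reach Jane at [EMAIL], SSN XXX-XX-6789.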
Distributed data classification
Data classification plays a vital role in data curation. This process helps organize and categorize data based on various attributes such as domain and quality, ensuring data is well-balanced and representative of different knowledge domains.
Domain classification helps LLMs understand the context and specific domain of input text by identifying and categorizing content based on subject matter. The domain information serves as valuable auxiliary data, enabling developers to build more diverse training datasets while identifying and filtering out potentially harmful or unwanted content. For example, using the AEGIS Safety Model, which classifies content into 13 critical risk categories, developers can effectively identify and filter harmful content from training data.
When dealing with pretraining corpora that often contain billions of documents, running inference for classification becomes computationally intensive and time-consuming. Therefore, distributed data classification is necessary to overcome these challenges. This is achieved by chunking the datasets across multiple GPU nodes to accelerate the classification task in a distributed manner.
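The pattern can be sketched with Dask by partitioning the corpus and mapping a classifier over the partitions in parallel; the classify function is a placeholder, and a production setup such as NeMo Curator dispatches these partitions to GPU workers instead.
import dask.bag as db

def classify(doc):
    # Placeholder: call a domain, quality, or safety classifier here.
    return {"text": doc, "domain": "unknown"}

docs = ["doc one ...", "doc two ...", "doc three ..."]
bag = db.from_sequence(docs, npartitions=2)   # chunk the dataset across workers
labeled = bag.map(classify).compute()         # each partition is classified in parallel
print(labeled)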
Task decontamination
After training, LLMs are usually evaluated by their performance on downstream tasks consisting of unseen test data. Downstream task decontamination is a step that addresses the potential leakage of test data into training datasets, which can provide misleading evaluation results. The decontamination process typically involves several key steps:
Identifying potential downstream tasks and their test sets.
Converting test data into n-gram representations.
Searching for matching n-grams in the training corpus.
Removing or modifying contaminated sections while preserving document coherence.
This systematic approach helps ensure the effectiveness of decontamination while minimizing unintended impacts on data quality, ultimately contributing to more reliable model evaluation and development.
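A simplified sketch of steps 2 and 3, using an n-gram window of 13 words as an arbitrary placeholder (removal or modification of flagged sections is left out):
def ngrams(text, n=13):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(train_doc, test_docs, n=13):
    # Flag a training document if it shares any long n-gram with the test sets.
    test_grams = set().union(*(ngrams(t, n) for t in test_docs))
    return bool(ngrams(train_doc, n) & test_grams)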
Blending and shuffling
Data blending and shuffling represent the final steps in the data curation pipeline, combining multiple curated datasets while ensuring proper randomization for optimal model training. This process is essential for creating diverse, well-balanced training datasets that enable better model generalization and performance. Data blending involves merging data from multiple sources into a unified dataset, creating more comprehensive and diverse training data. The blending process is implemented using two approaches:
Online: Data combination occurs during training
Offline: Datasets are combined before training
Each approach offers distinct advantages depending on the specific requirements of the training process and the intended use of the final dataset.
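A sketch of the offline approach, where each curated source is sampled according to a target weight and the combined set is shuffled; the weights and sizes are placeholders.
import random

def blend(sources, weights, total, seed=0):
    rng = random.Random(seed)
    blended = []
    for docs, w in zip(sources, weights):
        k = int(total * w)
        blended.extend(rng.choices(docs, k=k))  # sample with replacement, for simplicity
    rng.shuffle(blended)
    return blended

# Example: 70% web text, 30% curated articles, 1,000 documents total.
# mixed = blend([web_docs, article_docs], [0.7, 0.3], total=1_000)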
Synthetic data generation
Having navigated the intricacies of the preprocessing stage, we now confront a formidable challenge in the realm of LLM development: the scarcity of data. The insatiable appetite of LLMs for vast training datasets, even for fine-tuning purposes, frequently outstrips the availability of domain-specific or language-particular data. To this end,
synthetic data generation (SDG)
is a powerful approach that leverages LLMs to create artificial datasets that mimic real-world data characteristics while maintaining privacy and ensuring data utility. This process uses external LLM services to generate high-quality, diverse, and contextually relevant data that can be used for pretraining, fine-tuning, or evaluating other models.
SDG empowers LLMs by enabling adaptation to low-resource languages, supporting domain specialization, and facilitating knowledge distillation across models, making it a versatile tool for expanding model capabilities. SDG has become particularly valuable in scenarios where real data is scarce, sensitive, or difficult to obtain.
Figure 2. General synthetic data generation architecture with NeMo Curator
The synthetic data pipeline encompasses three key stages: Generate, Critique, and Filter.
Generate:
Use prompt engineering to generate synthetic data for various tasks. Taking
Nemotron-4
as an example, SDG is applied to generate training data for five different types of tasks: open-ended QA, closed-ended QA, writing assignments, coding, and math problems.
Critique:
Use methods like LLM reflection, LLM-as-judge, reward model inference, and other agents to evaluate the quality of synthetic data. The evaluation results can be used as feedback to SDG LLM to generate better results or filter out low quality data. A prime example is the
Nemotron-4-340B reward NIM
, which assesses data quality through five key attributes: Helpfulness, Correctness, Coherence, Complexity, and Verbosity. By setting appropriate thresholds for these attribute scores, the filtering process ensures that only high-quality synthetic data is retained, while filtering out low-quality or inappropriate content.
Filter:
Steps like deduplication and PII redaction to further improve SDG data quality.
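As a sketch of the Generate stage, the snippet below prompts an OpenAI-compatible LLM endpoint for a synthetic Q&A pair; the base URL and model name are placeholders for whichever service you use, and the Critique and Filter stages would then run on its output.
from openai import OpenAI

# Placeholder endpoint and model name; point these at the OpenAI-compatible
# LLM service you use for generation.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

def generate_qa(topic):
    prompt = (
        f"Write one question and a concise answer about {topic}, "
        "formatted as 'Q: ... A: ...'."
    )
    resp = client.chat.completions.create(
        model="my-sdg-model",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
        max_tokens=200,
    )
    return resp.choices[0].message.content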
Note, however, SDG is not suitable in all cases. Hallucinations from external LLMs can introduce unreliable information, compromising data integrity. Additionally, the generated data's distribution may not align with the target distribution, potentially leading to poor real-world performance. In such cases, using SDG could actually harm the system's effectiveness rather than improve it.
Data processing for building sovereign LLMs
As noted previously, open-source LLMs excel in English but struggle with other languages, especially those of Southeast Asia. This is primarily due to a lack of training data in these languages, limited understanding of local cultures, and insufficient tokens to capture unique linguistic structures and expressions.
To fully meet customer needs, enterprises in non-English-speaking countries must go beyond generic models and customize them to capture the nuances of their local languages, ensuring a seamless and impactful customer experience. For example, using NeMo Curator, Viettel Solutions processed
high-quality Vietnamese data
to increase accuracy by 10%, reduce the dataset size by 60%, and speed up training by 3x.
The main steps for this use case are:
Download several Vietnamese and multilingual datasets (Wikipedia, Vietnamese news corpus,
OSCAR
, and C4) and convert to Parquet for efficient handling and processing of large datasets.
Combine, standardize, and shard into a single dataset
Apply unicode reformatting, exact deduplication, quality filtering (heuristic and classifier-based).
You can
follow along with the full tutorial
.
Improve data quality with NVIDIA NeMo Curator
So far, we have discussed the importance of data quality in improving the accuracy of LLMs and explored various data processing techniques. Developers can now try these techniques directly through
NeMo Curator
. It provides a customizable and modular interface that enables developers to build on top of it easily.
NeMo Curator uses NVIDIA RAPIDS GPU-accelerated libraries like cuDF, cuML, and cuGraph, and Dask to speed up workloads on multinode multi-GPUs, reducing processing time and scale as needed. For example, by using GPUs to accelerate the data processing pipelines,
Zyphra reduced the total cost of ownership (TCO)
by 50% and processed the data 10x faster (from 3 weeks to 2 days).
To get started, check out the
NVIDIA/NeMo-Curator GitHub repository
and available
tutorials
that cover various data curation workflows, such as:
Data processing for pretraining
Data processing for customization
SDG pipelines
You can also gain access through a
NeMo framework container
and request enterprise support with an
NVIDIA AI Enterprise
license. | https://developer.nvidia.com/ja-jp/blog/mastering-llm-techniques-data-preprocessing/ | LLM ãã¯ããã¯ã®ç¿åŸ: ããŒã¿ã®ååŠç | Reading Time:
2
minutes
倧èŠæš¡èšèªã¢ãã« (LLM)
ã®åºçŸã¯ãäŒæ¥ã AI ã掻çšããŠæ¥åãšãµãŒãã¹ã匷åããæ¹æ³ã«å€§ããªå€åããããããŸãããLLM ã¯æ¥åžžçãªäœæ¥ãèªååããããã»ã¹ãåçåããããšã§ã人çãªãœãŒã¹ãããæŠç¥çãªåãçµã¿ã«å²ãåœãŠãããšã§ãå
šäœçãªå¹çæ§ãšçç£æ§ãåäžãããŸãã
LLM ãé«ç²ŸåºŠã«ãã¬ãŒãã³ã°ããã³
ã«ã¹ã¿ãã€ãº
ããã«ã¯ãé«å質ãªããŒã¿ãå¿
èŠãšãªããããå€ãã®èª²é¡ã䌎ããŸããããŒã¿ã®è³ªãäœããéãååã§ãªããšãã¢ãã«ã®ç²ŸåºŠã倧å¹
ã«äœäžããå¯èœæ§ããããããAI éçºè
ã«ãšã£ãŠããŒã¿ã»ããã®æºåã¯éèŠãªäœæ¥ã® 1 ã€ãšãªã£ãŠããŸãã
ããŒã¿ã»ããã«ã¯åŸã
ã«ããŠéè€ããããã¥ã¡ã³ããå人ãç¹å®ã§ããæ
å ± (PII)ããã©ãŒãããã«é¢ããåé¡ãååšããŸããããŒã¿ã»ããã®äžã«ã¯ããŠãŒã¶ãŒã«ãªã¹ã¯ãããããæ害ãªæ
å ±ãäžé©åãªæ
å ±ãå«ãŸããŠãããã®ãããããŸããé©åãªåŠçãè¡ããã«ãããã£ãããŒã¿ã»ããã§ã¢ãã«ããã¬ãŒãã³ã°ãããšããã¬ãŒãã³ã°æéãé·åŒããããã¢ãã«ã®å質ãäœäžããå ŽåããããŸãããã 1 ã€ã®å€§ããªèª²é¡ã¯ããŒã¿ã®äžè¶³ã§ããã¢ãã«éçºè
ã¯ãã¬ãŒãã³ã°çšã®å
¬éããŒã¿ã䜿ãæããã€ã€ãããå€ãã®äººã
ããµãŒãããŒãã£ã®ãã³ããŒã«äŸé Œããããé«åºŠãª LLM ã䜿çšããŠåæããŒã¿ãçæãããããããã«ãªã£ãŠããŸãã
ãã®èšäºã§ã¯ããã¬ãŒãã³ã°çšã®ããŒã¿ã®å質ãåäžããããšã§ LLM ã®ããã©ãŒãã³ã¹ãæé©åããããã®ããŒã¿åŠçãã¯ããã¯ãšãã¹ã ãã©ã¯ãã£ã¹ã«ã€ããŠèª¬æããŸãããŸãã
NVIDIA NeMo Curator
ã®æŠèŠããã³åè¿°ãã課é¡ãžã®å¯ŸåŠæ¹æ³ã説æããLLM ã®å®éã®ããŒã¿åŠçã®ãŠãŒã¹ ã±ãŒã¹ãã玹ä»ããŸãã
ããã¹ãåŠçãã€ãã©ã€ã³ãšãã¹ã ãã©ã¯ãã£ã¹
倧èŠæš¡ããŒã¿ã®ååŠçã¯å®¹æã§ã¯ãããŸãããç¹ã«ãããŒã¿ã»ãããäž»ã«Web ã¹ã¯ã¬ã€ãã³ã°ãããããŒã¿ã§æ§æãããŠããã倧éã®äžé©åãªãã©ãŒãããã®äœå質ããŒã¿ãå«ãŸããŠããå¯èœæ§ãé«ãå Žåã¯ãªãããã§ãã
å³ 1. NeMo Curator ã䜿çšããŠæ§ç¯ã§ããããã¹ãåŠçãã€ãã©ã€ã³
å³ 1 ã¯ã以äžã®æé ãå«ãå
æ¬çãªããã¹ãåŠçãã€ãã©ã€ã³ã®æŠèŠã瀺ããŠããŸãã
ãœãŒã¹ããããŒã¿ã»ãããããŠã³ããŒãããJSONL ãªã©ã®æãŸãããã©ãŒãããã§æœåºããŸãã
Unicode ã®ä¿®æ£ãèšèªã«ããåé¡ãªã©ãäºåçãªããã¹ã ã¯ãªãŒãã³ã°ãé©çšããŸãã
ç¹å®ã®å質åºæºã«åºã¥ããŠãæšæºçãªãã£ã«ã¿ãŒãšã«ã¹ã¿ã å®çŸ©ã®ãã£ã«ã¿ãŒã®äž¡æ¹ãããŒã¿ã»ããã«é©çšããŸãã
ããŸããŸãªã¬ãã«ã®éè€æé€ (å³å¯ãææ§ãæå³ç) ãå®è¡ããŸãã
ã¢ãã«ããŒã¹ã®å質ãã£ã«ã¿ãªã³ã°ãå人æ
å ± (PII) ã®åé€ã(åæ£åŠçã«ãã) ããŒã¿åé¡ãäžæµã¿ã¹ã¯ã®æ±æé€å»ãªã©ã®é«åºŠãªå質ãã£ã«ã¿ãªã³ã°ãå¿
èŠã«å¿ããŠéžæçã«é©çšããŸãã
è€æ°ã®ãœãŒã¹ããåéããã粟éžãããããŒã¿ã»ãããäžäœåããçµ±åããããŒã¿ã»ãããäœæããŸãã
以äžã®ã»ã¯ã·ã§ã³ã§ã¯ããããã®å段éã«ã€ããŠè©³ãã説æããŸãã
ããã¹ããããŠã³ããŒãããŠæœåº
ããŒã¿ ãã¥ã¬ãŒã·ã§ã³ã®æåã®ã¹ãããã§ã¯ã Common Crawl ã®ãããªãããŸããŸãªäžè¬çãªãœãŒã¹ãarXiv ã PubMed ãªã©ã®å°éçãªã³ã¬ã¯ã·ã§ã³ãèªç€Ÿä¿æã®ãã©ã€ããŒã ããŒã¿ãªã©ããããŒã¿ã»ãããããŠã³ããŒãããŠæºåããŸãããããã®ããŒã¿ã»ããã«ã¯ããããããã©ãã€ãåäœã®ããŒã¿ãå«ãŸããŠããå¯èœæ§ããããŸãã
ãã®éèŠãªãã§ãŒãºã§ã¯ãä¿å圢åŒãšæœåºæ¹æ³ãæ
éã«æ€èšããå¿
èŠããããŸããäžè¬ã«å
¬éãããã¹ããããŠããããŒã¿ã»ããã¯å§çž®åœ¢åŒ (äŸ: .warc.gzãtar.gzãzip ãã¡ã€ã«) ã§æäŸãããããšãå€ããããåŸç¶ã®åŠçã®ããã«ããæ±ããããåœ¢åŒ (.jsonl ã .parquet ãªã©) ã«å€æããå¿
èŠããããŸãã
äºåçãªããã¹ã ã¯ãªãŒãã³ã°
Unicode ã®ä¿®æ£ãšèšèªã«ããåé¡ã¯ãç¹ã«å€§èŠæš¡ãª Web ã¹ã¯ã¬ã€ãã³ã°ã«ããããã¹ã ã³ãŒãã¹ãæ±ãå ŽåãããŒã¿ ãã¥ã¬ãŒã·ã§ã³ ãã€ãã©ã€ã³ã®éèŠãªåæã¹ãããã§ãããã®ãã§ãŒãºã§ã¯ãäžé©åã«ãã³ãŒãããã Unicode æåãšãããŒã¿ã»ããå
ã«è€æ°ã®èšèªãååšãããšãã 2 ã€ã®åºæ¬çãªèª²é¡ã«å¯ŸåŠããŸãã
Unicode 圢åŒã«é¢ããåé¡ã¯ãå€ãã®å Žåãæåãšã³ã³ãŒãã®èª€ããããšã³ã³ãŒã/ãã³ãŒã ãµã€ã¯ã«ãè€æ°åå®è¡ãããããšã«ãã£ãŠçºçããŸããããããåé¡ãšããŠã¯ãç¹æ®æåãæååãããæåå (äŸ:ãcaféãããcaféããšè¡šç€ºããã) ãšããŠè¡šç€ºãããããšãæããããŸããèšèªã®èå¥ãšåé¡ã¯ãç¹ã«åäžèšèªã®ããŒã¿ã»ããã®ãã¥ã¬ãŒã·ã§ã³ã«é¢å¿ã®ããéçºè
ã«ãšã£ãŠã¯åæ§ã«éèŠã§ããããã«ããã¥ãŒãªã¹ãã£ã㯠ãã£ã«ã¿ãªã³ã°ãã¢ãã«ããŒã¹ã®å質åé¡åšãªã©ã®ããŒã¿ ãã¥ã¬ãŒã·ã§ã³ã®ã¹ãããã®äžéšã¯èšèªã«äŸåããŠããŸãã
ãã®äºåçãªååŠçã¹ãããã§ã¯ãèå¥ãããèšèªã§é©åã«ãšã³ã³ãŒããããã¯ãªãŒã³ãªããã¹ãã確ä¿ããããã®åŸã®ãã¥ã¬ãŒã·ã§ã³ã¹ãããã®åºç€ãšãªããŸãã
ãã¥ãŒãªã¹ãã£ã㯠ãã£ã«ã¿ãªã³ã°
ãã¥ãŒãªã¹ãã£ã㯠ãã£ã«ã¿ãªã³ã°ã§ã¯ãã«ãŒã«ããŒã¹ã®è©äŸ¡ææšãšçµ±èšç尺床ã䜿çšããŠãäœå質ãªã³ã³ãã³ããç¹å®ããåé€ããŸãã
ãã®ããã»ã¹ã¯éåžžãããã¥ã¡ã³ãã®é·ããç¹°ãè¿ããã¿ãŒã³ãå¥èªç¹ã®ååžãããã¹ãã®æ§é çæŽåæ§ãªã©ãè€æ°ã®å質åºæºã§è©äŸ¡ãããŸããäžè¬çãªãã¥ãŒãªã¹ãã£ã㯠ãã£ã«ã¿ãŒã«ã¯ä»¥äžã®ãããªãã®ããããŸãã
åèªæ°ãã£ã«ã¿ãŒ:
æå³ããªããªãã»ã©çãããããŸãã¯çãããã»ã©ã«é·ãããããã¹ãããã£ã«ã¿ãªã³ã°ããŸãã
å®åæãã£ã«ã¿ãŒ:
éå°ãªå®åæãå«ãããã¹ããç¹å®ããåé€ããŸãã
N-gram å埩ãã£ã«ã¿ãŒ:
ç°ãªãé·ãã§ç¹°ãè¿ããããã¬ãŒãºãç¹å®ããäœå質ãŸãã¯äººå·¥çã«çæãããã³ã³ãã³ãã§ããå¯èœæ§ãããéå°ãªå埩ãå«ãææžãåé€ããŸãã
ãã¥ãŒãªã¹ãã£ã㯠ãã£ã«ã¿ãªã³ã°ã®å Žåã¯ãã«ã¹ã±ãŒã ã¢ãããŒããæ¡ãã®ãæåã®æ¹æ³ã§ããããã«ããããã£ã«ã¿ãªã³ã° ããã»ã¹ã®éææ§ãç¶æããªãããããç¹çŽ°ãªå質管çãå¯èœã«ãªããŸããåŠçããã©ãŒãã³ã¹ãåäžãããããã«ãããã ãã£ã«ã¿ãªã³ã°ãæ¡çšããŠè€æ°ã®ããã¥ã¡ã³ããåæã«åŠçãããšå€§èŠæš¡ãªããŒã¿ã»ãããæ±ãéã®èšç®æéã倧å¹
ã«ççž®ããããšãã§ããŸãã
éè€æé€
éè€æé€ã¯ãã¢ãã«ã®ãã¬ãŒãã³ã°å¹çã®åäžãèšç®ã³ã¹ãã®åæžãããŒã¿ã®å€æ§æ§ã®ç¢ºä¿ã«äžå¯æ¬ ã§ããç¹°ãè¿ãåºçŸããã³ã³ãã³ãã«ã¢ãã«ãéå°é©åããã®ãé²ããæ±çšæ§ãé«ããŸãããã®ããã»ã¹ã¯ãå³å¯ãææ§ãæå³ãšãã 3 ã€ã®äž»ãªéè€æé€ã¢ãããŒããéããŠå®è£
ã§ããŸãããããã¯ãåäžã®ã³ããŒããæŠå¿µçã«é¡äŒŒããã³ã³ãã³ããŸã§ã倧èŠæš¡ããŒã¿ã»ããå
ã®ç°ãªãã¿ã€ãã®éè€ãåŠçããå
æ¬çãªæŠç¥ã圢æããŸãã
å³å¯ãªéè€æé€
å³å¯ãªéè€æé€ã¯ãå®å
šã«åäžã®ããã¥ã¡ã³ããèå¥ããåé€ããããšã«éç¹ã眮ããŠããŸãããã®æ¹æ³ã§ã¯ãããã¥ã¡ã³ãããšã«ããã·ã¥çœ²åãçæããããã·ã¥ããšã«ããã¥ã¡ã³ããã°ã«ãŒãåããŠãã±ããã«æ ŒçŽãããã±ããããšã« 1 ã€ã®ããã¥ã¡ã³ãã®ã¿ãæ®ããŸãããã®æ¹æ³ã¯èšç®å¹çãé«ããé«éãã€ä¿¡é Œæ§ãé«ãã®ã§ãããå®å
šã«äžèŽããã³ã³ãã³ãã®æ€åºã«éå®ããããããæå³çã«ã¯åçãªã®ã«ãããã«ç°ãªãææžãèŠéãå¯èœæ§ããããŸãã
ææ§ãªéè€æé€
ææ§ãªéè€æé€ã¯ãMinHash 眲åãšå±ææ§éæåããã·ã¥å (LSH: Locality-Sensitive Hashing) ã䜿çšããŠãé¡äŒŒããããã¥ã¡ã³ããèå¥ããã»ãŒéè€ããã³ã³ãã³ãã«å¯ŸåŠããŸãã
ãã®ããã»ã¹ã«ã¯ã以äžã®ã¹ããããå«ãŸããŸãã
ããã¥ã¡ã³ãã® MinHash 眲åãèšç®ããŸãã
LSH ã䜿çšããŠãé¡äŒŒããããã¥ã¡ã³ãããã±ããã«ã°ã«ãŒãåããŸãã1 ã€ã®ããã¥ã¡ã³ãã 1 ã€ä»¥äžã®ãã±ããã«å±ããå ŽåããããŸãã
åããã±ããå
ã®ããã¥ã¡ã³ãé㧠Jaccard é¡äŒŒåºŠãèšç®ããŸãã
Jaccard é¡äŒŒåºŠã«åºã¥ããŠãé¡äŒŒåºŠè¡åãã°ã©ãã«å€æããã°ã©ãå
ã®é£çµæåãç¹å®ããŸãã
é£çµæåå
ã®ããã¥ã¡ã³ãã¯ææ§ãªéè€ãšèŠãªãããŸãã
ç¹å®ããéè€ãããŒã¿ã»ããããåé€ããŸãã
ãã®æ¹æ³ã¯ã軜埮ãªå€æŽãå ããããã³ã³ãã³ãã®ç¹å®ãéšåçãªããã¥ã¡ã³ãã®éè€ã®æ€åºãç°ãªããã©ãŒãããã§ãããé¡äŒŒããã³ã³ãã³ããæã€ããã¥ã¡ã³ãã®æ€çŽ¢ã«ç¹ã«æçšã§ããèšç®å¹çãšéè€æ€åºèœåã®ãã©ã³ã¹ãåããŠããŸãã
æå³çãªéè€æé€
æå³çãªéè€æé€ã¯ãæãæŽç·Žãããã¢ãããŒãã§ãããé«åºŠãªåã蟌ã¿ã¢ãã«ã䜿çšããŠã»ãã³ãã£ãã¯ãªæå³ãæããã¯ã©ã¹ã¿ãªã³ã°æè¡ãšçµã¿åãããŠæå³çã«é¡äŒŒããã³ã³ãã³ããã°ã«ãŒãåããŸããç 究ã§ã¯ãæå³çãªéè€æé€ã¯ãã¢ãã«ã®ããã©ãŒãã³ã¹ãç¶æãŸãã¯æ¹åããªãããããŒã¿ã»ããã®ãµã€ãºãå¹æçã«çž®å°ã§ããããšã瀺ãããŠããŸããèšãæããããã³ã³ãã³ããåãçŽ æã®ç¿»èš³çãæŠå¿µçã«åäžã®æ
å ±ãç¹å®ããã®ã«ç¹ã«æçšã§ãã
æå³ã«ããéè€æé€ã¯ã以äžã®ã¹ãããã§æ§æãããŸãã
åããŒã¿ ãã€ã³ãããäºååŠç¿æžã¿ã¢ãã«ã䜿çšããŠåã蟌ãŸããŸãã
åã蟌ã¿ã¯ãk-means ã䜿çšã㊠k åã®ã¯ã©ã¹ã¿ãŒã«ã°ã«ãŒãåãããŸãã
åã¯ã©ã¹ã¿ãŒå
ã§ããã¢ããšã®ã³ãµã€ã³é¡äŒŒåºŠãèšç®ãããŸãã
éŸå€ãè¶
ããã³ãµã€ã³é¡äŒŒåºŠãæããããŒã¿ ãã¢ã¯ãæå³ã®éè€ãšèŠãªãããŸãã
ã¯ã©ã¹ã¿ãŒå
ã®æå³çãªéè€ã®åã°ã«ãŒãããã1 ã€ã®ä»£è¡šçãªããŒã¿ãã€ã³ããä¿æãããæ®ãã¯åé€ãããŸãã
ã¢ãã«ããŒã¹ã®å質ãã£ã«ã¿ãªã³ã°
ã¢ãã«ããŒã¹ã®å質ãã£ã«ã¿ãªã³ã°ã§ã¯ãããŸããŸãªçš®é¡ã®ã¢ãã«ã䜿çšããŠãå質ææšã«åºã¥ããŠã³ã³ãã³ããè©äŸ¡ããŠãã£ã«ã¿ãªã³ã°ããŸããã¢ãã«ã®çš®é¡ã®éžæã¯ããã£ã«ã¿ãªã³ã°ã®æå¹æ§ãšå¿
èŠãªèšç®ãªãœãŒã¹ã®äž¡æ¹ã«å€§ããªåœ±é¿ãåãŒããããç¹å®ã®ãŠãŒã¹ ã±ãŒã¹ã«é©åãªã¢ãã«ãéžæããããšãéèŠã§ãã
å質ãã£ã«ã¿ãªã³ã°ã«äœ¿çšã§ããã¢ãã«ã«ã¯ã以äžã®çš®é¡ããããŸãã
N-gram ããŒã¹ã®åé¡åš:
æãåçŽãªã¢ãããŒãã¯ãfastText ã®ãã㪠N-gram ããŒã¹ã® Bag-of-Words åé¡åšã䜿çšããæ¹æ³ã§ããå¿
èŠãªãã¬ãŒãã³ã° ããŒã¿ (10 äžïœ100 äžãµã³ãã«) ãæãå°ãªãæžããããå¹çæ§ãšå®çšæ§ã«åªããŠããŸãã
BERT ã¹ã¿ã€ã«ã®åé¡åš:
BERT ã¹ã¿ã€ã«ã®åé¡åšã¯äžéçãªã¢ãããŒãã§ãããTransformer ããŒã¹ã®ã¢ãŒããã¯ãã£ãéããŠãã質ã®é«ãè©äŸ¡ãæäŸããŸããããè€éãªèšèªãã¿ãŒã³ãæèäžã®é¢ä¿ãæããããšãã§ããå質è©äŸ¡ã«å¹æçã§ãã
LLM:
LLM ã¯ãããã¹ãã®å質è©äŸ¡ã«å¹
åºãç¥èã掻çšããæãæŽç·Žãããå質è©äŸ¡æ©èœãæäŸããŸããã³ã³ãã³ãã®å質ãããæ·±ãç解ã§ããŸãããèšç®èŠä»¶ãé«ãããããã¡ã€ã³ãã¥ãŒãã³ã°çšã®ããŒã¿ã»ãããªã©ãå°èŠæš¡ãªã¢ããªã±ãŒã·ã§ã³ã«åããŠããŸãã
å ±é
¬ã¢ãã«:
å ±é
¬ã¢ãã«ã¯ãäŒè©±ããŒã¿ã®å質ãè©äŸ¡ã«ç¹åãèšèšãããå°éã«ããŽãªã§ãããããã®ã¢ãã«ã¯è€æ°ã®å質åºæºãåæã«è©äŸ¡ã§ããŸãããLLM ãšåããé«ãèšç®èŠä»¶ãæ±ããããŸãã
æé©ãªå質ãã£ã«ã¿ãªã³ã° ã¢ãã«ã®éžæã«ã¯ãããŒã¿ã»ããã®èŠæš¡ãšå©çšå¯èœãªèšç®ãªãœãŒã¹ã®äž¡æ¹ãèæ
®ããå¿
èŠããããŸãã倧èŠæš¡ãªäºååŠç¿ããŒã¿ã»ããã®å Žåãåæãã£ã«ã¿ãªã³ã°ã«ã¯è»œéãªã¢ãã«ã䜿çšããæçµçãªå質è©äŸ¡ã«ã¯é«åºŠãªã¢ãã«ãçµã¿åãããããšã§ãå¹çæ§ãšæå¹æ§ã®ãã©ã³ã¹ãåŸãããŸããå質ãéèŠãšãªãå°èŠæš¡ã§å°éçãªããŒã¿ã»ããã®å Žåã¯ãLLM ãå ±é
¬ã¢ãã«ãªã©ã®ã¢ãã«ã䜿çšããããšããããå®çŸçã§æçãšãªããŸãã
PII ã®åé€
å人ãç¹å®ã§ããæ
å ± (PII) ã®åé€ã«ã¯ãå人ã®ãã©ã€ãã·ãŒãä¿è·ããããŒã¿ä¿è·èŠå¶ã«å¯Ÿããéµå®ã確å®ã«ããããã«ãããŒã¿ã»ããå
ã®æ©å¯æ
å ±ãèå¥ããã³åé€ããããšãå«ãŸããŸãã
ãã®ããã»ã¹ã¯ãæ°åã瀟äŒä¿éçªå·ãªã©ã®çŽæ¥çãªèå¥åãããä»ã®ããŒã¿ãšçµã¿åãããããšã§å人ãèå¥ã§ããéæ¥çãªèå¥åãŸã§ãå人æ
å ±ãå«ãããŒã¿ã»ãããæ±ãå Žåã«ã¯ç¹ã«éèŠã§ãã
ææ°ã® PII åé€ã§ã¯ãæ©å¯æ
å ±ãä¿è·ããããã«ã以äžãå«ãããŸããŸãªæè¡ãçšããããŠããŸãã
ããŒã¿åœ¢åŒãšæ§é ãç¶æããªãããæ©å¯æ
å ±ãèšå·ã«çœ®ãæãã (ããšãã°ãç±³åœç€ŸäŒä¿éçªå·ã®å Žå XXX-XX-1234 ã«çœ®ãæãã)ã
åæã®ç®çã§åç
§æŽåæ§ãç¶æããªãããæ©å¯ããŒã¿ãæ©å¯ã§ãªãåçã®ããŒã¿ã«çœ®ãæããã
äžæµã¿ã¹ã¯ã«å¿
èŠã§ãªãå Žåããã®æ©å¯æ
å ±ãåé€ããã
å
šäœãšã㊠PII ã®åé€ã¯ãããŒã¿ã®ãã©ã€ãã·ãŒãä¿è·ããèŠå¶ãéµå®ãããã¬ãŒãã³ã°ãšåæã®ç®çã§ããŒã¿ã»ããã®æçšæ§ãç¶æããªããããŠãŒã¶ãŒãšä¿¡é Œé¢ä¿ãæ§ç¯ããã®ã«åœ¹ç«ã¡ãŸãã
(åæ£åŠçã«ãã) ããŒã¿åé¡
ããŒã¿åé¡ã¯ãããŒã¿ ãã¥ã¬ãŒã·ã§ã³ã«ãããŠéèŠãªåœ¹å²ãæãããŸãããã®ããã»ã¹ã§ã¯ããã¡ã€ã³ãå質ãªã©å€æ§ãªå±æ§ã«åºã¥ããŠããŒã¿ãæŽçããåé¡ããããšã§ããŒã¿ã®ãã©ã³ã¹ãåããããŸããŸãªç¥èãã¡ã€ã³ã代衚ãããã®ãšãªãããã«ããŸãã
ãã¡ã€ã³åé¡ã¯ãäž»é¡ã«åºã¥ããŠã³ã³ãã³ããèå¥ããŠã«ããŽãªãŒåãããããšã§ãLLM ãå
¥åããã¹ãã®ã³ã³ããã¹ããç¹å®ã®ãã¡ã€ã³ãç解ããã®ã«åœ¹ç«ã¡ãŸãããã¡ã€ã³æ
å ±ã¯ãéçºè
ãæœåšçã«æ害ãŸãã¯äžèŠãªã³ã³ãã³ããç¹å®ãããã£ã«ã¿ãªã³ã°ããªãããããå€æ§ãªãã¬ãŒãã³ã° ããŒã¿ã»ãããæ§ç¯ããããšãå¯èœã«ãã貎éãªè£å©çæ
å ±ãšãªããŸããããšãã°ãã³ã³ãã³ãã 13 ã®é倧ãªãªã¹ã¯ ã«ããŽãªã«åé¡ãã AEGIS Safety Model ã䜿çšããããšã§ãéçºè
ã¯ãã¬ãŒãã³ã° ããŒã¿ããæ害ãªã³ã³ãã³ããå¹æçã«èå¥ãããã£ã«ã¿ãªã³ã°ããããšãã§ããŸãã
æ°ååãã®ããã¥ã¡ã³ããå«ãŸããŠããããšãå€ãäºååŠç¿ã³ãŒãã¹ãæ±ãå Žåãåé¡ãè¡ãããã®æšè«ãå®è¡ããã®ã«å€ãã®èšç®åŠçãšæéãå¿
èŠãšãªããŸãããããã£ãŠããããã®èª²é¡ãå
æããã«ã¯ãåæ£åŠçãé©çšã§ããããŒã¿åé¡ãå¿
èŠã§ããããã¯ãããŒã¿ã»ãããè€æ°ã® GPU ããŒãã«åå²ããããšã§ãåé¡ã¿ã¹ã¯ãé«éåããããšã«ãã£ãŠå®çŸãããŸãã
äžæµã¿ã¹ã¯ã®æ±æé€å»
ãã¬ãŒãã³ã°ã®åŸãLLM ã¯éåžžãèŠããªããã¹ã ããŒã¿ã§æ§æãããäžæµã¿ã¹ã¯ã®ããã©ãŒãã³ã¹ã«ãã£ãŠè©äŸ¡ãããŸããäžæµã¿ã¹ã¯ã®æ±æé€å»ã¯ããã¹ã ããŒã¿ããã¬ãŒãã³ã° ããŒã¿ã»ããã«æ··å
¥ãæŒæŽ©ããå¯èœæ§ã«å¯ŸåŠããã¹ãããã§ããããã¯æå³ããªãè©äŸ¡çµæããããããªã¹ã¯ãæããŸããæ±æé€å»ããã»ã¹ã«ã¯ãéåžžã以äžã®äž»èŠãªã¹ããããå«ãŸããŸãã
æœåšçãªäžæµã¿ã¹ã¯ãšãã®ãã¹ã ã»ãããç¹å®ããŸãã
ãã¹ã ããŒã¿ã N-gram è¡šçŸã«å€æããŸãã
ãã¬ãŒãã³ã° ã³ãŒãã¹ã§äžèŽãã N-gram ãæ€çŽ¢ããŸãã
ããã¥ã¡ã³ãã®æŽåæ§ãç¶æããªãããæ±æãããã»ã¯ã·ã§ã³ãåé€ãŸãã¯ä¿®æ£ããŸãã
ãã®äœç³»çãªã¢ãããŒãã¯ãããŒã¿ã®å質ã«å¯Ÿããæå³ããªã圱é¿ãæå°éã«æããªãããæ±æé€å»ã®å¹æã確å®ãªãã®ã«ããŠãæçµçã«ã¯ãããä¿¡é Œæ§ã®é«ãã¢ãã«ã®è©äŸ¡ãšéçºã«è²¢ç®ããŸãã
ãã¬ã³ããšã·ã£ããã«
ããŒã¿ã®ãã¬ã³ããšã·ã£ããã«ã¯ãããŒã¿ ãã¥ã¬ãŒã·ã§ã³ ãã€ãã©ã€ã³ã«ãããæçµã¹ãããã§ãããè€æ°ã®ãã¥ã¬ãŒã·ã§ã³ãããããŒã¿ã»ãããçµã¿åããããšåæã«é©åãªã©ã³ãã æ§ã確ä¿ããæé©ãªã¢ãã« ãã¬ãŒãã³ã°ãå®çŸããŸãããã®ããã»ã¹ã¯ãã¢ãã«ã®äžè¬åãšããã©ãŒãã³ã¹ãåäžããããå€æ§ã§ãã©ã³ã¹ã®åãããã¬ãŒãã³ã° ããŒã¿ã»ãããäœæããäžã§äžå¯æ¬ ã§ããããŒã¿ã®ãã¬ã³ãã§ã¯ãè€æ°ã®ãœãŒã¹ããã®ããŒã¿ãçµ±åããŠåäžã®ããŒã¿ã»ããã«çµåããããå
æ¬çã§å€æ§ãªãã¬ãŒãã³ã° ããŒã¿ãäœæããŸãããã¬ã³ã ããã»ã¹ã¯ã次㮠2 ã€ã®ã¢ãããŒãã䜿çšããŠå®è£
ãããŸãã
ãªã³ã©ã€ã³: ãã¬ãŒãã³ã°äžã«ããŒã¿ãçµåããã
ãªãã©ã€ã³: ãã¬ãŒãã³ã°åã«ããŒã¿ã»ãããçµåããã
ããããã®ã¢ãããŒãã«ã¯ããã¬ãŒãã³ã° ããã»ã¹ã®ç¹å®ã®èŠä»¶ãšæçµçãªããŒã¿ã»ããã®äœ¿çšç®çã«å¿ããŠç°ãªãå©ç¹ããããŸãã
åæããŒã¿ã®çæ
ååŠçãã§ãŒãºã®è€éãªããã»ã¹ãçµããŸããããçŸåšãLLM éçºã®åéã§ã¯ããŒã¿ã®äžè¶³ãšãã倧ããªèª²é¡ã«çŽé¢ããŠããŸããLLM ãåŠç¿çšããŒã¿ã»ããã倧éã«å¿
èŠãšããã®ã¯ããã¥ãŒãã³ã°ãç®çãšããå Žåã§ãåæ§ã§ããããã®é£œããªãèŠæ±ã¯ãç¹å®ã®ãã¡ã€ã³ãèšèªã«ç¹åããããŒã¿ã®å
¥æå¯èœæ§ãäžåãããšãå°ãªããããŸããããã®åé¡ã«å¯ŸåŠãã
åæããŒã¿çæ (SDG: Synthetic Data Generation)
ã¯ãLLM ã掻çšããŠããã©ã€ãã·ãŒã®ä¿è·ãšããŒã¿ã®æçšæ§ã確ä¿ããªãããçŸå®ã®ããŒã¿ç¹æ§ãæš¡å£ãã人工çãªããŒã¿ã»ãããçæãã匷åãªã¢ãããŒãã§ãããã®ããã»ã¹ã§ã¯å€éš LLM ãµãŒãã¹ã䜿çšããŠãäºååŠç¿ããã¡ã€ã³ãã¥ãŒãã³ã°ãä»ã®ã¢ãã«ã®è©äŸ¡ã«äœ¿çšã§ãããé«å質ã§å€æ§ãã€æèçã«é¢é£æ§ã®é«ãããŒã¿ãçæããŸãã
SDG ã¯ãäœãªãœãŒã¹èšèªã« LLM ãé©å¿ã§ããããã«ããããšã§ããã¡ã€ã³ã®å°éæ§ããµããŒãããã¢ãã«éã®ç¥èã®æœåºãä¿é²ããã¢ãã«æ©èœãæ¡åŒµããæ±çšçãªããŒã«ã«ãªããŸããSDG ã¯ãç¹ã«å®ããŒã¿ãäžè¶³ããŠããããæ©å¯ã§ãã£ãããååŸããã®ãå°é£ã ã£ããããã·ããªãªã«ãããŠãéèŠãªååšãšãªã£ãŠããŸãã
å³ 2. NeMo Curator ã«ããäžè¬çãªåæããŒã¿çæã¢ãŒããã¯ãã£
åæããŒã¿ ãã€ãã©ã€ã³ã«ã¯ãçæãæ¹è©ããã£ã«ã¿ãŒã® 3 ã€ã®äž»èŠãªã¹ãããããããŸãã
çæ:
ããã³ãã ãšã³ãžãã¢ãªã³ã°ã䜿çšããŠãããŸããŸãªã¿ã¹ã¯çšã®åæããŒã¿ãçæããŸãã
Nemotron-4
ãäŸã«ãšããšãSDG ã¯ã5 çš®é¡ã®ç°ãªãã¿ã¹ã¯ (èªç±åœ¢åŒ QAãéžæåŒ QAãèšè¿°åŒèª²é¡ãã³ãŒãã£ã³ã°ãæ°åŠåé¡) ã®ãã¬ãŒãã³ã° ããŒã¿ãçæããããã«é©çšãããŸãã
æ¹è©:
LLM ReflectionãLLM-as-judgeãå ±é
¬ã¢ãã«æšè«ããã®ä»ã®ãšãŒãžã§ã³ããªã©ã®ææ³ã䜿çšããŠãåæããŒã¿ã®å質ãè©äŸ¡ããŸããè©äŸ¡çµæ㯠SDG LLM ãžã®ãã£ãŒãããã¯ãšããŠäœ¿çšããããè¯ãçµæãçæããããäœå質ããŒã¿ããã£ã«ã¿ãªã³ã°ãããããããšãã§ããŸãã代衚çãªäŸã¯
Nemotron-4-340B reward NIM
ã§ããããã¯ã5 ã€ã®äž»èŠãªå±æ§ãããªãã¡ Helpfulness (æçšæ§)ãCorrectness (æ£ç¢ºæ§)ãCoherence (äžè²«æ§)ãComplexity (è€éæ§)ãVerbosity (åé·æ§) ãéããŠããŒã¿ã®å質ãè©äŸ¡ããŸãããããã®å±æ§ã¹ã³ã¢ã«é©åãªéŸå€ãèšå®ããããšã§ããã£ã«ã¿ãªã³ã°åŠçã§ã¯ãäœå質ãŸãã¯äžé©åãªã³ã³ãã³ããé€å€ããªãããé«å質ãªåæããŒã¿ã®ã¿ãä¿æãããããã«ãªããŸãã
ãã£ã«ã¿ãŒ:
éè€æé€ã PII ã®åé€ãªã©ã®ã¹ãããã§ãSDG ããŒã¿ã®å質ãããã«åäžãããŸãã
ãã ããSDG ããã¹ãŠã®ã±ãŒã¹ã«é©ããŠããããã§ã¯ãªãããšã«æ³šæããŠãã ãããå€éš LLM ã«ããå¹»èŠã¯ãä¿¡é Œæ§ã®äœãæ
å ±ããããããããŒã¿ã®æŽåæ§ãæãªãå¯èœæ§ããããŸããå ããŠãçæãããããŒã¿ã®ååžãã¿ãŒã²ããã®ååžãšäžèŽããªãå¯èœæ§ããããçŸå®äžçã®ããã©ãŒãã³ã¹ã«æªåœ±é¿ãåãŒãå¯èœæ§ããããŸãããã®ãããªå Žåã¯ãSDG ã䜿çšããããšã§ãã·ã¹ãã ã®å¹çæ§ãæ¹åããã©ãããããããäœäžãããå¯èœæ§ããããŸãã
ãœããªã³ AI LLM æ§ç¯ã®ããã®ããŒã¿åŠç
ãªãŒãã³ãœãŒã¹ LLM ã¯è±èªã§ã¯åªããŠããŸããããã®ä»ã®èšèªãç¹ã«æ±åã¢ãžã¢ã®èšèªã§ã¯èŠæŠããŠããŸãããã®äž»ãªåå ã¯ããããã®èšèªã®ãã¬ãŒãã³ã° ããŒã¿ã®äžè¶³ãçŸå°ã®æåã«å¯Ÿããç解ãéãããŠããããšãç¬èªã®èšèªæ§é ãšè¡šçŸãæããã®ã«ååãªããŒã¯ã³ãäžè¶³ããŠããããšã§ãã
è±èªå以å€ã®åœã
ã®äŒæ¥ã¯ã顧客ã®ããŒãºãå®å
šã«æºãããããæ±çšã¢ãã«ã«ãšã©ãŸãããçŸå°ã®èšèªã®ãã¥ã¢ã³ã¹ãæããããã«ã¢ãã«ãã«ã¹ã¿ãã€ãºããã·ãŒã ã¬ã¹ã§ã€ã³ãã¯ãã®ãã顧客äœéšã確ä¿ããå¿
èŠããããŸããäŸãã°ãViettel Solutions ã¯ãNeMo Curator ã䜿çšããŠã
é«å質ãªãããã èªããŒã¿
ãåŠçãã粟床ã 10% åäžãããããŒã¿ã»ããã®ãµã€ãºã 60% åæžãããã¬ãŒãã³ã°ã 3 åé«éåããŸããã
ãã®ãŠãŒã¹ ã±ãŒã¹ã®äž»ãªæé ã¯æ¬¡ã®ãšããã§ãã
ããã€ãã®ãããã èªããã³å€èšèªããŒã¿ã»ãã (Wikipediaããããã èªãã¥ãŒã¹ ã³ãŒãã¹ã
OSCAR
ãC4) ãããŠã³ããŒããã倧èŠæš¡ãªããŒã¿ã»ãããå¹ççã«åŠçããããã«ãParquet ã«å€æããŸãã
è€æ°ã®ããŒã¿ã»ãããçµåãæšæºåããåäžã®ããŒã¿ã»ããã«ã·ã£ãŒãããŸãã
Unicode ã®åãã©ãŒããããå³å¯ãªéè€æé€ãå質ãã£ã«ã¿ãªã³ã° (ãã¥ãŒãªã¹ãã£ãã¯ããã³åé¡åšããŒã¹) ãé©çšããŸãã
詳现ã¯ããã®
ãã¥ãŒããªã¢ã«
ãåç
§ããŠãã ããã
NVIDIA NeMo Curator ã«ããããŒã¿ã®å質åäž
ãããŸã§ãLLM ã®ç²ŸåºŠåäžã«ãããããŒã¿å質ã®éèŠæ§ã«ã€ããŠããããŠããŸããŸãªããŒã¿åŠçææ³ã«ã€ããŠèª¬æããŠããŸãããéçºè
ã¯ã
NeMo Curator
ãä»ããŠçŽæ¥ãããã®ææ³ãè©Šãããšãã§ããŸããNeMo Curator ã¯ãã«ã¹ã¿ãã€ãºå¯èœãªã¢ãžã¥ãŒã«åŒã®ã€ã³ã¿ãŒãã§ã€ã¹ãæäŸããŠãããããéçºè
ã¯ãããããŒã¹ã«ç°¡åã«æ§ç¯ããããšãã§ããŸãã
NeMo Curator ã¯ãcuDFãcuMLãcuGraphãDask ãªã©ã® NVIDIA RAPIDS GPU ã§é«éåãããã©ã€ãã©ãªã䜿çšããŠããã«ãããŒãããã«ã GPU ã«ãããã¯ãŒã¯ããŒããé«éåããå¿
èŠã«å¿ããŠã¹ã±ãŒã«ãããåŠçæéãåæžã§ããŸããäŸãã°ãGPU ã䜿çšããŠããŒã¿åŠçã®ãã€ãã©ã€ã³ãé«éåããããšã§ã
Zyphra ã¯ç·ææã³ã¹ã (TCO)
ã 50% åæžããããŒã¿ã 10 åé«éã«åŠçããŠããŸã (3 é±éãã 2 æ¥é)ã
ãŸãã¯ã
NVIDIA/NeMo-Curator GitHub ãªããžããª
ãšã以äžã®ããŸããŸãªããŒã¿ ãã¥ã¬ãŒã·ã§ã³ã®ã¯ãŒã¯ãããŒã網çŸ
ããŠãã
ãã¥ãŒããªã¢ã«
ãã芧ãã ããã
äºååŠç¿ã®ããã®ããŒã¿åŠç
ã«ã¹ã¿ãã€ãºã®ããã®ããŒã¿åŠç
SDG ãã€ãã©ã€ã³
ãŸãã
NeMo ãã¬ãŒã ã¯ãŒã¯ ã³ã³ãããŒ
ãä»ããŠã¢ã¯ã»ã¹ãã
NVIDIA AI Enterprise
ã©ã€ã»ã³ã¹ã§ãšã³ã¿ãŒãã©ã€ãº ãµããŒãããªã¯ãšã¹ãããããšãã§ããŸãã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
ã»ãã¥ã¢ãªãšã³ã¿ãŒãã©ã€ãº ããŒã¿ã§ã«ã¹ã¿ã LLM ã¢ããªãæ°åã§æ§ç¯ãã
GTC ã»ãã·ã§ã³:
LLM ã€ã³ãã©ã®æ§ç¯ããã¬ãŒãã³ã°é床ã®é«éåãçæ AI ã€ãããŒã·ã§ã³ã®æšé²ã®ããã®ãšã³ãããŒãšã³ãã®ãœãªã¥ãŒã·ã§ã³ã®èšèš (Aivres æäŸ)
NGC ã³ã³ãããŒ:
genai-llm-playground
NGC ã³ã³ãããŒ:
rag-application-query-decomposition-agent
ãŠã§ãããŒ:
AI ã«ããå»çã¯ãŒã¯ãããŒã®å€é©: CLLM ãæ·±ãæãäžãã |
https://developer.nvidia.com/blog/expanding-ai-agent-interface-options-with-2d-and-3d-digital-human-avatars/ | Expanding AI Agent Interface Options with 2D and 3D Digital Human Avatars | When interfacing with
generative AI
applications, users have multiple communication options: text, voice, or digital avatars.
Traditional chatbot or copilot applications have text interfaces where users type in queries and receive text-based responses. For hands-free communication, speech AI technologies like
automatic speech recognition
(ASR) and
text-to-speech
(TTS) facilitate verbal interactions, ideal for scenarios like phone-based customer service. Moreover, combining digital avatars with speech capabilities provides a more dynamic interface for users to engage visually with the application. According to Gartner, by 2028, 45% of organizations with more than 500 employees will leverage employee AI avatars to expand the capacity of human capital.
1
Digital avatars can vary widely in style: some use cases benefit from photorealistic 3D or 2D avatars, while others work better with a stylized or cartoonish avatar.
3D Avatars
offer fully immersive experiences, showcasing lifelike movements and photorealism. Developing these avatars requires specialized software and technical expertise, as they involve intricate body animations and high-quality renderings.
2D Avatars
are quicker to develop and ideal for web-embedded solutions. They offer a streamlined approach to creating interactive AI, often requiring artists for design and animation but less intensive in terms of technical resources.
To kickstart your creation of a photo-realistic digital human, the
NVIDIA AI Blueprint on digital humans for customer service
can be tailored for various use cases. This functionality is now included with support for the NVIDIA Maxine
Audio2Face-2D
NIM microservice. Additionally, the blueprint now offers rendering flexibility, so 3D avatar developers can use
Unreal Engine
.
How to add a talking digital avatar to your agent application
In the AI Blueprint for digital humans, a user interacts with an
AI agent
that leverages
NVIDIA ACE
technology (Figure 1).
Figure 1. Architecture diagram for the NVIDIA AI Blueprint for digital humans
The audio input from the user is sent to the ACE agent which orchestrates the communication between various NIM microservices. The ACE agent uses the
Riva Parakeet NIM
to convert the audio to text, which is then processed by a RAG pipeline. The RAG pipeline uses the NVIDIA NeMo Retriever
embedding
and
reranking
NIM microservices, and an
LLM NIM
, to respond with relevant context from stored documents.
Finally, the response is converted back to speech via Riva TTS, animating the digital human using the Audio2Face-3D NIM or Audio2Face-2D NIM.
Considerations when designing your AI agent application
In global enterprises, communication barriers across languages can slow down operations. AI-powered avatars with multilingual capabilities communicate across languages effortlessly. The digital human AI Blueprint provides conversational AI capabilities that simulate human interactions and accommodate users' speech styles and languages through Riva ASR and neural machine translation (NMT), along with intelligent interruption and barge-in support.
One of the key benefits of digital human AI agents is their ability to function as "always-on" resources for employees and customers alike. RAG-powered AI agents continuously learn from interactions and improve over time, providing more accurate responses and better user experiences.
For enterprises considering digital human interfaces, choosing the right avatar and rendering option depends on the use case and customization preferences.
Use Case
: 3D avatars are ideal for highly immersive use cases like physical stores, kiosks, or primarily one-to-one interactions, while 2D avatars are effective for web or mobile conversational AI use cases.
Development and Customization Preferences
: Teams with 3D and animation expertise can leverage their skillset to create an immersive and ultra-realistic avatar, while teams looking to iterate and customize quickly can benefit from the simplicity of 2D avatars.
Scaling Considerations:
Scaling is an important consideration when evaluating avatars and corresponding rendering options. Stream throughput, especially for 3D avatars, is highly dependent on the choice and quality of the character asset used and the desired output resolution, and the rendering option of choice (Omniverse Renderer or Unreal Engine) can play a critical role in determining the per-stream compute footprint.
NVIDIA Audio2Face-2D allows creation of lifelike 2D avatars from just a portrait image and voice input. Easy and simple configurations allow developers to quickly iterate and produce target avatars and animations for their digital human use cases. With real-time output and cloud-native deployment, 2D digital humans are ideal for interactive use cases and streaming avatars for interactive web-embedded solutions.
For example, enterprises looking to deploy AI agents across multiple devices and inserting digital humans into web- or mobile-first customer journeys, can benefit from the reduced hardware demands of 2D avatars.
3D photorealistic avatars provide an unmatched immersive experience for use cases demanding highly empathetic user engagement. NVIDIA Audio2Face-3D and Animation NIM microservices animate a 3D character by generating blendshapes along with subtle head and body animation to create an immersive, photorealistic avatar. The digital human AI Blueprint now supports two rendering options for 3D avatars, Omniverse Renderer and Unreal Engine Renderer, providing developers the flexibility to integrate the rendering option of their choice.
To explore how digital humans can enhance your enterprise, visit the
NVIDIA API catalog
to learn about the different avatar options.
Getting started with digital avatars
For hands-on development with Audio2Face-2D and Unreal Engine NIM microservices,
apply for ACE Early Access
or dive into the digital human AI Blueprint
technical blog
to learn how you can add digital human interfaces to personalize chatbot applications.
1
Gartner®, Hype Cycle for the Future of Work, 2024 by Tori Paulman, Emily Rose McRae, etc., July 2024
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved. | https://developer.nvidia.com/ja-jp/blog/expanding-ai-agent-interface-options-with-2d-and-3d-digital-human-avatars/ | 2D ãš 3D ã®ããžã¿ã« ãã¥ãŒãã³ ã¢ãã¿ãŒã«ãã AI ãšãŒãžã§ã³ã ã€ã³ã¿ãŒãã§ã€ã¹ ãªãã·ã§ã³ã®æ¡åŒµ | Reading Time:
2
minutes
ãŠãŒã¶ãŒã
çæ AI
ã¢ããªã±ãŒã·ã§ã³ã䜿ã£ãŠããåãããéã«ã¯ãããã¹ããé³å£°ãããžã¿ã« ã¢ãã¿ãŒãªã©è€æ°ã®ã³ãã¥ãã±ãŒã·ã§ã³ ãªãã·ã§ã³ãå©çšããããšãã§ããŸãã
åŸæ¥ã®ãã£ããããããã³ãã€ããã ã¢ããªã±ãŒã·ã§ã³ã§ã¯ããŠãŒã¶ãŒãåãåãããå
¥åããããã¹ãããŒã¹ã®å¿çãåä¿¡ããããã¹ã ã€ã³ã¿ãŒãã§ã€ã¹ã䜿çšããŠããŸãããã³ãºããªãŒã®ã³ãã¥ãã±ãŒã·ã§ã³ã§ã¯ã
èªåé³å£°èªè
(ASR: Automatic Speech Recognition) ã
é³å£°åæ
(TTS: Text-To-Speech) ãªã©ã®é³å£° AI æè¡ã«ãããé»è©±ã䜿çšããã«ã¹ã¿ã㌠ãµãŒãã¹ãªã©ã®ã·ããªãªã«æé©ãªå£é ã«ããããåãã容æã«ãªããŸããããã«ãããžã¿ã« ã¢ãã¿ãŒã«é³å£°æ©èœãæãããããšã§ããŠãŒã¶ãŒãã¢ããªã±ãŒã·ã§ã³ãèŠèŠçã«äœ¿çšã§ããããããã€ãããã¯ãªã€ã³ã¿ãŒãã§ã€ã¹ãæäŸã§ããŸããGartner ã«ãããšã2028 幎ãŸã§ã«ãåŸæ¥å¡ 500 å以äžã®çµç¹ã® 45% ãã人çè³æ¬ã®èœåæ¡å€§ã®ããã«ã AI ã¢ãã¿ãŒã®åŸæ¥å¡ã掻çšããããã«ãªãããã§ãã
1
ããžã¿ã« ã¢ãã¿ãŒã®ã¹ã¿ã€ã«ã¯æ§ã
ã§ããã©ããªã¢ãªã¹ãã£ãã¯ãª 3D ãŸã㯠2D ã®ã¢ãã¿ãŒãé©ããŠããã±ãŒã¹ãããã°ãå®ååãããã¢ãã¿ãŒã挫ç»ã®ãããªã¢ãã¿ãŒã®æ¹ãé©ããŠããã±ãŒã¹ããããŸãã
3D ã¢ãã¿ãŒ
ã¯ããªã¢ã«ãªåããšåå®æ§ãåçŸããå®å
šãªæ²¡å
¥äœéšãæäŸããŸãããã®ãããªã¢ãã¿ãŒã®éçºã«ã¯ãè€éãªããã£ãŒ ã¢ãã¡ãŒã·ã§ã³ãé«å質ã®ã¬ã³ããªã³ã°ãå¿
èŠãšãªããããå°éçãªãœãããŠã§ã¢ãæè¡çãªå°éç¥èãå¿
èŠã«ãªããŸãã
2D ã¢ãã¿ãŒ
ã¯éçºãè¿
éã§ãWeb ã«çµã¿èŸŒã¿ãœãªã¥ãŒã·ã§ã³ã«æé©ã§ããã€ã³ã¿ã©ã¯ãã£ã㪠AI ã®äœæã«åççãªã¢ãããŒããæäŸãããã¶ã€ã³ãã¢ãã¡ãŒã·ã§ã³ã«ã¯ã¢ãŒãã£ã¹ããå¿
èŠã«ãªãããšãå€ãã§ãããæè¡çãªãªãœãŒã¹ã®é¢ã¯ããã»ã©è² æ
ã«ãªããŸããã
ãã©ããªã¢ãªã¹ãã£ãã¯ãªããžã¿ã« ãã¥ãŒãã³ã®äœæãå§ããã«ãããã
ã«ã¹ã¿ã㌠ãµãŒãã¹åãããžã¿ã« ãã¥ãŒãã³ã® NVIDIA AI Blueprint
ã¯ãããŸããŸãªãŠãŒã¹ ã±ãŒã¹ã«åãããŠã«ã¹ã¿ãã€ãºããããšãã§ããŸãããã®æ©èœã¯çŸåšãNVIDIA Maxine
Audio2Face-2D
NIM ãã€ã¯ããµãŒãã¹ã®ãµããŒãã«å«ãŸããŠããŸããããã«ããã® Blueprint ã§ã¯ã3D ã¢ãã¿ãŒéçºè
ã
Unreal Engine
ã䜿çšã§ãããããã¬ã³ããªã³ã°ã«æè»æ§ãæãããŠããŸãã
ãšãŒãžã§ã³ã ã¢ããªã±ãŒã·ã§ã³ã«äŒè©±ããããžã¿ã« ã¢ãã¿ãŒãè¿œå ããæ¹æ³
ããžã¿ã« ãã¥ãŒãã³åã AI Blueprint ã§ã¯ããŠãŒã¶ãŒã
NVIDIA ACE
æè¡ã掻çšãã
AI ãšãŒãžã§ã³ã
ãšå¯Ÿè©±ããŸã (å³ 1)ã
å³ 1. ããžã¿ã« ãã¥ãŒãã³åã NVIDIA AI Blueprint ã®ã¢ãŒããã¯ãã£
ãŠãŒã¶ãŒã«ããé³å£°å
¥åã¯ãããŸããŸãª NIM ãã€ã¯ããµãŒãã¹éã®éä¿¡ã調æŽãã ACE ãšãŒãžã§ã³ãã«éä¿¡ãããŸããACE ãšãŒãžã§ã³ãã¯ã
Riva Parakeet NIM
ã䜿çšããŠé³å£°ãããã¹ãã«å€æãããã®ããã¹ã㯠RAG ãã€ãã©ã€ã³ã§åŠçãããŸããRAG ãã€ãã©ã€ã³ã§ã¯ãNIM ãã€ã¯ããµãŒãã¹ã®
åã蟌ã¿
ãš
ãªã©ã³ã¯
ãè¡ã NVIDIA NeMo Retriever ãš
LLM NIM
ã䜿çšããŠãä¿åãããããã¥ã¡ã³ãããé¢é£ããã³ã³ããã¹ããçšããŠå¿çããŸãã
æåŸã«ãRiva TTS ãä»ããŠãã®å¿çãé³å£°ã«å€æããAudio2Face-3D NIM ãŸã㯠Audio2Face-2D NIM ã䜿çšããŠããžã¿ã« ãã¥ãŒãã³ãã¢ãã¡ãŒã·ã§ã³åããŸãã
AI ãšãŒãžã§ã³ã ã¢ããªã±ãŒã·ã§ã³ãèšèšããéã«èæ
®ãã¹ããã€ã³ã
ã°ããŒãã«äŒæ¥ã§ã¯ãèšèªã®å£ã«ããã³ãã¥ãã±ãŒã·ã§ã³ã®é害ãæ¥åã®åŠšããšãªãããšããããŸããå€èšèªæ©èœãåãã AI æèŒã¢ãã¿ãŒã䜿çšããã°ãèšèªã®å£ãè¶
ããåæ»ãªã³ãã¥ãã±ãŒã·ã§ã³ãåãããšãã§ããŸããããžã¿ã« ãã¥ãŒãã³ AI Blueprint ã¯ãRiva ASR ããã¥ãŒã©ã«æ©æ¢°ç¿»èš³ (NMT: Neural Machine Translation) ã«å ããã€ã³ããªãžã§ã³ããªå²ã蟌ã¿ãããŒãžã€ã³æ©èœãåãããŠãŒã¶ãŒã®è©±ãæ¹ãèšèªã«æè»ã«å¯Ÿå¿ã§ããã人éããã察話å AI ãå®çŸããŸãã
ããžã¿ã« ãã¥ãŒãã³ AI ãšãŒãžã§ã³ãã®äž»ãªå©ç¹ã® 1 ã€ã¯ãåŸæ¥å¡ãšé¡§å®¢ã®äž¡è
ã«ãšã£ãŠãåžžæ皌åããããªãœãŒã¹ãšããŠæ©èœã§ããããšã§ããRAG ãæèŒãã AI ãšãŒãžã§ã³ãã¯ããããšãããç¶ç¶çã«åŠç¿ããæéã®çµéãšãšãã«æ¹åããŠãããããããæ£ç¢ºãªå¯Ÿå¿ãšããåªãããŠãŒã¶ãŒäœéšãæäŸããããšãã§ããŸãã
ããžã¿ã« ãã¥ãŒãã³ ã€ã³ã¿ãŒãã§ã€ã¹ãæ€èšããŠããäŒæ¥ã«ãšã£ãŠãé©åãªã¢ãã¿ãŒãšã¬ã³ããªã³ã° ãªãã·ã§ã³ã®éžæã¯ããŠãŒã¹ ã±ãŒã¹ãã«ã¹ã¿ãã€ãºèšå®ã«äŸåããŸãã
ãŠãŒã¹ ã±ãŒã¹
: 3D ã¢ãã¿ãŒã¯ãå®åºèãããªã¹ã¯ (ç¡äººç«¯æ«) ãªã©ã䞻㫠1察 1 ã®ãããšãã®ãããªãéåžžã«æ²¡å
¥æã®é«ããŠãŒã¹ ã±ãŒã¹ã«æé©ã§ããã2D ã¢ãã¿ãŒã¯ãWeb ãã¢ãã€ã«ã®å¯Ÿè©±å AI ãŠãŒã¹ ã±ãŒã¹ã«å¹æçã§ãã
éçºãšã«ã¹ã¿ãã€ãºã®èšå®
: 3D ãã¢ãã¡ãŒã·ã§ã³ã®å°éç¥èãæã€ããŒã ã¯ããã®ã¹ãã«ã掻çšããŠæ²¡å
¥æã®ããè¶
ãªã¢ã«ãªã¢ãã¿ãŒãäœæã§ããŸããäžæ¹ãå埩äœæ¥ãã«ã¹ã¿ãã€ãºãè¿
éã«è¡ãããããŒã ã«ã¯ãã·ã³ãã«ãª 2D ã¢ãã¿ãŒãæå¹ã§ãã
ã¹ã±ãŒãªã³ã°ã®èæ
®ãã¹ããã€ã³ã
: ã¢ãã¿ãŒãšå¯Ÿå¿ããã¬ã³ããªã³ã° ãªãã·ã§ã³ãè©äŸ¡ããéã«ãã¹ã±ãŒãªã³ã°ã¯èæ
®ãã¹ãéèŠãªãã€ã³ãã§ããã¹ããªãŒã ã®ã¹ã«ãŒãããã¯ãç¹ã« 3D ã¢ãã¿ãŒã®å Žåã䜿çšãããã£ã©ã¯ã¿ãŒ ã¢ã»ããã®éžæãšå質ã«ãã£ãŠå€§ããç°ãªããŸããåžæããåºå解å床ãéžæããã¬ã³ããªã³ã° ãªãã·ã§ã³ (Omniverse Renderer ãŸã㯠Unreal Engine) ã¯ãã¹ããªãŒã ãããã®èšç®ãããããªã³ãã決å®ããäžã§éèŠãªåœ¹å²ãæãããŸãã
NVIDIA Audio2Face-2D ã§ã¯ãé¡åçãšé³å£°å
¥åã ãã§ãªã¢ã«ãª 2D ã¢ãã¿ãŒãäœæã§ããŸããç°¡åã§ã·ã³ãã«ãªæ§æã®ãããéçºè
ã¯ããžã¿ã« ãã¥ãŒãã³ã®ãŠãŒã¹ ã±ãŒã¹ã«åãããã¢ãã¿ãŒãã¢ãã¡ãŒã·ã§ã³ãè¿
éã«ç¹°ãè¿ãäœæã§ããŸãããªã¢ã«ã¿ã€ã åºåãšã¯ã©ãŠã ãã€ãã£ãã®ãããã€ã«ããã2D ããžã¿ã« ãã¥ãŒãã³ã¯ãã€ã³ã¿ã©ã¯ãã£ããªãŠãŒã¹ ã±ãŒã¹ããã€ã³ã¿ã©ã¯ãã£ã㪠Web çµã¿èŸŒã¿ãœãªã¥ãŒã·ã§ã³åãã®ã¹ããªãŒãã³ã° ã¢ãã¿ãŒã«æé©ã§ãã
ããšãã°ãè€æ°ã®ããã€ã¹ã« AI ãšãŒãžã§ã³ãããããã€ããWeb ãŸãã¯ã¢ãã€ã« ãã¡ãŒã¹ãã®ã«ã¹ã¿ã㌠ãžã£ãŒããŒã«ããžã¿ã« ãã¥ãŒãã³ãå°å
¥ããããšããŠããäŒæ¥ã«ã¯ã2D ã¢ãã¿ãŒã¯ããŒããŠã§ã¢èŠä»¶ã軜æžããã®ã§ã¡ãªããããããŸãã
3D ã®ãã©ããªã¢ãªã¹ãã£ãã¯ãªã¢ãã¿ãŒã¯ãé«ãå
±æãèŠæ±ããããŠãŒã¶ãŒ ãšã³ã²ãŒãžã¡ã³ããå¿
èŠãšãããŠãŒã¹ ã±ãŒã¹ã«ãæ¯é¡ã®ãªã没å
¥äœéšãæäŸããŸããNVIDIA Audio2Face-3D ãšã¢ãã¡ãŒã·ã§ã³ NIM ãã€ã¯ããµãŒãã¹ã¯ãç¹çŽ°ãªé éšãšèº«äœã®ã¢ãã¡ãŒã·ã§ã³ãšãšãã«ãã¬ã³ãã·ã§ã€ããçæãã没å
¥æã®ãããã©ããªã¢ãªã¹ãã£ãã¯ãªã¢ãã¿ãŒãäœæããããšã§ã3D ãã£ã©ã¯ã¿ãŒãã¢ãã¡ãŒã·ã§ã³åããŸããããžã¿ã« ãã¥ãŒãã³ AI Blueprint ã¯ã3D ã¢ãã¿ãŒã®ã¬ã³ããªã³ã° ãªãã·ã§ã³ããšããŠãOmniverse ã¬ã³ãã©ãŒãš Unreal-Engine ã¬ã³ãã©ãŒããµããŒãããŠãããéçºè
ãéžæããã¬ã³ããªã³ã° ãªãã·ã§ã³ãæè»ã«çµ±åã§ããããã«ãªããŸããã
ããžã¿ã« ãã¥ãŒãã³ãäŒæ¥ã匷åããæ¹æ³ã«ã€ããŠã¯ã
NVIDIA API ã«ã¿ãã°
ã«ã¢ã¯ã»ã¹ããŠãããŸããŸãªã¢ãã¿ãŒã®ãªãã·ã§ã³ãã芧ãã ããã
ããžã¿ã« ã¢ãã¿ãŒãå§ãã
Audio2Face-2D ãš Unreal Engine NIM ãã€ã¯ããµãŒãã¹ã䜿çšããå®è·µçãªéçºã«ã€ããŠã¯ã
ACE æ©æã¢ã¯ã»ã¹ã«ç³ã蟌ã
ããããžã¿ã« ãã¥ãŒãã³ AI Blueprint ã®
æè¡ããã°
ã«ã¢ã¯ã»ã¹ããŠããã£ããããã ã¢ããªã±ãŒã·ã§ã³ãããŒãœãã©ã€ãºããããã«ããžã¿ã« ãã¥ãŒãã³ ã€ã³ã¿ãŒãã§ã€ã¹ãè¿œå ããæ¹æ³ã«ã€ããŠåŠã¶ããšãã§ããŸãã
1
Gartner®, Hype Cycle for the Future of Work, 2024 by Tori Paulman, Emily Rose McRae, etc., July 2024
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
Enhancing the Digital Human Experience with Cloud Microservices Accelerated by Generative AI
GTC ã»ãã·ã§ã³:
Build a World of Interactive Avatars Based on NVIDIA Omniverse, AIGC, and LLM
NGC ã³ã³ãããŒ:
ACE ãšãŒãžã§ã³ã ãµã³ãã« ããã³ããšã³ã
SDK:
NVIDIA Tokkio
ãŠã§ãããŒ:
How Telcos Transform Customer Experiences with Conversational AI |
https://developer.nvidia.com/blog/ai-ran-goes-live-and-unlocks-a-new-ai-opportunity-for-telcos/ | AI-RAN Goes Live and Unlocks a New AI Opportunity for Telcos | AI is transforming industries, enterprises, and consumer experiences in new ways. Generative AI models are moving towards reasoning,
agentic AI
is enabling new outcome-oriented workflows and
physical AI
is enabling endpoints like cameras, robots, drones, and cars to make decisions and interact in real time.
The common glue between all these use cases is the need for pervasive, reliable, secure, and super-fast connectivity.
Telecommunication networks must prepare for this new kind of AI traffic, which can come directly through the fronthaul wireless access network or backhauled from the public or private cloud as a completely standalone AI inferencing traffic generated by enterprise applications.
Local wireless infrastructure offers an ideal place to process AI inferencing. This is where a new approach to telco networks, AI radio access network (
AI-RAN
), stands out.
Traditional CPU or ASIC-based RAN systems are designed only for RAN use and cannot process AI traffic today. AI-RAN enables a common GPU-based infrastructure that can run both wireless and AI workloads concurrently, turning networks from single-purpose to multi-purpose infrastructures and turning sites from cost-centers to revenue sources.
With a strategic investment in the right kind of technology, telcos can leap forward to become the AI grid that facilitates the creation, distribution, and consumption of AI across industries, consumers, and enterprises. This moment in time presents a massive opportunity for telcos to build a fabric for AI training (creation) and AI inferencing (distribution) by repurposing their central and distributed infrastructures.
SoftBank and NVIDIA fast-forward AI-RAN commercialization
SoftBank has turned the AI-RAN vision into reality, with its
successful outdoor field trial
in Fujisawa City, Kanagawa, Japan, where NVIDIA-accelerated hardware and
NVIDIA Aerial
software served as the technical foundation.
This achievement marks multiple steps forward for AI-RAN commercialization and provides real proof points addressing industry requirements on technology feasibility, performance, and monetization:
World's first outdoor 5G AI-RAN field trial running on an NVIDIA-accelerated computing platform. This is an end-to-end solution based on a full-stack, virtual 5G RAN software integrated with 5G core.
Carrier-grade virtual RAN performance achieved.
AI and RAN multi-tenancy and orchestration achieved.
Energy efficiency and economic benefits validated compared to existing benchmarks.
A new solution to unlock AI marketplace integrated on an AI-RAN infrastructure.
Real-world AI applications showcased, running on an AI-RAN network.
Above all, SoftBank aims to commercially release their own AI-RAN product for worldwide deployment in 2026.
To help other mobile network operators get started on their AI-RAN journey now, SoftBank is also planning to offer a reference kit comprising the hardware and software elements required to trial AI-RAN in a fast and easy way.
End-to-end AI-RAN solution and field results
SoftBank developed their AI-RAN solution by integrating hardware and software components from NVIDIA and ecosystem partners and hardening them to meet carrier-grade requirements. Together, the solution enables a full 5G vRAN stack that is 100% software-defined, running on NVIDIA GH200 (CPU+GPU), NVIDIA Bluefield-3 (NIC/DPU), and Spectrum-X for fronthaul and backhaul networking. It integrates with 20 radio units and a 5G core network and connects 100 mobile UEs.
The core software stack includes the following components:
SoftBank-developed and optimized 5G RAN Layer 1 functions such as channel mapping, channel estimation, modulation, and forward-error-correction, using
NVIDIA Aerial CUDA-Accelerated-RAN
libraries
Fujitsu software for Layer 2 functions
Red Hat's OpenShift Container Platform (OCP) as the container virtualization layer, enabling different types of applications to run on the same underlying GPU computing infrastructure
A SoftBank-developed E2E AI and RAN orchestrator, to enable seamless provisioning of RAN and AI workloads based on demand and available capacity
The underlying hardware is the
NVIDIA GH200 Grace Hopper Superchip
, which can be used in various configurations from distributed to centralized RAN scenarios. This implementation uses multiple GH200 servers in a single rack, serving AI and RAN workloads concurrently, for an aggregated-RAN scenario. This is comparable to deploying multiple traditional RAN base stations.
In this pilot, each GH200 server was able to process 20 5G cells using 100-MHz bandwidth, when used in RAN-only mode. For each cell, 1.3 Gbps of peak downlink performance was achieved in ideal conditions, and 816Mbps was demonstrated with carrier-grade availability in the outdoor deployment.
AI-RAN multi-tenancy achieved
One of the first principles of AI-RAN technology is to be able to run RAN and AI workloads concurrently and without compromising carrier-grade performance. This multi-tenancy can be either in time or space: dividing the resources based on time of day or based on percentage of compute. This also implies the need for an orchestrator that can provision, de-provision, or shift workloads seamlessly based on available capacity.
At the Fujisawa City trial, concurrent AI and RAN processing was successfully demonstrated over GH200 based on static allocation of resources between RAN and AI workloads (Figure 1).
Figure 1. AI and RAN concurrency and total GPU utilization
Each NVIDIA GH200 server constitutes multiple MIGs (Multi-Instance GPU), that enable a single GPU to be divided into multiple isolated GPU instances. Each instance has its own dedicated resources, such as memory, cache, and compute cores, and can operate independently.
The SoftBank orchestrator intelligently assigns whole GPUs or some MIGs within a GPU to run AI and some to run RAN workloads and switches them dynamically when needed. It is also possible to statically allocate a certain percentage of compute for RAN and AI, for example, 60% for RAN and 40% for AI instead of demand-based allocation.
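The static split described above can be illustrated with a simple allocation model. The sketch below is a hypothetical, minimal Python example of how an orchestrator might divide a pool of GPU instances (for example, MIG slices) between RAN and AI workloads using a fixed percentage; it is not SoftBank's orchestrator or a real MIG API.

```python
# Hypothetical sketch: statically splitting GPU instances (e.g., MIG slices)
# between RAN and AI workloads. Not an actual orchestrator or NVIDIA API.

def split_instances(total_instances: int, ran_share: float = 0.6):
    """Return (ran_instances, ai_instances) for a static percentage split."""
    ran = round(total_instances * ran_share)
    ai = total_instances - ran
    return ran, ai

# Example: 7 MIG instances per GPU, 60% reserved for RAN and 40% for AI.
ran_slices, ai_slices = split_instances(7, ran_share=0.6)
print(f"RAN slices: {ran_slices}, AI slices: {ai_slices}")  # RAN slices: 4, AI slices: 3
```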
The goal is to maximize capacity utilization. With AI-RAN, telcos can achieve almost 100% utilization compared to 33% capacity utilization for typical RAN-only networks. This is an increase of up to 3x while still catering to peak RAN loads, thanks to dynamic orchestration and prioritization policies.
Enabling an AI-RAN marketplace
With a new capacity for AI computing now available on distributed AI-RAN infrastructure, the question arises of how to bring AI demand to this AI computing supply.
To solve this, SoftBank used a serverless API powered by NVIDIA AI Enterprise to deploy and manage AI workloads on AI-RAN, with security, scale, and reliability. The NVIDIA AI Enterprise serverless API is hosted on the AI-RAN infrastructure and integrated with the SoftBank E2E AI-RAN orchestrator. It connects to any public or private cloud running the same API, to dispatch external AI inferencing jobs to the AI-RAN server when compute is available (Figure 2).
Figure 2. AI marketplace solution integrated with SoftBank AI-RAN
This solution enables an AI marketplace, helping SoftBank deliver localized, low-latency, secured inferencing services. It also demonstrated the importance of AI-RAN in helping telcos become the AI distribution grid, particularly for external AI inferencing jobs, and opened a new revenue opportunity.
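As a rough illustration of the dispatching idea (not the actual NVIDIA AI Enterprise serverless API), the following sketch shows how an inference job might be routed to the AI-RAN site only when spare capacity exists, and otherwise fall back to a central cloud. All function and field names are hypothetical.

```python
# Hypothetical routing logic for an AI marketplace on AI-RAN.
# Names and thresholds are illustrative only.

def route_job(job_gpu_seconds: float, edge_free_gpu_seconds: float) -> str:
    """Send the job to the edge AI-RAN site if it has spare capacity,
    otherwise fall back to the central cloud."""
    if job_gpu_seconds <= edge_free_gpu_seconds:
        return "ai-ran-edge"   # low latency, data stays local
    return "central-cloud"     # overflow path

print(route_job(job_gpu_seconds=30, edge_free_gpu_seconds=120))   # ai-ran-edge
print(route_job(job_gpu_seconds=300, edge_free_gpu_seconds=120))  # central-cloud
```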
AI-RAN applications showcased
In this outdoor trial, new edge AI applications developed by SoftBank were demonstrated over the live AI-RAN network:
Remote support of autonomous vehicles over 5G
Factory multi-modal AI applications
Robotics applications
Remote support of autonomous vehicles over 5G
The key requirements of the social implementation of autonomous driving are vehicle safety and reducing operational costs.
At the Fujisawa City trial, SoftBank demonstrated an autonomous vehicle, relaying its front camera video using 5G to an AI-based remote support service hosted on the AI-RAN server. Multi-modal AI models analyzed the video stream, did risk assessment, and sent recommended actions to autonomous vehicles using text over 5G.
This is an example of explainable AI as well, as all the actions of the autonomous vehicle could be monitored and explained through summarized text and logging for remote support.
Factory multi-modal AI applications
In this use case, multi-modal inputs including video, audio, and sensor data, are streamed using 5G into the AI-RAN server. Multiple LLMs, VLMs, retrieval-augmented generation (RAG) pipelines, and NVIDIA NIM microservices hosted on the AI-RAN server are used to coalesce these inputs and make the knowledge accessible through a chat interface to users using 5G.
This fits well for factory monitoring, construction site inspections, and similar complex indoor and outdoor environments. The use case demonstrates how edge AI-RAN enables local data sovereignty by keeping data access and analysis local, secure, and private, which is a mandatory requirement of most enterprises.
Robotics applications
SoftBank demonstrated the benefit of edge AI inferencing for a robot connected over 5G. A robodog was trained to follow a human based on voice and motion.
The demo compared the response time of the robot when the AI inferencing was hosted on the local AI-RAN server to when it was hosted on the central cloud. The difference was apparent and obvious. The edge-based inference robodog followed the human's movements instantly, while the cloud-based inference robot struggled to keep up.
Accelerating the AI-RAN business case with the Aerial RAN Computer-1
While the AI-RAN vision has been embraced by the industry, the energy efficiency and economics of GPU-enabled infrastructure remain key requirements, particularly how they compare to traditional CPU- and ASIC-based RAN systems.
With this live field trial of AI-RAN, SoftBank and NVIDIA have not only proven that GPU-enabled RAN systems are feasible and high-performant, but they are also significantly better in energy efficiency and economic profitability.
NVIDIA recently announced the
Aerial RAN Computer-1
based on the next-generation NVIDIA Grace Blackwell superchips as the recommended AI-RAN deployment platform. The goal is to migrate SoftBank 5G vRAN software from NVIDIA GH200 to NVIDIA Aerial RAN Computer-1 based on GB200-NVL2, which is an easier shift given the code is already CUDA-ready.
With
GB200-NVL2
, the available compute for AI-RAN will increase by a factor of 2x. The AI processing capabilities will improve by 5x for Llama-3 inferencing, 18x for data processing, and 9x for vector database search compared to prior H100 GPU systems.
For this evaluation, we compared the target deployment platform, Aerial RAN Computer-1 based on GB200 NVL2, with the latest generation of x86 and the best-in-class custom RAN product benchmarks and validated the following findings:
Accelerated AI-RAN offers best-in-class AI performance
Accelerated AI-RAN is sustainable RAN
Accelerated AI-RAN is highly profitable
Accelerated AI-RAN offers best-in-class AI performance
In 100% AI-only mode, each GB200-NVL2 server generates 25000 tokens/second, which translates to $20/hr of available monetizable compute per server, or $15K/month per server.
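The revenue figure quoted above follows directly from the stated token throughput and an assumed price per token. The short calculation below reproduces it; the implied price of roughly $0.22 per million tokens is inferred from the article's own numbers and is not an official rate.

```python
# Reproducing the per-server revenue estimate from the stated throughput.
# The implied price per million tokens is derived from the $20/hr figure
# and is an assumption, not a published rate.

tokens_per_second = 25_000
tokens_per_hour = tokens_per_second * 3600            # 90,000,000 tokens/hr

revenue_per_hour = 20.0                               # $/hr per server (from the article)
implied_price_per_million = revenue_per_hour / (tokens_per_hour / 1e6)
monthly_revenue = revenue_per_hour * 24 * 30          # ~$14,400/month, i.e., roughly $15K

print(f"Implied price: ${implied_price_per_million:.2f} per million tokens")
print(f"Monthly revenue per server: ${monthly_revenue:,.0f}")
```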
Keeping in mind that the average revenue per user (ARPU) of wireless services today ranges between $5 and $50 per month depending on the country, AI-RAN opens a new multi-billion-dollar AI revenue opportunity that is orders of magnitude higher than revenues from RAN-only systems.
The token AI workload used is Llama-3-70B FP4, showcasing that AI-RAN is already capable of running the worldâs most advanced LLM models.
Accelerated AI-RAN is sustainable RAN
In 100% RAN-only mode, GB200-NVL2 server power performance in Watt/Gbps shows the following benefits:
40% less power consumption than the best-in-class custom RAN-only systems today
60% less power consumption than x86-based vRAN
For an even comparison, this assumes the same number of 100-MHz 4T4R cells and 100% RAN-only workload across all platforms.
Figure 3. RAN power consumption and performance (watt/Gbps)
Accelerated AI-RAN is highly profitable
For this evaluation, we used the scenario of covering one district in Tokyo with 600 cells as the common baseline for RAN deployment for each of the three platforms being compared. We then looked at multiple scenarios for AI and RAN workload distribution, ranging from RAN-only to RAN-heavy or AI-heavy.
In the AI-heavy scenario (Figure 4), we used a one-third RAN and two-thirds AI workload distribution:
For every dollar of CapEx investment in accelerated AI-RAN infrastructure based on NVIDIA GB200 NVL2, telcos can generate 5x the revenue over 5 years.
From an ROI perspective, the overall investment delivers a 219% return, considering all CapEx and OpEx costs. This result is of course specific to SoftBank, as it uses local, country-specific cost assumptions.
Figure 4. AI-RAN economics for covering one Tokyo district with 600 cells
                              33% AI and 67% RAN    67% AI and 33% RAN
$ of revenue per $ of CapEx   2x                    5x
ROI %                         33%                   219%
Table 1. AI-heavy scenario compared to RAN-heavy results
In the RAN-heavy scenario, we used two-thirds RAN and one-third AI workload distribution and found that revenue divided by CapEx for NVIDIA-accelerated AI-RAN is 2x, with a 33% ROI over 5 years, using SoftBank local cost assumptions.
In the RAN-only scenario, NVIDIA Aerial RAN Computer-1 is more cost-efficient than custom RAN-only solutions, which underscores the benefits of using accelerated computing for radio signal processing.
From these scenarios, it is evident that AI-RAN is highly profitable as compared to RAN-only solutions, in both AI-heavy and RAN-heavy modes. In essence, AI-RAN transforms traditional RAN from a cost center to a profit center.
The profitability per server improves with higher AI use. Even in RAN-only, AI-RAN infrastructure is more cost-efficient than custom RAN-only options.
Key assumptions used for the revenue and TCO calculations include the following:
The respective number of platforms, servers, and racks for each platform are calculated using a common baseline of deploying 600 cells on the same frequency, 4T4R.
The total cost of ownership (TCO) is calculated over 5 years and includes the cost of hardware, software, and vRAN and AI operating costs.
For the new AI revenue calculation, we used $20/hr/server based on GB200 NVL2 AI performance benchmarks.
OpEx costs are based on local Japan power costs and aren't extensible worldwide.
ROI % = (new AI revenues - TCO) / TCO
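To make the formula concrete, the sketch below plugs in illustrative numbers chosen so that the reported 219% AI-heavy result is reproduced; the absolute values are hypothetical and not SoftBank's actual costs or revenues.

```python
# ROI % = (new AI revenues - TCO) / TCO
# Illustrative numbers only: chosen so the ratio matches the reported 219%.

def roi_percent(new_ai_revenue: float, tco: float) -> float:
    return (new_ai_revenue - tco) / tco * 100

tco = 100.0               # hypothetical total cost of ownership (arbitrary units)
new_ai_revenue = 319.0    # revenue ~3.19x TCO reproduces the AI-heavy result
print(f"ROI: {roi_percent(new_ai_revenue, tco):.0f}%")  # ROI: 219%
```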
This validation of AI revenue upside, energy efficiency, and profitability of AI-RAN leaves no doubts about the feasibility, performance, and economic benefits of the technology.
Going forward, exponential gains with each generation of NVIDIA superchips, such as Vera Rubin, will multiply these benefits by orders of magnitude further, enabling the much-awaited business transformation of telco networks.
Looking ahead
SoftBank and NVIDIA are
continuing to collaborate
toward the commercialization of AI-RAN and bringing new applications to life. The next phase of the engagements will entail work on AI-for-RAN to improve spectral efficiency and on NVIDIA Aerial Omniverse digital twins to simulate accurate physical networks in the digital world for fine-tuning and testing.
NVIDIA AI Aerial lays the foundation for operators and ecosystem partners globally to use the power of accelerated computing and software-defined RAN + AI to transform 5G and 6G networks. You can now use NVIDIA Aerial RAN Computer-1 and AI Aerial software libraries to develop your own implementation of AI-RAN.
NVIDIA AI Enterprise is also helping create new AI applications for telcos, hostable on AI-RAN, as is evident from this trial where many NVIDIA software toolkits have been used. This includes NIM microservices for generative AI, RAG, VLMs, NVIDIA Isaac for robotics training, NVIDIA NeMo, RAPIDS, NVIDIA Triton for inferencing, and a serverless API for AI brokering.
The telecom industry is at the forefront of a massive opportunity to become an AI service provider. AI-RAN can kickstart this new renaissance for telcos worldwide, using accelerated computing as the new foundation for wireless networks.
This announcement marks a breakthrough moment for AI-RAN technology, proving its feasibility, carrier-grade performance, superior energy efficiency, and economic value. Every dollar of CapEx invested in NVIDIA-accelerated AI-RAN infrastructure generates 5x revenues, while being 6G-ready.
The journey to AI monetization can start now. | https://developer.nvidia.com/ja-jp/blog/ai-ran-goes-live-and-unlocks-a-new-ai-opportunity-for-telcos/ | AI-RAN ãéä¿¡äºæ¥è
åãã«æ°ãã AI ã®ããžãã¹ ãã£ã³ã¹ããããã | Reading Time:
4
minutes
AI ã¯ãæ¥çãäŒæ¥ãæ¶è²»è
ã®äœéšãæ°ããæ¹æ³ã§å€é©ããŠããŸãã çæ AI ã¢ãã«ã¯æšè«ã«ç§»è¡ãã
ãšãŒãžã§ã³ãå AI
ã¯æ°ããçµæéèŠã®ã¯ãŒã¯ãããŒãå¯èœã«ã
ãã£ãžã«ã« AI
ã«ãããã«ã¡ã©ãããããããããŒã³ãèªåè»ãªã©ã®ãšã³ããã€ã³ãããªã¢ã«ã¿ã€ã ã§ææ決å®ãè¡ãã察話ã§ããããã«ãªããŸãã
ãããã®ãŠãŒã¹ ã±ãŒã¹ã«å
±éããã®ã¯ãæ®åããä¿¡é Œæ§ãé«ããå®å
šã§ãè¶
é«éãªæ¥ç¶ãå¿
èŠã§ããããšã§ãã
éä¿¡ãããã¯ãŒã¯ã¯ãããã³ãããŒã«ç¡ç·ã¢ã¯ã»ã¹ ãããã¯ãŒã¯ãä»ããŠçŽæ¥éä¿¡ããããããšã³ã¿ãŒãã©ã€ãº ã¢ããªã±ãŒã·ã§ã³ã«ãã£ãŠçæããããããªã㯠ã¯ã©ãŠããŸãã¯ãã©ã€ããŒã ã¯ã©ãŠãããã®ããã¯ããŒã«ããã®å®å
šã«ã¹ã¿ã³ãã¢ãã³ã® AI æšè«ãã©ãã£ãã¯ã®ãããªæ°ããçš®é¡ã® AI ãã©ãã£ãã¯ã«åããå¿
èŠããããŸãã
ããŒã«ã« ã¯ã€ã€ã¬ã¹ ã€ã³ãã©ã¹ãã©ã¯ãã£ã¯ãAI æšè«ãåŠçããã®ã«æé©ãªå ŽæãæäŸããŸãã ããã¯ãéä¿¡äŒç€Ÿ ãããã¯ãŒã¯ã«å¯Ÿããæ°ããã¢ãããŒãã§ãã AI ç¡ç·ã¢ã¯ã»ã¹ ãããã¯ãŒã¯ (
AI-RAN
) ã®ç¹åŸŽã§ãã
åŸæ¥ã® CPU ãŸã㯠ASIC ããŒã¹ã® RAN ã·ã¹ãã ã¯ãRAN ã®ã¿ã®ããã«èšèšãããŠãããçŸåšã§ã¯ AI ãã©ãã£ãã¯ãåŠçã§ããŸããã AI-RAN ã¯ãã¯ã€ã€ã¬ã¹ãš AI ã®ã¯ãŒã¯ããŒããåæã«å®è¡ã§ããå
±éã® GPU ããŒã¹ã®ã€ã³ãã©ã¹ãã©ã¯ãã£ãæäŸããŸããããã«ããããããã¯ãŒã¯ãåäžç®çããå€ç®çã€ã³ãã©ã¹ãã©ã¯ãã£ã«å€ããã³ã¹ã ã»ã³ã¿ãŒãããããã£ãã ã»ã³ã¿ãŒã«å€ããããŸãã
é©åãªçš®é¡ã®ãã¯ãããžã«æŠç¥çæè³ãè¡ãããšã§ãéä¿¡äŒç€Ÿã¯æ¥çãæ¶è²»è
ãäŒæ¥ã«ããã£ãŠ AI ã®äœæãé
ä¿¡ã䜿çšã容æã«ããã AI ã°ãªãããžãšé£èºããããšãã§ããŸããä»ãéä¿¡äŒç€Ÿã«ãšã£ãŠãäžå€®éäžçã§åæ£ãããã€ã³ãã©ã¹ãã©ã¯ãã£ãåå©çšããããšã§ãAI ãã¬ãŒãã³ã° (äœæ) ãš AI æšè« (é
ä¿¡) ã®ããã®ãã¡ããªãã¯ãæ§ç¯ãã倧ããªæ©äŒãšãªããŸãã
SoftBank ãš NVIDIA ã AI-RANã®åçšåãé²ãã
SoftBank ã¯ãNVIDIA ã¢ã¯ã»ã©ã¬ãŒã·ã§ã³ ããŒããŠã§ã¢ãš NVIDIA Aerial ãœãããŠã§ã¢ãæè¡åºç€ãšããŠæŽ»çšãã
ç¥å¥å·çè€æ²¢åžã§å±å€
ãã£ãŒã«ã ãã©ã€ã¢ã«ãæåããã
AI-RAN ããžã§ã³ã
çŸå®ã®ãã®ã«ããŸããã
ãã®éæã¯ãAI-RAN ã®åçšåã«åãã倧ããªåé²ã§ããããã¯ãããžã®å®çŸæ§ãããã©ãŒãã³ã¹ãåçåã«é¢ããæ¥çã®èŠä»¶ã«å¯Ÿå¿ããå®èšŒãã€ã³ããæäŸããŸãã
NVIDIA ã®ã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã° ãã©ãããã©ãŒã ã§å®è¡ãããäžçåã®å±å€ 5G AI-RAN ãã£ãŒã«ã ãã©ã€ã¢ã«ã ããã¯ã5G ã³ã¢ãšçµ±åããããã«ã¹ã¿ãã¯ã®ä»®æ³ 5G RAN ãœãããŠã§ã¢ã«åºã¥ããšã³ãããŒãšã³ãã®ãœãªã¥ãŒã·ã§ã³ã§ãã
ãã£ãªã¢ ã°ã¬ãŒãã®ä»®æ³ RAN ã®ããã©ãŒãã³ã¹ãå®çŸã
AI ãš RAN ã®ãã«ãããã³ããšãªãŒã±ã¹ãã¬ãŒã·ã§ã³ãå®çŸã
ãšãã«ã®ãŒå¹çãšçµæžçãªã¡ãªããããæ¢åã®ãã³ãããŒã¯ãšæ¯èŒããŠæ€èšŒãããŸããã
AI-RAN ã€ã³ãã©ã¹ãã©ã¯ãã£ã«çµ±åããã AI ããŒã±ãããã¬ã€ã¹ãæäŸããæ°ãããœãªã¥ãŒã·ã§ã³ã
AI-RAN ãããã¯ãŒã¯ã§å®è¡ãããå®éã® AI ã¢ããªã±ãŒã·ã§ã³ã玹ä»ãããŸãã
äœããããSoftBank ã¯ãäžçäžã«å±éããããã«ãç¬èªã® AI-RAN 補åãåæ¥çã«ãªãªãŒã¹ããããšãç®æããŠããŸãã
ä»ã®éä¿¡äºæ¥è
ãä»ãã AI-RAN ã®å°å
¥ãæ¯æŽããããã«ãSoftBank ã¯ãAI-RAN ãè©Šçšããããã«å¿
èŠãªããŒããŠã§ã¢ãšãœãããŠã§ã¢ã®èŠçŽ ã§æ§æããããªãã¡ã¬ã³ã¹ ãããããç°¡åãã€è¿
éã«æäŸããäºå®ã§ãã
ãšã³ãããŒãšã³ãã® AI-RAN ãœãªã¥ãŒã·ã§ã³ãšãã£ãŒã«ã ãã©ã€ã¢ã«ã®çµæ
SoftBank ã¯ãNVIDIA ãšãšã³ã·ã¹ãã ããŒãããŒã®ããŒããŠã§ã¢ãšãœãããŠã§ã¢ ã³ã³ããŒãã³ããçµ±åãããã£ãªã¢ã°ã¬ãŒãã®èŠä»¶ãæºããããã«åŒ·åããããšã§ãAI-RAN ãœãªã¥ãŒã·ã§ã³ãéçºããŸããã ãã®ãœãªã¥ãŒã·ã§ã³ã¯ãNVIDIA GH200 (CPU+GPU)ãNVIDIA Bluefield-3 (NIC/DPU)ãããã³ãããŒã«ããã³ããã¯ããŒã« ãããã¯ãŒãã³ã°çšã® Spectrum-X ã§å®è¡ããã 100% ãœãããŠã§ã¢ ããã¡ã€ã³ãã®å®å
šãª 5G vRAN ã¹ã¿ãã¯ãå®çŸããŸãã 20 å°ã®ç¡ç·ãŠããããš 5G ã³ã¢ ãããã¯ãŒã¯ãçµ±åãã100 å°ã®ã¢ãã€ã« UE ãæ¥ç¶ããŸãã
ã³ã¢ ãœãããŠã§ã¢ ã¹ã¿ãã¯ã«ã¯ã以äžã®ã³ã³ããŒãã³ããå«ãŸããŠããŸãã
SoftBank ã
NVIDIA Aerial CUDA-Accelerated-RAN
ã©ã€ãã©ãªã䜿çšããŠã 5G RAN ã¬ã€ã€ãŒ 1 ã®ãã£ãã« ãããã³ã°ããã£ãã«æšå®ãå€èª¿ãåæ¹ãšã©ãŒèšæ£ãªã©ã®æ©èœãéçºããæé©åããŸããã
ã¬ã€ã€ãŒ 2 æ©èœåã Fujitsu ãœãããŠã§ã¢
ã³ã³ãããŒã®ä»®æ³åã¬ã€ã€ãŒãšããŠã® Red Hat ã® OpenShift Container Platform (OCP) ã«ãããåãåºç€ãšãªã GPU ã³ã³ãã¥ãŒãã£ã³ã° ã€ã³ãã©ã¹ãã©ã¯ãã£ã§ç°ãªãã¿ã€ãã®ã¢ããªã±ãŒã·ã§ã³ãå®è¡ãããŸã
SoftBank ãéçºãã E2EãAI ãš RAN ãªãŒã±ã¹ãã¬ãŒã¿ãŒãéèŠãšäœ¿çšå¯èœãªå®¹éã«åºã¥ã㊠RAN ãš AI ã®ã¯ãŒã¯ããŒãã®ã·ãŒã ã¬ã¹ãªããããžã§ãã³ã°ãå¯èœã«ããŸãã
åºç€ãšãªãããŒããŠã§ã¢ã¯ã
NVIDIA GH200 Grace Hopper Superchip
ã§ãããåæ£åããéäžå RAN ã·ããªãªãŸã§ãããŸããŸãªæ§æã§äœ¿çšã§ããŸãã ãã®å®è£
ã§ã¯ãéçŽããã RAN ã®ã·ããªãªã®ããã«ã1 ã€ã®ã©ãã¯ã§è€æ°ã® GH200 ãµãŒããŒã䜿çšããAI ãš RAN ã®ã¯ãŒã¯ããŒããåæã«åŠçããŸãã ããã¯ãåŸæ¥ã® RAN åºå°å±ãè€æ°å±éããã®ã«çžåœããŸãã
ãã®ãã€ãããã§ã¯ãRAN ã®ã¿ã®ã¢ãŒãã§äœ¿çšãããå Žåãå GH200 ãµãŒããŒã¯ã100 MHz 垯åå¹
㧠20 åã® 5G ã»ã«ãåŠçããããšãã§ããŸããã åã»ã«ã§ã¯ãçæ³çãªæ¡ä»¶äžã§ 1.3 Gbps ã®ããŒã¯ ããŠã³ãªã³ã¯æ§èœãéæãããå±å€å±éã§ã¯ãã£ãªã¢ã°ã¬ãŒãã®å¯çšæ§ã§ 816 Mbps ãå®èšŒãããŸããã
AI-RAN ã®ãã«ãããã³ããå®çŸ
AI-RAN ãã¯ãããžã®ç¬¬äžã®ååã® 1 ã€ã¯ããã£ãªã¢ã°ã¬ãŒãã®ããã©ãŒãã³ã¹ãæãªãããšãªããRAN ãš AI ã®ã¯ãŒã¯ããŒããåæã«å®è¡ã§ããããšã§ãã ãã®ãã«ãããã³ãã¯ãæéãŸãã¯ç©ºéã®ããããã§å®è¡ã§ããæé垯ãŸãã¯ã³ã³ãã¥ãŒãã£ã³ã°ã®å²åã«åºã¥ããŠãªãœãŒã¹ãåå²ããŸãã ãŸããããã¯ã䜿çšå¯èœãªå®¹éã«åºã¥ããŠãã¯ãŒã¯ããŒããã·ãŒã ã¬ã¹ã«ããããžã§ãã³ã°ãããããžã§ãã³ã°ã®è§£é€ãã·ããã§ãããªãŒã±ã¹ãã¬ãŒã¿ãŒã®å¿
èŠæ§ãæå³ããŸãã
è€æ²¢åžã®å®èšŒå®éšã§ã¯ãRAN ãš AI ã¯ãŒã¯ããŒãéã®ãªãœãŒã¹ã®éçå²ãåœãŠã«åºã¥ããŠãGH200 äžã§ã® AI ãš RAN ã®åæåŠçãå®èšŒãããŸããã (å³ 1)ã
å³ 1. AI ãš RAN ã®åæåŠçãš GPU ã®åèšäœ¿çšç
å NVIDIA GH200 ãµãŒããŒã¯ãè€æ°ã® MIG (ãã«ãã€ã³ã¹ã¿ã³ã¹ GPU) ã§æ§æããã1 ã€ã® GPU ãè€æ°ã®ç¬ç«ãã GPU ã€ã³ã¹ã¿ã³ã¹ã«åå²ã§ããŸãã åã€ã³ã¹ã¿ã³ã¹ã«ã¯ãã¡ã¢ãªããã£ãã·ã¥ãã³ã³ãã¥ãŒãã£ã³ã° ã³ã¢ãªã©ãç¬èªã®å°çšãªãœãŒã¹ããããç¬ç«ããŠåäœã§ããŸãã
SoftBank ãªãŒã±ã¹ãã¬ãŒã¿ãŒã¯ãAI ãå®è¡ããããã« GPU å
šäœãŸã㯠GPU ã®äžéšãã€ã³ããªãžã§ã³ãã«å²ãåœãŠãRAN ã®ã¯ãŒã¯ããŒããå®è¡ããå¿
èŠã«å¿ããŠåçã«åãæ¿ããŸãã éèŠã«åºã¥ãå²ãåœãŠã§ã¯ãªããRAN ãš AI ã«äžå®ã®å²ãåœãŠããRAN ã« 60% ãš AI ã« 40% ã®ã³ã³ãã¥ãŒãã£ã³ã°ãéçã«å²ãåœãŠãããšãã§ããŸãã
ç®æšã¯ã容é䜿çšçãæ倧åããããšã§ãã AI-RAN ã䜿çšãããšãéä¿¡äŒç€Ÿã¯ãéåžžã® RAN ã®ã¿ã®ãããã¯ãŒã¯ã§ã® 33% ã®å®¹é䜿çšçãšæ¯èŒããŠãã»ãŒ 100% ã®äœ¿çšçãå®çŸã§ããŸãã ããã¯ãåçãªãªãŒã±ã¹ãã¬ãŒã·ã§ã³ãšåªå
é äœä»ãããªã·ãŒã®ãããã§ãããŒã¯ã® RAN ã®è² è·ã«å¯Ÿå¿ããªãããæ倧 3 åã®å¢å ã§ãã
AI-RAN ããŒã±ãããã¬ã€ã¹ã®å®çŸ
åæ£å AI-RAN ã€ã³ãã©ã¹ãã©ã¯ãã£ã§ AI ã³ã³ãã¥ãŒãã£ã³ã°ã®æ°ããæ©èœãå©çšã§ããããã«ãªã£ãããããã® AI ã³ã³ãã¥ãŒãã£ã³ã°ã®äŸçµŠã« AI ã®éèŠãã©ã®ããã«åã蟌ãããšããçåãçããŸãã
ãã®åé¡ã解決ããããã«ãSoftBank ã¯ãNVIDIA AI ãšã³ã¿ãŒãã©ã€ãº ã掻çšãããµãŒããŒã¬ã¹ API ã䜿çšããŠãã»ãã¥ãªãã£ãæ¡åŒµæ§ãä¿¡é Œæ§ãåã㊠AI-RAN 㧠AI ã¯ãŒã¯ããŒããå±éãã管çããŸããã NVIDIA AI ãšã³ã¿ãŒãã©ã€ãºã®ãµãŒããŒã¬ã¹ API ã¯ãAI-RAN ã€ã³ãã©ã¹ãã©ã¯ãã£ã§ãã¹ããããSoftBank E2E AI-RAN ãªãŒã±ã¹ãã¬ãŒã¿ãŒãšçµ±åãããŠããŸãã åã API ãå®è¡ãããããªã㯠ã¯ã©ãŠããŸãã¯ãã©ã€ããŒã ã¯ã©ãŠãã«æ¥ç¶ããã³ã³ãã¥ãŒãã£ã³ã°ãå©çšå¯èœã«ãªã£ããšãã«ãå€éšã® AI æšè«ãžã§ãã AI-RAN ãµãŒããŒã«å²ãåœãŠãŸã (å³ 2)ã
å³ 2. SoftBank AI-RAN ãšçµ±åããã AI ããŒã±ãããã¬ã€ã¹ ãœãªã¥ãŒã·ã§ã³
ãã®ãœãªã¥ãŒã·ã§ã³ã«ãã AI ããŒã±ãããã¬ã€ã¹ãå®çŸãããœãããã³ã¯ã¯ããŒã«ã©ã€ãºãããäœé
延ã®å®å
šãªæšè«ãµãŒãã¹ãæäŸã§ããããã«ãªããŸãã ãŸããç¹ã«å€éšã® AI æšè«ã®ä»äºã®ããã«ãéä¿¡äŒç€Ÿã AI é
ä¿¡ã°ãªããã«ãªãã®ãæ¯æŽããäžã§ AI-RAN ã®éèŠæ§ãå®èšŒããæ°ããåçã®æ©äŒãäœããŸãã
AI-RAN ã¢ããªã±ãŒã·ã§ã³ã玹ä»
ãã®å±å€ã®è©Šçšã§ã¯ãSoftBank ãéçºããæ°ãããšããž AI ã¢ããªã±ãŒã·ã§ã³ãã©ã€ã AI-RAN ãããã¯ãŒã¯ã§ãã¢ã³ã¹ãã¬ãŒã·ã§ã³ãããŸããã
5G ãä»ããèªåé転è»ã®ãªã¢ãŒã ãµããŒã
å·¥å Žåºè·æã®ãã«ãã¢ãŒãã«
ãããã£ã¯ã¹ ã¢ããªã±ãŒã·ã§ã³
5G ãä»ããèªåé転è»ã®ãªã¢ãŒã ãµããŒã
èªåé転ã®ç€ŸäŒçå®è£
ã®éèŠãªèŠä»¶ã¯ãè»ã®å®å
šæ§ãšéçšã³ã¹ãã®åæžã§ãã
è€æ²¢åžã®å®èšŒå®éšã§ã¯ããœãããã³ã¯ãèªåé転è»ãå®æŒããåæ¹ã«ã¡ã©ã®æ åã 5G 㧠AI-RAN ãµãŒããŒã«ãã¹ãããã AI ããŒã¹ã®é éãµããŒã ãµãŒãã¹ã«äžç¶ããã ãã«ãã¢ãŒãã« AI ã¢ãã«ã¯ããã㪠ã¹ããªãŒã ãåæãããªã¹ã¯è©äŸ¡ãè¡ãã5G ãä»ããããã¹ãã䜿çšããŠèªåé転è»ã«æšå¥šã®ã¢ã¯ã·ã§ã³ãéä¿¡ããŸããã
ããã¯ã説æå¯èœãª AI ã®äŸã§ããããŸãããªã¢ãŒã ãµããŒãã®ããã®èŠçŽãããããã¹ããšãã°ãéããŠãèªåé転è»ã®ãã¹ãŠã®åäœãç£èŠãã説æããããšãã§ããŸããã
å·¥å Žåºè·æã®ãã«ãã¢ãŒãã«
ãã®ãŠãŒã¹ ã±ãŒã¹ã§ã¯ããããªããªãŒãã£ãªãã»ã³ãµãŒ ããŒã¿ãå«ããã«ãã¢ãŒãã«å
¥åãã5G ã䜿çšã㊠AI-RAN ãµãŒããŒã«ã¹ããªãŒãã³ã°ãããŸãã AI-RAN ãµãŒããŒã§ãã¹ãããããè€æ°ã® LLMãVLMãæ€çŽ¢æ¡åŒµçæ (RAG) ãã€ãã©ã€ã³ãNVIDIA NIM ãã€ã¯ããµãŒãã¹ã¯ããããã®å
¥åãçµ±åãã5G ã䜿çšãããŠãŒã¶ãŒããã£ãã ã€ã³ã¿ãŒãã§ã€ã¹ãä»ããŠæ
å ±ã«ã¢ã¯ã»ã¹ã§ããããã«ããããã«äœ¿çšãããŸãã
ããã¯ãå·¥å Žã®ç£èŠã建èšçŸå Žã®æ€æ»ãåæ§ã®è€éãªå±å
ããã³å±å€ã®ç°å¢ã«æé©ã§ãã ãã®ãŠãŒã¹ ã±ãŒã¹ã§ã¯ããšããž AI-RAN ãããŒã¿ ã¢ã¯ã»ã¹ãšåæãããŒã«ã«ãå®å
šããã©ã€ããŒãã«ä¿ã€ããšã§ãããŒã«ã« ããŒã¿ã®äž»æš©ãå®çŸããæ¹æ³ã瀺ããŠããŸããããã¯ãã»ãšãã©ã®äŒæ¥ã«ãšã£ãŠå¿
é ã®èŠä»¶ã§ãã
ãããã£ã¯ã¹ ã¢ããªã±ãŒã·ã§ã³
SoftBank ã¯ã5G ãä»ããŠæ¥ç¶ãããããããã®ãšããž AI æšè«ã®å©ç¹ãå®èšŒããŸããã ããããã°ã¯ã声ãšåãã«åºã¥ããŠäººéãè¿œãããã«ãã¬ãŒãã³ã°ãããŸããã
ãã®ãã¢ã§ã¯ãAI æšè«ãããŒã«ã« AI-RAN ãµãŒããŒã§ãã¹ãããããšãã®ããããã®å¿çæéãšãã»ã³ãã©ã« ã¯ã©ãŠãã§ãã¹ãããããšãã®å¿çæéãæ¯èŒããŸããã ãã®éãã¯æçœã§ããã ãšããž ããŒã¹ã®æšè« ããããã°ã¯ã人éã®åããå³åº§ã«è¿œè·¡ããŸããããã¯ã©ãŠã ããŒã¹ã®æšè«ããããã¯ãè¿œãã€ãã®ã«èŠåŽããŸããã
Aerial RAN Computer-1 㧠AI-RAN ã®ããžãã¹ ã±ãŒã¹ãé«éå
AI-RAN ããžã§ã³ã¯æ¥çã§åãå
¥ããããŠããŸãããGPU 察å¿ã€ã³ãã©ã¹ãã©ã¯ãã£ã®ãšãã«ã®ãŒå¹çãšçµæžæ§ãç¹ã«åŸæ¥ã® CPU ããã³ ASIC ããŒã¹ã® RAN ã·ã¹ãã ãšã®æ¯èŒã¯äŸç¶ãšããŠéèŠãªèŠä»¶ã§ãã
AI-RAN ã®ãã®ã©ã€ã ãã£ãŒã«ã ãã©ã€ã¢ã«ã«ãããSoftBank ãš NVIDIA ã¯ãGPU 察å¿ã® RAN ã·ã¹ãã ãå®çŸå¯èœã§ãé«æ§èœã§ããããšãå®èšŒããã ãã§ãªãããšãã«ã®ãŒå¹çãšçµæžçãªåçæ§ã倧å¹
ã«åäžããŠããããšãå®èšŒããŸããã
NVIDIA ã¯æè¿ã次äžä»£ NVIDIA Grace Blackwell Superchip ãããŒã¹ã«ãã
Aerial RAN Computer-1
ãæšå¥š AI-RAN å±éãã©ãããã©ãŒã ãšããŠçºè¡šããŸããã ç®çã¯ãGB200-NVL2 ãããŒã¹ãšãã SoftBank 5G vRAN ãœãããŠã§ã¢ã NVIDIA GH200 ãã NVIDIA Aerial RAN Computer-1 ã«ç§»è¡ããããšã§ããããã¯ãã³ãŒãããã§ã« CUDA ã«å¯Ÿå¿ããŠããããã移è¡ã容æã§ãã
ãŸãã
GB200-NVL2
ã䜿çšãããšãAI-RAN ã§å©çšå¯èœãªã³ã³ãã¥ãŒãã£ã³ã°èœåã 2 åã«ãªããŸãã AI åŠçæ©èœã¯ã以åã® H100 GPU ã·ã¹ãã ãšæ¯èŒããŠãLlama-3 æšè«ã 5 åãããŒã¿åŠçã 18 åããã¯ãã« ããŒã¿ããŒã¹æ€çŽ¢ã 9 åã«æ¹åãããŸãã
ãã®è©äŸ¡ã®ããã«ãã¿ãŒã²ããã®å±é ãã©ãããã©ãŒã ãGB200 NVL2 ãããŒã¹ãšãã Aerial RAN Computer-1ãææ°äžä»£ã® x86 ãšã¯ã©ã¹æé«ã®ã«ã¹ã¿ã RAN 補åãã³ãããŒã¯ãæ¯èŒãã以äžã®çµæãæ€èšŒããŸããã
é«éåããã AI-RAN ã¯ãã¯ã©ã¹æé«ã® AI ããã©ãŒãã³ã¹ãæäŸããŸã
é«éåããã AI-RAN ã¯æç¶å¯èœãª RAN
é«éåããã AI-RAN ã¯ãéåžžã«åçæ§ãé«ã
é«éåããã AI-RAN ã¯ãã¯ã©ã¹æé«ã® AI ããã©ãŒãã³ã¹ãæäŸããŸã
100% AI ã®ã¿ã®ã¢ãŒãã§ã¯ãå GB200-NVL2 ãµãŒããŒã¯ãæ¯ç§ 25,000 ããŒã¯ã³ãçæããŸããããã¯ããµãŒã㌠1 å°ã®åçåå¯èœãªã³ã³ãã¥ãŒãã£ã³ã°ã®å©çšçã 20 ãã«/æéããŸãã¯ãµãŒããŒãããã®æ15,000 ãã«ã«æç®ããŸãã
çŸåšã®ã¯ã€ã€ã¬ã¹ ãµãŒãã¹ã®ãŠãŒã¶ãŒ 1 人ã®å¹³ååç (ARPU) ã¯ãåœã«ãã£ãŠã¯æ 5 ïœ 50 ãã«ã®ç¯å²ã§ããããšã«çæããŠãAI-RAN ã¯ãRAN ã®ã¿ã®ã·ã¹ãã ãããæ°åã®é«ããæ°ååãã«èŠæš¡ã® AI åçã®æ©äŒãæäŸããŸãã
䜿çšãããããŒã¯ã³ AI ã¯ãŒã¯ããŒãã¯ãLlama-3-70B FP4 ã§ãããAI-RAN ããã§ã«äžçã§æãé«åºŠãª LLM ã¢ãã«ãå®è¡ã§ããããšãå®èšŒããŸãã
é«éåããã AI-RAN ã¯æç¶å¯èœãª RAN
100% RAN ã®ã¿ã®ã¢ãŒãã§ã¯ãGB200-NVL2 ãµãŒããŒã®é»åããã©ãŒãã³ã¹ã¯ãã¯ãã/Gbps ã§ä»¥äžã®å©ç¹ããããŸãã
ä»æ¥ãã¯ã©ã¹æé«ã®ã«ã¹ã¿ã RAN ã®ã¿ã®ã·ã¹ãã ãšæ¯èŒããŠãæ¶è²»é»åã 40% åæž
x86 ããŒã¹ã® vRAN ãšæ¯èŒããŠãæ¶è²»é»åã 60% åæž
æ¯èŒã®ããã«ãããã¯ãã¹ãŠã®ãã©ãããã©ãŒã ã§åãæ°ã® 100 MHz 4T4R ã»ã«ãšã100% RAN ã®ã¿ã®ã¯ãŒã¯ããŒããæ³å®ããŠããŸãã
å³ 3. RAN ã®æ¶è²»é»åãšããã©ãŒãã³ã¹ (ã¯ãã/Gbps)
é«éåããã AI-RAN ã¯ãéåžžã«åçæ§ãé«ã
ãã®è©äŸ¡ã®ããã«ãæ¯èŒããã 3 ã€ã®ãã©ãããã©ãŒã ã®ãããã㧠RAN å±éã®å
±éã®ããŒã¹ã©ã€ã³ãšããŠãæ±äº¬éœã® 1 å°åºã 600 ã»ã«ã§ã«ããŒããã·ããªãªã䜿çšããŸããã 次ã«ãRAN ã®ã¿ãã RAN ãéããããŸã㯠AI ãéèŠãããŸã§ãAI ãš RAN ã®ã¯ãŒã¯ããŒãååžã®è€æ°ã®ã·ããªãªã調ã¹ãŸããã
AI ãå€ãã·ããªãª (å³ 4) ã§ã¯ãRAN ã 3 åã® 1ãAI ã¯ãŒã¯ããŒãã 3 åã® 2 ãåæ£ããŸããã
NVIDIA GB200 NVL2 ãããŒã¹ãšããé«éåããã AI-RAN ã€ã³ãã©ã¹ãã©ã¯ãã£ãžã®è³æ¬æ¯åº (CapEx) æè³é¡ã®1ãã«ã«å¯ŸããŠãéä¿¡äŒç€Ÿã¯ 5 幎é㧠5 åã®åçãçã¿åºãããšãã§ããŸãã
ROI ã®èŠ³ç¹ãããè³æ¬æ¯åºãšéçšæ¯åºã®ãã¹ãŠã®ã³ã¹ããèæ
®ããŠãæè³å
šäœã¯ 219% ã®ãªã¿ãŒã³ãå®çŸããŸããããã¯ãçŸå°ã®ã³ã¹ãæ³å®ã䜿çšããŠããããããã¡ãã SoftBank ç¹æã®ãã®ã§ãã
å³ 4. 600 ã»ã«ã§ 1 ã€ã®æ±äº¬éœå°åºãã«ããŒãã AI-RAN ã®çµæžæ§
33% AIãš 67% RAN
67% AI ãš 33% RAN
CapEx 1 ãã«ãããã®åç $
2x
5x
ROI %
33%
219%
è¡š 1. AI ãå€çšããã·ããªãªãšæ¯èŒããçµæ
RAN ãå€çšããã·ããªãªã§ã¯ã3 åã® 2 ã RANã3 åã® 1 ã AI ã¯ãŒã¯ããŒãåæ£ã«äœ¿çšããNVIDIA ã¢ã¯ã»ã©ã¬ãŒã·ã§ã³ AI-RAN ã® CapEx ã§å²ã£ãåç㯠2 åã«ãªããSoftBank ã®ããŒã«ã« ã³ã¹ãæ³å®ã䜿çšã㊠5 幎é㧠33% ã® ROI ãåŸãããããšãããããŸããã
RAN ã®ã¿ã®ã·ããªãªã§ã¯ãNVIDIA Aerial RAN Computer-1 ã¯ã«ã¹ã¿ã RAN ã®ã¿ã®ãœãªã¥ãŒã·ã§ã³ãããã³ã¹ãå¹çãé«ããç¡ç·ä¿¡å·åŠçã«ã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã°ã䜿çšãã倧ããªå©ç¹ãšãªããŸãã
ãããã®ã·ããªãªãããAI ãå€çšããã¢ãŒã RAN ãå€çšããã¢ãŒãã®äž¡æ¹ã§ãRAN ã®ã¿ã®ãœãªã¥ãŒã·ã§ã³ãšæ¯èŒããŠãAI-RAN ãé«ãåçæ§ãæããã«ãªããŸãã æ¬è³ªçã«ãAI-RAN ã¯ãåŸæ¥ã® RAN ãã³ã¹ã ã»ã³ã¿ãŒããå©çã»ã³ã¿ãŒã«å€é©ããŸãã
AI ã®äœ¿çšéã®å¢å ã«ããããµãŒããŒãããã®åçæ§ãåäžããŸãã RAN ã®ã¿ã®å Žåã§ããAI-RAN ã€ã³ãã©ã¹ãã©ã¯ãã£ã¯ãã«ã¹ã¿ã RAN ã®ã¿ã®ãªãã·ã§ã³ãããã³ã¹ãå¹çãé«ããªããŸãã
åçãš TCO ã®èšç®ã«äœ¿çšãããäž»ãªåææ¡ä»¶ã«ã¯ã次ã®ãã®ãå«ãŸããŸãã
åãã©ãããã©ãŒã ã®ãã©ãããã©ãŒã ããµãŒããŒãã©ãã¯ã®ããããã®æ°ã¯ãåãåšæ³¢æ°ã§ãã 4T4R 㧠600 ã»ã«ããããã€ããå
±éã®ããŒã¹ã©ã€ã³ã䜿çšããŠèšç®ãããŸãã
ç·ææã³ã¹ã (TCO) ã¯ã5 幎以äžã§èšç®ãããŠãããããŒããŠã§ã¢ããœãããŠã§ã¢ãvRANãAI ã®éçšã³ã¹ããå«ãŸããŠããŸãã
æ°ãã AI åçã®èšç®ã«ã¯ãGB200 NVL2 AI ããã©ãŒãã³ã¹ ãã³ãããŒã¯ã«åºã¥ããŠããµãŒããŒãããã®æé 20 ãã«ã䜿çšããŸããã
éçšæ¯åºã³ã¹ãã¯ãæ¥æ¬ã®çŸå°ã®é»åã³ã¹ãã«åºã¥ããŠãããäžççã«æ¡åŒµããããšã¯ã§ããŸããã
ROI % = (æ°ãã AI åç â TCO) / TCO
AI ã®åçã®åäžããšãã«ã®ãŒå¹çãåçæ§ãåçæ§ã®ãã®æ€èšŒã«ããããã®ãã¯ãããžã®å®çŸæ§ãããã©ãŒãã³ã¹ãçµæžçãªã¡ãªããã«çãã®äœå°ã¯ãããŸããã
ä»åŸãVera Rubin ãªã©ã® NVIDIAã¹ãŒããŒãããã®åäžä»£ãææ°é¢æ°çã«å¢å ããããšã§ããããã®ã¡ãªããã¯ããã«æ¡éãã«å¢å€§ããåŸ
æã®éä¿¡ãããã¯ãŒã¯ã®ããžãã¹å€é©ãå¯èœã«ãªããŸãã
å°æ¥ãèŠæ®ãã
SoftBank ãš NVIDIA ã¯ãAI-RAN ã®åæ¥åãšæ°ããã¢ããªã±ãŒã·ã§ã³ãçã¿åºãããã«ã
ç¶ç¶çã«åå
ããŠããŸãã ãã®å¥çŽã®æ¬¡ã®ãã§ãŒãºã§ã¯ãã¹ãã¯ãã«å¹çãåäžããã AI-for-RAN ã®åãçµã¿ãšããã¡ã€ã³ãã¥ãŒãã³ã°ãšãã¹ãã®ããã«ããžã¿ã« ãããã¯ãŒã¯ãã·ãã¥ã¬ãŒããã NVIDIA Aerial Omniverse ããžã¿ã« ãã€ã³ã®åãçµã¿ãå«ãŸããŸãã
NVIDIA AI Aerial ã¯ãäžçäžã®éä¿¡äºæ¥è
ãšãšã³ã·ã¹ãã ããŒãããŒããã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã°ãšãœãããŠã§ã¢ ããã¡ã€ã³ã RAN + AI ã®ãã¯ãŒã䜿çšããŠã5G ããã³ 6G ãããã¯ãŒã¯ãå€é©ããåºç€ãç¯ããŸãã NVIDIA Aerial RAN Computer-1 ãš AI Aerial ãœãããŠã§ã¢ ã©ã€ãã©ãªã䜿çšããŠãç¬èªã® AI-RAN å®è£
ãéçºã§ããããã«ãªããŸããã
NVIDIA AI ãšã³ã¿ãŒãã©ã€ãº ã¯ãå€ãã® NVIDIA ãœãããŠã§ã¢ ããŒã«ãããã䜿çšããããã®ãã©ã€ã¢ã«ãããæãããªããã«ãAI-RAN ã§ãã¹ãå¯èœãªéä¿¡äºæ¥è
åãã®æ°ãã AI ã¢ããªã±ãŒã·ã§ã³ã®äœæã«ãè²¢ç®ããŠããŸããããã«ã¯ãçæ AI åãã® NIM ãã€ã¯ããµãŒãã¹ãRAGãVLMããããã£ã¯ã¹ ãã¬ãŒãã³ã°çšã® NVIDIA IsaacãNVIDIA NeMoãRAPIDSãæšè«çšã® NVIDIA TritonãAI ãããŒã«ãŒçšãµãŒããŒã¬ã¹ API ãå«ãŸããŸãã
éä¿¡æ¥çã¯ãAI ãµãŒãã¹ ãããã€ããŒã«ãªã倧ããªãã£ã³ã¹ã®æåç·ã«ç«ã£ãŠããŸãã AI-RAN ã¯ãã¯ã€ã€ã¬ã¹ ãããã¯ãŒã¯ã®æ°ããåºç€ãšããŠã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã°ã䜿çšããããšã§ãäžçäžã®éä¿¡äŒç€Ÿã«ãšã£ãŠãã®æ°ããå€é©ãä¿é²ã§ããŸãã
ãã®çºè¡šã¯ãAI-RAN ãã¯ãããžã®ç»æçãªç¬éã§ããããã®å®çŸæ§ããã£ãªã¢ã°ã¬ãŒãã®ããã©ãŒãã³ã¹ãåªãããšãã«ã®ãŒå¹çãçµæžçãªäŸ¡å€ã蚌æããŸããã NVIDIA ã®é«éåããã AI-RAN ã€ã³ãã©ã¹ãã©ã¯ãã£ã«æè³ãããè³æ¬æ¯åº 1 ãã«ã¯ã6G ã«å¯Ÿå¿ããªããã5 åã®åçãçã¿åºããŸãã
AI åçåãžã®åãçµã¿ã¯ãä»ããå§ããããŸãã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
éä¿¡äŒç€Ÿãåœå®¶ AI ã€ã³ãã©ã¹ãã©ã¯ãã£ãšãã©ãããã©ãŒã ãã©ã®ããã«å®çŸããã
GTC ã»ãã·ã§ã³:
çŸä»£ã®éä¿¡äŒç€Ÿ Blueprint: AI ã䜿çšããŠå€é©ãšåçºæ
GTC ã»ãã·ã§ã³:
人工ç¥èœãéä¿¡ãå€é©ãã 3 ã€ã®æ¹æ³
SDK:
Aerial Omniverse ããžã¿ã« ãã€ã³
ãŠã§ãããŒ:
How Telcos Transform Customer Experiences with Conversational AI
ãŠã§ãããŒ:
å€èšèªé³å£° AI ã«ã¹ã¿ãã€ãºããããšãŒãžã§ã³ãã¢ã·ã¹ãã§éä¿¡äŒç€Ÿ ã³ã³ã¿ã¯ã ã»ã³ã¿ãŒ ãšãŒãžã§ã³ãã®åŒ·å |
https://developer.nvidia.com/blog/developing-a-172b-llm-with-strong-japanese-capabilities-using-nvidia-megatron-lm/ | Developing a 172B LLM with Strong Japanese Capabilities Using NVIDIA Megatron-LM | Generative AI has the ability to create entirely new content that traditional machine learning (ML) methods struggle to produce. In the field of natural language processing (NLP), the advent of
large language models (LLMs)
specifically has led to many innovative and creative AI use cases. These include customer support chatbots, voice assistants, text summarization and translation, and more: tasks previously handled by humans.
LLMs continue to evolve through various approaches, including increasing the number of parameters and the adoption of new algorithms like Mixture of Experts (MoE). The application and adaptation of LLMs are anticipated across many industries, including retail, manufacturing, and finance.
However, many models that currently top the LLM leaderboard show insufficient understanding and performance in non-English languages, including Japanese. One of the reasons for this is that the training corpus contains a high proportion of English data. For example,
only 0.11% of the GPT-3 corpus is Japanese data
. Creating LLM models that perform well in Japanese, which has less training data than English, has been immensely challenging.
This post presents insights gained from training an AI model with 172 billion parameters as part of the
Generative AI Accelerator Challenge (GENIAC)
project, using
NVIDIA Megatron-LM
to help address the shortage of high-performance models for Japanese language understanding.
LLM-jp initiatives at GENIAC
The
Ministry of Economy, Trade and Industry (METI)
launched GENIAC to raise the level of platform model development capability in Japan and to encourage companies and others to be creative. GENIAC has provided computational resources, supported matching with companies and data holders, fostered collaboration with global technology companies, held community events, and evaluated the performance of the developed platform models.
The
LLM-jp
project to develop a completely
open model with 172 billion parameters
(available on Hugging Face) with strong Japanese language capabilities was selected for the GENIAC initiative. LLM-jp 172B was the largest model development in Japan at that time (February to August 2024), and it was meaningful to share the knowledge of its development widely.
LLM-jp is an initiative launched by researchers in the fields of natural language processing and computer systems, mainly at NII. Through the continuous development of models that are completely open and commercially usable, it aims to accumulate know-how on the mathematical elucidation of training principles, such as how large-scale models acquire generalization performance, and on the efficiency of training.
Training the model using NVIDIA Megatron-LM
Megatron-LM
serves as a lightweight research-oriented framework leveraging
Megatron-Core
for training LLMs at unparalleled speed. Megatron-Core, the main component, is an open-source library that contains GPU-optimized techniques and cutting-edge system-level optimizations essential for large-scale training.
Megatron-Core supports various advanced model parallelism techniques, including tensor, sequence, pipeline, context, and MoE expert parallelism. This library offers
customizable building blocks
, training resiliency features such as
fast distributed checkpointing
, and many other innovations such as
Mamba-based hybrid model training
. Itâs compatible with all NVIDIA Tensor Core GPUs, and includes support for
Transformer Engine (TE)
with FP8 precision introduced with
NVIDIA Hopper architecture
.
Model architecture and training settings
Table 1 provides an overview of the model architecture for this project, which follows
Llama 2 architecture
.
Parameter                   Value
Hidden size                 12288
FFN intermediate size       38464
Number of layers            96
Number of attention heads   96
Number of query groups      16
Activation function         SwiGLU
Position embedding          RoPE
Normalization               RMSNorm
Table 1. Overview of LLM-jp 172B model architecture
The LLM-jp 172B model is being trained from scratch using 2.1 trillion tokens of a multilingual corpus developed for the project consisting mainly of Japanese and English. The training is performed using NVIDIA H100 Tensor Core GPUs on Google Cloud A3 Instance with FP8 hybrid training using the Transformer Engine. Megatron-Core v0.6 and Transformer Engine v1.4 are used in the experiment.
Table 2 shows hyperparameter settings for training.
Parameter           Value
LR                  1E-4
min LR              1E-5
LR warmup iters     2000
Weight decay        0.1
Grad clip           1.0
Global batch size   1728
Context length      4096
Table 2. Hyperparameters used for the model training
In addition, z-loss and batch-skipping techniques, which are used in
PaLM
, are incorporated to stabilize the training process, and flash attention is used to further speed up the training process.
To view other training configurations, please see
llm-jp/Megatron-LM
.
Training throughput and results
Pretraining for the latest LLM-jp 172B model is currently underway, with periodic evaluations every few thousand iterations to monitor training progress and ensure successful accuracy results on Japanese and English downstream tasks (Figure 1). So far, over 80% of the targeted 2.1 trillion tokens have been processed.
Figure 1. Training loss curves for pretraining with 1.7 trillion tokens using Megatron FP8 hybrid training
Notably, there is a sharp increase in TFLOP/s after approximately 7,000 iterations, corresponding to the transition from BF16 to FP8-hybrid precision. In this experiment, BF16 plus TE was used for training before 7,000 iterations, and FP8 hybrid plus TE was used after 7,000 iterations. In Megatron-LM, it is possible to enable hybrid FP8 training with the simple option --fp8-format hybrid. Note that this feature is experimental, with further optimizations coming soon.
Figure 2. Training throughput (TFLOP/s) when TE is used with BF16 and FP8 hybrid
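For readers who want to see how the settings in Tables 1 and 2 and the --fp8-format hybrid switch map onto a Megatron-LM launch, the fragment below is a minimal, illustrative sketch of pretraining arguments. Flag names and availability can differ between Megatron-LM versions, so treat it as an approximation rather than the project's actual launch script; see llm-jp/Megatron-LM for the real configuration.

```python
# Illustrative only: how the Table 1/2 settings and the FP8 switch could
# appear as Megatron-LM pretraining arguments. Flag names may differ
# between Megatron-LM versions.

megatron_args = [
    "--num-layers", "96",
    "--hidden-size", "12288",
    "--ffn-hidden-size", "38464",
    "--num-attention-heads", "96",
    "--group-query-attention", "--num-query-groups", "16",
    "--swiglu",
    "--position-embedding-type", "rope",
    "--normalization", "RMSNorm",
    "--seq-length", "4096",
    "--global-batch-size", "1728",
    "--lr", "1e-4",
    "--min-lr", "1e-5",
    "--lr-warmup-iters", "2000",
    "--weight-decay", "0.1",
    "--clip-grad", "1.0",
    "--bf16",                      # initial phase of the run
    # After ~7,000 iterations the run switched to FP8 hybrid:
    # "--fp8-format", "hybrid",
]
print(" ".join(megatron_args))
```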
The reason we started the training with BF16 plus TE and then switched to FP8 hybrid was not only to see the tokens/sec performance difference between BF16 and FP8, but also to make the initial training more stable. In the early stages of training, the learning rate (LR) increases due to the warm-up, leading to unstable training.
We chose to perform the initial training with BF16, and after confirming that there were no problems with the values of training loss, optimizer states, gradient norm, and so on, we switched to FP8 to speed up the training process. FP8 hybrid has improved the training speed. We observed a training speed of 545-553 TFLOP/s with Megatron-LM.
Figure 3. Weak scaling performance based on the results of the main and preliminary experiments of the LLM-jp 172B model training
Conclusion
As mentioned above, the training of LLM-jp 172B is still ongoing using Megatron-LM. Based on the evaluation results of downstream tasks using the current checkpoint data, we suppose that the model has already acquired excellent Japanese language capabilities, but the complete model is expected to be ready early next year. Training time is often a significant challenge in pretraining LLMs, where vast datasets are required. Therefore, efficient training frameworks like Megatron-LM are crucial for accelerating generative AI research and development. For the 172B model trained with Megatron-LM, we explored FP8-hybrid training as a potential method for improving training speed, achieving a 1.4x acceleration from 400 TFLOP/s to 550 TFLOP/s. This result suggests that FP8-hybrid could be a valuable approach for enhancing the efficiency of large-scale model pretraining.
2
minutes
çæ AI ã¯ããã®åè¶ããèœåã®ãããã§ãåŸæ¥ã®æ©æ¢°åŠç¿ææ³ã§ã¯ã§ããªãã£ãã¿ã¹ã¯ãå®è¡ãã泚ç®ãéããŠããŸããäŸãã°ãèªç¶èšèªåŠçã®åéã§ã¯ã
倧èŠæš¡èšèªã¢ãã« (LLM)
ãç»å Žããããšã§ããã£ãããããã«ããã«ã¹ã¿ã㌠ãµããŒããäŒè°å
容ã®èŠçŽãªã©ããããŸã§äººéãæ
ã£ãŠãã圹å²ã AI ã代ããã«è¡ããªã©å€ãã®é©æ°çã§åµé çãªãŠãŒã¹ ã±ãŒã¹ãçãŸããŠããŸãã
LLM ã¯ããã©ã¡ãŒã¿ãŒæ°ã®å¢å ã MoE (Mixture of Experts) ã®ãããªæ°ããã¢ã«ãŽãªãºã ã®æ¡çšãªã©ãæ§ã
ãªã¢ãããŒããéããŠé²åãç¶ããŠãããå°å£²æ¥ã補é æ¥ãéèæ¥ãªã©ãããŸããŸãªæ¥çãžã®å¿çšãšé©çšãæåŸ
ãããŠããŸãã
ããããçŸåš LLM ãªãŒããŒããŒãã®äžäœã¢ãã«ã®å€ãã¯ãè±èªã«æ¯ã¹ãŠæ¥æ¬èªã®ç解床ãããã©ãŒãã³ã¹ãäœãåŸåã«ãããŸãããã®çç±ã®äžã€ã¯ãåŠç¿ã³ãŒãã¹ã®è±èªããŒã¿ã®å²åã倧ããããšã§ããäŸãã°ã
GPT-3 ã®å Žåãæ¥æ¬èªããŒã¿ã¯ã³ãŒãã¹ã® 0.11% ãããããŸãã
ãæ¥æ¬ã®çæ AI ã®çºå±ã®ããã«ã¯ãéåžžã«å°é£ã§ããè±èªãããåŠç¿ããŒã¿ã®å°ãªãæ¥æ¬èªã§åªããæ§èœãçºæ®ãã LLM ã¢ãã«ãäœæããããšããä¹ãè¶ããã¹ãéèŠãªèª²é¡ã§ãã
æ¬çš¿ã§ã¯ã
GENIAC (Generative AI Accelerator Challenge)
ãããžã§ã¯ãã®äžç°ãšããŠåãçµãã ãMegatron-LM ãçšãã 172B 倧èŠæš¡èšèªã¢ãã«ã®åŠç¿ããåŸãããç¥èŠã玹ä»ããããŒã¿äžè¶³ã®åé¡ãä¹ãè¶ããŠæ¥æ¬èªç解èœåã®é«ãã¢ãã«äœæã«åãçµãã éã®æŽå¯ã«ã€ããŠçŽ¹ä»ããŸãã
GENIAC ã«ããã LLM-jp ã®åãçµã¿
äžèšã§è¿°ã¹ããããªèª²é¡ã解決ããããã«ãçµæžç£æ¥çã¯ãæ¥æ¬åœå
ã®ãã©ãããã©ãŒã ã¢ãã«éçºåã®åäžãšäŒæ¥çã®åµæ工倫ã奚å±ãããããã
Generative AI Accelerator Challenge (GENIAC)
ããç«ã¡äžããŸãããGENIAC ã§ã¯ãèšç®è³æºã®æäŸãäŒæ¥ãšããŒã¿ä¿æè
ãšã®ãããã³ã°æ¯æŽãã°ããŒãã«ããã¯äŒæ¥ãšã®é£æºä¿é²ãã³ãã¥ãã㣠ã€ãã³ãã®éå¬ãéçºããããã©ãããã©ãŒã ã¢ãã«ã®æ§èœè©äŸ¡ãªã©ãçŸåšãç¶ç¶ããŠè¡ã£ãŠããŸãã
ãã®åãçµã¿ã«ã
LLM-jp
ã®æ¥æ¬èªå¯Ÿå¿åã«åªãã
å®å
šãªãŒãã³ãª 172B ã¢ãã«
ã®éçºãšããããŒããéžã°ããŸããã172B ã¯åœæ (2024 幎 2 æãã 8 æ) æ¥æ¬åœå
ã§æ倧èŠæš¡ã®ã¢ãã«éçºã§ããããã®éçºããŠããŠãåºãå
±æããããšã¯éåžžã«ææ矩ãªããšã§ããã
LLM-jp ã¯ã
NII (åœç«æ
å ±åŠç 究æ)
ãäžå¿ãšããèªç¶èšèªåŠçãèšç®æ©ã·ã¹ãã åéã®ç 究è
ãäžå¿ãšãªã£ãŠç«ã¡äžããåãçµã¿ã§ã倧èŠæš¡ã¢ãã«ãæ±åæ§èœãç²åŸããä»çµã¿ãåŠç¿ã®å¹çæ§ãšãã£ãåŠç¿åçã®æ°åŠç解æã«é¢ããããŠããŠããå®å
šã«ãªãŒãã³ã§åçšå©çšå¯èœãªã¢ãã«ã®ç¶ç¶çãªéçºãéããŠèç©ããäºãããã³åŠç¿ã®å¹çæ§ã«é¢ããããŠããŠãèç©ããããšãç®çãšããŠããŸãã
NVIDIA Megatron-LM
Megatron-LM
ã¯ã
Megatron-Core
ã掻çšããŠå€§èŠæš¡èšèªã¢ãã« (LLM) ãæ¯é¡ã®ãªãé床ã§åŠç¿ãã軜éãªç 究æåãã¬ãŒã ã¯ãŒã¯ãšããŠæ©èœããŸããäž»èŠã³ã³ããŒãã³ãã§ãã Megatron-Core ã¯ã倧èŠæš¡ãªåŠç¿ã«äžå¯æ¬ 㪠GPU æé©åæè¡ãšæå
端ã®ã·ã¹ãã ã¬ãã«ã®æé©åãå«ãã©ã€ãã©ãªã§ãã
Megatron-Core ã¯ããã³ãœã«ãã·ãŒã±ã³ã¹ããã€ãã©ã€ã³ãã³ã³ããã¹ããMoE ãšãã¹ããŒã䞊ååŠçãªã©ãããŸããŸãªé«åºŠãªã¢ãã«äžŠååŠçææ³ããµããŒãããŠããŸãããã®ã©ã€ãã©ãªã¯ã
ã«ã¹ã¿ãã€ãºå¯èœãªãã«ãã£ã³ã° ãããã¯
ã
é«éåæ£ãã§ãã¯ãã€ã³ã
ãªã©ã®åŠç¿å埩åæ©èœã
Mamba ããŒã¹ã®ãã€ããªãã ã¢ãã«åŠç¿
ãªã©ã®ä»ã®å€ãã®ã€ãããŒã·ã§ã³ãæäŸããŸãããã¹ãŠã® NVIDIA Tensor ã³ã¢ GPU ãšäºææ§ãããã
NVIDIA Hopper ã¢ãŒããã¯ãã£
ã§å°å
¥ããã FP8 粟床㮠Transformer Engine ãã¯ãããžã®ãµããŒããå«ãŸããŠããŸãã
äžèšã®ãããªæå
端ã®æ©èœãæäŸããããšã§ãMegatron-LM ã¯ãç 究è
ãã¢ãã«éçºè
ã 1,000 åã®ãã©ã¡ãŒã¿ãŒãè¶
ããã¢ãã«ã§ããé«éãªåŠç¿ãšæ°åã® GPU ã¹ã±ãŒã«ãžã®ã¹ã±ãŒã©ããªãã£ãå®çŸã§ããããã«ããŸãã
ã¢ãã« ã¢ãŒããã¯ãã£ãšåŠç¿èšå®
以äžã¯ã
Meta ã® Llama2
ã¢ãŒããã¯ãã£ã«æºæ ãããã®ãããžã§ã¯ãã®ã¢ãã« ã¢ãŒããã¯ãã£ã®æŠèŠã§ãã
ãã©ã¡ãŒã¿ãŒ
å€
Hidden size
12288
FFN Intermediate size
38464
Number of layers
96
Number of attention heads
96
Number of query groups
16
Activation function
SwiGLU
Position embedding
RoPE
Normalization
RMSNorm
è¡š 1. LLM-jp 172B ã¢ãã«ã¢ãŒããã¯ãã£æŠèŠ
ãã® 172B ã¢ãã«ã¯ããããžã§ã¯ãçšã«éçºãããå€èšèªã³ãŒãã¹ (äž»ã«æ¥æ¬èªãšè±èª) ã® 2.1T ããŒã¯ã³ (2.1 å
ããŒã¯ã³) ã䜿çšããŠãŒãããåŠç¿ãããŠããŸããåŠç¿ã¯ãTransformer Engine ã䜿çšãã FP8 ãã€ããªããåŠç¿ã§ãGoogle Cloud ã® A3 ã€ã³ã¹ã¿ã³ã¹äžã® H100 Tensor ã³ã¢ GPU ã䜿çšããŠå®è¡ãããŠããŸããå®éšã§ã¯ã
Megatron-Core
v0.6 ãš
Transformer Engine
v1.4 ã䜿çšãããŠããŸãã
åŠç¿ã®ãã€ããŒãã©ã¡ãŒã¿ãŒèšå®ã¯æ¬¡ã®ãšããã§ãã
ãã©ã¡ãŒã¿ãŒ
å€
LR
1E-4
min LR
1E-5
LR WARMUP iters
2000
Weight Decay
0.1
Grad Clip
1.0
global batch size
1728
context length
4096
è¡š 2. ãã®å®éšã§äœ¿çšãããã€ããŒãã©ã¡ãŒã¿ãŒã®æŠèŠ
詳现ãªèšå®ã«èå³ã®ããæ¹ã¯ãä»ã®åŠç¿èšå®ã
llm-jp/Megatron-LM
ã§ã芧ããã ããŸãã
ãŸãã
PaLM
ã§æ¡çšãããŠãããz-loss ã batch-skipping ãã¯ããã¯ãåãå
¥ããããšã§åŠç¿ããã»ã¹ãå®å®åãããflash attention ãå©çšããããšã§åŠç¿ããã»ã¹ãããã«é«éåãããŠããŸãã
åŠç¿çµæãšã¹ã«ãŒããã
LLM-jp 172B ã¢ãã«ã®äºååŠç¿ã¯ãæ°åã€ãã¬ãŒã·ã§ã³ããšã«ãæ¥æ¬èªãšè±èªã®äžæµã¿ã¹ã¯ã®è©äŸ¡çµæãã¢ãã¿ãŒããåŠç¿ãããŸãé²ãã§ãããã©ããã確èªããªããçŸåšãé²è¡äžã§ãããããŸã§ã®ãšãããç®æšãšãã 2 å
1,000 åããŒã¯ã³ã® 80% 匷ãŸã§å®äºããŠããŸãã
Megatron ã® FP8 ãã€ããªããåŠç¿ãçšãã 1.7 å
ããŒã¯ã³ã®äºååŠç¿ã«ãããåŠç¿æ倱æ²ç·ã以äžã«ç€ºããŸãããã®æ²ç·ã¯ã240,000 ã¹ããããŸã§æ倱ãçå®ã«æžå°ããŠããããšã瀺ããŠããŸãã
å³ 1. 240k ã¹ããããŸã§ã®åŠç¿ãã¹
以äžã®ã°ã©ãã¯ãY 軞㯠TFLOP/sãX 軞ã¯ã€ãã¬ãŒã·ã§ã³åæ°ã瀺ããŠããŸãã泚ç®ãã¹ãã¯ãåŠç¿ã BF16 ãã FP8 ãã€ããªãããžåãæ¿ããçŽ 7,000 åã®ã€ãã¬ãŒã·ã§ã³ã®ã¿ã€ãã³ã°ã§ãTFLOP/s ãæ¥æ¿ã«å¢å ããŠããããšã§ãããã®å®éšã§ã¯ãBF16 + Transformer Engine ã 7,000 å以åã®åŠç¿ã«äœ¿çšãããFP8 ãã€ããªãã + Transformer Engine ã 7000 å以éã®åŠç¿ã«äœ¿çšãããŸãããMegatron-LM ã§ã¯ãåçŽãªãªãã·ã§ã³
--fp8-format
â
hybrid
â 㧠FP8 ãã€ããªããåŠç¿ãæå¹ã«ããããšãã§ããŸãã
å³ 2. Transformer Engine ã BF16 ãš FP8 ã®ãã€ããªããã§äœ¿çšããå Žåã®åŠç¿ã¹ã«ãŒããã (TFLOP/s)
BF16 + Transformer Engine ã§åŠç¿ãéå§ãããã®åŸ FP8 ãã€ããªããã«åãæ¿ããçç±ã¯ãBF16 ãš FP8 ã§ã©ã®çšåºŠã® tokens/sec ã®æ§èœå·®ãããããèŠãããã ãã§ãªããåæåŠç¿ãããå®å®ãããããã§ããããŸããåŠç¿ã®åæ段éã§ã¯ããŠã©ãŒã ã¢ããã«ããåŠç¿ç (LR) ãäžæããåŠç¿ãäžå®å®ã«ãªããŸããããã§ãåæåŠç¿ã¯ BF16 ã§è¡ããåŠç¿æ倱ã®å€ããªããã£ãã€ã¶ãŒã®ç¶æ
ãåŸé
ãã«ã ãªã©ã«åé¡ããªãããšã確èªããåŸãFP8 ã«åãæ¿ããŠåŠç¿ãé«éåããããšã«ããŸãããFP8 ãã€ããªããã«ããåŠç¿é床ãåäžããŠããããšãåãããŸãã
ç§ãã¡ã¯ãæçµçã« Megatron-LM ãçšã㊠545-553TFLOP/s ã®æ§èœãéæã§ããããšã確èªããŸããã以äžã¯ãLLM-jp 172B ã¢ãã«åŠç¿ã®æ¬å®éšãšäºåå®éšã®çµæã«åºã¥ãã匱ã¹ã±ãŒãªã³ã°æ§èœã®ã°ã©ãã§ãããã®ã°ã©ãã§ã¯ãY 軞ã Aggregate Throughput ãè¡šããX 軞ãåŠç¿ã«äœ¿çšãã GPU ã®æ°ãè¡šããŠããŸããLlama2 7BãLlama2 13BãLLM-jp 172B ã®åŠç¿çµæã¯ãç·åœ¢ã¹ã±ãŒãªã³ã°ã瀺ããŠããããšãåãããŸãã
å³ 3. LLM-jp 172B ã¢ãã«å®éšã®åŒ±ã¹ã±ãŒãªã³ã°æ§èœ
ãŸãšã
åè¿°ã®éããLLM-jp 172B ã®åŠç¿ã¯çŸåšã Megatron-LM ãçšããŠé²è¡äžã§ããçŸåšã®ãã§ãã¯ãã€ã³ã ããŒã¿ãçšããäžæµã¿ã¹ã¯ã®è©äŸ¡çµæãããçŸç¶ã§ãæ¢ã«åªããæ¥æ¬èªèœåãç²åŸããŠãããšæšå¯ãããŸãããå®å
šãªã¢ãã«ã¯æ¥å¹Žã®åãã«å®æäºå®ã§ãã
èšå€§ãªããŒã¿ã»ãããå¿
èŠãšãã倧èŠæš¡èšèªã¢ãã«ã®äºååŠç¿ã«ãããŠãåŠç¿æéã¯ãã°ãã°å€§ããªèª²é¡ãšãªããŸãããã®ãããMegatron-LM ã®ãããªå¹ççã«åŠç¿å¯èœãªãã¬ãŒã ã¯ãŒã¯ã¯ãçæ AI ã®ç 究éçºãå éãããçºã«éåžžã«éèŠã§ãã
Megatron-LM
ã§åŠç¿ãã 172B ã¢ãã«ã«ãããŠãFP8-hybrid åŠç¿ãåŠç¿é床ãåäžãããå¹æçãªææ³ã§ããããšãå®èšŒãã1.4 åã®é«éå (400 TFLOP/s â 550 TFLOP/s) ãéæããŸããããã®çµæã¯ãFP8-hybrid ã倧èŠæš¡ã¢ãã«ã®äºååŠç¿ã®å¹çãåäžãããæçšãªã¢ãããŒãã§ããããšã匷調ããŠããŸãã |
https://developer.nvidia.com/blog/5x-faster-time-to-first-token-with-nvidia-tensorrt-llm-kv-cache-early-reuse/ | 5x Faster Time to First Token with NVIDIA TensorRT-LLM KV Cache Early Reuse | In our previous
blog post
, we demonstrated how reusing the key-value (KV) cache by offloading it to CPU memory can accelerate time to first token (TTFT) by up to 14x on x86-based NVIDIA H100 Tensor Core GPUs and 28x on the NVIDIA GH200 Superchip. In this post, we shed light on KV cache reuse techniques and best practices that can drive even further TTFT speedups.
Introduction to KV cache
LLM models are rapidly being adopted for many tasks, including question answering and code generation. To generate a response, these models begin by converting the user's prompt into tokens, which are then transformed into dense vectors. Extensive dot-product operations follow to mathematically model the relationships between the tokens and build a contextual understanding of the user input. The computational cost of generating this contextual understanding increases quadratically with the length of the input sequence.
This resource-intensive process generates keys and values, which are cached to avoid recomputation when generating subsequent tokens. Reusing the KV cache reduces the computational load and time needed to generate additional tokens, leading to a faster and more efficient user experience.
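As a rough illustration of why the cache helps, the toy decode loop below stores the per-token key and value projections so each new token only computes attention against cached tensors instead of reprocessing the whole sequence. It is a conceptual sketch, not TensorRT-LLM code.

```python
# Toy illustration of KV caching in autoregressive decoding (not TensorRT-LLM code).
import numpy as np

d = 64                                   # head dimension
Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))
k_cache, v_cache = [], []                # grows by one entry per generated token

def decode_step(x_t: np.ndarray) -> np.ndarray:
    """Attend the newest token against all cached keys/values."""
    q, k, v = x_t @ Wq, x_t @ Wk, x_t @ Wv
    k_cache.append(k)                    # reused on every later step: no recomputation
    v_cache.append(v)
    K, V = np.stack(k_cache), np.stack(v_cache)
    scores = (K @ q) / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V                   # attention output for this step

for _ in range(5):                       # generate 5 tokens
    out = decode_step(np.random.randn(d))
```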
When reusing the KV cache, careful attention must be given to how long it remains in memory, which components to evict first when memory is full, and when it can be reused for new incoming prompts. Optimizing these factors can lead to incremental performance improvements in KV cache reuse. NVIDIA TensorRT-LLM offers three key features that specifically address these areas.
Early KV cache reuse
Traditional reuse algorithms require the entire KV cache computation to be completed before any portions of it can be reused with new user prompts. In scenarios such as enterprise chatbots, where system prompts (predefined instructions added to user queries) are essential to direct the LLM's responses in line with enterprise guidelines, this method can be inefficient.
When a surge of users interacts with the chatbot simultaneously, each user would require a separate computation of the system prompt KV cache. With TensorRT-LLM, we can instead reuse the system prompt as it is being generated in real time, enabling it to be shared across all users during the burst, rather than recalculating it for each user. This can significantly accelerate inference for use cases requiring system prompts by up to 5x.
Figure 1. TensorRT-LLM KV cache reuse can speed up TTFT by up to 5x
Flexible KV cache block sizing
In reuse implementations, only entire cache memory blocks can be allocated for reuse. For example, if the cache memory block size is 64 tokens and the KV cache is 80 tokens, only 64 tokens will be stored for reuse, while the remaining 16 tokens will need to be recomputed. However, if the memory block size is reduced to 16 tokens, all 80 tokens can be stored across five memory blocks, eliminating the need for recomputation.
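The arithmetic in this example generalizes easily. The snippet below, a simple illustration rather than TensorRT-LLM's allocator, computes how many of a prompt's tokens land in fully populated blocks and are therefore reusable for a given block size.

```python
# How many tokens of a KV cache fall into complete, reusable blocks.
def reusable_tokens(cache_tokens: int, block_size: int) -> int:
    return (cache_tokens // block_size) * block_size

for block_size in (64, 32, 16, 8):
    kept = reusable_tokens(80, block_size)
    print(f"block size {block_size:>2}: {kept} of 80 tokens reusable, "
          f"{80 - kept} recomputed")
# block size 64: 64 of 80 tokens reusable, 16 recomputed
# block size 16: 80 of 80 tokens reusable,  0 recomputed
```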
This effect is most pronounced when the input sequences are short. For long input sequences, larger blocks can be more beneficial. As is clear, the more granular the control you have over the KV cache, the better you can optimize it for your specific use case.
TensorRT-LLM provides fine-grained control over KV cache memory blocks, giving developers the ability to chop them into smaller blocks of anywhere from 64 down to 2 tokens. This optimizes the usage of allocated memory, increases reuse rates, and improves TTFT. When running Llama 70B on NVIDIA H100 Tensor Core GPUs, we can speed up TTFT by up to 7% in multi-user environments by reducing the KV cache block size from 64 tokens to 8 tokens.
Figure 2. Impact of changing KV cache block size on inference speedup
Efficient KV cache eviction protocols
Partitioning the KV cache into smaller blocks and evicting unused ones can be effective for memory optimization, but it introduces dependency complexities. When a specific block is used to generate a response, and the result is stored as a new block, it can form a tree-like structure of dependencies.
Over time, the counters tracking the usage of the source blocks (the branches) may become stale as the dependent nodes (the leaves) are reused. Evicting the source block then requires the eviction of all dependent blocks, which would require recalculation of the KV cache for new user prompts, increasing TTFT.
To address this challenge, TensorRT-LLM includes intelligent eviction algorithms that can trace the dependent nodes from their source nodes and evict dependent nodes first, even if they have more recent reuse counters. This ensures more efficient memory management while preventing unnecessary evictions of dependent blocks.
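To make the leaf-first idea concrete, here is a minimal, hypothetical sketch of choosing an eviction victim in a block tree: blocks that still have dependents are skipped, so a parent is never evicted before its children, regardless of raw usage counters. This illustrates the principle only and is not the actual TensorRT-LLM algorithm.

```python
# Illustrative leaf-first eviction over a tree of cache blocks
# (not the actual TensorRT-LLM implementation).
from dataclasses import dataclass, field

@dataclass
class Block:
    name: str
    last_used: int                 # higher = more recently reused
    children: list = field(default_factory=list)

def pick_victim(blocks: list[Block]) -> Block:
    """Evict the least recently used block that has no dependents (a leaf)."""
    leaves = [b for b in blocks if not b.children]
    return min(leaves, key=lambda b: b.last_used)

leaf_a = Block("leaf_a", last_used=10)
leaf_b = Block("leaf_b", last_used=7)
root = Block("system_prompt", last_used=3, children=[leaf_a, leaf_b])

# Even though the root has the oldest counter, a leaf is evicted first.
print(pick_victim([root, leaf_a, leaf_b]).name)  # leaf_b
```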
Figure 3. A logical representation of the KV cache eviction algorithm, showing how it can reduce the number of evicted blocks and increase the likelihood of reuse
Getting started with TensorRT-LLM KV cache reuse
Generating KV cache during inference requires a lot of compute and memory resources. Using it efficiently is critical to improving model response, accelerating inference, and increasing system throughput. TensorRT-LLM provides advanced reuse features for developers looking to further optimize TTFT response times for peak performance.
To start using TensorRT-LLM KV cache reuse check out our
GitHub documentation
. | https://developer.nvidia.com/ja-jp/blog/5x-faster-time-to-first-token-with-nvidia-tensorrt-llm-kv-cache-early-reuse/ | NVIDIA TensorRT-LLM ã® KV Cache Early Reuseã§ãTime to First Token ã 5 åé«éå | Reading Time:
2
minutes
以åã®
ããã°èšäº
ã§ã¯ãkey-value (KV) ãã£ãã·ã¥ã CPU ã¡ã¢ãªã«ãªãããŒãããŠåå©çšããããšã§ãæåã®ããŒã¯ã³ãåºåããããŸã§ã®æé (TTFT: Time To First Token) ã x86 ããŒã¹ã® NVIDIA H100 Tensor ã³ã¢ GPU ã§æ倧 14 åãNVIDIA GH200 Superchip ã§æ倧 28 åã«é«éåã§ããæ¹æ³ãã玹ä»ããŸãããæ¬èšäºã§ã¯ãKV ãã£ãã·ã¥ã®åå©çšæè¡ãšãTTFT ã®ãããªãé«éåãå®çŸãããã¹ããã©ã¯ãã£ã¹ã«ã€ããŠè§£èª¬ããŸãã
KV ãã£ãã·ã¥ã®æŠèŠ
LLM ã¢ãã«ã¯ã質ååçãã³ãŒãçæãªã©ãå€ãã®ã¿ã¹ã¯ã§æ¥éã«æ¡çšãããŠããŸããå¿çãçæããã«ãããããããã®ã¢ãã«ã¯ãŸãããŠãŒã¶ãŒã®ããã³ãããããŒã¯ã³ãžå€æãããã®åŸãããã®ããŒã¯ã³ãå¯ãã¯ãã«ãžãšå€æããŸããèšå€§ãªãããç©æŒç®ããã®åŸã«ç¶ãããã®åŸããŒã¯ã³éã®é¢ä¿æ§ãæ°åŠçã«ã¢ãã«åãããŠãŒã¶ãŒå
¥åã«å¯Ÿããæèç解ãæ§ç¯ããŸãããã®æèç解ãçæããããã«ãããèšç®ã³ã¹ãã¯ãå
¥åã·ãŒã±ã³ã¹ã®é·ãã®äºä¹ã«æ¯äŸããŠå¢å ããŸãã
ãã®ãªãœãŒã¹ã倧éã«æ¶è²»ããããã»ã¹ãã key ãšvalue ãçæãããåŸç¶ã®ããŒã¯ã³ãçæãããšãã«å床èšç®ãããªãããã«ãã£ãã·ã¥ãããŸããKV ãã£ãã·ã¥ãåå©çšããããšã§ãè¿œå ã®ããŒã¯ã³ãçæããéã«å¿
èŠãšãªãèšç®è² è·ãšæéã軜æžãããããé«éã§å¹ççãªãŠãŒã¶ãŒäœéšãå®çŸããŸãã
KV ãã£ãã·ã¥ãåå©çšãããšãã«ã¯ããã£ãã·ã¥ãã¡ã¢ãªã«æ®ãæéãã¡ã¢ãªãäžæ¯ã«ãªã£ããšãã«æåã«åé€ããã³ã³ããŒãã³ããããã³æ°ããå
¥åããã³ããã«åå©çšã§ããã¿ã€ãã³ã°ãªã©ã®ç¹ã«çŽ°å¿ã®æ³šæãæãå¿
èŠããããŸãããããã®èŠå ãæé©åããããšã§ãKV ãã£ãã·ã¥ã®åå©çšã«ãããããã©ãŒãã³ã¹ã®æ®µéçãªå¢å ãžãšã€ãªããããšãã§ããŸããNVIDIA TensorRT-LLM ã¯ããããã®åéã«ç¹åãã 3 ã€ã®äž»èŠãªæ©èœãæäŸããŸãã
Early KV cache reuse
åŸæ¥ã®åå©çšã¢ã«ãŽãªãºã ã§ã¯ãKV ãã£ãã·ã¥ããã®äžéšã§ãã£ãŠãæ°ãããŠãŒã¶ãŒ ããã³ããã§åå©çšããããã«ã¯ãäºåã«ãã¹ãŠã® KV ãã£ãã·ã¥ã®èšç®ãå®äºãããŠããå¿
èŠããããŸããããã®æ¹æ³ã¯ãLLM ã®ã¬ã¹ãã³ã¹ãäŒæ¥ã®ã¬ã€ãã©ã€ã³ã«æ²¿ã£ããã®ã«ããããã«ãã·ã¹ãã ããã³ãã (ãŠãŒã¶ãŒã®åãåããã«è¿œå ãããäºåå®çŸ©ã®æ瀺) ãäžå¯æ¬ ãšãªãäŒæ¥åããã£ããããããªã©ã®ã·ããªãªã§ã¯ãéå¹ççã§ããå¯èœæ§ããããŸãã
ãã£ããããããšåæã«ããåããããŠãŒã¶ãŒãæ¥å¢ããå ŽåãåãŠãŒã¶ãŒã«å¯ŸããŠã·ã¹ãã ããã³ãã KV ãã£ãã·ã¥ãåå¥ã«èšç®ããå¿
èŠããããŸããTensorRT-LLM ã§ã¯ããªã¢ã«ã¿ã€ã ã§çæãããã·ã¹ãã ããã³ãããåå©çšããããšãã§ãããããæ¥å¢æã«ã¯ãã¹ãŠã®ãŠãŒã¶ãŒãšå
±æããããšãã§ãããŠãŒã¶ãŒããšã«åèšç®ããå¿
èŠããããŸãããããã«ãããã·ã¹ãã ããã³ãããå¿
èŠãšãããŠãŒã¹ ã±ãŒã¹ã®æšè«ãæ倧 5 åã«ãŸã§é«éåããããšãã§ããŸãã
å³ 1. TensorRT-LLM KV cache reuse ã«ãããTTFT ãæ倧 5 åé«éå
æè»ãª KV ãã£ãã·ã¥ ããã㯠ãµã€ãº
åå©çšãå®è£
ããéã«ã¯ããã£ãã·ã¥ ã¡ã¢ãª ãããã¯å
šäœã®ã¿ãåå©çšã«å²ãåœãŠãããšãã§ããŸããäŸãã°ããã£ãã·ã¥ ã¡ã¢ãª ããã㯠ãµã€ãºã 64 ããŒã¯ã³ã§ãKV ãã£ãã·ã¥ã 80 ããŒã¯ã³ã§ããå Žåãåå©çšã®ããã«ä¿åã§ããã®ã¯ 64 ããŒã¯ã³ã®ã¿ã§ãããæ®ãã® 16 ããŒã¯ã³ã¯åèšç®ããå¿
èŠããããŸããããããªãããã¡ã¢ãª ããã㯠ãµã€ãºã 16 ããŒã¯ã³ã«æžãããšã64 ããŒã¯ã³ãã¹ãŠã 5 ã€ã®ã¡ã¢ãª ãããã¯ã«æ ŒçŽããããšãã§ããåèšç®ã®å¿
èŠæ§ããªããªããŸãã
ãã®å¹æã¯ãå
¥åã·ãŒã±ã³ã¹ãçããšãã«æãé¡èã«çŸããŸããé·ãå
¥åã·ãŒã±ã³ã¹ã®å Žåã¯ããã倧ããªãããã¯ã®æ¹ãããæçã§ããæããã«ãKV ãã£ãã·ã¥ããã现ããå¶åŸ¡ã§ããã°ã§ããã»ã©ãç¹å®ã®ãŠãŒã¹ ã±ãŒã¹ã«åãããæé©åãåäžããŸãã
TensorRT-LLM ã§ã¯ãKV ãã£ãã·ã¥ ã¡ã¢ãª ãããã¯ããã现ããå¶åŸ¡ã§ãããããéçºè
㯠KV ãã£ãã·ã¥ ã¡ã¢ãª ãããã¯ã 64 ãã 2 ããŒã¯ã³ãŸã§ãããå°ããªãããã¯ã«åå²ããããšãã§ããŸããããã«ãããå²ãåœãŠãããã¡ã¢ãªã®äœ¿çšãæé©åãããåå©çšçãäžæããTTFT ãæ¹åãããŸããNVIDIA H100 Tensor ã³ã¢ GPU 㧠LLAMA70B ãå®è¡ããå ŽåãKV ãã£ãã·ã¥ ãããã¯ãµã€ãºã 64 ããŒã¯ã³ãã 8 ããŒã¯ã³ãžãšæžããããšã§ããã«ããŠãŒã¶ãŒç°å¢ã§ TTFT ãæ倧 7% é«éåã§ããŸãã
å³ 2. KV ãã£ãã·ã¥ ããã㯠ãµã€ãºã®å€æŽã«ããæšè«ã®é«éå
å¹çç㪠KV ãã£ãã·ã¥ã®é€å€ (Eviction) ãããã³ã«
KV ãã£ãã·ã¥ãããå°ããªãããã¯ã«åå²ããæªäœ¿çšã®ãããã¯ãé€å€ããããšã¯ãã¡ã¢ãªã®æé©åã«å¹æçã§ãããäŸåé¢ä¿ã«è€éããçãŸããŸããç¹å®ã®ãããã¯ãã¬ã¹ãã³ã¹ã®çæã«äœ¿çšããããã®çµæãæ°ãããããã¯ãšããŠä¿åããããšãäŸåé¢ä¿ã®ããªãŒæ§é ã圢æãããå¯èœæ§ããããŸãã
æéã®çµéãšãšãã«ããœãŒã¹ ããã㯠(ãã©ã³ã) ã®äœ¿çšã远跡ããã«ãŠã³ã¿ãŒã¯ãåŸå±ããŒã (ãªãŒã) ãåå©çšãããã«ã€ããŠå€ããªãå¯èœæ§ããããŸãããœãŒã¹ ãããã¯ãé€å€ããã«ã¯ãåŸå±ãããã¹ãŠã®ãããã¯ãé€å€ããå¿
èŠããããæ°ãããŠãŒã¶ ããã³ããã® KV ãã£ãã·ã¥ãåèšç®ããå¿
èŠãçã㊠TTFT ãå¢å ããŸãã
ãã®èª²é¡ã«å¯ŸåŠããããã«ãTensorRT-LLM ã«ã¯ãåŸå±ããŒãããœãŒã¹ ããŒããã远跡ããåŸå±ããŒããããæè¿ã®åå©çšã«ãŠã³ã¿ãŒãæã£ãŠããå Žåã§ããæåã«åŸå±ããŒããé€å€ããããšãã§ããã€ã³ããªãžã§ã³ããªé€å€ã¢ã«ãŽãªãºã ãå«ãŸããŠããŸããããã«ãããããå¹ççã«ã¡ã¢ãªã管çã§ããããã«ãªããšå
±ã«ãåŸå±ãããã¯ã®äžèŠãªé€å€ãåé¿ã§ããŸãã
å³ 3. KV ãã£ãã·ã¥ã®é€å€ã¢ã«ãŽãªãºã ã®è«çãè¡šçŸããå³ãé€å€ããããããã¯ã®æ°ãæžãããåå©çšã®å¯èœæ§ãé«ããããæ§åã瀺ããŠããŸãã
TensorRT-LLM KV cache reuse ã䜿ãå§ãã
æšè«äžã« KV ãã£ãã·ã¥ãçæããã«ã¯ãå€ãã®èšç®ãšã¡ã¢ãª ãœãŒã¹ãå¿
èŠã«ãªããŸããå¹ççã«äœ¿çšããããšããã¢ãã«å¿çã®æ¹åãæšè«ã®é«éåãã·ã¹ãã ã¹ã«ãŒãããã®åäžã«ã¯äžå¯æ¬ ã§ããTensorRT-LLM ã¯ãããŒã¯æ§èœã®ããã« TTFT å¿çæéãããã«æé©åããããšããéçºè
ã«é«åºŠãªåå©çšæ©èœãæäŸããŸãã
TensorRT-LLM KV cache reuse ã䜿ãå§ããã«ã¯ã
GitHub ã®ããã¥ã¡ã³ã
ãåç
§ããŠãã ããã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
Speeding up LLM Inference With TensorRT-LLM (TensorRT-LLM ã«ãã LLM æšè«ã®é«éå)
GTC ã»ãã·ã§ã³:
Optimizing and Scaling LLMs With TensorRT-LLM for Text Generation (ããã¹ãçæã®ããã® TensorRT-LLM ã䜿çšãã LLM ã®æé©åãšã¹ã±ãŒãªã³ã°)
SDK:
Torch-TensorRT
SDK:
TensorRT
SDK:
TensorFlow-TensorRT |
https://developer.nvidia.com/blog/state-of-the-art-multimodal-generative-ai-model-development-with-nvidia-nemo/ | State-of-the-Art Multimodal Generative AI Model Development with NVIDIA NeMo | Generative AI
has rapidly evolved from text-based models to multimodal capabilities. These models perform tasks like image captioning and visual question answering, reflecting a shift toward more human-like AI. The community is now expanding from text and images to video, opening new possibilities across industries.
Video AI models are poised to revolutionize industries such as robotics, automotive, and retail. In
robotics
, they enhance autonomous navigation in complex, ever-changing environments, which is vital for sectors like manufacturing and warehouse management. In the automotive industry, video AI is propelling autonomous driving, boosting vehicle perception, safety, and predictive maintenance to improve efficiency.
To build image and video foundation models, developers must curate and preprocess a large amount of training data, tokenize the resulting high-quality data at high fidelity, train or customize pretrained models efficiently and at scale, and then generate high-quality images and videos during inference.
Announcing NVIDIA NeMo for multimodal generative AI
NVIDIA NeMo
is an end-to-end platform for developing, customizing, and deploying generative AI models.
NVIDIA just announced the expansion of NeMo to support the end-to-end pipeline for developing multimodal models. NeMo enables you to easily curate high-quality visual data, accelerate
training
and
customization
with highly efficient tokenizers and parallelism techniques, and reconstruct high-quality visuals during inference.
Accelerated video and image data curation
High-quality training data ensures high-accuracy results from an AI model. However, developers face various challenges in building data processing pipelines, ranging from scaling to data orchestration.
NeMo Curator
streamlines the data curation process, making it easier and faster for you to build multimodal generative AI models. Its out-of-the-box experience minimizes the total cost of ownership (TCO) and accelerates time-to-market.
While working with visuals, organizations can easily reach petabyte-scale data processing. NeMo Curator provides an orchestration pipeline that can load balance on multiple GPUs at each stage of the data curation. As a result, you can reduce video processing time by 7x compared to a naive GPU-based implementation. The scalable pipelines can efficiently process over 100 PB of data, ensuring the seamless handling of large datasets.
Figure 1. NVIDIA NeMo Curator video processing speed
NeMo Curator provides reference video curation models optimized for high-throughput filtering, captioning, and embedding stages to enhance dataset quality, empowering you to create more accurate AI models.
For instance, NeMo Curator uses an optimized captioning model that delivers an order of magnitude throughput improvement compared to unoptimized inference model implementations.
NVIDIA Cosmos tokenizers
Tokenizers map redundant and implicit visual data into compact and semantic tokens, enabling efficient training of large-scale generative models and democratizing their inference on limited computational resources.
Today's open video and image tokenizers often generate poor data representations, leading to lossy reconstructions, distorted images, and temporally unstable videos, placing a cap on the capability of generative models built on top of them. Inefficient tokenization processes also result in slow encoding and decoding and longer training and inference times, negatively impacting both developer productivity and the user experience.
NVIDIA Cosmos tokenizers are open models that offer superior visual tokenization with exceptionally large compression rates and cutting-edge reconstruction quality across diverse image and video categories.
Video 1. Efficient Generative AI Tokenizers for Image and Video
These tokenizers are easy to use through a suite of standardized tokenizer models that support vision-language models (VLMs) with discrete latent codes, diffusion models with continuous latent embeddings, and various aspect ratios and resolutions, enabling the efficient management of large-resolution images and videos. This provides you with tools for tokenizing a wide variety of visual input data to build image and video AI models.
Cosmos tokenizer architecture
A Cosmos tokenizer uses a sophisticated encoder-decoder structure designed for high efficiency and effective learning. At its core, it employs 3D
causal convolution blocks
, which are specialized layers that jointly process spatiotemporal information, and uses causal temporal attention that captures long-range dependencies in data.
The causal structure ensures that the model uses only past and present frames when performing tokenization, avoiding future frames. This is crucial for aligning with the causal nature of many real-world systems, such as those in physical AI or multimodal LLMs.
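To make the causality idea concrete, here is a small, hedged sketch (written in PyTorch purely for illustration; it is not the actual Cosmos implementation) of a 3D convolution that pads only on the past side of the time axis, so each output frame depends on current and past frames only:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv3d(nn.Module):
    """3D convolution that is causal along time: outputs never look at future frames."""
    def __init__(self, in_ch, out_ch, kernel_size=(3, 3, 3)):
        super().__init__()
        kt, kh, kw = kernel_size
        self.time_pad = kt - 1                              # pad entirely on the "past" side
        self.space_pad = (kw // 2, kw // 2, kh // 2, kh // 2)
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size)

    def forward(self, x):                                   # x: (batch, channels, time, height, width)
        # F.pad takes (W_left, W_right, H_top, H_bottom, T_past, T_future) for 5D input
        x = F.pad(x, self.space_pad + (self.time_pad, 0))
        return self.conv(x)

clip = torch.randn(1, 3, 8, 64, 64)                         # an 8-frame RGB clip
out = CausalConv3d(3, 16)(clip)                             # -> torch.Size([1, 16, 8, 64, 64])
```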
Figure 2. NVIDIA Cosmos tokenizer architecture
The input is downsampled using 3D wavelets, a signal processing technique that represents pixel information more efficiently. After the data is processed, an inverse wavelet transform reconstructs the original input.
This approach improves learning efficiency, enabling the tokenizer encoder-decoder learnable modules to focus on meaningful features rather than redundant pixel details. The combination of such techniques and its unique training recipe makes the Cosmos tokenizers a cutting-edge architecture for efficient and powerful tokenization.
During inference, the Cosmos tokenizers significantly reduce the cost of running the model by delivering up to 12x faster reconstruction compared to leading open-weight tokenizers (Figure 3).
Figure 3. Quantitative comparison of reconstruction quality (left) and runtime performance (right) for video tokenizers
The Cosmos tokenizers also produce high-fidelity images and videos while compressing more than other tokenizers, demonstrating an unprecedented quality-compression trade-off.
Figure 4. Continuous tokenizer compression rate compared to reconstruction quality
Figure 5. Discrete tokenizer compression rate compared to reconstruction quality
Although the Cosmos tokenizer regenerates from highly compressed tokens, it is capable of creating high-quality images and videos due to an innovative neural network training technique and architecture.
Figure 6. Reconstructed video frame for continuous video tokenizers
Build your own multimodal models with NeMo
The expansion of the NVIDIA NeMo platform with at-scale data processing using
NeMo Curator
and high-quality tokenization and visual reconstruction using the Cosmos tokenizer empowers you to build state-of-the-art multimodal, generative AI models.
Join the waitlist
and be notified when NeMo Curator is available. The tokenizer is available now on the
/NVIDIA/cosmos-tokenizer
GitHub repo and
Hugging Face
. | https://developer.nvidia.com/ja-jp/blog/state-of-the-art-multimodal-generative-ai-model-development-with-nvidia-nemo/ | NVIDIA NeMo ã«ããæå
端ã®ãã«ãã¢ãŒãã«çæ AI ã¢ãã«éçº | Reading Time:
2
minutes
çæ AI
ã¯ãããã¹ãããŒã¹ã®ã¢ãã«ãããã«ãã¢ãŒãã«æ©èœãžãšæ¥éã«é²åããŠããŸãããããã®ã¢ãã«ã¯ãç»åã®ãã£ãã·ã§ã³äœæãèŠèŠçãªè³ªååçãªã©ã®ã¿ã¹ã¯ãå®è¡ãããã人éã«è¿ã AI ãžãšã·ããããŠããããšãåæ ããŠããŸãããã®ã³ãã¥ããã£ã¯çŸåšãããã¹ããç»åããåç»ãžãšæ¡å€§ããŠãããããŸããŸãªæ¥çã§æ°ããªå¯èœæ§ãåãéãããŠããŸãã
åç» AI ã¢ãã«ã¯ããããã£ã¯ã¹ãèªåè»ãå°å£²ãªã©ã®æ¥çã«é©åœãèµ·ããããšããŠããŸãã
ãããã£ã¯ã¹
ã§ã¯ã補é æ¥ãå庫管çãªã©ã®åéã«äžå¯æ¬ ãªãè€éã§å€åãç¶ããç°å¢ã«ãããèªåŸçãªããã²ãŒã·ã§ã³ã匷åããŠããŸããèªåè»æ¥çã§ã¯ãåç» AI ãèªåé転ãæšé²ããè»äž¡ã®èªèãå®å
šæ§ãäºç¥ä¿å
šã匷åããå¹çæ§ãé«ããŠããŸãã
ç»åãåç»ã®åºç€ã¢ãã«ãæ§ç¯ããã«ã¯ãéçºè
ã¯å€§éã®åŠç¿ããŒã¿ã®ãã¥ã¬ãŒã·ã§ã³ãšäºååŠçãè¡ããçµæãšããŠåŸãããé«å質ããŒã¿ãé«ãå¿ å®åºŠã§ããŒã¯ã³åããåŠç¿æžã¿ã¢ãã«ãå¹ççã«å€§èŠæš¡ã«åŠç¿ãŸãã¯ã«ã¹ã¿ãã€ãºããŠãæšè«äžã«é«å質ãªç»åãåç»ãçæããå¿
èŠããããŸãã
ãã«ãã¢ãŒãã«çæ AI åãã® NVIDIA NeMo ãçºè¡š
NVIDIA NeMo
ã¯ãçæ AI ã¢ãã«ãéçºãã«ã¹ã¿ãã€ãºããããã€ãããšã³ãããŒãšã³ãã®ãã©ãããã©ãŒã ã§ãã
NVIDIA ã¯ããã«ãã¢ãŒãã« ã¢ãã«éçºåãã®ãšã³ãããŒãšã³ãã®ãã€ãã©ã€ã³ããµããŒããã NeMo ã®æ¡åŒµãçºè¡šããŸãããNeMo ã«ãããé«å質ãªèŠèŠããŒã¿ãç°¡åã«ãã¥ã¬ãŒã·ã§ã³ããé«å¹çãªããŒã¯ãã€ã¶ãŒãšäžŠååŠçæè¡ã§
åŠç¿
ãš
ã«ã¹ã¿ãã€ãº
ãå éããæšè«äžã«é«å質ãªããžã¥ã¢ã«ãåæ§ç¯ããããšãã§ããŸãã
åç»ãšç»åããŒã¿ã®ãã¥ã¬ãŒã·ã§ã³ãå é
é«å質ãªåŠç¿ããŒã¿ã§ã¯ãAI ã¢ãã«ããé«ç²ŸåºŠãªçµæãåŸãããŸããããããéçºè
ã¯ãããŒã¿åŠçãã€ãã©ã€ã³ã®æ§ç¯ã«ãããŠãã¹ã±ãŒãªã³ã°ããããŒã¿ã®ãªãŒã±ã¹ãã¬ãŒã·ã§ã³ãŸã§ãããŸããŸãªèª²é¡ã«çŽé¢ããŠããŸãã
NeMo Curator
ã¯ãããŒã¿ ãã¥ã¬ãŒã·ã§ã³ ããã»ã¹ãåçåããããšã§ããã«ãã¢ãŒãã«çæ AI ã¢ãã«ãããç°¡åãã€è¿
éã«æ§ç¯ããããšãã§ããŸããããã«è©Šãããšãã§ãããããç·ä¿æã³ã¹ã (TCO) ãæå°éã«æããåžå Žæå
¥ãŸã§ã®æéãççž®ããŸãã
ããžã¥ã¢ã«ãæ±ãéã«ã¯ãçµç¹ã¯ãã¿ãã€ãèŠæš¡ã®ããŒã¿åŠçã容æã«å®è¡ã§ããŸããNeMo Curator ã¯ãããŒã¿ ãã¥ã¬ãŒã·ã§ã³ã®å段éã§è€æ°ã® GPU ã«è² è·åæ£ã§ãããªãŒã±ã¹ãã¬ãŒã·ã§ã³ ãã€ãã©ã€ã³ãæäŸããŸãããã®çµæãåçŽãª GPU ããŒã¹ã®å®è£
ãšæ¯èŒããŠãåç»åŠçæéã 7 åã® 1 ã«ççž®ã§ããŸããã¹ã±ãŒã«å¯èœãªãã€ãã©ã€ã³ã¯ã100 PB ãè¶
ããããŒã¿ãå¹ççã«åŠçã§ãã倧èŠæš¡ãªããŒã¿ã»ãããã·ãŒã ã¬ã¹ã«åãæ±ãããšãã§ããŸãã
å³ 1. NVIDIA NeMo Curator ã®åç»åŠçé床
NeMo Curator ã¯ãé«ãã¹ã«ãŒãããã®ãã£ã«ã¿ãªã³ã°ããã£ãã·ã§ã³äœæãåã蟌ã¿ã®å段éã«æé©åããããªãã¡ã¬ã³ã¹ ãã㪠ãã¥ã¬ãŒã·ã§ã³ ã¢ãã«ãæäŸããããŒã¿ã»ããã®å質ãåäžãããããæ£ç¢ºãª AI ã¢ãã«ã®äœæããµããŒãããŸãã
ããšãã°ãNeMo Curator ã¯ãæé©åããããã£ãã·ã§ã³ ã¢ãã«ã䜿çšããæé©åãããŠããªãæšè«ã¢ãã«ã®å®è£
ãšæ¯èŒããŠãæ¡éãã®ã¹ã«ãŒãããã®åäžãå®çŸããŸãã
NVIDIA Cosmos ããŒã¯ãã€ã¶ãŒ
ããŒã¯ãã€ã¶ãŒã¯ãåé·çã§æé»çãªèŠèŠããŒã¿ãã³ã³ãã¯ãã§æå³ã®ããããŒã¯ã³ã«ãããã³ã°ãã倧èŠæš¡ãªçæã¢ãã«ã®å¹ççãªåŠç¿ãå®çŸãã誰ããéãããèšç®ãªãœãŒã¹ã§æšè«ã§ããããã«ããŸãã
ä»æ¥ã®ãªãŒãã³ãªåç»ãç»åã®ããŒã¯ãã€ã¶ãŒã¯ãããŒã¿è¡šçŸãäžååãªããšãå€ããããå£åã®å€ãåæ§ç¯ãæªãã ç»åãäžé£ç¶ãªåç»ã«ã€ãªãããããŒã¯ãã€ã¶ãŒäžã«æ§ç¯ãããçæã¢ãã«ã®èœåã«éçããããããŸããããŒã¯ã³åããã»ã¹ãéå¹çãªããããšã³ã³ãŒãããã³ãŒãã«æéãããããåŠç¿ãæšè«ã®æéãé·ããªããéçºè
ã®çç£æ§ãšãŠãŒã¶ãŒäœéšã®äž¡æ¹ã«æªåœ±é¿ãåãŒããŸãã
NVIDIA Cosmos ããŒã¯ãã€ã¶ãŒã¯ãåªããèŠèŠããŒã¯ã³åãæäŸãããªãŒãã³ãªã¢ãã«ã§ãããŸããŸãªç»åãåç»ã®ã«ããŽãªãŒã§ãé«ãå§çž®çãšæå
端ã®åæ§ç¯å質ãå®çŸããŸãã
é¢æ£çãªæœåšã³ãŒããåããèŠèŠèšèªã¢ãã« (VLM: Vision-language Model)ãé£ç¶ããæœåšçåã蟌ã¿ã«ããæ¡æ£ã¢ãã«ãããŸããŸãªã¢ã¹ãã¯ãæ¯ã解å床ããµããŒãããäžé£ã®ããŒã¯ãã€ã¶ãŒæšæºåã¢ãã«ã䜿çšããŠããããã®ããŒã¯ãã€ã¶ãŒãç°¡åã«äœ¿çšã§ããé«è§£å床ã®ç»åãåç»ãå¹ççã«ç®¡çããããšãã§ããŸããããã«ãããç»åãåç» AI ã¢ãã«ãæ§ç¯ããããã«ãå¹
åºãèŠèŠå
¥åããŒã¿ãããŒã¯ã³åããããŒã«ãæäŸãããŸãã
Cosmos ããŒã¯ãã€ã¶ãŒã®ã¢ãŒããã¯ãã£
Cosmos ããŒã¯ãã€ã¶ãŒã¯ãé«å¹çãã€å¹æçãªåŠç¿åãã«èšèšãããŠãããé«åºŠãªãšã³ã³ãŒã㌠/ ãã³ãŒããŒæ§é ã䜿çšããŠããŸãããã®äžæ žã«ã¯ 3D
Causal Convolution Block
(å æç³ã¿èŸŒã¿ãããã¯) ãæ¡çšããŠããŸããããã¯æ空éæ
å ±ãå
±ååŠçããç¹æ®ãªã¬ã€ã€ãŒã§ãããŒã¿ã®é·æçãªäŸåé¢ä¿ãæãã Causal Temporal Attention (å æçæé泚ææ©æ§) ã䜿çšããŠããŸãã
ãã®å ææ§é ã«ãããããŒã¯ã³åã®å®è¡æã«ã¢ãã«ãéå»ãšçŸåšã®ãã¬ãŒã ã®ã¿ã䜿çšããæªæ¥ã®ãã¬ãŒã ã¯äœ¿çšããŸãããããã¯ãç©ççãªAIããã«ãã¢ãŒãã«LLMãªã©ã®å€ãã®çŸå®äžçã®ã·ã¹ãã ã®å ææ§ã«åãããããã«éèŠã§ãã
å³ 2. NVIDIA Cosmos ããŒã¯ãã€ã¶ãŒã®ã¢ãŒããã¯ãã£
å
¥åã¯ããã¯ã»ã«æ
å ±ãããå¹ççã«è¡šãä¿¡å·åŠçæè¡ã§ãã 3D ãŠã§ãŒãã¬ããã䜿çšããŠããŠã³ãµã³ããªã³ã°ãããŸããããŒã¿åŠçåŸãéãŠã§ãŒãã¬ããå€æã«ãã£ãŠå
ã®å
¥åãåæ§ç¯ãããŸãã
ãã®ã¢ãããŒãã«ãããåŠç¿å¹çãåäžããããŒã¯ãã€ã¶ãŒã®ãšã³ã³ãŒã㌠/ ãã³ãŒããŒã®åŠç¿å¯èœãªã¢ãžã¥ãŒã«ã¯ãåé·ãªãã¯ã»ã«ã®è©³çŽ°ã§ã¯ãªããæå³ã®ããç¹åŸŽã«çŠç¹ãåœãŠãããšãã§ããŸãããã®ãããªæè¡ãšç¬èªã®åŠç¿ã¬ã·ãã®çµã¿åããã«ãããCosmos ããŒã¯ãã€ã¶ãŒã¯ãå¹ççãã€åŒ·åãªããŒã¯ã³åãå®çŸããæå
端ã®ã¢ãŒããã¯ãã£ãšãªã£ãŠããŸãã
æšè«ã®éãCosmos ããŒã¯ãã€ã¶ãŒã¯ãäž»èŠãªãªãŒãã³ãŠã§ã€ãã®ããŒã¯ãã€ã¶ãŒãšæ¯èŒããŠæ倧 12 åé«éãªåæ§ç¯ãå®çŸããã¢ãã«ã®å®è¡ã³ã¹ãã倧å¹
ã«åæžããŸãã (å³ 3)ã
å³ 3. Cosmos ããŒã¯ãã€ã¶ãŒãšäž»èŠãªãªãŒãã³ãŠã§ã€ãã®ããŒã¯ãã€ã¶ãŒãšã®æ¯èŒ
Cosmos ããŒã¯ãã€ã¶ãŒã¯ãä»ã®ããŒã¯ãã€ã¶ãŒãããé«ãå§çž®çãå®çŸããªãããé«ãå¿ å®åºŠã®ç»åãåç»ãçæããåäŸã®ãªãå質ãšå§çž®ã®ãã¬ãŒããªããå®çŸããŠããŸãã
å³ 4. é£ç¶ããŒã¯ãã€ã¶ãŒã®å§çž®çãšåæ§ç¯å質ã®æ¯èŒ
å³ 5. é¢æ£ããŒã¯ãã€ã¶ãŒã®å§çž®çãšåæ§ç¯å質ã®æ¯èŒ
Cosmos ããŒã¯ãã€ã¶ãŒã¯ãé«åºŠã«å§çž®ãããããŒã¯ã³ããåçæãããŸãããé©æ°çãªãã¥ãŒã©ã« ãããã¯ãŒã¯ã®åŠç¿æè¡ãšã¢ãŒããã¯ãã£ã«ãããé«å質ãªç»åãåç»ãäœæããããšãã§ããŸãã
å³ 6. é£ç¶åç»ããŒã¯ãã€ã¶ãŒã§åæ§ç¯ãããåç»ãã¬ãŒã
NeMo ã§ç¬èªã®ãã«ãã¢ãŒãã« ã¢ãã«ãæ§ç¯
NeMo Curator
ã䜿çšãã倧èŠæš¡ãªããŒã¿åŠçãšãCosmos ããŒã¯ãã€ã¶ãŒã䜿çšããé«å質ãªããŒã¯ã³åãããžã¥ã¢ã«åæ§ç¯ãåãããNVIDIA NeMo ãã©ãããã©ãŒã ã®æ¡åŒµã«ãããæå
端ã®ãã«ãã¢ãŒãã«çæ AI ã¢ãã«ãæ§ç¯ããããšãã§ããŸãã
ç»é²
ããŠããã ããšãNeMo Curator ãå©çšå¯èœã«ãªã£ãéã«éç¥ãåãåãããšãã§ããŸããããŒã¯ãã€ã¶ãŒã¯ãçŸåš
/NVIDIA/cosmos-tokenizer
GitHub ãªããžããªããã³
Hugging Face
ã§å©çšããããšãã§ããŸãã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
Large Language Model Fine-Tuning using Parameter Efficient Fine-Tuning (PEFT ã䜿çšãã倧èŠæš¡èšèªã¢ãã«ã®ãã¡ã€ã³ãã¥ãŒãã³ã°)
GTC ã»ãã·ã§ã³:
Large Language Model Fine-Tuning using NVIDIA NeMo (NVIDIA NeMo ã䜿çšãã倧èŠæš¡èšèªã¢ãã«ã®ãã¡ã€ã³ãã¥ãŒãã³ã° â Domino Data Lab æäŸ)
SDK:
NVIDIA NeMo ã«ã¹ã¿ãã€ã¶ãŒ
SDK:
NeMo LLM ãµãŒãã¹
SDK:
NeMo Megatron |
https://developer.nvidia.com/blog/frictionless-collaboration-and-rapid-prototyping-in-hybrid-environments-with-nvidia-ai-workbench/ | Frictionless Collaboration and Rapid Prototyping in Hybrid Environments with NVIDIA AI Workbench | NVIDIA AI Workbench
is a free development environment manager that streamlines data science, AI, and machine learning (ML) projects on systems of choice. The goal is to provide a frictionless way to create, compute, and collaborate on and across PCs, workstations, data centers, and clouds. The basic user experience is straightforward:
Easy setup on single systems:
Click through install in minutes on Windows, Ubuntu, and macOS, with a one-line install on remote systems.
Managed experience for decentralized deployment
: A free, PaaS/SaaS type UX in truly hybrid contexts with no need for a centralized, service-based platform.
Seamless collaboration for experts and beginners:
Friendly Git, container, and application management without limiting customization by power users.
Consistent across users and systems:
Migrate workloads and applications across different systems while maintaining functionality and user experience.
Simplified GPU handling
: Handles system dependencies like
NVIDIA drivers
and the
NVIDIA Container Toolkit
, as well as
GPU-enabled container
runtime configuration.
This post explores highlights of the October release of NVIDIA AI Workbench, which is the most significant since the product launch at GTC 2024 and is a big step closer to the full product vision.
Release highlights
This section will detail the major new capabilities and user-requested updates in the latest release.
Major new capabilities include:
Enhance collaboration through expanded Git support, such as branching, merging, diffs, and finer-grained control for commits and gitignore.
Create complex applications and workflows with multicontainer environments through Docker Compose support.
Simple, fast, and secure rapid prototyping with application sharing with single-user URLs.
User requested updates:
Dark mode for the Desktop App
Improved installation on localized versions of Windows
Expanded Git support
Previously, AI Workbench supported only single, monolithic commits on the main branch. Users had to manage branches and merges manually, and this created various types of confusion, especially around resolving merge conflicts. Now, users can manage branches, merges, and conflicts directly in the Desktop App and the CLI. In addition, they can see and triage individual file diffs for commits. The UI is built to work seamlessly with manual Git operations and will update to reflect relevant changes.
Figure 1. AI Workbench Desktop App tab for Git branching
These features are found in two new tabs on the Desktop App: Changes and Branches.
Changes
: Gives a line-by-line view of the diffs between the working tree and previous commits. Users can now select and commit file changes individually or in bulk, based on the visible file diffs and tracked changes (addition, modification, or deletion), and can individually reject a change or add a file to git-ignore. The view also updates dynamically to reflect manual Git actions, for example, manually staging a file and then making a further change to that file in the working tree.
Branches
: Provides branch management, including creation, switching, and merging, as well as visibility for remote branches on a Git server. Merging branches with a conflict initiates a conflict resolution flow that users can do within the UI, or move to a terminal or file editor of their choice.
Learn more about how these advanced Git features work
.
Multicontainer support with Docker Compose stacks
AI Workbench now supports
Docker Compose
. Users can work with multicontainer applications and workflows with the same ease of configuration, reproducibility, and portability that AI Workbench provides for single-container environments.
Figure 2. The Docker Compose feature in the AI Workbench Environment Management tab
The basic idea is to add a Docker Compose-based âstackâ that is managed by AI Workbench and connects to the main development container. To add the stack, a user just needs to add the appropriate Docker Compose file to the project repository and do some configuration in the Desktop App or CLI.
We're using Docker Compose for a few reasons. First, we didn't want to develop in a vacuum, and that's why we've been
collaborating with the Docker team
on features like a
managed Docker Desktop install
.
Second, we want users to be able to work with the multicontainer applications outside of AI Workbench, and Docker Compose is the easiest way to do that. The vision for this feature is to enable streamlined, powerful development and compute for multicontainer applications within AI Workbench that can then be stood up outside of AI Workbench with a simple
docker-compose
up command.
This multicontainer feature is new and will continue to evolve. We would love to get feedback and help you sort out any issues through the
NVIDIA AI Workbench Developer Forum
.
Learn more about how Docker Compose works
.
Web application sharing through secure URLs
AI Workbench enables users to easily spin up managed web applications that are built into a project. The process is fairly simple: create or clone a project with the web app installed, start the project, then start the app, and it appears in your browser.
This approach is great for a developer UX, but it wasn't good for rapid prototyping UX and collaboration. If you wanted another user to access and test your application, you either asked them to install AI Workbench, clone the project and run it, or you had to fully extract the application to run it and make it available to the user. The first is a speed bump for the user, and the second is a speed bump for the developer.
We eliminated these speed bumps with a simple feature that lets you configure a remote AI Workbench for external access and create single-use, secure URLs for web applications running in a project on that remote. You just need to make sure the user has access to port 10000 on the remote, and the application will be directly accessible. All they have to do is click the link and go to the app.
Figure 3. Developers can now give end users direct access to applications running in an AI Workbench Project on a remote through secure, one-time-use URLs
Enabling this kind of access is useful for rapid prototyping and collaboration. That's why various SaaS offerings provide this as a managed service. The difference with AI Workbench is that you can provide this access on your own resources and in your own network, for example on data center resources or a shared server. It doesn't have to be in the cloud.
AI Workbench keeps things secure by restricting this access to a single browser and to a single application that's running in the project. This means a user can't share the URL with someone else, and they are constrained to the web app that you shared with them.
Learn more about how application sharing works.
Dark mode and localized Windows installation
Many users requested a dark mode option because it's easier on the eyes. It's now available and can be selected through the Settings window, which is accessible directly from within the Desktop App.
Learn more about how dark mode works
.
Windows users are by far our main demographic for local installs, but not all Windows users use the English language pack, and this blocked the AI Workbench installation because of how we handled some WSL commands. In particular, we've had users working in Cyrillic or Chinese who were blocked on Windows. We adjusted how we handle non-English language packs, and it should work well now. If you were previously blocked by this, give it a try. If it still doesn't work for you, let us know in the
NVIDIA AI Workbench Developer Forum
so we can continue to improve this capability.
New AI Workbench projects
This release introduces new example projects designed to jumpstart your AI development journey, detailed below. An
AI Workbench project
is a structured Git repository that defines a containerized development environment in AI Workbench. AI Workbench projects provide:
Effortless setup and GPU configuration:
Simply clone a project from GitHub or GitLab, and AI Workbench handles the rest with automatic GPU configuration.
Development integrations:
Seamless support for popular development environments such as Jupyter and VS Code, as well as support for user-configured web applications.
Containerized and customizable environments:
Projects are containerized, isolated, and easily modifiable. Adapt example projects to suit your specific needs while ensuring consistency and reproducibility.
Explore NVIDIA AI Workbench example projects
.
Multimodal virtual assistant
example project
This project enables users to build their own virtual assistant using a multimodal
retrieval-augmented generation (RAG)
pipeline with fallback to web search. Users can interact with two RAG-based applications to learn more about AI Workbench, converse with the user documentation, troubleshoot their own installation, or even focus the RAG pipeline to their own, custom product.
Control-Panel:
A customizable Gradio app for working with product documentation that lets users upload webpages, PDFs, images, and videos to a persistent vector store and query them. For inference, users can select cloud endpoints, such as those on the NVIDIA API Catalog, or use self-hosted endpoints to run their own inference.
Public-Chat:
With product documents loaded, the Gradio app is a simplified, "read-only" chatbot that you can share with end users through the new AI Workbench App Sharing feature.
Figure 4. Using the Public-Chat web app, a read-only, pared down chat application that is meant to be more consumable and shareable to end users
Competition-Kernel example project
This project provides an easy, local experience when working on Kaggle competitions. You can easily leverage your local machine or a cloud instance to work on competition datasets, write code, build out models, and submit results, all through AI Workbench. The Competition Kernel project offers:
A managed experience to develop and test on your own GPUs and set up and customize in minutes.
Easy version control and tracking of code through GitHub or GitLab and very easy collaboration.
The power of using a local, dedicated IDE: robust debugging, intelligent code completion, extensive customization options.
Easy plugin to existing data sources (external or your own).
No Internet? No problem. Develop while offline.
Get started
This release of NVIDIA AI Workbench marks a significant step forward in providing a frictionless experience for AI development across GPU systems. New features from this release, including expanded Git support, support for multicontainer environments, and secure web app sharing, streamline developing and collaborating on AI workloads. Explore these features in the three new example projects available with this release or create your own projects.
To get started with AI Workbench,
install the application from the webpage
. For more information about installing and updating, see the
NVIDIA AI Workbench documentation
.
Explore a range of
NVIDIA AI Workbench example projects
, from data science to RAG.
Visit the
NVIDIA AI Workbench Developer Forum
to report issues and learn more about how other developers are using AI Workbench. | https://developer.nvidia.com/ja-jp/blog/frictionless-collaboration-and-rapid-prototyping-in-hybrid-environments-with-nvidia-ai-workbench/ | NVIDIA AI Workbench ã«ãããã€ããªããç°å¢ã«ãããã¹ã ãŒãºãªã³ã©ãã¬ãŒã·ã§ã³ãšè¿
éãªãããã¿ã€ãã³ã° | Reading Time:
3
minutes
NVIDIA AI Workbench
ã¯ãéžæããã·ã¹ãã ã§ããŒã¿ ãµã€ãšã³ã¹ãAIãæ©æ¢°åŠç¿ (ML) ãããžã§ã¯ããåçåããç¡æã®éçºç°å¢ãããŒãžã£ãŒã§ãã PCãã¯ãŒã¯ã¹ããŒã·ã§ã³ãããŒã¿ ã»ã³ã¿ãŒãã¯ã©ãŠãäžã§ããããã¯ãããããŸããããã¹ã ãŒãºãªäœæãèšç®ãã³ã©ãã¬ãŒã·ã§ã³ãè¡ãããšãç®çãšããŠããŸããåºæ¬çãªãŠãŒã¶ãŒäœéšã¯ã·ã³ãã«ã§ã:
åäžã·ã¹ãã ã§ç°¡åãªã»ããã¢ãã:
WindowsãUbuntuãmacOS ã§ã¯ã¯ãªãã¯æäœã§ã€ã³ã¹ããŒã«ãå®äºãããªã¢ãŒã ã·ã¹ãã ã§ã¯ 1 è¡ã®ã³ãã³ãã§ã€ã³ã¹ããŒã«ããããšãã§ããŸãã
åæ£åãããã€ã®ããã®ç®¡çåãããäœéš
: éäžåã®ãµãŒãã¹ããŒã¹ã®ãã©ãããã©ãŒã ãå¿
èŠãšããªããæ¬åœã®æå³ã§ãã€ããªãããªã³ã³ããã¹ãã«ãããç¡æã® PaaS/SaaS åã®ãŠãŒã¶ãŒäœéšã
ãšãã¹ããŒããšåå¿è
åãã®ã·ãŒã ã¬ã¹ãªã³ã©ãã¬ãŒã·ã§ã³:
ãã¯ãŒ ãŠãŒã¶ãŒã«ããã«ã¹ã¿ãã€ãºãå¶éããããšã®ãªãã䜿ãããã Gitãã³ã³ãããŒãã¢ããªã±ãŒã·ã§ã³ç®¡çã
ãŠãŒã¶ãŒãšã·ã¹ãã éã®äžè²«æ§:
æ©èœãšãŠãŒã¶ãŒäœéšãç¶æããªãããç°ãªãã·ã¹ãã éã§ã¯ãŒã¯ããŒããšã¢ããªã±ãŒã·ã§ã³ã移è¡ã
GPU åŠçã®ç°¡çŽ å
:
NVIDIA ãã©ã€ããŒ
ã
NVIDIA ã³ã³ãã㌠ããŒã«ããã
ãªã©ã®ã·ã¹ãã äŸåé¢ä¿ã
ããã³ GPU 察å¿ã®ã³ã³ãããŒ
ã©ã³ã¿ã€ã æ§æãåŠçã
ãã®èšäºã§ã¯ãGTC 2024 ã§ã®è£œåçºè¡šä»¥æ¥ãæãéèŠãª NVIDIA AI Workbench ã® 10 æã®ãªãªãŒã¹ã«ããããã€ã©ã€ããã玹ä»ããŸãã補åããžã§ã³å®çŸã«åãã倧ããªäžæ©ã§ãã
ãªãªãŒã¹ ãã€ã©ã€ã
ãã®ã»ã¯ã·ã§ã³ã§ã¯ãææ°ãªãªãŒã¹ã§ã®äž»èŠãªæ°æ©èœãšãŠãŒã¶ãŒããèŠæã®ãã£ãæŽæ°ã«ã€ããŠã詳ãã説æããŸãã
äž»ãªæ°æ©èœã«ã¯ä»¥äžãå«ãŸããŸãã
ãã©ã³ããããŒãžãå·®åãã³ããããš gitignore ã®çŽ°ããå¶åŸ¡ãªã©ãGit ã®ãµããŒããæ¡å€§ããã³ã©ãã¬ãŒã·ã§ã³ã匷åããŸãã
Docker Compose ã®ãµããŒããéããŠããã«ãã³ã³ãããŒç°å¢ã§è€éãªã¢ããªã±ãŒã·ã§ã³ãšã¯ãŒã¯ãããŒãäœæããŸãã
ã·ã³ã°ã«ãŠãŒã¶ãŒ URL ã§ã¢ããªã±ãŒã·ã§ã³ãå
±æããããšã§ãã·ã³ãã«ãã€è¿
éãå®å
šãªãããã¿ã€ãã³ã°ãå®çŸããŸãã
ãŠãŒã¶ãŒã®èŠæã«ããã¢ããããŒã:
ãã¹ã¯ããã ã¢ããªã®ããŒã¯ã¢ãŒã
ããŒã«ã©ã€ãºç Windows ã®ã€ã³ã¹ããŒã«æ¹å
Git ãµããŒãã®æ¡åŒµ
ãããŸã§ AI Workbench ã¯ãã¡ã€ã³ ãã©ã³ãã§ã®åäžã®ã¢ããªã·ãã¯ãªã³ãããã®ã¿ããµããŒãããŠããŸããã ãŠãŒã¶ãŒã¯ãã©ã³ããšããŒãžãæåã§ç®¡çããå¿
èŠããããç¹ã«ããŒãžã®ç«¶åã®è§£æ±ºã«é¢ããŠãããŸããŸãªçš®é¡ã®æ··ä¹±ãçããŠããŸããã çŸåšã¯ããã©ã³ããããŒãžã競åãããã¹ã¯ããã ã¢ããªãš CLI ã§çŽæ¥ç®¡çããããšãã§ããŸãã å ããŠãã³ãããã®åã
ã®ãã¡ã€ã«å·®åã確èªããåªå
é äœãä»ããããšãã§ããŸãã ãã® UI ã¯ãæåã® Git æäœãšã·ãŒã ã¬ã¹ã«åäœããããã«æ§ç¯ãããŠãããé¢é£ããå€æŽãåæ ããŠæŽæ°ãããŸãã
å³ 1. Git ãã©ã³ãçšã® AI Workbench Desktop ã¢ã㪠ã¿ã
ãããã®æ©èœã¯ããã¹ã¯ããã ã¢ããªã® 2 ã€ã®æ°ããã¿ã: [Changes (å€æŽ)] ãš [Branches (ãã©ã³ã)] ã«è¡šç€ºãããŸãã
å€æŽ
: äœæ¥ããªãŒãšä»¥åã®ã³ãããéã®å·®åã 1 è¡ãã€è¡šç€ºããŸãã ãŠãŒã¶ãŒã¯ã衚瀺ãããŠãããã¡ã€ã«å·®åã远跡ãããå€æŽ (è¿œå ãä¿®æ£ãåé€) ã«åºã¥ããŠããã¡ã€ã«å€æŽãåå¥ãŸãã¯äžæ¬ã§éžæããã³ãããããããšãã§ããããã«ãªããŸããããŸããgit-ignore ã«ãã¡ã€ã«ãåå¥ã«æåŠããŸãã¯è¿œå ããããšãã§ããŸãã ãã®ãã¥ãŒã¯ãŸããæåã® Git æäœãåæ ããããã«åçã«æŽæ°ãããŸããäŸãã°ããã¡ã€ã«ãæåã§ã¹ããŒãžã³ã°ããäœæ¥ããªãŒå
ã®ãã¡ã€ã«ã«å€æŽãå ããŸãã
ãã©ã³ã
: Git ãµãŒããŒäžã®ãªã¢ãŒã ãã©ã³ããå¯èŠåããã ãã§ãªããäœæãåãæ¿ããããŒãžãªã©ã®ãã©ã³ã管çãæäŸããŸãã競åã®ãããã©ã³ããããŒãžãããšã競å解決ãããŒãéå§ãããŸãããã®ãããŒã¯ããŠãŒã¶ãŒã UI å
ã§å®è¡ããããšããéžæãã端æ«ããã¡ã€ã« ãšãã£ã¿ãŒã«ç§»åããããšãã§ããŸãã
ãããã®é«åºŠãª Git æ©èœã®ä»çµã¿ã®è©³çŽ°ãã芧ãã ãã
ã
Docker Compose ã¹ã¿ãã¯ã«ãããã«ãã³ã³ãããŒã®ãµããŒã
AI Workbench ãã
Docker Compose
ããµããŒãããããã«ãªããŸããã ãŠãŒã¶ãŒã¯ãAI Workbench ãã·ã³ã°ã«ã³ã³ãããŒç°å¢åãã«æäŸããæ§æãåçŸæ§ã移æ€æ§ãšåæ§ã®å®¹æãã§ããã«ãã³ã³ãã㌠ã¢ããªã±ãŒã·ã§ã³ãšã¯ãŒã¯ãããŒãæäœããããšãã§ããŸãã
å³ 2. AI Workbench ç°å¢ç®¡çã¿ãã® Docker Compose æ©èœ
åºæ¬çãªèãæ¹ã¯ãAI Workbench ã«ãã£ãŠç®¡çãããã¡ã€ã³ã®éçºã³ã³ãããŒã«æ¥ç¶ãã Docker Compose ããŒã¹ã®ãã¹ã¿ãã¯ããè¿œå ããããšã§ãã ã¹ã¿ãã¯ãè¿œå ããã«ã¯ããŠãŒã¶ãŒã¯ é©å㪠Docker Compose ãã¡ã€ã«ããããžã§ã¯ã ãªããžããªã«è¿œå ãããã¹ã¯ããã ã¢ããªãŸã㯠CLI ã§ããã€ãã®èšå®ãè¡ãã ãã§ãã
NVIDIA ã§ã¯ãããã€ãã®çç±ããã£ãŠ Docker Compose ã䜿çšããŠããŸãã 1 ã€ç®ã¯ãäœããªãæããéçºãè¡ãããšãæãã§ããªãã£ãããã§ãããã®ããã
管çããã Docker ãã¹ã¯ããã ã€ã³ã¹ããŒã«
ãªã©ã®æ©èœã«ã€ããŠ
Docker ããŒã ãšåå
ããŠããŸããã
2 ã€ç®ã¯ãAI Workbench ã®ä»¥å€ã§ããŠãŒã¶ãŒããã«ãã³ã³ãã㌠ã¢ããªã±ãŒã·ã§ã³ãæäœã§ããããã«ããã«ã¯ãDocker Compose ãæãç°¡åãªæ¹æ³ã ããã§ãããã®æ©èœã®ããžã§ã³ã¯ãAI Workbench å
ã®ãã«ãã³ã³ãã㌠ã¢ããªã±ãŒã·ã§ã³ãåçåããå¹æçãªéçºãšæŒç®åŠçãå¯èœã«ããã·ã³ãã«ãª
docker-compose
up ã³ãã³ã㧠AI Workbench å€ã§èµ·åã§ããããã«ããããšã§ãã
ãã®ãã«ãã³ã³ãããŒæ©èœã¯æ°ããæ©èœã§ãããä»åŸãé²åãç¶ããŸãã
NVIDIA AI Workbench éçºè
ãã©ãŒã©ã
ãéããŠãæ¯éãã£ãŒãããã¯ããå¯ããã ãããåé¡è§£æ±ºããæäŒãããããŸãã
Docker Compose ã®ä»çµã¿ã®è©³çŽ°ãã芧ãã ãã
ã
ã»ãã¥ã¢ãª URL ã«ãããŠã§ã ã¢ããªã±ãŒã·ã§ã³å
±æ
AI Workbench ã«ããããŠãŒã¶ãŒã¯ãããžã§ã¯ãã«çµã¿èŸŒãŸãã管çããããŠã§ã ã¢ããªã±ãŒã·ã§ã³ãç°¡åã«èµ·åããããšãã§ããŸãã ãã®ããã»ã¹ã¯éåžžã«ç°¡åã§ãWeb ã¢ããªãã€ã³ã¹ããŒã«ããããããžã§ã¯ããäœæãŸãã¯è€è£œãããããžã§ã¯ããéå§ããŠããã¢ããªãéå§ãããšããã©ãŠã¶ãŒã«è¡šç€ºãããŸãã
ãã®ã¢ãããŒãã¯éçºè
ã® UX ã«ã¯æé©ã§ãããè¿
éãªãããã¿ã€ãã³ã°ã® UX ãã³ã©ãã¬ãŒã·ã§ã³ã«ã¯é©ããŠããŸããã§ããã ä»ã®ãŠãŒã¶ãŒã«ã¢ããªã±ãŒã·ã§ã³ãžã®ã¢ã¯ã»ã¹ãšãã¹ããè¡ã£ãŠãããå ŽåãAI Workbench ã®ã€ã³ã¹ããŒã«ããããžã§ã¯ãã®è€è£œãå®è¡ãäŸé Œããããã¢ããªã±ãŒã·ã§ã³ãå®å
šã«æœåºããŠå®è¡ãããŠãŒã¶ãŒãå©çšã§ããããã«ããå¿
èŠããããŸããã 1 ã€ç®ã¯ãŠãŒã¶ãŒã®èª²é¡ã§ããã2 ã€ç®ã¯éçºè
ã®èª²é¡ã§ãã
NVIDIA ã§ã¯ããªã¢ãŒãã® AI Workbench ãèšå®ããŠå€éšããã®ã¢ã¯ã»ã¹ãå¯èœã«ãããã®ãªã¢ãŒãäžã®ãããžã§ã¯ãã«ãããŠããŠã§ã ã¢ããªã±ãŒã·ã§ã³ãå®è¡ããããã®äžåéãã®å®å
šãª URL ãäœæããããšãã§ããã·ã³ãã«ãªæ©èœã§ãããã®èª²é¡ãå
æããŸããã ãŠãŒã¶ãŒããªã¢ãŒãã®ããŒã 10000 ã«ã¢ã¯ã»ã¹ã§ããããšã確èªããã ãã§ãã¢ããªã±ãŒã·ã§ã³ã«çŽæ¥ã¢ã¯ã»ã¹ã§ããããã«ãªããŸãã ãªã³ã¯ãã¯ãªãã¯ããŠã¢ããªã«ç§»åããã ãã§ãã
å³ 3. éçºè
ã¯ãäžåéãã®å®å
šãª URL ãéããŠããšã³ããŠãŒã¶ãŒã AI Workbench ãããžã§ã¯ãã§å®è¡ããŠããã¢ããªã±ãŒã·ã§ã³ã«ããªã¢ãŒãã§çŽæ¥ã¢ã¯ã»ã¹ãããããšãã§ããããã«ãªããŸãã
ãã®ãããªã¢ã¯ã»ã¹ãæå¹ã«ããããšã¯ãè¿
éãªãããã¿ã€ãã³ã°ãšã³ã©ãã¬ãŒã·ã§ã³ã«åœ¹ç«ã¡ãŸãã ã ãããããããŸããŸãª SaaS ããããæäŸãããããŒãžã ãµãŒãã¹ãšããŠæäŸããŠããã®ã§ããAI Workbench ãšã®éãã¯ãããŒã¿ ã»ã³ã¿ãŒã®ãªãœãŒã¹ãå
±æãµãŒããŒãªã©ãç¬èªã®ãªãœãŒã¹ãç¬èªã®ãããã¯ãŒã¯ã§ããã®ã¢ã¯ã»ã¹ãæäŸã§ããããšã§ãã ã¯ã©ãŠãã§ããå¿
èŠã¯ãããŸããã
AI Workbench ã¯ãåäžã®ãã©ãŠã¶ãŒãšãããžã§ã¯ãã§å®è¡ãããŠããåäžã®ã¢ããªã±ãŒã·ã§ã³ã«å¯ŸããŠããã®ã¢ã¯ã»ã¹ãå¶éããããšã§ãå®å
šæ§ã確ä¿ããŸãã ã€ãŸãããŠãŒã¶ãŒã¯ URL ãä»ã®ãŠãŒã¶ãŒãšå
±æã§ãããå
±æãããŠã§ã ã¢ããªã«å¶éãããŸãã
ã¢ããªã±ãŒã·ã§ã³å
±æã®ä»çµã¿ã®è©³çŽ°ãã芧ãã ããã
ããŒã¯ ã¢ãŒããšããŒã«ã©ã€ãºããã Windows ã€ã³ã¹ããŒã«
å€ãã®ãŠãŒã¶ãŒãããç®ã«åªããããŒã¯ ã¢ãŒããªãã·ã§ã³ã®èŠæãå¯ããããŸããã çŸåšããã®ãªãã·ã§ã³ã¯å©çšå¯èœã§ããã¹ã¯ããã ã¢ããªããçŽæ¥äœ¿çšã§ããèšå®ãŠã£ã³ããŠããéžæã§ããããã«ãªã£ãŠããŸãã
ããŒã¯ ã¢ãŒãã®ä»çµã¿ã®è©³çŽ°ãã芧ãã ãã
ã
ããŒã«ã« ã€ã³ã¹ããŒã«ã®äž»ãªãŠãŒã¶ãŒå±€ã¯ Windows ãŠãŒã¶ãŒã§ããããã¹ãŠã® Windows ãŠãŒã¶ãŒãè±èªããã¯ã䜿çšããŠããããã§ã¯ãããŸããããŸããWSL ã³ãã³ãã®åŠçæ¹æ³ã«ããããã® AI Workbench ã®ã€ã³ã¹ããŒã«ããããã¯ãããŠããŸããã ç¹ã«ãWindows äžã§ããªã«æåããŸãã¯äžåœèªã§äœæ¥ãããŠãŒã¶ãŒããããã¯ãããŠããŸããã è±èªä»¥å€ã®èšèªããã¯ãåŠçããæ¹æ³ã調æŽããã®ã§ãçŸåšã¯åé¡ãªãæ©èœããã¯ãã§ãã 以åãããã¯ãããŠããå Žåã¯ãæ¯éãè©Šããã ããã ããã§ãæ©èœããªãå Žåã¯ã
NVIDIA AI Workbench éçºè
ãã©ãŒã©ã
ã§ãç¥ãããã ãããããã«ãã NVIDIA ã¯ãåŒãç¶ããã®æ©èœãæ¹åããããšãã§ããŸãã
æ°ãã AI Workbench ãããžã§ã¯ã
ãã®ãªãªãŒã¹ã§ãAI éçºãããã«å§ããããããã«èšèšãããæ°ãããµã³ãã« ãããžã§ã¯ããã玹ä»ããŸãã詳现ã¯ä»¥äžãã芧ãã ããã
AI Workbench ãããžã§ã¯ã
ã¯ãAI Workbench ã®ã³ã³ãããŒåãããéçºç°å¢ãå®çŸ©ããæ§é åããã Git ãªããžããªã§ãã AI Workbench ãããžã§ã¯ãã§ã¯ã以äžãæäŸããŸãã
ç°¡åãªã»ããã¢ãããš GPU æ§æ:
ãããžã§ã¯ãã GitHub ãŸã㯠GitLab ããã¯ããŒã³ããã ãã§ãæ®ã㯠AI Workbench ãèªå㧠GPU æ§æãè¡ããŸãã
éçºã®çµ±å:
Jupyter ã VS Code ãªã©ã®äžè¬çãªéçºç°å¢ã«å¯ŸããŠã·ãŒã ã¬ã¹ã«ãµããŒããããŠãŒã¶ãŒæ§æã®ãŠã§ã ã¢ããªã±ãŒã·ã§ã³ã«ã€ããŠããµããŒãããŸãã
ã³ã³ãããŒåãããã«ã¹ã¿ãã€ãºå¯èœãªç°å¢:
ãããžã§ã¯ãã¯ã³ã³ãããŒåãããåé¢ãããç°¡åã«å€æŽå¯èœã§ãã äžè²«æ§ãšåçŸæ§ã確ä¿ããªãããç¹å®ã®ããŒãºã«åãããŠãµã³ãã« ãããžã§ã¯ããé©åãããããšãã§ããŸãã
NVIDIA AI Workbench ãµã³ãã« ãããžã§ã¯ããã芧ãã ãã
ã
ãã«ãã¢ãŒãã«ä»®æ³ã¢ã·ã¹ã¿ã³ã ãµã³ãã« ãããžã§ã¯ã
ãã®ãããžã§ã¯ãã§ã¯ããŠã§ãæ€çŽ¢ãžã®ãã©ãŒã«ããã¯ã䌎ããã«ãã¢ãŒãã«
æ€çŽ¢æ¡åŒµçæ (RAG)
ãã€ãã©ã€ã³ã䜿çšããŠãç¬èªã®ä»®æ³ã¢ã·ã¹ã¿ã³ããããããšãæ§ç¯ã§ããŸãã ãŠãŒã¶ãŒã¯ã2 ã€ã® RAG ããŒã¹ã®ã¢ããªã±ãŒã·ã§ã³ãæäœããŠãAI Workbench ã®è©³çŽ°ãåŠãã ãããŠãŒã¶ãŒ ããã¥ã¡ã³ããåç
§ããããèªèº«ã®ã€ã³ã¹ããŒã«ã®ãã©ãã«ã·ã¥ãŒãã£ã³ã°ãããããããã㯠RAG ãã€ãã©ã€ã³ãç¬èªã®ã«ã¹ã¿ã 補åã«éäžããããããããšãã§ããŸãã
Control-Panel:
補åã®ããã¥ã¡ã³ããæäœããããã®ã«ã¹ã¿ãã€ãºå¯èœãª Gradio ã¢ããªã§ã¯ããŠã§ãããŒãžãPDFãç»åãåç»ãæ°žç¶çãªãã¯ãã« ã¹ãã¢ã«ã¢ããããŒãããããããåãåããã§ããŸãã æšè«ã«é¢ããŠã¯ãNVIDIA API ã«ã¿ãã°ã®ããã«ãã¯ã©ãŠã ãšã³ããã€ã³ããéžæããããã»ã«ããã¹ãåã®ãšã³ããã€ã³ãã䜿çšããŠãç¬èªã®æšè«ãå®è¡ã§ããŸãã
Public-Chat:
補åããã¥ã¡ã³ããèªã¿èŸŒãŸãããšãGradio ã¢ããªã¯ç°¡çŽ åããããèªã¿åãå°çšããã£ããããããšãªããæ°ãã AI Workbench ã¢ããªå
±ææ©èœãéããŠãšã³ã ãŠãŒã¶ãŒãšå
±æã§ããŸãã
å³ 4. Public-Chat ãŠã§ã ã¢ããªã¯ãèªã¿åãå°çšã®ã·ã³ãã«ãªãã£ãã ã¢ããªã±ãŒã·ã§ã³ã§ããšã³ããŠãŒã¶ãŒã®å©çšãšå
±æã容æã«ããŸã
Competition-Kernel ãµã³ãã« ãããžã§ã¯ã
ãã®ãããžã§ã¯ãã¯ãKaggle ã³ã³ããã£ã·ã§ã³ã«åãçµãéã«ç°¡åãªããŒã«ã« ãšã¯ã¹ããªãšã³ã¹ãæäŸããŸãã AI Workbench ãéããŠãããŒã«ã« ãã·ã³ããŸãã¯ã¯ã©ãŠã ã€ã³ã¹ã¿ã³ã¹ã掻çšããã³ã³ããã£ã·ã§ã³ã®ããŒã¿ã»ãããã³ãŒãã®äœæãã¢ãã«ã®æ§ç¯ãçµæã®æåºãªã©ãç°¡åã«å®è¡ããããšãã§ããŸãã Competition Kernel ãããžã§ã¯ãã§ã¯ã以äžãæäŸããŸãã
ç¬èªã® GPU ã§éçºãšãã¹ããè¡ããæ°åã§ã»ããã¢ãããšã«ã¹ã¿ãã€ãºãè¡ã管çãããäœéšã
GitHub ãŸã㯠GitLab ã«ããã³ãŒãã®ããŒãžã§ã³ ã³ã³ãããŒã«ãšè¿œè·¡ãã³ã©ãã¬ãŒã·ã§ã³ã容æã
ããŒã«ã«ã§å°çš IDE ã䜿çšãããã¯ãŒ: å
ç¢ãªãããã°ãã€ã³ããªãžã§ã³ããªã³ãŒãè£å®ãåºç¯ãªã«ã¹ã¿ãã€ãº ãªãã·ã§ã³ã
æ¢åã®ããŒã¿ ãœãŒã¹ (å€éšãŸãã¯ç¬èª) ãžã®ç°¡åãªãã©ã°ã€ã³ã
ã€ã³ã¿ãŒãããã䜿ããªã? åé¡ãããŸããããªãã©ã€ã³ã§ãéçºã§ããŸãã
ä»ããå§ããŸããã
ãã® NVIDIA AI Workbench ã®ãªãªãŒã¹ã¯ãGPU ã·ã¹ãã å
šäœã§ AI éçºã«åæ»ãªäœéšãæäŸãã倧ããªäžæ©ãšãªããŸãã ãã®ãªãªãŒã¹ã«ã¯ãGit ã®ãµããŒãã®æ¡åŒµããã«ãã³ã³ãããŒç°å¢ã®ãµããŒããå®å
šãªãŠã§ã ã¢ããªå
±æãªã©ãAI ã¯ãŒã¯ããŒãã§ã®éçºãšã³ã©ãã¬ãŒã·ã§ã³ã®å¹çåãªã©ã®æ°æ©èœãå«ãŸããŸãã ãã®ãªãªãŒã¹ã§å©çšå¯èœãšãªã£ã 3 ã€ã®æ°ãããµã³ãã« ãããžã§ã¯ãã§ãããã®æ©èœããè©Šãããã ãããç¬èªã®ãããžã§ã¯ããäœæããããšãã§ããŸãã
AI Workbench ãå§ããã«ã¯ã
ãŠã§ãããŒãžããã¢ããªã±ãŒã·ã§ã³ãã€ã³ã¹ããŒã«ããŠãã ãã
ã ã€ã³ã¹ããŒã«ãšæŽæ°ã®è©³çŽ°ã«ã€ããŠã¯ã
NVIDIA AI Workbench ã®ããã¥ã¡ã³ã
ãåç
§ããŠãã ããã
ããŒã¿ ãµã€ãšã³ã¹ãã RAG ãŸã§ãããŸããŸãª
NVIDIA AI Workbench ã®ãµã³ãã« ãããžã§ã¯ã
ããçšæããŠããŸãã
åé¡ãå ±åããããä»ã®éçºè
ã«ãã AI Workbench ã®æŽ»çšæ¹æ³ã確èªããã«ã¯ã
NVIDIA AI Workbench éçºè
ãã©ãŒã©ã
ã«ã¢ã¯ã»ã¹ããŠãã ããã
é¢é£æ
å ±
DLI ã³ãŒã¹:
察話å AI Building Conversational AI Applications (ã¢ããªã±ãŒã·ã§ã³ã®æ§ç¯ )
GTC ã»ãã·ã§ã³:
Breaking Barriers: How NVIDIA AI Workbench Makes AI Accessible to All (éå£ã®æç Ž: NVIDIA AI Workbench ã«ãã AI ããã¹ãŠã®äººã
ã«èº«è¿ã«ããæ¹æ³)
ãŠã§ãããŒ:
Virtual Desktop in the Era of AI (AI æ代ã®ä»®æ³ãã¹ã¯ããã)
ãŠã§ãããŒ:
Jumpstart AI Development With Virtual Workstations (ä»®æ³ã¯ãŒã¯ã¹ããŒã·ã§ã³ã§ AI éçºãå é) |
https://developer.nvidia.com/blog/build-multimodal-visual-ai-agents-powered-by-nvidia-nim/ | Build Multimodal Visual AI Agents Powered by NVIDIA NIM | The exponential growth of visual dataâranging from images to PDFs to streaming videosâhas made manual review and analysis virtually impossible. Organizations are struggling to transform this data into actionable insights at scale, leading to missed opportunities and increased risks.
To solve this challenge, vision-language models (VLMs) are emerging as powerful tools, combining visual perception of images and videos with text-based reasoning. Unlike traditional
large language models
(LLMs) that only process text, VLMs empower you to build
visual AI agents
that understand and act on complex multimodal data, enabling real-time decision-making and automation.
Imagine having an intelligent AI agent that can analyze remote camera footage to detect early signs of wildfires or scan business documents to extract critical information buried within charts, tables, and images, all autonomously.
With
NVIDIA NIM microservices
, building these advanced visual AI agents is easier and more efficient than ever. Offering flexible customization, streamlined API integration, and smooth deployment, NIM microservices enable you to create dynamic agents tailored to your unique business needs.
In this post, we guide you through the process of designing and building intelligent visual AI agents using NVIDIA NIM microservices. We introduce the different types of vision AI models available, share four sample applications (streaming video alerts, structured text extraction, multimodal search, and few-shot classification), and provide Jupyter notebooks to get you started. For more information about bringing these models to life, see the
/NVIDIA/metropolis-nim-workflows
GitHub repo.
Types of vision AI models
To build a robust visual AI agent, you have the following core types of vision models at your disposal:
VLMs
Embedding models
Computer vision (CV) models
These models serve as essential building blocks for developing intelligent visual AI agents. While the VLM functions as the core engine of each agent, CV and embedding models can enhance its capabilities, whether by improving accuracy for tasks like object detection or parsing complex documents.
In this post, we use
vision NIM microservices
to access these models. Each vision NIM microservice can be easily integrated into your workflows through simple REST APIs, allowing for efficient model inference on text, images, and videos. To get started, you can experiment with hosted preview APIs on
build.nvidia.com
, without needing a local GPU.
Figure 1. The llama-3.2-vision-90b model on build.nvidia.com
Vision language models
VLMs bring a new dimension to language models by adding vision capabilities, making them multimodal. These models can process images, videos, and text, enabling them to interpret visual data and generate text-based outputs. VLMs are versatile and can be fine-tuned for specific use cases or prompted for tasks such as Q&A based on visual inputs.
NVIDIA and its partners offer several VLMs as NIM microservices, each differing in size, latency, and capabilities (Table 1).
Company | Model | Size | Description
NVIDIA | VILA | 40B | A powerful general-purpose model built on SigLIP and Yi that is suitable for nearly any use case.
NVIDIA | Neva | 22B | A medium-sized model combining NVGPT and CLIP and offering the functionality of much larger multimodal models.
Meta | Llama 3.2 | 90B/11B | The first vision-capable Llama model in two sizes, excelling in a range of vision-language tasks and supporting higher-resolution input.
Microsoft | phi-3.5-vision | 4.2B | A small, fast model that excels at OCR and is capable of processing multiple images.
Microsoft | Florence-2 | 0.7B | A multi-task model capable of captioning, object detection, and segmentation using simple text prompts.
Table 1. VLM NIM microservices
Embedding models
Embedding models convert input data (such as images or text) into dense feature-rich vectors known as embeddings. These embeddings encapsulate the essential properties and relationships within the data, enabling tasks like similarity search or classification. Embeddings are typically stored in
vector databases
where GPU-accelerated search can quickly retrieve relevant data.
Embedding models play a crucial role in creating intelligent agents. For example, they support
retrieval-augmented generation
(RAG) workflows, enabling agents to pull relevant information from diverse data sources and improve accuracy through in-context learning.
Company | Model | Description | Use Cases
NVIDIA | NV-CLIP | Multimodal foundation model generating text and image embeddings | Multimodal search, zero-shot classification
NVIDIA | NV-DINOv2 | Vision foundation model generating high-resolution image embeddings | Similarity search, few-shot classification
Table 2. Embedding NIM microservices
Computer vision models
CV models focus on specialized tasks like image classification, object detection, and optical character recognition (OCR). These models can augment VLMs by adding detailed metadata, improving the overall intelligence of AI agents.
Company | Model | Description | Use Cases
NVIDIA | Grounding Dino | Open-vocabulary object detection | Detect anything
NVIDIA | OCDRNet | Optical character detection and recognition | Document parsing
NVIDIA | ChangeNet | Detects pixel-level changes between two images | Defect detection, satellite imagery analysis
NVIDIA | Retail Object Detection | Pretrained to detect common retail items | Loss prevention
Table 3. Computer vision NIM microservices
Build visual AI agents with vision NIM microservices
Here are real-world examples of how the vision NIM microservices can be applied to create powerful visual AI agents.
To make application development with NVIDIA NIM microservices more accessible, we have published a collection of examples on GitHub. These examples demonstrate how to use NIM APIs to build or integrate them into your applications. Each example includes a Jupyter notebook tutorial and demo that can be easily launched, even without GPUs.
On the
NVIDIA API Catalog
, select a model page, such as
Llama 3.1 405B
. Choose
Get API Key
and
enter your business email
for a 90-day
NVIDIA AI Enterprise
license, or use your personal email to
access NIM
through the
NVIDIA Developer Program
.
On the
/NVIDIA/metropolis-nim-workflows
GitHub repo, explore the Jupyter notebook tutorials and demos. These workflows showcase how vision NIM microservices can be combined with other components, like vector databases and LLMs to build powerful AI agents that solve real-world problems. With your API key, you can easily recreate the workflows showcased in this post, giving you hands-on experience with Vision NIM microservices.
Here are a few example workflows:
VLM streaming video alerts agent
Structured text extraction agent
Few-shot classification with NV-DINOv2 agent
Multimodal search with NV-CLIP agent
VLM streaming video alerts agent
With vast amounts of video data generated every second, it's impossible to manually review footage for key events like package deliveries, forest fires, or unauthorized access.
This workflow shows how to use VLMs, Python, and OpenCV to build an AI agent that
autonomously monitors live streams for user-defined events
. When an event is detected, an alert is generated, saving countless hours of manual video review. Thanks to the flexibility of VLMs, new events can be detected simply by changing the prompt; there is no need to build and train a custom CV model for each new scenario.
Video 1. Visual AI Agent Powered by NVIDIA NIM
In Figure 2, the VLM runs in the cloud while the video streaming pipeline operates locally. This setup enables the demo to run on almost any hardware, with the heavy computation offloaded to the cloud through NIM microservices.
Figure 2. Streaming video alert agent architecture
Here are the steps for building this agent; a minimal code sketch follows the list:
Load and process the video stream
: Use OpenCV to load a video stream or file, decode it, and subsample frames.
Create REST API endpoints:
Use FastAPI to create control REST API endpoints where users can input custom prompts.
Integrate with the VLM API:
A wrapper class handles interactions with the VLM API by sending video frames and user prompts. It forms the NIM API requests and parses the response.
Overlay responses on video:
The VLM response is overlaid onto the input video, streamed out using OpenCV for real-time viewing.
Trigger alerts:
Send the parsed response over a WebSocket server to integrate with other services, triggering notifications based on detected events.
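Putting steps 1, 3, and 5 together, here is a minimal sketch using OpenCV and requests. The endpoint URL, payload shape, camera address, and prompt are illustrative assumptions only; check the model page on build.nvidia.com and the GitHub notebook for the exact NIM request schema for your chosen VLM.

```python
import base64
import cv2
import requests

INVOKE_URL = "https://ai.api.nvidia.com/v1/vlm/nvidia/neva-22b"   # assumed endpoint; see the model page
API_KEY = "nvapi-..."                                             # placeholder
PROMPT = "Is there a fire in this scene? Answer yes or no, then explain briefly."

def ask_vlm(frame) -> str:
    """Send one frame and the user prompt to the hosted VLM (payload format is an assumption)."""
    _, jpeg = cv2.imencode(".jpg", frame)
    b64 = base64.b64encode(jpeg.tobytes()).decode()
    payload = {
        "messages": [{"role": "user",
                      "content": f'{PROMPT} <img src="data:image/jpeg;base64,{b64}" />'}],
        "max_tokens": 128,
    }
    resp = requests.post(INVOKE_URL, json=payload,
                         headers={"Authorization": f"Bearer {API_KEY}"})
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

cap = cv2.VideoCapture("rtsp://camera.local/stream")              # or a local video file path
frame_idx = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 30 == 0:                                       # subsample roughly once per second at 30 FPS
        answer = ask_vlm(frame)
        if answer.lower().startswith("yes"):
            print("ALERT:", answer)                               # hand off to a WebSocket/notification service here
    frame_idx += 1
cap.release()
```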
For more information about building a VLM-powered streaming video alert agent, see the
/NVIDIA/metropolis-nim-workflows
notebook tutorial and demo on GitHub. You can experiment with different VLM NIM microservices to find the best model for your use case.
For more information about how VLMs can transform edge applications with
NVIDIA Jetson
and
Jetson Platform Services
, see
Develop Generative AI-Powered Visual AI Agents for the Edge
and explore additional resources on the
Jetson Platform Services
page.
Structured text extraction agent
Many business documents are stored as images rather than searchable formats like PDFs. This presents a significant challenge when it comes to searching and processing these documents, often requiring manual review, tagging, and organizing.
While optical character detection and recognition (OCDR) models have been around for a while, they often return cluttered results that fail to retain the original formatting or interpret the document's visual data. This becomes especially challenging when working with documents in irregular formats, such as photo IDs, which come in various shapes and sizes.
Traditional CV models make processing such documents time-consuming and costly. However, by combining the flexibility of VLMs and LLMs with the precision of OCDR models, you can build a
powerful text-extraction pipeline to autonomously parse documents
and store user-defined fields in a database.
Figure 3. Structured text extraction agent architecture
Here are the structured text-extraction pipeline building steps, followed by a short sketch of the formatting step:
Document input:
Provide an image of the document to an OCDR model, such as OCDRNet or Florence, which returns metadata for all the detected characters in the document.
VLM integration:
The VLM processes the userâs prompt specifying the desired fields and analyzes the document. It uses the detected characters from the OCDR model to generate a more accurate response.
LLM formatting:
The response of the VLM is passed to an LLM, which formats the data into JSON, presenting it as a table.
Output and storage:
The extracted fields are now in a structured format, ready to be inserted into a database or stored for future use.
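As a hedged sketch of the LLM formatting step, the snippet below assumes the OCDR and VLM outputs from steps 1 and 2 have already been collected into a string, and it calls a hosted LLM through the OpenAI-compatible endpoint on build.nvidia.com; the model name, prompt wording, and field list are illustrative assumptions.

```python
import json
from openai import OpenAI   # the hosted NIM endpoints expose an OpenAI-compatible API

client = OpenAI(base_url="https://integrate.api.nvidia.com/v1", api_key="nvapi-...")  # placeholder key

fields = ["name", "date_of_birth", "id_number"]                    # user-defined fields to extract
vlm_answer = "Name: Jane Doe, DOB: 1990-04-12, ID: X1234567"       # stands in for the VLM output from step 2

completion = client.chat.completions.create(
    model="meta/llama-3.1-70b-instruct",                           # any instruct LLM on build.nvidia.com
    messages=[{"role": "user",
               "content": f"Return only a JSON object with the keys {fields}, "
                          f"filled in from this text: {vlm_answer}"}],
    temperature=0.0,
)

# The model may wrap its answer in extra text; in practice, strip any code fences before parsing.
record = json.loads(completion.choices[0].message.content)
print(record)                                                      # structured row, ready for a database insert
```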
Figure 4. Structured text extraction example with vision NIM microservices
The preview APIs make it easy to experiment by combining multiple models to build complex pipelines. From the demo UI, you can switch between different VLMs, OCDR, and LLM models available on
build.nvidia.com
for quick experimentation.
Few-shot classification with NV-DINOv2
NV-DINOv2 generates embeddings from high-resolution images, making it ideal for tasks requiring detailed analysis, such as defect detection with only a few sample images. This workflow demonstrates how to build a
scalable few-shot classification pipeline
using NV-DINOv2 and a Milvus vector database.
Figure 5. Few-shot classification with NV-DINOv2
Here is how the few-shot classification pipeline works; see the sketch after this list:
Define classes and upload samples:
Users define classes and upload a few sample images for each. NV-DINOv2 generates embeddings from these images, which are then stored in a Milvus vector database along with the class labels.
Predict new classes:
When a new image is uploaded, NV-DINOv2 generates its embedding, which is compared with the stored embeddings in the vector database. The closest neighbors are identified using the k-nearest neighbors (k-NN) algorithm, and the majority class among them is predicted.
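To make the voting step concrete, here is a small runnable sketch that uses toy NumPy vectors in place of the NV-DINOv2 embeddings and a brute-force cosine-similarity search in place of Milvus; the real workflow generates the embeddings with the NV-DINOv2 NIM microservice and runs the search inside the vector database.

```python
import numpy as np
from collections import Counter

def knn_predict(query_emb, ref_embs, ref_labels, k=5):
    """Majority vote among the k stored embeddings most similar (cosine) to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    r = ref_embs / np.linalg.norm(ref_embs, axis=1, keepdims=True)
    top_k = np.argsort(r @ q)[::-1][:k]
    return Counter(ref_labels[i] for i in top_k).most_common(1)[0][0]

# Toy stand-ins for NV-DINOv2 embeddings: two loose clusters, one per user-defined class.
rng = np.random.default_rng(0)
defect_center, ok_center = rng.normal(size=768), rng.normal(size=768)
ref_embs = np.vstack([defect_center + 0.1 * rng.normal(size=(10, 768)),
                      ok_center + 0.1 * rng.normal(size=(10, 768))])
ref_labels = ["defect"] * 10 + ["ok"] * 10

query = defect_center + 0.1 * rng.normal(size=768)    # embedding of a newly uploaded image
print(knn_predict(query, ref_embs, ref_labels))       # -> defect
```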
Multimodal search with NV-CLIP
NV-CLIP offers a unique advantage: the ability to embed both text and images, enabling
multimodal search
. By converting text and image inputs into embeddings within the same vector space, NV-CLIP facilitates the retrieval of images that match a given text query. This enables highly flexible and accurate search results.
Figure 6. Multimodal search (image and text) with NV-CLIP
In this workflow, users upload a folder of images, which are embedded and stored in a vector database. Using the UI, they can type a query, and NV-CLIP retrieves the most similar images based on the input text.
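The retrieval step boils down to ranking image embeddings by similarity to the text-query embedding in the shared vector space. Here is a hedged sketch of that ranking with NumPy, assuming the embeddings have already been produced by NV-CLIP (the demo itself stores them in a vector database rather than a plain array):

```python
import numpy as np

def search_images(text_emb, image_embs, filenames, k=3):
    """Return the k image files whose NV-CLIP embeddings are closest (cosine) to the text-query embedding."""
    t = text_emb / np.linalg.norm(text_emb)
    imgs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    scores = imgs @ t
    order = np.argsort(scores)[::-1][:k]
    return [(filenames[i], float(scores[i])) for i in order]

# image_embs: one row per uploaded image, embedded with NV-CLIP at ingest time
# text_emb:   the user's query string, embedded with the same NV-CLIP model at search time
```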
More advanced agents can be built using this approach with VLMs to create multimodal RAG workflows, enabling visual AI agents to build on past experiences and improve responses.
Get started with visual AI agents today
Ready to dive in and start building your own visual AI agents? Use the code provided in the
/NVIDIA/metropolis-nim-workflows
GitHub repo as a foundation to develop your own custom workflows and AI solutions powered by NIM microservices. Let the example inspire new applications that solve your specific challenges.
For any technical questions or support, join our community and engage with experts in the
NVIDIA Visual AI Agent forum
. | https://developer.nvidia.com/ja-jp/blog/build-multimodal-visual-ai-agents-powered-by-nvidia-nim/ | NVIDIA NIM ã«ãããã«ãã¢ãŒãã« ããžã¥ã¢ã« AI ãšãŒãžã§ã³ãã®æ§ç¯ | Reading Time:
3
minutes
ç»åãã PDFãã¹ããªãŒãã³ã°åç»ã«è³ããŸã§ãããžã¥ã¢ã« ããŒã¿ãææ°é¢æ°çã«æ¥å¢ããŠãããããæåã«ããã¬ãã¥ãŒãšåæã¯äºå®äžäžå¯èœã«ãªã£ãŠããŸããäŒæ¥ã¯ããã®ããŒã¿ã倧èŠæš¡ã«å®çšçãªæŽå¯ã«å€ããã®ã«èŠåŽããŠããããã®çµæãæ©äŒéžå€±ããªã¹ã¯ã®å¢å€§ã«ã€ãªãã£ãŠããŸãã
ãã®èª²é¡ã解決ããããã«ãç»åãåç»ã®èŠèŠèªèãšããã¹ãããŒã¹ã®æšè«ãçµã¿åããã匷åãªããŒã«ãšããŠãããžã§ã³èšèªã¢ãã« (VLM) ãç»å ŽããŠããŸããããã¹ãã®ã¿ãåŠçããåŸæ¥ã®
倧èŠæš¡èšèªã¢ãã«
(LLM) ãšã¯ç°ãªããVLM ã¯è€éãªãã«ãã¢ãŒãã« ããŒã¿ãç解ããããã«åºã¥ããŠè¡åãã
ããžã¥ã¢ã« AI ãšãŒãžã§ã³ã
ãæ§ç¯ã§ããããããªã¢ã«ã¿ã€ã ã®ææ決å®ãšèªååãå¯èœã«ãªããŸãã
ãªã¢ãŒã ã«ã¡ã©ã®æ åã解æããŠå±±ç«äºã®åæå
åãæ€åºããããããžãã¹ææžãã¹ãã£ã³ããŠãå³è¡šãç»åã«åãããŠããéèŠãªæ
å ±ãæœåºãããããã€ã³ããªãžã§ã³ã㪠AI ãšãŒãžã§ã³ããæ³åããŠã¿ãŠãã ããããã®ãã¹ãŠãèªåŸçã«è¡ãã®ã§ãã
NVIDIA NIM ãã€ã¯ããµãŒãã¹
ã䜿çšããã°ããããã®é«åºŠãªããžã¥ã¢ã« AI ãšãŒãžã§ã³ãã®æ§ç¯ããããŸã§ä»¥äžã«ç°¡åã§å¹ççã«ãªããŸããæè»ãªã«ã¹ã¿ãã€ãºãåçç㪠API çµ±åãã¹ã ãŒãºãªãããã€ãªã©ãå¯èœã«ãã NIM ãã€ã¯ããµãŒãã¹ã«ãããç¬èªã®ããžãã¹ ããŒãºã«åãããåçãªãšãŒãžã§ã³ããäœæã§ããŸãã
ãã®èšäºã§ã¯ãNVIDIA NIM ãã€ã¯ããµãŒãã¹ã䜿çšããŠã€ã³ããªãžã§ã³ããªããžã¥ã¢ã« AI ãšãŒãžã§ã³ããèšèšããã³æ§ç¯ããããã»ã¹ã«ã€ããŠèª¬æããŸããçŸåšå©çšå¯èœãªããŸããŸãªçš®é¡ã®ããžã§ã³ AI ã¢ãã«ã玹ä»ãã4 ã€ã®ãµã³ãã« ã¢ããªã±ãŒã·ã§ã³ (ã¹ããªãŒãã³ã°åç»ã¢ã©ãŒããæ§é åããã¹ãæœåºããã«ãã¢ãŒãã«æ€çŽ¢ãFew-shot åé¡) ãå
±æããJupyter ããŒãããã¯ãæäŸããŠå©çšãéå§ãããæäŒããããŸãããããã®ã¢ãã«ãå®éã«äœ¿çšããæ¹æ³ã®è©³çŽ°ã«ã€ããŠã¯ãGitHub ãªããžããª
/NVIDIA/metropolis-nim-workflows
ãåç
§ããŠãã ããã
ããžã§ã³ AI ã¢ãã«ã®çš®é¡
å
ç¢ãªããžã¥ã¢ã« AI ãšãŒãžã§ã³ããæ§ç¯ããã«ã¯ã以äžã®äžæ žãšãªãããžã§ã³ ã¢ãã«ãå©çšã§ããŸãã
VLM
åã蟌ã¿ã¢ãã«
ã³ã³ãã¥ãŒã¿ãŒ ããžã§ã³ (CV) ã¢ãã«
ãããã®ã¢ãã«ã¯ãã€ã³ããªãžã§ã³ããªããžã¥ã¢ã« AI ãšãŒãžã§ã³ããéçºããããã®ãéèŠãªæ§æèŠçŽ ãšãªããŸããVLM ã¯åãšãŒãžã§ã³ãã®ã³ã¢ ãšã³ãžã³ãšããŠæ©èœããŸãããCV ãšåã蟌ã¿ã¢ãã«ã¯ããªããžã§ã¯ãæ€åºãªã©ã®ã¿ã¹ã¯ã®ç²ŸåºŠãåäžãããããè€éãªææžã解æãããããããšã§ããã®æ©èœã匷åã§ããŸãã
ãã®èšäºã§ã¯ã
ããžã§ã³ NIM ãã€ã¯ããµãŒãã¹
ã䜿çšããŠããããã®ã¢ãã«ã«ã¢ã¯ã»ã¹ããŸããåããžã§ã³ NIM ãã€ã¯ããµãŒãã¹ã¯ãã·ã³ãã«ãª REST API ãéããŠç°¡åã«ã¯ãŒã¯ãããŒã«çµ±åã§ãããããããã¹ããç»åãåç»ã«ãããå¹ççãªã¢ãã«æšè«ãå¯èœã«ãªããŸãããŸããããŒã«ã«ã® GPU ãå¿
èŠãšããªãã
build.nvidia.com
ã«ãã¹ããããŠãããã¬ãã¥ãŒ API ãè©ŠããŠã¿ãããšãã§ããŸãã
å³ 1. build.nvidia.com ã® llama-3.2-vision-90b ã¢ãã«
ããžã§ã³èšèªã¢ãã«
VLM ã¯ãããžã§ã³æ©èœãè¿œå ããŠãã«ãã¢ãŒãã«ã«ããããšã§ãèšèªã¢ãã«ã«æ°ããªæ¬¡å
ããããããŸãããããã£ãã¢ãã«ã¯ãç»åãåç»ãããã¹ããåŠçã§ãããããããžã¥ã¢ã« ããŒã¿ã解éããããã¹ãããŒã¹ã®åºåãçæã§ããŸããVLM ã¯æ±çšæ§ãé«ããç¹å®ã®ãŠãŒã¹ ã±ãŒã¹ã«åãããŠãã¡ã€ã³ãã¥ãŒãã³ã°ããããããžã¥ã¢ã«å
¥åã«åºã¥ã Q&A ãªã©ã®ã¿ã¹ã¯ãå®è¡ããããã«ããã³ãããèšå®ãããã§ããŸãã
NVIDIA ãšãã®ããŒãããŒã¯ããµã€ãºãã¬ã€ãã³ã·ãæ©èœãããããç°ãªãè€æ°ã® VLM ã NIM ãã€ã¯ããµãŒãã¹ãšããŠæäŸããŠããŸã (è¡š 1)ã
äŒæ¥å
ã¢ãã«
ãµã€ãº
説æ
NVIDIA
VILA
40B
SigLIP ãš Yi ãåºã«æ§ç¯ããã匷åãã€æ±çšçãªã¢ãã«ã§ãã»ãŒãããããŠãŒã¹ ã±ãŒã¹ã«é©ããŠããŸãã
NVIDIA
Neva
22B
NVGPT ãš CLIP ãçµã¿åãããäžèŠæš¡ã¢ãã«ã§ããã倧èŠæš¡ãªãã«ãã¢ãŒãã« ã¢ãã«ãšåçã®æ©èœãæäŸããŸãã
Meta
Llama 3.2
90B/11B
åããŠããžã§ã³æ©èœã«å¯Ÿå¿ãã 2 ã€ã®ãµã€ãºã® Llama ã¢ãã«ã§ãããŸããŸãªããžã§ã³èšèªã¿ã¹ã¯ã«åªããé«è§£å床ã®å
¥åã«å¯Ÿå¿ããŠããŸãã
Microsoft
phi-3.5-vision
4.2B
OCR ã«åªããè€æ°ã®ç»åãåŠçã§ããå°èŠæš¡ã§é«éãªã¢ãã«ã
Microsoft
Florence-2
0.7B
ã·ã³ãã«ãªããã¹ã ããã³ããã䜿çšããŠããã£ãã·ã§ã³äœæããªããžã§ã¯ãæ€åºãã»ã°ã¡ã³ããŒã·ã§ã³ãå¯èœãªãã«ãã¿ã¹ã¯ ã¢ãã«ã
è¡š 1. VLM NIM ãã€ã¯ããµãŒãã¹
åã蟌ã¿ã¢ãã«
åã蟌ã¿ã¢ãã«ã¯ãå
¥åããŒã¿ (ç»åãããã¹ããªã©) ãåã蟌ã¿ãšåŒã°ããé«å¯åºŠã®ç¹åŸŽè±å¯ãªãã¯ãã«ã«å€æããŸãããã®ãããªåã蟌ã¿ã¯ãããŒã¿ã®æ¬è³ªçãªç¹åŸŽé¢ä¿æ§ããŸãšããé¡äŒŒæ€çŽ¢ãåé¡ãªã©ã®ã¿ã¹ã¯ãå¯èœã«ããŸããåã蟌ã¿ã¯éåžžã
ãã¯ãã« ããŒã¿ããŒã¹
ã«ä¿åãããGPU ã¢ã¯ã»ã©ã¬ãŒã·ã§ã³ã«ããæ€çŽ¢ã§é¢é£ããŒã¿ãè¿
éã«ååŸã§ããŸãã
åã蟌ã¿ã¢ãã«ã¯ãã€ã³ããªãžã§ã³ã ãšãŒãžã§ã³ããäœæããéã«éèŠãªåœ¹å²ãæãããŸããããšãã°ã
æ€çŽ¢æ¡åŒµçæ
(RAG) ã¯ãŒã¯ãããŒããµããŒããããšãŒãžã§ã³ããããŸããŸãªããŒã¿ ãœãŒã¹ããé¢é£æ
å ±ãåŒãåºããæèå
åŠç¿ (In-Context Learning) ãéããŠç²ŸåºŠãåäžãããããšãå¯èœã«ããŸãã
äŒæ¥å
ã¢ãã«
説æ
äºäŸ
NVIDIA
NV-CLIP
ããã¹ãããã³ç»åã®åã蟌ã¿ãçæãããã«ãã¢ãŒãã«ã®åºç€ã¢ãã«
ãã«ãã¢ãŒãã«æ€çŽ¢ããŒãã·ã§ããåé¡
NVIDIA
NV-DINOv2
é«è§£å床ã®ç»åã®åã蟌ã¿ãçæããããžã§ã³åºç€ã¢ãã«
é¡äŒŒæ€çŽ¢ãFew-shot åé¡
è¡š 2. åã蟌㿠NIM ãã€ã¯ããµãŒãã¹
ã³ã³ãã¥ãŒã¿ãŒ ããžã§ã³ ã¢ãã«
CV ã¢ãã«ã¯ãç»ååé¡ããªããžã§ã¯ãæ€åºãå
åŠæåèªè (OCR) ãªã©ã®å°éçã¿ã¹ã¯ã«éç¹ã眮ããŠããŸãããããã®ã¢ãã«ã¯ã詳现ãªã¡ã¿ããŒã¿ãè¿œå ããããšã§ VLM ã匷åããAI ãšãŒãžã§ã³ãã®å
šäœçãªã€ã³ããªãžã§ã³ã¹ãæ¹åããããšãã§ããŸãã
äŒæ¥å
ã¢ãã«
説æ
äºäŸ
NVIDIA
Grounding Dino
ãªãŒãã³ ããã£ãã©ãª ãªããžã§ã¯ãæ€åº
ãããããã®ãæ€åº
NVIDIA
OCDRNet
å
åŠæåæ€åºãšèªè
ææžè§£æ
NVIDIA
ChangeNet
2 ã€ã®ç»åéã®ãã¯ã»ã« ã¬ãã«ã®å€åãæ€åº
æ¬ é¥æ€åºãè¡æç»ååæ
NVIDIA
Retail Object Detection
äžè¬çãªå°å£²ååãæ€åºããããã«äºååŠç¿æžã¿
æ倱é²æ¢
è¡š 3. ã³ã³ãã¥ãŒã¿ãŒ ããžã§ã³ NIM ãã€ã¯ããµãŒãã¹
ããžã§ã³ NIM ãã€ã¯ããµãŒãã¹ã§ããžã¥ã¢ã« AI ãšãŒãžã§ã³ããæ§ç¯
ããžã§ã³ NIM ãã€ã¯ããµãŒãã¹ãé©çšããŠåŒ·åãªããžã¥ã¢ã« AI ãšãŒãžã§ã³ããäœæããæ¹æ³ã®å®äŸã瀺ããŸãã
NVIDIA NIM ãã€ã¯ããµãŒãã¹ã掻çšããã¢ããªã±ãŒã·ã§ã³éçºããã身è¿ãªãã®ã«ããããã«ãNVIDIA 㯠GitHub ã«äžé£ã®äŸãå
¬éããŠããŸãããããã®äŸã§ã¯ãNIM API ã䜿çšããŠã¢ããªã±ãŒã·ã§ã³ãæ§ç¯ãŸãã¯çµ±åããæ¹æ³ã玹ä»ããŠããŸããåã
ã®äŸã«ã¯ãGPU ããªããŠãç°¡åã«èµ·åã§ãã Jupyter ããŒãããã¯ã®ãã¥ãŒããªã¢ã«ãšãã¢ãå«ãŸããŠããŸãã
NVIDIA API ã«ã¿ãã°
ã§ã
Llama3.1 405B
ãªã©ã®ã¢ãã« ããŒãžãéžæããŸãã[
Get API Key
] (API ããŒãå
¥æ) ãéžæã㊠90 æ¥éã®
NVIDIA AI Enterprise
ã©ã€ã»ã³ã¹çšã®
ããžãã¹çšã¡ãŒã«ãå
¥å
ãããã
NVIDIA éçºè
ããã°ã©ã
ãéããŠ
NIM ã«ã¢ã¯ã»ã¹
ããå人çšã¡ãŒã« ã¢ãã¬ã¹ã䜿çšããŸãã
GitHub ãªããžããª
/NVIDIA/metropolis-nim-workflows
ã§ãJupyter ããŒãããã¯ã®ãã¥ãŒããªã¢ã«ãšãã¢ãåç
§ããŸãããããã®ã¯ãŒã¯ãããŒã§ã¯ãããžã§ã³ NIM ãã€ã¯ããµãŒãã¹ããã¯ãã« ããŒã¿ããŒã¹ã LLM ãªã©ã®ä»ã®ã³ã³ããŒãã³ããšçµã¿åãããçŸå®ã®åé¡ã解決ãã匷å㪠AI ãšãŒãžã§ã³ããæ§ç¯ããæ¹æ³ã玹ä»ããŠããŸããAPI ããŒãããã°ããã®èšäºã§çŽ¹ä»ãããŠããã¯ãŒã¯ãããŒãç°¡åã«åçŸã§ããããžã§ã³ NIM ãã€ã¯ããµãŒãã¹ãå®éã«äœéšã§ããŸãã
以äžã«ã¯ãŒã¯ãããŒã®äŸã瀺ããŸãã
VLM ã¹ããªãŒãã³ã°åç»ã¢ã©ãŒã ãšãŒãžã§ã³ã
æ§é åããã¹ãæœåºãšãŒãžã§ã³ã
NV-DINOv2 ãšãŒãžã§ã³ãã«ãã Few-shot åé¡
NV-CLIP ãšãŒãžã§ã³ãã«ãããã«ãã¢ãŒãã«æ€çŽ¢
VLM ã¹ããªãŒãã³ã°åç»ã¢ã©ãŒã ãšãŒãžã§ã³ã
æ¯ç§èšå€§ãªéã®åç»ããŒã¿ãçæãããç¶æ³ã§ãããã±ãŒãžã®é
éã森æç«çœãäžæ£ã¢ã¯ã»ã¹ãªã©ã®éèŠãªã€ãã³ãã®æ åãæåã§ç¢ºèªããããšã¯äžå¯èœã§ãã
ãã®ã¯ãŒã¯ãããŒã§ã¯ãVLMãPythonãOpenCV ã䜿çšããŠã
ãŠãŒã¶ãŒå®çŸ©ã€ãã³ãã®ã©ã€ã ã¹ããªãŒã ãèªåŸçã«ç£èŠãã
AI ãšãŒãžã§ã³ããæ§ç¯ããæ¹æ³ã瀺ããŠããŸããã€ãã³ããæ€åºããããšãã¢ã©ãŒããçæãããèšå€§ãªæéãèŠããæåã§ã®åç»ã¬ãã¥ãŒã«ãããæéãççž®ã§ããŸããVLM ã®æè»æ§ã®ãããã§ãããã³ãããå€æŽããããšã§æ°ããã€ãã³ããæ€åºã§ããŸãããã®ãããæ°ããã·ããªãªããšã«ã«ã¹ã¿ã CV ã¢ãã«ãæ§ç¯ããã³ãã¬ãŒãã³ã°ããå¿
èŠããããŸããã
åç» 1. NVIDIA NIM ã掻çšããããžã¥ã¢ã« AI ãšãŒãžã§ã³ã
å³ 2 ã§ã¯ãVLM ã¯ã¯ã©ãŠãã§å®è¡ãããåç»ã¹ããªãŒãã³ã° ãã€ãã©ã€ã³ã¯ããŒã«ã«ã§åäœããŸãããã®ã»ããã¢ããã§ã¯éãèšç®ã NIM ãã€ã¯ããµãŒãã¹ãéããŠã¯ã©ãŠãã«ãªãããŒããããã»ãŒãã¹ãŠã®ããŒããŠã§ã¢äžã§ãã¢ãå®è¡ã§ããããã«ãªããŸãã
å³ 2. ã¹ããªãŒãã³ã°åç»ã¢ã©ãŒã ãšãŒãžã§ã³ãã®ã¢ãŒããã¯ãã£
ãã®ãšãŒãžã§ã³ããæ§ç¯ããæé ã¯ä»¥äžã®ãšããã§ãã
åç»ã¹ããªãŒã ã®èªã¿èŸŒã¿ãšåŠç
: OpenCV ã䜿çšããŠãåç»ã¹ããªãŒã ãŸãã¯ãã¡ã€ã«ã®èªã¿èŸŒã¿ããã³ãŒãããã¬ãŒã ã®ãµããµã³ãã«ãå®è¡ããŸãã
REST API ãšã³ããã€ã³ãã®äœæ:
FastAPI ã䜿çšããŠããŠãŒã¶ãŒãã«ã¹ã¿ã ããã³ãããå
¥åã§ããå¶åŸ¡çšã® REST API ãšã³ããã€ã³ããäœæããŸãã
VLM API ãšã®çµ±å:
ã©ãã㌠ã¯ã©ã¹ã§ã¯ãåç»ãã¬ãŒã ãšãŠãŒã¶ãŒ ããã³ãããéä¿¡ããããšã§ãVLM API ãšã®ãããšããåŠçããŸããNIM API ãªã¯ãšã¹ãã圢æããå¿çã解æããŸãã
å¿çãåç»äžã«ãªãŒããŒã¬ã€:
VLM å¿çã¯å
¥ååç»ã«ãªãŒããŒã¬ã€ãããOpenCV ã䜿çšããŠã¹ããªãŒãã³ã°é
ä¿¡ããããªã¢ã«ã¿ã€ã 衚瀺ãããŸãã
ã¢ã©ãŒããããªã¬ãŒ:
解æãããå¿çã WebSocket ãµãŒããŒã«éä¿¡ããŠä»ã®ãµãŒãã¹ãšçµ±åããæ€åºãããã€ãã³ãã«åºã¥ããŠéç¥ãããªã¬ãŒããŸãã
VLM ã掻çšããã¹ããªãŒãã³ã°åç»ã¢ã©ãŒã ãšãŒãžã§ã³ãã®æ§ç¯ã«é¢ãã詳现ã«ã€ããŠã¯ãGitHub ã®
/NVIDIA/metropolis-nim-workflows
ã®ããŒããã㯠ãã¥ãŒããªã¢ã«ãšãã¢ãã芧ãã ãããããŸããŸãª VLM NIM ãã€ã¯ããµãŒãã¹ãè©ŠãããŠãŒã¹ ã±ãŒã¹ã«æé©ãªã¢ãã«ãèŠã€ããããšãã§ããŸãã
NVIDIA Jetson
ãš
Jetson ãã©ãããã©ãŒã ãµãŒãã¹
ãé§äœ¿ã㊠VLM ããšããž ã¢ããªã±ãŒã·ã§ã³ãã©ã®ããã«å€é©ãããã«ã€ããŠã¯ãã
Edge åãã®çæ AI ã掻çšããããžã¥ã¢ã« AI ãšãŒãžã§ã³ããéçºãã
ããåç
§ããŠãã ããããŸãã
Jetson ãã©ãããã©ãŒã ãµãŒãã¹
ããŒãžã§ã¯è¿œå ã®é¢é£æ
å ±ãã芧ããã ããŸãã
æ§é åããã¹ãæœåºãšãŒãžã§ã³ã
å€ãã®ããžãã¹ææžã¯ãPDF ãªã©ã®æ€çŽ¢å¯èœãªãã©ãŒãããã§ã¯ãªããç»åãšããŠä¿åãããŠããŸãããã®ããããããã®ææžã®æ€çŽ¢ãšåŠçãè¡ãéã«ã¯ãæåã«ããã¬ãã¥ãŒãã¿ã°ä»ããæŽçãå¿
èŠã«ãªãããšãå€ãããããããã倧ããªèª²é¡ãšãªããŸãã
å
åŠæåæ€åºãšèªè (OCDR) ã¢ãã«ã¯ããã°ããåããååšããŠããŸããããå
ã®ãã©ãŒããããä¿æã§ããªãã£ãããããžã¥ã¢ã« ããŒã¿ã解éã§ããªãã£ããããŠãéç¶ãšããçµæãè¿ãããšããããããŸãããããã¯ãããŸããŸãªåœ¢ããµã€ãºãããåçä»ã ID ãªã©ã®äžèŠåãªåœ¢åŒã®ææžãæ±ãå Žåã«ç¹ã«å°é£ã«ãªããŸãã
åŸæ¥ã® CV ã¢ãã«ã¯ããã®ãããªææžã®åŠçã«æéãšã³ã¹ããããããŸããããããVLM ãš LLM ã®æè»æ§ãšãOCDR ã¢ãã«ã®ç²ŸåºŠãçµã¿åãããããšã§ã
匷åãªããã¹ãæœåºãã€ãã©ã€ã³ãæ§ç¯ããŠææžãèªåŸçã«è§£æã
ãããŒã¿ããŒã¹ã«ãŠãŒã¶ãŒå®çŸ©ã®ãã£ãŒã«ããä¿åããããšãã§ããŸãã
å³ 3. æ§é åããã¹ãæœåºãšãŒãžã§ã³ãã®ã¢ãŒããã¯ãã£
æ§é åããã¹ãæœåºãã€ãã©ã€ã³ã®æ§ç¯æé ã¯æ¬¡ã®ãšããã§ãã
ææžå
¥å:
OCDRNet ã Florence ãªã©ã® OCDR ã¢ãã«ã«ææžã®ç»åãæäŸãããšãææžå
ã®æ€åºããããã¹ãŠã®æåã®ã¡ã¿ããŒã¿ãè¿ãããŸãã
VLM ã®çµ±å:
VLM ã¯ãåžæãããã£ãŒã«ããæå®ãããŠãŒã¶ãŒã®ããã³ãããåŠçããææžãåæããŸããOCDR ã¢ãã«ã§æ€åºãããæåã䜿çšããŠãããæ£ç¢ºãªå¿çãçæããŸãã
LLM ã®ãã©ãŒããã:
VLM ã®å¿ç㯠LLM ã«æž¡ãããLLM ã¯ããŒã¿ã JSON 圢åŒã«æŽåœ¢ããè¡šãšããŠè¡šç€ºããŸãã
åºåãšä¿å:
æœåºããããã£ãŒã«ãã¯æ§é åããããã©ãŒãããã«ãŸãšããããããŒã¿ããŒã¹ã«æ¿å
¥ããããå°æ¥ã®äœ¿çšã«åããŠä¿åãããã§ããŸãã
å³ 4. ããžã§ã³ NIM ãã€ã¯ããµãŒãã¹ã«ããæ§é åããã¹ãæœåºã®äŸ
ãã¬ãã¥ãŒ API ã䜿çšãããšãè€æ°ã®ã¢ãã«ãçµã¿åãããŠè€éãªãã€ãã©ã€ã³ãæ§ç¯ããç°¡åã«å®éšãããããšãã§ããŸããã㢠UI ããã
build.nvidia.com
ã§å©çšå¯èœãªå皮㮠VLMãOCDRãLLM ã¢ãã«ãåãæ¿ããããšãã§ããçŽ æ©ãå®éšã§ããŸãã
NV-DINOv2 ã«ãã Few-shot åé¡
NV-DINOv2 ã¯ãé«è§£å床ã®ç»åããåã蟌ã¿ãçæãããããå°æ°ã®ãµã³ãã«ç»åã§ã®æ¬ é¥æ€åºãªã©ã詳现ãªåæãå¿
èŠãšããäœæ¥ã«çæ³çã§ãããã®ã¯ãŒã¯ãããŒã¯ãNV-DINOv2 ãš Milvus ãã¯ãã« ããŒã¿ããŒã¹ã䜿çšããŠã
ã¹ã±ãŒã©ãã«ãª Few-shot åé¡ãã€ãã©ã€ã³
ãæ§ç¯ããæ¹æ³ã瀺ããŠããŸãã
å³ 5. NV-DINOv2ã«ãã Few-shot åé¡
Few-shot åé¡ãã€ãã©ã€ã³ã®ä»çµã¿ã¯æ¬¡ã®ãšããã§ãã
ã¯ã©ã¹ã®å®çŸ©ãšããµã³ãã«ã®ã¢ããããŒã:
ãŠãŒã¶ãŒã¯ã¯ã©ã¹ãå®çŸ©ããããããã«å°æ°ã®ãµã³ãã«ç»åãã¢ããããŒãããŸããNV-DINOv2 ã¯ããããã®ç»åããåã蟌ã¿ãçæããã¯ã©ã¹ ã©ãã«ãšãšãã« Milvus ãã¯ãã« ããŒã¿ããŒã¹ã«ä¿åããŸãã
æ°ããã¯ã©ã¹ãäºæž¬:
æ°ããç»åãã¢ããããŒãããããšãNV-DINOv2 ã¯ãã®åã蟌ã¿ãçæãããã¯ãã« ããŒã¿ããŒã¹ã«ä¿åãããŠããåã蟌ã¿ãšæ¯èŒããŸããkè¿åæ³ (k-NN) ã¢ã«ãŽãªãºã ã䜿çšããŠæãè¿ãé£æ¥ããŒã¿ãç¹å®ãããã®äžããå€æ°æŽŸã¯ã©ã¹ã®ã¯ã©ã¹ãäºæž¬ãããŸãã
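このパイプラインを簡略化した Python のスケッチを以下に示します。埋め込みの取得は仮の関数 get_embedding() とし、ベクトル検索も説明のため NumPy の総当たり k 近傍法で代用しています (実際には NV-DINOv2 NIM マイクロサービスで埋め込みを生成し、Milvus に保存して検索します)。
import numpy as np
from collections import Counter

def get_embedding(image_path: str) -> np.ndarray:
    # Placeholder: in practice this would call the NV-DINOv2 NIM microservice.
    raise NotImplementedError

def build_support_index(support):
    # support: list of (image_path, class_label) pairs, a few labeled examples per class.
    vectors = np.stack([get_embedding(path) for path, _ in support])
    labels = [label for _, label in support]
    return vectors, labels

def classify(image_path, vectors, labels, k: int = 3) -> str:
    # Embed the query image and find its k nearest neighbors by cosine similarity.
    q = get_embedding(image_path)
    sims = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
    top_k = np.argsort(-sims)[:k]
    # Majority vote among the nearest neighbors gives the predicted class.
    return Counter(labels[i] for i in top_k).most_common(1)[0][0]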
NV-CLIP ã«ãããã«ãã¢ãŒãã«æ€çŽ¢
NV-CLIP ã¯ãããã¹ããšç»åã®äž¡æ¹ãåã蟌ãããšãã§ãã
ãã«ãã¢ãŒãã«æ€çŽ¢
ãå¯èœã«ãããšããç¬èªã®å©ç¹ããããŸãã ããã¹ããšç»åã®å
¥åãåããã¯ãã«ç©ºéå
ã®åã蟌ã¿ã«å€æããããšã§ãNV-CLIP ã¯ãæå®ãããããã¹ã ã¯ãšãªã«äžèŽããç»åã®æ€çŽ¢ã容æã«ããŸããããã«ãããæè»æ§ãé«ãæ£ç¢ºãªæ€çŽ¢çµæãåŸãããŸãã
å³ 6. NV-CLIP ã«ãããã«ãã¢ãŒãã«æ€çŽ¢ (ç»åãšããã¹ã)
ãã®ã¯ãŒã¯ãããŒã§ã¯ããŠãŒã¶ãŒã¯ç»åãã©ã«ããã¢ããããŒãããç»åã¯åã蟌ã¿ã«å€æããããã¯ãã« ããŒã¿ããŒã¹ã«ä¿åãããŸããUI ã䜿çšããŠã¯ãšãªãå
¥åãããšãNV-CLIP ã¯ãå
¥åããã¹ãã«åºã¥ããŠæãé¡äŒŒããç»åãæ€çŽ¢ããŸãã
ãã®ã¢ãããŒãã VLM ãšäœ¿çšããŠãããé«åºŠãªãšãŒãžã§ã³ããæ§ç¯ãããã«ãã¢ãŒãã« RAG ã¯ãŒã¯ãããŒãäœæããããšãã§ããŸããããã«ãããããžã¥ã¢ã« AI ãšãŒãžã§ã³ããéå»ã®çµéšãåºã«æ§ç¯ãããå¿çãæ¹åã§ããããã«ãªããŸãã
ä»ããããžã¥ã¢ã« AI ãšãŒãžã§ã³ããå§ããŸããã
ç¬èªã®ããžã¥ã¢ã« AI ãšãŒãžã§ã³ããæ§ç¯ããæºåã¯ã§ããŸããã? GitHub ãªããžããª
/NVIDIA/metropolis-nim-workflows
ã§æäŸãããŠããã³ãŒããããŒã¹ãšããŠãNIM ãã€ã¯ããµãŒãã¹ãå©çšããç¬èªã®ã«ã¹ã¿ã ã¯ãŒã¯ãããŒãš AI ãœãªã¥ãŒã·ã§ã³ãéçºããŠã¿ãŠãã ããããã®äŸãåèã«ãããªãã®çµç¹åºæã®èª²é¡ã解決ããæ°ããã¢ããªã±ãŒã·ã§ã³ãéçºããŸãããã
技術的な質問やサポートが必要な場合は、NVIDIA のコミュニティに参加し、
NVIDIA ããžã¥ã¢ã« AI ãšãŒãžã§ã³ã ãã©ãŒã©ã
ã®ãšãã¹ããŒããšäº€æµããŠãã ããã
関連情報
GTC ã»ãã·ã§ã³:
Create purpose-built AI using vision and language With multi-modal Foundation Models (ãã«ãã¢ãŒãã«ã®åºç€ã¢ãã«ã掻çšããããžã§ã³ãšèšèªã«ããç®çå¥ AI ã®äœæ)
NGC ã³ã³ãããŒ:
Llama-3.2-90B-Vision-Instruct
NGC ã³ã³ãããŒ:
Llama-3.2-11B-Vision-Instruct
NGC ã³ã³ãããŒ:
rag-application-multimodal-chatbot
SDK:
Llama3 70B Instruct NIM
SDK:
BioNeMo Service |
https://developer.nvidia.com/blog/an-introduction-to-model-merging-for-llms/ | An Introduction to Model Merging for LLMs | One challenge organizations face when customizing
large language models (LLMs)
is the need to run multiple experiments, which produces only one useful model. While the cost of experimentation is typically low, and the results well worth the effort, this experimentation process does involve "wasted" resources, such as compute assets spent without their product being utilized, dedicated developer time, and more.
Model merging combines the weights of multiple customized LLMs, increasing resource utilization and adding value to successful models. This approach provides two key solutions:
Reduces experimentation waste by repurposing "failed experiments"
Offers a cost-effective alternative to joint training
This post explores how models are customized, how model merging works, different types of model merging, and how model merging is iterating and evolving.
Revisiting model customization
This section provides a brief overview of how models are customized and how this process can be leveraged to help build an intuitive understanding of model merging.
Note that some of the concepts discussed are oversimplified for the purpose of building this intuitive understanding of model merging. It is suggested that you familiarize yourself with customization techniques, transformer architecture, and training separately before diving into model merging. See, for example,
Mastering LLM Techniques: Customization
.
The role of weight matrices in models
Weight matrices are essential components in many popular model architectures, serving as large grids of numbers (weights, or parameters) that store the information necessary for the model to make predictions.
As data flows through a model, it passes through multiple layers, each containing its own weight matrix. These matrices transform the input data through mathematical operations, enabling the model to learn from and adapt to the data.
To modify a model's behavior, the weights within these matrices must be updated. Although the specifics of weight modification are not essential, it's crucial to understand that each customization of a base model results in a unique set of updated weights.
Task customization
When fine-tuning an LLM for a specific task, such as summarization or math, the updates made to the weight matrices are targeted towards improving performance on that particular task. This implies that the modifications to the weight matrices are localized to specific regions, rather than being uniformly distributed.
To illustrate this concept, consider a simple analogy where the weight matrices are represented as a sports field that is 100 yards in length. When customizing the model for summarization, the updates to the weight matrices might concentrate on specific areas, such as the 10-to-30 yard lines. In contrast, customizing the model for math might focus updates on a different region, like the 70-to-80 yard lines.
Interestingly, when customizing the model for a related task, such as summarization in the French language, the updates might overlap with the original summarization task, affecting the same regions of the weight matrices (the 25-to-35 yard lines, for example). This overlap suggests an important insight: different task customizations can significantly impact the same areas of the weight matrices.
While the previous example is purposefully oversimplified, the intuition is accurate. Different task customizations will lead to different parts of the weight matrices being updated, and customization for similar tasks might lead to changing the same parts of their respective weight matrices.
This understanding can inform strategies for customizing LLMs and leveraging knowledge across tasks.
Model merging
Model merging is a loose grouping of strategies for combining two or more models, or model updates, into a single model, with the goal of saving resources or improving task-specific performance.
This discussion focuses primarily on the implementation of these techniques through an open-source library developed by
Arcee AI
called
mergekit
. This library simplifies the implementation of various merging strategies.
Many methods are used to merge models, with varying levels of complexity. Here, we'll focus on four main merging methods:
Model Soup
Spherical Linear Interpolation (SLERP)
Task Arithmetic (using Task Vectors)
TIES leveraging DARE
Model Soup
The Model Soup method involves averaging the resultant model weights created by hyperparameter optimization experiments, as explained in
Model Soups: Averaging Weights of Multiple Fine-Tuned Models Improves Accuracy Without Increasing Inference Time
.
Originally tested and verified through computer vision models, this method has shown promising results for LLMs as well. In addition to generating some additional value out of the experiments, this process is simple and not compute intensive.
There are two ways to create Model Soup: naive and greedy. The naive approach involves merging all models sequentially, regardless of their individual performance. In contrast, the greedy implementation follows a simple algorithm:
Rank models by performance on the desired task
Merge the best performing model with the second best performing model
Evaluate the merged model's performance on the desired task
If the merged model performs better, continue with the next model; otherwise, skip the current model and try again with the next best model
This greedy approach ensures that the resulting Model Soup is at least as good as the best individual model.
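To make the greedy selection concrete, here is a minimal sketch in Python. It assumes you already have the candidate checkpoints as state dicts plus two helpers, evaluate() for scoring a model on the target task and average() for the linear weight averaging shown next; all three are illustrative placeholders, not part of mergekit:
def greedy_model_soup(state_dicts, evaluate, average):
    # Rank candidate models by their individual performance on the task (best first).
    ranked = sorted(state_dicts, key=evaluate, reverse=True)
    soup = [ranked[0]]                        # start the soup with the best model
    best_score = evaluate(ranked[0])
    for candidate in ranked[1:]:
        merged = average(soup + [candidate])  # tentatively add the next-best model
        score = evaluate(merged)
        if score >= best_score:               # keep the candidate only if it helps
            soup.append(candidate)
            best_score = score
    return average(soup)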
Figure 1. The Model Soup method outperforms the constituent models when the greedy Model Soup merging technique is used
Each step of creating a Model Soup is implemented by simple weighted and normalized linear averaging of two or more model weights. Both the weighting and normalization are optional, though recommended. The implementation of this from the
mergekit
library is as follows:
res = (weights * tensors).sum(dim=0)
if self.normalize:
res = res / weights.sum(dim=0)
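For intuition, the same weighted and normalized average can be reproduced outside mergekit with plain PyTorch tensors; the two checkpoint values and the mixing weights below are made up for illustration:
import torch

# Values of the same weight matrix from two hypothetical checkpoints, stacked on dim 0.
tensors = torch.stack([torch.tensor([[1.0, 2.0]]), torch.tensor([[3.0, 6.0]])])
weights = torch.tensor([0.7, 0.3]).view(-1, 1, 1)  # per-model mixing weights

res = (weights * tensors).sum(dim=0)
res = res / weights.sum(dim=0)  # normalization step, as in the snippet above
print(res)  # tensor([[1.6000, 3.2000]])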
While this method has shown promising results in the computer vision and language domains, it faces some serious limitations. Specifically, there is no guarantee that the model will be more performant. The linear averaging can lead to degraded performance or loss of generalizability.
The next method, SLERP, addresses some of those specific concerns.
SLERP
Spherical Linear Interpolation, or SLERP, is a method introduced in a 1985 paper titled
Animating Rotation with Quaternion Curves
. It's a "smarter" way of computing the average between two vectors. In a technical sense, it helps compute the shortest path between two points on a curved surface.
This method excels at combining two models. The classic example is imagining the shortest path between two points on the Earth. Technically, the shortest path would be a straight line that goes through the Earth, but in reality it's a curved path on the surface of the Earth. SLERP computes this smooth path to use for averaging two models together while maintaining their unique model weight "surfaces."
The following code snippet is the core of the SLERP algorithm, and is what provides such a good interpolation between the two models:
# Calculate initial angle between v0 and v1
theta_0 = np.arccos(dot)
sin_theta_0 = np.sin(theta_0)
# Angle at timestep t
theta_t = theta_0 * t
sin_theta_t = np.sin(theta_t)
# Finish the slerp algorithm
s0 = np.sin(theta_0 - theta_t) / sin_theta_0
s1 = sin_theta_t / sin_theta_0
res = s0 * v0_copy + s1 * v1_copy
return maybe_torch(res, is_torch)
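The excerpt omits the setup code; a self-contained version, simplified and with a fallback to plain linear interpolation when the two vectors are nearly parallel, could look like this:
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    # Measure the angle between the two (flattened) weight tensors on the unit sphere.
    v0_u = v0 / (np.linalg.norm(v0) + eps)
    v1_u = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.sum(v0_u * v1_u), -1.0, 1.0)
    if np.abs(dot) > 0.9995:
        # Nearly colinear: spherical and linear interpolation coincide.
        return (1.0 - t) * v0 + t * v1
    theta_0 = np.arccos(dot)   # initial angle between v0 and v1
    theta_t = theta_0 * t      # angle at timestep t
    s0 = np.sin(theta_0 - theta_t) / np.sin(theta_0)
    s1 = np.sin(theta_t) / np.sin(theta_0)
    return s0 * v0 + s1 * v1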
Task Arithmetic (using Task Vectors)
This group of model merging methods utilizes Task Vectors to combine models in various ways, increasing in complexity.
Task Vectors: Capturing customization updates
Recalling how models are customized, updates are made to the model's weights, and those updates are captured in the base model matrices. Instead of considering the final matrices as a brand new model, they can be viewed as the difference (or delta) between the base weights and the customized weights. This introduces the concept of a task vector, a structure containing the delta between the base and customized weights.
This is the same intuition behind Low Rank Adaptation (LoRA), but without the further step of factoring the matrices representing the weight updates.
Task Vectors can be simply obtained from customization weights by subtracting out the base model weights.
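In code, a task vector is nothing more than an element-wise difference between two state dicts; a minimal sketch:
def task_vector(base_state_dict, finetuned_state_dict):
    # Per-parameter delta between the customized weights and the base weights.
    return {name: finetuned_state_dict[name] - base_weight
            for name, base_weight in base_state_dict.items()}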
Task Interference: Conflicting updates
Recalling the sports field example, there is a potential for overlap in the updated weights between different customizations. There is some intuitive understanding that customization done for the same task would lead to a higher rate of conflicting updates than customization done for two, or more, separate tasks.
This "conflicting update" idea is more formally defined as Task Interference, and it relates to the potential collision of important updates between two or more Task Vectors.
Task Arithmetic
As introduced in the paper
Editing Models with Task Arithmetic
, Task Arithmetic represents the simplest implementation of a task vector approach. The process is as follows:
Obtain two or more task vectors and merge them linearly as seen in Model Soup.
After the resultant merged task vector is obtained, it is added into the base model.
This process is simple and effective, but has a key weakness: no attention is paid to the potential interference between the task vectors intended to be merged.
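A sketch of those two steps, reusing the task_vector helper above together with an illustrative per-task scaling coefficient:
def task_arithmetic_merge(base_state_dict, task_vectors, coefficients):
    # Linearly combine the task vectors, then add the result back onto the base weights.
    merged = dict(base_state_dict)
    for tv, coeff in zip(task_vectors, coefficients):
        for name, delta in tv.items():
            merged[name] = merged[name] + coeff * delta
    return merged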
TIES-Merging
As introduced in the paper
TIES-Merging: Resolving Interference When Merging Models
, TIES (TrIm, Elect Sign, and Merge) is a method that takes the core ideas of Task Arithmetic and combines them with heuristics for resolving potential interference between the Task Vectors.
The general procedure is, for each weight in the Task Vectors being merged, to consider the magnitude of each incoming weight, then the sign of each incoming weight, and then to average the remaining weights.
Figure 2. A visual representation of the TIES process
This method seeks to resolve interference by letting the models that had the most significant updates for any given weight take precedence during the merging process. In essence, the models that "cared" more about that weight are prioritized over the models that did not.
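A heavily simplified, NumPy-only sketch of the trim, elect-sign, and merge steps on flattened task vectors (real implementations such as mergekit work on full tensors and expose more options):
import numpy as np

def ties_merge(task_vectors: np.ndarray, keep_ratio: float = 0.2) -> np.ndarray:
    # task_vectors has shape (num_models, num_params): one flattened delta per model.
    tv = task_vectors.copy()
    # Trim: keep only the largest-magnitude updates within each task vector.
    for row in tv:
        cutoff = np.quantile(np.abs(row), 1.0 - keep_ratio)
        row[np.abs(row) < cutoff] = 0.0
    # Elect sign: per parameter, choose the sign carrying the larger total magnitude.
    elected = np.sign(tv.sum(axis=0))
    # Merge: average only the surviving updates that agree with the elected sign.
    agree = (np.sign(tv) == elected) & (tv != 0)
    counts = np.maximum(agree.sum(axis=0), 1)
    return np.where(agree, tv, 0.0).sum(axis=0) / counts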
DARE
Introduced in the paper
Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch
, DARE isn't directly a model merging technique. Rather, it's an augmentation that can be considered alongside other approaches. DARE derives its name from the following:
Drops delta parameters with a ratio p And REscales the remaining ones by 1/(1 - p) to approximate the original embeddings.
Instead of trying to address the problem of interference through heuristics, DARE approaches it from a different perspective. In essence, it randomly drops a large fraction of the updates found in a specific task vector by setting them to 0, and then rescales the remaining weights by 1/(1 - p) to compensate for the dropped ones.
DARE has been shown to be effective even when dropping upwards of 90%, or even 99% of the task vector weights.
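The drop-and-rescale step itself is only a few lines; a sketch with NumPy, where p is the drop ratio:
import numpy as np

def dare(task_vector: np.ndarray, p: float = 0.9, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    # Randomly zero out a fraction p of the delta parameters...
    keep_mask = rng.random(task_vector.shape) >= p
    # ...and rescale the survivors by 1 / (1 - p) so the expected update is preserved.
    return task_vector * keep_mask / (1.0 - p)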
Increase model utility with model merging
The concept of model merging offers a practical way to maximize the utility of multiple LLMs, including task-specific fine-tuning done by a larger community. Through techniques like Model Soup, SLERP, Task Arithmetic, TIES-Merging, and DARE, organizations can effectively merge multiple models in the same family in order to reuse experimentation and cross-organizational efforts.
As the techniques behind model merging are better understood and further developed, they are poised to become a cornerstone of the development of performant LLMs. While this post has only scratched the surface, more techniques are constantly under development, including some
evolution-based methods
. Model merging is a budding field in the generative AI landscape, as more applications are being tested and proven. | https://developer.nvidia.com/ja-jp/blog/an-introduction-to-model-merging-for-llms/ | LLM ã®ã¢ãã« ããŒãžã®ãçŽ¹ä» | Reading Time:
2
minutes
倧èŠæš¡èšèªã¢ãã« (LLM)
ãã«ã¹ã¿ãã€ãºããéã«ãçµç¹ãçŽé¢ãã課é¡ã® 1 ã€ã¯ãè€æ°ã®å®éšãå®è¡ããå¿
èŠãããã®ã«ããã®çµæåŸãããã®ã¯ 1 ã€ã®æçšãªã¢ãã«ã®ã¿ãšããããšã§ãã å®éšã«ãããã³ã¹ãã¯éåžžäœããåŽåã«èŠåãææãåŸããããã®ã®ããã®å®éšããã»ã¹ã«ã¯ãå®éšã«å²ãåœãŠãããŠããã©äœ¿çšçã®äœããŸãã¯ãå
šã皌åããŠããªãèšç®æ©ãå°ä»»ã®éçºè
ãè²»ããæéãªã©ããç¡é§ãªããªãœãŒã¹ãå«ãŸããŸãã
ã¢ãã« ããŒãžã¯ãè€æ°ã®ã«ã¹ã¿ãã€ãºããã LLM ã®éã¿ãçµã¿åãããããšã§ããªãœãŒã¹ã®å©çšçãé«ããæåããã¢ãã«ã«ä»å 䟡å€ãå ããŸãã ãã®ã¢ãããŒãã¯ã2 ã€ã®éèŠãªãœãªã¥ãŒã·ã§ã³ãæäŸããŸãã
ã倱æããå®éšããå¥ã®ç®çã«äœ¿çšããããšã§ãå®éšã®ç¡é§ãåæžãã
ãã¬ãŒãã³ã°ã«åå ããããã®ã³ã¹ãå¹çã®é«ã代æ¿æ段ãæäŸãã
æ¬æçš¿ã§ã¯ãã¢ãã«ãã©ã®ããã«ã«ã¹ã¿ãã€ãºãããã®ããã¢ãã« ããŒãžãã©ã®ããã«æ©èœããã®ããããŸããŸãªçš®é¡ã®ã¢ãã« ããŒãžãããã³ã¢ãã« ããŒãžãã©ã®ããã«ç¹°ãè¿ãããé²åããŠããã®ãã«ã€ããŠæ¢ããŸãã
ã¢ãã« ã«ã¹ã¿ãã€ãºå蚪
ãã®ã»ã¯ã·ã§ã³ã§ã¯ãã¢ãã«ãã©ã®ããã«ã«ã¹ã¿ãã€ãºããããããŸããã®ããã»ã¹ãã©ã®ããã«æŽ»çšããããšã§ãã¢ãã« ããŒãžãçŽæçã«ç解ã§ããã®ãã«ã€ããŠç°¡åã«èª¬æããŸãã
ããã§èª¬æãããŠããæŠå¿µã®äžéšã¯ãã¢ãã« ããŒãžã«å¯ŸããçŽæçãªç解ããæ·±ããããã«ãé床ã«åçŽåãããŠããããšããããŸãã ã¢ãã« ããŒãžãå§ããåã«ãã«ã¹ã¿ãã€ãºæè¡ãTransformer ã¢ãŒããã¯ãã£ããã³ãã¬ãŒãã³ã°ã«ã€ããŠã¯ãåå¥ã«ç解ããŠããããšããå§ãããŸãã ããšãã°ã
倧èŠæš¡èšèªã¢ãã«ã®ã«ã¹ã¿ãã€ãºææ³ãéžæãã
ãªã©ãåèã«ããŠãã ããã
ã¢ãã«ã«ãããéã¿è¡åã®åœ¹å²
éã¿è¡åã¯ãå€ãã®äžè¬çãªã¢ãã« ã¢ãŒããã¯ãã£ã«ãããŠå¿
é ã®ã³ã³ããŒãã³ãã§ãããã¢ãã«ãäºæž¬ãè¡ãã®ã«å¿
èŠãªæ
å ±ãæ ŒçŽãã倧ããªã°ãªãã (éã¿ããŸãã¯ãã©ã¡ãŒã¿ãŒ) ãšããŠæ©èœããŸãã
ããŒã¿ã¯ã¢ãã«ãæµããéã«ã¯è€æ°ã®ã¬ã€ã€ãŒãééããŸããããã®åã¬ã€ã€ãŒã«ã¯ç¬èªã®éã¿è¡åãå«ãŸããŠããŸãã ãããã®è¡åã¯ãæ°åŠæŒç®ãéããŠå
¥åããŒã¿ãå€æããã¢ãã«ãããŒã¿ããåŠã³ããããã«é©å¿ã§ããããã«ããŸãã
ã¢ãã«ã®åäœãå€æŽããã«ã¯ããããã®è¡åå
ã®éã¿ãæŽæ°ããå¿
èŠããããŸãã éã¿ãå€æŽããéã®è©³çŽ°ã¯éèŠã§ã¯ãããŸããããããŒã¹ã®ã¢ãã«ãã«ã¹ã¿ãã€ãºãã床ã«ãæŽæ°ãããéã¿ã®äžæãªã»ãããçæãããããšãç解ããããšãéèŠã§ãã
ã¿ã¹ã¯ã®ã«ã¹ã¿ãã€ãº
èŠçŽãæ°åŠãªã©ã®ç¹å®ã®ã¿ã¹ã¯ã®ããã« LLM ããã¡ã€ã³ãã¥ãŒãã³ã°ããå Žåãéã¿è¡åã«è¡ãããæŽæ°ã¯ããã®ç¹å®ã®ã¿ã¹ã¯ã®ããã©ãŒãã³ã¹ãåäžãããããã«è¡ãããŸãã ããã¯ãéã¿è¡åãžã®å€æŽã¯åçã«ååžãããã®ã§ã¯ãªããç¹å®ã®é åã«éå®ãããããšãæå³ããŸãã
ãã®æŠå¿µã説æããããã«ãéã¿è¡åã 100 ã€ãŒãã®ã¹ããŒããã£ãŒã«ãã«èŠç«ãŠãåçŽãªäŸã«ã€ããŠèããŠã¿ãŸããããèŠçŽããããã«ã¢ãã«ãã«ã¹ã¿ãã€ãºããå Žåãéã¿è¡åãžã®æŽæ°ã¯ã10ïœ30 ã€ãŒãã®ã©ã€ã³ãªã©ã®ç¹å®ã®é åã«éäžããå¯èœæ§ããããŸãã 察ç
§çã«ãã¢ãã«ãæ°åŠåãã«ã«ã¹ã¿ãã€ãºããå Žåã70ïœ80 ã€ãŒãã®ã©ã€ã³ãªã©ãå¥ã®é åã«æŽæ°ãéäžããå¯èœæ§ããããŸãã
èå³æ·±ãããšã«ããã©ã³ã¹èªã®èŠçŽãªã©ãé¢é£ããã¿ã¹ã¯ã®ã¢ãã«ãã«ã¹ã¿ãã€ãºããå ŽåãæŽæ°ãå
ã
ã®èŠçŽã¿ã¹ã¯ãšéè€ããéã¿è¡å (äŸãã°ã25ïœ35 ã€ãŒãïŒã®åãé åã«åœ±é¿ãåãŒãå¯èœæ§ããããŸãã ãã®éè€éšåã¯ãéèŠãªæŽå¯ã瀺ããŠããŸããç°ãªãã¿ã¹ã¯ã®ã«ã¹ã¿ãã€ãºããéã¿è¡åå
ã®åãé åã«å€§ããªåœ±é¿ãäžããå¯èœæ§ãããã®ã§ãã
åè¿°ã®äŸã¯æå³çã«åçŽåããããã®ã§ãããçŽæã¯æ£ç¢ºã§ãã ç°ãªãã¿ã¹ã¯ã®ã«ã¹ã¿ãã€ãºã«ãããéã¿è¡åã®ç°ãªãéšåãæŽæ°ãããããšã«ã€ãªãããŸãããŸããé¡äŒŒããã¿ã¹ã¯ãã«ã¹ã¿ãã€ãºãããšãããããã®éã¿è¡åã®åãéšåãå€æŽãããå¯èœæ§ããããŸãã
ãã®ããšãç解ããããšã¯ãLLM ãã«ã¹ã¿ãã€ãºããããã¿ã¹ã¯å
šäœã§ç¥èã掻çšããéã®æŠç¥ã«åœ¹ç«ã¡ãŸãã
ã¢ãã« ããŒãž
ã¢ãã« ããŒãžãšã¯ããªãœãŒã¹ã®ç¯çŽãã¿ã¹ã¯åºæã®ããã©ãŒãã³ã¹åäžãç®çãšããŠã2 ã€ä»¥äžã®ã¢ãã«ãŸãã¯ã¢ãã«ã®æŽæ°ã 1 ã€ã®ã¢ãã«ã«ãŸãšããããšã«é¢é£ããæŠç¥ã倧ãŸãã«ã°ã«ãŒãåããããšã§ãã
ããã§ã¯ã
Arcee AI
ãéçºãã
mergekit
ãšåŒã°ãããªãŒãã³ãœãŒã¹ ã©ã€ãã©ãªãéããŠããããã®æè¡ãå®è£
ããããšã«äž»ã«çŠç¹ãåœãŠãŸãã ãã®ã©ã€ãã©ãªã¯ãããŸããŸãªããŒãžæŠç¥ã®å®è£
ãç°¡çŽ åããŸãã
ã¢ãã«ã®ããŒãžã«äœ¿çšãããæ¹æ³ã¯æ°å€ãããããã®è€éããæ§ã
ã§ãã ããã§ã¯ã䞻㪠4 ã€ã®ããŒãžæ¹æ³ãåãäžããŸãã
Model Soup
çé¢ç·åè£é (SLERP: Spherical Linear Interpolation)
Task Arithmetic (Task Vector ã䜿çš)
DARE ã掻çšãã TIES
Model Soup
Model Soup ã¡ãœããã§ã¯ããã€ã㌠ãã©ã¡ãŒã¿ãŒæé©åå®éšã«ãã£ãŠåŸãããã¢ãã«ã®éã¿ãå¹³ååããŸããããã«ã€ããŠã¯ã
Model Soups: Averaging Weights of Multiple Fine-Tuned Models Improves Accuracy Without Increasing Inference Time
ã«èª¬æãããŠããŸãã
åœåãã³ã³ãã¥ãŒã¿ãŒ ããžã§ã³ ã¢ãã«ãéããŠãã¹ããšæ€èšŒãè¡ããããã®æ¹æ³ã¯ãLLM ã§ãææãªçµæã瀺ããŠããŸãã å®éšããäœããã®ä»å 䟡å€ãçã¿åºãã ãã§ãªãããã®ããã»ã¹ã¯åçŽã§ãèšç®éãå€ããããŸããã
Model Soup ãäœæããæ¹æ³ã«ã¯ãNaive ãš Greedy ã® 2 ã€ã®æ¹æ³ããããŸãã Naive ã¢ãããŒãã§ã¯ãåã
ã®ããã©ãŒãã³ã¹ã«é¢ä¿ãªãããã¹ãŠã®ã¢ãã«ãé 次ããŒãžããŸãã ãããšã¯å¯Ÿç
§çã«ãGreedy å®è£
ã¯ä»¥äžã®åçŽãªã¢ã«ãŽãªãºã ã«åŸã£ããã®ã«ãªããŸãã
ç®çã®ã¿ã¹ã¯ã®ããã©ãŒãã³ã¹ã«åºã¥ããŠã¢ãã«ãã©ã³ã¯ä»ããã
æãè¯ãããã©ãŒãã³ã¹ ã¢ãã«ãšã2 çªç®ã«è¯ãããã©ãŒãã³ã¹ ã¢ãã«ãããŒãžãã
ããŒãžãããã¢ãã«ã®ããã©ãŒãã³ã¹ããç®çã®ã¿ã¹ã¯ã§è©äŸ¡ãã
ããŒãžãããã¢ãã«ãããè¯ãããã©ãŒãã³ã¹ãçºæ®ããå Žåã¯ã次ã®ã¢ãã«ã§ç¶è¡ãããããã§ãªãå Žåã¯ãçŸåšã®ã¢ãã«ãã¹ããããã次ã®æãè¯ãã¢ãã«ã§åè©Šè¡ãã
ãã® Greedy ã¢ãããŒãã§ã¯ãçµæãšããŠåŸããã Model Soup ããå°ãªããšãæè¯ã®åã
ã®ã¢ãã«ãšåçã®å質ã«ãªãããšãä¿èšŒããŸãã
å³ 1. Model Soup ã¡ãœããã¯ãGreedy ã¢ãã« ããŒãžæè¡ã䜿çšããããšã§ãåã
ã®ã¢ãã«ãããåªããæ§èœãçºæ®ããŸã
Model Soup ãäœæããåæé ã¯ã2 ã€ä»¥äžã®ã¢ãã«ã®éã¿ãåçŽã«éã¿ä»ãããæ£èŠåããç·åœ¢å¹³åãåãããšã§å®è£
ãããŸããéã¿ä»ããšæ£èŠåã¯ã©ã¡ãããªãã·ã§ã³ã§ãããæšå¥šãããŸãã
mergekit
ã©ã€ãã©ãªããå®è£
ããå Žåã¯ã以äžã®ããã«ãªããŸãã
res = (weights * tensors).sum(dim=0)
if self.normalize:
res = res / weights.sum(dim=0)
ãã®æ¹æ³ã¯ãã³ã³ãã¥ãŒã¿ãŒ ããžã§ã³ãèšèªã®é åã§ææãªçµæã瀺ããŠããŸãããããã€ãã®æ·±å»ãªéçã«çŽé¢ããŠããŸãã å
·äœçã«ã¯ãã¢ãã«ãããåªããæ§èœãçºæ®ã§ãããšããä¿èšŒã¯ãããŸããã ç·åœ¢å¹³ååã¯ãæ§èœã®äœäžããæ±åæ§èœã®åªå€±ã«ã€ãªããããšããããŸãã
次ã®ã¡ãœãããšãªã SLERP ã¯ããããã®ç¹å®ã®æžå¿µã®äžéšã«å¯ŸåŠããŸãã
SLERP
çé¢ç·åè£é (SLERP: Spherical Linear Interpolation) ã¯ã1985 幎ã«
Animating Rotation with Quaternion Curves
ãšé¡ãããè«æã§çŽ¹ä»ãããæ¹æ³ã§ãã ããã¯ã2 ã€ã®ãã¯ãã«éã®å¹³åãèšç®ãããããã¹ããŒãããªæ¹æ³ã§ãã æè¡çãªæå³ã§ã¯ãæ²é¢äžã® 2 ã€ã®ç¹éã®æççµè·¯ãèšç®ããã®ã«åœ¹ç«ã¡ãŸãã
ãã®æ¹æ³ã§ã¯ã2 ã€ã®ã¢ãã«ãçµã¿åãããããšã«åªããŠããŸãã å
žåçãªäŸãšããŠã¯ãå°çäžã® 2 ã€ã®å°ç¹éã®æççµè·¯ãæ³åããŠã¿ãŠãã ãããæè¡çã«ã¯ãæççµè·¯ã¯å°çã貫ãçŽç·ã«ãªããŸãããå®éã«ã¯å°çè¡šé¢ã«æ²¿ã£ãæ²ç·ã«ãªããŸãã SLERP ã¯ã2 ã€ã®ã¢ãã«ã®ç¬èªã®éã¿ãè¡šé¢ããç¶æããªããå¹³ååãããã®æ»ãããªçµè·¯ãèšç®ããŸãã
以äžã®ã³ãŒãã®æç²ã¯ãSLERP ã¢ã«ãŽãªãºã ã®äžæ žéšåã§ããã2 ã€ã®ã¢ãã«éã®è£éãããŸãè¡ããŸãã
# Calculate initial angle between v0 and v1
theta_0 = np.arccos(dot)
sin_theta_0 = np.sin(theta_0)
# Angle at timestep t
theta_t = theta_0 * t
sin_theta_t = np.sin(theta_t)
# Finish the slerp algorithm
s0 = np.sin(theta_0 - theta_t) / sin_theta_0
s1 = sin_theta_t / sin_theta_0
res = s0 * v0_copy + s1 * v1_copy
return maybe_torch(res, is_torch)
Task Arithmetic (Task Vector ã䜿çš)
ãã®ã¢ãã« ããŒãžæ³ã§ã¯ãTask Vector ã䜿çšããŠããŸããŸãªæ¹æ³ã§ã¢ãã«ãçµã¿åãããããšã§ãè€éããé«ããŠãããŸãã
Task Vector: ã«ã¹ã¿ãã€ãºã®æŽæ°ããã£ããã£
ã¢ãã«ãã©ã®ããã«ã«ã¹ã¿ãã€ãºãããã®ããæãåºããŠãã ãããã¢ãã«ã®éã¿ãæŽæ°ããããããã®æŽæ°ãããŒã¹ ã¢ãã«ã®è¡åã«ãã£ããã£ãããŸãã æçµçãªè¡åãçæ°ããã¢ãã«ãšã¿ãªã代ããã«ãããŒã¹ã®éã¿ãšã«ã¹ã¿ãã€ãºããéã¿ã®å·® (ãŸãã¯ãã«ã¿) ãšããŠèŠãããšãã§ããŸãã ããã«ãããããŒã¹ã®éã¿ãšã«ã¹ã¿ãã€ãºããéã¿ãšã®éã®ãã«ã¿ãå«ãŸããæ§é ã§ãã Task Vector ã®æŠå¿µãå°å
¥ãããŸãã
ããã¯ãLoRA (Low Rank Adaptation) ãšåãèãæ¹ã§ãããéã¿ã®æŽæ°ãè¡šãè¡åãå æ°å解ããè¿œå ã®ã¹ãããã¯ãããŸããã
Task Vector ã¯ãããŒã¹ ã¢ãã«ã®éã¿ãæžç®ããããšã§ãã«ã¹ã¿ãã€ãºã®éã¿ããç°¡åã«ååŸã§ããŸãã
Task Interference: æŽæ°ã®ç«¶å
ã¹ããŒã ãã£ãŒã«ãã®äŸãæãåºããŠãã ãããç°ãªãã«ã¹ã¿ãã€ãºéã§æŽæ°ãããéã¿ãéãªãåãå¯èœæ§ããããŸããã åãã¿ã¹ã¯ã«å¯ŸããŠã«ã¹ã¿ãã€ãºãããšã2 ã€ä»¥äžã®å¥ã
ã®ã¿ã¹ã¯ã«å¯ŸããŠã«ã¹ã¿ãã€ãºãè¡ãå Žåããããé«ãå²åã§æŽæ°ã®ç«¶åãåŒãèµ·ããããšã¯çŽæçã«ç解ã§ããŸãã
ãã®ãæŽæ°ã®ç«¶åããšããèãæ¹ã¯ãããæ£åŒã«ã¯ Task Interference (ã¿ã¹ã¯ã®å¹²æž) ãšããŠå®çŸ©ãããŠããã2 ã€ä»¥äžã® ã¿ã¹ã¯ ãã¯ãã«éã®éèŠãªã¢ããããŒããè¡çªããå¯èœæ§ã«é¢é£ããŠããŸãã
Task Arithmetic
Editing Models with Task Arithmetic
ã®è«æã§çŽ¹ä»ãããããã«ãTask Arithmetic 㯠Task Vector ã¢ãããŒãã®æãåçŽãªå®è£
ã§ãã ããã»ã¹ã¯ä»¥äžã®ãšããã§ãã
2 ã€ä»¥äžã®ã¿ã¹ã¯ ãã¯ãã«ãååŸããModel Soup ã§èŠãããããã«ç·åœ¢ãªããŒãžãããã
çµæãšããŠããŒãžããã Task Vector ãåŸãããããããŒã¹ ã¢ãã«ã«è¿œå ããã
ãã®ããã»ã¹ã¯åçŽã§å¹æçã§ãããããŒãžãããäºå®ã® Task Vector éã®æœåšçãªå¹²æžã«ã¯æ³šæãæãããªããšãããé倧ãªåŒ±ç¹ããããŸãã
TIES-Merging
TIES-Merging: Resolving Interference When Merging Models
ã®è«æã«çŽ¹ä»ãããŠããããã«ãTIES (TrIm Elect Sign and Merge) ã¯ãTask Arithmetic ã®äžæ žãšãªãã¢ã€ãã¢ãåãå
¥ããããã Task Vector éã®æœåšçãªå¹²æžã解決ããããã®ãã¥ãŒãªã¹ãã£ãã¯ãšçµã¿åãããæ¹æ³ã§ãã
äžè¬çãªæé ã§ã¯ãããŒãžããã Task Vector å
ã®éã¿ããšã«ãåå
¥åãããéã¿ã®å€§ããã次ã«åå
¥åãããéã¿ã®ç¬Šå·ãèæ
®ãããã®åŸæ®ãã®éã¿ãå¹³ååããŸãã
å³ 2. TIES ããã»ã¹ã®å³ã«ãã解説
ãã®æ¹æ³ã¯ãä»»æã®éã¿æŽæ°ã«ãããŠæã倧ããªéã¿ã«å¯ŸããæŽæ°ãè¡ã£ãã¢ãã«ããããŒãž ããã»ã¹ã«ãããŠåªå
ãããããã«ããããšã§ãå¹²æžã解決ããããšãããã®ã§ããèŠããã«ããã®éã¿ããããèæ
®ãããã¢ãã«ããããã§ãªãã¢ãã«ãããåªå
ãããŸãã
DARE
Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch
ã®è«æã«çŽ¹ä»ãããŠããéããDARE ã¯çŽæ¥çãªã¢ãã« ããŒãžã®æè¡ã§ã¯ãããŸããã ããããä»ã®ã¢ãããŒããšäžŠè¡ããŠæ€èšããããšãã§ããè£åŒ·ã§ãã DARE ã¯ã以äžããç±æ¥ããŠããŸãã
D
rops delta parameters with a ratio p
A
nd
RE
scales the remaining ones by 1/(1 â p) to approximate the original embeddings (ãã«ã¿ ãã©ã¡ãŒã¿ãŒã p ã®æ¯çã§ããããããæ®ãã®ãã©ã¡ãŒã¿ãŒã 1/ (1 â p) ã§åã¹ã±ãŒã«ããŠå
ã®åã蟌ã¿ã«è¿äŒŒãã)ã
ãã¥ãŒãªã¹ãã£ãã¯ã«ãã£ãŠå¹²æžã®åé¡ã«å¯ŸåŠããããšããã®ã§ã¯ãªããDARE ã¯å¥ã®èŠ³ç¹ããã¢ãããŒãããŸãã å
·äœçã«ã¯ãç¹å®ã® Task Vector ã§èŠã€ãã£ãå€æ°ã®æŽæ°ãã0 ã«èšå®ããããšã§ã©ã³ãã ã«ãããããããããããããéã¿ã®æ¯çã«æ¯äŸããŠæ®ãã®éã¿ãåã¹ã±ãŒã«ããŸãã
DARE ã¯ãTask Vector ã®éã¿ã 90% 以äžãããã㯠99% 以äžãããããããå Žåã§ããæå¹ã§ããããšã瀺ãããŠããŸãã
ã¢ãã« ããŒãžã§ã¢ãã«ã®æçšæ§ãåäžããã
ã¢ãã« ããŒãžã®æŠå¿µã¯ããã倧èŠæš¡ãªã³ãã¥ããã£ã«ãã£ãŠè¡ãããã¿ã¹ã¯åºæã®ãã¡ã€ã³ãã¥ãŒãã³ã°ãã¯ãããšãããè€æ°ã® LLM ã®æçšæ§ãæ倧éã«é«ããå®çšçãªæ¹æ³ãæäŸããŸããModel SoupãSLERPãTask ArithmeticãTIES-MergingãDARE ãªã©ã®æè¡ãéããŠãå®éšãšçµç¹ããŸãããåãçµã¿ãåå©çšããããã«ãå系統ã®è€æ°ã®ã¢ãã«ãå¹æçã«ããŒãžããããšãã§ããŸãã
ã¢ãã« ããŒãžã®èæ¯ã«ããæè¡ãããæ·±ãç解ãããããã«çºå±ããã«ã€ããŠãé«æ§èœãª LLM éçºã®ç€ç³ãšãªãã§ãããããã®æçš¿ã¯ãã»ãã®è¡šé¢ããªãã£ãã«éããŸãããã
é²åã«åºã¥ãææ³
ãªã©ãå«ããããã«å€ãã®æè¡ã絶ãéãªãéçºãããŠããŸããã¢ãã« ããŒãžã¯ãããå€ãã®å¿çšããã¹ããããå®èšŒãããŠãããçæ AI ã®æ°ããªåéãšãªã£ãŠããã®ã§ãã
関連情報
GTC ã»ãã·ã§ã³:
LLM ã€ã³ãã©ã®æ§ç¯ãåŠç¿é床ã®é«éåãçæ AI ã€ãããŒã·ã§ã³ã®ä¿é²ã®ããã®ãšã³ãããŒãšã³ãã®ãœãªã¥ãŒã·ã§ã³ã®èšèš (Aivres æäŸ)
NGC ã³ã³ãããŒ:
genai-llm-playground
NGC ã³ã³ãããŒ:
rag-application-query-decomposition-agent
ãŠã§ãããŒ:
AI ã§å»çã¯ãŒã¯ãããŒãå€é©: CLLM ã«ã€ããŠæ·±ãç解ãã |
https://developer.nvidia.com/blog/bringing-ai-ran-to-a-telco-near-you/ | Bringing AI-RAN to a Telco Near You | Inferencing for generative AI and AI agents will drive the need for AI compute infrastructure to be distributed from edge to central clouds.
IDC predicts
that "Business AI (consumer excluded) will contribute $19.9 trillion to the global economy and account for 3.5% of GDP by 2030."
5G networks must also evolve to serve this new incoming AI traffic. At the same time, there is an opportunity for telcos to become the local AI compute infrastructure for hosting enterprise AI workloads, independent of network connectivity while meeting their data privacy and sovereignty requirements. This is where an accelerated computing infrastructure shines â with the ability to accelerate both Radio signal processing and AI workloads. And most importantly, the same compute infrastructure can be used to process AI and radio access network (RAN) services. This combination has been called
AI-RAN by the telecoms industry
.
NVIDIA is introducing Aerial RAN Computer-1, the worldâs first AI-RAN deployment platform, that can serve AI and RAN workloads concurrently, on a common accelerated infrastructure.
Following the launch of the
AI-RAN Innovation Center by T-Mobile
, the Aerial RAN Computer-1 turns AI-RAN into reality with a deployable platform that telcos can adopt globally. It can be used in small, medium, or large configurations for deployment at cell sites, distributed or centralized sites, effectively turning the network into a multi-purpose infrastructure that serves voice, video, data, and AI traffic.
This is a transformative solution that reimagines wireless networks for AI, with AI. It is also a huge opportunity for telcos to fuel the AI flywheel, leveraging their distributed network infrastructure, low latency, guaranteed quality of service, massive scale, and ability to preserve data privacy, security, and localization: all key requirements for AI inferencing and agentic AI applications.
AI-RAN, AI Aerial, and Aerial RAN Computer-1
AI-RAN is the technology framework to build multipurpose networks that are also AI-native. As telcos embrace AI-RAN, and move from the traditional single-purpose ASIC-based computing networks for RAN to new multi-purpose accelerated computing-based networks serving RAN and AI together, telcos can now participate in the new AI economy and can leverage AI to improve the efficiency of their networks.
NVIDIA AI Aerial
includes three computer systems to design, simulate, train, and deploy AI-RAN-based 5G and 6G wireless networks. Aerial RAN Computer-1 is the base foundation of NVIDIA AI Aerial and provides a commercial-grade deployment platform for AI-RAN.
Aerial RAN Computer-1 (Figure 1) offers a common scalable hardware foundation to run RAN and AI workloads, including software-defined 5G and private 5G RAN from NVIDIA or other RAN software providers, containerized network functions, AI microservices from NVIDIA or partners, and internal or third-party generative AI applications. Aerial RAN Computer-1 is modular by design, enabling it to scale from D-RAN to C-RAN architectures covering rural to dense urban use cases.
NVIDIA CUDA-X Libraries are central to accelerated computing, providing speed, accuracy, and reliability in addition to improved efficiency. That means more work is done in the same power envelope. Most importantly, domain-specific libraries, including telecom-specific adaptations, are key to making Aerial RAN Computer-1 suited for telecom deployments.
NVIDIA DOCA
offers a suite of tools and libraries that can significantly boost the performance enhancements for telco workloads, including RDMA, PTP/timing synchronization, and Ethernet-based fronthaul (eCPRI), as well as AI workloads that are crucial for modern network infrastructure.
Collectively, the full stack enables scalable hardware, common software, and an open architecture to deliver a high-performance AI-RAN together with ecosystem partners.
Figure 1. NVIDIA Aerial RAN Computer-1, as a part of the NVIDIA AI Aerial platform
Benefits of Aerial RAN Computer-1
With Aerial RAN Computer-1, wireless networks can turn into a massively distributed grid of AI and RAN data centers, unleashing new monetization avenues for telcos while paving the way for 6G with a software upgrade.
Benefits of Aerial RAN Computer-1 for telecom service providers include the following:
Monetize with AI and generative AI applications, AI inferencing at the edge, or with GPU-as-a-Service.
Increase utilization of infrastructure by 2-3x compared to single-purpose base stations that are typically only 30% utilized today. Use the same infrastructure to host internal generative AI workloads and other containerized network functions such as UPF and RIC.
Improve radio network performance through site-specific AI learning, with up to 2x gains possible in spectral efficiency. This means direct cost savings per MHz of the acquired spectrum.
Deliver high-performance RAN and AI experiences for next-gen applications that intertwine AI into every interaction. Aerial RAN Computer-1 can service up to 170 Gb/s throughput in RAN-only mode and 25K tokens/sec in AI-only mode, or a combination of both with superior performance compared to traditional networks.
Building blocks of Aerial RAN Computer-1
The key hardware components of Aerial RAN Computer-1 include the following:
NVIDIA GB200 NVL2
NVIDIA Blackwell GPU
NVIDIA Grace CPU
NVLink2 C2C
Fifth-generation NVIDIA NVLink
Key-value caching
MGX reference architecture
Real-time mainstream LLM inference
NVIDIA GB200 NVL2
The NVIDIA GB200 NVL2 platform (Figure 2) used in Aerial RAN Computer-1 revolutionizes data center and edge computing, offering unmatched performance for mainstream large language models (LLMs), vRAN, vector database searches, and data processing.
Powered by two NVIDIA Blackwell GPUs and two NVIDIA Grace CPUs, the scale-out single-node architecture seamlessly integrates accelerated computing into existing infrastructure.
This versatility enables a wide range of system designs and networking options, making the GB200 NVL2 platform an ideal choice for data centers, edge, and cell site locations seeking to harness the power of AI as well as wireless 5G connectivity.
For instance, half of a GB200 server could be allocated to RAN tasks and the other half to AI processing through
Multi-instance GPU (MIG)
technology at a single cell site. For aggregated sites, a full GB200 server could be dedicated to RAN, with another used exclusively for AI. In a centralized deployment, a cluster of GB200 servers could be shared between RAN and AI workloads.
NVIDIA Blackwell GPU
NVIDIA Blackwell is a revolutionary architecture that delivers improved performance, efficiency, and scale. NVIDIA Blackwell GPUs pack 208B transistors and are manufactured using a custom-built TSMC 4NP process. All NVIDIA Blackwell products feature two reticle-limited dies connected by a 10-TB/s chip-to-chip interconnect in a unified single GPU.
NVIDIA Grace CPU
The NVIDIA Grace CPU is a breakthrough processor designed for modern data centers running AI, vRAN, cloud, and high-performance computing (HPC) applications. It provides outstanding performance and memory bandwidth with 2x the energy efficiency of todayâs leading server processors.
NVLink2 C2C
The GB200 NVL2 platform uses NVLink-C2C for a groundbreaking 900 GB/s interconnect between each NVIDIA Grace CPU and NVIDIA Blackwell GPU. Combined with fifth-generation NVLink, this delivers a massive 1.4-TB coherent memory model, fueling accelerated AI and vRAN performance.
Fifth-generation NVIDIA NVLink
To fully harness the power of exascale computing and trillion-parameter AI models, every GPU in a server cluster must communicate seamlessly and swiftly.
Fifth-generation NVLink is a high-performance interconnect to deliver accelerated performance from the GB200 NVL2 platform.
Key-value caching
Key-value (KV) caching
improves LLM response speeds by storing conversation context and history.
GB200 NVL2 optimizes KV caching through its fully coherent NVIDIA Grace GPU and NVIDIA Blackwell GPU memory connected by NVLink-C2C, 7x faster than PCIe. This enables LLMs to predict words faster than x86-based GPU implementations.
MGX reference architecture
MGX GB200 NVL2 is a 2:2 configuration with CPU C-Links and GPU NVLinks connected.
HPM contains the following components:
NVIDIA Grace CPUs (2)
Connectors for GPU pucks and I/O cards
GPU modules populated in 2U AC Server (2)
Each pluggable GPU module contains the GPU, B2B connection, and NVLink connectors.
Figure 2. NVIDIA GB200 NVL2 platform layout
GPU Compute: 40 PFLOPS FP4 | 20 PFLOPS FP8/FP6 (10x GH200)
GPU Memory: Up to 384 GB
CPU: 144-core ARMv9, 960 GB LPDDR5, 1.4x perf & 30% lower power than 2x SPR
CPU to GPU: NVLink C2C, 900 GB/s bi-dir. per GPU, cache-coherent
GPU to GPU: NVLink, 1,800 GB/s bi-dir.
Scale-Out: Spectrum-X Ethernet or InfiniBand Connect-X or BlueField
OS: Single OS with unified address space covering 2 CPU + 2 GPU
System Power: Full system ~3,500 W, configurable
Schedule: Sample: Q4 2024; MP: Q1 2025
Table 1. GB200 NVL2 platform features
Real-time mainstream LLM inference
The GB200 NVL2 platform introduces massive coherent memory up to 1.3 TB shared between two NVIDIA Grace CPUs and two NVIDIA Blackwell GPUs. This shared memory is coupled with fifth-generation NVIDIA NVLink and high-speed, chip-to-chip (C2C) connections to deliver 5x faster real-time LLM inference performance for mainstream language models, such as Llama3-70B.
With an input sequence length of 256, an output sequence length of 8000, FP4 precision, the GB200 NVL2 platform can produce up to 25K tokens/sec, which is 2.16B tokens/day.
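That daily figure follows directly from the per-second rate:
tokens_per_day = 25_000 * 60 * 60 * 24  # 25K tokens/sec sustained over 86,400 seconds
print(tokens_per_day)                   # 2,160,000,000, i.e. about 2.16B tokens/day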
Figure 3 shows how GB200 NVL2 performs when supporting AI and RAN workloads.
Figure 3. Compute utilization for RAN and AI in GB200 NVL2
Here's what platform tenancy looks like for RAN and AI on the GB200 NVL2 platform:
Workload at 100% utilization
RAN: ~36x 100 MHz 64T64R
*Tokens: 25K tokens/sec
AI: ~$10/hr. | ~$90K/year
Workload at 50:50 split utilization
RAN: ~18x 100 MHz 64T64R
*Tokens: 12.5K tokens/sec
AI: ~$5/hr. | ~$45K/year
*Token AI workload: Llama-3-70B FP4 | Sequence lengths input 256 / output 8K
Supporting hardware for Aerial RAN Computer-1
NVIDIA BlueField-3 and NVIDIA networking Spectrum-X are the supporting hardware for Aerial RAN Computer-1.
NVIDIA BlueField-3
NVIDIA BlueField-3
DPUs enable real-time data transmission with precision 5G timing required for fronthaul eCPRI traffic.
NVIDIA offers a full IEEE 1588v2 Precision Time Protocol (PTP) software solution. NVIDIA PTP software solutions are designed to meet the most demanding PTP profiles. NVIDIA BlueField-3 incorporates an integrated PTP hardware clock (PHC) that enables the device to achieve sub-20 nanosecond accuracy while offering timing-related functions, including time-triggered scheduling and time-based, software-defined networking (SDN) accelerations.
This technology also enables software applications to transmit fronthaul, RAN-compatible data in high bandwidth.
NVIDIA networking Spectrum-X
The edge and data center networks play a crucial role in driving AI and wireless advancements and performance, serving as the backbone for distributed AI model inference, generative AI, and world-class vRAN performance.
NVIDIA BlueField-3 DPUs enable efficient scalability across hundreds and thousands of NVIDIA Blackwell GPUs for optimal application performance.
The
NVIDIA Spectrum-X
Ethernet platform is designed specifically to improve the performance and efficiency of Ethernet-based AI clouds and includes all the required functionality for 5G timing synchronization. It delivers 1.6x better AI networking performance compared to traditional Ethernet, along with consistent, predictable performance in multi-tenant environments.
When Aerial RAN Computer-1 is deployed in a rack configuration, the Spectrum-X Ethernet switch serves as a dual-purpose fabric. It handles both fronthaul and AI (east-west) traffic on the compute fabric, while also carrying backhaul or midhaul and AI (north-south) traffic on the converged fabric. The remote radio units terminate at the switch in compliance with the eCPRI protocol.
Software stacks on Aerial RAN Computer-1
The key software stacks on Aerial RAN Computer-1 include the following:
NVIDIA Aerial CUDA-Accelerated RAN
NVIDIA AI Enterprise and NVIDIA NIM
NVIDIA Cloud Functions
NVIDIA Aerial CUDA-Accelerated RAN
NVIDIA Aerial CUDA-Accelerated RAN
is the primary NVIDIA-built RAN software for 5G and private 5G running on Aerial RAN Computer-1.
It includes NVIDIA GPU-accelerated interoperable PHY and MAC layer libraries that can be easily modified and seamlessly extended with AI components. These hardened RAN software libraries can also be used by other software providers, telcos, cloud service providers (CSPs), and enterprises for building custom commercial-grade, software-defined 5G and future 6G radio access networks (RANs).
Aerial CUDA-Accelerated RAN is integrated with
NVIDIA Aerial AI Radio Frameworks
, which provides a package of AI enhancements to enable training and inference in the RAN using the framework toolsâpyAerial, NVIDIA Aerial Data Lake, and
NVIDIA Sionna
.
It is also complemented by
NVIDIA Aerial Omniverse Digital Twin
, a system-level network digital twin development platform that enables physically accurate simulations of wireless systems.
NVIDIA AI Enterprise and NVIDIA NIM
NVIDIA AI Enterprise
is the software platform for enterprise generative AI.
NVIDIA NIM
is a collection of microservices that simplify the deployment of foundation models for generative AI applications.
Collectively, they provide easy-to-use microservices and blueprints that accelerate data science pipelines and streamline the development and deployment of production-grade co-pilots and other generative AI applications for enterprises.
Enterprises and telcos can either subscribe to the managed NVIDIA Elastic NIM service or deploy and manage NIM themselves. Aerial RAN Computer-1 can host NVIDIA AI Enterprise and NIM-based AI and generative AI workloads.
NVIDIA Cloud Functions
NVIDIA Cloud Functions
offers a serverless platform for GPU-accelerated AI workloads, ensuring security, scalability, and reliability. It supports various communication protocols:
HTTP polling
Streaming
gRPC
Cloud Functions is primarily suited for shorter running, preemptable workloads, such as inferencing and fine-tuning. This is well-suited for the Aerial RAN Computer-1 platform as the RAN workload resource utilization varies over time of the day.
The AI workloads that are ephemeral and preemptable can usually fill up those under-used hours of the day, which maintains high utilization of the Aerial RAN Computer-1 platform.
Deployment options and performance
Aerial RAN Computer-1 has multiple deployment options that include all points in the radio access network:
Radio base station cell site
Point of presence locations
Mobile switching offices
Baseband hotels
For private 5G, it can be located on the enterprise premises.
Aerial RAN Computer-1 can support various configurations and locations, including private, public, or hybrid cloud environments while using the same software regardless of location or interface standard. This ability offers unprecedented flexibility compared to traditional single-purpose RAN computers.
The solution also supports a wide range of network technologies:
Open Radio Access Network (Open-RAN) architectures
AI-RAN
3GPP standards
Other industry-leading specifications
Aerial RAN Computer-1, based on GB200, delivers continued performance improvements in RAN processing, AI processing, and energy efficiency compared to the earlier NVIDIA H100 and NVIDIA H200 GPUs (Figure 4).
The GB200 NVL2 platform provides a single MGX server for existing infrastructure, which is easy to deploy and scale out. You get mainstream LLM inference and data processing with high-end RAN compute.
Figure 4. GB200 NVL2 performance compared to previous generations
Conclusion
AI-RAN will revolutionize the telecom industry, enabling telcos to unlock new revenue streams and deliver enhanced experiences through generative AI, robotics, and autonomous technologies. The NVIDIA AI Aerial platform implements AI-RAN, aligning it with NVIDIAâs broader vision to make wireless networks AI-native.
With Aerial RAN Computer-1, telcos can deploy AI-RAN on a common infrastructure today. You can maximize the utilization by running RAN and AI workloads concurrently and improve RAN performance with AI algorithms.
Most importantly, with this common computer, you can tap into a completely new opportunity to become the AI fabric of choice for enterprises that need local computing and data sovereignty for their AI workloads. You can start with an AI-first approach and RAN next, with a software upgrade, starting the clock on maximizing ROI from day one.
T-Mobile and SoftBank have already announced their plans to commercialize AI-RAN together with leading RAN software providers, using hardware and software components of NVIDIA AI Aerial.
At Mobile World Congress, Americas, Vapor IO and the City of Las Vegas announced the
worldâs first private 5G AI-RAN deployment
using NVIDIA AI Aerial.
We are at a turning point in transforming wireless networks for AI, with AI. Join us at the
NVIDIA AI Summit
in Washington, D.C. and at the
NVIDIA 6G Developer Day
to learn more about NVIDIA Aerial AI and NVIDIA Aerial RAN Computer-1. | https://developer.nvidia.com/ja-jp/blog/bringing-ai-ran-to-a-telco-near-you/ | 通信会社に AI-RAN を提供 |
çæ AI ãš AI ãšãŒãžã§ã³ãã®æšè«ã«ããããšããžããã»ã³ãã©ã« ã¯ã©ãŠããŸã§ AI ã³ã³ãã¥ãŒãã£ã³ã° ã€ã³ãã©ã¹ãã©ã¯ãã£ãåæ£ããå¿
èŠæ§ãé«ãŸããŸãã
IDC ã¯
、「ビジネス AI (消費者を除く) は、2030 年までに世界経済に 19.9 兆ドルの貢献をし、GDP の 3.5% を占めるようになる」と予測しています。
5G ãããã¯ãŒã¯ãããã®æ°ãã AI ãã©ãã£ãã¯ã«å¯Ÿå¿ããããã«é²åããªããã°ãªããŸããã åæã«ãéä¿¡äºæ¥è
ã«ã¯ãããŒã¿ã®ãã©ã€ãã·ãŒãšäž»æš©ã®èŠä»¶ãæºãããªããããããã¯ãŒã¯æ¥ç¶ã«äŸåããã«ãšã³ã¿ãŒãã©ã€ãº AI ã¯ãŒã¯ããŒãããã¹ãããããã®ããŒã«ã« AI ã³ã³ãã¥ãŒãã£ã³ã° ã€ã³ãã©ã¹ãã©ã¯ãã£ã«ãªãæ©äŒããããŸãã ããã§ãç¡ç·ä¿¡å·åŠçãš AI ã¯ãŒã¯ããŒãã®äž¡æ¹ãé«éåããæ©èœãåããé«éã³ã³ãã¥ãŒãã£ã³ã° ã€ã³ãã©ã¹ãã©ã¯ãã£ã掻èºããŸãã ãããŠæãéèŠãªã®ã¯ãåãã³ã³ãã¥ãŒãã£ã³ã° ã€ã³ãã©ã¹ãã©ã¯ãã£ã䜿çšã㊠AI ãµãŒãã¹ãšç¡ç·ã¢ã¯ã»ã¹ ãããã¯ãŒã¯ (RAN) ãµãŒãã¹ã®åŠçãå¯èœã§ããããšã§ãã ãã®çµã¿åããã¯ã
éä¿¡æ¥çã§ã¯ AI-RAN
ãšåŒã°ããŠããŸãã
NVIDIA ã¯ãäžçåã® AI-RAN å±éãã©ãããã©ãŒã ã§ãã Aerial RAN Computer-1 ãå°å
¥ããå
±éã®ã¢ã¯ã»ã©ã¬ãŒããã ã€ã³ãã©ã¹ãã©ã¯ãã£ã§ AI ãš RAN ã®ã¯ãŒã¯ããŒããåæã«ãµãŒãã¹ã§ããŸãã
ãŸãã
T-Mobile ã«ãã AI-RANã€ãããŒã·ã§ã³ ã»ã³ã¿ãŒ
ã®ç«ã¡äžãã«ç¶ããAerial RAN Computer-1 ã¯ãéä¿¡äŒç€Ÿãäžçäžã§æ¡çšã§ãããã©ãããã©ãŒã 㧠AI-RAN ãçŸå®ã®ãã®ã«ããŸãã å°èŠæš¡ãäžèŠæš¡ã倧èŠæš¡ã®æ§æã§äœ¿çšã§ããã»ã«ãµã€ããåæ£åãŸãã¯éäžåã®ãµã€ãã§å±éããŠããããã¯ãŒã¯ãé³å£°ããããªãããŒã¿ãAIãã©ãã£ãã¯ã«å¯Ÿå¿ããå€ç®çã€ã³ãã©ã¹ãã©ã¯ãã£ã«å¹æçã«å€ããããšãã§ããŸãã
ããã¯ãAI ã掻çšãã AI ã®ããã®ã¯ã€ã€ã¬ã¹ ãããã¯ãŒã¯ã®æŠå¿µãåæ§ç¯ããå€é©çãªãœãªã¥ãŒã·ã§ã³ã§ãã ãŸããéä¿¡äºæ¥è
ã«ãšã£ãŠã¯ãåæ£ãããã¯ãŒã¯ ã€ã³ãã©ã¹ãã©ã¯ãã£ãäœé
延ãä¿èšŒããããµãŒãã¹å質ã倧èŠæš¡ãªã¹ã±ãŒã«ãããŒã¿ã®ãã©ã€ãã·ãŒãã»ãã¥ãªãã£ãããŒã«ãªãŒãŒã·ã§ã³ã®ç¶æèœåãªã©ãAI æšè«ãšãšãŒãžã§ã³ã AI ã¢ããªã±ãŒã·ã§ã³ã®éèŠãªèŠä»¶ã掻çšããŠãAI ãã©ã€ãã€ãŒã«ã掻æ§åãã倧ããªãã£ã³ã¹ã§ããããŸãã
AI-RANãAI Aerialãããã³ Aerial RAN Computer-1
AI-RAN ã¯ãAI ãã€ãã£ããªå€ç®çãããã¯ãŒã¯ãæ§ç¯ããããã®ãã¯ãããž ãã¬ãŒã ã¯ãŒã¯ã§ãã éä¿¡äŒç€Ÿã AI-RAN ãæ¡çšããåŸæ¥ã®åäž ASIC ããŒã¹ã® RAN ã³ã³ãã¥ãŒãã£ã³ã° ãããã¯ãŒã¯ãããRAN ãš AI ã®äž¡æ¹ã«å¯Ÿå¿ããæ°ããå€ç®çã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã° ããŒã¹ã®ãããã¯ãŒã¯ã«ç§»è¡ããã«ã€ããŠãéä¿¡äŒç€Ÿã¯æ°ãã AI ãšã³ãããŒã«åå ããAI ã掻çšããŠãããã¯ãŒã¯ã®å¹çãåäžã§ããŸãã
NVIDIA AI Aerial
ã«ã¯ãAI-RAN ããŒã¹ã® 5G ããã³ 6G ã¯ã€ã€ã¬ã¹ ãããã¯ãŒã¯ãèšèšãã·ãã¥ã¬ãŒã·ã§ã³ããã¬ãŒãã³ã°ãå±éãã 3 ã€ã®ã³ã³ãã¥ãŒã¿ãŒ ã·ã¹ãã ãå«ãŸããŠããŸãã Aerial RAN Computer-1 ã¯ãNVIDIA AI Aerial ã®åºç€ã§ãããAI-RAN ã®åæ¥ã°ã¬ãŒãã®ãã©ãããã©ãŒã ãæäŸããŸãã
Aerial RAN Computer-1 (å³ 1) ã¯ããœãããŠã§ã¢ ããã¡ã€ã³ã 5GãNVIDIA ãŸãã¯ãã®ä»ã® RAN ãœãããŠã§ã¢ ãããã€ããŒã®ãã©ã€ããŒã 5G RANãã³ã³ãããŒåããããããã¯ãŒã¯æ©èœãNVIDIA ãŸãã¯ããŒãããŒã®AI ãã€ã¯ããµãŒãã¹ãå
éšããã³ãµãŒãããŒãã£ã®çæ AI ã¢ããªã±ãŒã·ã§ã³ããã¹ããããªã©ãRAN ãš AI ã¯ãŒã¯ããŒããå®è¡ããããã®å
±éã®æ¡åŒµå¯èœãªããŒããŠã§ã¢åºç€ãæäŸããŸãã Aerial RAN Computer-1 ã¯ã¢ãžã¥ãŒã«åŒã®èšèšã§ãããD-RAN ãã C-RAN ã¢ãŒããã¯ãã£ãŸã§æ¡åŒµã§ããéå€ããå¯éããéœåžéšãŸã§å¹
åºããŠãŒã¹ ã±ãŒã¹ã«å¯Ÿå¿ããŸãã
NVIDIA CUDA-X ã©ã€ãã©ãªã¯ãã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã°ã®äžå¿ã§ãããå¹çã®åäžã«å ããé床ã粟床ãä¿¡é Œæ§ãæäŸããŸãã ã€ãŸããåããã¯ãŒ ãšã³ãããŒãã§ããå€ãã®äœæ¥ãè¡ããããšããããšã§ãã æãéèŠãªã®ã¯ãéä¿¡éä¿¡äºæ¥åºæã®é©å¿ãå«ããã¡ã€ã³åºæã®ã©ã€ãã©ãªããAerial RAN Computer-1 ãéä¿¡äºæ¥ã®å±éã«é©ãããã®ã«ããäžã§éèŠã§ãã
NVIDIA DOCA
ã¯ãRDMAãPTP/ã¿ã€ãã³ã°åæãã€ãŒãµãããããŒã¹ã®ããã³ãããŒã« (eCPRI) ãªã©ã®éä¿¡ã¯ãŒã¯ããŒãããææ°ã®ãããã¯ãŒã¯ ã€ã³ãã©ã¹ãã©ã¯ãã£ã«äžå¯æ¬ 㪠AI ã¯ãŒã¯ããŒãã®ããã©ãŒãã³ã¹ã倧å¹
ã«åäžã§ããããŒã«ãšã©ã€ãã©ãªã®ã¹ã€ãŒããæäŸããŸãã
ç·åçã«ããã«ã¹ã¿ãã¯ã«ãããã¹ã±ãŒã©ãã«ãªããŒããŠã§ã¢ãå
±éãœãããŠã§ã¢ããªãŒãã³ ã¢ãŒããã¯ãã£ãå®çŸãããšã³ã·ã¹ãã ããŒãããŒãšååããŠé«æ§èœãª AI-RAN ãæäŸã§ããŸãã
å³ 1. NVIDIA AI Aerial ãã©ãããã©ãŒã ã®äžéšãšããŠã® NVIDIA Aerial RAN Computer-1
Aerial RAN Computer-1 ã®å©ç¹
Aerial RAN Computer-1 ã«ãããã¯ã€ã€ã¬ã¹ ãããã¯ãŒã¯ã¯ãAI ãš RAN ããŒã¿ ã»ã³ã¿ãŒã®å€§èŠæš¡ãªåæ£ã°ãªããã«å€èº«ãããœãããŠã§ã¢ã®ã¢ããã°ã¬ãŒãã«ãã 6G ã®éãéãã€ãéä¿¡äŒç€Ÿã«ãšã£ãŠæ°ããåçåã®éãéããŸãã
éä¿¡ãµãŒãã¹ ãããã€ããŒåã Aerial RAN Computer-1 ã®å©ç¹ã«ã¯ã次ã®ãã®ããããŸãã
AI ããã³çæ AI ã¢ããªã±ãŒã·ã§ã³ããšããžã§ã® AI æšè«ããŸã㯠GPU-as-a-Service ã䜿çšããŠåçåããŸãã
çŸåšã¯éåžž 30% ããå©çšãããŠããªãåç®çåºå°å±ãšæ¯èŒããŠãã€ã³ãã©ã¹ãã©ã¯ãã£ã®äœ¿çšçã 2 ïœ 3 åã«åäžããŸãã åãã€ã³ãã©ã¹ãã©ã¯ãã£ã䜿çšããŠãå
éšçæ AI ã¯ãŒã¯ããŒãããUPF ã RIC ãªã©ã®ã³ã³ãããŒåããããããã¯ãŒã¯æ©èœããã¹ãã§ããŸãã
ãµã€ãåºæã® AI åŠç¿ãéããŠç¡ç·ãããã¯ãŒã¯ã®ããã©ãŒãã³ã¹ãåäžãããã¹ãã¯ãã«å¹çãæ倧 2 ååäžã§ããŸãã ããã¯ãååŸã¹ãã¯ãã©ã ã® Mhz ãããã®çŽæ¥ã³ã¹ããåæžããããšãæå³ããŸãã
ããããã€ã³ã¿ã©ã¯ã·ã§ã³ã« AI ãçµã¿èŸŒã次äžä»£ã¢ããªã±ãŒã·ã§ã³ã«ãé«æ§èœ RAN ãš AI äœéšãæäŸããŸãã Aerial RAN Computer-1 ã¯ãRAN ã®ã¿ã¢ãŒãã§æ倧 170 Gb/s ã®ã¹ã«ãŒããããšãAI ã®ã¿ã¢ãŒã㧠25K ããŒã¯ã³/ç§ããŸãã¯äž¡æ¹ã®çµã¿åãããŠãµãŒãã¹ãæäŸã§ããåŸæ¥ã®ãããã¯ãŒã¯ãšæ¯èŒããŠåªããããã©ãŒãã³ã¹ãçºæ®ã§ããŸãã
Aerial RAN Computer-1 ã®æ§æèŠçŽ
Aerial RAN Computer-1 ã®äž»èŠãªããŒããŠã§ã¢ ã³ã³ããŒãã³ãã«ã¯ã以äžãå«ãŸããŸãã
NVIDIA GB200 NVL2
NVIDIA Blackwell GPU
NVIDIA Grace CPU
NVLink2 C2C
第 5 äžä»£ NVIDIA NVLink
Key-value caching
MGX ãªãã¡ã¬ã³ã¹ ã¢ãŒããã¯ãã£
ãªã¢ã«ã¿ã€ã äž»æµ LLM æšè«
NVIDIA GB200 NVL2
Aerial RAN Computer-1 ã§äœ¿çšããã NVIDIA GB200 NVL2 ãã©ãããã©ãŒã (å³ 2) ã¯ãããŒã¿ ã»ã³ã¿ãŒãšãšããž ã³ã³ãã¥ãŒãã£ã³ã°ã«é©åœããããããäž»æµã®å€§èŠæš¡èšèªã¢ãã« (LLM)ãvRANããã¯ã¿ãŒ ããŒã¿ããŒã¹æ€çŽ¢ãããŒã¿åŠçã§æ¯é¡ã®ãªãããã©ãŒãã³ã¹ãæäŸããŸãã
2 ã€ã® NVIDIA Blackwell GPU ãšã® NVIDIA Grace CPU ãæèŒãããã®ã¹ã±ãŒã«ã¢ãŠãã®ã·ã³ã°ã«ããŒã ã¢ãŒããã¯ãã£ã¯ãã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã°ãæ¢åã®ã€ã³ãã©ã¹ãã©ã¯ãã£ã«ã·ãŒã ã¬ã¹ã«çµ±åããŸãã
ãã®æ±çšæ§ã«ãããåºç¯å²ã®ã·ã¹ãã èšèšãšãããã¯ãŒãã³ã° ãªãã·ã§ã³ãå¯èœã«ãªããGB200 NVL2 ãã©ãããã©ãŒã ã¯ãAI ã®åã掻çšãããããŒã¿ ã»ã³ã¿ãŒããšããžãã»ã«ãµã€ãã®å Žæã«çæ³çãªéžæè¢ãšãªããŸãã
ããšãã°ãGB200 ãµãŒããŒã®ååã RAN ã¿ã¹ã¯ã«å²ãåœãŠãããæ®ãã®ååã¯åäžã»ã«ãµã€ãã§
ãã«ãã€ã³ã¹ã¿ã³ã¹ GPU (MIG)
ãã¯ãããžãéã㊠AI åŠçã«å²ãåœãŠãããšãã§ããŸãã éçŽãµã€ãã§ã¯ãå®å
šãª GB200 ãµãŒããŒã RAN ã«å°çšãããã 1 ã€ã AI å°çšã«äœ¿çšã§ããŸãã éäžåã®å±éã§ã¯ãGB200ãµãŒããŒã®ã¯ã©ã¹ã¿ãŒãRANãšAIã¯ãŒã¯ããŒãéã§å
±æã§ããŸãã
NVIDIA Blackwell GPU
NVIDIA Blackwell ã¯ãããã©ãŒãã³ã¹ãå¹çãæ¡åŒµæ§ã®åäžãå®çŸããé©åœçãªã¢ãŒããã¯ãã£ã§ãã NVIDIA Blackwell GPU 㯠208Bãã©ã³ãžã¹ã¿ãããã¯ããã«ã¹ã¿ã æ§ç¯ãããTSMC 4NP ããã»ã¹ã䜿çšããŠè£œé ãããŠããŸãã ãã¹ãŠã® NVIDIA Blackwell 補åã«ã¯ãçµ±äžãããåäž GPU 㧠10 TB/s ã®ãããéçžäºæ¥ç¶ã«ããæ¥ç¶ããã 2 ã€ã®ã¬ãã¯ã«å¶éä»ããã€ãç¹åŸŽã§ãã
NVIDIA Grace CPU
NVIDIA Grace CPU ã¯ãAIãvRANãã¯ã©ãŠãããã€ããã©ãŒãã³ã¹ ã³ã³ãã¥ãŒãã£ã³ã° (HPC) ã¢ããªã±ãŒã·ã§ã³ãå®è¡ããææ°ã®ããŒã¿ ã»ã³ã¿ãŒåãã«èšèšãããç»æçãªããã»ããµã§ãã åªããããã©ãŒãã³ã¹ãšã¡ã¢ãªåž¯åå¹
ãæäŸããçŸåšã®äž»èŠãªãµãŒã㌠ããã»ããµã® 2 åã®ãšãã«ã®ãŒå¹çã§ãåªããããã©ãŒãã³ã¹ãšã¡ã¢ãªåž¯åå¹
ãæäŸããŸãã
NVLink2 C2C
GB200 NVL2 ãã©ãããã©ãŒã ã¯ãNVLink-C2C ã䜿çšããŠãå NVIDIA Grace CPU ãš NVIDIA Blackwell GPU ã®é㧠900 GB/s ã®çžäºæ¥ç¶ãå®çŸããŸãã 第5äžä»£ NVLink ãšçµã¿åããããšã1.4 TB ã®å·šå€§ãªã³ããŒã¬ã³ã ã¡ã¢ãª ã¢ãã«ãæäŸãããAI ãš vRAN ã®ããã©ãŒãã³ã¹ãå éããŸãã
第 5 äžä»£ NVIDIA NVLink
ãšã¯ãµã¹ã±ãŒã« ã³ã³ãã¥ãŒãã£ã³ã°ãš 1 å
ãã©ã¡ãŒã¿ãŒã® AI ã¢ãã«ã®ãã¯ãŒãæ倧éã«æŽ»çšããã«ã¯ããµãŒã㌠ã¯ã©ã¹ã¿ãŒå
ã®ãã¹ãŠã® GPU ãã·ãŒã ã¬ã¹ãã€è¿
éã«éä¿¡ããå¿
èŠããããŸãã
第5äžä»£ NVLink ã¯ãGB200 NVL2 ãã©ãããã©ãŒã ããé«éãªããã©ãŒãã³ã¹ãå®çŸããé«æ§èœçžäºæ¥ç¶ã§ãã
Key-value caching
ããŒå€ (KV) ãã£ãã·ã¥ã¯ã
äŒè©±ã®ã³ã³ããã¹ããšå±¥æŽãä¿åããããšã§ãLLM ã®å¿çé床ãåäžããŸãã
GB200 NVL2 ã¯ãNVLink-C2C ã§æ¥ç¶ãããå®å
šã«ã³ããŒã¬ã³ã㪠NVIDIA Grace GPU ãš NVIDIA Blackwell GPU ã¡ã¢ãªãéã㊠KV ãã£ãã·ã¥ãæé©åããPCIe ããã 7 åé«éã§ãã ããã«ãããLLM ã¯ãx86 ããŒã¹ã® GPU å®è£
ãããéãåèªãäºæž¬ã§ããŸãã
MGX ãªãã¡ã¬ã³ã¹ ã¢ãŒããã¯ãã£
MGX GB200 NVL2 ã¯ãCPU C-Link ãš GPU NVLink ãæ¥ç¶ããã 2:2 æ§æã§ãã
HPM ã«ã¯ã次ã®ã³ã³ããŒãã³ããå«ãŸããŠããŸãã
NVIDIA Grace CPU (2)
GPU ããã¯ãš I/O ã«ãŒãçšã®ã³ãã¯ã¿ãŒ
2U AC Server ã«æèŒããã GPU ã¢ãžã¥ãŒã« (2)
åãã©ã°å¯èœãª GPU ã¢ãžã¥ãŒã«ã«ã¯ãGPUãB2B æ¥ç¶ãNVLink ã³ãã¯ã¿ãŒãå«ãŸããŠããŸãã
å³ 2. NVIDIA GB200 NVL2 ãã©ãããã©ãŒã ã¬ã€ã¢ãŠã
GPU ã³ã³ãã¥ãŒãã£ã³ã°
40 PFLOPS FP4 | 20 PFLOPS FP8/FP6
10x GH200
GPU ã¡ã¢ãª
æ倧 384 GB
CPU
144 ã³ã¢ ARMv9ã
960 GB LPDDR5ã
2x SPR ãšæ¯èŒã㊠1.4 åã®ããã©ãŒãã³ã¹ãš 30% ã®é»ååæž
CPU ãã GPU
NVLink C2C
GPU ãããã®900 GB/s bi-dir.ãšãã£ãã·ã¥ã³ããŒã¬ã³ã
GPU ãã GPU
NVLink
1,800 GB/s bi-dir.ãNVLink
ã¹ã±ãŒã«ã¢ãŠã
Spectrum-X ã€ãŒãµããããInfiniBand Connect-XãBlueField
OS
2 CPU + 2 GPU ãã«ããŒããçµ±äžã¢ãã¬ã¹ç©ºéãæã€åäž OS
ã·ã¹ãã ãã¯ãŒ
ãã« ã·ã¹ãã ~3,500Wãæ§æå¯èœ
ã¹ã±ãžã¥ãŒã«
ãµã³ãã«: Q4 2024 ååæ
MP: 第1 Q1 2025
è¡š 1. GB200 NVL2 ãã©ãããã©ãŒã æ©èœ
ãªã¢ã«ã¿ã€ã äž»æµ LLM æšè«
GB200 NVL2 ãã©ãããã©ãŒã ã¯ã2 ã€ã® NVIDIA Grace CPU ãš 2 ã€ã® NVIDIA Blackwell GPU ã§å
±æãããæ倧 1.3 TB ã®å·šå€§ãªæŽåã¡ã¢ãªãå°å
¥ããŸãã ãã®å
±æã¡ã¢ãªã¯ã第 5 äžä»£ NVIDIA NVLink ãšé«éãããé (C2C) æ¥ç¶ãšçµã¿åãããŠãLlama3-70B ãªã©ã®äž»æµèšèªã¢ãã«ã§ 5 åé«éãªãªã¢ã«ã¿ã€ã LLM æšè«ããã©ãŒãã³ã¹ãå®çŸããŸãã
å
¥åã·ãŒã±ã³ã¹é· 256 ãåºåã·ãŒã±ã³ã¹é· 8000 ã®åºåã·ãŒã±ã³ã¹é·ãFP4 ã®ç²ŸåºŠã«ãããGB200 NVL2 ãã©ãããã©ãŒã ã¯æ倧 25K ããŒã¯ã³/ç§ïŒ2.16B ããŒã¯ã³/æ¥ïŒãçæã§ããŸãã
å³ 3 ã¯ãAI ãš RAN ã¯ãŒã¯ããŒãããµããŒãããå Žåã« GB200 NVL2 ã®ããã©ãŒãã³ã¹ã瀺ããŸãã
å³ 3. GB200 NVL2 ã«ããã RAN ãš AI ã®ã³ã³ãã¥ãŒãã£ã³ã°äœ¿çšç
GB200 NVL2 ãã©ãããã©ãŒã 㧠RAN ãš AI ã®ãã©ãããã©ãŒã ããã³ã¹ã®æ§åã¯ã次ã®ãšããã§ãã
100% 䜿çšçã®ã¯ãŒã¯ããŒã
RAN:
~36x 100 MHz 64T64R
*ããŒã¯ã³:
25K ããŒã¯ã³/ç§
AI:
~10ãã«/æ | ~9 äžãã«/幎
50:50 åå²äœ¿çšçã§ã®ã¯ãŒã¯ããŒã
RAN:
~18x 100 MHz 64T64R
*ããŒã¯ã³:
12.5K ããŒã¯ã³/ç§
AI:
~5ãã«/æéã | ~45,000 ãã«/幎
*ããŒã¯ã³ AI ã¯ãŒã¯ããŒã: Llama-3-70B FP4 | ã·ãŒã±ã³ã¹é·å
¥å 256 / åºå 8K
Aerial RAN Computer-1 ã®ãµããŒã ããŒããŠã§ã¢
NVIDIA BlueField-3 ãš NVIDIAãããã¯ãŒãã³ã° Spectrum-X ã¯ãAerial RAN Computer-1 ã®ãµããŒãããŒããŠã§ã¢ã§ãã
NVIDIA BlueField-3
NVIDIA BlueField-3 DPU
ã¯ãããã³ãããŒã« eCPRI ãã©ãã£ãã¯ã«å¿
èŠãªæ£ç¢ºãª 5G ã¿ã€ãã³ã°ã§ãªã¢ã«ã¿ã€ã ããŒã¿äŒéãå¯èœã«ããŸãã
NVIDIA ã¯ãå®å
šãª IEEE 1588v2 Precision Time Protocol (PTP) ãœãããŠã§ã¢ ãœãªã¥ãŒã·ã§ã³ãæäŸããŸãã NVIDIA PTP ãœãããŠã§ã¢ ãœãªã¥ãŒã·ã§ã³ã¯ãæãèŠæ±ã®å³ãã PTP ãããã¡ã€ã«ã«å¯Ÿå¿ããããã«èšèšãããŠããŸãã NVIDIA BlueField-3 ã«ã¯çµ±å PTP ããŒããŠã§ã¢ ã¯ãã㯠(PHC) ãçµã¿èŸŒãŸããŠãããæéããªã¬ãŒã®ã¹ã±ãžã¥ãŒãªã³ã°ãæéããŒã¹ã®ãœãããŠã§ã¢ ããã¡ã€ã³ã ãããã¯ãŒãã³ã° (SDN) å éãªã©ã®ã¿ã€ãã³ã°é¢é£æ©èœãæäŸããªãããããã€ã¹ã 20 ããç§æªæºã®ç²ŸåºŠãå®çŸã§ããŸãã
ãã®ãã¯ãããžã¯ããœãããŠã§ã¢ ã¢ããªã±ãŒã·ã§ã³ãããã³ãããŒã«ãRAN äºææ§ã®ããŒã¿ãé«åž¯åå¹
ã§éä¿¡ããããšãå¯èœã«ããŸãã
NVIDIA ãããã¯ãŒãã³ã° Spectrum-X
ãšããžããã³ããŒã¿ã»ã³ã¿ãŒ ãããã¯ãŒã¯ã¯ãAI ãšã¯ã€ã€ã¬ã¹ã®é²æ©ãšããã©ãŒãã³ã¹ãæšé²ããäžã§éèŠãªåœ¹å²ãæãããåæ£ AI ã¢ãã«æšè«ãçæ AIãäžçã¯ã©ã¹ã® vRAN ããã©ãŒãã³ã¹ã®ããã¯ããŒã³ãšããŠæ©èœããŸãã
NVIDIA BlueField-3 DPUã¯ãæ°çŸäžå°ã® NVIDIA Blackwell GPU ã§å¹ççãªæ¡åŒµæ§ãå®çŸããæé©ãªã¢ããªã±ãŒã·ã§ã³ ããã©ãŒãã³ã¹ãå®çŸããŸãã
NVIDIA Spectrum-X
ã€ãŒãµããã ãã©ãããã©ãŒã ã¯ãã€ãŒãµããã ããŒã¹ã® AI ã¯ã©ãŠãã®ããã©ãŒãã³ã¹ãšå¹çãåäžãããããã«ç¹å¥ã«èšèšãããŠããã5G ã¿ã€ãã³ã°åæã«å¿
èŠãªãã¹ãŠã®æ©èœãå«ãŸããŠããŸãã åŸæ¥ã®ã€ãŒãµããããšæ¯èŒã㊠1.6 åã® AI ãããã¯ãŒãã³ã° ããã©ãŒãã³ã¹ãå®çŸãããã«ãããã³ãç°å¢ã§äžè²«ããäºæž¬å¯èœãªããã©ãŒãã³ã¹ãå®çŸããŸãã
Aerial RAN Computer-1 ãã©ãã¯æ§æã§å±éãããå ŽåãSpectrum-X ã€ãŒãµããã ã¹ã€ãã㯠2 ã€ã®ç®çãæã€ãã¡ããªãã¯ãšããŠæ©èœããŸãã ã³ã³ãã¥ãŒãã£ã³ã° ãã¡ããªãã¯ã§ããã³ãããŒã«ãš AI (ã€ãŒã¹ããŠãšã¹ã) ãã©ãã£ãã¯ã®äž¡æ¹ãåŠçããã³ã³ããŒãž ãã¡ããªãã¯ã§ããã¯ããŒã«ãŸãã¯ãããããŒã«ãš AI (ããŒã¹ãµãŠã¹) ãã©ãã£ãã¯ãæ¬éããŸãã ãªã¢ãŒã ç¡ç·ãŠãããã¯ãeCPRI ãããã³ã«ã«æºæ ããŠã¹ã€ããã§çµäºããŸãã
Aerial RAN Computer-1 äžã®ãœãããŠã§ã¢ ã¹ã¿ãã¯
Aerial RAN Computer-1 ã®äž»èŠãªãœãããŠã§ã¢ ã¹ã¿ãã¯ã«ã¯ã以äžãå«ãŸããŸãã
NVIDIA Aerial CUDA-Accelerated RAN
NVIDIA AI Enterprise ãš NVIDIA NIM
NVIDIA Cloud Functions
NVIDIA Aerial CUDA-Accelerated RAN
NVIDIA Aerial CUDA-Accelerated RAN
ã¯ãAerial RAN Computer-1 äžã§å®è¡ããã 5G ãšãã©ã€ããŒã 5G åãã« NVIDIA ãæ§ç¯ããäž»èŠ RAN ãœãããŠã§ã¢ã§ãã
ããã«ã¯ãAI ã³ã³ããŒãã³ãã䜿çšããŠç°¡åã«å€æŽããã·ãŒã ã¬ã¹ã«æ¡åŒµã§ãããNVIDIA GPU ã¢ã¯ã»ã©ã¬ãŒã·ã§ã³ã®çžäºéçšå¯èœãª PHY ããã³ MAC ã¬ã€ã€ãŒ ã©ã€ãã©ãªãå«ãŸããŠããŸãã ãããã®åŒ·åããã RAN ãœãããŠã§ã¢ ã©ã€ãã©ãªã¯ãä»ã®ãœãããŠã§ã¢ ãããã€ããŒãéä¿¡äŒç€Ÿãã¯ã©ãŠã ãµãŒãã¹ ãããã€ã㌠(CSP)ãã«ã¹ã¿ã ã®åçšã°ã¬ãŒãã®ãœãããŠã§ã¢ ããã¡ã€ã³ã 5G ããã³å°æ¥ã® 6G ç¡ç·ã¢ã¯ã»ã¹ ãããã¯ãŒã¯ (RAN) ãæ§ç¯ããããã«äŒæ¥ã§ã䜿çšã§ããŸãã
Aerial CUDA-Accelerated RAN は、
NVIDIA Aerial AI Radio Frameworks
ãšçµ±åããããã¬ãŒã ã¯ãŒã¯ ããŒã«ã§ãã pyAerialãNVIDIA Aerial Data Lakeã
NVIDIA Sionna
ã䜿çšã㊠RAN ã®ãã¬ãŒãã³ã°ãšæšè«ãå¯èœã« AI 匷åããã±ãŒãžãæäŸããŸãã
ãŸããã¯ã€ã€ã¬ã¹ ã·ã¹ãã ã®ç©ççã«æ£ç¢ºãªã·ãã¥ã¬ãŒã·ã§ã³ãå¯èœã«ããã·ã¹ãã ã¬ãã«ã®ãããã¯ãŒã¯ ããžã¿ã« ãã€ã³éçºãã©ãããã©ãŒã ã§ãã
NVIDIA Aerial Omniverse Digital Twin
ãè£å®ãããŠããŸãã
NVIDIA AI Enterprise ãš NVIDIA NIM
NVIDIA AI Enterprise
ã¯ããšã³ã¿ãŒãã©ã€ãºçæ AI ã®ããã®ãœãããŠã§ã¢ ãã©ãããã©ãŒã ã§ãã
NVIDIA NIM
ã¯ãçæ AI ã¢ããªã±ãŒã·ã§ã³ã®ããã®åºç€ã¢ãã«ã®å±éãç°¡çŽ åãããã€ã¯ããµãŒãã¹ã®ã³ã¬ã¯ã·ã§ã³ã§ãã
å
šäœçã«ããããã®è£œåã¯ãããŒã¿ ãµã€ãšã³ã¹ã®ãã€ãã©ã€ã³ãé«éåããäŒæ¥åãã®æ¬çªç°å¢ã°ã¬ãŒãã®ã³ ãã€ãããããã®ä»ã®çæ AI ã¢ããªã±ãŒã·ã§ã³ã®éçºãšãããã€ãåçåãã䜿ãããããã€ã¯ããµãŒãã¹ãšãã«ãŒããªã³ããæäŸããŸãã
äŒæ¥ãšéä¿¡äŒç€Ÿã¯ããããŒãžã NVIDIA Elastic NIM ãµãŒãã¹ã«ç»é²ããããNIM ãèªãå±éã管çã§ããŸãã Aerial RAN Computer-1 ã¯ãNVIDIA AI EnterpriseãNIM ããŒã¹ã® AI ããã³çæ AI ã¯ãŒã¯ããŒãããã¹ãã§ããŸãã
NVIDIA Cloud Functions
NVIDIA Cloud Functions
ã¯ãGPU ã¢ã¯ã»ã©ã¬ãŒããã AI ã¯ãŒã¯ããŒãã®ããã®ãµãŒããŒã¬ã¹ ãã©ãããã©ãŒã ãæäŸããã»ãã¥ãªãã£ãæ¡åŒµæ§ãä¿¡é Œæ§ã確ä¿ããŸãã. ããŸããŸãªéä¿¡ãããã³ã«ããµããŒãããŸãã
HTTP polling
ã¹ããªãŒãã³ã°
gRPC
Cloud Functions ã¯ãäž»ã«æšè«ããã¡ã€ã³ãã¥ãŒãã³ã°ãªã©ãå®è¡æéãçããäºååŠçå¯èœãªã¯ãŒã¯ããŒãã«é©ããŠããŸãã RAN ã¯ãŒã¯ããŒã ãªãœãŒã¹äœ¿çšçã 1 æ¥ã®æéãšãšãã«å€åãããããAerial RAN Computer-1 ãã©ãããã©ãŒã ã«æé©ã§ãã
äžæçã«å
å å¯èœãª AI ã¯ãŒã¯ããŒãã¯ãéåžžã1 æ¥ã®äœ¿çšãããŠããªãæéãåããããšãã§ããAerial RAN Computer-1 ãã©ãããã©ãŒã ã®é«äœ¿çšçãç¶æããŸãã
å±éã®ãªãã·ã§ã³ãšããã©ãŒãã³ã¹
Aerial RAN Computer-1 ã«ã¯ãç¡ç·ã¢ã¯ã»ã¹ ãããã¯ãŒã¯ã®ãã¹ãŠã®ãã€ã³ããå«ãè€æ°ã®å±éã®ãªãã·ã§ã³ããããŸãã
ç¡ç·åºå°å±ã»ã«ãµã€ã
æ ç¹ã®å Žæ
移ååŒã¹ã€ããã³ã° ãªãã£ã¹
ããŒã¹ãã³ã ããã«
ãã©ã€ããŒã 5G ã®å Žåã¯ãäŒæ¥ã®æ·å°å
ã«èšçœ®ã§ããŸãã
Aerial RAN Computer-1 ã¯ãå Žæãã€ã³ã¿ãŒãã§ã€ã¹æšæºã«é¢ä¿ãªãåããœãããŠã§ã¢ã䜿çšããªããããã©ã€ããŒãããããªãã¯ããã€ããªãã ã¯ã©ãŠãç°å¢ãªã©ãããŸããŸãªæ§æãšå ŽæããµããŒãã§ããŸãã ãã®æ©èœã¯ãåŸæ¥ã®åäžç®ç RAN ã³ã³ãã¥ãŒã¿ãŒãšæ¯èŒããŠåäŸã®ãªãæè»æ§ãæäŸããŸãã
ãã®ãœãªã¥ãŒã·ã§ã³ã¯ãåºç¯å²ã®ãããã¯ãŒã¯ ãã¯ãããžããµããŒãããŠããŸãã
ãªãŒãã³ç¡ç·ã¢ã¯ã»ã¹ ãããã¯ãŒã¯ (Open-RAN) ã¢ãŒããã¯ãã£
AI-RAN
3GPP æšæº
ãã®ä»ã®æ¥çããªãŒãããä»æ§
GB200 ãããŒã¹ãšãã Aerial RAN Computer-1 ã¯ã以åã® NVIDIA H100 ãš NVIDIA H200 GPU ãšæ¯èŒããŠãRAN åŠçãAI åŠçããšãã«ã®ãŒå¹çã®ããã©ãŒãã³ã¹ãç¶ç¶çã«åäžããŸã (å³ 4)ã
GB200 NVL2 ãã©ãããã©ãŒã ã¯ãæ¢åã®ã€ã³ãã©ã¹ãã©ã¯ãã£ã« 1 ã€ã® MGX ãµãŒããŒãæäŸããå±éãšæ¡åŒµã容æã§ãã ãã€ãšã³ã RAN ã³ã³ãã¥ãŒãã£ã³ã°ã§äž»æµã® LLM æšè«ãšããŒã¿åŠçãå®çŸããŸãã
å³ 4. GB200 NVL2 ããã©ãŒãã³ã¹ãšåŸæ¥äžä»£æ¯èŒ
ãŸãšã
AI-RAN ã¯ãéä¿¡æ¥çã«é©åœããããããéä¿¡äŒç€Ÿãæ°ããåçæºãç²åŸããçæ AIããããã£ã¯ã¹ãèªåŸãã¯ãããžãéããŠåŒ·åãããäœéšãæäŸããããšãå¯èœã«ããŸãã NVIDIA AI Aerial ãã©ãããã©ãŒã ã¯ãAI-RAN ãå®è£
ããã¯ã€ã€ã¬ã¹ ãããã¯ãŒã¯ã AI ãã€ãã£ãã«ããããšãšãã NVIDIA ã®åºç¯ãªããžã§ã³ãšäžèŽããŠããŸãã
Aerial RAN Computer-1 ã«ãããéä¿¡äŒç€Ÿã¯ä»æ¥ãå
±éã®ã€ã³ãã©ã¹ãã©ã¯ãã£ã« AI-RAN ãå±éã§ããŸãã RAN ãš AI ã¯ãŒã¯ããŒããåæã«å®è¡ããããšã§äœ¿çšçãæ倧åããAI ã¢ã«ãŽãªãºã 㧠RAN ã®ããã©ãŒãã³ã¹ãåäžã§ããŸãã
æãéèŠãªã®ã¯ããã®å
±éã³ã³ãã¥ãŒã¿ãŒã䜿çšãããšãAI ã¯ãŒã¯ããŒãã«ããŒã«ã« ã³ã³ãã¥ãŒãã£ã³ã°ãšããŒã¿äž»æš©ãå¿
èŠãšããäŒæ¥ã«ãšã£ãŠæé©ãª AI ãã¡ããªãã¯ã«ãªããšãããŸã£ããæ°ããæ©äŒã掻çšã§ããããšã§ãã AI ãã¡ãŒã¹ãã®ã¢ãããŒãããå§ããŠã次ã«ãœãããŠã§ã¢ ã¢ããã°ã¬ãŒã㧠RAN ãå®æœããã°ãåæ¥ãã ROI ãæ倧åããããã®åãçµã¿ãéå§ã§ããŸãã
T-Mobile ãšãœãããã³ã¯ã¯ãNVIDIA AI Aerial ã®ããŒããŠã§ã¢ãšãœãããŠã§ã¢ ã³ã³ããŒãã³ãã䜿çšããŠãäž»èŠãª RAN ãœãããŠã§ã¢ ãããã€ããŒãš AI-RAN ãåæ¥åããèšç»ãçºè¡šããŸããã
Mobile World Congress ã§ãã¢ã¡ãªã«ãVapor IO ãšã©ã¹ãã¬ã¹åžã¯ãNVIDIA AI Aerial ã䜿çšãã
äžçåã®ãã©ã€ããŒã 5GAI-RAN ãããã€
ãçºè¡šããŸããã
ç§ãã¡ã¯ãAI ã«ãã AI ã®ããã®ã¯ã€ã€ã¬ã¹ ãããã¯ãŒã¯ã®å€é©ã®è»¢æç¹ã«ããŸãã ã¯ã·ã³ãã³D.C. ã§éå¬ããã
NVIDIA AI Summit
ãš
NVIDIA 6G Developer Day
ã«ãã²åå ããŠãNVIDIA Aerial AI ãš NVIDIA Aerial RAN Computer-1 ã«ã€ããŠè©³ããåŠãã§ãã ããã
関連情報
GTC ã»ãã·ã§ã³:
éä¿¡äŒç€Ÿãåœå®¶ AI ã€ã³ãã©ã¹ãã©ã¯ãã£ãšãã©ãããã©ãŒã ãã©ã®ããã«å®çŸããã
GTC ã»ãã·ã§ã³:
AI-RAN ãš 6G ç 究ã®æ°äž»å
GTC ã»ãã·ã§ã³:
çŸä»£ã®éä¿¡äŒç€Ÿ Blueprint: AI ã䜿çšããŠå€é©ãšåçºæ
SDK:
Aerial Omniverse ããžã¿ã« ãã€ã³
ãŠã§ãããŒ:
How Telcos Transform Customer Experiences with Conversational AI
ãŠã§ãããŒ:
å€èšèªé³å£° AI ã«ã¹ã¿ãã€ãºããããšãŒãžã§ã³ã ã¢ã·ã¹ãã§éä¿¡äŒç€Ÿ ã³ã³ã¿ã¯ã ã»ã³ã¿ãŒ ãšãŒãžã§ã³ãã®åŒ·å |
https://developer.nvidia.com/blog/accelerate-large-linear-programming-problems-with-nvidia-cuopt/ | Accelerate Large Linear Programming Problems with NVIDIA cuOpt | The evolution of linear programming (LP) solvers has been marked by significant milestones over the past century, from
Simplex
to the
interior point method (IPM)
. The introduction of
primal-dual linear programming (PDLP)
has brought another significant advancement.
NVIDIA cuOpt
has now implemented PDLP with GPU acceleration. Using cutting-edge algorithms, NVIDIA hardware, dedicated CUDA features, and NVIDIA GPU libraries, the cuOpt LP solver achieves over 5,000x faster performance compared to CPU-based solvers.
This post examines the key components of LP solver algorithms, GPU acceleration in LP, and cuOpt performance on
Mittelmann's benchmark
and
Min Cost Flow problem
instances.
Harnessing cutting-edge innovations for large-scale LP
LP is a method that involves optimizing a linear objective function, subject to a set of linear constraints.
Consider this scenario: A farmer must decide which vegetables to grow and in what quantities to maximize profit, given limitations on land, seeds, and fertilizer. The goal is to determine the optimal revenue while respecting all constraints, as quickly as possible.
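To make the formulation concrete, here is a toy version of that scenario solved with SciPy's linprog. The crops, profits, and resource limits are made-up numbers for illustration only and have nothing to do with cuOpt or the benchmarks discussed later.

```python
# Toy LP for the farmer scenario: maximize profit from two crops subject to
# land, seed, and fertilizer limits. All numbers are illustrative.
from scipy.optimize import linprog

# Decision variables: x[0] = hectares of potatoes, x[1] = hectares of carrots.
profit = [-1000.0, -1200.0]        # linprog minimizes, so negate profit per hectare

A_ub = [[1.0, 1.0],                # land used per hectare
        [20.0, 35.0],              # kg of seed used per hectare
        [60.0, 40.0]]              # kg of fertilizer used per hectare
b_ub = [50.0, 1200.0, 2400.0]      # available land, seed, and fertilizer

res = linprog(c=profit, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print("hectares per crop:", res.x)
print("maximum profit:", -res.fun)
```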
NVIDIA developed an LLM agent
example that helps model the problem and solve it using an LP solver. LP is an essential tool for optimization and has applications in resource allocation, production planning, supply chain, and, as a backbone for
mixed-integer programming
(MIP) solvers. Solving mathematical problems with millions of variables and constraints in seconds is challenging, if not impossible, in some cases.
There are three requirements to solve LP problems efficiently on GPUs:
Efficient and massively parallel algorithms
NVIDIA GPU libraries and CUDA features
Cutting-edge NVIDIA GPUs
Efficient and massively parallel algorithms
Simplex
, introduced by Dantzig in 1947, remains a core component of most LP and MIP solvers. It works by following the edges of the feasible region to find the optimum.
Figure 1. Simplex method
(Source:
Visually Explained - What is Linear Programming (LP)?
)
The next major advancement came with the
interior point method (IPM)
, discovered by I. I. Dikin in 1967. IPM, which moves through the interior of the polytope towards the optimum, is now considered state-of-the-art for solving large-scale LPs on CPUs. However, both techniques face limitations in massive parallelization.
Figure 2. Interior Point Method
(Source:
Visually Explained - What is Linear Programming (LP)?
)
In 2021, a new groundbreaking technique to solve large LPs was introduced by the Google Research team:
PDLP
. It is a first-order method (FOM) that uses the derivative of the problem to iteratively optimize the objective and minimize constraint violation.
Figure 3. Gradient descent
PDLP enhances the
primal-dual hybrid gradient (PDHG)
algorithm by introducing tools to improve convergence, including a presolver, diagonal preconditioning, adaptive restarting, and dynamic primal-dual step size selection. Presolving and preconditioning make the input problem simpler and improve numerical stability, while restarting and dynamic step size computation enable the solver to adapt itself during optimization.
A key advantage of FOM over previous methods is its ease of massive parallelization, making it well-suited for GPU implementation.
PDLP employs two highly parallelizable computational patterns: Map operations and sparse matrix-vector multiplications (SpMV). This approach enables PDLP to efficiently handle millions of variables and constraints in parallel, making it extremely effective on GPUs.
Map is extensively used in PDLP to perform additions, subtractions, and so on for all the variables and constraints that can span millions of elements. It is extremely parallel and efficient on GPUs.
SpMV corresponds to multiplying a sparse matrix (containing many zeros) by a vector. While this matrix can hold tens of billions of entries, only a small fraction of them are useful values. For instance, in a vegetable planting problem, a constraint such as "I can't plant more than 3.5 kg of potatoes" would involve only one useful value among millions of variables.
SpMV algorithms have been extensively optimized for GPUs, making them orders of magnitude faster than CPU implementations.
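To show how a first-order method leans on exactly these two patterns, the sketch below runs a textbook PDHG iteration for an equality-constrained LP (minimize c^T x subject to Ax = b, x >= 0), with a SciPy sparse matrix supplying the SpMV steps and element-wise NumPy updates playing the role of the map operations. It is a bare-bones teaching example, not the cuOpt implementation: the presolver, preconditioning, restarts, and adaptive step sizes described above are all omitted and the step sizes are fixed.

```python
# Minimal PDHG-style iteration for:  minimize c^T x  subject to  A x = b, x >= 0.
# Each iteration is dominated by two SpMVs (A @ ... and A.T @ ...) plus
# element-wise map updates, the two patterns discussed above.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
m, n = 200, 500
A = sp.random(m, n, density=0.01, format="csr", random_state=0)
b = A @ rng.random(n)                        # built from a feasible point
c = rng.random(n)

x, y = np.zeros(n), np.zeros(m)
tau = sigma = 0.9 / max(spla.norm(A), 1.0)   # crude fixed step sizes

for _ in range(5000):
    x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)   # primal: map + projection
    y = y + sigma * (b - A @ (2.0 * x_new - x))        # dual: one SpMV
    x = x_new

print("objective:", c @ x, " primal residual:", np.linalg.norm(A @ x - b))
```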
NVIDIA GPU libraries and CUDA features
To have the best performance, our GPU PDLP implementation uses cutting-edge CUDA features and the following NVIDIA libraries:
cuSparse
Thrust
RMM
cuSparse
is the NVIDIA GPU-accelerated library for sparse linear algebra. It efficiently performs SpMVs, a challenging task on GPUs. cuSparse employs unique algorithms designed to fully leverage the NVIDIA massively parallel architecture.
Thrust is part of the
NVIDIA CUDA Core Compute Libraries
(CCCL) and provides high-level C++ parallel algorithms. It simplifies the expression of complex algorithms using patterns and iterators for GPU execution. I used Thrust for map operations and the restart process, which entails sorting values by key. This is a task that can be demanding on the GPU but is efficiently optimized by Thrust.
RMM
is the fast and flexible NVIDIA memory management system that enables the safe and efficient handling of GPU memory through the use of a memory pool.
Finally, I took advantage of advanced CUDA features. One of the most significant challenges in parallelizing PDLP on GPUs is the restart procedure, which is inherently iterative and not suited for parallel execution. To address this, I used
CUDA Cooperative Groups
, which enable you to define GPU algorithms at various levels, with the largest being the grid that encompasses all workers. By implementing a cooperative kernel launch and using grid synchronization, you can efficiently and elegantly express the iterative restart procedure on the GPU.
Cutting-edge NVIDIA GPUs
GPUs achieve fast computation by using thousands of threads to solve many problems in parallel. However, before processing, the GPU must first transfer the data from the main memory to its worker threads.
Memory bandwidth
refers to the amount of data that can be transferred per second. While CPUs can usually handle hundreds of GB/s, the latest GPU,
NVIDIA HGX B100
, has a bandwidth of 8 TB/s, roughly two orders of magnitude larger.
The performance of this PDLP implementation scales directly with increased memory bandwidth due to its heavy reliance on memory-intensive computational patterns like Map and SpMV. With future NVIDIA GPU bandwidth increases, PDLP will automatically become faster, unlike other CPU-based LP solvers.
cuOpt outperforms state-of-the-art CPU LP solvers on Mittelmann's benchmark
The industry standard to benchmark the speed of LP solvers is
Mittelmann's benchmark
. The objective is to determine the optimal value of the LP function while adhering to the constraints in the shortest time possible. The benchmark problems represent various scenarios and contain between hundreds of thousands and tens of millions of values.
For the comparison, I ran a state-of-the-art CPU LP solver and compared it to this GPU LP solver. I used the same threshold of 10^-4 and disabled crossover. For more information, see the
Potential for PDLP refinement
section later in this post.
Both solvers operated under
float64
precision.
For the CPU LP solver, I used a recommended CPU setup: AMD EPYC 7313P servers with 16 cores and 256 GB of DDR4 memory.
For the cuOpt LP solver, I used an NVIDIA H100 SXM Tensor Core GPU to benefit from the high bandwidth and ran without presolve.
I considered the full solve time without I/O, including scaling for both solvers and presolving for the CPU LP solver. Only instances that have converged for both solvers with a correct objective value are showcased in Figure 4. cuOpt is faster on 60% of the instances and more than 10x faster in 20% of the instances. The biggest speed-up is 5000x on one instance of a large multi-commodity flow optimization problem.
Figure 4. cuOpt acceleration compared to CPU LP on Mittelmann's benchmark
I also compared cuOpt against a state-of-the-art CPU PDLP implementation using the same setup and conditions. cuOpt is consistently faster, between 10x and 3000x.
Figure 5. cuOpt acceleration compared to a CPU PDLP implementation on Mittelmann's benchmark
The multi-commodity flow problem (MCF) involves finding the most efficient way to route multiple different types of goods through a network from various starting points to their respective destinations, ensuring that the networkâs capacity constraints are not exceeded. One way to solve an MCF problem is to convert it to an LP. On a set of large MCF instances, PDLP is consistently faster, between 10x and 300x.
Figure 6. cuOpt acceleration compared to the CPU LP solver on a set of MCF instances
Potential for PDLP refinement
The NVIDIA cuOpt LP solver delivers incredible performance, but there's potential for future enhancements:
Handling higher accuracy
Requiring high bandwidth
Convergence issues on some problems
Limited benefit for small LPs
Handling higher accuracy
To decide whether you've solved an LP, you measure two things:
Optimality gap:
Measures how far you are from finding the optimum of the objective function.
Feasibility:
Measures how far you are from respecting the constraints.
An LP is considered solved when both quantities are zero. Reaching an exact value of zero can be challenging and often unnecessary, so LP solvers use a threshold that enables faster convergence while maintaining accuracy. Both quantities are now only required to be below this threshold, which is relative to the magnitude of the values of the problem.
Most LP solvers use a threshold, especially for large problems that are extremely challenging to solve. The industry standard so far has been 10^-8. While PDLP can solve problems at 10^-8, it is then usually significantly slower, which can be an issue if you require high accuracy. In practice, many users find 10^-4, and sometimes an even looser tolerance, accurate enough. This heavily benefits PDLP while not being a big differentiator for other LP-solving algorithms.
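As a concrete reading of such a relative threshold, the helper below checks a candidate primal-dual pair against a tolerance using one common normalization, dividing by 1 plus the magnitude of the quantities involved. The exact formulas differ between solvers, so treat this as an assumed convention rather than the cuOpt termination criterion.

```python
import numpy as np

def is_solved(A, b, c, x, y, tol=1e-4):
    """Rough relative termination check for: minimize c^T x s.t. A x = b, x >= 0."""
    primal_obj = float(c @ x)
    dual_obj = float(b @ y)
    # Relative gap: how far apart the primal and dual objectives are.
    gap = abs(primal_obj - dual_obj) / (1.0 + abs(primal_obj) + abs(dual_obj))
    # Relative feasibility: how badly the equality constraints are violated.
    infeas = np.linalg.norm(A @ x - b) / (1.0 + np.linalg.norm(b))
    return gap <= tol and infeas <= tol
```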
Requiring high bandwidth
PDLP's performance scales linearly with memory bandwidth, making it more efficient on new GPU architectures. It requires a recent server-grade GPU to reproduce the results shown in the performance analysis section.
Convergence issues on some problems
While PDLP can solve most LPs quickly, it sometimes needs a significant number of steps to converge, resulting in higher runtimes. On Mittelmann's benchmark, the cuOpt LP solver times out after one hour on 8 of the 49 public instances due to a slow convergence rate.
Limited benefit for small LPs
Small LPs benefit less from the GPU's high bandwidth, which doesn't enable PDLP to scale as well compared to CPU solvers. The cuOpt LP solver offers a batch mode for this scenario, where you can provide and solve hundreds of small LPs in parallel.
Conclusion
The cuOpt LP solver uses CUDA programming, NVIDIA GPU libraries, and cutting-edge NVIDIA GPUs to solve LPs, potentially orders of magnitude faster than CPU-based solvers, and scaling to over a billion coefficients. As a result, it's particularly beneficial for tackling large-scale problems, where its advantages become even more prominent.
Some use cases will still work better with traditional Simplex or IPM and I expect the future solvers to be a combination of GPU and CPU techniques.
Sign up to be notified when you can
try the cuOpt LP
. Try NVIDIA
cuOpt Vehicle Routing Problem (VRP
) today with
NVIDIA-hosted NIM
microservices for the latest AI models for free on the
NVIDIA API Catalog
. | https://developer.nvidia.com/ja-jp/blog/accelerate-large-linear-programming-problems-with-nvidia-cuopt/ | NVIDIA cuOpt ã§å€§èŠæš¡ãªç·åœ¢èšç»åé¡ãå éãã | Reading Time:
3
minutes
ç·åœ¢èšç»æ³ (LP: Linear Programming) ãœã«ããŒã®é²åã¯ã
ã·ã³ãã¬ãã¯ã¹æ³
ãã
å
ç¹æ³ (IPM: Interior Point Method)
ãŸã§ãéå» 1 äžçŽã«ããã£ãŠã«éèŠãªç¯ç®ã§ç¹åŸŽã¥ããããŠããŸããã
äž»å察ç·åœ¢èšç»æ³ (PDLP: Primal-dual Linear Programming)
ã®å°å
¥ã¯ããããªã倧ããªé²æ©ããããããŸããã
NVIDIA cuOpt
ã¯çŸåšãGPU ã¢ã¯ã»ã©ã¬ãŒã·ã§ã³ã§ PDLP ãå®è£
ããŠããŸããæå
端ã®ã¢ã«ãŽãªãºã ãNVIDIA ããŒããŠã§ã¢ãå°çšã® CUDA æ©èœãNVIDIA GPU ã©ã€ãã©ãªã䜿çšããŠãcuOpt LP ãœã«ããŒã¯ãCPU ããŒã¹ã®ãœã«ããŒãšæ¯èŒã㊠5,000 å以äžã®é«éããã©ãŒãã³ã¹ãå®çŸããŠããŸãã
ãã®æçš¿ã§ã¯ãLP ãœã«ã㌠ã¢ã«ãŽãªãºã ã®äž»èŠã³ã³ããŒãã³ããLP ã«ããã GPU ã¢ã¯ã»ã©ã¬ãŒã·ã§ã³ã
Mittelmann ã®ãã³ãããŒã¯
ãš
æå°è²»çšãããŒåé¡
ã®ã€ã³ã¹ã¿ã³ã¹ã«ããã cuOpt æ§èœãæ€èšŒããŸãã
æå
端ã®ã€ãããŒã·ã§ã³ã倧èŠæš¡ãª LP ã«æŽ»çš
LP ã¯ãäžé£ã®ç·åœ¢å¶çŽã®å¯Ÿè±¡ãšãªãç·åœ¢ç®çé¢æ°ãæé©åããææ³ã§ãã
äŸãã°ããããªã·ããªãªãèããŠã¿ãŠãã ããã蟲家ã¯ãåå°ãçš®ãè¥æã«å¶çŽãããäžã§ãå©çãæ倧éã«é«ããããã«ãã©ã®éèãã©ããããæ œå¹ãããã決ããªããã°ãªããŸãããç®æšã¯ãããããå¶çŽãæºãããªãããã§ããã ãè¿
éã«æé©ãªåçã決å®ããããšã§ãã
NVIDIA ãéçºãã LLM ãšãŒãžã§ã³ã
ã®äŸã¯ãLP ãœã«ããŒã䜿çšããŠåé¡ãã¢ãã«åãã解決ããã®ã«åœ¹ç«ã¡ãŸããLP ã¯æé©åã«äžå¯æ¬ ãªããŒã«ã§ããããªãœãŒã¹ã®é
åãçç£èšç»ããµãã©ã€ãã§ãŒã³ã
æ··åæŽæ°èšç»åé¡
(MIP: Mixed-integer Programming) ãœã«ããŒã®ããã¯ããŒã³ãšããŠå¿çšãããŠããŸããæ°çŸäžãã®å€æ°ãå¶çŽãããæ°åŠçåé¡ãæ°ç§ã§è§£ãããšã¯ãäžå¯èœã§ã¯ãªãã«ãããé£ããå ŽåããããŸãã
GPU ã§å¹ççã« LP åé¡ã解決ããã«ã¯ã以äžã® 3 ã€ã®èŠä»¶ããããŸãã
å¹ççã§å€§èŠæš¡ãªäžŠåã¢ã«ãŽãªãºã
NVIDIA GPU ã©ã€ãã©ãªãš CUDA æ©èœ
ææ°éã® NVIDIA GPU
å¹ççã§å€§èŠæš¡ãªäžŠåã¢ã«ãŽãªãºã
1947 幎㫠Dantz æ°ã«ãã£ãŠå°å
¥ããã
ã·ã³ãã¬ãã¯ã¹æ³
ã¯ãçŸåšã§ãã»ãšãã©ã® LP ãš MIP ãœã«ããŒã®äžæ žããªããŠããŸããããã¯ãå®çŸå¯èœãªé åã®ç«¯ãè¿œã£ãŠæé©å€ããæ±ãããã®ã§ãã
å³ 1. ã·ã³ãã¬ãã¯ã¹æ³ (åºå
ž:
Visually Explained â What is Linear Programming (LP)?
)
次ã®å€§ããªé²æ©ã¯ã1967 幎㫠I. I. Dikin æ°ãçºèŠãã
å
ç¹æ³ (IPM: Interior Point Method)
ã§ãããå€é¢äœã®å
éšãæé©å€ã®æ¹åã«ç§»åãã IPM ã¯ãçŸåšã§ã¯ CPU äžã§å€§èŠæš¡ãª LP ã解ãæå
端æè¡ã ãšèããããŠããŸããããããªããããããã®ææ³ã倧èŠæš¡ãªäžŠååã«ã¯éçããããŸãã
å³ 2. å
ç¹æ³ (åºå
ž:
Visually Explained â What is Linear Programming (LP)?
)
2021 幎ã«ã倧èŠæš¡ LP ã解ãæ°ããç»æçãªæè¡ãšããŠãGoogle Research ããŒã ã«ãã£ãŠçºè¡šãããŸããã®ãã
PDLP
ã§ããPDLP ã¯ãåé¡ã®å°é¢æ°ã䜿çšããŠãç®çãç¹°ãè¿ãæé©åããå¶çŽéåãæå°éã«æããäžæ¬¡æ³ (FOM: First-order Method) ã§ãã
å³3. åŸé
éäž
PDLP ã¯ããã¬ãœã«ããŒã察è§ç·ã®åææ¡ä»¶ãé©å¿çãªãªã¹ã¿ãŒããåçãªäž»å察ã¹ããã ãµã€ãºéžæãªã©ãåæãæ¹åããããŒã«ãå°å
¥ããããšã§ã
äž»å察ãã€ããªããåŸé
(PDHG: Primal-dual Hybrid Gradient)
ã¢ã«ãŽãªãºã ã匷åããŸãããã¬ãœã«ããŒãšåææ¡ä»¶ã«ãããå
¥ååé¡ãããç°¡çŽ åãããæ°å€å®å®æ§ãåäžããŸããäžæ¹ããªã¹ã¿ãŒããšåçã¹ããã ãµã€ãºèšç®ã«ããããœã«ããŒã¯æé©åäžã«é©å¿ããããšãã§ããŸãã
FOM ãåŸæ¥ã®ææ³ãããåªããŠããç¹ãšããŠã倧èŠæš¡ãªäžŠååã容æã§ãããGPU ã®å®è£
ã«é©ããŠããŸãã
PDLP ã¯ãMap æŒç®ãšçè¡åãã¯ãã«ç© (SpMV: Sparse Matrix-Vector Multiplications) ã® 2 ã€ã®é«åºŠã«äžŠååå¯èœãªèšç®ãã¿ãŒã³ãæ¡çšããŠããŸãããã®ã¢ãããŒãã«ãããPDLP ã¯æ°çŸäžãã®å€æ°ãšå¶çŽãå¹ççã«äžŠååŠçããããšãã§ããGPU ã«å¯ŸããŠéåžžã«å¹æçã«ãªããŸãã
Map ã¯ãPDLP ã§åºã䜿çšãããŠãããæ°çŸäžã®èŠçŽ ã«åã¶ãã¹ãŠã®å€æ°ãšå¶çŽã«å¯ŸããŠå ç®ãæžç®ãªã©ãè¡ããŸããããã¯ãGPU äžã§ã¯æ¥µããŠäžŠååŠçãå¯èœã§ãã€å¹ççã§ãã
SpMV ã¯ãçè¡å (å€ãã®ãŒããå«ã) ãšãã¯ãã«ã®ä¹ç®ã«çžåœããŸãããã®è¡åã®ãµã€ãºã¯æ°çŸåã«éããŸãããæçšãªå€ã¯ã¯ããã«å°ãªããªããŸããäŸãã°ãéèã®æ€ãä»ãåé¡ã§ã¯ãã3.5 kg 以äžã®ãžã£ã¬ã€ã¢ãæ€ããããšãã§ããªãããšãã£ãå¶çŽã«ã¯ãæ°çŸäžã®å€æ°ã®äžã§æçšãªå€ã 1 ã€ããå«ãŸããªãããšã«ãªããŸãã
SpMV ã¢ã«ãŽãªãºã ã¯ãGPU åãã«åºç¯å²ã«æé©åãããŠãããCPU å®è£
ãããæ¡éãã«é«éã§ãã
NVIDIA GPU ã©ã€ãã©ãªãš CUDA æ©èœ
æé«ã®ããã©ãŒãã³ã¹ãåŸãããã«ãNVIDIA ã® GPU PDLP å®è£
ã§ã¯ãæå
端㮠CUDA æ©èœãšä»¥äžã® NVIDIA ã©ã€ãã©ãªã䜿çšããŠããŸãã
cuSparse
Thrust
RMM
cuSparse
ã¯ãçç·åœ¢ä»£æ°ã® NVIDIA GPU 察å¿ã©ã€ãã©ãªã§ããããã¯ãGPU ã§ã¯é£ãããšããã SpMV ãå¹ççã«å®è¡ããŸããcuSparse ã¯ãNVIDIA ã®å·šå€§ãªäžŠåã¢ãŒããã¯ãã£ããã«æŽ»çšããããã«èšèšãããç¬èªã®ã¢ã«ãŽãªãºã ãæ¡çšããŠããŸãã
Thrust ã¯ã
NVIDIA CUDA ã³ã¢ ã³ã³ãã¥ãŒãã£ã³ã° ã©ã€ãã©ãª
(CCCL) ã®äžéšã§ãããé«ãã¬ãã«ã® C++ 䞊åã¢ã«ãŽãªãºã ãæäŸããŸããGPU ã®å®è¡ã«ãã¿ãŒã³ãšã€ãã¬ãŒã¿ãŒã䜿çšããŠãè€éãªã¢ã«ãŽãªãºã ã®è¡šçŸãç°¡çŽ åããŸããç§ã¯ãMap æŒç®ãšããŒã§å€ããœãŒããããªã¹ã¿ãŒãã®ããã»ã¹ã« Thrust ã䜿çšããŸãããããã¯ãGPU ã«è² è·ããããäœæ¥ã§ãããThrust ã§å¹ççã«æé©åã§ããŸãã
RMM
ã¯ãé«éãã€æè»ãª NVIDIA ã¡ã¢ãªç®¡çã·ã¹ãã ã§ãã¡ã¢ãª ããŒã«ã䜿çšããããšã§ãGPU ã¡ã¢ãªã®å®å
šã§å¹ççãªåŠçãå®çŸããŸãã
æåŸã«ãé«åºŠãª CUDA ã®æ©èœãå©çšããŸãããGPU äžã§ PDLP ã䞊ååããéã®æãéèŠãªèª²é¡ã® 1 ã€ã¯ãæ¬æ¥å埩çã§ããã䞊åå®è¡ã«ã¯é©ããŠããªããªã¹ã¿ãŒãæé ã§ããããã«å¯ŸåŠããããã«ãç§ã¯
CUDA Cooperative Groups
ã䜿çšããŸãããããã¯ãããŸããŸãªã¬ãã«ã§ GPU ã¢ã«ãŽãªãºã ãå®çŸ©ã§ããæã倧ããªã®ãã®ã¯ãã¹ãŠã®ã¯ãŒã«ãŒã網çŸ
ããã°ãªããã«ãªããŸããå調çãªã«ãŒãã«èµ·åãå®è£
ããã°ãªããåæãå©çšããããšã§ãGPU äžã§å埩çãªãªã¹ã¿ãŒãæé ãå¹ççãã€ãšã¬ã¬ã³ãã«è¡šçŸã§ããŸãã
æå
端㮠NVIDIA GPU
GPU ã¯ãæ°åãã®ã¹ã¬ããã䜿çšããŠå€ãã®åé¡ãåæã«è§£æ±ºããããšã§ãé«éèšç®ãå®çŸããŸããããããåŠçããåã«ãGPU ã¯ãŸãã¡ã€ã³ ã¡ã¢ãªããã¯ãŒã«ãŒ ã¹ã¬ããã«ããŒã¿ã転éããå¿
èŠããããŸãã
ã¡ã¢ãªåž¯åå¹
ãšã¯ã1 ç§éã«è»¢éã§ããããŒã¿éã®ããšã§ããCPU ã§ã¯éåžžãæ°çŸ GB/ç§ãåŠçã§ããŸãããææ°ã® GPU ã§ãã
NVIDIA HGX B100
ã®åž¯åå¹
㯠8 TB/ç§ã§ã2 æ¡ã倧ããå€ã§ãã
ãã® PDLP å®è£
ã®ããã©ãŒãã³ã¹ã¯ãMap ã SpMV ã®ãããªã¡ã¢ãªè² è·ã®é«ãèšç®ãã¿ãŒã³ã«å€§ããäŸåããŠãããããã¡ã¢ãªåž¯åå¹
ã®å¢å ã«å¿ããŠãçŽç·çã«æ¡å€§ããŸããå°æ¥çã« NVIDIA GPU ã®åž¯åå¹
ãå¢å ããã°ãä»ã® CPU ããŒã¹ã® LP ãœã«ããŒãšã¯ç°ãªããPDLP ã¯èªåçã«é«éåãããŸãã
cuOpt ã¯ãMittelmann ã®ãã³ãããŒã¯ã§æå
端㮠CPU LP ãœã«ããŒãäžåãæ§èœãçºæ®
LP ãœã«ããŒã®é床ãè©äŸ¡ããæ¥çæšæºãã
Mittelmann ã®ãã³ãããŒã¯
ã§ãããã®ç®çã¯ãå¶çŽãæºãããªãã LP é¢æ°ã®æé©å€ãå¯èœãªéãæçæéã§æ±ºå®ããããšã§ãããã³ãããŒã¯ã®åé¡ã¯ãããŸããŸãªã·ããªãªã瀺ããŠãããæ°åäžããæ°åäžã®å€ãå«ãã§ããŸãã
æ¯èŒã®ããã«ãç§ã¯ææ°ã® CPU LP ãœã«ããŒãå®è¡ãããã® GPU LP ãœã«ããŒãšæ¯èŒããŸãããåãéŸå€ã§ãã 10
-4
ãçšããŠãã¯ãã¹ãªãŒããŒãç¡å¹ã«ããŸããã詳现ã«ã€ããŠã¯ããã®æçš¿ã®åŸåã«ãã
PDLP æ¹è¯ã®å¯èœæ§
ã®ã»ã¯ã·ã§ã³ãåç
§ããŠãã ããã
ã©ã¡ãã®ãœã«ããŒã
float64
ã®ç²ŸåºŠã§åäœããŸããã
CPU LP ãœã«ããŒã®å Žåãæšå¥šããã CPU èšå®ã§ããã16 ã³ã¢ãš 256 GB ã® DDR4 ã¡ã¢ãªãåãã AMD EPYC 7313P ãµãŒããŒã䜿çšããŸããã
cuOpt LP ãœã«ããŒã®å Žåãé«åž¯åå¹
ã®ã¡ãªããã掻çšãããããNVIDIA H100 SXM Tensor ã³ã¢ GPU ã䜿çšãããã¬ãœã«ããŒãªãã§å®è¡ããŸããã
äž¡æ¹ã®ãœã«ããŒã®ã¹ã±ãŒãªã³ã°ãš CPU LP ãœã«ããŒã®ãã¬ãœã«ããŒãªã©ãI/O ãªãã®å®å
šãªè§£æ±ºæéãèæ
®ããŸãããæ£ããç®æšå€ãæã€äž¡æ¹ã®ãœã«ããŒã®å Žåãåæããã€ã³ã¹ã¿ã³ã¹ã®ã¿ãå³ 4 ã§ç€ºããŠããŸããcuOpt ã¯ã60% ã®ã€ã³ã¹ã¿ã³ã¹ã§é«éåãã20% ã®ã€ã³ã¹ã¿ã³ã¹ã§ 10 å以äžé«éåããŸãããæ倧ã®é«éåã¯ã倧èŠæš¡ãªå€åçš®æµåé¡ã®æé©åã®ãã¡ã® 1 ã€ã®ã€ã³ã¹ã¿ã³ã¹ã§ 5,000 åã®é«éåãéæãããŸããã
å³ 4. Mittelman ã®ãã³ãããŒã¯ã§ CPU LP ãšæ¯èŒãã cuOpt ã®é«éå
ãŸãåãèšå®ãšæ¡ä»¶ã䜿çšããŠãææ°ã® CPU PDLP ã®å®è£
ãš cuOpt ãæ¯èŒããŸãããcuOpt ã¯åžžã«é«éåãã10 åïœ 3000 åãé«éåããŸãã
å³ 5. Mittelman ã®ãã³ãããŒã¯ã§ CPU PDLP å®è£
ãšæ¯èŒãã cuOpt ã®é«éå
å€åçš®æµåé¡ (MCF: Multi-commodity Flow Problem) ã§ã¯ããããã¯ãŒã¯ã®å®¹éå¶çŽãè¶
ããªãããã«ãããŸããŸãªåºçºç¹ããããããã®ç®çå°ãŸã§ãããã¯ãŒã¯ãä»ããŠè€æ°ã®ããŸããŸãªç©ãã«ãŒãã£ã³ã°ããæãå¹ççãªæ¹æ³ãèŠã€ããããšãã§ããŸããMCF åé¡ã解決ãã 1 ã€ã®æ¹æ³ã¯ãLP ã«å€æããããšã§ããäžé£ã®å€§èŠæš¡ãª MCF ã€ã³ã¹ã¿ã³ã¹ã§ã¯ãPDLP ã¯äžè²«ã㊠10 åãã 300 åã®éã§é«éã«ãªããŸãã
å³ 6. äžé£ã® MCF ã€ã³ã¹ã¿ã³ã¹ã§ CPU LP ãœã«ããŒãšæ¯èŒãã cuOpt ã®é«éå
PDLP æ¹è¯ã®å¯èœæ§
NVIDIA cuOpt LP ãœã«ããŒã¯ãé©ç°çãªããã©ãŒãã³ã¹ãçºæ®ããŸãããå°æ¥çã«åŒ·åãããäœå°ããããŸãã
ããé«ã粟床ãžã®å¯Ÿå¿
é«åž¯åå¹
ã®å¿
èŠæ§
äžéšã®åé¡ã«å¯Ÿããåæåé¡
å°èŠæš¡ãª LP ã«å¯Ÿããéå®çãªå¹æ
ããé«ã粟床ãžã®å¯Ÿå¿
LP ã解決ãããã©ãããå€æããã«ã¯ã以äžã® 2 ç¹ã枬å®ããŸãã
æé©æ§ã®ã®ã£ãã
: ç®çé¢æ°ã®æé©å€ããã®è·é¢ã枬å®ããŸãã
å®çŸå¯èœæ§
: å¶çŽãã©ã®ãããæºãããŠãããã枬å®ããŸãã
LP ã¯ãäž¡æ¹ã®å€ããŒãã«ãªã£ãæã解決ãããšã¿ãªãããŸããå³å¯ã«ãŒãã®å€ã«å°éããããšã¯å°é£ã§ãããäžèŠãªããšãå€ããããLP ãœã«ããŒã¯ç²ŸåºŠãç¶æããªããåæãããé«éåããéŸå€ã䜿çšããŸããäž¡æ¹ã®å€ãããã®éŸå€ä»¥äžã§ããããšãèŠæ±ããããããåé¡ã®å€ã®å€§ããã«å¯ŸããŠçžå¯Ÿçãªãã®ã«ãªããŸãã
ã»ãšãã©ã® LP ãœã«ããŒã¯ãç¹ã«è§£æ±ºããã®ãéåžžã«å°é£ãªå€§ããªåé¡ã«å¯ŸããŠãéŸå€ã䜿çšããŸãããããŸã§ã®æ¥çæšæºã§ã¯ã10
-8
ã䜿çšããŠããŸãããPDLP 㯠10
-8
ã䜿çšããŠåé¡ã解決ããããšãã§ããŸãããéåžžã¯ããªãæéãããããŸããé«ã粟床ãæ±ããããå Žåãããã¯åé¡ã«ãªããŸããå®éã«ã¯ãå€ãã®å Žå 10
-4
ã§ååãªç²ŸåºŠã§ãããšèãã人ãå€ããå Žåã«ãã£ãŠã¯ãããããäœãå€ã§ãååã§ãããšèãã人ãããŸããããã¯ãPDLP ã«ã¯å€§ããªå©ç¹ãšãªããŸãããä»ã® LP 解決ã¢ã«ãŽãªãºã ã®å€§ããªå·®å¥åèŠå ã«ã¯ãªããŸããã
é«åž¯åå¹
ã®å¿
èŠæ§
PDLP ã®ããã©ãŒãã³ã¹ã¯ã¡ã¢ãªåž¯åå¹
ã«æ¯äŸããŠæ¡åŒµããæ°ãã GPU ã¢ãŒããã¯ãã£ã§ã¯ããå¹ççã«ãªããŸããåè¿°ããããã©ãŒãã³ã¹åæã®ã»ã¯ã·ã§ã³ã§ç€ºããçµæãåçŸããã«ã¯ãæè¿ã®ãµãŒã㌠ã°ã¬ãŒãã® GPU ãå¿
èŠã§ãã
äžéšã®åé¡ã«å¯Ÿããåæåé¡
PDLP ã¯ãã»ãšãã©ã® LP ãè¿
éã«è§£æ±ºã§ããŸãããåæãããŸã§ã«ããªãã®ã¹ãããæ°ãå¿
èŠã«ãªãããšãããããã®çµæãå®è¡æéãé·ããªãããšããããŸããMittelman ã®ãã³ãããŒã¯ã§ã¯ãcuOpt LP ãœã«ããŒã¯ã49 ã®å
¬éã€ã³ã¹ã¿ã³ã¹ã®ãã¡ 8 ã€ã§ãåæçãé
ãããã1 æéåŸã«ã¿ã€ã ã¢ãŠãããŸããã
å°èŠæš¡ãª LP ã«å¯Ÿããéå®çãªå¹æ
å°èŠæš¡ãª LP ããGPU ã®é«åž¯åå¹
ããå©çãåŸãããšã¯å°ãªããCPU ãœã«ããŒãšæ¯èŒã㊠PDLP ãæ¡åŒµã§ããŸãããcuOpt LP ãœã«ããŒã¯ããã®ãããªã·ããªãªã«å¯Ÿå¿ãããããã¢ãŒããçšæããŠãããæ°çŸãã®å°èŠæš¡ãª LP ã䞊è¡ããŠæäŸãã解決ã§ããŸãã
ãŸãšã
cuOpt LP ãœã«ããŒã¯ãCUDA ããã°ã©ãã³ã°ãNVIDIA GPU ã©ã€ãã©ãªãããã³ææ°éã® NVIDIA GPU ã䜿çšããŠãLP ã解決ããŸããããã¯ãCPU ãããæ¡éãã«é«éã§ã10 åãè¶
ããä¿æ°ã«ãŸã§æ¡å€§ãããå¯èœæ§ããããŸãããã®çµæã倧èŠæš¡ãªåé¡ã«åãçµãäžã§ç¹ã«æçã§ããããã®å©ç¹ã¯ããã«é¡èã«ãªããŸãã
GTC ã»ãã·ã§ã³:
Advances in Optimization AI (æé©å AI ã®é²æ©)
NGC ã³ã³ãããŒ:
cuOpt
SDK:
cuOpt
SDK:
cuSOLVERMp
SDK:
cuSOLVER
åŸæ¥ã®ã·ã³ãã¬ãã¯ã¹æ³ã IPM ãçšããã»ããäžæããããŠãŒã¹ ã±ãŒã¹ããããŸãããå°æ¥çã«ã¯ãGPU ãš CPU ã®æè¡ãçµã¿åããããœã«ããŒãäž»æµã«ãªãã§ãããã
ç»é²ãããšã
cuOpt LP ãè©Šããããã«
ãªã£ãéã«ãéç¥ãåãåãããšãã§ããŸãã
NVIDIA API ã«ã¿ãã°
ã§ã
NVIDIA ãæäŸãã
ææ°ã® AI ã¢ãã«åãã® NIM ãã€ã¯ããµãŒãã¹ã䜿çšããŠãNVIDIA ã®
cuOpt è»äž¡çµè·¯åé¡ (VPR: Vehicle Routing Problem
) ãä»ããç¡æã§ãè©Šããã ããã
é¢é£æ
å ± |
https://developer.nvidia.com/blog/managing-ai-inference-pipelines-on-kubernetes-with-nvidia-nim-operator/ | Managing AI Inference Pipelines on Kubernetes with NVIDIA NIM Operator | Developers have shown a lot of excitement for
NVIDIA NIM microservices
, a set of easy-to-use cloud-native microservices that shortens time-to-market and simplifies the deployment of generative AI models anywhere, across clouds, data centers, and GPU-accelerated workstations.
To meet the demands of diverse use cases, NVIDIA is bringing to market a variety of different AI models packaged as NVIDIA NIM microservices, which enable key functionality in a
generative AI inference workflow
.
A typical generative AI application integrates multiple different NIM microservices. For instance,
multi-turn conversational AI in a RAG pipeline
uses the LLM, embedding, and re-ranking NIM microservices. The deployment and lifecycle management of these microservices and their dependencies for production generative AI pipelines can lead to additional toil for the MLOps and LLMOps engineers and Kubernetes cluster admins.
This is why NVIDIA is announcing the
NVIDIA NIM Operator
, a Kubernetes operator designed to facilitate the deployment, scaling, monitoring, and management of NVIDIA NIM microservices on Kubernetes clusters. With NIM Operator, you can deploy, auto-scale, and manage the lifecycle of NVIDIA NIM microservices with just a few clicks or commands.
Cluster admins and MLOps and LLMOps engineers don't have to put effort into the manual deployment, scaling, and lifecycle management of AI inference pipelines. NIM Operator handles all of this and more.
Core capabilities and benefits
Developers are looking to reduce the effort of deploying AI inference pipelines at scale in local deployments. NIM Operator facilitates this with simplified, lightweight deployment and manages the lifecycle of AI NIM inference pipelines on Kubernetes. NIM Operator also supports pre-caching models to enable faster initial inference and autoscaling.
Figure 1. NIM Operator architecture
Figure 2. NIM Operator Helm deployment
Intelligent model pre-caching
NIM Operator offers
pre-caching of models
that reduces initial inference latency and enables faster autoscaling. It also enables model deployments in air-gapped environments.
Use NIM intelligent model pre-caching by specifying NIM profiles and tags, or let NIM Operator auto-detect the best model based on the GPUs available on the Kubernetes cluster. You can pre-cache models on any available node based on your requirements, either on CPU-only or on GPU-accelerated nodes.
When this option is selected, NIM Operator creates a persistent volume claim (PVC) in Kubernetes and then downloads and caches the NIM models in the cluster. Then, NIM Operator deploys and manages the lifecycle of this PVC using the
NIMCache
custom resource.
Figure 3. NIM microservice cache deployment
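To picture what that cache amounts to, the sketch below creates an equivalent standalone PVC with the official Kubernetes Python client. In practice NIM Operator provisions and owns this object for you through the NIMCache resource; the name, size, storage class, and namespace here are placeholders.

```python
# Illustrative only: NIM Operator normally creates and manages this PVC itself
# via the NIMCache custom resource; this shows the equivalent hand-made object.
from kubernetes import client, config

config.load_kube_config()

pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "nim-model-cache"},             # placeholder name
    "spec": {
        "accessModes": ["ReadWriteMany"],
        "storageClassName": "standard",                  # placeholder storage class
        "resources": {"requests": {"storage": "200Gi"}}, # placeholder size
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="nim", body=pvc_manifest                   # placeholder namespace
)
```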
Automated AI NIM pipeline deployments
NVIDIA is introducing two Kubernetes custom resource definitions (CRDs) to deploy NVIDIA NIM microservices:
NIMService
and
NIMPipeline
.
NIMService
, when deployed, manages each NIM microservice as a standalone microservice.
NIMPipeline
enables the deployment and management of several NIM microservices collectively.
Figure 4 shows a RAG pipeline managed as a microservice pipeline. You can manage multiple pipelines as a collection instead of individual services.
Figure 4. NIM microservice pipeline deployment
Autoscaling
NIM Operator supports
auto-scaling the NIMService deployment
and its
ReplicaSet
using Kubernetes Horizontal Pod Autoscaler (HPA).
The
NIMService
and
NIMPipeline
CRDs support all the familiar HPA metrics and scaling behaviors, such as the following:
Specify minimum and maximum replica counts
Scale using the following metrics:
Per-pod resource metrics, such as CPU
Per-pod custom metrics, such as GPU memory usage
Object metrics, such as NIM max requests or
KVCache
External metrics
You can also specify any HPA scale-up and scale-down behavior, for example, a stabilization window to prevent flapping and scaling policies to control the rate of change of replicas while scaling.
For more information, see
GPU Metrics
.
Figure 5. NIM Auto-scaling
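To make the scaling behavior above concrete, the snippet below reproduces the documented Kubernetes HPA rule, desired replicas = ceil(current replicas x observed metric / target metric), for a hypothetical GPU memory metric. The metric and numbers are illustrative, not values emitted by NIM Operator.

```python
import math

def desired_replicas(current_replicas: int, observed: float, target: float) -> int:
    """Kubernetes HPA scaling rule: ceil(current * observed / target)."""
    return math.ceil(current_replicas * (observed / target))

# Example: 3 replicas averaging 21 GiB of GPU memory against a 12 GiB target.
print(desired_replicas(3, 21.0, 12.0))  # -> 6, so HPA would scale the NIMService up
```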
Day 2 operations
NIMService
and
NIMPipeline
support easy rolling upgrades of NIM with a customizable rolling strategy. Change the version number of the NIM in the
NIMService
or
NIMPipeline
CRD and NIM Operator updates the NIM deployments in the cluster.
Any changes in
NIMService
pods are reflected in the
NIMService
and
NIMPipeline
status. You can also add Kubernetes ingress for
NIMService
.
Support matrix
At launch, NIM Operator supports the reasoning LLM and the retrieval (embedding) NIM microservices.
We are continuously expanding the list of supported NVIDIA NIM microservices. For more information about the full list of supported NIM microservices, see
Platform Support
.
Conclusion
By automating the deployment, scaling, and lifecycle management of NVIDIA NIM microservices, NIM Operator makes it easier for enterprise teams to adopt NIM microservices and accelerate AI adoption.
This effort aligns with our commitment to make NIM microservices easy to adopt, production-ready, and secure. NIM Operator will be part of future releases of NVIDIA AI Enterprise to provide enterprise support, API stability, and proactive security patching.
Get started with
NIM Operator through NGC today
, or get it from the
GitHub repo
. For technical questions on installation, usage, or issues, please file an issue on the repo. | https://developer.nvidia.com/ja-jp/blog/managing-ai-inference-pipelines-on-kubernetes-with-nvidia-nim-operator/ | NVIDIA NIM Operator 㧠Kubernetes ã® AI æšè«ãã€ãã©ã€ã³ã管ç | Reading Time:
2
minutes
éçºè
ã¯ãããã¯ãã¯ã©ãŠããããŒã¿ ã»ã³ã¿ãŒãã¯ã©ãŠããGPU ã«ããé«éåãããã¯ãŒã¯ã¹ããŒã·ã§ã³ãªã©ãããããå Žæã§åžå Žæå
¥ãŸã§ã®æéãççž®ããçæ AI ã¢ãã«ã®ãããã€ãç°¡çŽ åããããšãã§ããã䜿ããããã¯ã©ãŠããã€ãã£ãã®ãã€ã¯ããµãŒãã¹ã§ãã
NVIDIA NIM ãã€ã¯ããµãŒãã¹
ã«å€§ãã«æåŸ
ããŠããŸãã
å€æ§ãªãŠãŒã¹ ã±ãŒã¹ã®èŠæ±ã«å¿ãããããNVIDIA ã¯ãNVIDIA NIM ãã€ã¯ããµãŒãã¹ãšããŠããã±ãŒãžåãããããŸããŸãª AI ã¢ãã«ãåžå Žã«æå
¥ããŠããã
çæ AI æšè«ã¯ãŒã¯ãããŒ
ã®äž»èŠãªæ©èœãå®çŸããŠããŸãã
éåžžã®çæ AI ã¢ããªã±ãŒã·ã§ã³ã§ã¯ãè€æ°ã®ç°ãªã NIM ãã€ã¯ããµãŒãã¹ãçµ±åããŠããŸããäŸãã°ã
RAG ãã€ãã©ã€ã³ã®ãã«ãã¿ãŒã³å¯Ÿè©±å AI
ã§ã¯ãLLMãåã蟌ã¿ããªã©ã³ãã³ã°ãªã©ã®è€æ°ã® NIM ãã€ã¯ããµãŒãã¹ã䜿çšããŠããŸãããããã®ãã€ã¯ããµãŒãã¹ã®ãããã€ãã©ã€ããµã€ã¯ã«ç®¡çãæ¬çªç°å¢ã®çæ AI ãã€ãã©ã€ã³ãžã®äŸåé¢ä¿ã«ãããMLOps ããã³ LLMOps ã®ãšã³ãžãã¢ãKubernetes ã¯ã©ã¹ã¿ãŒã®ç®¡çè
ã®åŽåãããã«å¢ããå ŽåããããŸãã
ãã®ãããNVIDIA ã¯ãKubernetes ã¯ã©ã¹ã¿ãŒã§ NVIDIA NIM ãã€ã¯ããµãŒãã¹ã®ãããã€ãã¹ã±ãŒãªã³ã°ãç£èŠã管çã容æã«ããããèšèšããã Kubernetes ãªãã¬ãŒã¿ãŒã§ãã
NVIDIA NIM Operator
ãçºè¡šããŸãããNIM Operator ã䜿çšããã°ããããæ°åã®ã¯ãªãã¯ãŸãã¯ã³ãã³ãã§ãNVIDIA NIM ãã€ã¯ããµãŒãã¹ã®ãããã€ããªãŒãã¹ã±ãŒãªã³ã°ãã©ã€ããµã€ã¯ã«ã管çããããšãã§ããŸãã
ã¯ã©ã¹ã¿ãŒç®¡çè
ã MLOps ããã³ LLLMOps ã®ãšã³ãžãã¢ããAI æšè«ãã€ãã©ã€ã³ã®æäœæ¥ã«ãããããã€ãã¹ã±ãŒãªã³ã°ãã©ã€ããµã€ã¯ã«ç®¡çã«åŽåãè²»ããå¿
èŠã¯ãããŸãããNIM Operator ãããããã¹ãŠã«å¯ŸåŠããŸãã
äž»ãªæ©èœãšã¡ãªãã
éçºè
ã¯ãAI æšè«ãã€ãã©ã€ã³ãããŒã«ã«ã§å€§èŠæš¡ã«ãããã€ããåŽåã軜æžããããšèããŠããŸããNIM Operator ã¯ãç°¡çŽ åããã軜éãªãããã€ã§ãããä¿é²ããŠãKubernetes 㧠AI NIM æšè«ãã€ãã©ã€ã³ã®ã©ã€ããµã€ã¯ã«ã管çããŸããNIM Operator ã¯ãã¢ãã«ã®äºåãã£ãã·ã¥ããµããŒãããŠãããåææšè«ãšãªãŒãã¹ã±ãŒãªã³ã°ãããé«éåããããšãã§ããŸãã
å³ 1. NIM Operator ã®ã¢ãŒããã¯ãã£
å³ 2. NIM Operator Helm ã®ãããã€
ã¢ãã«ã®ã€ã³ããªãžã§ã³ãäºåãã£ãã·ã¥
NIM Operator ã¯ã
ã¢ãã«ã®äºåãã£ãã·ã¥
ãè¡ããåææšè«ã®é
延ãäœæžããŠããªãŒãã¹ã±ãŒãªã³ã°ãããé«éåããããšãã§ããŸãããŸããã€ã³ã¿ãŒãããæ¥ç¶ã®ãªãç°å¢ã§ãã¢ãã«ã®ãããã€ãå¯èœã§ãã
NIM ãããã¡ã€ã«ãšã¿ã°ãæå®ããŠãNIM ã¢ãã«ã®ã€ã³ããªãžã§ã³ãäºåãã£ãã·ã¥ã䜿çšããããKubernetes ã¯ã©ã¹ã¿ãŒã§å©çšå¯èœãª GPU ã«åºã¥ããŠãNIM Operator ã«æé©ãªã¢ãã«ãèªåæ€åºãããŸããCPU ã®ã¿ãæèŒãããããŒãã§ããGPU ã«ããé«éåãããããŒãã§ããèŠä»¶ã«å¿ããŠå©çšå¯èœãªããŒãã«ã¢ãã«ãäºåãã£ãã·ã¥ããããšãã§ããŸãã
ãã®ãªãã·ã§ã³ãéžæãããšãNIM Operator ã¯ãKubernetes ã«Persistent Volume Claim (PVC) ãäœæããã¯ã©ã¹ã¿ãŒã« NIM ã¢ãã«ãããŠã³ããŒãããŠãã£ãã·ã¥ããŸãã次ã«ãNIM Operator ã NIMCache ã«ã¹ã¿ã ãªãœãŒã¹ã䜿çšããŠããã® PVC ã®ã©ã€ããµã€ã¯ã«ããããã€ããŠç®¡çããŸãã
å³ 3. NIM ãã€ã¯ããµãŒãã¹ ãã£ãã·ã¥ã®ãããã€
èªå AI NIM ãã€ãã©ã€ã³ ãããã€
NVIDIA ã¯ã2 ã€ã® Kubernetes ã«ã¹ã¿ã ãªãœãŒã¹å®çŸ© (CRD) ãå°å
¥ããŠãNVIDIA NIM ãã€ã¯ããµãŒãã¹ã§ãã
NIMService
ãš
NIMPipeline
ããããã€ããŸãã
NIMService
ããããã€ããããšãå NIM ãã€ã¯ããµãŒãã¹ãã¹ã¿ã³ãã¢ãã³ã®ãã€ã¯ããµãŒãã¹ãšããŠç®¡çããŸãã
NIMPipeline
ã¯ãè€æ°ã® NIM ãã€ã¯ããµãŒãã¹ããŸãšããŠãããã€ãã管çããããšãã§ããŸãã
å³ 4 ã¯ããã€ã¯ããµãŒãã¹ ãã€ãã©ã€ã³ãšããŠç®¡çããã RAG ãã€ãã©ã€ã³ã瀺ããŠããŸããè€æ°ã®ãã€ãã©ã€ã³ãåå¥ã®ãµãŒãã¹ã§ã¯ãªããäžæ¬ããŠç®¡çã§ããŸãã
å³ 4. NIM ãã€ã¯ããµãŒãã¹ ãã€ãã©ã€ã³ã®ãããã€
ãªãŒãã¹ã±ãŒãªã³ã°
NIM Operator ã¯ãKubernetes Horizontal Pod Autoscaler (HPA) ã䜿çšããŠã
NIMService ã®ãããã€
ãšãã® ReplicaSet ã®ãªãŒãã¹ã±ãŒãªã³ã°ã«å¯Ÿå¿ããŠããŸãã
NIMService
ãš
NIMPipeline
ã® CRD ã¯ã以äžã®ãã㪠HPA ã§ããªãã¿ã®ã¡ããªã¯ã¹ãšã¹ã±ãŒãªã³ã°åäœããã¹ãŠãµããŒãããŠããŸãã
ã¬ããªã«æ°ã®æå°å€ãšæ倧å€ãæå®ãã
以äžã®ã¡ããªã¯ã¹ã䜿çšããŠã¹ã±ãŒãªã³ã°ãã
CPU ãªã©ããããããšã®ãªãœãŒã¹ ã¡ããªã¯ã¹
GPU ã¡ã¢ãªã®äœ¿çšéãªã©ããããããšã®ã«ã¹ã¿ã ã¡ããªã¯ã¹
NIM æ倧ãªã¯ãšã¹ãã KVCache ãªã©ã®ãªããžã§ã¯ã ã¡ããªã¯ã¹
å€éšã¡ããªã¯ã¹
ãŸãããã©ãããé²æ¢ããå®å®åãŠã£ã³ããŠããã¹ã±ãŒãªã³ã°äžã«ã¬ããªã«æ°ã®å€åãå¶åŸ¡ããã¹ã±ãŒãªã³ã° ããªã·ãŒãªã©ãHPA ã®ã¹ã±ãŒã«ã¢ããåäœãšã¹ã±ãŒã«ããŠã³åäœãæå®ããããšãã§ããŸãã
詳现ã«ã€ããŠã¯ã
GPU ã¡ããªã¯ã¹
ãã芧ãã ããã
å³ 5. NIM ãªãŒãã¹ã±ãŒãªã³ã°
2 æ¥ç®ã®ãªãã¬ãŒã·ã§ã³
NIMService
ãš
NIMPipeline
ã¯ãã«ã¹ã¿ãã€ãºå¯èœãªããŒãªã³ã°æŠç¥ã§ NIM ã®ç°¡åãªããŒãªã³ã° ã¢ããã°ã¬ãŒãããµããŒãããŠããŸãã
NIMService
ãŸãã¯
NIMPipeline
CRD 㧠NIM ã®ããŒãžã§ã³çªå·ãå€æŽãããšãNIM Operator ã¯ãã¯ã©ã¹ã¿ãŒã® NIM ãããã€ãæŽæ°ããŸãã
NIMService
ãããã®å€æŽã¯ãã¹ãŠ
NIMService
ãš
NIMPipeline
ã®ã¹ããŒã¿ã¹ã«åæ ãããŸãã
NIMService
åãã« Kubernetes Ingress ãè¿œå ããããšãã§ããŸãã
ãµããŒããããã¢ãã«ãšãã©ãããã©ãŒã
ãªãªãŒã¹æç¹ã§ãNIM Operator ã¯ãLLM ãRetrievalïŒåã蟌ã¿ïŒ NIM ãã€ã¯ããµãŒãã¹ããµããŒãããŠããŸãã
NVIDIA ã¯ããµããŒã察象㮠NVIDIA NIM ãã€ã¯ããµãŒãã¹ã®ãªã¹ããç¶ç¶çã«æ¡å€§ããŠããŸãããµããŒã察象㮠NIM ãã€ã¯ããµãŒãã¹ã®å
šãªã¹ãã®è©³çŽ°ã«ã€ããŠã¯ã
ãã©ãããã©ãŒã ãµããŒã
ãã芧ãã ãã
ã
ãŸãšã
NVIDIA NIM ãã€ã¯ããµãŒãã¹ã®ãããã€ãã¹ã±ãŒãªã³ã°ãã©ã€ããµã€ã¯ã«ç®¡çãèªååããããšã§ãNIM Operator ã¯ããšã³ã¿ãŒãã©ã€ãº ããŒã ã NIM ãã€ã¯ããµãŒãã¹ãç°¡åã«å°å
¥ããAI ã®å°å
¥ãé«éåã§ããããã«ããŸãã
ãã®åãçµã¿ã¯ãNIM ãã€ã¯ããµãŒãã¹ãç°¡åã«å°å
¥ã§ããæ¬çªç°å¢ã«å¯Ÿå¿ããå®å
šæ§ã確ä¿ãããšãã NVIDIA ã®ã³ãããã¡ã³ããšãäžèŽããŠããŸããNIM Operator ã¯ãNVIDIA AI Enterprise ãä»åŸãªãªãŒã¹ããæ©èœã®äžéšã§ããšã³ã¿ãŒãã©ã€ãº ãµããŒããAPI ã®å®å®æ§ãããã¢ã¯ãã£ããªã»ãã¥ãªã㣠ããããæäŸããŸãã
ä»ãã NGC 㧠NIM Operator
ã䜿ãå§ãããã
GitHub ãªããžããª
ããå
¥æããŠãã ãããã€ã³ã¹ããŒã«ã䜿çšæ¹æ³ãåé¡ã«é¢ããæè¡çãªè³ªåã«ã€ããŠã¯ããªããžããªã« Issue ãæåºããŠãã ããã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
ãšã³ã¿ãŒãã©ã€ãºã®å é: 次äžä»£ AI ãããã€ã®ããã®ããŒã«ããã¯ããã¯
GTC ã»ãã·ã§ã³:
LLM æšè«ãµã€ãžã³ã°: ãšã³ãããŒãšã³ãæšè«ã·ã¹ãã ã®ãã³ãããŒã¯
NGC ã³ã³ãããŒ:
NVIDIA NIM Operator
NGC ã³ã³ãããŒ:
NVIDIA GPU Operator
NGC ã³ã³ãããŒ:
NV-CLIP
ãŠã§ãããŒ:
æ¬çªç°å¢å¯Ÿå¿ã®çæ AI ã§äžçæé«ã¯ã©ã¹ã®ããã¹ãæ€çŽ¢ç²ŸåºŠãå®çŸ |
https://developer.nvidia.com/blog/deploying-accelerated-llama-3-2-from-the-edge-to-the-cloud/ | Deploying Accelerated Llama 3.2 from the Edge to the Cloud | Expanding the open-source Meta Llama collection of models, the Llama 3.2 collection includes vision language models (VLMs), small language models (SLMs), and an updated Llama Guard model with support for vision. When paired with the NVIDIA accelerated computing platform, Llama 3.2 offers developers, researchers, and enterprises valuable new capabilities and optimizations to realize their generative AI use cases.
Trained on
NVIDIA H100 Tensor Core GPUs
, the SLMs in 1B and 3B sizes are ideal for deploying Llama-based AI assistants across edge devices. The VLMs in 11B and 90B sizes support text and image inputs and output text. With multimodal support, the VLMs help developers build powerful applications requiring visual grounding, reasoning, and understanding. For example, they can build AI agents for image captioning, image-text retrieval, visual Q&A, and document Q&A, among others. The Llama Guard models now also support image input guardrails in addition to text input.
Llama 3.2 model architecture is an auto-regressive language model that uses an optimized transformer architecture. The instruction tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. All models support a long context length of 128K tokens and are optimized for inference with support for grouped query attention (GQA).
NVIDIA is optimizing the Llama 3.2 collection of models to deliver high throughput and low latency across millions of GPUs worldwideâfrom data centers to local workstations with
NVIDIA RTX
, and at the edge with
NVIDIA Jetson
. This post describes the hardware and software optimizations, customizations, and ease-of-deployment capabilities.
Accelerating Llama 3.2 performance with NVIDIA TensorRT
NVIDIA is accelerating the Llama 3.2 model collection to reduce cost and latency while delivering unparalleled throughput and providing an optimal end-user experience.
NVIDIA TensorRT
includes
TensorRT
and
TensorRT-LLM
libraries for high-performance deep learning inference.
The Llama 3.2 1B and Llama 3.2 3B models are being accelerated for long-context support in TensorRT-LLM using the
scaled rotary position embedding (RoPE)
technique and several other
optimizations
, including KV caching and in-flight batching.
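For intuition, the sketch below applies a bare-bones rotary position embedding to a query matrix with a naive position-scaling factor. It is a simplified illustration of the idea only: the actual frequency-scaling scheme used for Llama long-context support, and its TensorRT-LLM implementation, are more involved, and the shapes here are placeholders.

```python
# Bare-bones rotary position embedding (RoPE) with a naive position scale.
import numpy as np

def rope(x: np.ndarray, positions: np.ndarray, base: float = 10000.0,
         scale: float = 1.0) -> np.ndarray:
    """x: (seq_len, head_dim) with even head_dim; positions: (seq_len,)."""
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)      # per-pair rotation frequencies
    angles = np.outer(positions / scale, freqs)    # scaling stretches usable context
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

q = np.random.randn(8, 64)                  # 8 positions, head_dim = 64
q_rot = rope(q, np.arange(8), scale=4.0)    # pretend the context is 4x longer
print(q_rot.shape)                          # (8, 64)
```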
The Llama 3.2 11B and Llama 3.2 90B models are multimodal and include a vision encoder with a text decoder. The vision encoder is being accelerated by exporting the model into an ONNX graph and building the TensorRT engine.
ONNX
export creates a standard model definition with built-in operators and standard data types, focused on inferencing. TensorRT uses the ONNX graph to optimize the model for target GPUs by building the
TensorRT engine
. These engines offer a variety of hardware-level optimizations to maximize NVIDIA GPU utilization through layer and tensor fusion in conjunction with kernel auto-tuning.
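As a rough sketch of the first step in that flow, the snippet below exports a stand-in vision encoder to ONNX with PyTorch's standard exporter. The VisionEncoder module, input resolution, and tensor names are placeholders rather than the actual Llama 3.2 components; the resulting .onnx file would then be compiled into a TensorRT engine with TensorRT's own tooling, such as trtexec.

```python
# Hypothetical ONNX export of a vision encoder; VisionEncoder stands in for the
# real Llama 3.2 component and exists only to make the example runnable.
import torch
import torch.nn as nn

class VisionEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=14, stride=14),  # 448x448 -> 32x32 patches
            nn.Flatten(),
            nn.Linear(64 * 32 * 32, 1024),
        )

    def forward(self, pixels):
        return self.backbone(pixels)

model = VisionEncoder().eval()
dummy = torch.randn(1, 3, 448, 448)          # placeholder image resolution
torch.onnx.export(model, dummy, "vision_encoder.onnx",
                  input_names=["pixel_values"], output_names=["features"],
                  dynamic_axes={"pixel_values": {0: "batch"}})
# The .onnx file can then be compiled into a TensorRT engine, e.g. with trtexec.
```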
The visual information from the vision encoder is fused into the Llama text decoder with a cross-attention mechanism that is supported in TensorRT-LLM. This enables the VLMs to efficiently generate text by taking into account visual reasoning and understanding in context with text input.
Easily deploy generative AI solutions using NVIDIA NIM
The TensorRT optimizations are available through production-ready deployments using
NVIDIA NIM
microservices. NIM microservices accelerate the deployment of generative AI models across NVIDIA-accelerated infrastructure anywhere, including cloud, data center, and workstations.
Llama 3.2 90B Vision Instruct
,
Llama 3.2 11B Vision Instruct
,
Llama 3.2 3B Instruct
, and
Llama 3.2 1B Instruct
are supported through NIM microservices for production deployments. NIM provides simplified management and orchestration of generative AI workloads, standard application programming interfaces (APIs), and enterprise support with production-ready containers. With strong and growing ecosystem support, including over 175 partners integrating their solutions with NVIDIA NIM microservices, developers, researchers, and enterprises around the world can maximize their return on investment for generative AI applications.
Customize and evaluate Llama 3.2 models with NVIDIA AI Foundry and NVIDIA NeMo
NVIDIA AI Foundry
provides an end-to-end platform for Llama 3.2 model customizations with access to advanced AI tools, computing resources, and AI expertise. Fine-tuned on proprietary data, the custom models enable enterprises to achieve better performance and accuracy in domain-specific tasks, gaining a competitive edge.
With
NVIDIA NeMo
, developers can curate their training data, leverage advanced tuning techniques including LoRA, SFT, DPO, and RLHF to customize the Llama 3.2 models, evaluate for accuracy, and add guardrails to ensure appropriate responses from the models. AI Foundry provides dedicated capacity on
NVIDIA DGX Cloud
, and is supported by NVIDIA AI experts. The output is a custom Llama 3.2 model packaged as an NVIDIA NIM inference microservice, which can be deployed anywhere.
Scale local inference with NVIDIA RTX and NVIDIA Jetson
Today, Llama 3.2 models are optimized on the 100M+ NVIDIA RTX PCs and workstations worldwide. For Windows deployments, NVIDIA has optimized this suite of models to work efficiently using the ONNX-GenAI runtime, with a DirectML backend. Get started with the
Llama 3.2 3B model on NVIDIA RTX
.
The new VLM and SLM models unlock new capabilities on NVIDIA RTX systems. To demonstrate, we created an example of a multimodal
retrieval-augmented generation (RAG)
pipeline that combines text and visual data processing (for images, plots, and charts, for example) for enhanced information retrieval and generation.
Learn how to run this pipeline on NVIDIA RTX Linux systems using the Llama 3.2 SLM and VLM
. Note that you'll need a Linux workstation with an NVIDIA RTX professional GPU with 30+ GB of memory.
SLMs are tailored for local deployment on edge devices using techniques like distillation, pruning, and quantization to reduce memory, latency, and computational requirements while retaining accuracy for application-focused domains. To download and deploy the Llama 3.2 1B and 3B SLMs onboard your Jetson with optimized GPU inference and INT4/FP8 quantization, see the
SLM Tutorial on NVIDIA Jetson AI Lab
.
Multimodal models are increasingly useful in edge applications for their unique vision capabilities in video analytics and robotics. The
Llama 3.2 11B VLM is supported on embedded Jetson AGX Orin 64 GB
.
Advancing community AI models
An active open-source contributor, NVIDIA is committed to optimizing community software that helps users address their toughest challenges. Open-source AI models also promote transparency and enable users to broadly share work on AI safety and resilience.
The
Hugging Face inference-as-a-service
capabilities enable developers to rapidly deploy leading
large language models (LLMs)
such as the Llama 3 collection with optimization from NVIDIA NIM microservices running on
NVIDIA DGX Cloud
.
Get free access to NIM for research, development, and testing through the
NVIDIA Developer Program
.
Explore the NVIDIA AI inference platform further, including how
NVIDIA NIM
,
NVIDIA TensorRT-LLM
,
NVIDIA TensorRT,
and
NVIDIA Triton
use state-of-the-art techniques such as
LoRA
to accelerate the latest LLMs. | https://developer.nvidia.com/ja-jp/blog/deploying-accelerated-llama-3-2-from-the-edge-to-the-cloud/ | é«éåããã Llama 3.2 ããšããžããã¯ã©ãŠããžãããã€ãã | Reading Time:
2
minutes
ãªãŒãã³ãœãŒã¹ã® Meta Llama ã¢ãã«ã®ã³ã¬ã¯ã·ã§ã³ãæ¡åŒµãã Llama 3.2 ã³ã¬ã¯ã·ã§ã³ã«ã¯ãèŠèŠèšèªã¢ãã« (VLM)ãå°èŠæš¡èšèªã¢ãã« (SLM)ãããžã§ã³ã®ãµããŒããè¿œå ããã Llama Guard ã¢ãã«ãå«ãŸããŠããŸããNVIDIA ã®ã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã° ãã©ãããã©ãŒã ãšçµã¿åãããããšã§ãLlama 3.2 ã¯éçºè
ãç 究è
ãäŒæ¥ã«ãçæ AI ã®ãŠãŒã¹ ã±ãŒã¹ãå®çŸããããã®æçãªæ°æ©èœãšæé©åãæäŸããŸãã
NVIDIA H100 Tensor ã³ã¢ GPU
ã§ãã¬ãŒãã³ã°ããã 1B ããã³ 3B ãµã€ãºã® SLM ã¯ããšããž ããã€ã¹ã« Llama ããŒã¹ã® AI ã¢ã·ã¹ã¿ã³ããå±éããã®ã«æé©ã§ãã11B ããã³ 90B ãµã€ãºã® VLM ã¯ãããã¹ããšç»åã®å
¥åãšããã¹ãã®åºåããµããŒãããŸãããã«ãã¢ãŒãã«ããµããŒããã VLM ã¯ãã°ã©ãŠã³ãã£ã³ã° (Visual grounding)ãæšè« (Reasoning)ãç解ãå¿
èŠãšãã匷åãªã¢ããªã±ãŒã·ã§ã³ãéçºããã®ã«åœ¹ç«ã¡ãŸããäŸãã°ãç»åãã£ãã·ã§ãã³ã°ãç»åããã¹ãæ€çŽ¢ãããžã¥ã¢ã« Q&Aãææž Q&A ãªã©ãæ
åœãã AI ãšãŒãžã§ã³ããæ§ç¯ããããšãã§ããŸããLlama Guard ã¢ãã«ã¯ãããã¹ãå
¥åã«å ããŠãç»åå
¥åã®ã¬ãŒãã¬ãŒã«ããµããŒãããããã«ãªããŸããã
Llama 3.2 ã¢ãã« ã¢ãŒããã¯ãã£ã¯ãæé©åããã Transformer ã¢ãŒããã¯ãã£ã䜿çšããèªå·±ååž°èšèªã¢ãã«ã§ããæ瀺ãã¥ãŒãã³ã°çã§ã¯ãæåž«ãããã¡ã€ã³ãã¥ãŒãã³ã° (SFT) ãšã人éã®ãã£ãŒãããã¯ã«ãã匷ååŠç¿ (RLHF) ã䜿çšããŠã人éã®å¥œã¿ã«åãããæçšæ§ãšå®å
šæ§ãå®çŸããŠããŸãããã¹ãŠã®ã¢ãã«ã¯ 128K ããŒã¯ã³ã®é·ãã³ã³ããã¹ãé·ããµããŒãããã°ã«ãŒãåãããã¯ãšãª ã¢ãã³ã·ã§ã³ (GQA) ã®ãµããŒããšå
±ã«æšè«ã«æé©åãããŠããŸãã
NVIDIA ã¯ãLlama 3.2 ã®ã¢ãã« ã³ã¬ã¯ã·ã§ã³ãæé©åããŠãããããŒã¿ ã»ã³ã¿ãŒãã
NVIDIA RTX
æèŒã®ããŒã«ã« ã¯ãŒã¯ã¹ããŒã·ã§ã³ããããŠ
NVIDIA Jetson
æèŒã®ãšããžã«è³ããŸã§ãäžçäžã®æ°çŸäžã® GPU ã§é«ã¹ã«ãŒããããšäœé
延ãå®çŸããŠããŸãããã®èšäºã§ã¯ãããŒããŠã§ã¢ãšãœãããŠã§ã¢ã®æé©åãã«ã¹ã¿ãã€ãºããããã€ã容æã«ããæ©èœã«ã€ããŠèª¬æããŸãã
NVIDIA TensorRT ã«ãã Llama 3.2 ã®ããã©ãŒãã³ã¹ã®é«éå
NVIDIA ã¯ãLlama 3.2 ã¢ãã« ã³ã¬ã¯ã·ã§ã³ãé«éåããã³ã¹ããšã¬ã€ãã³ã·ãäœæžããªãããæ¯é¡ã®ãªãã¹ã«ãŒããããšæé©ãªãšã³ã ãŠãŒã¶ãŒäœéšãæäŸããŠããŸãã
NVIDIA TensorRT
ã«ã¯ãé«æ§èœãªãã£ãŒãã©ãŒãã³ã°æšè«çšã®
TensorRT
ããã³
TensorRT-LLM
ã©ã€ãã©ãªãå«ãŸããŠããŸãã
Llama 3.2 1B ããã³ Llama 3.2 3B ã¢ãã«ã¯ãTensorRT-LLM ã®é·ã³ã³ããã¹ã ãµããŒãã®ããã«ã
Scaled Rotary Position Embedding (RoPE)
æè¡ãš KV ãã£ãã·ã¥ããã³ã€ã³ãã©ã€ã ãããã³ã°ãªã©ããã®ä»ã
è€æ°ã®æé©åææ³
ã䜿çšããŠé«éåãããŠããŸãã
Llama 3.2 11B ãš Llama 3.2 90B ã¯ãã«ãã¢ãŒãã«ã§ãããã¹ã ãã³ãŒããŒãåããããžã§ã³ ãšã³ã³ãŒããŒãæèŒãããŠããŸããããžã§ã³ ãšã³ã³ãŒããŒã¯ãã¢ãã«ã ONNX ã°ã©ãã«ãšã¯ã¹ããŒãããTensorRT ãšã³ãžã³ãæ§ç¯ããããšã§å éãããŠããŸãã
ONNX
ã®ãšã¯ã¹ããŒãã§ã¯ãæšè«ã«éç¹ã眮ããçµã¿èŸŒã¿ã®æŒç®åãšæšæºããŒã¿åãçšããæšæºã¢ãã«å®çŸ©ãäœæãããŸããTensorRT 㯠ONNX ã°ã©ãã䜿çšãã
TensorRT ãšã³ãžã³
ãæ§ç¯ããããšã§ãã¿ãŒã²ãã GPU ã«ã¢ãã«ãæé©åããŸãããããã®ãšã³ãžã³ã¯ãã«ãŒãã«èªåãã¥ãŒãã³ã°ãšã¬ã€ã€ãŒããã³ãœã«ã®èåãéããŠãNVIDIA GPU ã®å©çšãæ倧éã«é«ããããã«ãããŒããŠã§ã¢ ã¬ãã«ã§å€æ§ãªæé©åãæäŸããŸãã
ããžã§ã³ ãšã³ã³ãŒããŒããååŸããèŠèŠæ
å ±ã¯ãTensorRT-LLM ã§ãµããŒããããŠããã¯ãã¹ ã¢ãã³ã·ã§ã³ã®ã¡ã«ããºã ã䜿çšã㊠Llama ããã¹ã ãã³ãŒããŒã«èåãããŸããããã«ãã VLM ã¯ãããã¹ãå
¥åã®ã³ã³ããã¹ãã«ãããç解ãšèŠèŠçãªæšè« (Reasoning) ãèæ
®ã«å
¥ããŠãå¹ççã«ããã¹ããçæã§ããããã«ãªããŸãã
NVIDIA NIM ã䜿çšããŠçæ AI ãœãªã¥ãŒã·ã§ã³ã容æã«ãããã€
TensorRT ã®æé©åã¯ã
NVIDIA NIM
ãã€ã¯ããµãŒãã¹ã䜿çšããæ¬çªç°å¢ãžã®ãããã€ãéããŠå©çšã§ããŸããNIM ãã€ã¯ããµãŒãã¹ã¯ãã¯ã©ãŠããããŒã¿ ã»ã³ã¿ãŒãã¯ãŒã¯ã¹ããŒã·ã§ã³ãªã©ãNVIDIA ãã¢ã¯ã»ã©ã¬ãŒãããã€ã³ãã©å
šäœã§çæ AI ã¢ãã«ã®ãããã€ãå éããŸãã
Llama 3.2 90B Vision Instruct
ã
Llama 3.2 11B Vision Instruct
ã
Llama 3.2 3B Instruct
ããã³
Llama 3.2 1B Instruct
ã¯ãNIM ãã€ã¯ããµãŒãã¹ãéããæ¬çªç°å¢ãžã®ãããã€ã«å¯Ÿå¿ããŠããŸããNIM ã¯ãçæ AI ã¯ãŒã¯ããŒãã®ç°¡çŽ åããã管çãšãªãŒã±ã¹ãã¬ãŒã·ã§ã³ãæšæºçãªã¢ããªã±ãŒã·ã§ã³ ããã°ã©ãã³ã° ã€ã³ã¿ãŒãã§ã€ã¹ (API) ããã³æ¬çªç°å¢ã«å¯Ÿå¿ããã³ã³ãããŒã«ãããšã³ã¿ãŒãã©ã€ãº ãµããŒããæäŸããŸãã175 瀟ãè¶
ããããŒãããŒãèªç€Ÿã®ãœãªã¥ãŒã·ã§ã³ã NVIDIA NIM ãã€ã¯ããµãŒãã¹ãšçµ±åãã匷åã§æ¡å€§ãç¶ãããšã³ã·ã¹ãã ãµããŒããæäŸããããšã§ãäžçäžã®éçºè
ãç 究è
ãäŒæ¥ã¯ãçæ AI ã¢ããªã±ãŒã·ã§ã³ã«å¯Ÿããæè³åççãæ倧åã§ããŸãã
NVIDIA AI Foundry ãš NVIDIA NeMo ã«ãã Llama 3.2 ã¢ãã«ã®ã«ã¹ã¿ãã€ãºãšè©äŸ¡
NVIDIA AI Foundry
ã¯ãé«åºŠãª AI ããŒã«ãã³ã³ãã¥ãŒãã£ã³ã° ãªãœãŒã¹ãAI ã®å°éç¥èã«ã¢ã¯ã»ã¹ã§ãããLlama 3.2 ã¢ãã«ã®ã«ã¹ã¿ãã€ãºã«é©ãããšã³ãããŒãšã³ãã®ãã©ãããã©ãŒã ãæäŸããŸããç¬èªã®ããŒã¿ã«åºã¥ããŠãã¡ã€ã³ãã¥ãŒãã³ã°ããã«ã¹ã¿ã ã¢ãã«ã«ãããäŒæ¥ã¯ç¹å®ãã¡ã€ã³ã«ãããæ¥åã§ããåªããããã©ãŒãã³ã¹ãšç²ŸåºŠãéæãã競äºåãé«ããããšãã§ããŸãã
NVIDIA NeMo
ã䜿çšããããšã§ãéçºè
ã¯ãã¬ãŒãã³ã° ããŒã¿ããã¥ã¬ãŒã·ã§ã³ã㊠LoRAãSFTãDPOãRLHF ãªã©ã®é«åºŠãªãã¥ãŒãã³ã°æè¡ã掻çšããŠãLlama 3.2 ã¢ãã«ãã«ã¹ã¿ãã€ãºãã粟床ãè©äŸ¡ããã¬ãŒãã¬ãŒã«ãè¿œå ããŠãã¢ãã«ããé©åãªå¿çãåŸãããããã«ãªããŸããAI Foundry ã¯ã
NVIDIA DGX Cloud
äžã§å°çšã®ãªãœãŒã¹ãæäŸãããã㊠NVIDIA AI ã®å°é家ã«ãã£ãŠãµããŒããããŠããŸããåºåã¯ãNVIDIA NIM æšè«ãã€ã¯ããµãŒãã¹ãšããŠããã±ãŒãžåãããã«ã¹ã¿ã Llama 3.2 ã¢ãã«ã§ãã©ãã«ã§ããããã€ããããšãã§ããŸãã
NVIDIA RTX ããã³ NVIDIA Jetson ã«ããããŒã«ã«æšè«ã®ã¹ã±ãŒãªã³ã°
çŸåšãLlama 3.2 ã¢ãã«ã¯ãäžçäžã® 1 åå°ãè¶
ãã NVIDIA RTX æèŒ PC ããã³ã¯ãŒã¯ã¹ããŒã·ã§ã³ã§æé©åãããŠããŸããWindows ã§ã®ãããã€çšã« NVIDIA ã¯ãã®ã¢ãã« ã¹ã€ãŒããæé©åããDirectML ããã¯ãšã³ã㧠ONNX-GenAI ã©ã³ã¿ã€ã ã䜿çšããŠå¹ççã«åäœããããã«ããŸããã
NVIDIA RTX 㧠Llama 3.2 3B ã¢ãã«
ã䜿çšããŠã¿ãŸãããã
æ°ãã VLM ãš SLM ã¢ãã«ã¯ãNVIDIA RTX ã·ã¹ãã ã«æ°ããªå¯èœæ§ããããããŸããå®èšŒããããã«ãããã¹ããšèŠèŠããŒã¿åŠç (äŸãã°ç»åãã°ã©ããè¡šãªã©) ãçµã¿åãããæ
å ±æ€çŽ¢ãšçæã匷åãããã«ãã¢ãŒãã«
æ€çŽ¢æ¡åŒµçæ (RAG)
ãã€ãã©ã€ã³ã®äŸãäœæããŸããã
Llama 3.2 SLM ãš VLM ã䜿çšã㊠NVIDIA RTX Linux ã·ã¹ãã äžã§ãã®ãã€ãã©ã€ã³ãå®è¡ããæ¹æ³ã«ã€ããŠã芧ãã ãã
ã30 GB 以äžã®ã¡ã¢ãªãæèŒãã NVIDIA RTX ãããã§ãã·ã§ãã« GPU ãæèŒãã Linux ã¯ãŒã¯ã¹ããŒã·ã§ã³ãå¿
èŠãšãªããŸãã
SLM ã¯ãã¢ããªã±ãŒã·ã§ã³ã«ç¹åãããã¡ã€ã³ã®ç²ŸåºŠã確ä¿ããªããã¡ã¢ãªãã¬ã€ãã³ã·ããã³æŒç®èŠä»¶ãåæžããããã«ãèžçããã«ãŒãã³ã°ãéååãªã©ã®æè¡ã䜿çšããŠããšããž ããã€ã¹ãžã®ããŒã«ã« ãããã€ã«åãããŠã«ã¹ã¿ãã€ãºãããŠããŸããæé©åããã GPU ã®æšè«ãš INT4/FP8 éååãåãã Jetson ã«ãLlama 3.2 1B ããã³ 3B SLM ãããŠã³ããŒãããŠãããã€ããã«ã¯ã
NVIDIA Jetson AI Lab ã® SLM ãã¥ãŒããªã¢ã«
ãåç
§ããŠãã ããã
ãã«ãã¢ãŒãã« ã¢ãã«ã¯ããããªåæããããã£ã¯ã¹ã«ãããç¬èªã®ããžã§ã³æ©èœã«ããããšããž ã¢ããªã±ãŒã·ã§ã³ã§æçšæ§ãé«ãŸã£ãŠããŠããŸãã
Llama 3.2 11B VLM ã¯ãçµã¿èŸŒã¿ã® Jetson AGX Orin 64 GB ã§ãµããŒããããŠããŸã
ã
ã³ãã¥ãã㣠AI ã¢ãã«ã®é²å
ãªãŒãã³ãœãŒã¹ã«ç©æ¥µçã«è²¢ç®ããŠãã NVIDIA ã§ã¯ããŠãŒã¶ãŒãçŽé¢ããæãå°é£ãªèª²é¡ã§æ¯æŽã§ããããã«ãã³ãã¥ãã㣠ãœãããŠã§ã¢ã®æé©åã«åãçµãã§ããŸãããªãŒãã³ãœãŒã¹ã® AI ã¢ãã«ã¯éææ§ãä¿é²ãããŠãŒã¶ãŒã¯ AI ã®å®å
šæ§ãšã¬ãžãªãšã³ã¹ã«é¢ããäœæ¥ãåºãå
±æããããšãå¯èœã«ãªããŸãã
Hugging Face ã®æšè«ãµãŒãã¹ (Inference-as-a-Service)
æ©èœã«ãããéçºè
ã¯
NVIDIA DGX Cloud
äžã§åäœãã NVIDIA NIM ãã€ã¯ããµãŒãã¹ã§æé©åããã Llama 3 ã³ã¬ã¯ã·ã§ã³ãªã©ã®
倧èŠæš¡èšèªã¢ãã« (LLM)
ãè¿
éã«ãããã€ããããšãã§ããŸãã
NVIDIA éçºè
ããã°ã©ã
ãéããŠãç 究ãéçºããã¹ãçšã® NIM ãžã®ç¡æã¢ã¯ã»ã¹ãå©çšã§ããŸãã
NVIDIA NIM
ã
NVIDIA TensorRT-LLM
ã
NVIDIA TensorRT
ã
NVIDIA Triton
ãªã©ã®NVIDIA AI æšè«ãã©ãããã©ãŒã ãã
LoRA
ã§ãã¥ãŒãã³ã°ããææ°ã® LLM ã®é«éåãªã©ãã©ã®ããã«æå
端ã®æè¡ã䜿çšããŠããã®ãã«ã€ããŠã¯è©³çŽ°ãæ¯é調ã¹ãŠã¿ãŠãã ããã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
LLM ã¯ã©ã¹ã¿ãŒ ã¢ãŒããã¯ãã£ã®éåç: äžçæ倧èŠæš¡ã®ãããã€ã«åããã¹ã±ãŒãªã³ã° (Supermicro ã«ããè¬æŒ)
NGC ã³ã³ãããŒ:
Llama-3-Swallow-70B-Instruct-v0.1
NGC ã³ã³ãããŒ:
Llama-3.1-405b-instruct
SDK:
Llama3 8B Instruct NIM |
https://developer.nvidia.com/blog/advancing-the-accuracy-efficiency-frontier-with-llama-3-1-nemotron-51b/ | Advancing the Accuracy-Efficiency Frontier with Llama-3.1-Nemotron-51B | Today, NVIDIA released a unique language model that delivers an unmatched accuracy-efficiency performance. Llama 3.1-Nemotron-51B, derived from Metaâs Llama-3.1-70B, uses a novel neural architecture search (NAS) approach that results in a highly accurate and efficient model.
The model fits on a single NVIDIA H100 GPU at high workloads, making it much more accessible and affordable. The excellent accuracy-efficiency sweet spot exhibited by the new model stems from changes to the modelâs architecture that lead to a significantly lower memory footprint, reduced memory bandwidth, and reduced FLOPs while maintaining excellent accuracy. We demonstrate that this approach can be generalized by creating another smaller and faster variant from the reference model.
In July 2024, Meta released Llama-3.1-70B, a leading state-of-the-art large language model (LLM). Today we announce
Llama 3.1-Nemotron-51B-Instruct
, developed using NAS and knowledge distillation derived from the reference model, Llama-3.1-70B.
Superior throughput and workload efficiency
The Nemotron model yields 2.2x faster inference compared to the reference model while maintaining nearly the same accuracy. The model opens a new set of opportunities with a reduced memory footprint, which enables running 4x larger workloads on a single GPU during inference.
| Model | MT Bench (accuracy) | MMLU (accuracy) | Text generation 128/1024 (efficiency) | Summarization/RAG 2048/128 (efficiency) |
|---|---|---|---|---|
| Llama-3.1-Nemotron-51B-Instruct | 8.99 | 80.2% | 6472 | 653 |
| Llama 3.1-70B-Instruct | 8.93 | 81.66% | 2975 | 339 |
| Llama 3.1-70B-Instruct (single GPU) | - | - | 1274 | 301 |
| Llama 3-70B | 8.94 | 80.17% | 2975 | 339 |
Table 1. Overview of the Llama-3.1-Nemotron-51B-Instruct accuracy and efficiency.
Note: Speed is reported in tokens per second per GPU, measured on machines equipped with 8x NVIDIA H100 SXM GPUs, with FP8 quantization using
TRT-LLM
as the runtime engine. Each model was run with the optimal number of GPUs through tensor parallelism (unless otherwise stated). The numbers in brackets show the input/output sequence lengths.
We discuss the detailed performance metrics later in this post.
Optimized accuracy per dollar
Foundation models display incredible quality in solving complex tasks: reasoning, summarization, and more. However, a major challenge in the adoption of top models is their inference cost.
As the field of generative AI evolves, the balance between accuracy and efficiency (directly impacting cost) will become the decisive factor in model selection. Moreover, the capability to run a model on a single GPU significantly streamlines its deployment, opening opportunities for new applications to run anywhere, from edge systems to data centers to the cloud, as well as facilitating serving multiple models via Kubernetes and
NIM blueprints
.
Consequently, we engineered Llama 3.1-Nemotron-51B-Instruct to achieve this optimal tradeoff (Figure 1). Throughput is inversely proportional to price, so the best tradeoff is obtained by models on the efficient frontier displayed in the chart. Figure 1 shows that the model pushes beyond the current efficient frontier, making it the model that provides the best accuracy per dollar.
Figure 1. Accuracy vs. Throughput performance of Llama-3.1-Nemotron-51B compared to frontier models. Throughput was measured through NIM with concurrency 25 (serving throughput).
The model quality is defined as the weighted average of MT-Bench and MMLU (10*MT-Bench + MMLU)/2, plotted against model throughput per single NVIDIA H100 80GB GPU. Gray dots represent state-of-the-art models, while the dashed line represents the "efficient frontier".
Simplifying inference with NVIDIA NIM
The Nemotron model is optimized with TensorRT-LLM engines for higher inference performance and packaged as an NVIDIA NIM microservice to streamline and accelerate the deployment of generative AI models across NVIDIA accelerated infrastructure anywhere, including cloud, data center, and workstations.
NIM uses inference optimization engines, industry-standard APIs, and prebuilt containers to provide high-throughput AI inference that scales with demand.
Try out
Llama-3.1-Nemotron-51B NIM microservice
through the API from
ai.nvidia.com
with free NVIDIA credits.
Building the model with NAS
Inference and hardware-aware methods for designing neural architectures have been successfully used in many domains. However, LLMs are still constructed as repeated identical blocks, with little regard for inference cost overheads incurred by this simplification. To tackle these challenges, we developed efficient NAS technology and training methods that can be used to create non-standard transformer models designed for efficient inference on specific GPUs.
Our technology can select neural architectures that optimize various constraints. The range includes enormous design spaces that include a zoo of non-standard transformer models using alternative attention and FFN blocks of varying efficiency degrees, up to a complete block elimination in the extreme case.
We then use our block-distillation (Figure 2) framework to train all these block variants for all layers of a (large) parent LLM in parallel. In a basic version of block-distillation, training data is passed through the reference model (also known as a teacher).
For each block, its input is taken from the teacher and injected into the matching block of the student. The outputs of the teacher and student for the block are compared and the student block is trained so that the student block mimics the functionality of the teacher block. A more advanced scenario where a single student block mimics multiple teacher blocks is depicted on the right side in Figure 2.
Figure 2. Block distillation where blue reference model blocks are multiple variants for the yellow student models that mimic the block-wise teacher functionality
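The sketch below shows a minimal version of that block-wise training signal: the teacher block's input is fed to both blocks and the student is trained to reproduce the teacher's output. The toy module shapes and the plain MSE objective are simplifying assumptions for illustration, not the exact loss or block variants used for Nemotron.

```python
# Toy block distillation: a cheaper student block learns to mimic one teacher
# block from the teacher's own activations. Shapes and loss are illustrative.
import torch
import torch.nn as nn

hidden = 512
teacher_block = nn.TransformerEncoderLayer(d_model=hidden, nhead=8,
                                           dim_feedforward=4 * hidden).eval()
student_block = nn.Sequential(                       # e.g. an FFN-only replacement
    nn.Linear(hidden, hidden), nn.GELU(), nn.Linear(hidden, hidden))

opt = torch.optim.AdamW(student_block.parameters(), lr=1e-4)
for step in range(100):
    x = torch.randn(8, 16, hidden)                   # stand-in teacher activations
    with torch.no_grad():
        target = teacher_block(x)                    # what the teacher block produces
    loss = nn.functional.mse_loss(student_block(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```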
Next, we use our Puzzle algorithm to efficiently score each alternative replacement puzzle piece and search our enormous design space for the most accurate models, while adhering to a set of inference constraints, such as memory size and required throughput.
Finally, by using knowledge distillation (KD) loss for both block scoring and training, we demonstrate the potential to narrow the accuracy gap between our model and the reference model using a much more efficient architecture with a tiny fraction of the reference model training costs. Using our methods on Llama-3.1-70B as the reference model, we built Llama-3.1-Nemotron-51B-Instruct, a 51B model that breaks the efficient frontier of LLMs on a single NVIDIA H100 GPU (Figure 1).
The Llama-3.1-Nemotron-51B-Instruct architecture is unique in its irregular block structure with many layers in which the attention and FFN are reduced or pruned, resulting in better utilization of H100 and highlighting the importance of optimizing LLMs for inference. Figure 3 schematically depicts the irregular structure of the resulting architecture and highlights the resulting compute saving, which amounts to the green area in the Figure.
Figure 3. Runtime of Puzzle chosen blocks (layers) for attention layers (blue) and FFN layers (red) across the 80 layers of the reference model. Green areas correspond to overall runtime savings.
Our innovative techniques enable us to develop models that redefine the efficient frontier of LLMs. Crucially, we can cost-effectively design multiple models from a single reference model, each optimized for specific hardware and inference scenarios. This capability empowers us to maintain best-in-class performance for LLM inference across our current and future hardware platforms.
Detailed results
Here are the model accuracy and performance metrics for our model.
Model accuracy
Table 2 lists all the benchmarks that we evaluated, comparing our model and the reference model Llama-3.1-70B. The
Accuracy preserved
column is the ratio between our model's score and that of the teacher.
| Benchmark | Llama-3.1-70B-Instruct | Llama-3.1-Nemotron-51B-Instruct | Accuracy preserved |
|---|---|---|---|
| winogrande | 85.08% | 84.53% | 99.35% |
| arc_challenge | 70.39% | 69.20% | 98.30% |
| MMLU | 81.66% | 80.20% | 98.21% |
| hellaswag | 86.44% | 85.58% | 99.01% |
| gsm8k | 92.04% | 91.43% | 99.34% |
| truthfulqa | 59.86% | 58.63% | 97.94% |
| xlsum_english | 33.86% | 31.61% | 93.36% |
| MMLU Chat | 81.76% | 80.58% | 98.55% |
| gsm8k Chat | 81.58% | 81.88% | 100.37% |
| Instruct HumanEval (n=20) | 75.85% | 73.84% | 97.35% |
| MT Bench | 8.93 | 8.99 | 100.67% |
Table 2. Accuracy comparison of the Nemotron model to the Llama-3.1-70B-Instruct model across several industry benchmarks
Performance
Table 3 shows the number of tokens per second per GPU (NVIDIA H100 80-GB GPU). You can see that for a range of relevant scenarios, short and long inputs as well as outputs, our model doubles the throughput of the teacher model, making it cost-effective across multiple use cases.
TPX describes the number of GPUs on which the process runs in parallel. We also list the performance of Llama 3.1-70B on a single GPU to demonstrate the value of our model in such a setting.
| Scenario | Input/Output Sequence Length | Llama-3.1-Nemotron-Instruct | Llama-3.1-70B-Instruct | Ratio | Llama (TP1) |
|---|---|---|---|---|---|
| Chatbot | 128/128 | 5478 (TP1) | 2645 (TP1) | 2.07 | 2645 |
| Text generation | 128/1024 | 6472 (TP1) | 2975 (TP4) | 2.17 | 1274 |
| Long text generation | 128/2048 | 4910 (TP2) | 2786 (TP4) | 1.76 | 646 |
| System 2 reasoning | 128/4096 | 3855 (TP2) | 1828 (TP4) | 2.11 | 313 |
| Summarization/RAG | 2048/128 | 653 (TP1) | 339 (TP4) | 1.92 | 300 |
| Stress test 1 | 2048/2048 | 2622 (TP2) | 1336 (TP4) | 1.96 | 319 |
Table 3. Throughput comparison of the number of tokens generated by the models for popular use cases. All numbers are in tokens per second per GPU.
The main factor in determining the cost of running a model is
throughput
, the total number of tokens that the system can generate in one second. However, in some scenarios (for example, chatbots), the rate at which a single end user receives the response from the model is important for the user experience. This is quantified by the tokens per second per user, termed the user-side throughput.
Figure 4 shows this user-side throughput plotted against the throughput at different batch sizes. As seen in all batch sizes, our model is superior to Llama-3.1-70B.
Figure 4. Server throughput vs. user-side throughput, plotted at different batch sizes for the Nemotron model and for Llama-3.1-70B
Tailoring LLMs for diverse needs
The NAS approach offers you flexibility in selecting the optimal balance between accuracy and efficiency. To demonstrate this versatility, we created another variant from the same reference model, this time prioritizing speed and cost. Llama-3.1-Nemotron-40B-Instruct was developed using the same methodology but with a modified speed requirement during the puzzle phase.
This model achieves a 3.2x speed increase compared to the parent model, with a moderate decrease in accuracy. Table 4 shows competitive performance metrics.
| Model | MT Bench (accuracy) | MMLU (accuracy) | Text generation 128/1024 (speed) | Summarization/RAG 2048/128 (speed) |
|---|---|---|---|---|
| Llama-3.1-Nemotron-40B-Instruct | 8.69 | 77.10% | 9568 | 862 |
| Llama-3.1-Nemotron-51B-Instruct | 8.99 | 80.20% | 6472 | 653 |
| Llama 3.1-70B-Instruct | 8.93 | 81.72% | 2975 | 339 |
Table 4. Overview of the Llama-3.1-Nemotron-40B-Instruct accuracy and efficiency
Summary
Llama 3.1-Nemotron-51B-Instruct
provides a new set of opportunities for users and companies that want to use highly accurate foundation models while keeping costs under control. By providing the best tradeoff between accuracy and efficiency, we believe the model is an attractive option for builders. Moreover, these results demonstrate the effectiveness of the NAS approach, and we intend to extend the method to other models. | https://developer.nvidia.com/ja-jp/blog/advancing-the-accuracy-efficiency-frontier-with-llama-3-1-nemotron-51b/ | Llama-3.1-Nemotron-51B ã«ãã粟床ãšå¹çã®åé² | Reading Time:
3
minutes
æ¬æ¥ãNVIDIA ã¯ãæ¯é¡ã®ãªã粟床ãšå¹çãå®çŸããç¬èªã®èšèªã¢ãã«ãçºè¡šããŸããã Llama 3.1-Nemotron-51B ã¯ã Meta ã® Llama-3.1-70B ã®æŽŸçã¢ãã«ã§ãããæ°ãã Neural Architecture Search (NAS) ã¢ãããŒãã«ãã£ãŠãé«ç²ŸåºŠãã€å¹ççãªã¢ãã«ãšãªã£ãŠããŸãã
ãã®ã¢ãã«ã¯é«è² è·ã®ã¯ãŒã¯ããŒãã§ãã²ãšã€ã® NVIDIA H100 GPU ã«åãŸããããããå©çšããããããã€äŸ¡æ Œãæé ãªã¢ãã«ãšãªã£ãŠããŸããã¢ãã«ã®ã¢ãŒããã¯ãã£ãå€æŽããããšã§ããã®ã¢ãã«ã¯ç²ŸåºŠãšå¹çæ§ã®åªãããã©ã³ã¹ãä¿ã£ãŠãããé«ã粟床ãç¶æããªãããã¡ã¢ãªäœ¿çšéãã¡ã¢ãªåž¯åå¹
ãFLOPs ã倧å¹
ã«åæžãããŠããŸãããã®ã¢ãããŒãã¯ãããã«å°åã§é«éãªå¥ã¢ãã«ããªãã¡ã¬ã³ã¹ ã¢ãã«ããäœæããããšã§ãæ±çšçãªææ³ã§ããããšãå®èšŒããŠããŸãã
2024 幎 7 æãMeta ã¯å
é²çãªå€§èŠæš¡èšèªã¢ãã« (LLM) ã§ãã Llama-3.1-70B ããªãªãŒã¹ããŸãããæ¬æ¥ãNVIDIA ã¯
Llama 3.1-Nemotron-51B-Instruct
ãçºè¡šããŸããããã¯ãªãã¡ã¬ã³ã¹ ã¢ãã«ã® Llama-3.1-70B ãããšã«NAS ãšç¥èèžçã䜿çšããããšã§éçºãããŸããã
åªããã¹ã«ãŒããããšã¯ãŒã¯ããŒãã®å¹ç
Nemotron ã¢ãã«ã¯ãã»ãŒåã粟床ãç¶æããªããããªãã¡ã¬ã³ã¹ ã¢ãã«ãšæ¯èŒã㊠2.2 åé«éãªæšè«ãå¯èœã§ãããã®ã¢ãã«ã¯ãæšè«æã«ã²ãšã€ã® GPU 㧠4 åã®ã¯ãŒã¯ããŒããå®è¡å¯èœãªã»ã©ã¡ã¢ãªäœ¿çšéãåæžããŠãããããã«ãã£ãŠæ°ããªçšéãå¯èœæ§ãæãŸããŸãã
粟床
å¹çæ§
MT Bench
MMLU
ããã¹ãçæ (128/1024)
èŠçŽ / RAG (2048/128)
Llama-3.1- Nemotron-51B- Instruct
8.99
80.2%
6472
653
Llama 3.1-70B- Instruct
8.93
81.66%
2975
339
Llama 3.1-70B- Instruct (GPUã¯ïŒåºã®ã¿)
â
â
1274
301
Llama 3-70B
8.94
80.17%
2975
339
è¡š 1. Llama-3.1-Nemotron-51B-Instruct ã®ç²ŸåºŠãšå¹çæ§ã®æŠèŠ
泚: é床㯠GPU 1 åºåœããã®ç§éããŒã¯ã³æ°ã§å ±åãããŠããŸãã8 åºã® NVIDIA H100 SXM GPU ãæèŒãããã·ã³ã§æž¬å®ã
TRT-LLM
ãã©ã³ã¿ã€ã ãšã³ãžã³ãšããŠäœ¿çšããFP8 éååãé©çšãåã¢ãã«ã«å¯ŸããŠããã³ãœã«äžŠåå (Tensor Parallelism) ã«ããæé©ãªæ°ã® GPU ãäœ¿çš (å¥éèšèŒã®ãªãå Žå)ãæ¬åŒ§å
ã®æ°å㯠(å
¥åã·ãŒã±ã³ã¹ã®é·ã/åºåã·ãŒã±ã³ã¹ã®é·ã) ã瀺ããŠããŸãã
詳现ãªããã©ãŒãã³ã¹ææšã«ã€ããŠã¯ããã®èšäºã§åŸè¿°ããŸãã
ã³ã¹ãåœããã®ç²ŸåºŠãæé©å
åºç€ã¢ãã«ã¯ãæšè«ãèŠçŽãªã©ã®è€éãªã¿ã¹ã¯ã解決ããéã«ãåè¶ããèœåã瀺ããŠããŸããããããã©ã®äžäœã¢ãã«ã䜿ãããéžæããéã«å€§ããªèª²é¡ãšãªã£ãŠããã®ããæšè«ã³ã¹ãã§ãã
çæ AI åéã®é²æ©ã«äŒŽãã粟床ãšå¹çæ§(çŽæ¥çã«ã³ã¹ãã«åœ±é¿)ã®ãã©ã³ã¹ããã¢ãã«ãéžæããéã®æ±ºå®çãªèŠå ãšãªãã§ããããããã«ãïŒåºã® GPU ã§ã¢ãã«ãå®è¡ã§ããããšã§ããããã€ã倧å¹
ã«ç°¡çŽ åããããšããž ã·ã¹ãã ããããŒã¿ ã»ã³ã¿ãŒãã¯ã©ãŠãã«è³ããŸã§ã©ãã§ãæ°ããªã¢ããªã±ãŒã·ã§ã³ãå®è¡ããæ©äŒãéæãããŸããå ããŠãKubernetes ãš
NVIDIA Blueprint
ã«ããè€æ°ã®ã¢ãã«ã®æäŸã容æã«ãªããŸãã
ããã§ãæè¯ã®ãã¬ãŒããªããéæããããã« Llama 3.1-Nemotron-51B-Instruct ãäœæããŸãã (å³ 1 )ãã¹ã«ãŒãããã¯äŸ¡æ Œã«åæ¯äŸãããããæè¯ã®ãã¬ãŒããªãã¯ã°ã©ãã«ç€ºãããŠãã âEfficient frontierâ (å¹ççã§ããããšã瀺ãå¢çç·) äžã®ã¢ãã«ãå®çŸããŠããŸããå³ 1 ã¯ãä»åã®ã¢ãã«ã§ãã Llama 3.1-Nemotron-51B-Instruct ãçŸåšã® Efficient frontier ãããã«æŒãäžããã³ã¹ãåœããã®ç²ŸåºŠãæãé«ãã¢ãã«ã§ããããšã瀺ããŠããŸãã
å³ 1. Efficient frontieräžã®ã¢ãã«ãšæ¯èŒãã Llama-3.1-Nemotron-51B ã®ç²ŸåºŠãšã¹ã«ãŒãããã®ããã©ãŒãã³ã¹ãã¹ã«ãŒãããã¯ãNIM ã§åæå®è¡æ° 25 (ãµãŒãã³ã° ã¹ã«ãŒããã) ãçšããŠæž¬å®ã
ã¢ãã«ã®å質ã¯ãMT-Bench ãš MMLU (10*MT-Bench + MMLU)/2 ã®å éå¹³åãšããŠå®çŸ©ãããNVIDIA H100 80GB GPU 1 åºåœããã®ã¢ãã« ã¹ã«ãŒããããšæ¯èŒããŠãããããããŠããŸããã°ã¬ãŒã®ç¹ã¯æå
端ã®ã¢ãã«ãè¡šããçŽç·ã¯ãEfficient frontierããè¡šããŠããŸãã
NVIDIA NIM ã«ããæšè«ã®ç°¡çŽ å
Nemotron ã¢ãã«ã¯ãTensorRT-LLM ãšã³ãžã³ã§æé©åãããããé«ãæšè«æ§èœãçºæ®ããŸãããŸã NVIDIA NIM ãã€ã¯ããµãŒãã¹ãšããŠããã±ãŒãžåãããŠãããã¯ã©ãŠããããŒã¿ ã»ã³ã¿ãŒãã¯ãŒã¯ã¹ããŒã·ã§ã³ãªã©ãNVIDIA ã«ããé«éåãããã€ã³ãã©å
šäœã§çæ AI ã¢ãã«ã®ãããã€ãå¹çåããã³å éããŸãã
NIM ã¯ãæšè«æé©åãšã³ãžã³ãæ¥çæšæºã® APIãããã³æ§ç¯æžã¿ã®ã³ã³ããã䜿çšããŠãéèŠã«å¿ããŠã¹ã±ãŒã«ã§ããé«ã¹ã«ãŒããããªAIã®æšè«ãå¯èœãšããŠããŸãã
Llama-3.1-Nemotron-51B ã® NIM ãã€ã¯ããµãŒãã¹
ãã
ai.nvidia.com
ã® API ãéããŠãç¡æã® NVIDIA ã¯ã¬ãžããã§æ¯éãè©Šããã ããã
NAS ã«ããã¢ãã«ã®æ§ç¯
æšè«ããã³ããŒããŠã§ã¢ãèæ
®ãããã¥ãŒã©ã« ãããã¯ãŒã¯ã®ã¢ãŒããã¯ãã£ãæ§ç¯ããææ³ã¯ãå€ãã®åéã§æåãåããŠããŸãããããããLLM ã¯äŸç¶ãšããŠåäžãããã¯ãç¹°ãè¿ã䜿çšããŠæ§æãããŠããããã®ç°¡çŽ åã«ããçºçããæšè«ã³ã¹ãã®ãªãŒããŒãããã¯ã»ãšãã©èæ
®ãããŠããŸããããããã®èª²é¡ã«åãçµãããã«ãNVIDIA ã¯ãç¹å®ã® GPU äžã§å¹ççãªæšè«ãå¯èœãª Transformer ã¢ãã«ãäœæãã NAS æè¡ãšåŠç¿æ¹æ³ãéçºããŸããã
NVIDIA ã®æè¡ã§ã¯ãå€æ§ãªå¶çŽãæé©åãããã¥ãŒã©ã« ãããã¯ãŒã¯ã®ã¢ãŒããã¯ãã£ãéžæããããšãã§ããŸããã¢ãŒããã¯ãã£ã®æ¢çŽ¢ç¯å²ã¯èšå€§ã§ãããããŸããŸãª Attention ã FFN ãããã¯ã極端ãªã±ãŒã¹ã ãšãããã¯ã®å®å
šãªæé€ãŸã§å«ãŸããŸãã
次ã«ããããã¯èžç (Block-distillation) ãã¬ãŒã ã¯ãŒã¯ (å³ 2) ã䜿çšããŠãNAS ã«ãã£ãŠèŠã€ãããããã¯ãã¡ãåŠç¿ããŸãããããã¯èžçã§ã¯ãçåŸ (Student) ã¢ãã«ã®åãããã¯ãæåž« (Teacher) ã¢ãã« ã®å
šã¬ã€ã€ãŒã«å¯ŸããŠäžŠåã§åŠç¿ããŸããåºæ¬çãªãããã¯èžçã§ã¯ãåŠç¿ããŒã¿ã¯ãªãã¡ã¬ã³ã¹ ã¢ãã« (ïŒæåž«ã¢ãã«) ãéããŠæž¡ãããŸãã
çåŸã¢ãã«ã®åãããã¯ã¯æåž«ã¢ãã«ã®è©²åœãããããã¯ããå
¥åãåãåããŸããçåŸã¢ãã«ãšæåž«ã¢ãã«ã®ãããã¯ã®åºåãæ¯èŒãããçåŸã¢ãã«ã®ãããã¯ã¯æåž«ã¢ãã«ã®ãããã¯ã®æ©èœãæš¡å£ããããã«åŠç¿ãããŸãããŸããå³ 2 ã®å³åŽã®ããã«ãçåŸã¢ãã«ã® 1 ã€ã®ãããã¯ãæåž«ã¢ãã«ã®è€æ°ã®ãããã¯ãæš¡å£ããããã«åŠç¿ãããããçºå±çãªã·ããªãªãèããããŸãã
å³ 2. ãããã¯èžçã§ã¯ãéè²ã®ãªãã¡ã¬ã³ã¹ ã¢ãã« ã®ãããã¯ãã¡ãçåŸã¢ãã«ã®ãããã¯ã«ãšã£ãŠã®è€æ°ã®ããªãšãŒã·ã§ã³ãšããŠæ±ããçåŸã¢ãã«ã®ãããã¯ã¯æåž«ã¢ãã«ã®æ©èœããããã¯åäœã§æš¡å£ã
次ã«ãNVIDIA ã® Puzzle ã¢ã«ãŽãªãºã ã䜿çšããããšã§ãã¡ã¢ãª ãµã€ãºãå¿
èŠãªã¹ã«ãŒããããªã©ã®æšè«ã«ãããå¶çŽãéµå®ããªããã代æ¿ã®äº€æããºã« ããŒã¹ãå¹ççã«ã¹ã³ã¢ä»ãããæãæ£ç¢ºãªã¢ãã«ãèšå€§ãªèšèšã¹ããŒã¹ããæ€çŽ¢ããŸãã
æåŸã«ããããã¯ã®ã¹ã³ã¢ãªã³ã°ãšãã¬ãŒãã³ã°ã®äž¡æ¹ã«ç¥èèžç (KD) æ倱ã䜿çšããããšã«ããããªãã¡ã¬ã³ã¹ ã¢ãã«ã®åŠç¿ã³ã¹ããšæ¯ã¹ãŠããäžéšã®ã³ã¹ãã§ãã¯ããã«å¹ççãªã¢ãŒããã¯ãã£ããªãã¡ã¬ã³ã¹ ã¢ãã«ãšã®ç²ŸåºŠã®ã®ã£ãããçž®ããå¯èœæ§ã瀺ããŸãããLlama-3.1-70B ããªãã¡ã¬ã³ã¹ ã¢ãã«ãšã㊠NVIDIA ã®æ¹æ³ã䜿çšããŠãã²ãšã€ã® NVIDIA H100 GPU ã«åãŸãLLM ã®âEfficient frontierâãäžåã Llama-3.1-Nemotron-51B-Instruct ïŒãã©ã¡ãŒã¿ãŒæ°: 51BïŒãæ§ç¯ããŸãã (å³ 1)ã
Llama-3.1-Nemotron-51B-Instruct ã¢ãŒããã¯ãã£ã¯ãåæžãŸãã¯æåããããã¢ãã³ã·ã§ã³å±€ãš FFN å±€ãå€ãå«ãäžèŠåãªãããã¯æ§é ãç¹åŸŽã§ããã®çµæ H100 ã®å©çšçãåäžãããæšè«ã®ããã« LLM ãæé©åããéèŠæ§ã瀺ãããŸãããå³ 3 ã¯ãçµæãšããŠçããã¢ãŒããã¯ãã£ã®äžèŠåãªæ§é ãæŠç¥çã«ç€ºããŠããŸããå³äžã®ç·è²éšåã¯èšç®åŠçãã©ãã»ã©åæžããããã匷調ããŠç€ºããŠããŸãã
å³ 3. ãªãã¡ã¬ã³ã¹ ã¢ãã«ã® 80 å±€ã«ãããã¢ãã³ã·ã§ã³ ã¬ã€ã€ãŒ (é) ãš FFN ã¬ã€ã€ãŒ (èµ€) ã«ãããã Puzzle ã¢ã«ãŽãªãºã ã«ãã£ãŠéžæãããããã㯠(ãŸãã¯ã¬ã€ã€ãŒ) ã®ã©ã³ã¿ã€ã ãå³ã®ç·è²ã®éšåã¯å
šäœçãªã©ã³ã¿ã€ã ã®åæžã«çžåœã
NVIDIA ã®é©æ°çãªæè¡ã«ãããLLM ã® âEfficient frontierâ ãåå®çŸ©ããã¢ãã«ã®éçºãå¯èœã«ãªããŸããéèŠãªã®ã¯ãã²ãšã€ã®ãªãã¡ã¬ã³ã¹ ã¢ãã«ããè€æ°ã®ã¢ãã«ãè²»çšå¯Ÿå¹æã«åªããæ¹æ³ã§èšèšãããã®çµæãšããŠåã¢ãã«ãç¹å®ã®ããŒããŠã§ã¢ããã³æšè«ã®ã·ããªãªã«æé©åã§ããããšã§ãããã®æ©èœã«ãããçŸåšããã³å°æ¥ã®ããŒããŠã§ã¢ ãã©ãããã©ãŒã å
šäœã§ãåãèŠæš¡ã® LLM ãã¡ã®äžã§æé«ã®æšè«ããã©ãŒãã³ã¹ãç¶æããããšãã§ããŸãã
詳现ãªçµæ
NVIDIA ã®ã¢ãã«ã®ç²ŸåºŠãšããã©ãŒãã³ã¹ã¯ä»¥äžã®éãã§ãã
粟床
è¡š 2 ã§ã¯ãNVIDIA ã®ã¢ãã«ãšãªãã¡ã¬ã³ã¹ ã¢ãã«ã§ãã Llama-3.1-70B ãæ¯èŒããè©äŸ¡ãããã¹ãŠã®ãã³ãããŒã¯ã«ã€ããŠèšèŒããŠããŸãã
ç¶æããã粟床ã®å²å
ã®åã¯ãNVIDIA ã¢ãã«ã®ã¹ã³ã¢ãšæåž«ã¢ãã«ã®ã¹ã³ã¢ãšã®æ¯çã§ãã
ãã³ãããŒã¯
Llama-3.1 70B-instruct
Llama-3.1-Nemotron-51B- Instruct
ç¶æããã粟床ã®å²å
winogrande
85.08%
84.53%
99.35%
arc_challenge
70.39%
69.20%
98.30%
MMLU
81.66%
80.20%
98.21%
hellaswag
86.44%
85.58%
99.01%
gsm8k
92.04%
91.43%
99.34%
truthfulqa
59.86%
58.63%
97.94%
xlsum_english
33.86%
31.61%
93.36%
MMLU Chat
81.76%
80.58%
98.55%
gsm8k Chat
81.58%
81.88%
100.37%
Instruct HumanEval (n=20)
75.85%
73.84%
97.35%
MT Bench
8.93
8.99
100.67%
è¡š 2. è€æ°ã®æ¥çæšæºã®ãã³ãããŒã¯ã«ããã Nemotron ã¢ãã«ãš Llama-3.1-70B-Instruct ã¢ãã«ã®ç²ŸåºŠæ¯èŒ
ããã©ãŒãã³ã¹
è¡š 3 ã¯ãGPU (NVIDIA H100 80GB GPU) 1 åºãããã®ããŒã¯ã³æ°ã瀺ããŠããŸããé¢é£ããã·ããªãªã®ç¯å²ãçãå
¥åãšé·ãå
¥åããã³åºåã«é¢ããŠãNVIDIA ã®ã¢ãã«ã¯æåž«ã¢ãã«ã®ã¹ã«ãŒãããã 2 åã«é«ããè€æ°ã®ãŠãŒã¹ ã±ãŒã¹ã§è²»çšå¯Ÿå¹æãé«ãããšãããããŸãã
TPX ã¯ãããã»ã¹ã䞊ååŠçãã GPU ã®æ°ãè¡šããŸãããŸãããã®ãããªèšå®ã«ããã NVIDIA ã®ã¢ãã«ã®äŸ¡å€ã瀺ããããåäžã® GPU äžã® Llama 3.1-70B ã®ããã©ãŒãã³ã¹ãèšèŒããŠããŸãã
ã·ããªãª
å
¥åºåã·ãŒã±ã³ã¹é·
Llama-3.1- Nemotron-Instruct
Llama-3.1-70B-Instruct
æ¯ç
Llama (TP1)
ãã£ããããã
128/128
5478 (TP1)
2645 (TP1)
2.07
2645
ããã¹ãçæ
128/1024
6472 (TP1)
2975 (TP4)
2.17
1274
é·ãããã¹ãçæ
128/2048
4910 (TP2)
2786 (TP4)
1.76
646
ã·ã¹ãã 2 ã®æšè«
128/4096
3855 (TP2)
1828 (TP4)
2.11
313
èŠçŽ / RAG
2048/128
653 (TP1)
339 (TP4)
1.92
300
ã¹ãã¬ã¹ ãã¹ã 1
2048/2048
2622 (TP2)
1336 (TP4)
1.96
319
è¡š 3. äžè¬çãªãŠãŒã¹ ã±ãŒã¹ã®ã¢ãã«ã«ããçæãããããŒã¯ã³æ°ã®ã¹ã«ãŒãããæ¯èŒããã¹ãŠã®æ°å€ã¯ãGPU ããšã® 1 ç§ãããã®ããŒã¯ã³æ°ã瀺ããŸãã
ã¢ãã«ã®å®è¡ã³ã¹ãã決å®ããäž»ãªèŠå ã¯ãã·ã¹ãã ã 1 ç§éã«çæã§ããããŒã¯ã³ã®åèšæ°ã§ããã¹ã«ãŒãããã§ããããããããã€ãã®ã·ããªãª (ãã£ããããããªã©) ã§ã¯ã1 人ã®ãšã³ããŠãŒã¶ãŒãã¢ãã«ããã®å¿çãåãåãé床ãšããã®ããŠãŒã¶ãŒäœéšã®èŠ³ç¹ããéèŠã«ãªããŸããããã¯ãŠãŒã¶ãŒããšã® 1 ç§éã®ããŒã¯ã³æ°ã§å®éåããããŠãŒã¶ãŒåŽã®ã¹ã«ãŒããã (User-side throughput) ãšåŒã°ããŸãã
å³ 4 ã¯ããã®ãŠãŒã¶ãŒåŽã®ã¹ã«ãŒãããããç°ãªãããã ãµã€ãºã®ã¹ã«ãŒãããã«å¯ŸããŠãããããããã®ã§ããå
šãŠã®ããã ãµã€ãºã§ç¢ºèªã§ããããã«ãNVIDIA ã®ã¢ãã«ã¯ Llama-3.1-70B ãããåªããŠããŸãã
å³ 4. Nemotron ã¢ãã«ãš Llama-3.1-70B ã®ç°ãªãããã ãµã€ãºã§ãããããããµãŒããŒã®ã¹ã«ãŒããããšãŠãŒã¶ãŒåŽã®ã¹ã«ãŒããã
å€æ§ãªããŒãºã«åããã LLM ã®ã«ã¹ã¿ãã€ãº
NAS ã¢ãããŒãã«ã¯ã粟床ãšå¹çæ§ã®æé©ãªãã©ã³ã¹ãéžæã§ããæè»æ§ããããŸãããã®æ±çšæ§ã瀺ããããåããªãã¡ã¬ã³ã¹ ã¢ãã«ããå¥ã®ã¢ãã«ãäœæããŸãããããã§ã¯é床ãšã³ã¹ããåªå
ããŸãããLlama-3.1-Nemotron-40B-Instruct ã¯åãææ³ãçšããŠéçºãããŸããããããºã«æ®µéã«ãããé床ã®èŠä»¶ãå€æŽãããŠããŸãã
ãã®ã¢ãã«ã¯ã芪ã¢ãã«ãšæ¯èŒã㊠3.2 åã®é床åäžãéæããŸããã粟床ã¯è¥å¹²äœäžããŠããŸããè¡š 4 ããããã®ã¢ãã«ãååãªããã©ãŒãã³ã¹ã瀺ããŠããããšãããããŸãã
粟床
é床
MT bench
MMLU
ããã¹ãçæ (128/1024)
èŠçŽ / RAG (2048/128)
Llama-3.1- Nemotron-40B-Instruct
8.69
77.10%
9568
862
Llama-3.1- Nemotron-51B-Instruct
8.99
80.20%
6472
653
Llama 3.1-70B-Instruct
8.93
81.72%
2975
339
è¡š 4. Llama-3.1-Nemotron-40B-Instruct ã®ç²ŸåºŠãšå¹çã®æŠèŠ
ãŸãšã
Llama 3.1-Nemotron-51B-Instruct
ã¯ãé«ç²ŸåºŠã®åºç€ã¢ãã«ã䜿ãã€ã€ãã³ã¹ããæããããŠãŒã¶ãŒãäŒæ¥ã«ãšã£ãŠãæ°ããªãã£ã³ã¹ãšãªããŸãããã®ã¢ãã«ã¯ç²ŸåºŠãšå¹çæ§ã®æé©ãªãã©ã³ã¹ãåããŠãããéçºè
ã«ãšã£ãŠé
åçãªéžæè¢ã«ãªããšæããŸããããã«ãããã§ç€ºããçµæ㯠NAS ã¢ãããŒãã®æå¹æ§ã瀺ããŠãããä»ã®ã¢ãã«ã«å¯ŸããŠããã®æ¹æ³ãæ¡åŒµããããšã«ãã€ãªãããŸãã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
å°éããŒã¿åŠçã¯ãŒã¯ãããŒã«å¯Ÿãã AI ã¢ã·ã¹ã¿ã³ãçµ±åãå
é§ããŠå®çŸ
GTC ã»ãã·ã§ã³:
倧èŠæš¡èšèªã¢ãã«ã®æé©å: LLama2 7B ã®ãã«ãŒãã³ã°ãšãã¡ã€ã³ãã¥ãŒãã³ã°ã«åããå®éšçã¢ãããŒã
NGC ã³ã³ãããŒ:
Llama-3.1-405b-instruct
NGC ã³ã³ãããŒ:
Llama-3-Swallow-70B-Instruct-v0.1
SDK:
Llama3 8B Instruct NIM |
https://developer.nvidia.com/blog/transforming-financial-analysis-with-nvidia-nim/ | Transforming Financial Analysis with NVIDIA NIM | In financial services, portfolio managers and research analysts diligently sift through vast amounts of data to gain a competitive edge in investments. Making informed decisions requires access to the most pertinent data and the ability to quickly synthesize and interpret that data.
Traditionally, sell-side analysts and fundamental portfolio managers have focused on a small subset of companies, meticulously examining financial statements, earning calls, and corporate filings. Systematically analyzing financial documents across a larger trading universe can uncover additional insights. Due to the technical and algorithmic difficulty of such tasks, systematic analysis of transcripts over a wide trading universe was, until recently, only accessible to sophisticated quant-trading firms.
The performance achieved on these tasks using traditional
natural language processing (NLP)
methods such as bag-of-words, sentiment dictionaries, and word statistics, often falls short when compared to the capabilities of
large language models (LLMs)
in financial NLP tasks. Besides financial applications, LLMs have demonstrated superior performance in domains like medical document understanding, news article summarization, and legal document retrieval.
By leveraging AI and NVIDIA technology, sell-side analysts, fundamental traders, and retail traders can significantly accelerate their research workflow, extract more nuanced insights from financial documents, and cover more companies and industries. By adopting these advanced AI tools, the financial services sector can enhance its data analysis capabilities, saving time and improving the accuracy of investment decisions. According to the NVIDIA
2024 State of AI in Financial Services
survey report, 37% of respondents are exploring generative AI and LLMs for report generation, synthesis, and investment research to reduce repetitive manual work.
In this post, we'll walk you through an end-to-end demo on how to build an AI assistant to extract insights from earnings call transcripts using
NVIDIA NIM
inference microservices to implement a
retrieval-augmented generation (RAG)
system. We'll highlight how leveraging advanced AI technologies can accelerate workflows, uncover hidden insights, and ultimately enhance decision-making processes in the financial services industry.
Analyzing earnings call transcripts with NIM microservices
Earnings calls in particular are a vital source for investors and analysts, providing a platform for companies to communicate important financial and business information. These calls offer insights into the industry, the company's products, competitors, and most importantly, its business prospects.
By analyzing earnings call transcripts, investors can glean valuable information about a company's future earnings and valuation. Earnings call transcripts have successfully been used to generate alpha for over two decades. For more details, see
Natural Language Processing - Part I: Primer
and
Natural Language Processing - Part II: Stock Selection
.
Step 1: The data
In this demo, we use transcripts from NASDAQ earnings calls from 2016 to 2020 for our analysis. This
Earnings Call Transcripts dataset
can be downloaded from Kaggle.
For our evaluation, we used a subset of 10 companies from which we then randomly selected 63 transcripts for manual annotation. For all transcripts, we answered the following set of questions:
What are the company's primary revenue streams and how have they changed over the past year?
What are the company's major cost components and how have they fluctuated in the reporting period?
What capital expenditures were made and how are these supporting the company's growth?
What dividends or stock buybacks were executed?
What significant risks are mentioned in the transcript?
This makes for a total of 315 question-answer pairs. All questions are answered using a structured
JSON format
. For example:
Question: What are the company's primary revenue streams and how have they changed over the past year?
Answer:
{
"Google Search and Other advertising": {
"year_on_year_change": "-10%",
"absolute_revenue": "21.3 billion",
"currency": "USD"
},
"YouTube advertising": {
"year_on_year_change": "6%",
"absolute_revenue": "3.8 billion",
"currency": "USD"
},
"Network advertising": {
"year_on_year_change": "-10%",
"absolute_revenue": "4.7 billion",
"currency": "USD"
},
"Google Cloud": {
"year_on_year_change": "43%",
"absolute_revenue": "3 billion",
"currency": "USD"
},
"Other revenues": {
"year_on_year_change": "26%",
"absolute_revenue": "5.1 billion",
"currency": "USD"
}
}
Using JSON enables evaluating model performance in a manner that does not rely on subjective language understanding methods, such as
LLM-as-a-judge
, which might introduce unwanted biases into the evaluation.
Step 2: NVIDIA NIM
This demo uses
NVIDIA NIM
, a set of microservices designed to speed up enterprise
generative AI
deployment. For more details, see
NVIDIA NIM Offers Optimized Inference Microservices for Deploying AI Models at Scale
. Supporting a wide range of AI models, including NVIDIA-optimized community and commercial partner models, NIM ensures seamless, scalable AI inferencing, on-premises or in the cloud, leveraging industry-standard APIs.
When ready for production, NIM microservices are deployed with a single command for easy integration into enterprise-grade AI applications using standard APIs and just a few lines of code. Built on robust foundations including inference engines like
NVIDIA TensorRT
, TensorRT-LLM, and PyTorch, NIM is engineered to facilitate seamless AI inferencing with best performance out-of-the-box based on the underlying hardware. Self-hosting models with NIM supports the protection of customer and enterprise data, which is a common requirement in RAG applications.
Step 3: Setting up on NVIDIA API catalog
NIM microservices can be accessed using the NVIDIA API catalog. All it takes to set up is registering an NVIDIA API key (from the
API catalog
, click Get API Key). For the purposes of this post, we'll store it in an environment variable:
export NVIDIA_API_KEY=YOUR_KEY
LangChain provides a package for
convenient NGC integration
. This tutorial will use endpoints to run embedding, reranking, and chat models with NIM microservices. To reproduce the code, you'll need to install the following Python dependencies:
langchain-nvidia-ai-endpoints==0.1.2
faiss-cpu==1.7.4
langchain==0.2.5
unstructured[all-docs]==0.11.2
Step 4: Building a RAG pipeline with NIM microservices
RAG is a method that enhances language models by combining retrieval of relevant documents from a large corpus with text generation.
The first step of RAG is to vectorize your collection of documents. This involves taking a series of documents, splitting them into smaller chunks, using an embedder model to turn each of these chunks into a neural network embedding (a vector), and storing them in a
vector database
.
We'll do this for each of the earnings call transcripts:
import os
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders import TextLoader
from langchain.vectorstores import FAISS
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings
# Initialise the embedder that converts text to vectors
transcript_embedder = NVIDIAEmbeddings(model='nvidia/nv-embed-v1',
truncate='END'
)
# The document we will be chunking and vectorizing
transcript_fp = "Transcripts/GOOGL/2020-Feb-03-GOOGL.txt"
raw_document = TextLoader(transcript_fp).load()
# Split the document into chunks of 3,000 characters with a 200-character overlap
text_splitter = RecursiveCharacterTextSplitter(chunk_size=3000,
chunk_overlap=200)
documents = text_splitter.split_documents(raw_document)
# Vectorise each chunk into a separate entry in the database
vectorstore = FAISS.from_documents(documents, transcript_embedder)
vector_store_path = "vector_db/google_transcript_2020_feb.pkl"
try:
os.remove(vector_store_path)
except OSError:
pass
vectorstore.save_local(vector_store_path)
Once the vectorized database is built, the simplest RAG flow for the earnings call transcripts is as follows:
A user inputs a
query.
For example, "What are the company's main revenue sources?"
The
embedder
model embeds the query into a vector and then searches through the vectorized database of the documents for the Top-K (Top-30, for example) most relevant chunks.
A
reranker
model, also known as a cross-encoder, then outputs a similarity score for each query-document pair. Additionally, metadata can also be used to help improve the accuracy of the reranking step. This score is used to reorder the Top-K documents retrieved by the embedder by relevance to the user query. Further filtering can then be applied, retaining only the Top-N (Top-10, for example) documents.
The Top-N most relevant documents are then passed onto an
LLM
alongside the user query. The retrieved documents are used as context to ground the modelâs answer.
Figure 2. A simplified RAG workflow that involves three main steps: embedding and retrieval, reranking, and context-grounded LLM answer generation
Note that modifications can be made to improve a model's answer accuracy, but for now we'll continue with the simplest robust approach.
Consider the following user query and desired JSON format:
question = "What are the company's primary revenue streams and how have they changed over the past year?"
json_template = """
{"revenue_streams": [
{
"name": "<Revenue Stream Name 1>",
"amount": <Current Year Revenue Amount 1>,
"currency": "<Currency 1>",
"percentage_change": <Change in Revenue Percentage 1>
},
{
"name": "<Revenue Stream Name 2>",
"amount": <Current Year Revenue Amount 2>,
"currency": "<Currency 2>",
"percentage_change": <Change in Revenue Percentage 2>
},
// Add more revenue streams as needed
]
}
"""
user_query = question + json_template
The JSON template will be used so that, further down the pipeline, the LLM knows to output its answer in valid JSON, rather than in plain text. As mentioned in Step 1, using JSON enables the automated evaluation of model answers in an objective manner. Note that this could be removed if a more conversational style is preferred.
To contextualize the user query, initialize the Embedder and the Reranker for the retrieval and ordering of the relevant documents:
from langchain_nvidia_ai_endpoints import NVIDIARerank
from langchain.retrievers.contextual_compression import ContextualCompressionRetriever
# How many retrieved documents to keep at each step
top_k_documents_retriever = 30
top_n_documents_reranker = 5
# Initialise retriever for vector database
retriever = vectorstore.as_retriever(search_type='similarity',
search_kwargs={'k': top_k_documents_retriever})
# Add a reranker to reorder documents by relevance to user query
reranker = NVIDIARerank(model="ai-rerank-qa-mistral-4b",
top_n=top_n_documents_reranker)
retriever = ContextualCompressionRetriever(base_compressor=reranker,
base_retriever=retriever)
# Retrieve documents, rerank them and pick top-N
retrieved_docs = retriever.invoke(user_query)
# Join all retrieved documents into a single string
context = ""
for doc in retrieved_docs:
context += doc.page_content + "\n\n"
Then, when the relevant documents are retrieved, they can be passed onto the LLM alongside the user query. We are using the
Llama 3 70B
NIM:
from langchain_nvidia_ai_endpoints import ChatNVIDIA
PROMPT_FORMAT = """
Given the following context:
####################
{context}
####################
Answer the following question:
{question}
using the following JSON structure:
{json_template}
For amounts don't forget to always state if it's in billions or millions and "N/A" if not present.
Only use information and JSON keys that are explicitly mentioned in the transcript.
If you don't have information for any of the keys use "N/A" as a value.
Answer only with JSON. Every key and value in the JSON should be a string.
"""
llm = ChatNVIDIA(model="ai-llama3-70b",
max_tokens=600,
temperature=0
)
llm_input = PROMPT_FORMAT.format(**{"context": context,
"question": question,
"json_template": json_template
})
answer = llm.invoke(llm_input)
print(answer.content)
Running this code will produce a JSON-structured answer to the user query. The code can now be easily modified to read in multiple transcripts and answer varying user queries.
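As a sketch of one such modification (reusing transcript_embedder, reranker, llm, PROMPT_FORMAT, question, and json_template from the snippets above; the second transcript path is a hypothetical placeholder), you could rebuild the vector store per transcript, query it, and parse each answer as JSON:
import json
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.retrievers.contextual_compression import ContextualCompressionRetriever

def build_retriever(transcript_fp):
    # Vectorize a single transcript and wrap the retriever with the reranker
    raw_document = TextLoader(transcript_fp).load()
    splitter = RecursiveCharacterTextSplitter(chunk_size=3000, chunk_overlap=200)
    documents = splitter.split_documents(raw_document)
    store = FAISS.from_documents(documents, transcript_embedder)
    base = store.as_retriever(search_type="similarity", search_kwargs={"k": 30})
    return ContextualCompressionRetriever(base_compressor=reranker, base_retriever=base)

def answer_question(transcript_fp, question, json_template):
    # Retrieve context from one transcript and ask the LLM for a JSON answer
    retrieved_docs = build_retriever(transcript_fp).invoke(question + json_template)
    context = "\n\n".join(doc.page_content for doc in retrieved_docs)
    llm_input = PROMPT_FORMAT.format(context=context, question=question,
                                     json_template=json_template)
    raw_answer = llm.invoke(llm_input).content
    try:
        return json.loads(raw_answer)
    except json.JSONDecodeError:
        return {"error": "model did not return valid JSON", "raw": raw_answer}

transcripts = ["Transcripts/GOOGL/2020-Feb-03-GOOGL.txt",
               "Transcripts/AAPL/2020-Jan-28-AAPL.txt"]  # second path is hypothetical
answers = {fp: answer_question(fp, question, json_template) for fp in transcripts}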
Step 5: Evaluation
To evaluate the performance of the retrieval step, use the annotated question-answer pairs previously described to compare the ground-truth JSON with the predicted JSON, key-by-key. Consider the following ground-truth example:
"Google Cloud": {
"year_on_year_change": "43%",
"absolute_revenue": "3 billion",
"currency": "N/A"
}
The prediction looks like this:
"Google Cloud": {
"year_on_year_change": "43%",
"absolute_revenue": "N/A",
"currency": "USD"
}
The three possible outcomes are:
True positive (TP)
: The ground truth and the prediction match (whether both contain the same extracted value or both are "N/A"). For the previous example, the prediction for
year_on_year_change
is TP.
False Positive (FP)
: The ground truth value is
"N/A"
. In other words, there is no value to extract, but the prediction hallucinates a value. For the previous example, the prediction for
currency
is FP.
False Negative (FN)
: There is a ground truth value to extract, however, the prediction fails to capture that value. For the previous example, the prediction for
absolute_revenue
is FN.
With these outcomes measured, next calculate the following three main metrics (a short scoring sketch follows the list):
Recall
= TP/ (TP + FN): Higher recall implies our model is returning more and more of the relevant results.
Precision
= TP / (TP + FP): Higher precision implies our model returns a higher ratio of relevant results versus irrelevant ones.
F1-score
= (2 * Precision * Recall) / (Precision + Recall): The F1-score is a harmonic mean of precision and recall.
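The following is a minimal sketch of this scoring, assuming the ground-truth and predicted answers are flat dictionaries of strings as in the examples above and using exact string equality (the next paragraph relaxes this with fuzzy matching). Whether matching "N/A" pairs count toward TP is a bookkeeping choice that may differ from the exact scoring used for the reported results.
def score_answers(gt, pred):
    # Count TP / FP / FN over the keys of the ground-truth answer,
    # then derive precision, recall, and F1
    tp = fp = fn = 0
    for key, gt_value in gt.items():
        pred_value = pred.get(key, "N/A")
        if gt_value == "N/A":
            if pred_value != "N/A":
                fp += 1      # hallucinated a value where none exists
        elif pred_value == gt_value:
            tp += 1          # correctly extracted value
        else:
            fn += 1          # missed or mangled an existing value
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# The "Google Cloud" example above: 1 TP, 1 FP, 1 FN
gt = {"year_on_year_change": "43%", "absolute_revenue": "3 billion", "currency": "N/A"}
pred = {"year_on_year_change": "43%", "absolute_revenue": "N/A", "currency": "USD"}
print(score_answers(gt, pred))  # precision 0.5, recall 0.5, f1 0.5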
A user might want to be partially flexible with the matching of non-numeric values when doing string comparisons for some of the attributes. For example, consider a question about revenue sources, where one of the ground-truth answers is "Data Centers" and the model outputs "Data Center". An exact-match evaluation would treat this as a mismatch. To achieve more robust evaluation in such cases, use fuzzy matching with Python's built-in
difflib
:
import difflib
def get_ratio_match(gt_string, pred_string):
if len(gt_string) < len(pred_string):
min_len = len(gt_string)
else:
min_len = len(pred_string)
matcher = difflib.SequenceMatcher(None, gt_string, pred_string, autojunk=False)
_, _, longest_match = matcher.find_longest_match(0, min_len, 0, min_len)
# Return the ratio of match with ground truth
return longest_match / min_len
For evaluation, consider any string attributes to be a match if their similarity ratio is above 90%.
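For example, with the function above, the plural mismatch from the revenue-sources example clears that threshold, while an unrelated string does not:
print(get_ratio_match("Data Centers", "Data Center"))  # 1.0, treated as a match
print(get_ratio_match("Data Centers", "Cloud"))        # 0.0, treated as a mismatch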
Table 1 presents results for two of the most-used open-source model families (Mistral AI Mixtral models and Meta Llama 3 models) on our manually annotated data. For both model families, there is noticeable performance deterioration when lowering the number of parameters. Visit the
NVIDIA API catalog
to experience these NIM microservices.
Method, F1, Precision, Recall
Llama 3 70B, 84.4%, 91.3%, 78.5%
Llama 3 8B, 75.8%, 85.2%, 68.2%
Mixtral 8x22B, 84.4%, 91.9%, 78.0%
Mixtral 8x7B, 62.2%, 80.2%, 50.7%
Table 1. Performance of Llama and Mixtral models on JSON-structured information extraction and question-answering from earning call transcripts
Mixtral-8x22B seems to have roughly equivalent performance to Llama 3 70B. However, for both model families, reducing the number of parameters does result in a significant decrease in performance, and the decrease is most pronounced for recall. This illustrates a frequent trade-off: better accuracy comes at the cost of larger hardware requirements.
In most cases, model accuracy can be improved without increasing the number of parameters, by fine-tuning either the Embedder, Reranker, or the LLM using domain-specific data (in this case, earning call transcripts).
The Embedder is the smallest and therefore the quickest and most cost-effective to fine-tune. For detailed instructions, refer to the
NVIDIA NeMo documentation
. Additionally,
NVIDIA NeMo
simplifies and enhances the efficiency of fine-tuning an effective version of the LLM.
Key implications for users
This demo is designed to extract insights from earnings call transcripts. By leveraging advanced AI technologies like NIM, itâs now possible to quickly and accurately retrieve information from earnings call transcripts. The AI product assists multiple categories of financial researchers, analysts, advisors, and fundamental portfolio managers during the most intensive processes of documentation and data analysis, enabling financial professionals to spend more time on strategic decision-making or with clients.
In the asset management sector, for example, portfolio managers can use the assistant to quickly synthesize insights from a vast number of earnings calls, improving investment strategies and outcomes. In the insurance industry, the AI assistant can analyze financial health and risk factors from company reports, enhancing underwriting and risk assessment processes. In fundamental and retail trading, the assistant can help with systematic information extraction to identify market trends and sentiment shifts, enabling the use of more detailed information for future trades.
Even in banking, it can be used to assess the financial stability of potential loan recipients by analyzing their earnings calls. Ultimately, this technology enhances efficiency, accuracy, and the ability to make data-driven decisions, giving users a competitive edge in their respective markets.
Visit the
NVIDIA API catalog
to see all the available NIM microservices and experiment with
LangChain's convenient integration
to see what works best for your own data. | https://developer.nvidia.com/ja-jp/blog/transforming-financial-analysis-with-nvidia-nim/ | NVIDIA NIM ã«ãã財ååæã®å€é© | Reading Time:
4
minutes
éèãµãŒãã¹ã§ã¯ãããŒããã©ãªãª ãããŒãžã£ãŒããªãµãŒã ã¢ããªã¹ããèšå€§ãªéã®ããŒã¿ã䞹念ã«ç²Ÿæ»ããæè³ã§ç«¶äºåãé«ããŠããŸããæ
å ±ã«åºã¥ããææ決å®ãè¡ãã«ã¯ãæãé¢é£æ§ã®é«ãããŒã¿ã«ã¢ã¯ã»ã¹ãããã®ããŒã¿ãè¿
éã«çµ±åããŠè§£éããèœåãå¿
èŠã§ãã
åŸæ¥ãã»ã«ãµã€ã ã¢ããªã¹ãããã¡ã³ãã¡ã³ã¿ã« ããŒããã©ãªãª ãããŒãžã£ãŒã¯ã財åè«žè¡šãåçå ±åãäŒæ¥æåºæžé¡ã綿å¯ã«èª¿ã¹ãäžéšã®äŒæ¥ã«çŠç¹ãåœãŠãŠããŸãããããåºç¯ãªååŒå¯Ÿè±¡ç¯å²ã«ããã£ãŠè²¡åææžãäœç³»çã«åæããããšã§ããããªãæŽå¯ãåŸãããšãã§ããŸãããã®ãããªã¿ã¹ã¯ã¯æè¡çããã³ã¢ã«ãŽãªãºã çã«é£ãããããåºç¯ãªååŒå¯Ÿè±¡ç¯å²ã«ããããã©ã³ã¹ã¯ãªããã®äœç³»çãªåæã¯ãæè¿ãŸã§ãé«åºŠãªã¯ãªã³ã ãã¬ãŒãã£ã³ã° (quant-trading) äŒç€Ÿã«ããã§ããŸããã§ããã
ãããã®ã¿ã¹ã¯ã§ãããã°ãªãã¯ãŒããææ
èŸæžãåèªçµ±èšãªã©ã®åŸæ¥ã®
èªç¶èšèªåŠç (NLP)
ææ³ã䜿çšããŠéæãããããã©ãŒãã³ã¹ã¯ãéè NLP ã¿ã¹ã¯ã«ããã
倧èŠæš¡èšèªã¢ãã« (LLM)
ã®æ©èœãšæ¯èŒãããšããã°ãã°äžååã§ããéèã¢ããªã±ãŒã·ã§ã³ä»¥å€ã«ããLLM ã¯å»çææžã®ç解ããã¥ãŒã¹èšäºã®èŠçŽãæ³çææžã®æ€çŽ¢ãªã©ã®åéã§åªããããã©ãŒãã³ã¹ãçºæ®ããŠããŸãã
AI ãš NVIDIA ã®ãã¯ãããžã掻çšããããšã§ãã»ã«ãµã€ã ã¢ããªã¹ãããã¡ã³ãã¡ã³ã¿ã« ãã¬ãŒããŒããªããŒã« ãã¬ãŒããŒã¯ãªãµãŒã ã¯ãŒã¯ãããŒã倧å¹
ã«å éããéèææžãããã埮åŠãªæŽå¯ãæœåºããããå€ãã®äŒæ¥ãæ¥çãã«ããŒã§ããŸãããããã®é«åºŠãª AI ããŒã«ãå°å
¥ããããšã§ãéèãµãŒãã¹éšéã¯ããŒã¿åææ©èœã匷åããæéãç¯çŽããæè³æ±ºå®ã®ç²ŸåºŠãåäžãããããšãã§ããŸãã
NVIDIA ã® 2024 幎éèãµãŒãã¹
ã«ããã AI ã®çŸç¶èª¿æ»ã¬ããŒãã«ãããšãåçè
ã® 37% (äžåœãé€ã) ããå埩çãªæäœæ¥ãæžããããã«ãã¬ããŒãçæãçµ±ååæãæè³èª¿æ»ã®ããã®çæ AI ãš LLM ãæ€èšããŠããŸãã
ãã®èšäºã§ã¯ã
NVIDIA NIM
æšè«ãã€ã¯ããµãŒãã¹ã䜿çšããŠåçå ±åã®ãã©ã³ã¹ã¯ãªããããæŽå¯ãæœåºãã AI ã¢ã·ã¹ã¿ã³ããæ§ç¯ãã
æ€çŽ¢æ¡åŒµçæ (RAG)
ã·ã¹ãã ãå®è£
ããæ¹æ³ã«ã€ããŠããšã³ãããŒãšã³ãã®ãã¢ã§èª¬æããŸããé«åºŠãª AI ãã¯ãããžã掻çšããããšã§ãã¯ãŒã¯ãããŒãå éããé ããæŽå¯ãæããã«ããæçµçã«éèãµãŒãã¹æ¥çã®ææ決å®ããã»ã¹ã匷åããæ¹æ³ã玹ä»ããŸãã
NIM ã«ããåçå ±åã®èšé²ã®åæ
ç¹ã«ã決ç®çºè¡šã®é»è©±äŒè°ã¯æè³å®¶ãã¢ããªã¹ãã«ãšã£ãŠéèŠãªæ
å ±æºã§ãããäŒæ¥ãéèŠãªè²¡åæ
å ±ãäºæ¥æ
å ±ãäŒéãããã©ãããã©ãŒã ãæäŸããŸãããããã®é»è©±äŒè°ã¯ãæ¥çãäŒæ¥ã®è£œåã競åä»ç€ŸããããŠæãéèŠãªäºæ¥èŠéãã«ã€ããŠã®æŽå¯ãæäŸããŸãã
決ç®çºè¡šã®é»è©±äŒè°ã®èšé²ãåæããããšã§ãæè³å®¶ã¯äŒæ¥ã®å°æ¥ã®åçãšè©äŸ¡ã«é¢ãã貎éãªæ
å ±ãåéã§ããŸãã決ç®çºè¡šã®é»è©±äŒè°ã®èšé²ã¯ã20 幎以äžã«ããã£ãŠã¢ã«ãã¡ãçã¿åºãããã«å¹æçã«äœ¿çšãããŠããŸããã詳现ã«ã€ããŠã¯ã
ãèªç¶èšèªåŠç â ããŒã I: å
¥éã
(2017 幎 9 æ) ããã³
ãèªç¶èšèªåŠç â ããŒã II: éæéžæã
(2018 幎 9 æ)ãåç
§ããŠãã ããã
ã¹ããã 1: ããŒã¿
ãã®ãã¢ã§ã¯ã2016 幎ãã 2020 幎ãŸã§ã® NASDAQ ã®åçå ±åã®ãã©ã³ã¹ã¯ãªãããåæã«äœ¿çšããŸãããã®
åçå ±åã®ãã©ã³ã¹ã¯ãªãã ããŒã¿ã»ãã
ã¯ãKaggle ããããŠã³ããŒãã§ããŸãã
è©äŸ¡ã§ã¯ã10 瀟ã®ãµãã»ããã䜿çšãããããã 63 件ã®ãã©ã³ã¹ã¯ãªãããã©ã³ãã ã«éžæããŠæåã§æ³šéãä»ããŸããããã¹ãŠã®ãã©ã³ã¹ã¯ãªããã«ã€ããŠã次ã®äžé£ã®è³ªåã«çããŸãã:
äŒç€Ÿã®äž»ãªåçæºã¯äœã§ãããéå» 1 幎éã§ã©ã®ããã«å€åããŸããã?
äŒç€Ÿã®äž»ãªã³ã¹ãèŠçŽ ã¯äœã§ãããå ±åæéäžã«ã©ã®ããã«å€åããŸããã?
ã©ã®ãããªèšåæè³ãè¡ããããããã©ã®ããã«äŒç€Ÿã®æé·ãæ¯ããŠããŸãã?
ã©ã®ãããªé
åœãŸãã¯æ ªåŒè²·ãæ»ããå®è¡ãããŸããã?
ãã©ã³ã¹ã¯ãªããã§èšåãããŠããéèŠãªãªã¹ã¯ã¯äœã§ãã?
ããã«ãããåèš 315 ã®è³ªåãšåçã®ãã¢ãäœæãããŸãããã¹ãŠã®è³ªåãžã®åçã¯ãæ§é åããã
JSON ãã©ãŒããã
ã䜿çšããŠè¡ãããŸãã
ããšãã°:
質å: äŒç€Ÿã®äž»ãªåå
¥æºã¯äœã§ãã? ãŸããéå» 1 幎éã§ã©ã®ããã«å€åããŸããã? (What are the companyâs primary revenue streams and how have they changed over the past year?)
åç:
{
"Google Search and Other advertising": {
"year_on_year_change": "-10%",
"absolute_revenue": "21.3 billion",
"currency": "USD"
},
"YouTube advertising": {
"year_on_year_change": "6%",
"absolute_revenue": "3.8 billion",
"currency": "USD"
},
"Network advertising": {
"year_on_year_change": "-10%",
"absolute_revenue": "4.7 billion",
"currency": "USD"
},
"Google Cloud": {
"year_on_year_change": "43%",
"absolute_revenue": "3 billion",
"currency": "USD"
},
"Other revenues": {
"year_on_year_change": "26%",
"absolute_revenue": "5.1 billion",
"currency": "USD"
}
}
JSON ã䜿çšãããšãè©äŸ¡ã«æãŸãããªããã€ã¢ã¹ãå°å
¥ããå¯èœæ§ã®ããã
LLM æèš
ããªã©ã®äž»èŠ³çãªèšèªç解æ¹æ³ã«äŸåããã«ã¢ãã«ã®ããã©ãŒãã³ã¹ãè©äŸ¡ã§ããŸãã
ã¹ããã 2: NVIDIA NIM
ãã®ãã¢ã§ã¯ããšã³ã¿ãŒãã©ã€ãºçæ AI ã®å°å
¥ãé«éåããããã«èšèšããããã€ã¯ããµãŒãã¹ ã»ããã§ãã
NVIDIA NIM
ã䜿çšããŸãã詳现ã«ã€ããŠã¯ã
ãNVIDIA NIM 㯠AI ã¢ãã«ã®å€§èŠæš¡ãªå°å
¥åãã«æé©åãããæšè«ãã€ã¯ããµãŒãã¹ãæäŸããŸãã
ãåç
§ããŠãã ããã
NVIDIA åãã«æé©åãããã³ãã¥ãã㣠ã¢ãã«ãåçšããŒãã㌠ã¢ãã«ãªã©ãå¹
åºã AI ã¢ãã«ããµããŒããã NIM ã¯ãæ¥çæšæºã® API ã掻çšããŠããªã³ãã¬ãã¹ãŸãã¯ã¯ã©ãŠãã§ã·ãŒã ã¬ã¹ã§ã¹ã±ãŒã©ãã«ãª AI æšè«ãå®çŸããŸããå®çšŒåã®æºåãã§ããããNIM 㯠1 ã€ã®ã³ãã³ãã§å°å
¥ãããæšæº API ãšãããæ°è¡ã®ã³ãŒãã䜿çšããŠãšã³ã¿ãŒãã©ã€ãº ã°ã¬ãŒãã® AI ã¢ããªã±ãŒã·ã§ã³ã«ç°¡åã«çµ±åã§ããŸãã
NVIDIA TensorRT
ãTensorRT-LLMãPyTorch ãªã©ã®æšè«ãšã³ãžã³ãå«ãå
ç¢ãªåºç€äžã«æ§ç¯ããã NIM ã¯ãåºç€ãšãªãããŒããŠã§ã¢ã«åºã¥ããŠããã«æé«ã®ããã©ãŒãã³ã¹ã§ã·ãŒã ã¬ã¹ãª AI æšè«ãå®çŸããããã«èšèšãããŠããŸããNIM ã䜿çšããã»ã«ããã¹ãã£ã³ã° (self-hosting) ã¢ãã«ã¯ãRAG ã¢ããªã±ãŒã·ã§ã³ã§äžè¬çãªèŠä»¶ã§ãã顧客ããŒã¿ãšäŒæ¥ããŒã¿ã®ä¿è·ããµããŒãããŸãã
ã¹ããã 3: NVIDIA API ã«ã¿ãã°ã§ã®èšå®
NIM ã«ã¯ã
NVIDIA API ã«ã¿ãã°
ã䜿çšããŠã¢ã¯ã»ã¹ã§ããŸããèšå®ã«å¿
èŠãªã®ã¯ãNVIDIA API ããŒãç»é²ããããšã ãã§ã (API ã«ã¿ãã°ããã[API ããŒã®ååŸ] ãã¯ãªãã¯ããŸã)ããã®èšäºã§ã¯ããããç°å¢å€æ°ã«ä¿åããŸãã
export NVIDIA_API_KEY=YOUR_KEY
LangChain ã¯ã
䟿å©ãª NGC çµ±å
çšã®ããã±ãŒãžãæäŸããŠããŸãããã®ãã¥ãŒããªã¢ã«ã§ã¯ããšã³ããã€ã³ãã䜿çšããŠãNIM ã§åã蟌ã¿ããªã©ã³ãã³ã°ããã£ãã ã¢ãã«ãå®è¡ããŸããã³ãŒããåçŸããã«ã¯ã次㮠Python äŸåé¢ä¿ãã€ã³ã¹ããŒã«ããå¿
èŠããããŸãã
langchain-nvidia-ai-endpoints==0.1.2
faiss-cpu==1.7.4
langchain==0.2.5
unstructured[all-docs]==0.11.2
ã¹ããã 4: NIM ã䜿çšãã RAG ãã€ãã©ã€ã³ã®æ§ç¯
RAG ã¯ã倧èŠæš¡ãªã³ãŒãã¹ããã®é¢é£ææžã®ååŸãšããã¹ãçæãçµã¿åãããããšã§èšèªã¢ãã«ã匷åããæ¹æ³ã§ãã
RAG ã®æåã®ã¹ãããã¯ãææžã®ã³ã¬ã¯ã·ã§ã³ããã¯ãã«åããããšã§ããããã«ã¯ãäžé£ã®ææžãååŸããããããå°ããªãã£ã³ã¯ã«åå²ããåã蟌ã¿ã¢ãã«ã䜿çšããŠãããã®åãã£ã³ã¯ããã¥ãŒã©ã« ãããã¯ãŒã¯åã蟌㿠(ãã¯ãã«) ã«å€æãã
ãã¯ãã« ããŒã¿ããŒã¹
ã«ä¿åããããšãå«ãŸããŸããååçå ±åé話ã®ãã©ã³ã¹ã¯ãªããã«å¯ŸããŠãããã®æäœãå®è¡ããŸãã
import os
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders import TextLoader
from langchain.vectorstores import FAISS
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings
#â¯Initialise the embedder that converts text to vectors
transcript_embedder = NVIDIAEmbeddings(model='nvidia/nv-embed-v1',
truncate='END'
)
#â¯The document we will be chunking and vectorizing
transcript_fp = "Transcripts/GOOGL/2020-Feb-03-GOOGL.txt"
raw_document = TextLoader(transcript_fp).load()
#â¯Split the document into chunks of 1500 characters each
text_splitter = RecursiveCharacterTextSplitter(chunk_size=3000,
chunk_overlap=200)
documents = text_splitter.split_documents(raw_document)
#â¯Vectorise each chunk into a separate entry in the database
vectorstore = FAISS.from_documents(documents, transcript_embedder)
vector_store_path = "vector_db/google_transcript_2020_feb.pkl"
try:
os.remove(vector_store_path)
except OSError:
pass
vectorstore.save_local(vector_store_path)
ãã¯ãã«åãããããŒã¿ããŒã¹ãæ§ç¯ããããšãåçå ±åã®èšé²ã®æãåçŽãª RAG ãããŒã¯æ¬¡ã®ããã«ãªããŸãã
ãŠãŒã¶ãŒã
ã¯ãšãª (query)
ãå
¥åããŸããããšãã°ããäŒç€Ÿã®äž»ãªåçæºã¯äœã§ãã?ã(What are the companyâs main revenue sources?)
åã蟌ã¿ã¢ãã« (embedder)
ã¯ã¯ãšãªããã¯ãã«ã«åã蟌ã¿ãããã¥ã¡ã³ãã®ãã¯ãã«åãããããŒã¿ããŒã¹ãæ€çŽ¢ããŠãæãé¢é£æ§ã®é«ãäžäœ K (äŸãã°ãäžäœ 30 ãªã©) ã®ãã£ã³ã¯ãæ¢ããŸãã
次ã«ãã¯ãã¹ãšã³ã³ãŒããŒãšãåŒã°ãã
ãªã©ã³ãã³ã° (reranker)
ã¢ãã«ããã¯ãšãªãšããã¥ã¡ã³ãã®åãã¢ã®é¡äŒŒæ§ã¹ã³ã¢ãåºåããŸããããã«ãã¡ã¿ããŒã¿ã䜿çšããŠãåã©ã³ã¯ä»ãã¹ãããã®ç²ŸåºŠãåäžãããããšãã§ããŸãããã®ã¹ã³ã¢ã¯ãåã蟌ã¿ã«ãã£ãŠååŸãããäžäœ K ã®ããã¥ã¡ã³ããããŠãŒã¶ãŒ ã¯ãšãªãšã®é¢é£æ§ã§äžŠã¹æ¿ããããã«äœ¿çšãããŸãããã®åŸãããã«ãã£ã«ã¿ãªã³ã°ãé©çšããŠãäžäœ N (äŸãã°ãäžäœ 10 ãªã©) ã®ããã¥ã¡ã³ãã®ã¿ãä¿æã§ããŸãã
次ã«ãæãé¢é£æ§ã®é«ãäžäœ N ã®ããã¥ã¡ã³ããããŠãŒã¶ãŒã¯ãšãªãšãšãã«
LLM
ã«æž¡ãããŸããååŸãããããã¥ã¡ã³ãã¯ãã¢ãã«ã®åçãåºç€ä»ããã³ã³ããã¹ããšããŠäœ¿çšãããŸãã
å³1. åã蟌ã¿ãšæ€çŽ¢ããªã©ã³ãã³ã°ãã³ã³ããã¹ãã«åºã¥ãã LLM åççæãšãã 3 ã€ã®äž»èŠãªã¹ããããå«ãç°¡ç¥åããã RAG ã¯ãŒã¯ãããŒ
ã¢ãã«ã®åç粟床ãåäžãããããã«å€æŽãå ããããšãã§ããŸãããä»ã®ãšããã¯æãã·ã³ãã«ã§å
ç¢ãªã¢ãããŒããç¶ããŸãã
次ã®ãŠãŒã¶ãŒ ã¯ãšãªãšå¿
èŠãª JSON ãã©ãŒããããæ€èšããŠãã ããã
question = "What are the companyâs primary revenue streams and how have they changed over the past year?"
# äŒç€Ÿã®äž»ãªåå
¥æºã¯äœã§ãã? ãŸããéå» 1 幎éã§ã©ã®ããã«å€åããŸããã?
json_template = """
{"revenue_streams": [
{
"name": "<Revenue Stream Name 1>",
"amount": <Current Year Revenue Amount 1>,
"currency": "<Currency 1>",
"percentage_change": <Change in Revenue Percentage 1>
},
{
"name": "<Revenue Stream Name 2>",
"amount": <Current Year Revenue Amount 2>,
"currency": "<Currency 2>",
"percentage_change": <Change in Revenue Percentage 2>
},
// Add more revenue streams as needed
]
}
"""
user_query = question + json_template
JSON ãã³ãã¬ãŒãã¯ããã€ãã©ã€ã³ã®ããã«å
ã§ãLLM ããã¬ãŒã³ ããã¹ãã§ã¯ãªãæå¹ãª JSON ãã©ãŒãããã§åçãåºåããããšãèªèã§ããããã«äœ¿çšãããŸããã¹ããã 1 ã§è¿°ã¹ãããã«ãJSON ã䜿çšãããšã客芳çãªæ¹æ³ã§ã¢ãã«åçãèªåçã«è©äŸ¡ã§ããŸãããããäŒè©±çãªã¹ã¿ã€ã«ããæãŸããå Žåã¯ããããåé€ã§ããããšã«æ³šæããŠãã ããã
ãŠãŒã¶ãŒ ã¯ãšãªãã³ã³ããã¹ãåããã«ã¯ãé¢é£ããããã¥ã¡ã³ããååŸããŠé åºä»ãããããã«ãEmbedder (åã蟌ã¿ã¢ãã«) ãš Reranker (ãªã©ã³ãã³ã°ã¢ãã«) ãåæåããŸã:
from langchain_nvidia_ai_endpoints import NVIDIARerank
from langchain.retrievers.contextual_compression import ContextualCompressionRetriever
#â¯1 How many retrieved documents to keep at each step, åã¹ãããã§ååŸãããããã¥ã¡ã³ããããã€ä¿åãããã®èšå®ïŒ
top_k_documents_retriever = 30
top_n_documents_reranker = 5
#â¯2 Initialize retriever for vector database, ãã¯ãã«ããŒã¿ããŒã¹ã®ååŸãåæåãã:
retriever = vectorstore.as_retriever(search_type='similarity',
search_kwargs={'k': top_k_documents_retriever})
# 3 Add a reranker to reorder documents by relevance to user query, ãŠãŒã¶ãŒã¯ãšãªãšã®é¢é£æ§ã«åºã¥ããŠããã¥ã¡ã³ãã䞊ã¹æ¿ãããªã©ã³ãã³ã°æ©èœãè¿œå ãã:
reranker = NVIDIARerank(model="ai-rerank-qa-mistral-4b",
top_n=top_n_documents_reranker)
retriever = ContextualCompressionRetriever(base_compressor=reranker,
base_retriever=retriever)
#â¯4 Retrieve documents, rerank them and pick top-N, ããã¥ã¡ã³ããååŸãããªã©ã³ãã³ã°ããŠäžäœNãéžæãã:
retrieved_docs = retriever.invoke(user_query)
#â¯5 Join all retrieved documents into a single string, ååŸãããã¹ãŠã®ããã¥ã¡ã³ãã1ã€ã®æååã«çµåãã:
context = ""
for doc in retrieved_docs:
context += doc.page_content + "\n\n"
次ã«ãé¢é£ããããã¥ã¡ã³ããååŸããããšããŠãŒã¶ãŒ ã¯ãšãªãšãšãã« LLM ã«æž¡ãããŸããããã§ã¯ã
Llama 3 70B
NIM ã䜿çšããŠããŸãã
from langchain_nvidia_ai_endpoints import ChatNVIDIA
PROMPT_FORMAT = """"
Given the following context:
####################
{context}
####################
Answer the following question:
{question}
using the following JSON structure:
{json_template}
For amounts don't forget to always state if it's in billions or millions and "N/A" if not present.
Only use information and JSON keys that are explicitly mentioned in the transcript.
If you don't have information for any of the keys use "N/A" as a value.
Answer only with JSON. Every key and value in the JSON should be a string.
"""
llm = ChatNVIDIA(model="ai-llama3-70b",
max_tokens=600,
temperature=0
)
llm_input = PROMPT_FORMAT.format(**{"context": context,
"question": question,
"json_template": json_template
})
answer = llm.invoke(llm_input)
print(answer.content)
ãã®ã³ãŒããå®è¡ãããšããŠãŒã¶ãŒ ã¯ãšãªã«å¯Ÿãã JSON æ§é ã®åçãçæãããŸããã³ãŒããç°¡åã«å€æŽããŠãè€æ°ã®ãã©ã³ã¹ã¯ãªãããèªã¿èŸŒãã§ããŸããŸãªãŠãŒã¶ãŒ ã¯ãšãªã«åçã§ããããã«ãªããŸããã
ã¹ããã 5: è©äŸ¡
æ€çŽ¢ã¹ãããã®ããã©ãŒãã³ã¹ãè©äŸ¡ããã«ã¯ãåè¿°ã®æ³šéä»ãã®è³ªåãšåçã®ãã¢ã䜿çšããŠãçå®ã® JSON ãšäºæž¬ããã JSON ãããŒããšã«æ¯èŒããŸãã次ã®çå®ã®äŸãèããŠã¿ãŸãããã
"Google Cloud": {
"year_on_year_change": "43%",
"absolute_revenue": "3 billion",
"currency": "N/A"
}
äºæž¬ã®åºåã¯æ¬¡ã®ããã«ãªããŸã:
"Google Cloud": {
"year_on_year_change": "43%",
"absolute_revenue": "N/A",
"currency": "USD"
}
èããããåºåã®è©äŸ¡çµæ㯠3 ã€ãããŸãã
çéœæ§ (TP, True Positive)
: åèã®çããšäºæž¬ãäžèŽããŠããŸããåã®äŸã§ã¯ã
year_on_year_change
ã®äºæž¬ (â43%â) 㯠TP ã§ãã
åœéœæ§ (FP, False Positive)
: åèã®çãã®å€ã¯
"N/A"
ã§ããã€ãŸããæœåºããå€ã¯ãããŸããããäºæž¬ã§ã¯å€ãå¹»èŠçã«è¡šç€ºãããŸããåã®äŸã§ã¯ã
currency
ã®äºæž¬ (âUSDâ) 㯠FP ã§ãã
åœé°æ§ (FN, False Negative)
: æœåºããåèã®çãã®å€ããããŸãããäºæž¬ã§ã¯ãã®å€ãååŸã§ããŸããã§ãããåã®äŸã§ã¯ã
absolute_revenue
ã®äºæž¬ (âN/Aâ) 㯠FP ã§ãã
ãããã®çµæã枬å®ãããã次ã«ã次㮠3 ã€ã®äž»èŠãªã¡ããªãã¯ãèšç®ããŸãã
åçŸç (Recall)
= TP/ (TP + FN): åçŸçãé«ãã»ã©ãã¢ãã«ãé¢é£ããçµæãããå€ãè¿ããŠããããšãæå³ããŸãã
粟床 (Precision)
= TP / (TP + FP): 粟床ãé«ãã»ã©ãã¢ãã«ãé¢é£ããçµæãšç¡é¢ä¿ãªçµæã®æ¯çãé«ããªã£ãŠããããšãæå³ããŸãã
F1 ã¹ã³ã¢ (F1-score)
= (2 * é©åç * åçŸç) / (é©åç + åçŸç): F1 ã¹ã³ã¢ã¯ãé©åçãšåçŸçã®èª¿åå¹³åã§ãã
ãŠãŒã¶ãŒã¯ãäžéšã®å±æ§ã«ã€ããŠæååæ¯èŒãè¡ãéã«ãæ°å€ä»¥å€ã®å€ã®ãããã³ã°ãéšåçã«æè»ã«ãããå ŽåããããŸããããšãã°ãåçæºã«é¢ãã質åãèããŠã¿ãŸããããããã§ãåèã®åçã® 1 ã€ãè€æ°åœ¢ã®ãããŒã¿ ã»ã³ã¿ãŒã(âData Centersâ) ã§ãã¢ãã«ãåæ°åœ¢ã®ãããŒã¿ ã»ã³ã¿ãŒã(âData Centerâ)ãåºåããŸããå®å
šäžèŽè©äŸ¡ã§ã¯ãããã¯ãäžäžèŽããšããŠæ±ãããŸãããã®ãããªå Žåã«ãããå
ç¢ãªè©äŸ¡ãå®çŸããã«ã¯ãPython ã®ããã©ã«ãã®
difflib
ã§ãã¡ãžãŒ ãããã³ã° (fuzzy matching) ã䜿çšããŸãã
import difflib
def get_ratio_match(gt_string, pred_string):
if len(gt_string) < len(pred_string):
min_len = len(gt_string)
else:
min_len = len(pred_string)
matcher = difflib.SequenceMatcher(None, gt_string, pred_string, autojunk=False)
_, _, longest_match = matcher.find_longest_match(0, min_len, 0, min_len)
# Return the ratio of match with ground truth
return longest_match / min_len
è©äŸ¡ã§ã¯ãé¡äŒŒåºŠã 90% ãè¶
ããæååå±æ§ã¯äžèŽããŠãããšèŠãªããŸãã
è¡š 1 ã¯ãæåã§æ³šéãä»ããããŒã¿ã«å¯Ÿãããæããã䜿çšããã 2 ã€ã®ãªãŒãã³ãœãŒã¹ ã¢ãã« ãã¡ã㪠(
Mistral AI Mixtral ã¢ãã«
ãš
Meta Llama 3 ã¢ãã«
) ã®çµæã瀺ããŠããŸããã©ã¡ãã®ã¢ãã« ãã¡ããªã§ãããã©ã¡ãŒã¿ãŒã®æ°ãæžãããšããã©ãŒãã³ã¹ãèããäœäžããŸãããããã® NIM ãäœéšããã«ã¯ã
NVIDIA API ã«ã¿ãã°
ã«ã¢ã¯ã»ã¹ããŠãã ããã
Method
F1
Precision
Recall
Llama 3 70B
84.4%
91.3%
78.5%
Llama 3 8B
75.8%
85.2%
68.2%
Mixtral 8x22B
84.4%
91.9%
78.0%
Mixtral 8x7B
62.2%
80.2%
50.7%
è¡š1. åçå ±åé»è©±äŒè°ã®èšé²ããã®JSONæ§é åæ
å ±æœåºãšè³ªåå¿çã«ãããLlamaããã³Mixtralã¢ãã«ã®ããã©ãŒãã³ã¹
Mixtral-8x22B ã¯ãLlama 3 70B ãšã»ãŒåçã®ããã©ãŒãã³ã¹ãæã£ãŠããããã§ãããã ããã©ã¡ãã®ã¢ãã« ãã¡ããªã§ãããã©ã¡ãŒã¿ãŒã®æ°ãæžãããšããã©ãŒãã³ã¹ã倧å¹
ã«äœäžããŸããäœäžã¯ Recall ã§æãé¡èã§ããããã¯ãããŒããŠã§ã¢èŠä»¶ã®å¢å ãç ç²ã«ããŠç²ŸåºŠãåäžãããããšãéžæãããšãããã¬ãŒããªããé »ç¹ã«ç€ºããŠããŸãã
ã»ãšãã©ã®å Žåããã¡ã€ã³åºæã®ããŒã¿ (ãã®å Žåã¯ãåçé話ã®ãã©ã³ã¹ã¯ãªãã) ã䜿çšããŠãEmbedderãRerankerããŸã㯠LLM ã®ããããã埮調æŽããããšã§ããã©ã¡ãŒã¿ãŒã®æ°ãå¢ããããšãªãã¢ãã«ã®ç²ŸåºŠãåäžãããããšãã§ããŸãã
Embedder ã¯æãå°ããããã埮調æŽãæãè¿
éãã€ã³ã¹ãå¹çã«åªããŠããŸãã詳现ãªæé ã«ã€ããŠã¯ã
NVIDIA NeMo ã®ããã¥ã¡ã³ã
ãåç
§ããŠãã ãããããã«ã
NVIDIA NeMo
ã¯ãLLM ã®æå¹ãªããŒãžã§ã³ã埮調æŽããå¹çãç°¡çŽ åãã匷åããŸãã
ãŠãŒã¶ãŒåãéèŠãªæå³åã
ãã®ãã¢ã¯ãåçå ±åã®èšé²ããæŽå¯ãæœåºããããã«èšèšãããŠããŸããNIM ãªã©ã®é«åºŠãª AI ãã¯ãããžã掻çšããããšã§ãåçå ±åã®èšé²ããæ
å ±ãè¿
éãã€æ£ç¢ºã«ååŸã§ããããã«ãªããŸãããAI 補åã¯ãææžåãšããŒã¿åæã®æãéäžçãªããã»ã¹äžã«ãè€æ°ã®ã«ããŽãªã®éèç 究è
ãã¢ããªã¹ããã¢ããã€ã¶ãŒããã¡ã³ãã¡ã³ã¿ã« ããŒããã©ãªãª ãããŒãžã£ãŒãæ¯æŽããéèå°é家ãæŠç¥çãªææ決å®ã顧客察å¿ã«å€ãã®æéãè²»ãããããã«ããŸãã
ããšãã°ãè³ç£ç®¡çã»ã¯ã¿ãŒã§ã¯ãããŒããã©ãªãª ãããŒãžã£ãŒã¯ã¢ã·ã¹ã¿ã³ãã䜿çšããŠãèšå€§ãªæ°ã®åçå ±åããæŽå¯ããã°ããçµ±åããæè³æŠç¥ãšçµæãæ¹åã§ããŸããä¿éºæ¥çã§ã¯ãAI ã¢ã·ã¹ã¿ã³ããäŒæ¥ã¬ããŒããã財åã®å¥å
šæ§ãšãªã¹ã¯èŠå ãåæããåŒåãšãªã¹ã¯è©äŸ¡ã®ããã»ã¹ã匷åã§ããŸãããã¡ã³ãã¡ã³ã¿ã« ãã¬ãŒãã£ã³ã°ãšå°å£²ååŒã§ã¯ãã¢ã·ã¹ã¿ã³ããäœç³»çãªæ
å ±æœåºãæ¯æŽããŠåžå Žã®ãã¬ã³ããšææ
ã®å€åãç¹å®ããå°æ¥ã®ååŒã§ãã詳现ãªæ
å ±ã䜿çšã§ããããã«ããŸãã
éè¡ã§ããåçå ±åãåæããããšã§ãæœåšçãªããŒã³åé è
ã®è²¡åå®å®æ§ãè©äŸ¡ããããã«äœ¿çšã§ããŸããæçµçã«ããã®ãã¯ãããžãŒã¯å¹çã粟床ãããŒã¿ã«åºã¥ãææ決å®èœåãé«ãããŠãŒã¶ãŒã«ããããã®åžå Žã§ã®ç«¶äºäžã®åªäœæ§ããããããŸãã
NVIDIA API ã«ã¿ãã°
ã«ã¢ã¯ã»ã¹ããŠãå©çšå¯èœãªãã¹ãŠã® NIM ã確èªãã
LangChain ã®äŸ¿å©ãªçµ±å
ãè©ŠããŠãèªåã®ããŒã¿ã«æé©ãªãã®ãå³èŠããŠãã ããã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
SQL ãããã£ãããž: NVIDIA ã§ãšã³ã¿ãŒãã©ã€ãº ããŒã¿åæãå€é©ããæ¹æ³
NGC ã³ã³ãããŒ:
DiffDock
NGC ã³ã³ãããŒ:
NMT Megatron Riva 1b
ãŠã§ãããŒ:
è³æ¬åžå Žã«ããã次äžä»£åæ: ååŒå®è¡ãšãªã¹ã¯ç®¡ç
ãŠã§ãããŒ:
éèãµãŒãã¹åãã®å€§èŠæš¡ãª AI ã¢ãã«æšè«ã®å é |
https://developer.nvidia.com/blog/tune-and-deploy-lora-llms-with-nvidia-tensorrt-llm/ | Tune and Deploy LoRA LLMs with NVIDIA TensorRT-LLM | Large language models
(LLMs) have revolutionized natural language processing (NLP) with their ability to learn from massive amounts of text and generate fluent and coherent texts for various tasks and domains. However,
customizing LLMs
is a challenging task, often requiring a full
training process
that is time-consuming and computationally expensive. Moreover, training LLMs requires a diverse and representative dataset, which can be difficult to obtain and curate.
How can enterprises leverage the power of LLMs without paying the cost of full training? One promising solution is Low-Rank Adaptation (LoRA), a fine-tuning method that can significantly reduce the number of trainable parameters, the memory requirement, and the training time, while achieving comparable or even better performance than fine-tuning on various NLP tasks and domains.
This post explains the intuition and the implementation of LoRA, and shows some of its applications and benefits. It also compares LoRA with supervised fine-tuning and prompt engineering, and discusses their advantages and limitations. It outlines practical guidelines for both training and inference of LoRA-tuned models. Finally, it demonstrates how to use
NVIDIA TensorRT-LLM
to optimize deployment of LoRA models on NVIDIA GPUs.
Tutorial prerequisites
To make best use of this tutorial, you will need basic knowledge of LLM training and inference pipelines, as well as:
Basic knowledge of linear algebra
Hugging Face
registered user access and general familiarity with the Transformers library
NVIDIA/TensorRT-LLM optimization library
NVIDIA Triton Inference Server
with
TensorRT-LLM backend
What is LoRA?
LoRA is a fine-tuning method that introduces low-rank matrices into each layer of the LLM architecture, and only trains these matrices while keeping the original LLM weights frozen. It is among the LLM customization tools supported in
NVIDIA NeMo
(Figure 1).
Figure 1. LoRA is among the LLM customization tools and techniques supported in NVIDIA NeMo
LLMs are powerful, but often require customization, especially when used for enterprise or domain-specific use cases. There are many tuning options, ranging from simple prompt engineering to supervised fine-tuning (SFT). The choice of tuning option is typically based on the size of the dataset required (minimum for prompt engineering, maximum for SFT) and compute availability.
LoRA tuning is a type of tuning family called Parameter Efficient Fine-Tuning (PEFT). These techniques are a middle-of-the-road approach. They require more training data and compute compared to prompt engineering, but also yield much higher accuracy. The common theme is that they introduce a small number of parameters or layers while keeping the original LLM unchanged.
PEFT has been proven to achieve comparable accuracy to SFT while using less data and less computational resources. Compared to other tuning techniques, LoRA has several advantages. It reduces the computational and memory cost, as it only adds a few new parameters, but does not add any layers. It enables multi-task learning, allowing a single-base LLM to be used for different tasks by deploying the relevant fine-tuned LoRA variant on demand, only loading its low-rank matrices when needed.
Finally, it avoids catastrophic forgetting, the natural tendency of LLMs to abruptly forget previously learned information upon learning new data. Quantitatively, LoRA performs better than models using alternative tuning techniques such as prompt tuning and adapters, as shown in
LoRA: Low-Rank Adaptation of Large Language Models
.
The math behind LoRA
The math behind LoRA is based on the idea of low-rank decomposition, which is a way of approximating a matrix by a product of two smaller matrices with lower ranks. A rank of a matrix is the number of linearly independent rows or columns in the matrix. A low-rank matrix has fewer degrees of freedom and can be represented more compactly than a full-rank matrix.
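To make the idea of rank concrete, here is a tiny NumPy illustration (not from the original post): the product of two thin matrices can never have rank greater than their shared inner dimension, even though the result looks like a full matrix of numbers.
import numpy as np

A = np.random.randn(6, 2)   # 6 x 2
B = np.random.randn(2, 4)   # 2 x 4
W_low_rank = A @ B          # a 6 x 4 matrix, but with rank at most 2
print(W_low_rank.shape)                   # (6, 4)
print(np.linalg.matrix_rank(W_low_rank))  # 2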
LoRA applies low-rank decomposition to the weight matrices of the LLM, which are usually very large and dense. For example, if the LLM has a hidden size of 1,024 and a vocabulary size of 50,000, then the output weight matrix W would have 1,024 x 50,000 = 51,200,000 parameters.
LoRA decomposes this matrix W into two smaller matrices: a matrix A with the shape 1,024 x r and a matrix B with the shape r x 50,000, where r is a hyperparameter that controls the rank of the decomposition. The product of these two matrices has the same shape as the original matrix, but contains only 1,024 x r + r x 50,000 = 51,024 x r parameters, which is far smaller than 51,200,000 for any small r.
The hyperparameter r is critical to set correctly. Choosing a smaller r can save a lot of parameters and memory and achieve faster training. However, a smaller r can potentially decrease the task-specific information captured in the low-rank matrices. A larger r can lead to overfitting. Hence, it's important to experiment in order to achieve the ideal accuracy-performance trade-off for your specific task and data.
LoRA inserts these low-rank matrices into each layer of the LLM, and adds them to the original weight matrices. The original weight matrices are initialized with the pretrained LLM weights and are not updated during training. The low-rank matrices are randomly initialized and are the only parameters that are updated during training. LoRA also applies layer normalization to the sum of the original and low-rank matrices to stabilize the training.
Figure 2. The decomposition of the LLM matrix W into two low-ranking matrices A and B
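To make the parameter arithmetic above concrete, here is a small NumPy sketch of the idea with r = 8. It is illustrative only, not the NeMo or Hugging Face PEFT implementation, and real LoRA setups typically target the attention projections rather than the output matrix used in this example.
import numpy as np

hidden, vocab, r = 1024, 50_000, 8

W = np.random.randn(hidden, vocab) * 0.02  # frozen pretrained weight, 1024 x 50,000
A = np.random.randn(hidden, r) * 0.02      # trainable low-rank factor, 1024 x r
B = np.zeros((r, vocab))                   # trainable low-rank factor, r x 50,000 (zero init, so the update starts at zero)

x = np.random.randn(1, hidden)             # one input activation

# Forward pass: the low-rank update (x @ A) @ B is added to the frozen path
y = x @ W + (x @ A) @ B

print("full-rank parameters:", W.size)      # 51,200,000
print("LoRA parameters:", A.size + B.size)  # 1,024*8 + 8*50,000 = 408,192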
Multi-LoRA deployment
One challenge in deploying LLMs is how to efficiently serve hundreds or thousands of tuned models. For example, a single base LLM, such as Llama 2, may have many LoRA-tuned variants per language or locale. A standard system would require loading all the models independently, taking up large amounts of memory capacity. Take advantage of LoRA's design, capturing all the information in smaller low-rank matrices per model, by loading a single base model together with the low-rank matrices
A
and
B
for each respective LoRA tuned variant. In this manner, it's possible to store thousands of LLMs and run them dynamically and efficiently within a minimal GPU memory footprint.
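A back-of-the-envelope sketch makes the saving obvious. The numbers are illustrative assumptions only: a 7B-parameter base model in FP16 and roughly 20 MB per LoRA adapter, which is a typical order of magnitude for low-rank updates.
base_model_gb = 14.0  # assumed: ~7B parameters x 2 bytes (FP16)
adapter_gb = 0.02     # assumed: ~20 MB per LoRA adapter (depends on rank and target modules)
num_variants = 100

full_copies_gb = num_variants * base_model_gb
shared_base_gb = base_model_gb + num_variants * adapter_gb
print(f"{num_variants} fully fine-tuned copies: ~{full_copies_gb:.0f} GB")       # ~1400 GB
print(f"1 base model + {num_variants} LoRA adapters: ~{shared_base_gb:.0f} GB")  # ~16 GB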
LoRA tuning
LoRA tuning requires preparing a training dataset in a specific format, typically using prompt templates. You should determine and adhere to a pattern when forming the prompt, which will naturally vary across different use cases. An example for question and answer is shown below.
{
"taskname": "squad",
"prompt_template": "<|VIRTUAL_PROMPT_0|> Context: {context}\n\nQuestion: {question}\n\nAnswer:{answer}",
"total_virtual_tokens": 10,
"virtual_token_splits": [10],
"truncate_field": "context",
"answer_only_loss": True,
"answer_field": "answer",
}
The prompt contains all the 10 virtual tokens at the beginning, followed by the context, the question, and finally the answer. The corresponding fields in the training data JSON object will be mapped to this prompt template to form complete training examples.
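As a rough illustration of that mapping (a sketch only, not NeMo's internal implementation; the record values are made up), each training record's fields are substituted into the template string, and the virtual-token placeholder is later handled by the framework:
prompt_template = ("<|VIRTUAL_PROMPT_0|> Context: {context}\n\n"
                   "Question: {question}\n\nAnswer:{answer}")

# Hypothetical training record with the fields referenced by the template
record = {
    "context": "The company reported record data center revenue in the quarter ...",
    "question": "Which segment drove revenue growth?",
    "answer": " The data center segment.",
}

training_example = prompt_template.format(**record)
print(training_example)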
There are several available platforms for customizing LLMs. You can use
NVIDIA NeMo
, or a tool such as
Hugging Face PEFT
. For an example of how to tune LoRA on the PubMed dataset using NeMo, see
NeMo Framework PEFT with Llama 2
.
Note that this post uses ready-tuned LLMs from Hugging Face, so there is no need to tune.
LoRA inference
To optimize a LoRA-tuned LLM with TensorRT-LLM, you must understand its architecture and identify which common base architecture it most closely resembles. This tutorial uses Llama 2 13B and Llama 2 7B as the base models, as well as several LoRA-tuned variants available on Hugging Face.
The first step is to use the converter and build scripts in this directory to compile all the models and prepare them for hardware acceleration. I'll then show examples of deployment using both the command line and Triton Inference Server.
Note that the tokenizer is not handled directly by TensorRT-LLM. But it is necessary to be able to classify it within a defined tokenizer family for runtime and for setting preprocessing and postprocessing steps in Triton.
Set up and build TensorRT-LLM
Start by cloning and building the
NVIDIA/TensorRT-LLM
library. The easiest way to build TensorRT-LLM and retrieve all its dependencies is to use the included Dockerfile. These commands pull a base container and install all the dependencies needed for TensorRT-LLM inside the container. It then builds and installs TensorRT-LLM itself in the container.
git lfs install
git clone -b v0.7.1 https://github.com/NVIDIA/TensorRT-LLM.git
cd TensorRT-LLM
git submodule update --init --recursive
make -C docker release_build
Retrieve model weights
Download the base model and LoRA model from Hugging Face:
git-lfs clone https://huggingface.co/meta-llama/Llama-2-13b-hf
git-lfs clone https://huggingface.co/hfl/chinese-llama-2-lora-13b
Compile the model
Build the engine, setting
--use_lora_plugin
and
--hf_lora_dir
. If LoRA has a separate
lm_head
and embedding, these will replace the
lm_head
and embedding of the base model.
python convert_checkpoint.py --model_dir /tmp/llama-v2-13b-hf \
--output_dir ./tllm_checkpoint_2gpu_lora \
--dtype float16 \
--tp_size 2 \
--hf_lora_dir /tmp/chinese-llama-2-lora-13b
trtllm-build --checkpoint_dir ./tllm_checkpoint_2gpu_lora \
--output_dir /tmp/new_lora_13b/trt_engines/fp16/2-gpu/ \
--gpt_attention_plugin float16 \
--gemm_plugin float16 \
--lora_plugin float16 \
--max_batch_size 1 \
--max_input_len 512 \
--max_output_len 50 \
--use_fused_mlp
Run the model
To run the model during inference, set up the
lora_dir
command line argument. Remember to use the LoRA tokenizer, as the LoRA-tuned model has a larger vocabulary size.
mpirun -n 2 python ../run.py --engine_dir "/tmp/new_lora_13b/trt_engines/fp16/2-gpu/" \
--max_output_len 50 \
--tokenizer_dir "chinese-llama-2-lora-13b/" \
--input_text "ä»å€©å€©æ°åŸå¥œïŒæå°å
¬åçæ¶åïŒ" \
--lora_dir "chinese-llama-2-lora-13b/" \
--lora_task_uids 0 \
--no_add_special_tokens \
--use_py_session
Input: "ä»å€©å€©æ°åŸå¥œïŒæå°å
¬åçæ¶åïŒ"
Output: "åç°å
¬åé人åŸå€ïŒæçåšæ矜æ¯çïŒæçåšæä¹ä¹çïŒæçåšè·³ç»³ïŒè¿æçåšè·æ¥ãæååŠåŠæ¥å°äžäžªç©ºå°äžïŒæååŠåŠäžèµ·è·³ç»³ïŒæè·³äº1"
You can run ablation tests to see the contribution of the LoRA-tuned model first-hand. To easily compare results with and without LoRA, simply set the UID to -1 using
--lora_task_uids -1
. In this case, the model will ignore the LoRA module and the results will be based on the base model alone.
mpirun -n 2 python ../run.py --engine_dir "/tmp/new_lora_13b/trt_engines/fp16/2-gpu/" \
--max_output_len 50 \
--tokenizer_dir "chinese-llama-2-lora-13b/" \
--input_text "ä»å€©å€©æ°åŸå¥œïŒæå°å
¬åçæ¶åïŒ" \
--lora_dir "chinese-llama-2-lora-13b/" \
--lora_task_uids -1 \
--no_add_special_tokens \
--use_py_session
Input: "ä»å€©å€©æ°åŸå¥œïŒæå°å
¬åçæ¶åïŒ"
Output: "æçè§äžäžªäººååšé£èŸ¹èŸ¹ç乊乊ïŒæçèµ·æ¥è¿æºåäœ ïŒå¯æ¯æèµ°è¿è¿å»é®äºäžäžä»è¯Žäœ æ¯äœ åïŒä»è¯Žæ²¡æïŒç¶åæå°±è¯Žäœ çæççäœ åäœ ïŒä»è¯Žè¯Žäœ çæåäœ ïŒæè¯Žäœ æ¯äœ ïŒä»è¯Žäœ æ¯äœ ïŒ"
Run the base model with multiple LoRA-tuned models
TensorRT-LLM also supports running a single base model with multiple LoRA-tuned modules at the same time. Here, we use two LoRA checkpoints as examples. As the rank r of the LoRA modules of both checkpoints is 8, you can set
--max_lora_rank
to 8 in order to reduce the memory requirement for the LoRA plugin.
This example uses a LoRA checkpoint fine-tuned on the Chinese dataset
chinese-llama-lora-7b
and a LoRA checkpoint fine-tuned on the Japanese dataset
Japanese-Alpaca-LoRA-7b-v0
. For TensorRT-LLM to load several checkpoints, pass in the directories of all the LoRA checkpoints through
--lora_dir
"chinese-llama-lora-7b/"
"Japanese-Alpaca-LoRA-7b-v0/"
. TensorRT-LLM will assign
lora_task_uids
to these checkpoints.
lora_task_uids -1
is a predefined value, which corresponds to the base model. For example, passing
lora_task_uids 0 1
will use the first LoRA checkpoint on the first sentence and use the second LoRA checkpoint on the second sentence.
To verify correctness, pass the same Chinese input çŸåœçéŠéœåšåªé? \nçæ¡: three times, as well as the same Japanese input ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã: three times. (In English, both inputs mean, âWhere is the capital of America? \nAnswerâ). Then run on the base model,
chinese-llama-lora-7b
and
Japanese-Alpaca-LoRA-7b-v0
, respectively:
git-lfs clone https://huggingface.co/hfl/chinese-llama-lora-7b
git-lfs clone https://huggingface.co/kunishou/Japanese-Alpaca-LoRA-7b-v0
BASE_LLAMA_MODEL=llama-7b-hf/
python convert_checkpoint.py --model_dir ${BASE_LLAMA_MODEL} \
--output_dir ./tllm_checkpoint_1gpu_lora_rank \
--dtype float16 \
--hf_lora_dir /tmp/Japanese-Alpaca-LoRA-7b-v0 \
--max_lora_rank 8 \
--lora_target_modules "attn_q" "attn_k" "attn_v"
trtllm-build --checkpoint_dir ./tllm_checkpoint_1gpu_lora_rank \
--output_dir /tmp/llama_7b_with_lora_qkv/trt_engines/fp16/1-gpu/ \
--gpt_attention_plugin float16 \
--gemm_plugin float16 \
--lora_plugin float16 \
--max_batch_size 1 \
--max_input_len 512 \
--max_output_len 50
python ../run.py --engine_dir "/tmp/llama_7b_with_lora_qkv/trt_engines/fp16/1-gpu/" \
--max_output_len 10 \
--tokenizer_dir ${BASE_LLAMA_MODEL} \
--input_text "çŸåœçéŠéœåšåªé? \nçæ¡:" "çŸåœçéŠéœåšåªé? \nçæ¡:" "çŸåœçéŠéœåšåªé? \nçæ¡:" "ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:" "ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:" "ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:" \
--lora_dir "lchinese-llama-lora-7b/" "Japanese-Alpaca-LoRA-7b-v0/" \
--lora_task_uids -1 0 1 -1 0 1 \
--use_py_session --top_p 0.5 --top_k 0
The results are shown below:
Input [Text 0]: "<s> çŸåœçéŠéœåšåªé? \nçæ¡:"
Output [Text 0 Beam 0]: "Washington, D.C.
What is the"
Input [Text 1]: "<s> çŸåœçéŠéœåšåªé? \nçæ¡:"
Output [Text 1 Beam 0]: "åçé¡¿ã
"
Input [Text 2]: "<s> çŸåœçéŠéœåšåªé? \nçæ¡:"
Output [Text 2 Beam 0]: "Washington D.C.'''''"
Input [Text 3]: "<s> ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:"
Output [Text 3 Beam 0]: "Washington, D.C.
Which of"
Input [Text 4]: "<s> ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:"
Output [Text 4 Beam 0]: "åçé¡¿ã
"
Input [Text 5]: "<s> ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:"
Output [Text 5 Beam 0]: "ã¯ã·ã³ãã³ D.C."
Notice that chinese-llama-lora-7b produces correct answers (in Chinese) on the second and fifth sentences, the two inputs assigned LoRA task UID 0. Japanese-Alpaca-LoRA-7b-v0 produces a correct answer (in Japanese) on the sixth sentence, which was assigned UID 1.
Important note:
If one of the LoRA modules contains a fine-tuned embedding table or logit GEMM, users must guarantee that all instances of the model can use the same fine-tuned embedding table or logit GEMM.
Deploying LoRA-tuned models with Triton and inflight batching
This section shows how to deploy LoRA-tuned models using inflight batching with the Triton Inference Server. For specific instructions on setting up and launching the Triton Inference Server, see
Deploy an AI Coding Assistant with NVIDIA TensorRT-LLM and NVIDIA Triton
.
As before, first compile a model with LoRA enabled, this time with the base model Llama 2 7B.
BASE_MODEL=llama-7b-hf
python3 tensorrt_llm/examples/llama/build.py --model_dir ${BASE_MODEL} \
--dtype float16 \
--remove_input_padding \
--use_gpt_attention_plugin float16 \
--enable_context_fmha \
--use_gemm_plugin float16 \
--output_dir "/tmp/llama_7b_with_lora_qkv/trt_engines/fp16/1-gpu/" \
--max_batch_size 128 \
--max_input_len 512 \
--max_output_len 50 \
--use_lora_plugin float16 \
--lora_target_modules "attn_q" "attn_k" "attn_v" \
--use_inflight_batching \
--paged_kv_cache \
--max_lora_rank 8 \
--world_size 1 --tp_size 1
Next, generate LoRA tensors that will be passed in with each request to Triton.
git-lfs clone https://huggingface.co/hfl/chinese-llama-lora-7b
git-lfs clone https://huggingface.co/kunishou/Japanese-Alpaca-LoRA-7b-v0
python3 tensorrt_llm/examples/hf_lora_convert.py -i Japanese-Alpaca-LoRA-7b-v0 -o Japanese-Alpaca-LoRA-7b-v0-weights --storage-type float16
python3 tensorrt_llm/examples/hf_lora_convert.py -i chinese-llama-lora-7b -o chinese-llama-lora-7b-weights --storage-type float16
Then create a Triton model repository and launch the Triton server as previously described.
Finally, run the multi-LoRA example by issuing multiple concurrent requests from the client. The inflight batcher will execute mixed batches with multiple LoRAs in the same batch.
INPUT_TEXT=("çŸåœçéŠéœåšåªé? \nçæ¡:" "çŸåœçéŠéœåšåªé? \nçæ¡:" "çŸåœçéŠéœåšåªé? \nçæ¡:" "ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:" "ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:" "ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:")
LORA_PATHS=("" "chinese-llama-lora-7b-weights" "Japanese-Alpaca-LoRA-7b-v0-weights" "" "chinese-llama-lora-7b-weights" "Japanese-Alpaca-LoRA-7b-v0-weights")
for index in ${!INPUT_TEXT[@]}; do
text=${INPUT_TEXT[$index]}
lora_path=${LORA_PATHS[$index]}
lora_arg=""
if [ "${lora_path}" != "" ]; then
lora_arg="--lora-path ${lora_path}"
fi
python3 inflight_batcher_llm/client/inflight_batcher_llm_client.py \
--top-k 0 \
--top-p 0.5 \
--request-output-len 10 \
--text "${text}" \
--tokenizer-dir /home/scratch.trt_llm_data/llm-models/llama-models/llama-7b-hf \
${lora_arg} &
done
wait
Example output is shown below:
Input sequence: [1, 29871, 30310, 30604, 30303, 30439, 30733, 235, 164, 137, 30356, 30199, 31688, 30769, 30449, 31250, 30589, 30499, 30427, 30412, 29973, 320, 29876, 234, 176, 151, 30914, 29901]
Input sequence: [1, 29871, 30630, 30356, 30210, 31688, 30769, 30505, 232, 150, 173, 30755, 29973, 320, 29876, 234, 176, 151, 233, 164, 139, 29901]
Input sequence: [1, 29871, 30630, 30356, 30210, 31688, 30769, 30505, 232, 150, 173, 30755, 29973, 320, 29876, 234, 176, 151, 233, 164, 139, 29901]
Input sequence: [1, 29871, 30310, 30604, 30303, 30439, 30733, 235, 164, 137, 30356, 30199, 31688, 30769, 30449, 31250, 30589, 30499, 30427, 30412, 29973, 320, 29876, 234, 176, 151, 30914, 29901]
Input sequence: [1, 29871, 30310, 30604, 30303, 30439, 30733, 235, 164, 137, 30356, 30199, 31688, 30769, 30449, 31250, 30589, 30499, 30427, 30412, 29973, 320, 29876, 234, 176, 151, 30914, 29901]
Input sequence: [1, 29871, 30630, 30356, 30210, 31688, 30769, 30505, 232, 150, 173, 30755, 29973, 320, 29876, 234, 176, 151, 233, 164, 139, 29901]
Got completed request
Input: ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:
Output beam 0: ã¯ã·ã³ãã³ D.C.
Output sequence: [1, 29871, 30310, 30604, 30303, 30439, 30733, 235, 164, 137, 30356, 30199, 31688, 30769, 30449, 31250, 30589, 30499, 30427, 30412, 29973, 320, 29876, 234, 176, 151, 30914, 29901, 29871, 31028, 30373, 30203, 30279, 30203, 360, 29889, 29907, 29889]
Got completed request
Input: çŸåœçéŠéœåšåªé? \nçæ¡:
Output beam 0: Washington, D.C.
What is the
Output sequence: [1, 29871, 30630, 30356, 30210, 31688, 30769, 30505, 232, 150, 173, 30755, 29973, 320, 29876, 234, 176, 151, 233, 164, 139, 29901, 7660, 29892, 360, 29889, 29907, 29889, 13, 5618, 338, 278]
Got completed request
Input: çŸåœçéŠéœåšåªé? \nçæ¡:
Output beam 0: Washington D.C.
Washington D.
Output sequence: [1, 29871, 30630, 30356, 30210, 31688, 30769, 30505, 232, 150, 173, 30755, 29973, 320, 29876, 234, 176, 151, 233, 164, 139, 29901, 7660, 360, 29889, 29907, 29889, 13, 29956, 7321, 360, 29889]
Got completed request
Input: ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:
Output beam 0: Washington, D.C.
Which of
Output sequence: [1, 29871, 30310, 30604, 30303, 30439, 30733, 235, 164, 137, 30356, 30199, 31688, 30769, 30449, 31250, 30589, 30499, 30427, 30412, 29973, 320, 29876, 234, 176, 151, 30914, 29901, 7660, 29892, 360, 29889, 29907, 29889, 13, 8809, 436, 310]
Got completed request
Input: ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:
Output beam 0: Washington D.C.
1. ã¢
Output sequence: [1, 29871, 30310, 30604, 30303, 30439, 30733, 235, 164, 137, 30356, 30199, 31688, 30769, 30449, 31250, 30589, 30499, 30427, 30412, 29973, 320, 29876, 234, 176, 151, 30914, 29901, 7660, 360, 29889, 29907, 29889, 13, 29896, 29889, 29871, 30310]
Got completed request
Input: çŸåœçéŠéœåšåªé? \nçæ¡:
Output beam 0: åçé¡¿
W
Output sequence: [1, 29871, 30630, 30356, 30210, 31688, 30769, 30505, 232, 150, 173, 30755, 29973, 320, 29876, 234, 176, 151, 233, 164, 1
Conclusion
With baseline support for many popular LLM architectures, TensorRT-LLM makes it easy to deploy, experiment, and optimize with a variety of code LLMs. Together, NVIDIA TensorRT-LLM and NVIDIA Triton Inference Server provide an indispensable toolkit for optimizing, deploying, and running LLMs efficiently. With support for LoRA-tuned models, TensorRT-LLM enables efficient deployment of customized LLMs, significantly reducing memory and computational cost.
To get started, download and set up the
NVIDIA/TensorRT-LLM
open-source library, and experiment with the different
example LLMs
. You can tune your own LLM using
NVIDIA NeMo
; see
NeMo Framework PEFT with Llama 2
for an example. As an alternative, you can also deploy using the
NeMo Framework Inference Container
. | https://developer.nvidia.com/ja-jp/blog/tune-and-deploy-lora-llms-with-nvidia-tensorrt-llm/ | NVIDIA TensorRT-LLM ã«ãããLoRA LLM ã®ãã¥ãŒãã³ã°ãšããã〠| Reading Time:
7
minutes
倧èŠæš¡èšèªã¢ãã«
(LLM) ã¯ãèšå€§ãªããã¹ãããåŠç¿ããããŸããŸãªã¿ã¹ã¯ãé åã«åãããæµæ¢ã§äžè²«ããããã¹ããçæã§ããããšãããèªç¶èšèªåŠç (NLP) ã«é©åœãèµ·ãããŸããããã ãã
LLM ã®ã«ã¹ã¿ãã€ãº
ã¯å°é£ãªäœæ¥ã§ãããå€ãã®å Žåãå®å
šãª
ãã¬ãŒãã³ã° ããã»ã¹
ãå¿
èŠãšããæéãšèšç®ã³ã¹ããããããŸããããã«ãLLM ã®ãã¬ãŒãã³ã°ã«ã¯å€æ§ãã€ä»£è¡šçãªããŒã¿ã»ãããå¿
èŠã§ãããååŸãšãã¥ã¬ãŒã·ã§ã³ãå°é£ãªå ŽåããããŸãã
äŒæ¥ã¯ãã©ãããã°å®å
šãªãã¬ãŒãã³ã°ã«ãããè²»çšãæ¯æãããšãªããLLM ã®ãã¯ãŒã掻çšã§ããã§ãããã? ææãªãœãªã¥ãŒã·ã§ã³ã® 1 ã€ã¯ Low-Rank Adaptation (LoRA) ã§ããããã¯ããã¬ãŒãã³ã°å¯èœãªãã©ã¡ãŒã¿ãŒã®æ°ãã¡ã¢ãªèŠä»¶ããã¬ãŒãã³ã°æéã倧å¹
ã«æžããããã€ãNLP ã®ããŸããŸãªäœæ¥ãšåéã§ãã¡ã€ã³ãã¥ãŒãã³ã°ããå Žåã«å¹æµãããããããäžåãããšããããããã©ãŒãã³ã¹ãéæã§ãããã¡ã€ã³ãã¥ãŒãã³ã°ã®ææ³ã§ãã
ãã®èšäºã§ã¯ãLoRA ã®æŽå¯åãšå®è£
ã«ã€ããŠèª¬æãããã®å¿çšãšå©ç¹ã®äžéšãã玹ä»ããŸããLoRA ãæåž«ãããã¡ã€ã³ãã¥ãŒãã³ã°ãããã³ãã ãšã³ãžãã¢ãªã³ã°ãšæ¯èŒãããã®å©ç¹ãšéçã«ã€ããŠã説æããŸããLoRA ã§ãã¥ãŒãã³ã°ããã¢ãã«ã®ãã¬ãŒãã³ã°ãšæšè«ã®äž¡æ¹ã®ããã®å®çšçãªã¬ã€ãã©ã€ã³ãæŠèª¬ããæåŸã«ã
NVIDIA TensorRT-LLM
ã䜿çšã㊠NVIDIA GPU ã§ã® LoRA ã¢ãã«ã®ãããã€ãæé©åããæ¹æ³ã瀺ããŸãã
ãã¥ãŒããªã¢ã«ã®åææ¡ä»¶
ãã®ãã¥ãŒããªã¢ã«ãæ倧éã«æŽ»çšããã«ã¯ãLLM ãã¬ãŒãã³ã°ããã³æšè«ãã€ãã©ã€ã³ã®åºæ¬çãªç¥èãšä»¥äžã®ç¥èãå¿
èŠã§ãã
ç·åœ¢ä»£æ°ã®åºç€ç¥è
Hugging Face
ã®ç»é²ãŠãŒã¶ãŒ ã¢ã¯ã»ã¹ãšãTransformers ã©ã€ãã©ãªã«é¢ããäžè¬çãªç¥è
NVIDIA/TensorRT-LLM æé©åã©ã€ãã©ãª
TensorRT-LLM ããã¯ãšã³ã
ãåãã
NVIDIA Triton Inference Server
LoRA ãšã¯?
LoRA ã¯ããã¡ã€ã³ãã¥ãŒãã³ã°ã®ææ³ã§ãããLLM ã¢ãŒããã¯ãã£ã®åå±€ã«äœã©ã³ã¯è¡åãå°å
¥ããå
ã® LLM ã®éã¿ã¯ãã®ãŸãŸã§ãã®è¡åã®ã¿ããã¬ãŒãã³ã°ããŸãã
NVIDIA NeMo
ã§ãµããŒããããŠãã LLM ã«ã¹ã¿ãã€ãŒãŒã·ã§ã³ ããŒã«ã® 1 ã€ã§ã (å³ 1)ã
å³ 1. LoRA ã¯ãNVIDIA NeMo ã§ãµããŒããããŠãã LLM ã«ã¹ã¿ãã€ãº ããŒã«ããã³ææ³ã® 1 ã€
LLM ã¯ãã¯ãã«ã§ãããäŒæ¥ããã¡ã€ã³åºæã®çšéã§äœ¿çšããå Žåã¯ç¹ã«ãã«ã¹ã¿ãã€ãºãå¿
èŠã«ãªãããšãé »ç¹ã«ãããŸããç°¡åãªããã³ãã ãšã³ãžãã¢ãªã³ã°ããæåž«ãããã¡ã€ã³ãã¥ãŒãã³ã° (SFT) ãŸã§ãããŸããŸãªãã¥ãŒãã³ã° ãªãã·ã§ã³ããããŸãããã¥ãŒãã³ã° ãªãã·ã§ã³ã®éžæã¯éåžžãå¿
èŠãšãããããŒã¿ã»ããã®èŠæš¡ (ããã³ãã ãšã³ãžãã¢ãªã³ã°ã§æå°ãSFT ã§æ倧) ãšãå©çšã§ããèšç®åŠçãªãœãŒã¹ã«åºã¥ããŸãã
LoRA ãã¥ãŒãã³ã°ã¯ Parameter Efficient Fine-Tuning (PEFT) ãšåŒã°ããŠãããã¥ãŒãã³ã°çŸ€ã®äžçš®ã§ããPEFT ã¯äžåºžçãªææ³ã§ãããããã³ãã ãšã³ãžãã¢ãªã³ã°ãããå€ãã®ãã¬ãŒãã³ã° ããŒã¿ãšèšç®ãå¿
èŠãšããŸããã粟床ã¯ãã£ãšé«ããªããŸããå
ã® LLM ãå€ããã«ãå°æ°ã®ãã©ã¡ãŒã¿ãŒãŸãã¯å±€ãå°å
¥ãããšããç¹ã PEFT ã®å
±éé
ç®ã§ãã
PEFT ã¯ã䜿çšããããŒã¿ãšèšç®ãªãœãŒã¹ã SFT ããå°ãªããªãããSFT ã«å¹æµãã粟床ãéæããããšã蚌æãããŠããŸããä»ã®ãã¥ãŒãã³ã°ææ³ãšæ¯èŒãããšãLoRA ã«ã¯ããã€ãã®å©ç¹ããããŸããããã€ãã®æ°ãããã©ã¡ãŒã¿ãŒãè¿œå ããã ãã§ãå±€ã¯è¿œå ããªããããèšç®ã³ã¹ããšã¡ã¢ãª ã³ã¹ããåæžã§ããŸããããã«ãããã«ãã¿ã¹ã¯åŠç¿ãå¯èœã«ãªããé¢é£ãããã¡ã€ã³ãã¥ãŒãã³ã°ããã LoRA ããªã¢ã³ããå¿
èŠã«å¿ããŠãããã€ããå¿
èŠãªãšãã ããã®äœã©ã³ã¯è¡åãèªã¿èŸŒãããšã§ãããŸããŸãªã¿ã¹ã¯ã§åäžããŒã¹ã® LLM ãå©çšã§ããŸãã
æåŸã«ãªããŸãããæ°ããããŒã¿ãåŠç¿ãããšããåã«åŠç¿ããæ
å ±ãçªç¶å¿ãããšããå£æ»
çãªå¿åŽ (LLM ã«ãšã£ãŠã¯èªç¶ãªåŸå) ãåé¿ãããŸããã
LoRA: Low-Rank Adaptation of Large Language Models
ãã«ç€ºãããã«ãLoRA ã¯å®éçã«ãããã³ãã ãã¥ãŒãã³ã°ãã¢ããã¿ãŒãªã©ã®ä»£ãããšãªããã¥ãŒãã³ã°æ¹æ³ã䜿çšããã¢ãã«ãããããã©ãŒãã³ã¹ãåªããŠããŸãã
LoRA ã®èåŸã«ããæ°åŠ
LoRA ã®èåŸã«ããæ°åŠã¯äœã©ã³ã¯å解ãšããèãã«åºã¥ããŠããŸããããã¯ã©ã³ã¯ã®äœã 2 ã€ã®å°ããªè¡åã®ç©ã§è¡åãè¿äŒŒãããšããææ³ã§ããè¡åã®ã©ã³ã¯ã¯è¡åã®ç·åœ¢ç¬ç«ãªè¡ãŸãã¯åã®æ°ã«ãªããŸããäœã©ã³ã¯ã®è¡åã¯èªç±åºŠãäœãããã«ã©ã³ã¯ã®è¡åããã³ã³ãã¯ãã«è¡šçŸã§ããŸãã
LoRA ã§ã¯ãéåžžã¯éåžžã«å€§ããå¯ãª LLM ã®éã¿è¡åã«äœã©ã³ã¯å解ãé©çšããŸããããšãã°ãLLM ã®é ãå±€ã®ãµã€ãºã 1,024 ã§ãèªåœãµã€ãºã 50,000 ã®ãšããåºåãããéã¿è¡å
ã®ãã©ã¡ãŒã¿ãŒã¯ 1024 x 50,000 = 51,200,000 åã«ãªããŸãã
LoRA ã§ã¯ãã®è¡å
ã 2 ã€ã®å°ããªè¡åã«å解ããŸãã1024 x
è¡å
ãš
x 50,000 è¡å
ã§ãã
ã¯å解ã®ã©ã³ã¯ãå¶åŸ¡ãããã€ããŒãã©ã¡ãŒã¿ãŒã§ãããã® 2 ã€ã®è¡åã®ç©ã®åœ¢ã¯å
ã®è¡åãšåãã«ãªããŸããããã©ã¡ãŒã¿ãŒã¯ 1024 x
+
x 50,000 = 51,200,000 â 50,000 x (1024 â
) åã ãã«ãªããŸãã
ãã€ããŒãã©ã¡ãŒã¿ãŒ
ã¯æ£ããèšå®ããããšãéèŠã§ããå°ããª
ãéžæããããšã§ããããã®ãã©ã¡ãŒã¿ãŒãšã¡ã¢ãªãç¯çŽããããã¬ãŒãã³ã°ãéããªããŸãããã ãã
ãå°ãããšãäœã©ã³ã¯è¡åã§ãã£ããã£ãããã¿ã¹ã¯åºææ
å ±ãå°ãªããªãå¯èœæ§ããããŸãã
ã倧ãããšãéå°é©åã«ãªãããšããããŸãããããã£ãŠãç¹å®ã®ã¿ã¹ã¯ãšããŒã¿ã«å¯ŸããŠç²ŸåºŠãšããã©ãŒãã³ã¹ã®çæ³çãªãã¬ãŒããªããéæããããã«ã¯ãå®éšãè¡ãããšãéèŠã§ãã
LoRA ã§ã¯ãäœã©ã³ã¯è¡åã LLM ã®åå±€ã«æ¿å
¥ããå
ã®éã¿è¡åã«è¿œå ããŸããå
ã®éã¿è¡åã¯åŠç¿æžã¿ LLM éã¿ã§åæåããããã¬ãŒãã³ã°äžã«æŽæ°ãããããšã¯ãããŸãããäœã©ã³ã¯è¡åã¯ã©ã³ãã ã«åæåããããã¬ãŒãã³ã°äžã«æŽæ°ãããå¯äžã®ãã©ã¡ãŒã¿ãŒãšãªããŸããLoRA ã§ã¯ãŸããå
ã®è¡åãšäœã©ã³ã¯è¡åã®åèšã«å±€ã®æ£èŠåãé©çšãããã¬ãŒãã³ã°ãå®å®ãããŸãã
å³ 2. LLM è¡å W ã 2 ã€ã®äœã©ã³ã¯è¡å A ãš B ã«å解
ãã«ã LoRA ãããã€
LLM ã®ãããã€ã«ããã課é¡ã® 1 ã€ã¯ãæ°çŸãŸãã¯æ°åã®ãã¥ãŒãã³ã°ãããã¢ãã«ãããã«ããŠå¹ççã«äžãããã§ããããšãã°ãLlama 2 ãªã©ã®åäžããŒã¹ã® LLM ã«ã¯ãèšèªãŸãã¯ãã±ãŒã«ããšã«å€ãã® LoRA ã§ãã¥ãŒãã³ã°ããããªã¢ã³ããå«ãŸããããšããããŸããæšæºçãªã·ã¹ãã ã§ã¯ããã¹ãŠã®ã¢ãã«ãéäŸåã§èªã¿èŸŒãããšãå¿
èŠã«ãªããã¡ã¢ãªå®¹éã®å€§éšåãå ããããšããããŸããLoRA ã®èšèšã掻çšããLoRA ã§ãã¥ãŒãã³ã°ããåããªã¢ã³ãã«å¯ŸããŠäœã©ã³ã¯è¡å
A
ãš
B
ãšå
±ã«åäžåºæ¬ã¢ãã«ãèªã¿èŸŒãããšã§ãã¢ãã«ããšã«å°ããªäœã©ã³ã¯è¡åã§ãã¹ãŠã®æ
å ±ããã£ããã£ããŸãããã®ããã«ããŠãæ°åã® LLM ãä¿åããæå°ã® GPU ã¡ã¢ãª ãããããªã³ãã§åçãã€å¹ççã«å®è¡ã§ããŸãã
LoRA ãã¥ãŒãã³ã°
LoRA ãã¥ãŒãã³ã°ã§ã¯ãéåžžã¯ããã³ãã ãã³ãã¬ãŒãã䜿çšãããã¬ãŒãã³ã° ããŒã¿ã»ãããç¹å®ã®åœ¢åŒã§æºåããå¿
èŠããããŸããããã³ããã圢æãããšãããã¿ãŒã³ã決å®ããããã«åŸãå¿
èŠããããŸããããã¯åœç¶ãããŸããŸãªçšéã«ãã£ãŠç°ãªããŸãã質åãšåçã®äŸã以äžã«ç€ºããŸãã
{
"taskname": "squad",
"prompt_template": "<|VIRTUAL_PROMPT_0|> Context: {context}\n\nQuestion: {question}\n\nAnswer:{answer}",
"total_virtual_tokens": 10,
"virtual_token_splits": [10],
"truncate_field": "context",
"answer_only_loss": True,
"answer_field": "answer",
}
ãã®ããã³ããã«ã¯ãæåã« 10 åã®ä»®æ³ããŒã¯ã³å
šéšãå«ãŸããæèãšè³ªåãããã«ç¶ããæåŸã«åçãå ãããŸãããã¬ãŒãã³ã° ããŒã¿ JSON ãªããžã§ã¯ãã®å¯Ÿå¿ãããã£ãŒã«ãããã®ããã³ãã ãã³ãã¬ãŒãã«ãããã³ã°ãããå®å
šãªãã¬ãŒãã³ã°äŸã圢æãããŸãã
LLM ãã«ã¹ã¿ãã€ãºããããã®ãã©ãããã©ãŒã ãããã€ããããŸãã
NVIDIA NeMo
ã䜿çšãããã
Hugging Face PEFT
ãªã©ã®ããŒã«ã䜿çšããããšãã§ããŸããNeMo ã䜿çšããPubMed ããŒã¿ã»ãã㧠LoRA ããã¥ãŒãã³ã°ããæ¹æ³ã®äŸãå¿
èŠã§ããã°ãã
NeMo Framework PEFT with Llama2 and Mixtral-8x7B
ããã芧ãã ããã
ãã®èšäºã§ã¯ãHugging Face ã®ãã¥ãŒãã³ã°æžã¿ LLM ã䜿çšããŠããããããã¥ãŒãã³ã°ããå¿
èŠããªãããšã«ãçæãã ããã
LoRA æšè«
LoRA ã§ãã¥ãŒãã³ã°ãã LLM ã TensorRT-LLM ã§æé©åããã«ã¯ããã®ã¢ãŒããã¯ãã£ãç解ãããããæã䌌ãŠããå
±éã®åºæ¬ã®ã¢ãŒããã¯ãã£ãç¹å®ããå¿
èŠããããŸãããã®ãã¥ãŒããªã¢ã«ã§ã¯ãLlama 2 13B ãš Llama 2 7B ãåºæ¬ã¢ãã«ãšããŠäœ¿çšããŸãããŸããHugging Face ã§å©çšã§ããããã€ãã® LoRA ã§ãã¥ãŒãã³ã°ããããªã¢ã³ãã䜿çšããŸãã
æåã®æé ã§ã¯ãã³ã³ããŒã¿ãŒã䜿çšãããã®ãã£ã¬ã¯ããªã§ã¹ã¯ãªãããæ§ç¯ãããã¹ãŠã®ã¢ãã«ãã³ã³ãã€ã«ããããŒããŠã§ã¢ ã¢ã¯ã»ã©ã¬ãŒã·ã§ã³ã®æºåãããŸãã次ã«ãã³ãã³ã ã©ã€ã³ãš Triton Inference Server ã®äž¡æ¹ã䜿çšãããããã€äŸãã玹ä»ããŸãã
ããŒã¯ãã€ã¶ãŒã TensorRT-LLM ã«ãã£ãŠçŽæ¥åŠçãããããšã¯ãªãããšã«ã泚æãã ããããã ãå®è¡æã®ãããšãTriton ã§ååŠçãšåŸåŠçã®ã¹ããããèšå®ããããã«ã¯ãå®çŸ©æžã¿ã®ããŒã¯ãã€ã¶ãŒ ãã¡ããªå
ã§ãããåé¡ã§ããå¿
èŠããããŸãã
TensorRT-LLM ãèšå®ããŠãã«ããã
ãŸãã
NVIDIA/TensorRT-LLM
ã©ã€ãã©ãªãã¯ããŒã³ãããã«ãããŸããTensorRT-LLM ããã«ããããã®äŸåé¢ä¿ããã¹ãŠååŸããæãç°¡åãªæ¹æ³ã¯ãä»å±ã® Dockerfile ã䜿çšããããšã§ãã以äžã®ã³ãã³ãã§ã¯ãåºæ¬ã³ã³ãããŒã pull ããããã®ã³ã³ãããŒã®äžã« TensorRT-LLM ã«å¿
èŠãªãã¹ãŠã®äŸåé¢ä¿ãã€ã³ã¹ããŒã«ãããŸãã次ã«ãTensorRT-LLM èªäœããã«ããããã³ã³ãããŒã«ã€ã³ã¹ããŒã«ãããŸãã
git lfs install
git clone https://github.com/NVIDIA/TensorRT-LLM.git
cd TensorRT-LLM
git submodule update --init --recursive
make -C docker release_build
ã¢ãã«ã®éã¿ãååŸãã
åºæ¬ã¢ãã«ãš LoRA ã¢ãã«ã Hugging Face ããããŠã³ããŒãããŸãã
git-lfs clone
https://huggingface.co/meta-llama/Llama-2-13b-hf
git-lfs clone
https://huggingface.co/hfl/chinese-llama-2-lora-13b
ã¢ãã«ãã³ã³ãã€ã«ãã
ãšã³ãžã³ãæ§ç¯ã
ã
--use_lora_plugin
ãš
--hf_lora_dir
ãèšå®ããŸããLoRA ã«å¥ã®
lm_head
ãšåã蟌ã¿ãããå Žåãããã¯åºæ¬ã¢ãã«ã®
lm_head
ãšåã蟌ã¿ã眮ãæããŸãã
python convert_checkpoint.py --model_dir /tmp/llama-v2-13b-hf \
--output_dir ./tllm_checkpoint_2gpu_lora \
--dtype float16 \
--tp_size 2 \
--hf_lora_dir /tmp/chinese-llama-2-lora-13b
trtllm-build --checkpoint_dir ./tllm_checkpoint_2gpu_lora \
--output_dir /tmp/new_lora_13b/trt_engines/fp16/2-gpu/ \
--gpt_attention_plugin float16 \
--gemm_plugin float16 \
--lora_plugin float16 \
--max_batch_size 1 \
--max_input_len 512 \
--max_output_len 50 \
--use_fused_mlp
ã¢ãã«ãå®è¡ãã
æšè«äžã«ã¢ãã«ãå®è¡ããã«ã¯ã
lora_dir
ã³ãã³ã ã©ã€ã³åŒæ°ãèšå®ããŸããLoRA ã§ãã¥ãŒãã³ã°ããã¢ãã«ã§ã¯èªåœãµã€ãºã倧ãããããLoRA ããŒã¯ãã€ã¶ãŒãå¿
ã䜿çšããŠãã ããã
mpirun -n 2 python ../run.py --engine_dir "/tmp/new_lora_13b/trt_engines/fp16/2-gpu/" \
--max_output_len 50 \
--tokenizer_dir "chinese-llama-2-lora-13b/" \
--input_text "ä»å€©å€©æ°åŸå¥œïŒæå°å
¬åçæ¶åïŒ" \
--lora_dir "chinese-llama-2-lora-13b/" \
--lora_task_uids 0 \
--no_add_special_tokens \
--use_py_session
Input: "ä»å€©å€©æ°åŸå¥œïŒæå°å
¬åçæ¶åïŒ"
Output: "åç°å
¬åé人åŸå€ïŒæçåšæ矜æ¯çïŒæçåšæä¹ä¹çïŒæçåšè·³ç»³ïŒè¿æçåšè·æ¥ãæååŠåŠæ¥å°äžäžªç©ºå°äžïŒæååŠåŠäžèµ·è·³ç»³ïŒæè·³äº1"
LoRA ã§ãã¥ãŒãã³ã°ããã¢ãã«ã®åœ±é¿ããã¢ãã¬ãŒã·ã§ã³ ãã¹ããå®è¡ããŠçŽæ¥ç¢ºèªããããšãã§ããŸããLoRA ãããå Žåãšãªãå Žåã®çµæãç°¡åã«æ¯èŒããã«ã¯ã
--lora_task_uids -1
ã䜿çšã㊠UID ã -1 ã«èšå®ããŸãããã®å Žåãã¢ãã«ã¯ LoRA ã¢ãžã¥ãŒã«ãç¡èŠããçµæã¯åºæ¬ã¢ãã«ã®ã¿ã«åºã¥ããŸãã
mpirun -n 2 python ../run.py --engine_dir "/tmp/new_lora_13b/trt_engines/fp16/2-gpu/" \
--max_output_len 50 \
--tokenizer_dir "chinese-llama-2-lora-13b/" \
--input_text "ä»å€©å€©æ°åŸå¥œïŒæå°å
¬åçæ¶åïŒ" \
--lora_dir "chinese-llama-2-lora-13b/" \
--lora_task_uids -1 \
--no_add_special_tokens \
--use_py_session
Input: "ä»å€©å€©æ°åŸå¥œïŒæå°å
¬åçæ¶åïŒ"
Output: "æçè§äžäžªäººååšé£èŸ¹èŸ¹ç乊乊ïŒæçèµ·æ¥è¿æºåäœ ïŒå¯æ¯æèµ°è¿è¿å»é®äºäžäžä»è¯Žäœ æ¯äœ åïŒä»è¯Žæ²¡æïŒç¶åæå°±è¯Žäœ çæççäœ åäœ ïŒä»è¯Žè¯Žäœ çæåäœ ïŒæè¯Žäœ æ¯äœ ïŒä»è¯Žäœ æ¯äœ ïŒ"
LoRA ã§ãã¥ãŒãã³ã°ããè€æ°ã®ã¢ãã«ãšåºæ¬ã¢ãã«ãåæå®è¡
ãŸããTensorRT-LLM ã¯ãLoRA ã§ãã¥ãŒãã³ã°ããè€æ°ã®ã¢ãžã¥ãŒã«ãšåäžã®åºæ¬ã¢ãã«ãåæã«å®è¡ããããšãã§ããŸããããã§ã¯ã2 ã€ã® LoRA ãã§ãã¯ãã€ã³ããäŸãšããŠäœ¿çšããŸããäž¡æ¹ã®ãã§ãã¯ãã€ã³ãã® LoRA ã¢ãžã¥ãŒã«ã®ã©ã³ã¯
㯠8 ã§ãããããLoRA ãã©ã°ã€ã³ã®ã¡ã¢ãªèŠä»¶ãæžããããã«
--max_lora_rank
ã 8 ã«èšå®ã§ããŸãã
ãã®äŸã§ã¯ãäžåœèªããŒã¿ã»ãã chinese-llama-lora-7b ã§ãã¡ã€ã³ãã¥ãŒãã³ã°ããã LoRA ãã§ãã¯ãã€ã³ããšãæ¥æ¬èªããŒã¿ã»ãã Japanese-Alpaca-LoRA-7b-v0 ã§ãã¡ã€ã³ãã¥ãŒãã³ã°ããã LoRA ãã§ãã¯ãã€ã³ãã䜿çšããŠããŸããTensorRT-LLM ã§è€æ°ã®ãã§ãã¯ãã€ã³ããèªã¿èŸŒãã«ã¯ã
--lora_dir "chinese-llama-lora-7b/"
"Japanese-Alpaca-LoRA-7b-v0/"
çµç±ã§å
š LoRA ãã§ãã¯ãã€ã³ãã®ãã£ã¬ã¯ããªãæž¡ããŸããTensorRT-LLM ã¯
lora_task_uids
ããããã®ãã§ãã¯ãã€ã³ãã«å²ãåœãŠãŸãã
lora_task_uids -1
ã¯åºæ¬ã¢ãã«ã«å¯Ÿå¿ããäºåå®çŸ©æžã¿ã®å€ã§ããããšãã°ã
lora_task_uids 0 1
ãæž¡ããšãæåã®æã§æåã® LoRA ãã§ãã¯ãã€ã³ãã䜿çšããã2 çªç®ã®æ㧠2 çªç®ã® LoRA ãã§ãã¯ãã€ã³ãã䜿çšãããŸãã
æ£ããããšã確èªããã«ã¯ãäžåœèªã®å
¥åãçŸåœçéŠéœåšåªé? \nçæ¡:ãã 3 åæž¡ããæ¥æ¬èªã®å
¥åãã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:ãã 3 åæž¡ããŸãã(è±èªã§ã¯ããããã®å
¥åããWhere is the capital of America? \nAnswerããæå³ããŸã)ã次ã«ãåºæ¬ã¢ãã«ã§ chinese-llama-lora-7b ãš Japanese-Alpaca-LoRA-7b-v0 ãããããå®è¡ããŸãã
git-lfs clone
https://huggingface.co/hfl/chinese-llama-lora-7b
git-lfs clone
https://huggingface.co/kunishou/Japanese-Alpaca-LoRA-7b-v0
BASE_LLAMA_MODEL=llama-7b-hf/
python convert_checkpoint.py --model_dir ${BASE_LLAMA_MODEL} \
--output_dir ./tllm_checkpoint_1gpu_lora_rank \
--dtype float16 \
--hf_lora_dir /tmp/Japanese-Alpaca-LoRA-7b-v0 \
--max_lora_rank 8 \
--lora_target_modules "attn_q" "attn_k" "attn_v"
trtllm-build --checkpoint_dir ./tllm_checkpoint_1gpu_lora_rank \
--output_dir /tmp/llama_7b_with_lora_qkv/trt_engines/fp16/1-gpu/ \
--gpt_attention_plugin float16 \
--gemm_plugin float16 \
--lora_plugin float16 \
--max_batch_size 1 \
--max_input_len 512 \
--max_output_len 50
python ../run.py --engine_dir "/tmp/llama_7b_with_lora_qkv/trt_engines/fp16/1-gpu/" \
--max_output_len 10 \
--tokenizer_dir ${BASE_LLAMA_MODEL} \
--input_text "çŸåœçéŠéœåšåªé? \nçæ¡:" "çŸåœçéŠéœåšåªé? \nçæ¡:" "çŸåœçéŠéœåšåªé? \nçæ¡:" "ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:" "ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:" "ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:" \
--lora_dir "chinese-llama-lora-7b" "Japanese-Alpaca-LoRA-7b-v0/" \
--lora_task_uids -1 0 1 -1 0 1 \
--use_py_session --top_p 0.5 --top_k 0
çµæã以äžã«ç€ºããŸãã
Input [Text 0]: "<s> çŸåœçéŠéœåšåªé? \nçæ¡:"
Output [Text 0 Beam 0]: "Washington, D.C.
What is the"
Input [Text 1]: "<s> çŸåœçéŠéœåšåªé? \nçæ¡:"
Output [Text 1 Beam 0]: "åçé¡¿ã
"
Input [Text 2]: "<s> çŸåœçéŠéœåšåªé? \nçæ¡:"
Output [Text 2 Beam 0]: "Washington D.C.'''''"
Input [Text 3]: "<s> ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:"
Output [Text 3 Beam 0]: "Washington, D.C.
Which of"
Input [Text 4]: "<s> ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:"
Output [Text 4 Beam 0]: "åçé¡¿ã
"
Input [Text 5]: "<s> ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:"
Output [Text 5 Beam 0]: "ã¯ã·ã³ãã³ D.C."
chinese-llama-lora-7b ã«ãããæåã®æãš 5 çªç®ã®æã§æ£ããçã (äžåœèª) ãåºãããšã«ã泚ç®ãã ãããJapanese-Alpaca-LoRA-7b-v0 ã«ããã6 çªç®ã®æã§æ£ããçã (æ¥æ¬èª) ãçæããŸãã
éèŠãªæ³šæ:
LoRA ã¢ãžã¥ãŒã«ã®ã²ãšã€ã«ãã¡ã€ã³ãã¥ãŒãã³ã°ãããåã蟌ã¿ããŒãã«ãŸã㯠logit GEMM ãå«ãŸããŠããå Žåãåãããã¡ã€ã³ãã¥ãŒãã³ã°ãããåã蟌ã¿ããŒãã«ãŸã㯠logit GEMM ãã¢ãã«ã®å
šã€ã³ã¹ã¿ã³ã¹ã§äœ¿çšã§ããããããŠãŒã¶ãŒã¯åãèšããå¿
èŠããããŸãã
LoRA ã§ãã¥ãŒãã³ã°ããã¢ãã«ã Triton ãšã€ã³ãã©ã€ã ãããåŠçã§ãããã€ãã
ãã®ã»ã¯ã·ã§ã³ã§ã¯ãLoRA ã§ãã¥ãŒãã³ã°ããã¢ãã«ããTriton Inference Server ã§ã€ã³ãã©ã€ã ãããåŠçã䜿çšããŠãããã€ããæ¹æ³ã瀺ããŸããTriton Inference Server ã®èšå®ãšèµ·åã«é¢ããå
·äœçãªæé ã«ã€ããŠã¯ãã
Deploy an AI Coding Assistant with NVIDIA TensorRT-LLM and NVIDIA Triton
ããåç
§ããŠãã ããã
åãšåãããã«ããŸããLoRA ãæå¹ã«ããŠã¢ãã«ãã³ã³ãã€ã«ããŸããä»åã¯åºæ¬ã¢ãã«ã® Llama 2 7B ã§ã³ã³ãã€ã«ããŸãã
BASE_MODEL=llama-7b-hf
python3 tensorrt_llm/examples/llama/build.py --model_dir ${BASE_MODEL} \
--dtype float16 \
--remove_input_padding \
--use_gpt_attention_plugin float16 \
--enable_context_fmha \
--use_gemm_plugin float16 \
--output_dir "/tmp/llama_7b_with_lora_qkv/trt_engines/fp16/1-gpu/" \
--max_batch_size 128 \
--max_input_len 512 \
--max_output_len 50 \
--use_lora_plugin float16 \
--lora_target_modules "attn_q" "attn_k" "attn_v" \
--use_inflight_batching \
--paged_kv_cache \
--max_lora_rank 8 \
--world_size 1 --tp_size 1
次ã«ããªã¯ãšã¹ãããšã« Triton ã«æž¡ããã LoRA ãã³ãœã«ãçæããŸãã
git-lfs clone
https://huggingface.co/hfl/chinese-llama-lora-7b
git-lfs clone
https://huggingface.co/kunishou/Japanese-Alpaca-LoRA-7b-v0
python3 tensorrt_llm/examples/hf_lora_convert.py -i Japanese-Alpaca-LoRA-7b-v0 -o Japanese-Alpaca-LoRA-7b-v0-weights --storage-type float16
python3 tensorrt_llm/examples/hf_lora_convert.py -i chinese-llama-lora-7b -o chinese-llama-lora-7b-weights --storage-type float16
ãããŠãTriton ã¢ãã« ãªããžããªãäœæããåè¿°ã®ããã« Triton ãµãŒããŒãèµ·åããŸãã
æåŸã«ãã¯ã©ã€ã¢ã³ãããè€æ°ã®åæãªã¯ãšã¹ããçºè¡ã㊠multi-LoRA ã®äŸãå®è¡ããŸããã€ã³ãã©ã€ã ãããã£ãŒã«ãããè€æ°ã® LoRA ãæ··åšããããããåããããã§å®è¡ãããŸãã
INPUT_TEXT=("çŸåœçéŠéœåšåªé? \nçæ¡:" "çŸåœçéŠéœåšåªé? \nçæ¡:" "çŸåœçéŠéœåšåªé? \nçæ¡:" "ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:" "ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:" "ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:")
LORA_PATHS=("" "chinese-llama-lora-7b-weights" "Japanese-Alpaca-LoRA-7b-v0-weights" "" "chinese-llama-lora-7b-weights" "Japanese-Alpaca-LoRA-7b-v0-weights")
for index in ${!INPUT_TEXT[@]}; do
text=${INPUT_TEXT[$index]}
lora_path=${LORA_PATHS[$index]}
lora_arg=""
if [ "${lora_path}" != "" ]; then
lora_arg="--lora-path ${lora_path}"
fi
python3 inflight_batcher_llm/client/inflight_batcher_llm_client.py \
--top-k 0 \
--top-p 0.5 \
--request-output-len 10 \
--text "${text}" \
--tokenizer-dir /home/scratch.trt_llm_data/llm-models/llama-models/llama-7b-hf \
${lora_arg} &
done
wait
åºåäŸã以äžã«ç€ºããŸãã
Input sequence: [1, 29871, 30310, 30604, 30303, 30439, 30733, 235, 164, 137, 30356, 30199, 31688, 30769, 30449, 31250, 30589, 30499, 30427, 30412, 29973, 320, 29876, 234, 176, 151, 30914, 29901]
Input sequence: [1, 29871, 30630, 30356, 30210, 31688, 30769, 30505, 232, 150, 173, 30755, 29973, 320, 29876, 234, 176, 151, 233, 164, 139, 29901]
Input sequence: [1, 29871, 30630, 30356, 30210, 31688, 30769, 30505, 232, 150, 173, 30755, 29973, 320, 29876, 234, 176, 151, 233, 164, 139, 29901]
Input sequence: [1, 29871, 30310, 30604, 30303, 30439, 30733, 235, 164, 137, 30356, 30199, 31688, 30769, 30449, 31250, 30589, 30499, 30427, 30412, 29973, 320, 29876, 234, 176, 151, 30914, 29901]
Input sequence: [1, 29871, 30310, 30604, 30303, 30439, 30733, 235, 164, 137, 30356, 30199, 31688, 30769, 30449, 31250, 30589, 30499, 30427, 30412, 29973, 320, 29876, 234, 176, 151, 30914, 29901]
Input sequence: [1, 29871, 30630, 30356, 30210, 31688, 30769, 30505, 232, 150, 173, 30755, 29973, 320, 29876, 234, 176, 151, 233, 164, 139, 29901]
Got completed request
Input: ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:
Output beam 0: ã¯ã·ã³ãã³ D.C.
Output sequence: [1, 29871, 30310, 30604, 30303, 30439, 30733, 235, 164, 137, 30356, 30199, 31688, 30769, 30449, 31250, 30589, 30499, 30427, 30412, 29973, 320, 29876, 234, 176, 151, 30914, 29901, 29871, 31028, 30373, 30203, 30279, 30203, 360, 29889, 29907, 29889]
Got completed request
Input: çŸåœçéŠéœåšåªé? \nçæ¡:
Output beam 0: Washington, D.C.
What is the
Output sequence: [1, 29871, 30630, 30356, 30210, 31688, 30769, 30505, 232, 150, 173, 30755, 29973, 320, 29876, 234, 176, 151, 233, 164, 139, 29901, 7660, 29892, 360, 29889, 29907, 29889, 13, 5618, 338, 278]
Got completed request
Input: çŸåœçéŠéœåšåªé? \nçæ¡:
Output beam 0: Washington D.C.
Washington D.
Output sequence: [1, 29871, 30630, 30356, 30210, 31688, 30769, 30505, 232, 150, 173, 30755, 29973, 320, 29876, 234, 176, 151, 233, 164, 139, 29901, 7660, 360, 29889, 29907, 29889, 13, 29956, 7321, 360, 29889]
Got completed request
Input: ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:
Output beam 0: Washington, D.C.
Which of
Output sequence: [1, 29871, 30310, 30604, 30303, 30439, 30733, 235, 164, 137, 30356, 30199, 31688, 30769, 30449, 31250, 30589, 30499, 30427, 30412, 29973, 320, 29876, 234, 176, 151, 30914, 29901, 7660, 29892, 360, 29889, 29907, 29889, 13, 8809, 436, 310]
Got completed request
Input: ã¢ã¡ãªã«åè¡åœã®éŠéœã¯ã©ãã§ãã? \nçã:
Output beam 0: Washington D.C.
1. ã¢
Output sequence: [1, 29871, 30310, 30604, 30303, 30439, 30733, 235, 164, 137, 30356, 30199, 31688, 30769, 30449, 31250, 30589, 30499, 30427, 30412, 29973, 320, 29876, 234, 176, 151, 30914, 29901, 7660, 360, 29889, 29907, 29889, 13, 29896, 29889, 29871, 30310]
Got completed request
Input: çŸåœçéŠéœåšåªé? \nçæ¡:
Output beam 0: åçé¡¿
W
Output sequence: [1, 29871, 30630, 30356, 30210, 31688, 30769, 30505, 232, 150, 173, 30755, 29973, 320, 29876, 234, 176, 151, 233, 164, 1
ãŸãšã
å€ãã®äžè¬ç㪠LLM ã¢ãŒããã¯ãã£ãããŒã¹ã©ã€ã³ ãµããŒããã TensorRT-LLM ã¯ãããŸããŸãªã³ãŒã LLM ã«ãããããã€ãå®éšãæé©åãç°¡åã«ããŸããNVIDIA TensorRT-LLM ãš NVIDIA Triton Inference Server ãå
±ã«ãLLM ãå¹ççã«æé©åããããã€ãå®è¡ããããã«äžå¯æ¬ ãªããŒã«ããããæäŸããŸããLoRA ã§ãã¥ãŒãã³ã°ããã¢ãã«ããµããŒãããã TensorRT-LLM ã§ã¯ãã«ã¹ã¿ãã€ãºããã LLM ãå¹ççã«ãããã€ã§ãããããã¡ã¢ãª ã³ã¹ããšèšç®ã³ã¹ãã倧å¹
ã«åæžãããŸãã
ãŸãã¯ã
NVIDIA/TensorRT-LLM
ãªãŒãã³ãœãŒã¹ ã©ã€ãã©ãªãããŠã³ããŒãããŠèšå®ããããŸããŸãª
ãµã³ãã« LLM
ãè©ŠããŠã¿ãŠãã ããã
NVIDIA NeMo
ã䜿çšããã°ç¬èªã® LLM ããã¥ãŒãã³ã°ã§ããŸããäŸã«ã€ããŠã¯ãã
NeMo Framework PEFT with Llama2 and Mixtral-8x7B
ããåç
§ããŠãã ããããããã¯ã
NeMo Framework Inference Container
ã䜿çšããŠãããã€ããããšãã§ããŸãã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
NeMoãTensorRT-LLMãTriton Inference Server ã®ã¢ã¯ã»ã©ã¬ãŒããã LLM ã¢ãã«ã®ã¢ã©ã€ã¡ã³ããšãããã€
GTC ã»ãã·ã§ã³:
Oracle Container Engine for Kubernetes ã䜿çšããNVIDIA Nemotron LLM ããã¡ã€ã³ãã¥ãŒãã³ã°ããOCI ã«ãããã€ãã (Oracle æäŸ)
GTC ã»ãã·ã§ã³:
ããã¹ãçæã« TensorRT-LLM ã䜿çšãã LLM ã®æé©åãšã¹ã±ãŒãªã³ã°
NGC Containers:
TensorRT PB May (PB 24h1)
NGC Containers:
TensorRT
SDK:
NeMo Inferencing Microservice |
https://developer.nvidia.com/blog/three-building-blocks-for-creating-ai-virtual-assistants-for-customer-service-with-an-nvidia-nim-agent-blueprint/ | Three Building Blocks for Creating AI Virtual Assistants for Customer Service with an NVIDIA AI Blueprint | In today's fast-paced business environment, providing exceptional customer service is no longer just a nice-to-have; it's a necessity. Whether addressing technical issues, resolving billing questions, or providing service updates, customers expect quick, accurate, and personalized responses at their convenience. However, achieving this level of service comes with significant challenges.
Legacy approaches, such as static scripts or manual processes, often fall short when it comes to delivering personalized and real-time support. Additionally, many customer service operations rely on sensitive and fragmented data, which is subject to strict data governance and privacy regulations. With the rise of generative AI, companies aim to revolutionize customer service by enhancing operational efficiency, cutting costs, and maximizing ROI.
Integrating AI into existing systems presents challenges related to transparency, accuracy, and security, which can impede adoption and disrupt workflows. To overcome these hurdles, companies are leveraging generative AI-powered virtual assistants to manage a wide range of tasks, ultimately improving response times and freeing up resources.
This post outlines how developers can use the
NVIDIA AI Blueprint for AI virtual assistants
to scale operations with generative AI. By leveraging this information, including sample code, businesses can meet the growing demands for exceptional customer service while ensuring data integrity and governance. Whether improving existing systems or creating new ones, this blueprint empowers teams to meet customer needs with efficient and meaningful interactions.
Smarter AI virtual assistants with an AI query engine using retrieval-augmented generation
When building an AI virtual assistant, it's important to align with the unique use case requirements, institutional knowledge, and needs of the organization. Traditional bots, however, often rely on rigid frameworks and outdated methods that struggle to meet the evolving demands of today's customer service landscape.
Across every industry, AI-based assistants can be transformational. For example, telecommunications companies, and the majority of retail and service providers, can use AI virtual assistants to enhance customer experience by offering support 24 hours a day, 7 days a week while handling a wide range of customer queries in multiple languages and providing dynamic, personalized interactions that streamline troubleshooting and account management. This helps reduce wait times and ensures consistent service across diverse customer needs.
Another example is within the healthcare insurance payor industry, where ensuring a positive member experience is critical. Virtual assistants enhance this experience by providing personalized support to members, addressing their claims, coverage inquiries, benefits, and payment issues, all while ensuring compliance with healthcare regulations. This also helps reduce the administrative burden on healthcare workers.
With the NVIDIA AI platform, organizations can create an AI query engine that uses
retrieval-augmented generation (RAG)
to connect AI applications to enterprise data. The AI virtual assistant blueprint enables developers to quickly get started building solutions that provide enhanced customer experiences. It is built using the following
NVIDIA NIM
microservices:
NVIDIA NIM for LLM:
Brings the power of state-of-the-art large language models (LLMs) to applications, providing unmatched natural language processing with remarkable efficiency.
Llama 3.1 70B Instruct NIM
:
Powers complex conversations with superior contextual understanding, reasoning, and text generation.
NVIDIA NeMo
Retriever NIM:
This collection provides easy access to state-of-the-art models that serve as foundational building blocks for RAG pipelines. These pipelines, when integrated into virtual assistant solutions, enable seamless access to enterprise data, unlocking institutional knowledge via fast, accurate, and scalable answers.
NeMo
Retriever Embedding NIM
:
Boosts text question-answering retrieval performance, providing high-quality embeddings for the downstream virtual assistant.
NeMo
Retriever Reranking NIM
:
Enhances the retrieval performance further with a fine-tuned reranker, finding the most relevant passages to provide as context when querying an LLM.
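To make the flow concrete, here is a minimal sketch of the retrieval path that strings these microservices together through their OpenAI-compatible endpoints. The endpoint URL, model names, and the input_type extra parameter mirror the hosted API-catalog examples and should be treated as assumptions; a reranking step and a production vector database are omitted for brevity.
import os
import numpy as np
from openai import OpenAI

# OpenAI-compatible client pointed at the NVIDIA API catalog (or a self-hosted NIM endpoint).
client = OpenAI(base_url="https://integrate.api.nvidia.com/v1",
                api_key=os.environ["NVIDIA_API_KEY"])

passages = [
    "Refunds are issued to the original payment method within 5-7 business days.",
    "Premium support is available 24/7 via chat for enterprise customers.",
]

def embed(texts, input_type):
    # NeMo Retriever embedding NIM; model name and input_type follow catalog docs (assumptions).
    out = client.embeddings.create(
        model="nvidia/nv-embedqa-e5-v5",
        input=texts,
        extra_body={"input_type": input_type, "truncate": "END"})
    return np.array([d.embedding for d in out.data])

doc_vecs = embed(passages, "passage")                  # ingest: embed the knowledge base
query = "How long does a refund take?"
q_vec = embed([query], "query")[0]                     # embed the user query
scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
context = passages[int(scores.argmax())]               # a reranking NIM would refine this step

answer = client.chat.completions.create(
    model="meta/llama-3.1-70b-instruct",
    messages=[{"role": "user",
               "content": f"Answer using only this context:\n{context}\n\nQuestion: {query}"}])
print(answer.choices[0].message.content)
In the full blueprint, the ingestion pipeline populates a vector database instead of an in-memory list, and the reranking NIM sits between retrieval and generation.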
The blueprint is designed to integrate seamlessly with existing customer service applications without breaking information security mandates. Thanks to the portability of NVIDIA NIM, organizations can integrate data wherever it resides. By bringing generative AI to the data, this architecture enables AI virtual assistants to provide more personalized experiences tailored to each customer by leveraging their unique profiles, user interaction histories, and other relevant data.
A blueprint is a starting point that can be customized for an enterpriseâs unique use case. For example, integrate other NIM microservices, such as the
Nemotron 4 Hindi 4B Instruct
, to enable an AI virtual assistant to communicate in the local language. Other microservices can enable additional capabilities such as synthetic data generation and model fine-tuning to better align with your specific use case requirements. Give the AI virtual assistant a humanlike interface when connected to the digital human AI Blueprint.
With the implementation of a RAG backend with proprietary data (both company and user profile and their specific data), the AI virtual assistant can engage in highly contextual conversations, addressing the specifics of each customer's needs in real-time. Additionally, the solution operates securely within your existing governance frameworks, ensuring compliance with privacy and security protocols especially when working with sensitive data.
Three building blocks for creating your own AI virtual assistant
As a developer, you can build your own AI virtual assistant that retrieves the most relevant and up-to-date information, in real time, with ever-improving humanlike responses. Figure 1 shows the AI virtual assistant architecture diagram which includes three functional components.
Figure 1. The NVIDIA AI Blueprint for AI virtual assistants
1. Data ingestion and retrieval pipeline
Pipeline administrators use the ingestion pipeline to load structured and unstructured data into the databases. Examples of structured data include customer profiles, order history, and order status. Unstructured data includes product manuals, the product catalog, and supporting material such as FAQ documents.
2. AI agent
The AI virtual assistant is the second functional component. Users interact with the virtual assistant through a user interface. An AI agent, implemented in the LangGraph agentic LLM programming framework, plans how to handle complex customer queries and solves recursively. The LangGraph agent uses the tool calling feature of the
Llama 3.1 70B Instruct NIM
to retrieve information from both the unstructured and structured data sources, then generates an accurate response.
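As an illustration of the tool-calling pattern described above, the sketch below calls the Llama 3.1 70B Instruct NIM through its OpenAI-compatible API with a single hypothetical get_order_status tool; the base URL, model name, and tool are assumptions for the example, not code from the blueprint.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")  # assumed local NIM endpoint

tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",              # hypothetical tool backed by the structured database
        "description": "Look up the status of a customer order by order ID.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

messages = [{"role": "user", "content": "Where is my order 8123?"}]
resp = client.chat.completions.create(
    model="meta/llama-3.1-70b-instruct", messages=messages, tools=tools)

call = resp.choices[0].message.tool_calls[0]     # the model decides to call the tool
order_id = json.loads(call.function.arguments)["order_id"]
result = {"order_id": order_id, "status": "shipped"}   # stand-in for the real database lookup

messages += [resp.choices[0].message,
             {"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)}]
final = client.chat.completions.create(
    model="meta/llama-3.1-70b-instruct", messages=messages, tools=tools)
print(final.choices[0].message.content)          # grounded, natural-language reply to the customer
In the blueprint itself, LangGraph wraps this request/tool/response loop so the agent can plan across multiple tools and conversation turns rather than a single call.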
The AI agent also uses short-term and long-term memory functions to enable multi-turn conversation history. The active conversation queries and responses are embedded so they can be retrieved later in the conversation as additional context. This allows more human-like interactions and eliminates the need for customers to repeat information they've already shared with the agent.
Finally, at the end of the conversation, the AI agent summarizes the discussion along with a sentiment determination and stores the conversation history in the structured database. Subsequent interactions from the same user can be retrieved as additional context in future conversations. Call summarization and conversation history retrieval can reduce call time and improve customer experience. Sentiment determination can provide valuable insights to the customer service administrator regarding the agent's effectiveness.
3. Operations pipeline
The customer operations pipeline is the third functional component of the overall solution. This pipeline provides important information and insight to the customer service operators. Administrators can use the operations pipeline to review chat history, user feedback, sentiment analysis data, and call summaries. The analytics microservice, which leverages the Llama 3.1 70B Instruct NIM, can be used to generate analytics such as average call time, time to resolution, and customer satisfaction. The analytics are also leveraged as user feedback to retrain the LLM models to improve accuracy.
You can find the complete example of how to get started with this Blueprint on the
NVIDIA AI Blueprint GitHub repository.
Get to production with NVIDIA partners
NVIDIA consulting partners are helping enterprises adopt world-class AI virtual assistants built using NVIDIA accelerated computing and
NVIDIA AI Enterprise software
, which includes NeMo, NIM microservices, and AI Blueprints.
Accenture
The Accenture AI Refinery
built on
NVIDIA AI Foundry
helps design autonomous, intent-driven customer interactions, enabling businesses to tailor the journey to the individual through innovative channels such as digital humans or interaction agents. Specific use cases can be tailored to meet the needs of each industry, for example, telco call centers, insurance policy advisors, pharmaceutical interactive agents or automotive dealer network agents.
Deloitte
Deloitte Frontline AI enhances the customer service experience with digital avatars and LLM agents built with NVIDIA AI Blueprints that are accelerated by NVIDIA technologies such as NVIDIA ACE, NVIDIA Omniverse, NVIDIA Riva, and NIM.
Wipro
Wipro Enterprise Generative AI (WeGA) Studio accelerates industry-specific use cases including contact center agents across healthcare, financial services, retail, and more.
Tech Mahindra
Tech Mahindra is leveraging the NVIDIA AI Blueprint for digital humans to build solutions for customer service. Using RAG and NVIDIA NeMo, the solution provides the ability for a trainee to stop an agent during a conversation by raising a hand to ask clarifying questions. The system is designed to connect with microservices on the backend (with a refined learning management system), and it can be deployed across many industry use cases.
Infosys
Infosys Cortex
, part of
Infosys Topaz
, is an AI-driven customer engagement platform that integrates NVIDIA AI Blueprints and the NVIDIA NeMo, Riva, and ACE technologies for generative AI, speech AI, and digital human capabilities to deliver specialized and individualized, proactive, and on-demand assistance to every member of a customer service organization, consequently playing a pivotal role in enhancing customer experience, improving operational efficiency, and reducing costs.
Tata Consultancy Services
The Tata Consultancy Services (TCS) virtual agent, powered by NVIDIA NIM and integrated with ServiceNow's IT Virtual Agent, is designed to optimize IT and HR support. This solution uses prompt-tuning and RAG to improve response times and accuracy and to provide multi-turn conversational capabilities. Benefits include reduced service desk costs, fewer support tickets, enhanced knowledge utilization, faster deployment, and a better overall employee and customer experience.
Quantiphi
Quantiphi
is integrating NVIDIA AI Blueprints into its conversational AI solutions to enhance customer service with lifelike digital avatars. These state-of-the-art avatars, powered by NVIDIA Tokkio and ACE technologies,
NVIDIA NIM microservices
and
NVIDIA NeMo
, seamlessly integrate with existing enterprise applications, enhancing operations and customer experiences with increased realism. Fine-tuned NIM deployments for digital avatar workflows have proven to be highly cost-effective, reducing enterprise spending on tokens.
SoftServe
SoftServe Digital Concierge
, accelerated by NVIDIA AI Blueprints and NVIDIA NIM microservices, uses NVIDIA ACE, NVIDIA Riva, and the NVIDIA Audio2Face NIM microservice to deliver a highly realistic virtual assistant. Thanks to the Character Creator tool, it delivers speech and facial expressions with remarkable accuracy and lifelike detail.
With RAG capabilities from NVIDIA NeMo Retriever, SoftServe Digital Concierge can intelligently respond to customer queries by referencing context and delivering specific, up-to-date information. It simplifies complex queries into clear, concise answers and can also provide detailed explanations when needed.
EXL's Smart Agent Assist offering is a contact center AI solution leveraging NVIDIA Riva, NVIDIA NeMo, and NVIDIA NIM microservices. EXL plans to augment their solution using the NVIDIA AI Blueprint for AI virtual agents.
EXLâs Smart Agent Assist offering is a contact center AI solution leveraging NVIDIA Riva, NVIDIA NeMo, and NVIDIA NIM microservices. EXL plans to augment their solution using the NVIDIA AI Blueprint for AI virtual agents.
This week at
NVIDIA AI Summit India
, NVIDIA consulting partners announced a collaboration with NVIDIA to transform India into a Front Office for AI. Using NVIDIA technologies, these consulting giants can help customers tailor the customer service agent blueprint to build unique virtual assistants using their preferred AI model, including sovereign LLMs from India-based model makers, and run it in production efficiently on the infrastructure of their choice.
Get started
To try the blueprint for free, and to see system requirements, navigate to the
Blueprint Card
.
To start building applications using those microservices, visit the
NVIDIA API catalog
. To
sign in
, you'll be prompted to enter a personal or business email address to access different options for building with NIM. For more information, see the
NVIDIA NIM FAQ
.
This post was originally published on 10/23/2024. | https://developer.nvidia.com/ja-jp/blog/three-building-blocks-for-creating-ai-virtual-assistants-for-customer-service-with-an-nvidia-nim-agent-blueprint/ | NVIDIA AI Blueprint ã§ã«ã¹ã¿ã㌠ãµãŒãã¹åãã® AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ããäœæãã 3 ã€ã®æ§æèŠçŽ | Reading Time:
2
minutes
ä»æ¥ã®ããŸããããããžãã¹ç°å¢ã§ã¯ãåªããã«ã¹ã¿ã㌠ãµãŒãã¹ãæäŸããããšã¯ããã¯ãåã«ãããã°è¯ãããšãã§ã¯ãªãããå¿
èŠäžå¯æ¬ ãªããšãã§ããæè¡çãªåé¡ãžã®å¯Ÿå¿ãè«æ±ã«é¢ãã質åã®è§£æ±ºããµãŒãã¹ã®ææ°æ
å ±ã®æäŸãªã©ã顧客ã¯ãè¿
éãã€æ£ç¢ºã§ã顧客ã®éœåã«ã«ã¹ã¿ãã€ãºããã察å¿ãæåŸ
ããŠããŸãããããããã®ã¬ãã«ã®ãµãŒãã¹ãå®çŸããã«ã¯ã倧ããªèª²é¡ã䌎ããŸãã
ããŒãœãã©ã€ãºããããªã¢ã«ã¿ã€ã ã®ãµããŒããæäŸããã«ã¯ãå€ãã®å Žåãéçãªã¹ã¯ãªãããæäœæ¥ã«ããããã»ã¹ãšãã£ãåŸæ¥ã®ã¢ãããŒãã§ã¯äžååã§ããããã«ãå€ãã®ã«ã¹ã¿ã㌠ãµãŒãã¹æ¥åã§ã¯ãæ©å¯æ§ãé«ããã€æççãªããŒã¿ãåãæ±ãããšã«ãªããå³ããããŒã¿ç®¡çãšãã©ã€ãã·ãŒèŠå¶ã®å¯Ÿè±¡ãšãªããŸããçæ AI ã®å°é ã«ãããäŒæ¥ã¯éçšå¹çã®åäžãã³ã¹ãåæžãROI ã®æ倧åã«ãã£ãŠã«ã¹ã¿ã㌠ãµãŒãã¹ã«é©åœãèµ·ããããšãç®æããŠããŸãã
AI ãæ¢åã®ã·ã¹ãã ã«çµã¿èŸŒãéã«ã¯ãéææ§ã粟床ãã»ãã¥ãªãã£ã«é¢ãã課é¡ã«çŽé¢ããå°å
¥ã劚ããã¯ãŒã¯ãããŒãäžæãããããšããããããããŸãããããããããŒãã«ãå
æããããã«ãäŒæ¥ã¯çæ AI ã掻çšããããŒãã£ã« ã¢ã·ã¹ã¿ã³ããå©çšããŠå¹
åºãã¿ã¹ã¯ã管çããæçµçã«å¿çæéãççž®ããŠããªãœãŒã¹ã解æŸããŠããŸãã
ãã®æçš¿ã§ã¯ãéçºè
ãã
AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã« NVIDIA AI Blueprint
ã䜿çšããŠãçæ AI ã§æ¥åãæ¡åŒµããæ¹æ³ã«ã€ããŠèª¬æããŸãããµã³ãã« ã³ãŒããå«ããã®æ
å ±ã掻çšããããšã§ãäŒæ¥ã¯ãããŒã¿ã®æŽåæ§ãšããŒã¿ ã¬ããã³ã¹ã確ä¿ããªãããåªããã«ã¹ã¿ã㌠ãµãŒãã¹ãžã®é«ãŸãèŠæ±ã«å¿ããããšãã§ããŸããæ¢åã®ã·ã¹ãã ã®æ¹åãŸãã¯æ°ããã·ã¹ãã ã®æ§ç¯ã«ãããããããã® Blueprint ã«ãã£ãŠããŒã ã¯å¹ççã§æå³ã®ãããããšããéããŠé¡§å®¢ã®ããŒãºã«å¯Ÿå¿ããããšãã§ããŸãã
æ€çŽ¢æ¡åŒµçæ (RAG) ã䜿çšãã AI ã¯ãšãª ãšã³ãžã³ã«ããã¹ããŒã㪠AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ã
AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ããæ§ç¯ããå Žåãç¬èªã®ãŠãŒã¹ ã±ãŒã¹èŠä»¶ããã³çµç¹ã®ç¥èãããŒãºã«åãããŠèª¿æŽããããšãéèŠã§ããåŸæ¥ã®ãããã§ã¯ãå€ãã®å Žåãæè»æ§ã®ä¹ãããã¬ãŒã ã¯ãŒã¯ãšæ代é
ãã®ã¡ãœãããå©çšããŠãããä»æ¥ã®ã«ã¹ã¿ã㌠ãµãŒãã¹ã®ãããªåžžã«å€åãç¶ããèŠæ±ã«å¯Ÿå¿ã§ããŸããã
ããããæ¥çã§ãAI ããŒã¹ã®ã¢ã·ã¹ã¿ã³ããé©æ°çãªååšãšãªãåŸãŸããããšãã°ãéä¿¡äŒç€Ÿãå°å£²ããµãŒãã¹ ãããã€ããŒã®å€§å€æ°ã¯ãAI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã䜿çšããŠã24 æé 365 æ¥çšŒåãããµããŒããæäŸããªãããå€èšèªã§å¹
åºã顧客ã®åãåããã«å¯Ÿå¿ãããã©ãã«ã·ã¥ãŒãã£ã³ã°ãã¢ã«ãŠã³ã管çãåçåããããã€ãããã¯ã§ããŒãœãã©ã€ãºããããããšããæäŸããããšã§ã顧客äœéšãåäžããããšãã§ããŸããããã«ãããåŸ
ã¡æéãççž®ããããŸããŸãªé¡§å®¢ããŒãºã«å¯ŸããŠäžè²«ãããµãŒãã¹ãæäŸããããšãã§ããŸãã
ããã²ãšã€ã®äŸãšããŠãå»çä¿éºã®æ¯ææ¥çã§ã¯ãå å
¥è
ã«ãšã£ãŠæºè¶³åºŠã®é«ãäœéšã確å®ã«æäŸããããšãéèŠã§ããããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã¯ãå»çèŠå¶ã®éµå®ã確ä¿ããªãããå å
¥è
ã«ããŒãœãã©ã€ãºããããµããŒããæäŸããè«æ±ãè£åã«é¢ããåãåããã絊ä»éãæ¯æãã«é¢ããåé¡ã«å¯ŸåŠããããšã§ãããããäœéšãåäžããŠããŸããããã«ãããå»çåŸäºè
ã®ç®¡çäžã®è² æ
ã軜æžããããšãã§ããŸãã
NVIDIA AI ãã©ãããã©ãŒã ã䜿çšããããšã§ãäŒæ¥ã¯ã
æ€çŽ¢æ¡åŒµçæ (RAG)
ã䜿çšãã AI ã¯ãšãª ãšã³ãžã³ãäœæããAI ã¢ããªã±ãŒã·ã§ã³ãäŒæ¥ããŒã¿ã«æ¥ç¶ããããšãã§ããŸããAI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã® Blueprint ã«ãããéçºè
ã¯ãããæŽç·Žããã顧客äœéšãæäŸãããœãªã¥ãŒã·ã§ã³ãè¿
éã«æ§ç¯ãéå§ããããšãã§ããŸãããã® Blueprint ã¯ã以äžã®
NVIDIA NIM
ãã€ã¯ããµãŒãã¹ã䜿çšããŠæ§ç¯ãããŸãã
LLM åã NVIDIA NIM:
æå
端ã®å€§èŠæš¡èšèªã¢ãã« (LLM) ã®ãã¯ãŒãã¢ããªã±ãŒã·ã§ã³ã«åãå
¥ãã倧å¹
ã«å¹çåããŠãåè¶ããèªç¶èšèªåŠçãæäŸããŸãã
Llama 3.1 70B Instruct NIM
:
åªããæèç解ãæšè«ãããã¹ãçæã§è€éãªäŒè©±ãå¯èœã§ãã
NVIDIA NeMo
Retriever NIM:
RAG ãã€ãã©ã€ã³ã®åºç€ãšãªãæ§æèŠçŽ ã§ããæå
端ã¢ãã«ã«ç°¡åã«ã¢ã¯ã»ã¹ã§ããŸãããã® RAG ãã€ãã©ã€ã³ã«ãã£ãŠãããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã¯äŒæ¥ããŒã¿ãžã®ã·ãŒã ã¬ã¹ãªã¢ã¯ã»ã¹ãå¯èœã«ãªããè¿
éãã€æ£ç¢ºã§ã¹ã±ãŒã©ãã«ãªåçã§ãçµç¹ã®ç¥èã掻çšã§ããŸãã
NeMo
Retriever Embedding NIM
:
ããã¹ãã® QA æ€çŽ¢ã¿ã¹ã¯ã«ç¹åãããŠãããããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã¯ãã®é«å質ã®ããã¹ãåã蟌ã¿ãå©çšããŸãã
NeMo
Retriever Reranking NIM
:
ãã¡ã€ã³ãã¥ãŒãã³ã°ããããªã©ã³ãã³ã° ã¢ãã«ã§ãããåã蟌ã¿ã¢ãã«ãšäœµçšããããšã§æ€çŽ¢æ§èœãããã«åäžãããããšãã§ããŸããå
¥åæã«æãé¢é£æ§ã®é«ãæç« ãèŠä»ãåºããLLM ã«æèãšããŠæž¡ããŸãã
ãã® Blueprint ã¯ãæ
å ±ã»ãã¥ãªãã£ã«é¢ãã矩åã«åããããšãªããæ¢åã®ã«ã¹ã¿ã㌠ãµãŒãã¹ ã¢ããªã±ãŒã·ã§ã³ãšã·ãŒã ã¬ã¹ã«çµ±åã§ããããã«èšèšãããŠããŸããNVIDIA NIM ã®ç§»æ€æ§ã®ãããã§ãäŒæ¥ã¯ãããŒã¿ãã©ãã«ãã£ãŠãçµ±åããããšãã§ããŸããçæ AI ãããŒã¿ã«åãå
¥ããããšã§ãAI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã¯ã顧客åºæã®ãããã¡ã€ã«ããŠãŒã¶ãŒãšã®å¯Ÿè©±å±¥æŽããã®ä»ã®é¢é£ããŒã¿ãªã©ã掻çšããŠãå顧客ã«åãããããããŒãœãã©ã€ãºãããäœéšãæäŸã§ããããã«ãªããŸãã
Blueprint ã¯ãäŒæ¥ç¬èªã®ãŠãŒã¹ ã±ãŒã¹ã«åãããŠã«ã¹ã¿ãã€ãºãå¯èœãª âåå°â ã®ãããªãã®ã§ããããšãã°ã
Nemotron 4 Hindi 4B Instruct
ãªã©ä»ã® NIM ãã€ã¯ããµãŒãã¹ãçµ±åããã°ãAI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ããçŸå°ã®èšèªã§ã³ãã¥ãã±ãŒã·ã§ã³ã§ããããã«ãªããŸãããã®ä»ã®ãã€ã¯ããµãŒãã¹ã«ãããåæããŒã¿ã®çæãã¢ãã«ã®ãã¡ã€ã³ãã¥ãŒãã³ã°ãªã©ã®è¿œå æ©èœãå¯èœã«ãªããç¹å®ã®ãŠãŒã¹ ã±ãŒã¹èŠä»¶ã«é©åãããããšãã§ããŸãããŸããããžã¿ã« ãã¥ãŒãã³ AI Blueprint ã«æ¥ç¶ãããšãAI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã«äººéã®ãããªã€ã³ã¿ãŒãã§ã€ã¹ãæäŸãããŸãã
ç¬èªã®ããŒã¿ (äŒæ¥ããŠãŒã¶ãŒã®ãããã¡ã€ã«ãç¹å®ã®ããŒã¿) ãåãã RAG ããã¯ãšã³ããå®è£
ããããšã§ãAI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã¯ãæèã«æ²¿ã£ã察話ãè¡ãããªã¢ã«ã¿ã€ã ã§å顧客ã®ããŒãºã®ç¹å®äºé
ã«å¯Ÿå¿ããããšãã§ããŸããããã«ããã®ãœãªã¥ãŒã·ã§ã³ã¯ãã§ã«éçšããŠããã¬ããã³ã¹ ãã¬ãŒã ã¯ãŒã¯å
ã§å®å
šã«éçšãããç¹ã«æ©å¯ããŒã¿ãæ±ãéã«ã¯ããã©ã€ãã·ãŒãšã»ãã¥ãªã㣠ãããã³ã«ã®éµå®ãä¿èšŒããŸãã
ç¬èªã® AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ããæ§ç¯ãã 3 ã€ã®æ§æèŠçŽ
éçºè
ãšããŠãæãé¢é£æ§ã®é«ãææ°ã®æ
å ±ããªã¢ã«ã¿ã€ã ã§ååŸããåžžã«äººéã®ãããªå¿çãã§ããããæ¥ã
é²åããç¬èªã® AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ããæ§ç¯ã§ããŸããå³ 1 ã¯ã3 ã€ã®æ©èœã³ã³ããŒãã³ããå«ã AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã®ã¢ãŒããã¯ãã£å³ã§ãã
å³ 1. AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãåãã® NVIDIA AI Blueprint
1. ããŒã¿ã®åã蟌ã¿ãšæ€çŽ¢ãã€ãã©ã€ã³
ãã€ãã©ã€ã³ç®¡çè
ã¯ãåã蟌㿠(Ingest) ãã€ãã©ã€ã³ã䜿çšããŠãæ§é åããŒã¿ãéæ§é åããŒã¿ãããŒã¿ããŒã¹ã«èªã¿èŸŒãããšãã§ããŸããæ§é åããŒã¿ã®äŸãšããŠã顧客ãããã¡ã€ã«ã泚æå±¥æŽãçºéç¶æ³ãªã©ããããŸããéæ§é åããŒã¿ã«ã¯ã補åããã¥ã¢ã«ã補åã«ã¿ãã°ãFAQ ããã¥ã¡ã³ããªã©ã®ãµããŒãè³æãå«ãŸããŸãã
2. AI ãšãŒãžã§ã³ã
2 ã€ç®ã®æ©èœã³ã³ããŒãã³ã㯠AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ã ã§ãããŠãŒã¶ãŒã¯ããŠãŒã¶ãŒ ã€ã³ã¿ãŒãã§ã€ã¹ãä»ããŠããŒãã£ã« ã¢ã·ã¹ã¿ã³ããšå¯Ÿè©±ããŸãããšãŒãžã§ã³ãå LLM ããã°ã©ãã³ã° ãã¬ãŒã ã¯ãŒã¯ã§ãã LangGraph ã§å®è£
ããã AI ãšãŒãžã§ã³ããã顧客ããã®è€éãªåãåããã«å¯Ÿå¿ããæ¹æ³ãèšç»ãããã®åãåãããååž°çã«è§£æ±ºããŸããLangGraph ãšãŒãžã§ã³ãã¯
Llama3.1 70B Instruct NIM
ã®ããŒã«åŒã³åºãæ©èœã䜿çšããŠãéæ§é åããŒã¿ãšæ§é åããŒã¿ã®äž¡æ¹ããæ
å ±ãååŸããæ£ç¢ºãªå¿çãçæããŸãã
ãŸã AI ãšãŒãžã§ã³ãã«ãããçæã¡ã¢ãªãšé·æã¡ã¢ãªã®æ©èœã䜿çšããŠãã«ãã¿ãŒã³ã®å¯Ÿè©±å±¥æŽãå®çŸã§ããŸããã¢ã¯ãã£ããªäŒè©±ã«å¯Ÿããåãåãããå¿çãåã蟌ãŸããŠãããããäŒè©±ã®åŸåã§è¿œå ã®æèãšããŠæ€çŽ¢ãå©çšã§ããŸããããã«ããããã人éã«è¿ããããšããå¯èœã«ãªãã顧客ããã§ã«ãšãŒãžã§ã³ããšå
±æããæ
å ±ãç¹°ãè¿ãæäŸããå¿
èŠããªããªããŸãã
æçµçã«ãäŒè©±ã®æåŸã« AI ãšãŒãžã§ã³ããææ
ã®å€å®ãšãšãã«è°è«ãèŠçŽããæ§é åããŒã¿ããŒã¹ã«äŒè©±å±¥æŽãä¿åããŸãããŠãŒã¶ãŒãšã®å¯Ÿè©±ã¯ãä»åŸã®äŒè©±ã§è¿œå ã®æèãšããŠæ€çŽ¢ã§ããŸããé話ã®èŠçŽãšäŒè©±å±¥æŽãæ€çŽ¢ããããšã§ãé話æéãççž®ãã顧客äœéšãåäžãããããšãã§ããŸããææ
å€å®ã«ãã£ãŠããšãŒãžã§ã³ãã®æå¹æ§ã«é¢ãã貎éãªæŽå¯ãã«ã¹ã¿ã㌠ãµãŒãã¹ç®¡çè
ã«æäŸã§ããŸãã
3. éçšãã€ãã©ã€ã³
顧客éçšãã€ãã©ã€ã³ã¯ããœãªã¥ãŒã·ã§ã³å
šäœã® 3 ã€ç®ã®æ§æèŠçŽ ã§ãããã®ãã€ãã©ã€ã³ã¯ãã«ã¹ã¿ã㌠ãµãŒãã¹ ãªãã¬ãŒã¿ãŒã«éèŠãªæ
å ±ãšæŽå¯ãæäŸããŸãã管çè
ã¯ãéçšãã€ãã©ã€ã³ã䜿çšããŠããã£ããå±¥æŽããŠãŒã¶ãŒã®ãã£ãŒãããã¯ãææ
åæããŒã¿ãé話ã®èŠçŽã確èªããããšãã§ããŸããLlama 3.1 70B Instruct NIM ã掻çšããåæãã€ã¯ããµãŒãã¹ã䜿çšããŠãå¹³åé話æéã解決ãŸã§ã®æéã顧客æºè¶³åºŠãªã©ã®åæãçæã§ããŸãããŸãåæçµæã¯ããŠãŒã¶ãŒ ãã£ãŒãããã¯ãšããŠã掻çšãããLLM ã¢ãã«ãåãã¬ãŒãã³ã°ããŠç²ŸåºŠãåäžããŸãã
NVIDIA ããŒãããŒãšæ¬çªç°å¢ã«çæ
NVIDIA ã®ã³ã³ãµã«ãã£ã³ã° ããŒãããŒã¯ãåäŒæ¥ããNVIDIA ã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã°ãšãNeMoãNIM ãã€ã¯ããµãŒãã¹ãAI Blueprint ãå«ã
NVIDIA AI Enterprise ãœãããŠã§ã¢
ã§æ§ç¯ãããäžçæ°Žæºã® AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ããå°å
¥ã§ããããã«æ¯æŽããŠããŸãã
Accenture
NVIDIA AI Foundry
äžã«æ§ç¯ããã
Accenture AI Refinery
ã¯ãèªåŸçã§é¡§å®¢ã®æå³ã«æ²¿ã£ã察話ãèšèšããäŒæ¥ãããžã¿ã« ãã¥ãŒãã³ãã€ã³ã¿ã©ã¯ã·ã§ã³ ãšãŒãžã§ã³ããªã©ã®é©æ°çãªãã£ãã«ãéããŠãå人ã«åãããŠã«ã¹ã¿ãã€ãºã§ããããã«ããŸããç¹å®ã®ãŠãŒã¹ ã±ãŒã¹ã¯ãéä¿¡äŒç€Ÿã®ã³ãŒã« ã»ã³ã¿ãŒãä¿éºå¥çŽã®ã¢ããã€ã¶ãŒãå»è¬åã®ã€ã³ã¿ã©ã¯ãã£ã ãšãŒãžã§ã³ããèªåè»ãã£ãŒã©ãŒã®ãããã¯ãŒã¯ ãšãŒãžã§ã³ããªã©ãåæ¥çã®ããŒãºã«åãããŠã«ã¹ã¿ãã€ãºã§ããŸãã
Deloitte
Deloitte Frontline AI ã¯ãNVIDIA ACEãNVIDIA OmniverseãNVIDIA RivaãNIM ãªã©ã® NVIDIA ã®ãã¯ãããžã«ãã£ãŠå éããã NVIDIA AI Blueprint ãå©çšããŠæ§ç¯ãããããžã¿ã« ã¢ãã¿ãŒã LLM ãšãŒãžã§ã³ãã§ã«ã¹ã¿ã㌠ãµãŒãã¹äœéšãåäžããŠããŸãã
Wipro
Wipro Enterprise Generative AI (WeGA) Studio ã¯ããã«ã¹ã±ã¢ãéèãµãŒãã¹ãå°å£²ãªã©ã®ã³ã³ã¿ã¯ã ã»ã³ã¿ãŒã®ãšãŒãžã§ã³ããå«ãæ¥çåºæã®ãŠãŒã¹ ã±ãŒã¹ãå éããŠããŸãã
Tech Mahindra
Tech Mahindra ã¯ãããžã¿ã« ãã¥ãŒãã³åãã® NVIDIA AI Blueprint ã掻çšããŠãã«ã¹ã¿ã㌠ãµãŒãã¹åãã®ãœãªã¥ãŒã·ã§ã³ãæ§ç¯ããŠããŸããRAG ãš NVIDIA NeMo ã䜿çšãããã®ãœãªã¥ãŒã·ã§ã³ã¯ããã¬ãŒãã³ã°åè¬è
ããäŒè©±äžã«æãæããŠæ確ãªè³ªåãããããšã§ããšãŒãžã§ã³ããæ¢ããæ©èœãæäŸããŸãããã®ã·ã¹ãã ã¯ãå€ãã®æ¥çã®ãŠãŒã¹ ã±ãŒã¹ã§ãããã€ã§ããæŽç·ŽãããåŠç¿ç®¡çã·ã¹ãã ã§ãããã¯ãšã³ãã®ãã€ã¯ããµãŒãã¹ãšæ¥ç¶ããããã«èšèšãããŠããŸãã
Infosys
Infosys Topaz
ã®äžéšã§ãã
Infosys Cortex
ã¯ãAI ã掻çšãã顧客ãšã³ã²ãŒãžã¡ã³ã ãã©ãããã©ãŒã ã§ãããçæ AIãã¹ããŒã AIãããžã¿ã« ãã¥ãŒãã³æ©èœãå®çŸãã NVIDIA AI Blueprint ãš NVIDIA NeMoãRivaãACE æè¡ãçµ±åããã«ã¹ã¿ã㌠ãµãŒãã¹çµç¹ã®ããããã¡ã³ããŒã«å°éçã§å人ã«åãããããã¢ã¯ãã£ããã€ãªã³ããã³ãã®æ¯æŽãæäŸããããšã§ã顧客äœéšã®åäžãéçšå¹çã®æ¹åãã³ã¹ãåæžã«éèŠãªåœ¹å²ãæãããŸãã
Tata Consultancy Services
NVIDIA NIM ãæèŒã ServiceNow ã® IT ä»®æ³ãšãŒãžã§ã³ããšçµ±åããã Tata Consultancy Services (TCS) ã®ä»®æ³ãšãŒãžã§ã³ãã¯ãIT ãš HR ã®ãµããŒããæé©åããããã«èšèšãããŠããŸãããã®ãœãªã¥ãŒã·ã§ã³ã¯ãããã³ãã ãã¥ãŒãã³ã°ãš RAG ã䜿çšããŠãå¿çæéã粟床ãåäžããããã«ãã¿ãŒã³ã®äŒè©±æ©èœãæäŸããŸãããµãŒãã¹ ãã¹ã¯ã®ã³ã¹ãåæžããµããŒã ãã±ããã®æžå°ããã¬ããžæŽ»çšã®åŒ·åãããè¿
éãªãããã€ããããŠåŸæ¥å¡ãšé¡§å®¢ã®å
šäœçãªäœéšã®åäžãªã©ã®ã¡ãªããããããŸãã
Quantiphi
Quantiphi
ã¯ãNVIDIA AI Blueprint ã察話å AI ãœãªã¥ãŒã·ã§ã³ã«çµ±åãããªã¢ã«ãªããžã¿ã« ã¢ãã¿ãŒã§ã«ã¹ã¿ã㌠ãµãŒãã¹ã匷åããŠããŸããNVIDIA Tokkio ãš ACEã
NVIDIA NIM ãã€ã¯ããµãŒãã¹
ã
NVIDIA NeMo
ãæèŒããæå
端ã®ã¢ãã¿ãŒããæ¢åã®ãšã³ã¿ãŒãã©ã€ãº ã¢ããªã±ãŒã·ã§ã³ãšã·ãŒã ã¬ã¹ã«çµ±åãããªã¢ãªãã£ãé«ããªããéçšãšé¡§å®¢äœéšãåäžãããŸããããžã¿ã« ã¢ãã¿ãŒ ã¯ãŒã¯ãããŒã«ãã¡ã€ã³ãã¥ãŒãã³ã°ããã NIM ã®ãããã€ã¯ãè²»çšå¯Ÿå¹æãé«ããäŒæ¥ã®ããŒã¯ã³ã«å¯Ÿããæ¯åºãåæžããããšãå®èšŒãããŠããŸãã
SoftServe
SoftServe Digital Concierge
ã¯ãNVIDIA AI Blueprint ãš NVIDIA NIM ãã€ã¯ããµãŒãã¹ã«ãã£ãŠå éãããŠãããNVIDIA ACEãNVIDIA RivaãNVIDIA Audio2Face NIM ãã€ã¯ããµãŒãã¹ã䜿çšããŠãéåžžã«ãªã¢ã«ãªããŒãã£ã« ã¢ã·ã¹ã¿ã³ããæäŸããŸããCharacter Creator ããŒã«ã䜿çšããããšã§ãé³å£°ãé¡ã®è¡šæ
ãé©ãã»ã©æ£ç¢ºãã€ãªã¢ã«ã«è©³çŽ°ãåçŸã§ããŸãã
NVIDIA NeMo Retriever ã® RAG æ©èœã«ãããSoftServe Digital Concierge ã¯ãæèãåç
§ããç¹å®ã®ææ°æ
å ±ãæäŸããããšã§ã顧客ããã®åãåããã«ã€ã³ããªãžã§ã³ãã«å¯Ÿå¿ã§ããŸããè€éãªåãåãããç°¡çŽ åããæ確ã§ç°¡æœãªåçã«ãŸãšããå¿
èŠã«å¿ããŠè©³çŽ°ãªèª¬æãæäŸããããšãã§ããŸãã
EXL
EXL ã® Smart Agent Assist 補åã¯ãNVIDIA RivaãNVIDIA NeMoãNVIDIA NIM ãã€ã¯ããµãŒãã¹ã掻çšããã³ã³ã¿ã¯ã ã»ã³ã¿ãŒ AI ãœãªã¥ãŒã·ã§ã³ã§ããEXL ã¯ãAI ä»®æ³ãšãŒãžã§ã³ãåãã® NVIDIA AI Blueprint ã䜿çšããŠããœãªã¥ãŒã·ã§ã³ã匷åããäºå®ã§ãã
NVIDIA AI Summit India
ã§ãNVIDIA ã³ã³ãµã«ãã£ã³ã° ããŒãããŒããã€ã³ãã AI ã®ããã³ã ãªãã£ã¹ã«å€é©ããããã«ãNVIDIA ãšã®ã³ã©ãã¬ãŒã·ã§ã³ãçºè¡šããŸãããNVIDIA ãã¯ãããžã䜿çšããããšã§ããããã®ã³ã³ãµã«ãã£ã³ã°å€§æã¯ã顧客ãã«ã¹ã¿ã㌠ãµãŒãã¹ ãšãŒãžã§ã³ãã® Blueprint ãã«ã¹ã¿ãã€ãºãã奜ã¿ã® AI ã¢ãã« (ã€ã³ãã«æ ç¹ã眮ãã¢ãã« ã¡ãŒã«ãŒãæäŸãããœããªã³ LLM ãå«ã) ã䜿çšããŠç¬èªã®ããŒãã£ã« ã¢ã·ã¹ã¿ã³ããæ§ç¯ããåžæã®ã€ã³ãã©ã§å¹ççã«æ¬çªçšŒåã§ããããã«ããŸãã
ä»ããå§ãã
Blueprint ãç¡æã§è©Šããããã·ã¹ãã èŠä»¶ã確èªããã«ã¯ã
Blueprint ã«ãŒã
ããåç
§ãã ããããããã®ãã€ã¯ããµãŒãã¹ã䜿çšããŠã¢ããªã±ãŒã·ã§ã³ã®æ§ç¯ãå§ããã«ã¯ã
NVIDIA API ã«ã¿ãã°
ã«ã¢ã¯ã»ã¹ããŠãã ããã
ãµã€ã³ã€ã³
ããã«ã¯ãNIM ã§æ§ç¯ããããŸããŸãªãªãã·ã§ã³ã«ã¢ã¯ã»ã¹ãããããå人çšãŸãã¯ããžãã¹çšã®ã¡ãŒã« ã¢ãã¬ã¹ãå
¥åããå¿
èŠããããŸãã詳现ã«ã€ããŠã¯ã
NVIDIA NIM FAQ
ãã芧ãã ããã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
éèéšéåãã®å®å
šã§å¹ççãªããŒãã£ã« ã¢ã·ã¹ã¿ã³ã
GTC ã»ãã·ã§ã³:
çæ AI ã®èª²é¡ãžã®å¯Ÿå¿ãšå¯èœæ§ã®æŽ»çš: NVIDIA ã®ãšã³ã¿ãŒãã©ã€ãº ãããã€ããåŸãããæŽå¯
NGC ã³ã³ãããŒ:
retail-shopping-advisor-chatbot-service
NGC ã³ã³ãããŒ:
retail-shopping-advisor-frontend-service
ãŠã§ãããŒ:
éèãµãŒãã¹ ã³ã³ã¿ã¯ã ã»ã³ã¿ãŒåãã® AI é³å£°å¯Ÿå¿ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã®æ§ç¯ãšå°å
¥æ¹æ³
ãŠã§ãããŒ:
éä¿¡äŒæ¥ã察話å AI ã§é¡§å®¢äœéšãå€é©ããæ¹æ³ |
https://developer.nvidia.com/blog/hymba-hybrid-head-architecture-boosts-small-language-model-performance/ | Hymba Hybrid-Head Architecture Boosts Small Language Model Performance | Transformers, with their attention-based architecture, have become the dominant choice for language models (LMs) due to their strong performance, parallelization capabilities, and long-term recall through key-value (KV) caches. However, their quadratic computational cost and high memory demands pose efficiency challenges. In contrast, state space models (SSMs) like Mamba and Mamba-2 offer constant complexity and efficient hardware optimization but struggle with memory recall tasks, affecting their performance on general benchmarks.
NVIDIA researchers recently proposed
Hymba
, a family of small language models (SLMs) featuring a hybrid-head parallel architecture that integrates transformer attention mechanisms with SSMs to achieve both enhanced efficiency and improved performance. In Hymba, attention heads provide high-resolution recall, while SSM heads enable efficient context summarization.
The novel architecture of Hymba reveals several insights:
Overhead in attention:
Over 50% of attention computation can be replaced by cheaper SSM computation.
Local attention dominance:
Most global attention can be replaced by local attention without sacrificing performance on general and recall-intensive tasks, thanks to the global information summarized by SSM heads.
KV cache redundancy:
Key-value cache is highly correlated across heads and layers, so it can be shared across heads (group query attention) and layers (cross-layer KV cache sharing).
Softmax attention limitation:
Attention mechanisms are constrained to sum to one, limiting sparsity and flexibility. We introduce learnable meta-tokens that are prepended to prompts, storing critical information and alleviating the "forced-to-attend" burden associated with attention mechanisms.
This post shows that Hymba 1.5B performs favorably against state-of-the-art open-source models of similar size, including Llama 3.2 1B, OpenELM 1B, Phi 1.5, SmolLM2 1.7B, Danube2 1.8B, and Qwen2.5 1.5B. Compared to Transformer models of similar size, Hymba also achieves higher throughput and requires 10x less memory to store cache.
Hymba 1.5B is released to the
Hugging Face
collection and
GitHub
.
Hymba 1.5B performance
Figure 1 compares Hymba 1.5B against sub-2B models (Llama 3.2 1B, OpenELM 1B, Phi 1.5, SmolLM2 1.7B, Danube2 1.8B, Qwen2.5 1.5B) in terms of average task accuracy, cache size (MB) relative to sequence length, and throughput (tok/sec).
Figure 1. Performance comparison of Hymba 1.5B Base against sub-2B models
In this set of experiments, the tasks include MMLU, ARC-C, ARC-E, PIQA, Hellaswag, Winogrande, and SQuAD-C. The throughput is measured on an NVIDIA A100 GPU with a sequence length of 8K and a batch size of 128 using PyTorch. For models that hit out-of-memory (OOM) errors during throughput measurement, the batch size was halved until the run fit in memory, so the reported number is the maximal achievable throughput without OOM.
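A minimal sketch of that batch-halving measurement follows, assuming a Hugging Face causal LM and counting the processed prompt tokens; the exact throughput definition used for Figure 1 may differ.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def max_throughput(model_name, seq_len=8192, batch_size=128, new_tokens=1):
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).cuda()
    while batch_size >= 1:
        try:
            ids = torch.randint(0, tok.vocab_size, (batch_size, seq_len), device="cuda")
            torch.cuda.synchronize(); start = time.time()
            model.generate(ids, max_new_tokens=new_tokens, do_sample=False)
            torch.cuda.synchronize()
            return batch_size * seq_len / (time.time() - start)   # tokens/sec at this batch size
        except torch.cuda.OutOfMemoryError:
            torch.cuda.empty_cache()
            batch_size //= 2                                      # halve the batch until it fits
    raise RuntimeError("even batch size 1 runs out of memory")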
Hymba model design
SSMs such as Mamba were introduced to address the quadratic complexity and large inference-time KV cache issues of transformers. However, due to their low-resolution memory, SSMs struggle with memory recall and performance. To overcome these limitations, we propose a road map for developing efficient and high-performing small LMs in Table 1.
Configuration | Commonsense reasoning (%) ↑ | Recall (%) ↑ | Throughput (token/sec) ↑ | Cache size (MB) ↓ | Design reason
Ablations on 300M model size and 100B training tokens
Transformer (Llama) | 44.08 | 39.98 | 721.1 | 414.7 | Accurate recall while inefficient
State-space models (Mamba) | 42.98 | 19.23 | 4720.8 | 1.9 | Efficient while inaccurate recall
A. + Attention heads (sequential) | 44.07 | 45.16 | 776.3 | 156.3 | Enhance recall capabilities
B. + Multi-head heads (parallel) | 45.19 | 49.90 | 876.7 | 148.2 | Better balance of two modules
C. + Local / global attention | 44.56 | 48.79 | 2399.7 | 41.2 | Boost compute/cache efficiency
D. + KV cache sharing | 45.16 | 48.04 | 2756.5 | 39.4 | Cache efficiency
E. + Meta-tokens | 45.59 | 51.79 | 2695.8 | 40.0 | Learned memory initialization
Scaling to 1.5B model size and 1.5T training tokens
F. + Size / data | 60.56 | 64.15 | 664.1 | 78.6 | Further boost task performance
G. + Extended context length (2K to 8K) | 60.64 | 68.79 | 664.1 | 78.6 | Improve multishot and recall tasks
Table 1. Design road map of the Hymba model
Fused hybrid modules
According to the ablation study, fusing attention and SSM heads in parallel within a hybrid-head module outperforms stacking them sequentially. Hymba therefore fuses attention and SSM heads in parallel within each hybrid-head module, enabling both head types to process the same information simultaneously. This architecture improves reasoning and recall accuracy.
Figure 2. The hybrid-head module in Hymba
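The sketch below shows the wiring of such a parallel hybrid-head block. It is a schematic illustration, not the released Hymba implementation, and the SSM branch is replaced by a simple gated causal depthwise-convolution stand-in; the per-branch scales and output projection are likewise assumptions made for the example.
import torch
import torch.nn as nn

class HybridHeadBlock(nn.Module):
    def __init__(self, d_model: int, n_heads: int, conv_kernel: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        # Attention branch: high-resolution recall over the (causal) context.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # SSM stand-in branch: causal depthwise conv + gating with a fixed-size receptive state
        # (a real SSM such as Mamba would go here).
        self.conv = nn.Conv1d(d_model, d_model, conv_kernel, groups=d_model,
                              padding=conv_kernel - 1)
        self.gate = nn.Linear(d_model, d_model)
        # Learnable per-branch scales so the model can balance the two head types.
        self.attn_scale = nn.Parameter(torch.ones(1))
        self.ssm_scale = nn.Parameter(torch.ones(1))
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); both branches read the same normalized input.
        h = self.norm(x)
        seq_len = h.size(1)
        causal = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool,
                                       device=h.device), diagonal=1)
        attn_out, _ = self.attn(h, h, h, attn_mask=causal)            # parallel branch 1
        conv_out = self.conv(h.transpose(1, 2))[..., :seq_len].transpose(1, 2)
        ssm_out = conv_out * torch.sigmoid(self.gate(h))              # parallel branch 2
        fused = self.attn_scale * attn_out + self.ssm_scale * ssm_out # combine the two heads
        return x + self.out_proj(fused)                               # residual connection

x = torch.randn(2, 16, 256)
print(HybridHeadBlock(256, 8)(x).shape)   # torch.Size([2, 16, 256])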
Efficiency and KV cache optimization
While attention heads improve task performance, they increase KV cache requirements and reduce throughput. To mitigate this, Hymba optimizes the hybrid-head module by combining local and global attention and employing cross-layer KV cache sharing. This improves throughput by 3x and reduces cache by almost 4x without sacrificing performance.
Figure 3. Hymba model architecture
Meta-tokens
Meta-tokens are a set of 128 pretrained embeddings prepended to inputs, functioning as a learned cache initialization that enhances focus on relevant information. These tokens serve a dual purpose (a minimal sketch of the prepending step follows Figure 4):
Mitigating attention drain by acting as backstop tokens, redistributing attention effectively
Encapsulating compressed world knowledge
Figure 4. Interpretation of Hymba from the memory aspect
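The prepending step itself is simple. The sketch below shows one way to attach learnable meta-token embeddings to the input embeddings; the hidden size is an assumption and the initialization is arbitrary:
import torch
import torch.nn as nn

class MetaTokenPrepender(nn.Module):
    def __init__(self, num_meta_tokens=128, d_model=1536):  # d_model is an assumed value
        super().__init__()
        self.meta = nn.Parameter(torch.randn(num_meta_tokens, d_model) * 0.02)

    def forward(self, token_embeddings):                     # (batch, seq, d_model)
        batch = token_embeddings.size(0)
        meta = self.meta.unsqueeze(0).expand(batch, -1, -1)  # shared across the batch
        return torch.cat([meta, token_embeddings], dim=1)    # (batch, 128 + seq, d_model)

x = torch.randn(4, 256, 1536)
print(MetaTokenPrepender()(x).shape)  # torch.Size([4, 384, 1536])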
Model analysis
This section presents an apples-to-apples comparison across different architectures under the same training settings. We then visualize the attention maps of the SSM and attention heads in different pretrained models. Finally, we perform a head importance analysis for Hymba through pruning. All the analyses in this section help to illustrate how and why the design choices behind Hymba are effective.
Apples-to-apples comparison
We performed an apples-to-apples comparison of Hymba, pure Mamba2, Mamba2 with FFN, Llama3 style, and Samba style (Mamba-FFN-Attn-FFN) architectures. All models have 1 billion parameters and are trained from scratch for 100 billion tokens from SmolLM-Corpus with exactly the same training recipe. All results are obtained through lm-evaluation-harness using a zero-shot setting on Hugging Face models. Hymba performs the best on commonsense reasoning as well as question answering and recall-intensive tasks.
Table 2 compares various model architectures on language modeling and recall-intensive and commonsense reasoning tasks, with Hymba achieving strong performance across metrics. Hymba demonstrates the lowest perplexity in language tasks (18.62 for Wiki and 10.38 for LMB) and solid results in recall-intensive tasks, particularly in SWDE (54.29) and SQuAD-C (44.71), leading to the highest average score in this category (49.50).
| Model | Language (PPL) ↓ | Recall intensive (%) ↑ | Commonsense reasoning (%) ↑ |
| --- | --- | --- | --- |
| Mamba2 | 15.88 | 43.34 | 52.52 |
| Mamba2 w/ FFN | 17.43 | 28.92 | 51.14 |
| Llama3 | 16.19 | 47.33 | 52.82 |
| Samba | 16.28 | 36.17 | 52.83 |
| Hymba | 14.5 | 49.5 | 54.57 |
Table 2. Comparison of architectures trained on 100 billion tokens under the same settings
In commonsense reasoning and question answering, Hymba outperforms other models in most tasks, such as SIQA (31.76) and TruthfulQA (31.64), with an average score of 54.57, slightly above Llama3 and Mamba2. Overall, Hymba stands out as a balanced model, excelling in both efficiency and task performance across diverse categories.
Attention map visualization
We further categorized elements in the attention map into four types (a small helper for computing these category sums appears after the list):
Meta:
Attention scores from all real tokens to meta-tokens. This category reflects the modelâs preference for attending to meta-tokens. In attention maps, they are usually located in the first few columns (for example, 128 for Hymba) if a model has meta-tokens.
BOS:
Attention scores from all real tokens to the beginning-of-sequence token. In the attention map, they are usually located in the first column right after the meta-tokens.
Self:
Attention scores from all real tokens to themselves. In the attention map, they are usually located in the diagonal line.
Cross:
Attention scores from all real tokens to other real tokens. In the attention map, they are usually located in the off-diagonal area.
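The helper below shows how such category sums can be computed from a dense attention matrix. It assumes rows correspond to real tokens and columns are ordered as [meta-tokens, BOS, real tokens], which is a simplification of the actual (sliding-window) attention layout:
import numpy as np

def attention_category_sums(attn, num_meta=128):
    meta = attn[:, :num_meta].sum()          # columns for the meta-tokens
    bos = attn[:, num_meta].sum()            # first column after the meta-tokens
    real = attn[:, num_meta + 1:]            # real-token-to-real-token block
    self_attn = np.trace(real)               # diagonal: tokens attending to themselves
    cross = real.sum() - self_attn           # off-diagonal entries
    total = meta + bos + self_attn + cross
    return {"Meta": meta / total, "BOS": bos / total,
            "Self": self_attn / total, "Cross": cross / total}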
The attention pattern of Hymba is significantly different from that of vanilla Transformers. In vanilla Transformers, attention scores are more concentrated on BOS, which is consistent with the findings in Attention Sink. In addition, vanilla Transformers also have a higher proportion of Self attention scores. In Hymba, meta-tokens, attention heads, and SSM heads complement one another, leading to a more balanced distribution of attention scores across different types of tokens.
Specifically, meta-tokens offload the attention scores from BOS, enabling the model to focus more on the real tokens. SSM heads summarize the global context, which focuses more on current tokens (Self attention scores). Attention heads, on the other hand, pay less attention to Self and BOS tokens, and more attention to other tokens (that is, Cross attention scores). This suggests that the hybrid-head design of Hymba can effectively balance the attention distribution across different types of tokens, potentially leading to better performance.
Figure 5. Schematics of the attention map of Hymba as a combination of meta-tokens, sliding window attention, and Mamba contributions
Figure 6. Sum of the attention score from different categories in Llama 3.2 3B and Hymba 1.5B
Heads importance analysis
We analyzed the relative importance of attention and SSM heads in each layer by removing them and recording the final accuracy. Our analysis reveals the following:
The relative importance of attention/SSM heads in the same layer is input-adaptive and varies across tasks, suggesting that they can serve different roles when handling various inputs.
The SSM head in the first layer is critical for language modeling, and removing it causes a substantial accuracy drop to random guess levels.
Generally, removing one attention/SSM head results in an average accuracy drop of 0.24%/1.1% on Hellaswag, respectively.
Figure 7. The achieved accuracy, measured using 1K samples from Hellaswag, after removing the Attention or SSM heads in each layer
Model architecture and training best practices
This section outlines key architectural decisions and training methodologies for Hymba 1.5B Base and Hymba 1.5B Instruct.
Model architecture
Hybrid architecture:
Mamba is great at summarization and usually focuses more closely on the current token, while attention is more precise and acts as a snapshot memory. Combining them in parallel merges these benefits, which standard sequential fusion does not. We chose a 5:1 parameter ratio between SSM and attention heads. (These and the following choices are summarized in the configuration sketch after this list.)
Sliding window attention:
Full attention heads are preserved in three layers (first, last, and middle), with sliding window attention heads used in the remaining 90% of layers.
Cross-layer KV cache sharing:
Implemented between every two consecutive attention layers. It is done in addition to GQA KV cache sharing between heads.
Meta-tokens:
These 128 tokens are learnable with no supervision, helping to avoid entropy collapse problems in large language models (LLMs) and mitigate the attention sink phenomenon. Additionally, the model stores general knowledge in these tokens.
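The architectural choices above can be summarized in a single configuration object. The field names and the sliding-window size below are illustrative assumptions, not the released model's actual configuration schema:
from dataclasses import dataclass

@dataclass
class HymbaStyleConfig:
    ssm_to_attn_param_ratio: float = 5.0          # 5:1 parameters between SSM and attention heads
    full_attention_layers: tuple = ("first", "middle", "last")  # the rest use sliding-window attention
    sliding_window: int = 1024                    # assumed window size for this sketch
    kv_share_every_n_attn_layers: int = 2         # cross-layer KV cache sharing
    use_gqa: bool = True                          # plus GQA sharing between heads
    num_meta_tokens: int = 128                    # learnable, unsupervised meta-tokens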
Training best practices
Pretraining:
We opted for two-stage base model training. Stage 1 maintained a constant, large learning rate and used a large, less heavily filtered corpus. The learning rate was then continuously decayed to 1e-5 using high-quality data. This approach enables Stage 1 training to be continued or resumed at any time.
Instruction fine-tuning:
Instruct model tuning is performed in three stages. First, SFT-1 provides the model with strong reasoning abilities by training on code, math, function calling, role play, and other task-specific data. Second, SFT-2 teaches the model to follow human instructions. Finally, DPO is leveraged to align the model with human preferences and improve the modelâs safety.
Figure 8. Training pipeline adapted for the Hymba model family
Performance and efficiency evaluation
With only 1.5T pretraining tokens, the Hymba 1.5B model performs the best among all small LMs and achieves better throughput and cache efficiency than all transformer-based LMs.
For example, when benchmarking against the strongest baseline, Qwen2.5, which is pretrained on 13x more tokens, Hymba 1.5B achieves a 1.55% average accuracy improvement, 1.41x throughput, and 2.90x cache efficiency. Compared to the strongest small LM trained on fewer than 2T tokens, namely h2o-danube2, our method achieves a 5.41% average accuracy improvement, 2.45x throughput, and 6.23x cache efficiency.
| Model | # Params | Train tokens | Token per sec | Cache (MB) | MMLU 5-shot | ARC-E 0-shot | ARC-C 0-shot | PIQA 0-shot | Wino. 0-shot | Hella. 0-shot | SQuAD-C 1-shot | Avg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| OpenELM | 1.1B | 1.5T | 246 | 346 | 27.06 | 62.37 | 19.54 | 74.76 | 61.8 | 48.37 | 45.38 | 48.57 |
| Rene v0.1 | 1.3B | 1.5T | 800 | 113 | 32.94 | 67.05 | 31.06 | 76.49 | 62.75 | 51.16 | 48.36 | 52.83 |
| Phi 1.5 | 1.3B | 0.15T | 241 | 1573 | 42.56 | 76.18 | 44.71 | 76.56 | 72.85 | 48 | 30.09 | 55.85 |
| SmolLM | 1.7B | 1T | 238 | 1573 | 27.06 | 76.47 | 43.43 | 75.79 | 60.93 | 49.58 | 45.81 | 54.15 |
| Cosmo | 1.8B | 0.2T | 244 | 1573 | 26.1 | 62.42 | 32.94 | 71.76 | 55.8 | 42.9 | 38.51 | 47.2 |
| h2o-danube2 | 1.8B | 2T | 271 | 492 | 40.05 | 70.66 | 33.19 | 76.01 | 66.93 | 53.7 | 49.03 | 55.65 |
| Llama 3.2 1B | 1.2B | 9T | 535 | 262 | 32.12 | 65.53 | 31.39 | 74.43 | 60.69 | 47.72 | 40.18 | 50.29 |
| Qwen2.5 | 1.5B | 18T | 469 | 229 | 60.92 | 75.51 | 41.21 | 75.79 | 63.38 | 50.2 | 49.53 | 59.51 |
| AMD OLMo | 1.2B | 1.3T | 387 | 1049 | 26.93 | 65.91 | 31.57 | 74.92 | 61.64 | 47.3 | 33.71 | 48.85 |
| SmolLM2 | 1.7B | 11T | 238 | 1573 | 50.29 | 77.78 | 44.71 | 77.09 | 66.38 | 53.55 | 50.5 | 60.04 |
| Llama 3.2 3B | 3.0B | 9T | 191 | 918 | 56.03 | 74.54 | 42.32 | 76.66 | 69.85 | 55.29 | 43.46 | 59.74 |
| Hymba | 1.5B | 1.5T | 664 | 79 | 51.19 | 76.94 | 45.9 | 77.31 | 66.61 | 53.55 | 55.93 | 61.06 |
Table 3. Hymba 1.5B Base model results
Instructed models
The Hymba 1.5B Instruct model achieves the highest average performance across all tasks, outperforming the previous state-of-the-art model, Qwen 2.5 Instruct, by around 2%. Specifically, Hymba 1.5B surpasses all other models on GSM8K/GPQA/BFCLv2 with scores of 58.76/31.03/46.40, respectively. These results indicate the superiority of Hymba 1.5B, particularly in areas requiring complex reasoning capabilities.
| Model | # Params | MMLU ↑ | IFEval ↑ | GSM8K ↑ | GPQA ↑ | BFCLv2 ↑ | Avg. ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SmolLM | 1.7B | 27.80 | 25.16 | 1.36 | 25.67 | -* | 20.00 |
| OpenELM | 1.1B | 25.65 | 6.25 | 56.03 | 21.62 | -* | 27.39 |
| Llama 3.2 | 1.2B | 44.41 | 58.92 | 42.99 | 24.11 | 20.27 | 38.14 |
| Qwen2.5 | 1.5B | 59.73 | 46.78 | 56.03 | 30.13 | 43.85 | 47.30 |
| SmolLM2 | 1.7B | 49.11 | 55.06 | 47.68 | 29.24 | 22.83 | 40.78 |
| Hymba 1.5B | 1.5B | 52.79 | 57.14 | 58.76 | 31.03 | 46.40 | 49.22 |
Table 4. Hymba 1.5B Instruct model results
Conclusion
The new Hymba family of small LMs features a hybrid-head architecture that combines the high-resolution recall capabilities of attention heads with the efficient context summarization of SSM heads. To further optimize the performance of Hymba, learnable meta-tokens are introduced to act as a learned cache for both attention and SSM heads, enhancing the modelâs focus on salient information. Through the road map of Hymba, comprehensive evaluations, and ablation studies, Hymba sets new state-of-the-art performance across a wide range of tasks, achieving superior results in both accuracy and efficiency. Additionally, this work provides valuable insights into the advantages of hybrid-head architectures, offering a promising direction for future research in efficient LMs.
Learn more about
Hymba 1.5B Base
and
Hymba 1.5B Instruct
.
Acknowledgments
This work would not have been possible without contributions from many people at NVIDIA, including Wonmin Byeon, Zijia Chen, Ameya Sunil Mahabaleshwarkar, Shih-Yang Liu, Matthijs Van Keirsbilck, Min-Hung Chen, Yoshi Suhara, Nikolaus Binder, Hanah Zhang, Maksim Khadkevich, Yingyan Celine Lin, Jan Kautz, Pavlo Molchanov, and Nathan Horrocks. | https://developer.nvidia.com/ja-jp/blog/hymba-hybrid-head-architecture-boosts-small-language-model-performance/ | Hymba ãã€ããªãã ããã ã¢ãŒããã¯ãã£ãå°èŠæš¡èšèªã¢ãã«ã®ããã©ãŒãã³ã¹ãåäž | Reading Time:
4
minutes
Transformer ã¯ããã® Attention ããŒã¹ã®ã¢ãŒããã¯ãã£ã«ããã匷åãªããã©ãŒãã³ã¹ã䞊ååèœåãããã³ KV (Key-Value) ãã£ãã·ã¥ãéããé·æèšæ¶ã®ãããã§ãèšèªã¢ãã« (LM) ã®äž»æµãšãªã£ãŠããŸããããããäºæ¬¡èšç®ã³ã¹ããšé«ãã¡ã¢ãªèŠæ±ã«ãããå¹çæ§ã«èª²é¡ãçããŠããŸããããã«å¯ŸããMamba ã Mamba-2 ã®ãããªç¶æ
空éã¢ãã« (SSMs) ã¯ãè€éããäžå®ã«ããŠå¹ççãªããŒããŠã§ã¢æé©åãæäŸããŸãããã¡ã¢ãªæ³èµ·ã¿ã¹ã¯ãèŠæã§ããã¯äžè¬çãªãã³ãããŒã¯ã§ã®ããã©ãŒãã³ã¹ã«åœ±é¿ãäžããŠããŸãã
NVIDIA ã®ç 究è
ã¯æè¿ãå¹çæ§ãšããã©ãŒãã³ã¹ã®äž¡æ¹ãåäžãããããã«ãTransformer ã® Attention ã¡ã«ããºã ã SSM ãšçµ±åãããã€ããªãã ããã䞊åã¢ãŒããã¯ãã£ãç¹åŸŽãšããå°èŠæš¡èšèªã¢ãã« (SLM) ãã¡ããªã§ãã
Hymba
ãææ¡ããŸãããHymba ã§ã¯ãAttention ããããé«è§£å床ã®èšæ¶èœåãæäŸããSSM ããããå¹ççãªã³ã³ããã¹ãã®èŠçŽãå¯èœã«ããŸãã
Hymba ã®æ°ããªã¢ãŒããã¯ãã£ã¯ãããã€ãã®æŽå¯ãæããã«ããŠããŸãã
Attention ã®ãªãŒããŒããã:
Attention èšç®ã® 50% 以äžããããå®äŸ¡ãª SSM èšç®ã«çœ®ãæããããšãã§ããŸãã
ããŒã«ã« Attention ã®åªäœæ§:
SSM ãããã«ããèŠçŽãããã°ããŒãã«æ
å ±ã®ãããã§ãäžè¬çãªã¿ã¹ã¯ãã¡ã¢ãªæ³èµ·ã«éäžããã¿ã¹ã¯ã®ããã©ãŒãã³ã¹ãç ç²ã«ããããšãªããã»ãšãã©ã®ã°ããŒãã« Attention ãããŒã«ã« Attention ã«çœ®ãæããããšãã§ããŸãã
KV ãã£ãã·ã¥åé·æ§:
Key-value ãã£ãã·ã¥ã¯ããããéãšã¬ã€ã€ãŒéã§é«ãçžé¢æ§ãããããããããé (GQA: Group Query Attention) ããã³ã¬ã€ã€ãŒé (Cross-layer KV ãã£ãã·ã¥å
±æ) ã§å
±æã§ããŸãã
Softmax ã® Attention ã®å¶é:
Attention ã¡ã«ããºã ã¯ãåèšã 1 ã«ãªãããã«å¶éãããŠãããçæ§ãšæè»æ§ã«å¶éããããŸããNVIDIA ã¯ãããã³ããã®å
é ã«åŠç¿å¯èœãªã¡ã¿ããŒã¯ã³ãå°å
¥ããéèŠãªæ
å ±ãæ ŒçŽããAttention ã¡ã«ããºã ã«é¢é£ããã匷å¶çã« Attention ãè¡ããè² æ
ã軜æžããŸãã
ãã®èšäºã§ã¯ãHymba 1.5B ãåæ§ã®èŠæš¡ã§ããæå
端ã®ãªãŒãã³ãœãŒã¹ ã¢ãã«ãLlama 3.2 1BãOpenELM 1BãPhi 1.5ãSmolLM2 1.7BãDanube2 1.8BãQwen2.5 1.5B ãªã©ãšæ¯èŒããŠãè¯å¥œãªããã©ãŒãã³ã¹ãçºæ®ããããšã瀺ãããŠããŸããåçã®ãµã€ãºã® Transformer ã¢ãã«ãšæ¯èŒãããšãHymba ã¯ããé«ãã¹ã«ãŒããããçºæ®ãããã£ãã·ã¥ãä¿åããããã«å¿
èŠãªã¡ã¢ãªã 10 åã® 1 ã§æžã¿ãŸãã
Hymba 1.5B ã¯
Hugging Face
ã³ã¬ã¯ã·ã§ã³ãš
GitHub
ã§å
¬éãããŠããŸãã
Hymba 1.5B ã®ããã©ãŒãã³ã¹
å³ 1 ã¯ãHymba 1.5B ãš 2B æªæºã®ã¢ãã« (Llama 3.2 1BãOpenELM 1BãPhi 1.5ãSmolLM2 1.7BãDanube2 1.8BãQwen2.5 1.5B) ããå¹³åã¿ã¹ã¯ç²ŸåºŠãã·ãŒã±ã³ã¹é·ã«å¯Ÿãããã£ãã·ã¥ ãµã€ãº (MB)ãã¹ã«ãŒããã (tok/sec) ã§æ¯èŒãããã®ã§ãã
å³ 1. Hymba 1.5B Base ãš 2B æªæºã®ã¢ãã«ã®ããã©ãŒãã³ã¹æ¯èŒ
ãã®äžé£ã®å®éšã«ã¯ãMMLUãARC-CãARC-EãPIQAãHellaswagãWinograndeãSQuAD-C ãªã©ã®ã¿ã¹ã¯ãå«ãŸããŠããŸããã¹ã«ãŒãããã¯ãã·ãŒã±ã³ã¹é· 8Kãããã ãµã€ãº 128 㧠PyTorch ã䜿çšã㊠NVIDIA A100 GPU ã§æž¬å®ããŸããã¹ã«ãŒããã枬å®äžã«ã¡ã¢ãªäžè¶³ (OOM: Out of Memory) åé¡ãçºçããã¢ãã«ã§ã¯ãOOM ã解決ããããŸã§ããã ãµã€ãºãååã«ããŠãOOM ãªãã§éæå¯èœãªæ倧ã¹ã«ãŒãããã枬å®ããŸããã
Hymba ã¢ãã«ã®ãã¶ã€ã³
Mamba ã®ãã㪠SSM ã¯ãTransformer ã®äºæ¬¡çãªè€éæ§ãšæšè«æã® KV ãã£ãã·ã¥ã倧ããåé¡ã«å¯ŸåŠããããã«å°å
¥ãããŸãããããããã¡ã¢ãªè§£å床ãäœãããã«ãSSM ã¯èšæ¶æ³èµ·ãšããã©ãŒãã³ã¹ã®ç¹ã§èŠæŠããŠããŸãããããã®å¶éãå
æããããã«ãè¡š 1 ã§å¹ççã§é«æ§èœãªå°èŠæš¡èšèªã¢ãã«ãéçºããããã®ããŒãããããææ¡ããŸãã
æ§æ
åžžèæšè« (%) â
ãªã³ãŒã« (%) â
ã¹ã«ãŒããã (token/sec) â
ãã£ãã·ã¥ ãµã€ãº (MB) â
èšèšçç±
300M ã¢ãã« ãµã€ãºãš 100B ãã¬ãŒãã³ã° ããŒã¯ã³ã®ã¢ãã¬ãŒã·ã§ã³
Transformer (Llama)
44.08
39.98
721.1
414.7
éå¹ççãªããæ£ç¢ºãªèšæ¶
ç¶æ
空éã¢ãã« (Mamba)
42.98
19.23
4720.8
1.9
å¹ççã ãäžæ£ç¢ºãªèšæ¶
A. + Attention ããã (é£ç¶)
44.07
45.16
776.3
156.3
èšæ¶èœåã匷å
B. + è€æ°ããã (䞊å)
45.19
49.90
876.7
148.2
2 ã€ã®ã¢ãžã¥ãŒã«ã®ãã©ã³ã¹ã®æ¹å
C. + ããŒã«ã« / ã°ããŒãã« Attention
44.56
48.79
2399.7
41.2
æŒç® / ãã£ãã·ã¥ã®å¹çãåäž
D. + KV ãã£ãã·ã¥å
±æ
45.16
48.04
2756.5
39.4
ãã£ãã·ã¥å¹çå
E. + ã¡ã¿ããŒã¯ã³
45.59
51.79
2695.8
40.0
åŠç¿ããèšæ¶ã®åæå
1.5B ã¢ãã« ãµã€ãºãš 1.5T ãã¬ãŒãã³ã° ããŒã¯ã³ãžã®ã¹ã±ãŒãªã³ã°
F. + ãµã€ãº / ããŒã¿
60.56
64.15
664.1
78.6
ã¿ã¹ã¯ ããã©ãŒãã³ã¹ã®ãããªãåäž
G. + ã³ã³ããã¹ãé·ã®æ¡åŒµ (2Kâ8K)
60.64
68.79
664.1
78.6
ãã«ãã·ã§ãããšãªã³ãŒã« ã¿ã¹ã¯ã®æ¹å
è¡š 1. Hymba ã¢ãã«ã®ãã¶ã€ã³ ããŒãããã
èååãã€ããªãã ã¢ãžã¥ãŒã«
ã¢ãã¬ãŒã·ã§ã³ç 究ã«ãããšããã€ããªãã ããã ã¢ãžã¥ãŒã«å
㧠Attention ãš SSM ãããã䞊åã«ããŠèåããã»ãããã·ãŒã±ã³ã·ã£ã«ã«ã¹ã¿ããã³ã°ããããåªããŠããããšãåãã£ãŠããŸããHymba ã¯ããã€ããªãã ããã ã¢ãžã¥ãŒã«å
㧠Attention ãš SSM ãããã䞊åã«èåãããäž¡ããããåæã«åãæ
å ±ãåŠçã§ããããã«ããŸãããã®ã¢ãŒããã¯ãã£ã¯ãæšè«ãšèšæ¶ã®æ£ç¢ºããé«ããŸãã
å³ 2. Hymba ã®ãã€ããªãã ããã ã¢ãžã¥ãŒã«
å¹çæ§ãš KV ãã£ãã·ã¥ã®æé©å
Attention ãããã¯ã¿ã¹ã¯ã®ããã©ãŒãã³ã¹ãåäžãããŸãããKV ãã£ãã·ã¥ã®èŠæ±ãå¢å€§ãããã¹ã«ãŒããããäœäžãããŸãããããç·©åããããã«ãHymba ã¯ããŒã«ã«ããã³ã°ããŒãã«ã® Attention ãçµã¿åããã Cross-layer KV ãã£ãã·ã¥å
±æãæ¡çšããããšã§ããã€ããªãã ããã ã¢ãžã¥ãŒã«ãæé©åããŸããããã«ãããããã©ãŒãã³ã¹ãç ç²ã«ããããšãªãã¹ã«ãŒãããã 3 ååäžãããã£ãã·ã¥ãã»ãŒ 4 åã® 1 ã«åæžãããŸãã
å³ 3. Hymba ã¢ãã«ã®ã¢ãŒããã¯ãã£
ã¡ã¿ããŒã¯ã³
å
¥åã®å
é ã«çœ®ããã 128 ã®äºååŠç¿æžã¿ã®åã蟌ã¿ã®ã»ããã§ãããåŠç¿æžã¿ãã£ãã·ã¥ã®åæåãšããŠæ©èœããé¢é£æ
å ±ãžã®æ³šæã匷åããŸãããã®ãããªããŒã¯ã³ã«ã¯ 2 ã€ã®ç®çããããŸãã
ããã¯ã¹ããã ããŒã¯ã³ãšããŠæ©èœããAttention ãå¹æçã«ååé
ããããšã§ Attention ã®æµåºã軜æžãã
å§çž®ãããäžçç¥èãã«ãã»ã«åãã
å³ 4. ã¡ã¢ãªã®åŽé¢ããèŠã Hymba ã®è§£é
ã¢ãã«è§£æ
ãã®ã»ã¯ã·ã§ã³ã§ã¯ãåäžã®ãã¬ãŒãã³ã°èšå®ã«ãããç°ãªãã¢ãŒããã¯ãã£ãæ¯èŒããæ¹æ³ã玹ä»ããŸãããããããSSM ãš Attention ã® Attention ããããç°ãªãåŠç¿æžã¿ã¢ãã«ã§å¯èŠåããæåŸã«ãåªå® (pruning) ãéã㊠Hymba ã®ãããéèŠåºŠåæãè¡ããŸãããã®ã»ã¯ã·ã§ã³ã®ãã¹ãŠã®åæã¯ãHymba ã®ãã¶ã€ã³ã«ãããéžæã®ä»çµã¿ãšããããå¹æçãªçç±ã説æããã®ã«åœ¹ç«ã¡ãŸãã
åäžæ¡ä»¶ã§ã®æ¯èŒ
HymbaãçŽç²ãª Mamba2ãMamba2 ãš FFNãLlama3 ã¹ã¿ã€ã«ãSamba ã¹ã¿ã€ã« (Mamba-FFN-Attn-FFN) ã®ã¢ãŒããã¯ãã£ãåäžæ¡ä»¶ã§æ¯èŒããŸããããã¹ãŠã®ã¢ãã«ã 10 åã®ãã©ã¡ãŒã¿ãŒã§ããŸã£ããåããã¬ãŒãã³ã° ã¬ã·ã㧠SmolLM-Corpus ãã 1,000 åããŒã¯ã³ããŒãããåŠç¿ããŠããŸãããã¹ãŠã®çµæã¯ãHugging Face ã¢ãã«ã§ãŒãã·ã§ããèšå®ã䜿çšã㊠lm-evaluation-harness ãéããŠååŸãããŠããŸããHymba ã¯ãåžžèæšè«ã ãã§ãªãã質åå¿çã¿ã¹ã¯ãèšæ¶æ³èµ·ã¿ã¹ã¯ã§ãæé«ã®ããã©ãŒãã³ã¹ãçºæ®ããŸãã
è¡š 2 ã¯ãèšèªã¢ããªã³ã°ã¿ã¹ã¯ãšèšæ¶æ³èµ·ã¿ã¹ã¯ããã³åžžèæšè«ã¿ã¹ã¯ã«é¢ããããŸããŸãªã¢ãã« ã¢ãŒããã¯ãã£ãæ¯èŒããŠãããHymba ã¯ãã¹ãŠã®è©äŸ¡åºæºã§åè¶ããããã©ãŒãã³ã¹ãéæããŠããŸããHymba ã¯ãèšèªã¢ããªã³ã°ã¿ã¹ã¯ã§æãäœã Perplexity ã瀺ã (Wiki 㧠18.62ãLMB 㧠10.38)ãç¹ã« SWDE (54.29) ãš SQuAD-C (44.71) ã®èšæ¶æ³èµ·ã¿ã¹ã¯ã«ãããŠå
å®ãªçµæã瀺ãããã®ã«ããŽãªã§æé«ã®å¹³åã¹ã³ã¢ (49.50) ãéæããŸããã
ã¢ãã«
èšèªã¢ããªã³ã° (PPL) â
èšæ¶æ³èµ·å (%) â
åžžèæšè« (%) â
Mamba2
15.88
43.34
52.52
Mamba2 ãš FFN
17.43
28.92
51.14
Llama3
16.19
47.33
52.82
Samba
16.28
36.17
52.83
Hymba
14.5
49.5
54.57
è¡š 2. åãèšå®ã§ 1,000 åããŒã¯ã³ã§åŠç¿ãããã¢ãŒããã¯ãã£ã®æ¯èŒ
åžžèæšè«ãšè³ªåå¿çã«ãããŠãHymba ã¯å¹³åã¹ã³ã¢ 54.57 ã§ã SIQA (31.76) ã TruthfulQA (31.64) ãªã©ã®ã»ãšãã©ã®ã¿ã¹ã¯ã§ãLlama3 ã Mamba2 ãããäžåã£ãŠããŸããå
šäœçã«ãHymba ã¯ãã©ã³ã¹ã®åããã¢ãã«ãšããŠéç«ã£ãŠãããå€æ§ãªã«ããŽãªã§å¹çæ§ãšã¿ã¹ã¯ ããã©ãŒãã³ã¹ã®äž¡æ¹ã§åªããŠããŸãã
Attention ãããã®å¯èŠå
ããã«ãAttention ãããã®èŠçŽ ã 4 ã€ã®ã¿ã€ãã«åé¡ããŸããã
Meta:
ãã¹ãŠã®å®ããŒã¯ã³ããã¡ã¿ããŒã¯ã³ãžã® Attention ã¹ã³ã¢ããã®ã«ããŽãªã¯ãã¢ãã«ãã¡ã¿ããŒã¯ã³ã« Attention ãåããåŸåãåæ ãããã®ã§ããAttention ãããã§ã¯ãéåžžãã¢ãã«ã«ã¡ã¿ããŒã¯ã³ãããå Žåãæåã®æ°å (äŸãã° Hymba ã®å Žå㯠128) ã«äœçœ®ããŠããŸãã
BOS:
ãã¹ãŠã®å®ããŒã¯ã³ããã»ã³ãã³ã¹ã®éå§ããŒã¯ã³ãŸã§ã® Attention ã¹ã³ã¢ãAttention ãããã§ã¯ãéåžžãã¡ã¿ããŒã¯ã³ã®çŽåŸã®æåã®åã«äœçœ®ããŸãã
Self:
ãã¹ãŠã®å®ããŒã¯ã³ããããèªèº«ãžã® Attention ã¹ã³ã¢ãAttention ãããã§ã¯ãéåžžã察è§ç·äžã«äœçœ®ããŠããŸãã
Cross:
ãã¹ãŠã®å®ããŒã¯ã³ããä»ã®å®ããŒã¯ã³ãžã® Attention ã¹ã³ã¢ãAttention ãããã§ã¯ãéåžžã察è§ç·å€ã®é åã«äœçœ®ããŠããŸãã
Hymba ã® Attention ãã¿ãŒã³ã¯ãvanilla (å å·¥ãããŠããªã) Transformer ã®ãããšã¯å€§ããç°ãªããŸããvanilla Transformer ã® Attention ã¹ã³ã¢ã¯ BOS ã«éäžããŠãããAttention Sink ã®çµæãšäžèŽããŠããŸããããã«ãvanilla Transformer ã¯ãSelf-Attention ã¹ã³ã¢ã®æ¯çãé«ããªã£ãŠããŸããHymba ã§ã¯ãã¡ã¿ããŒã¯ã³ãAttention ããããSSM ããããäºãã«è£å®ãåãããã«æ©èœããç°ãªãã¿ã€ãã®ããŒã¯ã³éã§ããããã©ã³ã¹ã®åãã Attention ã¹ã³ã¢ã®ååžãå®çŸããŠããŸãã
å
·äœçã«ã¯ãã¡ã¿ããŒã¯ã³ã BOS ããã® Attention ã¹ã³ã¢ããªãããŒãããããšã§ãã¢ãã«ãããå®éã®ããŒã¯ã³ã«éäžã§ããããã«ãªããŸããSSM ãããã¯ã°ããŒãã«ãªã³ã³ããã¹ããèŠçŽããçŸåšã®ããŒã¯ã³ (Self-Attention ã¹ã³ã¢) ã«ããéç¹ã眮ããŸããäžæ¹ãAttention ãããã¯ãSelf ãš BOS ããŒã¯ã³ã«å¯Ÿãã泚æãäœããä»ã®ããŒã¯ã³ (ããªãã¡ãCross Attention ã¹ã³ã¢) ãžã®æ³šæãé«ããªããŸããããã¯ãHymba ã®ãã€ããªãã ããã ãã¶ã€ã³ããç°ãªãã¿ã€ãã®ããŒã¯ã³éã® Attention ååžã®ãã©ã³ã¹ãå¹æçã«åãããšãã§ããããã©ãŒãã³ã¹ã®åäžã«ã€ãªããå¯èœæ§ãããããšã瀺åããŠããŸãã
å³ 5. ã¡ã¿ããŒã¯ã³ãSliding Window AttentionãMamba è²¢ç®ã®çµã¿åããã«ãã Hymba ã® Attention ãããã®æŠç¥å³
å³ 6. Llama 3.2 3B ãš Hymba 1.5B ã®ç°ãªãã«ããŽãªããã® Attention ã¹ã³ã¢ã®åèš
ãããéèŠåºŠåæ
åã¬ã€ã€ãŒã®Attention ãš SSM ãããã®çžå¯ŸçãªéèŠæ§ãåæããããã«ããããããåé€ããŠæçµçãªç²ŸåºŠãèšé²ããŸãããåæã®çµæã以äžã®ããšãæããã«ãªããŸããã
åãã¬ã€ã€ãŒã®Â Attention / SSM ãããã®çžå¯ŸçãªéèŠæ§ã¯å
¥åé©å¿ã§ãããã¿ã¹ã¯ã«ãã£ãŠç°ãªããŸããããã¯ãããŸããŸãªå
¥åã®åŠçã«ãããŠãç°ãªã圹å²ãæããå¯èœæ§ãããããšã瀺åããŠããŸãã
æåã®ã¬ã€ã€ãŒã® SSM ãããã¯èšèªã¢ããªã³ã°ã¿ã¹ã¯ã«äžå¯æ¬ ã§ããããåé€ãããšãã©ã³ãã æšæž¬ã¬ãã«ã«ãŸã§å€§å¹
ã«ç²ŸåºŠãäœäžããŸãã
äžè¬çã«ãAttention / SSM ãããã 1 ã€åé€ãããšãHellaswag ã§ã¯ããããå¹³å 0.24%/1.1% 粟床ãäœäžããŸãã
å³ 7. Hellaswag ã® 1K ãµã³ãã«ã䜿çšããŠæž¬å®ãããåã¬ã€ã€ãŒã® Attention ãŸã㯠SSM ããããåé€ããåŸã®éæ粟床
ã¢ãã« ã¢ãŒããã¯ãã£ãšåŠç¿ã®ãã¹ã ãã©ã¯ãã£ã¹
ãã®ã»ã¯ã·ã§ã³ã§ã¯ãHymba 1.5B Base ãš Hymba 1.5B Instruct ã®äž»èŠã¢ãŒããã¯ãã£äžã®æ±ºå®äºé
ãšåŠç¿æ¹æ³ã®æŠèŠã«ã€ããŠèª¬æããŸãã
ã¢ãã« ã¢ãŒããã¯ãã£
ãã€ããªãã ã¢ãŒããã¯ãã£:
Mamba ã¯èŠçŽã«åªããéåžžã¯çŸåšã®ããŒã¯ã³ã«ããéç¹ã眮ããŸããAttention ã¯ããæ£ç¢ºã§ã¹ãããã·ã§ãã ã¡ã¢ãªãšããŠæ©èœããŸããæšæºçãªã·ãŒã±ã³ã·ã£ã«èåã§ã¯ãªãã䞊åã«çµã¿åãããããšã§å©ç¹ãçµ±åããããšãã§ããŸããSSM ãš Attention ãããéã®ãã©ã¡ãŒã¿ãŒæ¯ã¯ 5:1 ãéžæããŸããã
Sliding Window Attention:
å®å
šãª Attention ããã㯠3 ã€ã®ã¬ã€ã€ãŒ (æåãæåŸãäžé) ã«ç¶æãããæ®ãã® 90% ã®ã¬ã€ã€ãŒã§ Sliding Window Attention ãããã䜿çšãããŸãã
Cross-layer KV ãã£ãã·ã¥å
±æ:
é£ç¶ãã 2 ã€ã® Attention ã¬ã€ã€ãŒéã«å®è£
ãããŸããããã¯ããããéã® GQA KV ãã£ãã·ã¥å
±æã«å ããŠè¡ãããŸãã
ã¡ã¿ããŒã¯ã³:
ãããã® 128 ããŒã¯ã³ã¯æåž«ãªãåŠç¿ãå¯èœã§ããã倧èŠæš¡èšèªã¢ãã« (LLM) ã«ããããšã³ããããŒåŽ©å£ã®åé¡ãåé¿ããAttention Sink çŸè±¡ãç·©åããã®ã«åœ¹ç«ã¡ãŸããããã«ãã¢ãã«ã¯ãããã®ããŒã¯ã³ã«äžè¬çãªç¥èãæ ŒçŽããŸãã
åŠç¿ã®ãã¹ã ãã©ã¯ãã£ã¹
äºååŠç¿:
2 段éã®ããŒã¹ã¢ãã«åŠç¿ãéžæããŸãããã¹ããŒãž 1 ã§ã¯ãäžå®ã®é«ãåŠç¿çãç¶æãããã£ã«ã¿ãªã³ã°ãããŠããªã倧èŠæš¡ãªã³ãŒãã¹ ããŒã¿ã®äœ¿çšããŸãããç¶ããŠãé«å質ã®ããŒã¿ãçšã㊠1e-5 ãŸã§ç¶ç¶çã«åŠç¿çãæžè¡°ãããŸããããã®ã¢ãããŒãã«ãããã¹ããŒãž 1 ã®ç¶ç¶çãªåŠç¿ãšåéãå¯èœã«ãªããŸãã
æ瀺ãã¡ã€ã³ãã¥ãŒãã³ã°:
æ瀺ã¢ãã«ã®èª¿æŽã¯ 3 ã€ã®æ®µéã§è¡ãããŸãããŸããSFT-1 ã¯ãã³ãŒããæ°åŠãé¢æ°åŒã³åºããããŒã« ãã¬ã€ããã®ä»ã®ã¿ã¹ã¯åºæã®ããŒã¿ã§åŠç¿ãå®æœãã匷åãªæšè«èœåãã¢ãã«ã«ä»äžããŸãã次ã«ãSFT-2 ã¯ã¢ãã«ã«äººéã®æ瀺ã«åŸãããšãæããŸããæåŸã«ãDPO ã掻çšããŠãã¢ãã«ã人éã®å¥œã¿ã«åãããã¢ãã«ã®å®å
šæ§ãé«ããŸãã
å³ 8. Hymba ã¢ãã« ãã¡ããªã«é©å¿ããåŠç¿ãã€ãã©ã€ã³
ããã©ãŒãã³ã¹ãšå¹çæ§ã®è©äŸ¡
1.5T ã®äºååŠç¿ããŒã¯ã³ã ãã§ãHymba 1.5B ã¢ãã«ã¯ãã¹ãŠã®å°èŠæš¡èšèªã¢ãã«ã®äžã§æé«ã®æ§èœãçºæ®ããTransformer ããŒã¹ã® LM ãããåªããã¹ã«ãŒããããšãã£ãã·ã¥å¹çãå®çŸããŸãã
äŸãã°ã13 å以äžã®ããŒã¯ã³æ°ã§äºååŠç¿ãããæã匷åãªããŒã¹ã©ã€ã³ã§ãã Qwen2.5 ã«å¯ŸããŠãã³ãããŒã¯ããå ŽåãHymba 1.5B ã¯å¹³å粟床ã 1.55%ãã¹ã«ãŒãããã 1.41 åããã£ãã·ã¥å¹çã 2.90 åã«åäžããŸãã2T æªæºã®ããŒã¯ã³ã§åŠç¿ãããæã匷åãªå°èŠæš¡èšèªã¢ãã«ãããªãã¡ h2o-danube2 ãšæ¯èŒãããšããã®æ¹æ³ã¯å¹³å粟床ã 5.41%ãã¹ã«ãŒãããã 2.45 åããã£ãã·ã¥å¹çã 6.23 åã«åäžããŠããŸãã
ã¢ãã«
ãã©ã¡ãŒã¿ãŒæ°
åŠç¿ããŒã¯ã³
ããŒã¯ã³
(1 ç§ããã)
ãã£ãã·ã¥
(MB)
MMLU 5-
shot
ARC-E 0-shot
ARC-C 0-shot
PIQA 0-shot
Wino. 0-shot
Hella. 0-shot
SQuAD -C
1-shot
å¹³å
OpenELM-1
1.1B
1.5T
246
346
27.06
62.37
19.54
74.76
61.8
48.37
45.38
48.57
Renev0.1
1.3B
1.5T
800
113
32.94
67.05
31.06
76.49
62.75
51.16
48.36
52.83
Phi1.5
1.3B
0.15T
241
1573
42.56
76.18
44.71
76.56
72.85
48
30.09
55.85
SmolLM
1.7B
1T
238
1573
27.06
76.47
43.43
75.79
60.93
49.58
45.81
54.15
Cosmo
1.8B
.2T
244
1573
26.1
62.42
32.94
71.76
55.8
42.9
38.51
47.2
h20dan-ube2
1.8B
2T
271
492
40.05
70.66
33.19
76.01
66.93
53.7
49.03
55.65
Llama 3.2 1B
1.2B
9T
535
262
32.12
65.53
31.39
74.43
60.69
47.72
40.18
50.29
Qwen2.5
1.5B
18T
469
229
60.92
75.51
41.21
75.79
63.38
50.2
49.53
59.51
AMDOLMo
1.2B
1.3T
387
1049
26.93
65.91
31.57
74.92
61.64
47.3
33.71
48.85
SmolLM2
1.7B
11T
238
1573
50.29
77.78
44.71
77.09
66.38
53.55
50.5
60.04
Llama3.2 3B
3.0B
9T
191
918
56.03
74.54
42.32
76.66
69.85
55.29
43.46
59.74
Hymba
1.5B
1.5T
664
79
51.19
76.94
45.9
77.31
66.61
53.55
55.93
61.06
è¡š 2. Hymba 1.5B ããŒã¹ ã¢ãã«ã®çµæ
æ瀺ã¢ãã«
Hymba 1.5B Instruct ã¢ãã«ã¯ãå
šã¿ã¹ã¯å¹³åã§æé«ã®ããã©ãŒãã³ã¹ãéæããçŽè¿ã®æé«æ§èœã¢ãã«ã§ãã Qwen 2.5 Instruct ãçŽ 2% äžåããŸãããç¹ã«ãHymba 1.5B 㯠GSM8K/GPQA/BFCLv2 ã§ããããã 58.76/31.03/46.40 ã®ã¹ã³ã¢ã§ä»ã®ãã¹ãŠã®ã¢ãã«ãäžåã£ãŠããŸãããããã®çµæã¯ãç¹ã«è€éãªæšè«èœåãå¿
èŠãšããåéã«ãããŠãHymba 1.5B ã®åªäœæ§ã瀺ããŠããŸãã
ã¢ãã«
ãã©ã¡ãŒã¿ãŒæ°
MMLU â
IFEval â
GSM8K â
GPQA â
BFCLv2 â
å¹³åâ
SmolLM
1.7B
27.80
25.16
1.36
25.67
-*
20.00
OpenELM
1.1B
25.65
6.25
56.03
21.62
-*
27.39
Llama 3.2
1.2B
44.41
58.92
42.99
24.11
20.27
38.14
Qwen2.5
1.5B
59.73
46.78
56.03
30.13
43.85
47.30
SmolLM2
1.7B
49.11
55.06
47.68
29.24
22.83
40.78
Hymba 1.5B
1.5B
52.79
57.14
58.76
31.03
46.40
49.22
è¡š 3. Hymba 1.5B Instruct ã¢ãã«ã®çµæ
ãŸãšã
æ°ãã Hymba ãã¡ããªã®å°èŠæš¡èšèªã¢ãã«ã¯ããã€ããªãã ããã ã¢ãŒããã¯ãã£ãæ¡çšããAttention ãããã®é«è§£åãªèšæ¶èœåãš SSM ãããã®å¹ççãªã³ã³ããã¹ãã®èŠçŽãçµã¿åãããŠããŸããHymba ã®ããã©ãŒãã³ã¹ãããã«æé©åããããã«ãåŠç¿å¯èœãªã¡ã¿ããŒã¯ã³ãå°å
¥ãããAttention ããããš SSM ãããã®äž¡æ¹ã§åŠç¿æžã¿ãã£ãã·ã¥ãšããŠæ©èœããé¡èãªæ
å ±ã«æ³šç®ããã¢ãã«ã®ç²ŸåºŠã匷åããŸãããHymba ã®ããŒãããããå
æ¬çãªè©äŸ¡ãã¢ãã¬ãŒã·ã§ã³ç 究ãéããŠãHymba ã¯å¹
åºãã¿ã¹ã¯ã«ããã£ãŠæ°ããªæå
端ã®ããã©ãŒãã³ã¹ã確ç«ããæ£ç¢ºããšå¹çæ§ã®äž¡é¢ã§åªããçµæãéæããŸãããããã«ããã®ç 究ã¯ããã€ããªãã ããã ã¢ãŒããã¯ãã£ã®å©ç¹ã«é¢ãã貎éãªæŽå¯ããããããå¹ççãªèšèªã¢ãã«ã®ä»åŸã®ç 究ã«ææãªæ¹åæ§ã瀺ããŠããŸãã
Hybma 1.5B Base
ãš
Hymba 1.5B Instruct
ã®è©³çŽ°ã¯ãã¡ããã芧ãã ããã
è¬èŸ
ãã®ææã¯ãWonmin ByeonãZijia ChenãAmeya Sunil MahabaleshwarkarãShih-Yang LiuãMatthijs Van KeirsbilckãMin-Hung ChenãYoshi SuharaãNikolaus BinderãHanah ZhangãMaksim KhadkevichãYingyan Celine LinãJan KautzãPavlo MolchanovãNathan Horrocks ãªã©ãNVIDIA ã®å€ãã®ã¡ã³ããŒã®è²¢ç®ãªãããŠã¯å®çŸããŸããã§ããã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
Optimizing Large Language Models: An Experimental Approach to Pruning and Fine-Tuning LLama2 7B (倧èŠæš¡èšèªã¢ãã«ã®æé©å: LLama2 7B ã®åªå®ãšãã¡ã€ã³ãã¥ãŒãã³ã°ã®å®éšçã¢ãããŒã)
GTC ã»ãã·ã§ã³:
Accelerating End-to-End Large Language Models System using a Unified Inference Architecture and FP8 (çµ±äžæšè«ã¢ãŒããã¯ãã£ãš FP8 ãçšãããšã³ãããŒãšã³ãã®å€§èŠæš¡èšèªã¢ãã« ã·ã¹ãã ã®é«éå)
NGC ã³ã³ãããŒ:
Llama-3.1-Nemotron-70B-Ins
truct
NGC ã³ã³ãããŒ:
Llama-3-Swallow-70B-Instruct-v0.1
SDK:
NeMo Megatron |
https://developer.nvidia.com/blog/deploying-fine-tuned-ai-models-with-nvidia-nim/ | Deploying Fine-Tuned AI Models with NVIDIA NIM | For organizations adapting AI foundation models with domain-specific data, the ability to rapidly create and deploy fine-tuned models is key to efficiently delivering value with enterprise generative AI applications.
NVIDIA NIM
offers prebuilt, performance-optimized inference microservices for the latest AI foundation models, including
seamless deployment
of models customized using parameter-efficient fine-tuning (PEFT).
In some cases, it's ideal to use methods like continual pretraining, DPO, supervised fine-tuning (SFT), or model merging, where the underlying model weights are adjusted directly in the training or customization process, unlike PEFT with low-rank adaptation (LoRA). In these cases, inference software configuration for the model must be updated for optimal performance given the new weights.
Rather than burden you with this often lengthy process, NIM can automatically build a
TensorRT-LLM
inference engine performance optimized for the adjusted model and GPUs in your local environment, and then load it for running inference as part of a single-step model deployment process.
In this post, we explore how to rapidly deploy NIM microservices for models that have been customized through SFT by using locally built, performance-optimized TensorRT-LLM inference engines. We include all the necessary commands as well as some helpful options, so you can try it out on your own today.
Prerequisites
To run this tutorial, you need an NVIDIA-accelerated compute environment with access to 80 GB of GPU memory and which has
git-lfs
installed.
Before you can pull and deploy a NIM microservice in an NVIDIA-accelerated compute environment, you also need an NGC API key.
Navigate to the
Meta Llama 3 8B Instruct
model listing in the NVIDIA API Catalog.
Choose
Login
at the top right and follow the instructions.
When youâre logged in, choose
Build with this NIM
on the
model page
.
Choose
Self-Hosted API
and follow either option to get access to NIM microservices:
NVIDIA Developer Program membership with free access to NIM for research, development, and testing only.
The 90-day NVIDIA AI Enterprise license, which includes access to NVIDIA Enterprise Support.
After you provide the necessary details for your selected access method, copy your NGC API key and be ready to move forward with NIM. For more information, see
Launch NVIDIA NIM for LLMs
.
Getting started with NIM microservices
Provide your NGC CLI API key as an environment variable in your compute environment:
export NGC_API_KEY=<<YOUR API KEY HERE>>
You also must point to, create, and modify permissions for a directory to be used as a cache during the optimization process:
export NIM_CACHE_PATH=/tmp/nim/.cache
mkdir -p $NIM_CACHE_PATH
chmod -R 777 $NIM_CACHE_PATH
To demonstrate locally built, optimized TensorRT-LLM inference engines for deploying fine-tuned models with NIM, you need a model that has undergone customization through SFT. For this tutorial, use the
NVIDIA OpenMath2-Llama3.1-8B
model, which is a customization of
Metaâs Llama-3.1-8B
using the
OpenMathInstruct-2
dataset.
The base model must be available as a downloadable NIM for LLMs. For more information about downloadable NIM microservices, see the
NIM Type: Run Anywhere filter
in the NVIDIA API Catalog.
All you need is the weights to this model, which can be obtained in several ways. For this post, clone the model repository using the following commands:
git lfs install
git clone https://huggingface.co/nvidia/OpenMath2-Llama3.1-8B
export MODEL_WEIGHT_PARENT_DIRECTORY=$PWD
Now that you have the model weights collected, move on to the next step: firing up the microservice.
Selecting from available performance profiles
Based on your selected model and hardware configuration, the most applicable inference performance profile available is automatically selected. There are two available performance profiles for local inference engine generation:
Latency:
Focused on delivering a NIM microservice that is optimized for latency.
Throughput:
Focused on delivering a NIM microservice that is optimized for batched throughput.
For more information about supported features, including available precision, see the
Support Matrix
topic in the NVIDIA NIM documentation.
Example using an SFT model
Create a locally built TensorRT-LLM inference engine for OpenMath2-Llama3.1-8B by running the following commands:
docker run -it --rm --gpus all \
--user $(id -u):$(id -g)\
--network=host \
--shm-size=32GB \
-e NGC_API_KEY \
-e NIM_FT_MODEL=/opt/weights/hf/OpenMath2-Llama3.1-8B \
-e NIM_SERVED_MODEL_NAME=OpenMath2-Llama3.1-8B \
-v $NIM_CACHE_PATH:/opt/nim/.cache \
-v $MODEL_WEIGHT_PARENT_DIRECTORY:/opt/weights/hf \
nvcr.io/nim/meta/llama-3.1-8b-instruct:1.3.0
The command is nearly identical to the typical command you'd use to deploy a NIM microservice. In this case, you've added the extra
NIM_FT_MODEL
parameter, which points to the OpenMath2-Llama3.1-8B model.
With that, NIM builds an optimized inference engine locally. To perform inference using this new NIM microservice, run the following Python code example:
from openai import OpenAI
client = OpenAI(
base_url = "http://localhost:8000/v1",
api_key = "none"
)
completion = client.chat.completions.create(
model="OpenMath2-Llama3.1-8B",
messages=[{"role":"user","content":"What is your name?"}],
temperature=0.2,
top_p=0.7,
max_tokens=100,
stream=True
)
for chunk in completion:
if chunk.choices[0].delta.content is not None:
print(chunk.choices[0].delta.content, end="")
Video 1. How to Deploy Fine-Tuned AI Models
Building an optimized TensorRT-LLM engine with a custom performance profile
On
supported GPUs
, you can use a similar command to spin up your NIM microservice. Follow the
Model Profile
instructions to launch your microservice and determine which profiles are accessible for it.
export IMG_NAME="nvcr.io/nim/meta/llama-3.1-8b-instruct:1.3.0"
docker run --rm --gpus=all -e NGC_API_KEY=$NGC_API_KEY $IMG_NAME list-model-profiles
Assuming you're in an environment with two (or more) H100 GPUs, you should see the following profiles available:
tensorrt_llm-h100-bf16-tp2-pp1-throughput
tensorrt_llm-h100-bf16-tp2-pp1-latency
Re-run the command and provide an additional environment variable to specify the desired profile:
docker run --rm --gpus=all \
-e NGC_API_KEY \
-e NIM_FT_MODEL=/opt/weights/hf/OpenMath2-Llama3.1-8B \
-e NIM_SERVED_MODEL_NAME=OpenMath2-Llama3.1-8B \
-e NIM_MODEL_PROFILE=tensorrt_llm-h100-bf16-tp2-pp1-latency \
-v $NIM_CACHE_PATH:/opt/nim/.cache \
-v $MODEL_WEIGHT_PARENT_DIRECTORY:/opt/weights/hf \
$IMG_NAME
Now that you've relaunched your NIM microservice with the desired profile, use Python to interact with the model:
from openai import OpenAI
client = OpenAI(
base_url = "http://localhost:8000/v1",
api_key = "none"
)
completion = client.chat.completions.create(
model="llama-3.1-8b-instruct",
messages=[{"role":"user","content":"What is your name?"}],
temperature=0.2,
top_p=0.7,
max_tokens=100,
stream=True
)
for chunk in completion:
if chunk.choices[0].delta.content is not None:
print(chunk.choices[0].delta.content, end="")
Conclusion
Whether you're using
PEFT
or SFT methods for model customization, NIM accelerates customized model deployment for high-performance inferencing in a few simple steps. With optimized TensorRT-LLM inference engines built automatically in your local environment, NIM is unlocking new possibilities for rapidly deploying accelerated AI inferencing anywhere.
Learn more and get started today by visiting the NVIDIA
API catalog
and checking out the
documentation
. To engage with NVIDIA and the NIM microservices community, see the NVIDIA
NIM developer forum
. | https://developer.nvidia.com/ja-jp/blog/deploying-fine-tuned-ai-models-with-nvidia-nim/ | NVIDIA NIM ã§ãã¡ã€ã³ãã¥ãŒãã³ã°ããã AI ã¢ãã«ã®ããã〠| Reading Time:
2
minutes
ãã¡ã€ã³åºæã®ããŒã¿ã§ AI åºç€ã¢ãã«ãé©å¿ãããŠããäŒæ¥ã«ãšã£ãŠããã¡ã€ã³ãã¥ãŒãã³ã°ãããã¢ãã«ãè¿
éã«äœæãããããã€ããèœåã¯ãäŒæ¥ã®çæ AI ã¢ããªã±ãŒã·ã§ã³ã§å¹ççã«äŸ¡å€ãæäŸããããã®éµãšãªããŸãã
NVIDIA NIM
ã¯ãParapeter-efficient Fine-tuning (PEFT) ãçšããŠã«ã¹ã¿ãã€ãºããã¢ãã«ã®
ã·ãŒã ã¬ã¹ãªãããã€
ãªã©ãææ°ã® AI åºç€ã¢ãã«åãã«ãã«ããããããã©ãŒãã³ã¹ãæé©åããæšè«ãã€ã¯ããµãŒãã¹ãæäŸããŸãã
å Žåã«ãã£ãŠã¯ãLow-rank Adaptation (LoRA) ã䜿çšãã PEFT ãšã¯ç°ãªããç¶ç¶äºååŠç¿ãDPOãæåž«ãããã¡ã€ã³ãã¥ãŒãã³ã° (SFT: Supervised Fine-tuning)ãã¢ãã« ããŒãžãªã©ã®ææ³ãå©çšããåºç€ãšãªãã¢ãã«ã®éã¿ããã¬ãŒãã³ã°ãã«ã¹ã¿ãã€ãºã®éçšã§çŽæ¥èª¿æŽããã®ãçæ³çã§ãããã®ãããªå Žåãæ°ããéã¿ãèæ
®ããæé©ãªããã©ãŒãã³ã¹ãå®çŸããã«ã¯ãã¢ãã«ã®æšè«ãœãããŠã§ã¢æ§æãæŽæ°ããå¿
èŠããããŸãã
ãã®é·æéãèŠããããã»ã¹ã«è² æ
ãå²ãã®ã§ã¯ãªããNIM ã¯ã調æŽãããã¢ãã«ãš GPU ã«åãããŠæé©åãã
TensorRT-LLM
æšè«ãšã³ãžã³ãããŒã«ã«ç°å¢ã§ãã«ãããããŒããããããåäžã¹ãããã®ã¢ãã« ããã〠ããã»ã¹ã®äžç°ãšããŠæšè«ãå®è¡ã§ããŸãã
ãã®æçš¿ã§ã¯ãããã©ãŒãã³ã¹ãæé©åãã TensorRT-LLM æšè«ãšã³ãžã³ãããŒã«ã«ã§ãã«ãããŠãSFT ã§ã«ã¹ã¿ãã€ãºãããã¢ãã«ã«å¯Ÿãã NIM ãã€ã¯ããµãŒãã¹ãè¿
éã«ãããã€ããæ¹æ³ã説æããŸããå¿
èŠãªã³ãã³ããšäŸ¿å©ãªãªãã·ã§ã³ãã玹ä»ããŸãã®ã§ãæ¯éä»ãããè©Šããã ããã
åææ¡ä»¶
ãã®ãã¥ãŒããªã¢ã«ãå®è¡ããã«ã¯ã80 GB ã® GPU ã¡ã¢ãªãæ〠NVIDIA ã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã°ç°å¢ãš
git-lfs
ã®ã€ã³ã¹ããŒã«ãå¿
èŠã§ãã
NVIDIA ã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã°ç°å¢ã§ãNIM ãã€ã¯ããµãŒãã¹ã pull ããŠãããã€ããã«ã¯ãNGC API ããŒãå¿
èŠã§ãã
NVIDIA API ã«ã¿ãã°ã®ã¢ãã«äžèŠ§ãã
Meta Llama 3 8B Instruct
ã«ç§»åããŸãã
å³äžã®
[Login]
ãéžæããæ瀺ã«åŸã£ãŠãã ããã
ãã°ã€ã³ãããã
ã¢ãã« ããŒãž
ã§
[Build with this NIM]
ãéžæããŸãã
[Self-Hosted API]
ãéžæããããããã®ãªãã·ã§ã³ã«åŸã£ãŠãNIM ãã€ã¯ããµãŒãã¹ãžã¢ã¯ã»ã¹ããŸãã
NVIDIA éçºè
ããã°ã©ã ã®ã¡ã³ããŒã§ããã°ãç 究ãéçºããã¹ãã«éã NIM ã«ç¡æã§ã¢ã¯ã»ã¹ããããšãã§ããŸãã
90 æ¥éã® NVIDIA AI Enterprise ã©ã€ã»ã³ã¹ã«ã¯ãNVIDIA Enterprise ãµããŒããžã®ã¢ã¯ã»ã¹ãå«ãŸããŠããŸãã
éžæããã¢ã¯ã»ã¹æ¹æ³ã«å¿
èŠãªè©³çŽ°æ
å ±ãæäŸããããNGC API ããŒãã³ããŒããŠãNIM ãé²ããæºåãããŸãã詳现ã«ã€ããŠã¯ã
Launch NVIDIA NIM for LLMs
ãåç
§ããŠãã ããã
NIM ãã€ã¯ããµãŒãã¹ãã¯ããã
å©çšäžã®ã³ã³ãã¥ãŒãã£ã³ã°ç°å¢ã®ç°å¢å€æ°ãšããŠãNGC API ããŒãæäŸããŸãã
export NGC_API_KEY=<<YOUR API KEY HERE>>
ãŸããæé©ååŠçäžã«ãã£ãã·ã¥ãšããŠäœ¿çšãããã£ã¬ã¯ããªãäœæããŠãããŒããã·ã§ã³ãå€æŽããŠãæå®ããå¿
èŠããããŸãã
export NIM_CACHE_PATH=/tmp/nim/.cache
mkdir -p $NIM_CACHE_PATH
chmod -R 777 $NIM_CACHE_PATH
NIM ã§ãã¡ã€ã³ãã¥ãŒãã³ã°ãããã¢ãã«ããããã€ããããã«ãæé©ãª TensorRT-LLM æšè«ãšã³ãžã³ãããŒã«ã«ã§ãã«ãããå®èšŒã«ã¯ãSFT ã«ãã£ãŠã«ã¹ã¿ãã€ãºããã¢ãã«ãå¿
èŠã§ãããã®ãã¥ãŒããªã¢ã«ã§ã¯ã
OpenMathInstruct-2
ããŒã¿ã»ããã䜿çšããŠã
Meta ã® Llama-3.1-8B
ãã«ã¹ã¿ãã€ãºãã
NVIDIA OpenMath2-Llama3.1-8B
ã¢ãã«ã䜿çšããŸãã
ããŒã¹ ã¢ãã«ã¯ãããŠã³ããŒãå¯èœãª NIM for LLMs ãšããŠå©çšå¯èœã§ãªããã°ãªããŸãããããŠã³ããŒãå¯èœãª NIM ãã€ã¯ããµãŒãã¹ã®è©³çŽ°ã«ã€ããŠã¯ãNVIDIA API ã«ã¿ãã°ã®ã
NIM Type: Run Anywhere filter
ããåç
§ããŠãã ããã
å¿
èŠãªã®ã¯ãã®ã¢ãã«ã®éã¿ã ãã§ãããã¯ããŸããŸãªæ¹æ³ããããŸãããã®æçš¿ã§ã¯ã以äžã®ã³ãã³ãã䜿çšããŠã¢ãã« ãªããžããªãã¯ããŒã³ããŸãã
git lfs install
git clone https://huggingface.co/nvidia/OpenMath2-Llama3.1-8B
export MODEL_WEIGHT_PARENT_DIRECTORY=$PWD
ããã§ã¢ãã«ã®éã¿ãåéã§ããã®ã§ã次ã®ã¹ãããã®ãã€ã¯ããµãŒãã¹ã®èµ·åã«é²ã¿ãŸãã
å©çšå¯èœãªããã©ãŒãã³ã¹ ãããã¡ã€ã«ããéžæãã
éžæããã¢ãã«ãšããŒããŠã§ã¢ã®æ§æã«åºã¥ããŠãå©çšå¯èœãªãã®ã®äžããæãé©åãªæšè«ããã©ãŒãã³ã¹ ãããã¡ã€ã«ãèªåçã«éžæãããŸããããŒã«ã«æšè«ãšã³ãžã³ã®çæã«ã¯ã以äžã® 2 ã€ã®ããã©ãŒãã³ã¹ ãããã¡ã€ã«ãå©çšã§ããŸãã
ã¬ã€ãã³ã·:
ã¬ã€ãã³ã·ã«æé©åããã NIM ãã€ã¯ããµãŒãã¹ã®æäŸã«éç¹ã眮ããŸãã
ã¹ã«ãŒããã:
ããã ã¹ã«ãŒãããã«æé©åããã NIM ãã€ã¯ããµãŒãã¹ã®æäŸã«éç¹ã眮ããŸãã
å©çšå¯èœãªç²ŸåºŠãªã©ããµããŒãæ©èœã®è©³çŽ°ã«ã€ããŠã¯ãNVIDIA NIM ããã¥ã¡ã³ãã®
ãµããŒãæ
å ±
ã®ãããã¯ãåç
§ããŠãã ããã
SFT ã¢ãã«ã䜿çšããäŸ
以äžã®ã³ãã³ããå®è¡ããŠãããŒã«ã«ç°å¢ã§ãã«ããã OpenMath2-Llama3.1-8B çšã® TensorRT-LLM æšè«ãšã³ãžã³ãäœæããŸãã
docker run -it --rm --gpus all \
--user $(id -u):$(id -g)\
--network=host \
--shm-size=32GB \
-e NGC_API_KEY \
-e NIM_FT_MODEL=/opt/weights/hf/OpenMath2-Llama3.1-8B \
-e NIM_SERVED_MODEL_NAME=OpenMath2-Llama3.1-8B \
-v $NIM_CACHE_PATH:/opt/nim/.cache \
-v $MODEL_WEIGHT_PARENT_DIRECTORY:/opt/weights/hf \
nvcr.io/nim/meta/llama-3.1-8b-instruct:1.3.0
ãã®ã³ãã³ãã¯ãNIM ãã€ã¯ããµãŒãã¹ããããã€ããããã«äœ¿çšããå
žåçãªã³ãã³ããšã»ãŒåãã§ãããã®å Žåãè¿œå ã® NIM_FT_MODEL ãã©ã¡ãŒã¿ãŒãè¿œå ããOpenMath2-Llama3.1-8B ã¢ãã«ãæããŠããŸãã
ããã«ãããNIM ã¯æé©åãããæšè«ãšã³ãžã³ãããŒã«ã«ç°å¢ã§ãã«ãããŸãããã®æ°ãã NIM ãã€ã¯ããµãŒãã¹ã䜿çšããŠæšè«ãè¡ãã«ã¯ã以äžã® Python ã³ãŒã ãµã³ãã«ãå®è¡ããŸãã
from openai import OpenAI
client = OpenAI(
base_url = "http://localhost:8000/v1",
api_key = "none"
)
completion = client.chat.completions.create(
model="OpenMath2-Llama3.1-8B",
messages=[{"role":"user","content":"What is your name?"}],
temperature=0.2,
top_p=0.7,
max_tokens=100,
stream=True
)
for chunk in completion:
if chunk.choices[0].delta.content is not None:
print(chunk.choices[0].delta.content, end="")
åç» 1. ãã¡ã€ã³ãã¥ãŒãã³ã°ããã AI ã¢ãã«ããããã€ããæ¹æ³
ã«ã¹ã¿ã ããã©ãŒãã³ã¹ ãããã¡ã€ã«ã§æé©åããã TensorRT-LLM ãšã³ãžã³ã®ãã«ã
ãµããŒããããŠãã GPU
ãªããåæ§ã®ã³ãã³ãã䜿çšããŠãNIM ãã€ã¯ããµãŒãã¹ãèµ·åã§ããŸãã
ã¢ãã« ãããã¡ã€ã«
ã®æé ã«åŸã£ãŠãã€ã¯ããµãŒãã¹ãèµ·åããã©ã®ãããã¡ã€ã«ã«ã¢ã¯ã»ã¹ã§ãããã確èªããŸãã
export IMG_NAME="nvcr.io/nim/meta/llama-3.1-8b-instruct:1.3.0"
docker run --rm --gpus=all -e NGC_API_KEY=$NGC_API_KEY $IMG_NAME list-model-profiles
H100 GPU ã䜿çšããŠãããšä»®å®ãããšã以äžã®ãããã¡ã€ã«ãå©çšå¯èœã§ããããšãããããŸãã
tensorrt_llm-h100-bf16-tp2-pp1-latency
tensorrt_llm-h100-bf16-tp1-pp1-throughput
ã³ãã³ããåå®è¡ããç®çã®ãããã¡ã€ã«ãæå®ããç°å¢å€æ°ãè¿œå ããŸãã
docker run --rm --gpus=all \
--user $(id -u):$(id -g)\
--network=host \
--shm-size=32GB \
-e NGC_API_KEY \
-e NIM_FT_MODEL=/opt/weights/hf/OpenMath2-Llama3.1-8B \
-e NIM_SERVED_MODEL_NAME=OpenMath2-Llama3.1-8B \
-e NIM_MODEL_PROFILE=tensorrt_llm-h100-bf16-tp2-pp1-latency \
-v $NIM_CACHE_PATH:/opt/nim/.cache \
-v $MODEL_WEIGHT_PARENT_DIRECTORY:/opt/weights/hf \
$IMG_NAME
ç®çã®ãããã¡ã€ã«ã§ NIM ãã€ã¯ããµãŒãã¹ãåèµ·åããã®ã§ãPython ã䜿çšããŠã¢ãã«ãšããåãããŸãã
from openai import OpenAI
client = OpenAI(
base_url = "http://localhost:8000/v1",
api_key = "none"
)
completion = client.chat.completions.create(
model="llama-3.1-8b-instruct",
messages=[{"role":"user","content":"What is your name?"}],
temperature=0.2,
top_p=0.7,
max_tokens=100,
stream=True
)
for chunk in completion:
if chunk.choices[0].delta.content is not None:
print(chunk.choices[0].delta.content, end="")
ãŸãšã
ã¢ãã«ã®ã«ã¹ã¿ãã€ãºã«
PEFT
ãŸã㯠SFT ã䜿çšããŠããå Žåã§ããNIM ã¯ãé«æ§èœãªæšè«ã®ããã«ã«ã¹ã¿ãã€ãºãããã¢ãã«ã®ãããã€ãããããªã¹ãããã§ç°¡åã«é«éåããŸããæé©åããã TensorRT-LLM æšè«ãšã³ãžã³ãããŒã«ã«ç°å¢ã§èªåçã«ãã«ãããããšã§ãNIM ã¯ãé«éåããã AI æšè«ãã©ãã«ã§ãè¿
éã«ãããã€ã§ããããæ°ããªå¯èœæ§ãåŒãåºããŠããŸãã詳现ã«ã€ããŠã¯ã
NVIDIA API ã«ã¿ãã°
ã«ã¢ã¯ã»ã¹ããNVIDIA NIM ããã¥ã¡ã³ãã®
ãã¡ã€ã³ãã¥ãŒãã³ã°ãããã¢ãã«ã®ãµããŒã
ãã芧ãã ããã
NVIDIA NIM éçºè
ãã©ãŒã©ã
ã§ã¯ãNVIDIA ããã³ NIM ãã€ã¯ããµãŒãã¹ ã³ãã¥ããã£ãšã®äº€æµããããšãã§ããŸãã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
Kubernetes çš Oracle ã³ã³ãã㌠ãšã³ãžã³ã䜿çšãã OCI ã® NVIDIA Nemotron LLM ã®ãã¡ã€ã³ãã¥ãŒãã³ã°ãšããã〠(Oracle æäŸ)
GTC ã»ãã·ã§ã³:
äŒæ¥ãå é: 次äžä»£ AI ãããã€ãå®çŸããããŒã«ãšãã¯ããã¯
GTC ã»ãã·ã§ã³:
NVIDIA NeMo ã«ããå€æ§ãªèšèªã§ã®åºç€ãšãªã倧èŠæš¡èšèªã¢ãã«ã®ã«ã¹ã¿ãã€ãº
NGC ã³ã³ãããŒ:
Phind-CodeLlama-34B-v2-Instruct
NGC ã³ã³ãããŒ:
Phi-3-Mini-4K-Instruct
NGC ã³ã³ãããŒ:
Mistral-NeMo-Minitron-8B-Instruct |
https://developer.nvidia.com/blog/mastering-llm-techniques-data-preprocessing/ | Mastering LLM Techniques: Data Preprocessing | The advent of
large language models (LLMs)
marks a significant shift in how industries leverage AI to enhance operations and services. By automating routine tasks and streamlining processes, LLMs free up human resources for more strategic endeavors, thus improving overall efficiency and productivity.
Training and
customizing LLMs
for high accuracy is fraught with challenges, primarily due to their dependency on high-quality data. Poor data quality and inadequate volume can significantly reduce model accuracy, making dataset preparation a critical task for AI developers.
Datasets frequently contain duplicate documents, personally identifiable information (PII), and formatting issues. Some datasets even house toxic or harmful information that poses risks to users. Training models on these datasets without proper processing can result in higher training time and lower model quality. Another significant challenge is the scarcity of data. Model builders are running out of publicly available data to train on, prompting many to turn to third-party vendors or generate synthetic data using advanced LLMs.
In this post, we will describe data processing techniques and best practices for optimizing LLM performance by improving data quality for training. We will introduce
NVIDIA NeMo Curator
and how it addresses these challenges, demonstrating real-world data processing use cases for LLMs.
Text processing pipelines and best practices
Dealing with the preprocessing of large data is nontrivial, especially when the dataset consists of mainly web-scraped data which is likely to contain large amounts of ill-formatted, low-quality data.
Figure 1. Text processing pipelines that can be built using NeMo Curator
Figure 1 shows a comprehensive text processing pipeline, including the following steps at a high-level:
Download the dataset from the source and extract to a desirable format such as JSONL.
Apply preliminary text cleaning, such as Unicode fixing and language separation.
Apply both standard and custom-defined filters to the dataset based on specific quality criteria.
Perform various levels of deduplication (exact, fuzzy, and semantic).
Selectively apply advanced quality filtering, including model-based quality filtering, PII redaction, distributed data classification, and task decontamination.
Blend curated datasets from multiple sources to form a unified dataset.
The sections below dive deeper into each of these stages.
Download and extract text
The initial step in data curation involves downloading and preparing datasets from various common sources such as Common Crawl, specialized collections such as arXiv and PubMed, or private on-premises datasets, each potentially containing terabytes of data.
This crucial phase requires careful consideration of storage formats and extraction methods, as publicly hosted datasets often come in compressed formats (for example, .warc.gz, tar.gz, or zip files) that need to be converted to more manageable formats (such as .jsonl or .parquet) for further processing.
Preliminary text cleaning
Unicode fixing and language identification represent crucial early steps in the data curation pipeline, particularly when dealing with large-scale web-scraped text corpora. This phase addresses two fundamental challenges: improperly decoded Unicode characters, and the presence of multiple languages within the dataset.
Unicode formatting issues often arise from incorrect character encoding or multiple encoding/decoding cycles. Common problems include special characters appearing as garbled sequences (for example, "café" appearing as "café"). Language identification and separation are equally important, especially for curators who are interested in curating monolingual datasets. Moreover, some of the data curation steps, such as heuristic filtering and model-based quality classifiers, are language-specific.
This preliminary preprocessing step ensures clean, properly encoded text in identified languages, forming the foundation for all subsequent curation steps.
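As a concrete illustration of this step, the sketch below fixes broken Unicode and tags each document with a detected language. The library choices (ftfy and langdetect) are stand-ins; NeMo Curator ships its own Unicode-fixing and language-identification modules:
# pip install ftfy langdetect
import ftfy
from langdetect import detect

def clean_and_tag(record):
    text = ftfy.fix_text(record["text"]).strip()  # repair mojibake and broken encodings
    if not text:
        return None
    try:
        lang = detect(text)                       # e.g. 'en', 'ja', 'vi'
    except Exception:                             # langdetect raises on empty/ambiguous input
        return None
    return {**record, "text": text, "language": lang}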
Heuristic filtering
Heuristic filtering employs rule-based metrics and statistical measures to identify and remove low-quality content.
The process typically evaluates multiple quality dimensions, such as document length, repetition patterns, punctuation distribution, and structural integrity of the text. Common heuristic filters include:
Word count filter:
Filters out snippets that are too brief to be meaningful or suspiciously long.
Boilerplate string filter:
Identifies and removes text containing excessive boilerplate content.
N-gram repetition filter:
Identifies repeated phrases at different lengths and removes documents with excessive repetition that might indicate low-quality or artificially generated content.
For heuristic filtering, the best practice is to implement a cascading approach. This enables more nuanced quality control while maintaining transparency in the filtering process. For improved performance, batch filtering can be implemented to process multiple documents simultaneously, significantly reducing computation time when dealing with large-scale datasets.
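A minimal cascading filter along these lines might look as follows; the thresholds and boilerplate phrases are illustrative assumptions and would normally be tuned per corpus:
from collections import Counter

def passes_heuristics(text, min_words=50, max_words=100_000, max_ngram_share=0.2, n=3):
    words = text.split()
    # 1. Word-count filter: drop snippets that are too brief or suspiciously long.
    if not (min_words <= len(words) <= max_words):
        return False
    # 2. Boilerplate string filter: drop documents dominated by boilerplate phrases.
    boilerplate = ("all rights reserved", "terms of service", "click here", "accept cookies")
    if sum(text.lower().count(b) for b in boilerplate) > 5:
        return False
    # 3. N-gram repetition filter: drop documents where the most frequent word
    #    trigram covers too large a share of the text.
    ngrams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    if ngrams:
        _, top_count = Counter(ngrams).most_common(1)[0]
        if top_count * n / len(words) > max_ngram_share:
            return False
    return True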
Deduplication
Deduplication is essential for improving model training efficiency, reducing computational costs, and ensuring data diversity. It helps prevent models from overfitting to repeated content and improves generalization. The process can be implemented through three main approaches: exact, fuzzy, and semantic deduplication. These form a comprehensive strategy for handling different types of duplicates in large-scale datasets, from identical copies to conceptually similar content.
Exact deduplication
Exact deduplication focuses on identifying and removing completely identical documents. This method generates hash signatures for each document and groups documents by their hashes into buckets, keeping only one document per bucket. While this method is computationally efficient, fast and reliable, itâs limited to detecting perfectly matching content and may miss semantically equivalent documents with minor variations.
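A hash-bucket implementation of exact deduplication fits in a few lines; this sketch keeps the first document seen for each hash:
import hashlib

def exact_dedup(documents):
    seen, unique = set(), []
    for doc in documents:                                    # each doc is {"id": ..., "text": ...}
        digest = hashlib.md5(doc["text"].encode("utf-8")).hexdigest()
        if digest not in seen:                               # one document per hash bucket
            seen.add(digest)
            unique.append(doc)
    return unique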
Fuzzy deduplication
Fuzzy deduplication addresses near-duplicate content using MinHash signatures and Locality-Sensitive Hashing (LSH) to identify similar documents.
The process involves the following steps:
Compute MinHash signatures for documents.
Use LSH to group similar documents into buckets. One document might belong to one or more buckets.
Compute Jaccard similarity between documents within the same buckets.
Based on the Jaccard similarity, transform the similarity matrix to a graph and identify connected components in the graph.
Documents within a connected component are considered fuzzy duplicates.
Remove identified duplicates from the dataset.
This method is particularly valuable for identifying content with minor modifications, detecting partial document overlaps, and finding documents with different formatting but similar content. It strikes a balance between computational efficiency and duplicate detection capability.
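The MinHash and LSH steps can be prototyped with the datasketch library, as in the sketch below; it stops at candidate-pair generation, omits the Jaccard-verification and connected-components steps, and is not the GPU-accelerated implementation NeMo Curator provides:
# pip install datasketch
from datasketch import MinHash, MinHashLSH

def fuzzy_duplicate_candidates(documents, threshold=0.8, num_perm=128):
    # documents: {doc_id: text}
    lsh = MinHashLSH(threshold=threshold, num_perm=num_perm)
    signatures = {}
    for doc_id, text in documents.items():
        m = MinHash(num_perm=num_perm)
        for token in set(text.lower().split()):
            m.update(token.encode("utf-8"))
        signatures[doc_id] = m
        lsh.insert(doc_id, m)                     # bucket similar documents together
    pairs = set()
    for doc_id, m in signatures.items():
        for other in lsh.query(m):                # documents sharing at least one bucket
            if other != doc_id:
                pairs.add(tuple(sorted((doc_id, other))))
    return pairs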
Semantic deduplication
Semantic deduplication represents the most sophisticated approach, employing advanced embedding models to capture semantic meaning combined with clustering techniques to group semantically similar content. Research has shown that semantic deduplication can effectively reduce dataset size while maintaining or improving model performance. Itâs especially valuable for identifying paraphrased content, translated versions of the same material, and conceptually identical information.
Semantic deduplication consists of the following steps (a minimal sketch follows the list):
Each data point is embedded using a pretrained model.
The embeddings are clustered into k clusters using k-means clustering.
Within each cluster, pairwise cosine similarities are computed.
Data pairs with cosine similarity above a threshold are considered semantic duplicates.
From each group of semantic duplicates within a cluster, one representative datapoint is kept and the rest are removed.
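The sketch below follows these steps with off-the-shelf components; the embedding model name, cluster count, and similarity threshold are assumptions for illustration:
# pip install sentence-transformers scikit-learn
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def semantic_duplicate_indices(texts, n_clusters=100, threshold=0.95):
    model = SentenceTransformer("all-MiniLM-L6-v2")          # assumed embedding model
    emb = model.encode(texts, normalize_embeddings=True)     # unit-norm vectors
    labels = KMeans(n_clusters=min(n_clusters, len(texts)), n_init=10,
                    random_state=0).fit_predict(emb)
    to_remove = set()
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        sims = emb[idx] @ emb[idx].T                         # cosine similarity of unit vectors
        for a in range(len(idx)):
            for b in range(a + 1, len(idx)):
                if sims[a, b] > threshold and idx[b] not in to_remove:
                    to_remove.add(int(idx[b]))               # keep one representative per group
    return to_remove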
Model-based quality filtering
Model-based quality filtering employs various types of models to evaluate and filter content based on quality metrics. The choice of model type significantly impacts both the effectiveness of filtering and the computational resources required, making it crucial to select the appropriate model for specific use cases.
Different types of models that can be used for quality filtering include:
N-gram based classifiers:
The simplest approach uses n-gram based bag-of-words classifiers like fastText, which excel in efficiency and practicality, as they require minimal training data (100,000 to 1,000,000 samples).
BERT-style classifiers:
BERT-style classifiers represent a middle-ground approach, offering better quality assessment through Transformer-based architectures. They can capture more complex linguistic patterns and contextual relationships, making them effective for quality assessment.
LLMs:
LLMs provide the most sophisticated quality assessment capabilities, leveraging their extensive knowledge to evaluate text quality. While they offer superior understanding of content quality, they have significant computational requirements thus they are best suited for smaller-scale applications, such as fine-tuning datasets.
Reward models:
Reward models represent a specialized category designed specifically for evaluating conversational data quality. These models can assess multiple quality dimensions simultaneously but similar to LLMs, they have significant computational requirements.
The optimal selection of quality filtering models should consider both the dataset scale and available computational resources. For large-scale pretraining datasets, combining lightweight models for initial filtering with advanced models for final quality assessment often provides the best balance of efficiency and effectiveness. For smaller, specialized datasets where quality is crucial, using models like LLMs or reward models becomes more feasible and beneficial.
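For the lightweight end of this spectrum, an n-gram classifier such as fastText can be trained on a modest labeled set. The label scheme and training file below are assumptions for the sketch:
# pip install fasttext
import fasttext

# Each line of train.txt: "__label__high_quality <document text>" or "__label__low_quality ..."
model = fasttext.train_supervised(input="train.txt", lr=0.5, epoch=5, wordNgrams=2)

def keep_document(text, threshold=0.9):
    labels, probs = model.predict(text.replace("\n", " "))   # fastText expects single-line input
    return labels[0] == "__label__high_quality" and probs[0] >= threshold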
PII redaction
Personally Identifiable Information (PII) redaction involves identifying and removing sensitive information from datasets to protect individual privacy and ensure compliance with data protection regulations.
This process is particularly important when dealing with datasets that contain personal information, from direct identifiers like names and social security numbers to indirect identifiers that could be used to identify individuals when combined with other data.
Modern PII redaction employs various techniques to protect sensitive information, including:
Replacing sensitive information with symbols (for example, XXX-XX-1234 for U.S. Social Security Numbers) while maintaining data format and structure.
Substituting sensitive data with non-sensitive equivalents that maintain referential integrity for analysis purposes.
Eliminating sensitive information when its presence is not necessary for downstream tasks.
Overall, PII redaction helps maintain data privacy, comply with regulations, and build trust with users while preserving the utility of their datasets for training and analysis purposes.
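A minimal regex-based pass over two common identifier types is sketched below; production pipelines (including NeMo Curator's PII modules) rely on NER models and cover many more entity types:
import re

PATTERNS = {
    "US_SSN": (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "XXX-XX-XXXX"),
    "EMAIL": (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "<EMAIL>"),
}

def redact_pii(text):
    for pattern, replacement in PATTERNS.values():
        text = pattern.sub(replacement, text)
    return text

print(redact_pii("Contact jane.doe@example.com, SSN 123-45-6789."))
# Contact <EMAIL>, SSN XXX-XX-XXXX.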
Distributed data classification
Data classification plays a vital role in data curation. This process helps organize and categorize data based on various attributes such as domain and quality, ensuring data is well-balanced and representative of different knowledge domains.
Domain classification helps LLMs understand the context and specific domain of input text by identifying and categorizing content based on subject matter. The domain information serves as valuable auxiliary data, enabling developers to build more diverse training datasets while identifying and filtering out potentially harmful or unwanted content. For example, using the AEGIS Safety Model, which classifies content into 13 critical risk categories, developers can effectively identify and filter harmful content from training data.
When dealing with pretraining corpora that often contain billions of documents, running inference for classification becomes computationally intensive and time-consuming. Therefore, distributed data classification is necessary to overcome these challenges. This is achieved by chunking the datasets across multiple GPU nodes to accelerate the classification task in a distributed manner.
Task decontamination
After training, LLMs are usually evaluated by their performance on downstream tasks consisting of unseen test data. Downstream task decontamination is a step that addresses the potential leakage of test data into training datasets, which can provide misleading evaluation results. The decontamination process typically involves several key steps:
Identifying potential downstream tasks and their test sets.
Converting test data into n-gram representations.
Searching for matching n-grams in the training corpus.
Removing or modifying contaminated sections while preserving document coherence.
This systematic approach helps ensure the effectiveness of decontamination while minimizing unintended impacts on data quality, ultimately contributing to more reliable model evaluation and development.
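The n-gram matching at the heart of this process can be sketched as follows; the 13-gram length is an assumption (a common choice in practice), and real pipelines also handle normalization and partial-overlap removal:
def build_test_ngrams(test_texts, n=13):
    grams = set()
    for t in test_texts:
        w = t.lower().split()
        grams.update(" ".join(w[i:i + n]) for i in range(len(w) - n + 1))
    return grams

def is_contaminated(doc_text, test_ngrams, n=13):
    words = doc_text.lower().split()
    return any(" ".join(words[i:i + n]) in test_ngrams
               for i in range(len(words) - n + 1))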
Blending and shuffling
Data blending and shuffling represent the final steps in the data curation pipeline, combining multiple curated datasets while ensuring proper randomization for optimal model training. This process is essential for creating diverse, well-balanced training datasets that enable better model generalization and performance. Data blending involves merging data from multiple sources into a unified dataset, creating more comprehensive and diverse training data. The blending process is implemented using two approaches:
Online: Data combination occurs during training
Offline: Datasets are combined before training
Each approach offers distinct advantages depending on the specific requirements of the training process and the intended use of the final dataset.
Synthetic data generation
Having navigated the intricacies of the preprocessing stage, we now confront a formidable challenge in the realm of LLM development: the scarcity of data. The insatiable appetite of LLMs for vast training datasets, even for fine-tuning purposes, frequently outstrips the availability of domain-specific or language-particular data. To this end,
synthetic data generation (SDG)
is a powerful approach that leverages LLMs to create artificial datasets that mimic real-world data characteristics while maintaining privacy and ensuring data utility. This process uses external LLM services to generate high-quality, diverse, and contextually relevant data that can be used for pretraining, fine-tuning, or evaluating other models.
SDG empowers LLMs by enabling adaptation to low-resource languages, supporting domain specialization, and facilitating knowledge distillation across models, making it a versatile tool for expanding model capabilities. SDG has become particularly valuable in scenarios where real data is scarce, sensitive, or difficult to obtain.
Figure 2. General synthetic data generation architecture with NeMo Curator
The synthetic data pipeline encompasses three key stages: Generate, Critique, and Filter.
Generate:
Use prompt engineering to generate synthetic data for various tasks. Taking
Nemotron-4
as an example, SDG is applied to generate training data for five different types of tasks: open-ended QA, closed-ended QA, writing assignments, coding, and math problems.
Critique:
Use methods like LLM reflection, LLM-as-judge, reward model inference, and other agents to evaluate the quality of synthetic data. The evaluation results can be used as feedback to SDG LLM to generate better results or filter out low quality data. A prime example is the
Nemotron-4-340B reward NIM
, which assesses data quality through five key attributes: Helpfulness, Correctness, Coherence, Complexity, and Verbosity. By setting appropriate thresholds for these attribute scores, the filtering process ensures that only high-quality synthetic data is retained, while filtering out low-quality or inappropriate content.
Filter:
Steps like deduplication and PII redaction to further improve SDG data quality.
Note, however, that SDG is not suitable in all cases. Hallucinations from external LLMs can introduce unreliable information, compromising data integrity. Additionally, the generated data's distribution may not align with the target distribution, potentially leading to poor real-world performance. In such cases, using SDG could actually harm the system's effectiveness rather than improve it.
Data processing for building sovereign LLMs
As noted previously, open-source LLMs excel in English but struggle with other languages, especially those of Southeast Asia. This is primarily due to a lack of training data in these languages, limited understanding of local cultures, and insufficient tokens to capture unique linguistic structures and expressions.
To fully meet customer needs, enterprises in non-English-speaking countries must go beyond generic models and customize them to capture the nuances of their local languages, ensuring a seamless and impactful customer experience. For example, using NeMo Curator, Viettel Solutions processed
high-quality Vietnamese data
to increase accuracy by 10%, reduce the dataset size by 60% and accelerate training time by 3x.
The main steps for this use case are:
Download several Vietnamese and multilingual datasets (Wikipedia, Vietnamese news corpus,
OSCAR
, and C4) and convert to Parquet for efficient handling and processing of large datasets.
Combine, standardize, and shard the data into a single dataset.
Apply Unicode reformatting, exact deduplication, and quality filtering (heuristic and classifier-based); a library-agnostic sketch of these steps appears below.
You can
follow along with the full tutorial
.
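Here is that sketch, using pandas rather than the NeMo Curator API; the 'text' column name, the shard paths, and the word-count bounds are assumptions for illustration.

```python
# Library-agnostic sketch of the steps above: Parquet loading, Unicode cleanup,
# exact deduplication, and a simple heuristic quality filter.
import hashlib
import unicodedata
import pandas as pd

def load_and_standardize(paths: list) -> pd.DataFrame:
    """Combine several Parquet shards into one DataFrame with a 'text' column."""
    frames = [pd.read_parquet(p)[["text"]] for p in paths]
    return pd.concat(frames, ignore_index=True)

def clean_unicode(text: str) -> str:
    """Normalize to NFC so visually identical strings compare equal."""
    return unicodedata.normalize("NFC", text).strip()

def curate(df: pd.DataFrame, min_words: int = 50, max_words: int = 100_000) -> pd.DataFrame:
    df["text"] = df["text"].map(clean_unicode)

    # Exact deduplication: hash each document and keep one copy per hash.
    df["doc_hash"] = df["text"].map(lambda t: hashlib.md5(t.encode("utf-8")).hexdigest())
    df = df.drop_duplicates(subset="doc_hash").drop(columns="doc_hash")

    # Heuristic quality filter: drop documents that are too short or too long.
    word_counts = df["text"].str.split().str.len()
    return df[(word_counts >= min_words) & (word_counts <= max_words)].reset_index(drop=True)

# Example usage with hypothetical shard paths:
# curated = curate(load_and_standardize(["wiki_vi.parquet", "news_vi.parquet"]))
```

In a production pipeline, the same stages would run as distributed, GPU-accelerated operators, but the logic of each step is the same.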
Improve data quality with NVIDIA NeMo Curator
So far, we have discussed the importance of data quality in improving the accuracy of LLMs and explored various data processing techniques. Developers can now try these techniques directly through
NeMo Curator
. It provides a customizable and modular interface that enables developers to build on top of it easily.
NeMo Curator uses NVIDIA RAPIDS GPU-accelerated libraries such as cuDF, cuML, and cuGraph, along with Dask, to speed up workloads across multi-node, multi-GPU systems, reducing processing time and scaling as needed. For example, by using GPUs to accelerate the data processing pipelines,
Zyphra reduced the total cost of ownership (TCO)
by 50% and processed the data 10x faster (from 3 weeks to 2 days).
To get started, check out the
NVIDIA/NeMo-Curator GitHub repository
and available
tutorials
that cover various data curation workflows, such as:
Data processing for pretraining
Data processing for customization
SDG pipelines
You can also gain access through a
NeMo framework container
and request enterprise support with an
NVIDIA AI Enterprise
license. | https://developer.nvidia.com/ja-jp/blog/mastering-llm-techniques-data-preprocessing/ | LLM ãã¯ããã¯ã®ç¿åŸ: ããŒã¿ã®ååŠç | Reading Time:
2
minutes
倧èŠæš¡èšèªã¢ãã« (LLM)
ã®åºçŸã¯ãäŒæ¥ã AI ã掻çšããŠæ¥åãšãµãŒãã¹ã匷åããæ¹æ³ã«å€§ããªå€åããããããŸãããLLM ã¯æ¥åžžçãªäœæ¥ãèªååããããã»ã¹ãåçåããããšã§ã人çãªãœãŒã¹ãããæŠç¥çãªåãçµã¿ã«å²ãåœãŠãããšã§ãå
šäœçãªå¹çæ§ãšçç£æ§ãåäžãããŸãã
LLM ãé«ç²ŸåºŠã«ãã¬ãŒãã³ã°ããã³
ã«ã¹ã¿ãã€ãº
ããã«ã¯ãé«å質ãªããŒã¿ãå¿
èŠãšãªããããå€ãã®èª²é¡ã䌎ããŸããããŒã¿ã®è³ªãäœããéãååã§ãªããšãã¢ãã«ã®ç²ŸåºŠã倧å¹
ã«äœäžããå¯èœæ§ããããããAI éçºè
ã«ãšã£ãŠããŒã¿ã»ããã®æºåã¯éèŠãªäœæ¥ã® 1 ã€ãšãªã£ãŠããŸãã
ããŒã¿ã»ããã«ã¯åŸã
ã«ããŠéè€ããããã¥ã¡ã³ããå人ãç¹å®ã§ããæ
å ± (PII)ããã©ãŒãããã«é¢ããåé¡ãååšããŸããããŒã¿ã»ããã®äžã«ã¯ããŠãŒã¶ãŒã«ãªã¹ã¯ãããããæ害ãªæ
å ±ãäžé©åãªæ
å ±ãå«ãŸããŠãããã®ãããããŸããé©åãªåŠçãè¡ããã«ãããã£ãããŒã¿ã»ããã§ã¢ãã«ããã¬ãŒãã³ã°ãããšããã¬ãŒãã³ã°æéãé·åŒããããã¢ãã«ã®å質ãäœäžããå ŽåããããŸãããã 1 ã€ã®å€§ããªèª²é¡ã¯ããŒã¿ã®äžè¶³ã§ããã¢ãã«éçºè
ã¯ãã¬ãŒãã³ã°çšã®å
¬éããŒã¿ã䜿ãæããã€ã€ãããå€ãã®äººã
ããµãŒãããŒãã£ã®ãã³ããŒã«äŸé Œããããé«åºŠãª LLM ã䜿çšããŠåæããŒã¿ãçæãããããããã«ãªã£ãŠããŸãã
ãã®èšäºã§ã¯ããã¬ãŒãã³ã°çšã®ããŒã¿ã®å質ãåäžããããšã§ LLM ã®ããã©ãŒãã³ã¹ãæé©åããããã®ããŒã¿åŠçãã¯ããã¯ãšãã¹ã ãã©ã¯ãã£ã¹ã«ã€ããŠèª¬æããŸãããŸãã
NVIDIA NeMo Curator
ã®æŠèŠããã³åè¿°ãã課é¡ãžã®å¯ŸåŠæ¹æ³ã説æããLLM ã®å®éã®ããŒã¿åŠçã®ãŠãŒã¹ ã±ãŒã¹ãã玹ä»ããŸãã
ããã¹ãåŠçãã€ãã©ã€ã³ãšãã¹ã ãã©ã¯ãã£ã¹
倧èŠæš¡ããŒã¿ã®ååŠçã¯å®¹æã§ã¯ãããŸãããç¹ã«ãããŒã¿ã»ãããäž»ã«Web ã¹ã¯ã¬ã€ãã³ã°ãããããŒã¿ã§æ§æãããŠããã倧éã®äžé©åãªãã©ãŒãããã®äœå質ããŒã¿ãå«ãŸããŠããå¯èœæ§ãé«ãå Žåã¯ãªãããã§ãã
å³ 1. NeMo Curator ã䜿çšããŠæ§ç¯ã§ããããã¹ãåŠçãã€ãã©ã€ã³
å³ 1 ã¯ã以äžã®æé ãå«ãå
æ¬çãªããã¹ãåŠçãã€ãã©ã€ã³ã®æŠèŠã瀺ããŠããŸãã
ãœãŒã¹ããããŒã¿ã»ãããããŠã³ããŒãããJSONL ãªã©ã®æãŸãããã©ãŒãããã§æœåºããŸãã
Unicode ã®ä¿®æ£ãèšèªã«ããåé¡ãªã©ãäºåçãªããã¹ã ã¯ãªãŒãã³ã°ãé©çšããŸãã
ç¹å®ã®å質åºæºã«åºã¥ããŠãæšæºçãªãã£ã«ã¿ãŒãšã«ã¹ã¿ã å®çŸ©ã®ãã£ã«ã¿ãŒã®äž¡æ¹ãããŒã¿ã»ããã«é©çšããŸãã
ããŸããŸãªã¬ãã«ã®éè€æé€ (å³å¯ãææ§ãæå³ç) ãå®è¡ããŸãã
ã¢ãã«ããŒã¹ã®å質ãã£ã«ã¿ãªã³ã°ãå人æ
å ± (PII) ã®åé€ã(åæ£åŠçã«ãã) ããŒã¿åé¡ãäžæµã¿ã¹ã¯ã®æ±æé€å»ãªã©ã®é«åºŠãªå質ãã£ã«ã¿ãªã³ã°ãå¿
èŠã«å¿ããŠéžæçã«é©çšããŸãã
è€æ°ã®ãœãŒã¹ããåéããã粟éžãããããŒã¿ã»ãããäžäœåããçµ±åããããŒã¿ã»ãããäœæããŸãã
以äžã®ã»ã¯ã·ã§ã³ã§ã¯ããããã®å段éã«ã€ããŠè©³ãã説æããŸãã
ããã¹ããããŠã³ããŒãããŠæœåº
ããŒã¿ ãã¥ã¬ãŒã·ã§ã³ã®æåã®ã¹ãããã§ã¯ã Common Crawl ã®ãããªãããŸããŸãªäžè¬çãªãœãŒã¹ãarXiv ã PubMed ãªã©ã®å°éçãªã³ã¬ã¯ã·ã§ã³ãèªç€Ÿä¿æã®ãã©ã€ããŒã ããŒã¿ãªã©ããããŒã¿ã»ãããããŠã³ããŒãããŠæºåããŸãããããã®ããŒã¿ã»ããã«ã¯ããããããã©ãã€ãåäœã®ããŒã¿ãå«ãŸããŠããå¯èœæ§ããããŸãã
ãã®éèŠãªãã§ãŒãºã§ã¯ãä¿å圢åŒãšæœåºæ¹æ³ãæ
éã«æ€èšããå¿
èŠããããŸããäžè¬ã«å
¬éãããã¹ããããŠããããŒã¿ã»ããã¯å§çž®åœ¢åŒ (äŸ: .warc.gzãtar.gzãzip ãã¡ã€ã«) ã§æäŸãããããšãå€ããããåŸç¶ã®åŠçã®ããã«ããæ±ããããåœ¢åŒ (.jsonl ã .parquet ãªã©) ã«å€æããå¿
èŠããããŸãã
äºåçãªããã¹ã ã¯ãªãŒãã³ã°
Unicode ã®ä¿®æ£ãšèšèªã«ããåé¡ã¯ãç¹ã«å€§èŠæš¡ãª Web ã¹ã¯ã¬ã€ãã³ã°ã«ããããã¹ã ã³ãŒãã¹ãæ±ãå ŽåãããŒã¿ ãã¥ã¬ãŒã·ã§ã³ ãã€ãã©ã€ã³ã®éèŠãªåæã¹ãããã§ãããã®ãã§ãŒãºã§ã¯ãäžé©åã«ãã³ãŒãããã Unicode æåãšãããŒã¿ã»ããå
ã«è€æ°ã®èšèªãååšãããšãã 2 ã€ã®åºæ¬çãªèª²é¡ã«å¯ŸåŠããŸãã
Unicode 圢åŒã«é¢ããåé¡ã¯ãå€ãã®å Žåãæåãšã³ã³ãŒãã®èª€ããããšã³ã³ãŒã/ãã³ãŒã ãµã€ã¯ã«ãè€æ°åå®è¡ãããããšã«ãã£ãŠçºçããŸããããããåé¡ãšããŠã¯ãç¹æ®æåãæååãããæåå (äŸ:ãcaféãããcaféããšè¡šç€ºããã) ãšããŠè¡šç€ºãããããšãæããããŸããèšèªã®èå¥ãšåé¡ã¯ãç¹ã«åäžèšèªã®ããŒã¿ã»ããã®ãã¥ã¬ãŒã·ã§ã³ã«é¢å¿ã®ããéçºè
ã«ãšã£ãŠã¯åæ§ã«éèŠã§ããããã«ããã¥ãŒãªã¹ãã£ã㯠ãã£ã«ã¿ãªã³ã°ãã¢ãã«ããŒã¹ã®å質åé¡åšãªã©ã®ããŒã¿ ãã¥ã¬ãŒã·ã§ã³ã®ã¹ãããã®äžéšã¯èšèªã«äŸåããŠããŸãã
ãã®äºåçãªååŠçã¹ãããã§ã¯ãèå¥ãããèšèªã§é©åã«ãšã³ã³ãŒããããã¯ãªãŒã³ãªããã¹ãã確ä¿ããããã®åŸã®ãã¥ã¬ãŒã·ã§ã³ã¹ãããã®åºç€ãšãªããŸãã
ãã¥ãŒãªã¹ãã£ã㯠ãã£ã«ã¿ãªã³ã°
ãã¥ãŒãªã¹ãã£ã㯠ãã£ã«ã¿ãªã³ã°ã§ã¯ãã«ãŒã«ããŒã¹ã®è©äŸ¡ææšãšçµ±èšç尺床ã䜿çšããŠãäœå質ãªã³ã³ãã³ããç¹å®ããåé€ããŸãã
ãã®ããã»ã¹ã¯éåžžãããã¥ã¡ã³ãã®é·ããç¹°ãè¿ããã¿ãŒã³ãå¥èªç¹ã®ååžãããã¹ãã®æ§é çæŽåæ§ãªã©ãè€æ°ã®å質åºæºã§è©äŸ¡ãããŸããäžè¬çãªãã¥ãŒãªã¹ãã£ã㯠ãã£ã«ã¿ãŒã«ã¯ä»¥äžã®ãããªãã®ããããŸãã
åèªæ°ãã£ã«ã¿ãŒ:
æå³ããªããªãã»ã©çãããããŸãã¯çãããã»ã©ã«é·ãããããã¹ãããã£ã«ã¿ãªã³ã°ããŸãã
å®åæãã£ã«ã¿ãŒ:
éå°ãªå®åæãå«ãããã¹ããç¹å®ããåé€ããŸãã
N-gram å埩ãã£ã«ã¿ãŒ:
ç°ãªãé·ãã§ç¹°ãè¿ããããã¬ãŒãºãç¹å®ããäœå質ãŸãã¯äººå·¥çã«çæãããã³ã³ãã³ãã§ããå¯èœæ§ãããéå°ãªå埩ãå«ãææžãåé€ããŸãã
ãã¥ãŒãªã¹ãã£ã㯠ãã£ã«ã¿ãªã³ã°ã®å Žåã¯ãã«ã¹ã±ãŒã ã¢ãããŒããæ¡ãã®ãæåã®æ¹æ³ã§ããããã«ããããã£ã«ã¿ãªã³ã° ããã»ã¹ã®éææ§ãç¶æããªãããããç¹çŽ°ãªå質管çãå¯èœã«ãªããŸããåŠçããã©ãŒãã³ã¹ãåäžãããããã«ãããã ãã£ã«ã¿ãªã³ã°ãæ¡çšããŠè€æ°ã®ããã¥ã¡ã³ããåæã«åŠçãããšå€§èŠæš¡ãªããŒã¿ã»ãããæ±ãéã®èšç®æéã倧å¹
ã«ççž®ããããšãã§ããŸãã
éè€æé€
éè€æé€ã¯ãã¢ãã«ã®ãã¬ãŒãã³ã°å¹çã®åäžãèšç®ã³ã¹ãã®åæžãããŒã¿ã®å€æ§æ§ã®ç¢ºä¿ã«äžå¯æ¬ ã§ããç¹°ãè¿ãåºçŸããã³ã³ãã³ãã«ã¢ãã«ãéå°é©åããã®ãé²ããæ±çšæ§ãé«ããŸãããã®ããã»ã¹ã¯ãå³å¯ãææ§ãæå³ãšãã 3 ã€ã®äž»ãªéè€æé€ã¢ãããŒããéããŠå®è£
ã§ããŸãããããã¯ãåäžã®ã³ããŒããæŠå¿µçã«é¡äŒŒããã³ã³ãã³ããŸã§ã倧èŠæš¡ããŒã¿ã»ããå
ã®ç°ãªãã¿ã€ãã®éè€ãåŠçããå
æ¬çãªæŠç¥ã圢æããŸãã
å³å¯ãªéè€æé€
å³å¯ãªéè€æé€ã¯ãå®å
šã«åäžã®ããã¥ã¡ã³ããèå¥ããåé€ããããšã«éç¹ã眮ããŠããŸãããã®æ¹æ³ã§ã¯ãããã¥ã¡ã³ãããšã«ããã·ã¥çœ²åãçæããããã·ã¥ããšã«ããã¥ã¡ã³ããã°ã«ãŒãåããŠãã±ããã«æ ŒçŽãããã±ããããšã« 1 ã€ã®ããã¥ã¡ã³ãã®ã¿ãæ®ããŸãããã®æ¹æ³ã¯èšç®å¹çãé«ããé«éãã€ä¿¡é Œæ§ãé«ãã®ã§ãããå®å
šã«äžèŽããã³ã³ãã³ãã®æ€åºã«éå®ããããããæå³çã«ã¯åçãªã®ã«ãããã«ç°ãªãææžãèŠéãå¯èœæ§ããããŸãã
ææ§ãªéè€æé€
ææ§ãªéè€æé€ã¯ãMinHash 眲åãšå±ææ§éæåããã·ã¥å (LSH: Locality-Sensitive Hashing) ã䜿çšããŠãé¡äŒŒããããã¥ã¡ã³ããèå¥ããã»ãŒéè€ããã³ã³ãã³ãã«å¯ŸåŠããŸãã
ãã®ããã»ã¹ã«ã¯ã以äžã®ã¹ããããå«ãŸããŸãã
ããã¥ã¡ã³ãã® MinHash 眲åãèšç®ããŸãã
LSH ã䜿çšããŠãé¡äŒŒããããã¥ã¡ã³ãããã±ããã«ã°ã«ãŒãåããŸãã1 ã€ã®ããã¥ã¡ã³ãã 1 ã€ä»¥äžã®ãã±ããã«å±ããå ŽåããããŸãã
åããã±ããå
ã®ããã¥ã¡ã³ãé㧠Jaccard é¡äŒŒåºŠãèšç®ããŸãã
Jaccard é¡äŒŒåºŠã«åºã¥ããŠãé¡äŒŒåºŠè¡åãã°ã©ãã«å€æããã°ã©ãå
ã®é£çµæåãç¹å®ããŸãã
é£çµæåå
ã®ããã¥ã¡ã³ãã¯ææ§ãªéè€ãšèŠãªãããŸãã
ç¹å®ããéè€ãããŒã¿ã»ããããåé€ããŸãã
ãã®æ¹æ³ã¯ã軜埮ãªå€æŽãå ããããã³ã³ãã³ãã®ç¹å®ãéšåçãªããã¥ã¡ã³ãã®éè€ã®æ€åºãç°ãªããã©ãŒãããã§ãããé¡äŒŒããã³ã³ãã³ããæã€ããã¥ã¡ã³ãã®æ€çŽ¢ã«ç¹ã«æçšã§ããèšç®å¹çãšéè€æ€åºèœåã®ãã©ã³ã¹ãåããŠããŸãã
æå³çãªéè€æé€
æå³çãªéè€æé€ã¯ãæãæŽç·Žãããã¢ãããŒãã§ãããé«åºŠãªåã蟌ã¿ã¢ãã«ã䜿çšããŠã»ãã³ãã£ãã¯ãªæå³ãæããã¯ã©ã¹ã¿ãªã³ã°æè¡ãšçµã¿åãããŠæå³çã«é¡äŒŒããã³ã³ãã³ããã°ã«ãŒãåããŸããç 究ã§ã¯ãæå³çãªéè€æé€ã¯ãã¢ãã«ã®ããã©ãŒãã³ã¹ãç¶æãŸãã¯æ¹åããªãããããŒã¿ã»ããã®ãµã€ãºãå¹æçã«çž®å°ã§ããããšã瀺ãããŠããŸããèšãæããããã³ã³ãã³ããåãçŽ æã®ç¿»èš³çãæŠå¿µçã«åäžã®æ
å ±ãç¹å®ããã®ã«ç¹ã«æçšã§ãã
æå³ã«ããéè€æé€ã¯ã以äžã®ã¹ãããã§æ§æãããŸãã
åããŒã¿ ãã€ã³ãããäºååŠç¿æžã¿ã¢ãã«ã䜿çšããŠåã蟌ãŸããŸãã
åã蟌ã¿ã¯ãk-means ã䜿çšã㊠k åã®ã¯ã©ã¹ã¿ãŒã«ã°ã«ãŒãåãããŸãã
åã¯ã©ã¹ã¿ãŒå
ã§ããã¢ããšã®ã³ãµã€ã³é¡äŒŒåºŠãèšç®ãããŸãã
éŸå€ãè¶
ããã³ãµã€ã³é¡äŒŒåºŠãæããããŒã¿ ãã¢ã¯ãæå³ã®éè€ãšèŠãªãããŸãã
ã¯ã©ã¹ã¿ãŒå
ã®æå³çãªéè€ã®åã°ã«ãŒãããã1 ã€ã®ä»£è¡šçãªããŒã¿ãã€ã³ããä¿æãããæ®ãã¯åé€ãããŸãã
ã¢ãã«ããŒã¹ã®å質ãã£ã«ã¿ãªã³ã°
ã¢ãã«ããŒã¹ã®å質ãã£ã«ã¿ãªã³ã°ã§ã¯ãããŸããŸãªçš®é¡ã®ã¢ãã«ã䜿çšããŠãå質ææšã«åºã¥ããŠã³ã³ãã³ããè©äŸ¡ããŠãã£ã«ã¿ãªã³ã°ããŸããã¢ãã«ã®çš®é¡ã®éžæã¯ããã£ã«ã¿ãªã³ã°ã®æå¹æ§ãšå¿
èŠãªèšç®ãªãœãŒã¹ã®äž¡æ¹ã«å€§ããªåœ±é¿ãåãŒããããç¹å®ã®ãŠãŒã¹ ã±ãŒã¹ã«é©åãªã¢ãã«ãéžæããããšãéèŠã§ãã
å質ãã£ã«ã¿ãªã³ã°ã«äœ¿çšã§ããã¢ãã«ã«ã¯ã以äžã®çš®é¡ããããŸãã
N-gram ããŒã¹ã®åé¡åš:
æãåçŽãªã¢ãããŒãã¯ãfastText ã®ãã㪠N-gram ããŒã¹ã® Bag-of-Words åé¡åšã䜿çšããæ¹æ³ã§ããå¿
èŠãªãã¬ãŒãã³ã° ããŒã¿ (10 äžïœ100 äžãµã³ãã«) ãæãå°ãªãæžããããå¹çæ§ãšå®çšæ§ã«åªããŠããŸãã
BERT ã¹ã¿ã€ã«ã®åé¡åš:
BERT ã¹ã¿ã€ã«ã®åé¡åšã¯äžéçãªã¢ãããŒãã§ãããTransformer ããŒã¹ã®ã¢ãŒããã¯ãã£ãéããŠãã質ã®é«ãè©äŸ¡ãæäŸããŸããããè€éãªèšèªãã¿ãŒã³ãæèäžã®é¢ä¿ãæããããšãã§ããå質è©äŸ¡ã«å¹æçã§ãã
LLM:
LLM ã¯ãããã¹ãã®å質è©äŸ¡ã«å¹
åºãç¥èã掻çšããæãæŽç·Žãããå質è©äŸ¡æ©èœãæäŸããŸããã³ã³ãã³ãã®å質ãããæ·±ãç解ã§ããŸãããèšç®èŠä»¶ãé«ãããããã¡ã€ã³ãã¥ãŒãã³ã°çšã®ããŒã¿ã»ãããªã©ãå°èŠæš¡ãªã¢ããªã±ãŒã·ã§ã³ã«åããŠããŸãã
å ±é
¬ã¢ãã«:
å ±é
¬ã¢ãã«ã¯ãäŒè©±ããŒã¿ã®å質ãè©äŸ¡ã«ç¹åãèšèšãããå°éã«ããŽãªã§ãããããã®ã¢ãã«ã¯è€æ°ã®å質åºæºãåæã«è©äŸ¡ã§ããŸãããLLM ãšåããé«ãèšç®èŠä»¶ãæ±ããããŸãã
æé©ãªå質ãã£ã«ã¿ãªã³ã° ã¢ãã«ã®éžæã«ã¯ãããŒã¿ã»ããã®èŠæš¡ãšå©çšå¯èœãªèšç®ãªãœãŒã¹ã®äž¡æ¹ãèæ
®ããå¿
èŠããããŸãã倧èŠæš¡ãªäºååŠç¿ããŒã¿ã»ããã®å Žåãåæãã£ã«ã¿ãªã³ã°ã«ã¯è»œéãªã¢ãã«ã䜿çšããæçµçãªå質è©äŸ¡ã«ã¯é«åºŠãªã¢ãã«ãçµã¿åãããããšã§ãå¹çæ§ãšæå¹æ§ã®ãã©ã³ã¹ãåŸãããŸããå質ãéèŠãšãªãå°èŠæš¡ã§å°éçãªããŒã¿ã»ããã®å Žåã¯ãLLM ãå ±é
¬ã¢ãã«ãªã©ã®ã¢ãã«ã䜿çšããããšããããå®çŸçã§æçãšãªããŸãã
PII ã®åé€
å人ãç¹å®ã§ããæ
å ± (PII) ã®åé€ã«ã¯ãå人ã®ãã©ã€ãã·ãŒãä¿è·ããããŒã¿ä¿è·èŠå¶ã«å¯Ÿããéµå®ã確å®ã«ããããã«ãããŒã¿ã»ããå
ã®æ©å¯æ
å ±ãèå¥ããã³åé€ããããšãå«ãŸããŸãã
ãã®ããã»ã¹ã¯ãæ°åã瀟äŒä¿éçªå·ãªã©ã®çŽæ¥çãªèå¥åãããä»ã®ããŒã¿ãšçµã¿åãããããšã§å人ãèå¥ã§ããéæ¥çãªèå¥åãŸã§ãå人æ
å ±ãå«ãããŒã¿ã»ãããæ±ãå Žåã«ã¯ç¹ã«éèŠã§ãã
ææ°ã® PII åé€ã§ã¯ãæ©å¯æ
å ±ãä¿è·ããããã«ã以äžãå«ãããŸããŸãªæè¡ãçšããããŠããŸãã
ããŒã¿åœ¢åŒãšæ§é ãç¶æããªãããæ©å¯æ
å ±ãèšå·ã«çœ®ãæãã (ããšãã°ãç±³åœç€ŸäŒä¿éçªå·ã®å Žå XXX-XX-1234 ã«çœ®ãæãã)ã
åæã®ç®çã§åç
§æŽåæ§ãç¶æããªãããæ©å¯ããŒã¿ãæ©å¯ã§ãªãåçã®ããŒã¿ã«çœ®ãæããã
äžæµã¿ã¹ã¯ã«å¿
èŠã§ãªãå Žåããã®æ©å¯æ
å ±ãåé€ããã
å
šäœãšã㊠PII ã®åé€ã¯ãããŒã¿ã®ãã©ã€ãã·ãŒãä¿è·ããèŠå¶ãéµå®ãããã¬ãŒãã³ã°ãšåæã®ç®çã§ããŒã¿ã»ããã®æçšæ§ãç¶æããªããããŠãŒã¶ãŒãšä¿¡é Œé¢ä¿ãæ§ç¯ããã®ã«åœ¹ç«ã¡ãŸãã
(åæ£åŠçã«ãã) ããŒã¿åé¡
ããŒã¿åé¡ã¯ãããŒã¿ ãã¥ã¬ãŒã·ã§ã³ã«ãããŠéèŠãªåœ¹å²ãæãããŸãããã®ããã»ã¹ã§ã¯ããã¡ã€ã³ãå質ãªã©å€æ§ãªå±æ§ã«åºã¥ããŠããŒã¿ãæŽçããåé¡ããããšã§ããŒã¿ã®ãã©ã³ã¹ãåããããŸããŸãªç¥èãã¡ã€ã³ã代衚ãããã®ãšãªãããã«ããŸãã
ãã¡ã€ã³åé¡ã¯ãäž»é¡ã«åºã¥ããŠã³ã³ãã³ããèå¥ããŠã«ããŽãªãŒåãããããšã§ãLLM ãå
¥åããã¹ãã®ã³ã³ããã¹ããç¹å®ã®ãã¡ã€ã³ãç解ããã®ã«åœ¹ç«ã¡ãŸãããã¡ã€ã³æ
å ±ã¯ãéçºè
ãæœåšçã«æ害ãŸãã¯äžèŠãªã³ã³ãã³ããç¹å®ãããã£ã«ã¿ãªã³ã°ããªãããããå€æ§ãªãã¬ãŒãã³ã° ããŒã¿ã»ãããæ§ç¯ããããšãå¯èœã«ãã貎éãªè£å©çæ
å ±ãšãªããŸããããšãã°ãã³ã³ãã³ãã 13 ã®é倧ãªãªã¹ã¯ ã«ããŽãªã«åé¡ãã AEGIS Safety Model ã䜿çšããããšã§ãéçºè
ã¯ãã¬ãŒãã³ã° ããŒã¿ããæ害ãªã³ã³ãã³ããå¹æçã«èå¥ãããã£ã«ã¿ãªã³ã°ããããšãã§ããŸãã
æ°ååãã®ããã¥ã¡ã³ããå«ãŸããŠããããšãå€ãäºååŠç¿ã³ãŒãã¹ãæ±ãå Žåãåé¡ãè¡ãããã®æšè«ãå®è¡ããã®ã«å€ãã®èšç®åŠçãšæéãå¿
èŠãšãªããŸãããããã£ãŠããããã®èª²é¡ãå
æããã«ã¯ãåæ£åŠçãé©çšã§ããããŒã¿åé¡ãå¿
èŠã§ããããã¯ãããŒã¿ã»ãããè€æ°ã® GPU ããŒãã«åå²ããããšã§ãåé¡ã¿ã¹ã¯ãé«éåããããšã«ãã£ãŠå®çŸãããŸãã
äžæµã¿ã¹ã¯ã®æ±æé€å»
ãã¬ãŒãã³ã°ã®åŸãLLM ã¯éåžžãèŠããªããã¹ã ããŒã¿ã§æ§æãããäžæµã¿ã¹ã¯ã®ããã©ãŒãã³ã¹ã«ãã£ãŠè©äŸ¡ãããŸããäžæµã¿ã¹ã¯ã®æ±æé€å»ã¯ããã¹ã ããŒã¿ããã¬ãŒãã³ã° ããŒã¿ã»ããã«æ··å
¥ãæŒæŽ©ããå¯èœæ§ã«å¯ŸåŠããã¹ãããã§ããããã¯æå³ããªãè©äŸ¡çµæããããããªã¹ã¯ãæããŸããæ±æé€å»ããã»ã¹ã«ã¯ãéåžžã以äžã®äž»èŠãªã¹ããããå«ãŸããŸãã
æœåšçãªäžæµã¿ã¹ã¯ãšãã®ãã¹ã ã»ãããç¹å®ããŸãã
ãã¹ã ããŒã¿ã N-gram è¡šçŸã«å€æããŸãã
ãã¬ãŒãã³ã° ã³ãŒãã¹ã§äžèŽãã N-gram ãæ€çŽ¢ããŸãã
ããã¥ã¡ã³ãã®æŽåæ§ãç¶æããªãããæ±æãããã»ã¯ã·ã§ã³ãåé€ãŸãã¯ä¿®æ£ããŸãã
ãã®äœç³»çãªã¢ãããŒãã¯ãããŒã¿ã®å質ã«å¯Ÿããæå³ããªã圱é¿ãæå°éã«æããªãããæ±æé€å»ã®å¹æã確å®ãªãã®ã«ããŠãæçµçã«ã¯ãããä¿¡é Œæ§ã®é«ãã¢ãã«ã®è©äŸ¡ãšéçºã«è²¢ç®ããŸãã
ãã¬ã³ããšã·ã£ããã«
ããŒã¿ã®ãã¬ã³ããšã·ã£ããã«ã¯ãããŒã¿ ãã¥ã¬ãŒã·ã§ã³ ãã€ãã©ã€ã³ã«ãããæçµã¹ãããã§ãããè€æ°ã®ãã¥ã¬ãŒã·ã§ã³ãããããŒã¿ã»ãããçµã¿åããããšåæã«é©åãªã©ã³ãã æ§ã確ä¿ããæé©ãªã¢ãã« ãã¬ãŒãã³ã°ãå®çŸããŸãããã®ããã»ã¹ã¯ãã¢ãã«ã®äžè¬åãšããã©ãŒãã³ã¹ãåäžããããå€æ§ã§ãã©ã³ã¹ã®åãããã¬ãŒãã³ã° ããŒã¿ã»ãããäœæããäžã§äžå¯æ¬ ã§ããããŒã¿ã®ãã¬ã³ãã§ã¯ãè€æ°ã®ãœãŒã¹ããã®ããŒã¿ãçµ±åããŠåäžã®ããŒã¿ã»ããã«çµåããããå
æ¬çã§å€æ§ãªãã¬ãŒãã³ã° ããŒã¿ãäœæããŸãããã¬ã³ã ããã»ã¹ã¯ã次㮠2 ã€ã®ã¢ãããŒãã䜿çšããŠå®è£
ãããŸãã
ãªã³ã©ã€ã³: ãã¬ãŒãã³ã°äžã«ããŒã¿ãçµåããã
ãªãã©ã€ã³: ãã¬ãŒãã³ã°åã«ããŒã¿ã»ãããçµåããã
ããããã®ã¢ãããŒãã«ã¯ããã¬ãŒãã³ã° ããã»ã¹ã®ç¹å®ã®èŠä»¶ãšæçµçãªããŒã¿ã»ããã®äœ¿çšç®çã«å¿ããŠç°ãªãå©ç¹ããããŸãã
åæããŒã¿ã®çæ
ååŠçãã§ãŒãºã®è€éãªããã»ã¹ãçµããŸããããçŸåšãLLM éçºã®åéã§ã¯ããŒã¿ã®äžè¶³ãšãã倧ããªèª²é¡ã«çŽé¢ããŠããŸããLLM ãåŠç¿çšããŒã¿ã»ããã倧éã«å¿
èŠãšããã®ã¯ããã¥ãŒãã³ã°ãç®çãšããå Žåã§ãåæ§ã§ããããã®é£œããªãèŠæ±ã¯ãç¹å®ã®ãã¡ã€ã³ãèšèªã«ç¹åããããŒã¿ã®å
¥æå¯èœæ§ãäžåãããšãå°ãªããããŸããããã®åé¡ã«å¯ŸåŠãã
åæããŒã¿çæ (SDG: Synthetic Data Generation)
ã¯ãLLM ã掻çšããŠããã©ã€ãã·ãŒã®ä¿è·ãšããŒã¿ã®æçšæ§ã確ä¿ããªãããçŸå®ã®ããŒã¿ç¹æ§ãæš¡å£ãã人工çãªããŒã¿ã»ãããçæãã匷åãªã¢ãããŒãã§ãããã®ããã»ã¹ã§ã¯å€éš LLM ãµãŒãã¹ã䜿çšããŠãäºååŠç¿ããã¡ã€ã³ãã¥ãŒãã³ã°ãä»ã®ã¢ãã«ã®è©äŸ¡ã«äœ¿çšã§ãããé«å質ã§å€æ§ãã€æèçã«é¢é£æ§ã®é«ãããŒã¿ãçæããŸãã
SDG ã¯ãäœãªãœãŒã¹èšèªã« LLM ãé©å¿ã§ããããã«ããããšã§ããã¡ã€ã³ã®å°éæ§ããµããŒãããã¢ãã«éã®ç¥èã®æœåºãä¿é²ããã¢ãã«æ©èœãæ¡åŒµããæ±çšçãªããŒã«ã«ãªããŸããSDG ã¯ãç¹ã«å®ããŒã¿ãäžè¶³ããŠããããæ©å¯ã§ãã£ãããååŸããã®ãå°é£ã ã£ããããã·ããªãªã«ãããŠãéèŠãªååšãšãªã£ãŠããŸãã
å³ 2. NeMo Curator ã«ããäžè¬çãªåæããŒã¿çæã¢ãŒããã¯ãã£
åæããŒã¿ ãã€ãã©ã€ã³ã«ã¯ãçæãæ¹è©ããã£ã«ã¿ãŒã® 3 ã€ã®äž»èŠãªã¹ãããããããŸãã
çæ:
ããã³ãã ãšã³ãžãã¢ãªã³ã°ã䜿çšããŠãããŸããŸãªã¿ã¹ã¯çšã®åæããŒã¿ãçæããŸãã
Nemotron-4
ãäŸã«ãšããšãSDG ã¯ã5 çš®é¡ã®ç°ãªãã¿ã¹ã¯ (èªç±åœ¢åŒ QAãéžæåŒ QAãèšè¿°åŒèª²é¡ãã³ãŒãã£ã³ã°ãæ°åŠåé¡) ã®ãã¬ãŒãã³ã° ããŒã¿ãçæããããã«é©çšãããŸãã
æ¹è©:
LLM ReflectionãLLM-as-judgeãå ±é
¬ã¢ãã«æšè«ããã®ä»ã®ãšãŒãžã§ã³ããªã©ã®ææ³ã䜿çšããŠãåæããŒã¿ã®å質ãè©äŸ¡ããŸããè©äŸ¡çµæ㯠SDG LLM ãžã®ãã£ãŒãããã¯ãšããŠäœ¿çšããããè¯ãçµæãçæããããäœå質ããŒã¿ããã£ã«ã¿ãªã³ã°ãããããããšãã§ããŸãã代衚çãªäŸã¯
Nemotron-4-340B reward NIM
ã§ããããã¯ã5 ã€ã®äž»èŠãªå±æ§ãããªãã¡ Helpfulness (æçšæ§)ãCorrectness (æ£ç¢ºæ§)ãCoherence (äžè²«æ§)ãComplexity (è€éæ§)ãVerbosity (åé·æ§) ãéããŠããŒã¿ã®å質ãè©äŸ¡ããŸãããããã®å±æ§ã¹ã³ã¢ã«é©åãªéŸå€ãèšå®ããããšã§ããã£ã«ã¿ãªã³ã°åŠçã§ã¯ãäœå質ãŸãã¯äžé©åãªã³ã³ãã³ããé€å€ããªãããé«å質ãªåæããŒã¿ã®ã¿ãä¿æãããããã«ãªããŸãã
ãã£ã«ã¿ãŒ:
éè€æé€ã PII ã®åé€ãªã©ã®ã¹ãããã§ãSDG ããŒã¿ã®å質ãããã«åäžãããŸãã
ãã ããSDG ããã¹ãŠã®ã±ãŒã¹ã«é©ããŠããããã§ã¯ãªãããšã«æ³šæããŠãã ãããå€éš LLM ã«ããå¹»èŠã¯ãä¿¡é Œæ§ã®äœãæ
å ±ããããããããŒã¿ã®æŽåæ§ãæãªãå¯èœæ§ããããŸããå ããŠãçæãããããŒã¿ã®ååžãã¿ãŒã²ããã®ååžãšäžèŽããªãå¯èœæ§ããããçŸå®äžçã®ããã©ãŒãã³ã¹ã«æªåœ±é¿ãåãŒãå¯èœæ§ããããŸãããã®ãããªå Žåã¯ãSDG ã䜿çšããããšã§ãã·ã¹ãã ã®å¹çæ§ãæ¹åããã©ãããããããäœäžãããå¯èœæ§ããããŸãã
ãœããªã³ AI LLM æ§ç¯ã®ããã®ããŒã¿åŠç
ãªãŒãã³ãœãŒã¹ LLM ã¯è±èªã§ã¯åªããŠããŸããããã®ä»ã®èšèªãç¹ã«æ±åã¢ãžã¢ã®èšèªã§ã¯èŠæŠããŠããŸãããã®äž»ãªåå ã¯ããããã®èšèªã®ãã¬ãŒãã³ã° ããŒã¿ã®äžè¶³ãçŸå°ã®æåã«å¯Ÿããç解ãéãããŠããããšãç¬èªã®èšèªæ§é ãšè¡šçŸãæããã®ã«ååãªããŒã¯ã³ãäžè¶³ããŠããããšã§ãã
è±èªå以å€ã®åœã
ã®äŒæ¥ã¯ã顧客ã®ããŒãºãå®å
šã«æºãããããæ±çšã¢ãã«ã«ãšã©ãŸãããçŸå°ã®èšèªã®ãã¥ã¢ã³ã¹ãæããããã«ã¢ãã«ãã«ã¹ã¿ãã€ãºããã·ãŒã ã¬ã¹ã§ã€ã³ãã¯ãã®ãã顧客äœéšã確ä¿ããå¿
èŠããããŸããäŸãã°ãViettel Solutions ã¯ãNeMo Curator ã䜿çšããŠã
é«å質ãªãããã èªããŒã¿
ãåŠçãã粟床ã 10% åäžãããããŒã¿ã»ããã®ãµã€ãºã 60% åæžãããã¬ãŒãã³ã°ã 3 åé«éåããŸããã
ãã®ãŠãŒã¹ ã±ãŒã¹ã®äž»ãªæé ã¯æ¬¡ã®ãšããã§ãã
ããã€ãã®ãããã èªããã³å€èšèªããŒã¿ã»ãã (Wikipediaããããã èªãã¥ãŒã¹ ã³ãŒãã¹ã
OSCAR
ãC4) ãããŠã³ããŒããã倧èŠæš¡ãªããŒã¿ã»ãããå¹ççã«åŠçããããã«ãParquet ã«å€æããŸãã
è€æ°ã®ããŒã¿ã»ãããçµåãæšæºåããåäžã®ããŒã¿ã»ããã«ã·ã£ãŒãããŸãã
Unicode ã®åãã©ãŒããããå³å¯ãªéè€æé€ãå質ãã£ã«ã¿ãªã³ã° (ãã¥ãŒãªã¹ãã£ãã¯ããã³åé¡åšããŒã¹) ãé©çšããŸãã
詳现ã¯ããã®
ãã¥ãŒããªã¢ã«
ãåç
§ããŠãã ããã
NVIDIA NeMo Curator ã«ããããŒã¿ã®å質åäž
ãããŸã§ãLLM ã®ç²ŸåºŠåäžã«ãããããŒã¿å質ã®éèŠæ§ã«ã€ããŠããããŠããŸããŸãªããŒã¿åŠçææ³ã«ã€ããŠèª¬æããŠããŸãããéçºè
ã¯ã
NeMo Curator
ãä»ããŠçŽæ¥ãããã®ææ³ãè©Šãããšãã§ããŸããNeMo Curator ã¯ãã«ã¹ã¿ãã€ãºå¯èœãªã¢ãžã¥ãŒã«åŒã®ã€ã³ã¿ãŒãã§ã€ã¹ãæäŸããŠãããããéçºè
ã¯ãããããŒã¹ã«ç°¡åã«æ§ç¯ããããšãã§ããŸãã
NeMo Curator ã¯ãcuDFãcuMLãcuGraphãDask ãªã©ã® NVIDIA RAPIDS GPU ã§é«éåãããã©ã€ãã©ãªã䜿çšããŠããã«ãããŒãããã«ã GPU ã«ãããã¯ãŒã¯ããŒããé«éåããå¿
èŠã«å¿ããŠã¹ã±ãŒã«ãããåŠçæéãåæžã§ããŸããäŸãã°ãGPU ã䜿çšããŠããŒã¿åŠçã®ãã€ãã©ã€ã³ãé«éåããããšã§ã
Zyphra ã¯ç·ææã³ã¹ã (TCO)
ã 50% åæžããããŒã¿ã 10 åé«éã«åŠçããŠããŸã (3 é±éãã 2 æ¥é)ã
ãŸãã¯ã
NVIDIA/NeMo-Curator GitHub ãªããžããª
ãšã以äžã®ããŸããŸãªããŒã¿ ãã¥ã¬ãŒã·ã§ã³ã®ã¯ãŒã¯ãããŒã網çŸ
ããŠãã
ãã¥ãŒããªã¢ã«
ãã芧ãã ããã
äºååŠç¿ã®ããã®ããŒã¿åŠç
ã«ã¹ã¿ãã€ãºã®ããã®ããŒã¿åŠç
SDG ãã€ãã©ã€ã³
ãŸãã
NeMo ãã¬ãŒã ã¯ãŒã¯ ã³ã³ãããŒ
ãä»ããŠã¢ã¯ã»ã¹ãã
NVIDIA AI Enterprise
ã©ã€ã»ã³ã¹ã§ãšã³ã¿ãŒãã©ã€ãº ãµããŒãããªã¯ãšã¹ãããããšãã§ããŸãã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
ã»ãã¥ã¢ãªãšã³ã¿ãŒãã©ã€ãº ããŒã¿ã§ã«ã¹ã¿ã LLM ã¢ããªãæ°åã§æ§ç¯ãã
GTC ã»ãã·ã§ã³:
LLM ã€ã³ãã©ã®æ§ç¯ããã¬ãŒãã³ã°é床ã®é«éåãçæ AI ã€ãããŒã·ã§ã³ã®æšé²ã®ããã®ãšã³ãããŒãšã³ãã®ãœãªã¥ãŒã·ã§ã³ã®èšèš (Aivres æäŸ)
NGC ã³ã³ãããŒ:
genai-llm-playground
NGC ã³ã³ãããŒ:
rag-application-query-decomposition-agent
ãŠã§ãããŒ:
AI ã«ããå»çã¯ãŒã¯ãããŒã®å€é©: CLLM ãæ·±ãæãäžãã |
https://developer.nvidia.com/blog/expanding-ai-agent-interface-options-with-2d-and-3d-digital-human-avatars/ | Expanding AI Agent Interface Options with 2D and 3D Digital Human Avatars | When interfacing with
generative AI
applications, users have multiple communication options: text, voice, or through digital avatars.
Traditional chatbot or copilot applications have text interfaces where users type in queries and receive text-based responses. For hands-free communication, speech AI technologies like
automatic speech recognition
(ASR) and
text-to-speech
(TTS) facilitate verbal interactions, ideal for scenarios like phone-based customer service. Moreover, combining digital avatars with speech capabilities provides a more dynamic interface for users to engage visually with the application. According to Gartner, by 2028, 45% of organizations with more than 500 employees will leverage employee AI avatars to expand the capacity of human capital.
1
Digital avatars can vary widely in style: some use cases benefit from photorealistic 3D or 2D avatars, while other use cases work better with a stylized or cartoonish avatar.
3D Avatars
offer fully immersive experiences, showcasing lifelike movements and photorealism. Developing these avatars requires specialized software and technical expertise, as they involve intricate body animations and high-quality renderings.
2D Avatars
are quicker to develop and ideal for web-embedded solutions. They offer a streamlined approach to creating interactive AI, often requiring artists for design and animation but less intensive in terms of technical resources.
To kickstart your creation of a photo-realistic digital human, the
NVIDIA AI Blueprint on digital humans for customer service
can be tailored for various use cases. This functionality is now included with support for the NVIDIA Maxine
Audio2Face-2D
NIM microservice. Additionally, the blueprint now offers flexible rendering options, allowing 3D avatar developers to use
Unreal Engine
.
How to add a talking digital avatar to your agent application
In the AI Blueprint for digital humans, a user interacts with an
AI agent
that leverages
NVIDIA ACE
technology (Figure 1).
Figure 1. Architecture diagram for the NVIDIA AI Blueprint for digital humans
The audio input from the user is sent to the ACE agent which orchestrates the communication between various NIM microservices. The ACE agent uses the
Riva Parakeet NIM
to convert the audio to text, which is then processed by a RAG pipeline. The RAG pipeline uses the NVIDIA NeMo Retriever
embedding
and
reranking
NIM microservices, and an
LLM NIM
, to respond with relevant context from stored documents.
Finally, the response is converted back to speech via Riva TTS, animating the digital human using the Audio2Face-3D NIM or Audio2Face-2D NIM.
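Put together, one user turn flows through the microservices as a simple orchestration loop. The sketch below is conceptual only: every helper callable is a hypothetical placeholder for the corresponding client, not an actual NVIDIA SDK call.

```python
# Conceptual sketch of the request flow described above. Each helper callable is a
# hypothetical placeholder for the corresponding microservice client (Riva ASR,
# NeMo Retriever, an LLM NIM, Riva TTS, Audio2Face), not an actual SDK function.
def handle_user_turn(audio_in: bytes,
                     speech_to_text,    # audio -> transcript
                     retrieve_context,  # transcript -> relevant passages (embed + rerank)
                     generate_answer,   # transcript + passages -> grounded response
                     text_to_speech,    # response text -> audio
                     animate_avatar):   # audio -> 2D/3D avatar animation stream
    """Run one user turn through the ASR -> RAG -> TTS -> avatar pipeline."""
    query = speech_to_text(audio_in)
    passages = retrieve_context(query)
    answer = generate_answer(query, passages)
    audio_out = text_to_speech(answer)
    return animate_avatar(audio_out)
```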
Considerations when designing your AI agent application
In global enterprises, communication barriers across languages can slow down operations. AI-powered avatars with multilingual capabilities communicate across languages effortlessly. The digital human AI Blueprint provides conversational AI capabilities that simulate human interactions, accommodating users' speech styles and languages through Riva ASR and neural machine translation (NMT), along with intelligent interruption and barge-in support.
One of the key benefits of digital human AI agents is their ability to function as "always-on" resources for employees and customers alike. RAG-powered AI agents continuously learn from interactions and improve over time, providing more accurate responses and better user experiences.
For enterprises considering digital human interfaces, choosing the right avatar and rendering option depends on the use case and customization preferences.
Use Case
: 3D avatars are ideal for highly immersive use cases like in physical stores, kiosks or primarily one-to-one interactions, while 2D avatars are effective for web or mobile conversational AI use cases.
Development and Customization Preferences
: Teams with 3D and animation expertise can leverage their skillset to create an immersive and ultra-realistic avatar, while teams looking to iterate and customize quickly can benefit from the simplicity of 2D avatars.
Scaling Considerations:
Scaling is an important consideration when evaluating avatars and the corresponding rendering options. Stream throughput, especially for 3D avatars, depends heavily on the choice and quality of the character asset, while the desired output resolution and the chosen rendering option (Omniverse Renderer or Unreal Engine) play a critical role in determining the per-stream compute footprint.
NVIDIA Audio2Face-2D allows creation of lifelike 2D avatars from just a portrait image and voice input. Easy and simple configurations allow developers to quickly iterate and produce target avatars and animations for their digital human use cases. With real-time output and cloud-native deployment, 2D digital humans are ideal for interactive use cases and streaming avatars for interactive web-embedded solutions.
For example, enterprises looking to deploy AI agents across multiple devices and inserting digital humans into web- or mobile-first customer journeys, can benefit from the reduced hardware demands of 2D avatars.
3D photorealistic avatars provide an unmatched immersive experience for use cases demanding highly empathetic user engagement. NVIDIA Audio2Face-3D and Animation NIM microservices animate a 3D character by generating blendshapes along with subtle head and body animation to create an immersive, photorealistic avatar. The digital human AI Blueprint now supports two rendering options for 3D avatars, including Omniverse Renderer and Unreal Engine Renderer, providing developers the flexibility to integrate the rendering option of their choice.
To explore how digital humans can enhance your enterprise, visit the
NVIDIA API catalog
to learn about the different avatar options.
Getting started with digital avatars
For hands-on development with Audio2Face-2D and Unreal Engine NIM microservices,
apply for ACE Early Access
or dive into the digital human AI Blueprint
technical blog
to learn how you can add digital human interfaces to personalize chatbot applications.
1
Gartner®, Hype Cycle for the Future of Work, 2024 by Tori Paulman, Emily Rose McRae, etc., July 2024
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved. | https://developer.nvidia.com/ja-jp/blog/expanding-ai-agent-interface-options-with-2d-and-3d-digital-human-avatars/ | 2D ãš 3D ã®ããžã¿ã« ãã¥ãŒãã³ ã¢ãã¿ãŒã«ãã AI ãšãŒãžã§ã³ã ã€ã³ã¿ãŒãã§ã€ã¹ ãªãã·ã§ã³ã®æ¡åŒµ | Reading Time:
2
minutes
ãŠãŒã¶ãŒã
çæ AI
ã¢ããªã±ãŒã·ã§ã³ã䜿ã£ãŠããåãããéã«ã¯ãããã¹ããé³å£°ãããžã¿ã« ã¢ãã¿ãŒãªã©è€æ°ã®ã³ãã¥ãã±ãŒã·ã§ã³ ãªãã·ã§ã³ãå©çšããããšãã§ããŸãã
åŸæ¥ã®ãã£ããããããã³ãã€ããã ã¢ããªã±ãŒã·ã§ã³ã§ã¯ããŠãŒã¶ãŒãåãåãããå
¥åããããã¹ãããŒã¹ã®å¿çãåä¿¡ããããã¹ã ã€ã³ã¿ãŒãã§ã€ã¹ã䜿çšããŠããŸãããã³ãºããªãŒã®ã³ãã¥ãã±ãŒã·ã§ã³ã§ã¯ã
èªåé³å£°èªè
(ASR: Automatic Speech Recognition) ã
é³å£°åæ
(TTS: Text-To-Speech) ãªã©ã®é³å£° AI æè¡ã«ãããé»è©±ã䜿çšããã«ã¹ã¿ã㌠ãµãŒãã¹ãªã©ã®ã·ããªãªã«æé©ãªå£é ã«ããããåãã容æã«ãªããŸããããã«ãããžã¿ã« ã¢ãã¿ãŒã«é³å£°æ©èœãæãããããšã§ããŠãŒã¶ãŒãã¢ããªã±ãŒã·ã§ã³ãèŠèŠçã«äœ¿çšã§ããããããã€ãããã¯ãªã€ã³ã¿ãŒãã§ã€ã¹ãæäŸã§ããŸããGartner ã«ãããšã2028 幎ãŸã§ã«ãåŸæ¥å¡ 500 å以äžã®çµç¹ã® 45% ãã人çè³æ¬ã®èœåæ¡å€§ã®ããã«ã AI ã¢ãã¿ãŒã®åŸæ¥å¡ã掻çšããããã«ãªãããã§ãã
1
ããžã¿ã« ã¢ãã¿ãŒã®ã¹ã¿ã€ã«ã¯æ§ã
ã§ããã©ããªã¢ãªã¹ãã£ãã¯ãª 3D ãŸã㯠2D ã®ã¢ãã¿ãŒãé©ããŠããã±ãŒã¹ãããã°ãå®ååãããã¢ãã¿ãŒã挫ç»ã®ãããªã¢ãã¿ãŒã®æ¹ãé©ããŠããã±ãŒã¹ããããŸãã
3D ã¢ãã¿ãŒ
ã¯ããªã¢ã«ãªåããšåå®æ§ãåçŸããå®å
šãªæ²¡å
¥äœéšãæäŸããŸãããã®ãããªã¢ãã¿ãŒã®éçºã«ã¯ãè€éãªããã£ãŒ ã¢ãã¡ãŒã·ã§ã³ãé«å質ã®ã¬ã³ããªã³ã°ãå¿
èŠãšãªããããå°éçãªãœãããŠã§ã¢ãæè¡çãªå°éç¥èãå¿
èŠã«ãªããŸãã
2D ã¢ãã¿ãŒ
ã¯éçºãè¿
éã§ãWeb ã«çµã¿èŸŒã¿ãœãªã¥ãŒã·ã§ã³ã«æé©ã§ããã€ã³ã¿ã©ã¯ãã£ã㪠AI ã®äœæã«åççãªã¢ãããŒããæäŸãããã¶ã€ã³ãã¢ãã¡ãŒã·ã§ã³ã«ã¯ã¢ãŒãã£ã¹ããå¿
èŠã«ãªãããšãå€ãã§ãããæè¡çãªãªãœãŒã¹ã®é¢ã¯ããã»ã©è² æ
ã«ãªããŸããã
ãã©ããªã¢ãªã¹ãã£ãã¯ãªããžã¿ã« ãã¥ãŒãã³ã®äœæãå§ããã«ãããã
ã«ã¹ã¿ã㌠ãµãŒãã¹åãããžã¿ã« ãã¥ãŒãã³ã® NVIDIA AI Blueprint
ã¯ãããŸããŸãªãŠãŒã¹ ã±ãŒã¹ã«åãããŠã«ã¹ã¿ãã€ãºããããšãã§ããŸãããã®æ©èœã¯çŸåšãNVIDIA Maxine
Audio2Face-2D
NIM ãã€ã¯ããµãŒãã¹ã®ãµããŒãã«å«ãŸããŠããŸããããã«ããã® Blueprint ã§ã¯ã3D ã¢ãã¿ãŒéçºè
ã
Unreal Engine
ã䜿çšã§ãããããã¬ã³ããªã³ã°ã«æè»æ§ãæãããŠããŸãã
ãšãŒãžã§ã³ã ã¢ããªã±ãŒã·ã§ã³ã«äŒè©±ããããžã¿ã« ã¢ãã¿ãŒãè¿œå ããæ¹æ³
ããžã¿ã« ãã¥ãŒãã³åã AI Blueprint ã§ã¯ããŠãŒã¶ãŒã
NVIDIA ACE
æè¡ã掻çšãã
AI ãšãŒãžã§ã³ã
ãšå¯Ÿè©±ããŸã (å³ 1)ã
å³ 1. ããžã¿ã« ãã¥ãŒãã³åã NVIDIA AI Blueprint ã®ã¢ãŒããã¯ãã£
ãŠãŒã¶ãŒã«ããé³å£°å
¥åã¯ãããŸããŸãª NIM ãã€ã¯ããµãŒãã¹éã®éä¿¡ã調æŽãã ACE ãšãŒãžã§ã³ãã«éä¿¡ãããŸããACE ãšãŒãžã§ã³ãã¯ã
Riva Parakeet NIM
ã䜿çšããŠé³å£°ãããã¹ãã«å€æãããã®ããã¹ã㯠RAG ãã€ãã©ã€ã³ã§åŠçãããŸããRAG ãã€ãã©ã€ã³ã§ã¯ãNIM ãã€ã¯ããµãŒãã¹ã®
åã蟌ã¿
ãš
ãªã©ã³ã¯
ãè¡ã NVIDIA NeMo Retriever ãš
LLM NIM
ã䜿çšããŠãä¿åãããããã¥ã¡ã³ãããé¢é£ããã³ã³ããã¹ããçšããŠå¿çããŸãã
æåŸã«ãRiva TTS ãä»ããŠãã®å¿çãé³å£°ã«å€æããAudio2Face-3D NIM ãŸã㯠Audio2Face-2D NIM ã䜿çšããŠããžã¿ã« ãã¥ãŒãã³ãã¢ãã¡ãŒã·ã§ã³åããŸãã
AI ãšãŒãžã§ã³ã ã¢ããªã±ãŒã·ã§ã³ãèšèšããéã«èæ
®ãã¹ããã€ã³ã
ã°ããŒãã«äŒæ¥ã§ã¯ãèšèªã®å£ã«ããã³ãã¥ãã±ãŒã·ã§ã³ã®é害ãæ¥åã®åŠšããšãªãããšããããŸããå€èšèªæ©èœãåãã AI æèŒã¢ãã¿ãŒã䜿çšããã°ãèšèªã®å£ãè¶
ããåæ»ãªã³ãã¥ãã±ãŒã·ã§ã³ãåãããšãã§ããŸããããžã¿ã« ãã¥ãŒãã³ AI Blueprint ã¯ãRiva ASR ããã¥ãŒã©ã«æ©æ¢°ç¿»èš³ (NMT: Neural Machine Translation) ã«å ããã€ã³ããªãžã§ã³ããªå²ã蟌ã¿ãããŒãžã€ã³æ©èœãåãããŠãŒã¶ãŒã®è©±ãæ¹ãèšèªã«æè»ã«å¯Ÿå¿ã§ããã人éããã察話å AI ãå®çŸããŸãã
ããžã¿ã« ãã¥ãŒãã³ AI ãšãŒãžã§ã³ãã®äž»ãªå©ç¹ã® 1 ã€ã¯ãåŸæ¥å¡ãšé¡§å®¢ã®äž¡è
ã«ãšã£ãŠãåžžæ皌åããããªãœãŒã¹ãšããŠæ©èœã§ããããšã§ããRAG ãæèŒãã AI ãšãŒãžã§ã³ãã¯ããããšãããç¶ç¶çã«åŠç¿ããæéã®çµéãšãšãã«æ¹åããŠãããããããæ£ç¢ºãªå¯Ÿå¿ãšããåªãããŠãŒã¶ãŒäœéšãæäŸããããšãã§ããŸãã
ããžã¿ã« ãã¥ãŒãã³ ã€ã³ã¿ãŒãã§ã€ã¹ãæ€èšããŠããäŒæ¥ã«ãšã£ãŠãé©åãªã¢ãã¿ãŒãšã¬ã³ããªã³ã° ãªãã·ã§ã³ã®éžæã¯ããŠãŒã¹ ã±ãŒã¹ãã«ã¹ã¿ãã€ãºèšå®ã«äŸåããŸãã
ãŠãŒã¹ ã±ãŒã¹
: 3D ã¢ãã¿ãŒã¯ãå®åºèãããªã¹ã¯ (ç¡äººç«¯æ«) ãªã©ã䞻㫠1察 1 ã®ãããšãã®ãããªãéåžžã«æ²¡å
¥æã®é«ããŠãŒã¹ ã±ãŒã¹ã«æé©ã§ããã2D ã¢ãã¿ãŒã¯ãWeb ãã¢ãã€ã«ã®å¯Ÿè©±å AI ãŠãŒã¹ ã±ãŒã¹ã«å¹æçã§ãã
éçºãšã«ã¹ã¿ãã€ãºã®èšå®
: 3D ãã¢ãã¡ãŒã·ã§ã³ã®å°éç¥èãæã€ããŒã ã¯ããã®ã¹ãã«ã掻çšããŠæ²¡å
¥æã®ããè¶
ãªã¢ã«ãªã¢ãã¿ãŒãäœæã§ããŸããäžæ¹ãå埩äœæ¥ãã«ã¹ã¿ãã€ãºãè¿
éã«è¡ãããããŒã ã«ã¯ãã·ã³ãã«ãª 2D ã¢ãã¿ãŒãæå¹ã§ãã
ã¹ã±ãŒãªã³ã°ã®èæ
®ãã¹ããã€ã³ã
: ã¢ãã¿ãŒãšå¯Ÿå¿ããã¬ã³ããªã³ã° ãªãã·ã§ã³ãè©äŸ¡ããéã«ãã¹ã±ãŒãªã³ã°ã¯èæ
®ãã¹ãéèŠãªãã€ã³ãã§ããã¹ããªãŒã ã®ã¹ã«ãŒãããã¯ãç¹ã« 3D ã¢ãã¿ãŒã®å Žåã䜿çšãããã£ã©ã¯ã¿ãŒ ã¢ã»ããã®éžæãšå質ã«ãã£ãŠå€§ããç°ãªããŸããåžæããåºå解å床ãéžæããã¬ã³ããªã³ã° ãªãã·ã§ã³ (Omniverse Renderer ãŸã㯠Unreal Engine) ã¯ãã¹ããªãŒã ãããã®èšç®ãããããªã³ãã決å®ããäžã§éèŠãªåœ¹å²ãæãããŸãã
NVIDIA Audio2Face-2D ã§ã¯ãé¡åçãšé³å£°å
¥åã ãã§ãªã¢ã«ãª 2D ã¢ãã¿ãŒãäœæã§ããŸããç°¡åã§ã·ã³ãã«ãªæ§æã®ãããéçºè
ã¯ããžã¿ã« ãã¥ãŒãã³ã®ãŠãŒã¹ ã±ãŒã¹ã«åãããã¢ãã¿ãŒãã¢ãã¡ãŒã·ã§ã³ãè¿
éã«ç¹°ãè¿ãäœæã§ããŸãããªã¢ã«ã¿ã€ã åºåãšã¯ã©ãŠã ãã€ãã£ãã®ãããã€ã«ããã2D ããžã¿ã« ãã¥ãŒãã³ã¯ãã€ã³ã¿ã©ã¯ãã£ããªãŠãŒã¹ ã±ãŒã¹ããã€ã³ã¿ã©ã¯ãã£ã㪠Web çµã¿èŸŒã¿ãœãªã¥ãŒã·ã§ã³åãã®ã¹ããªãŒãã³ã° ã¢ãã¿ãŒã«æé©ã§ãã
ããšãã°ãè€æ°ã®ããã€ã¹ã« AI ãšãŒãžã§ã³ãããããã€ããWeb ãŸãã¯ã¢ãã€ã« ãã¡ãŒã¹ãã®ã«ã¹ã¿ã㌠ãžã£ãŒããŒã«ããžã¿ã« ãã¥ãŒãã³ãå°å
¥ããããšããŠããäŒæ¥ã«ã¯ã2D ã¢ãã¿ãŒã¯ããŒããŠã§ã¢èŠä»¶ã軜æžããã®ã§ã¡ãªããããããŸãã
3D ã®ãã©ããªã¢ãªã¹ãã£ãã¯ãªã¢ãã¿ãŒã¯ãé«ãå
±æãèŠæ±ããããŠãŒã¶ãŒ ãšã³ã²ãŒãžã¡ã³ããå¿
èŠãšãããŠãŒã¹ ã±ãŒã¹ã«ãæ¯é¡ã®ãªã没å
¥äœéšãæäŸããŸããNVIDIA Audio2Face-3D ãšã¢ãã¡ãŒã·ã§ã³ NIM ãã€ã¯ããµãŒãã¹ã¯ãç¹çŽ°ãªé éšãšèº«äœã®ã¢ãã¡ãŒã·ã§ã³ãšãšãã«ãã¬ã³ãã·ã§ã€ããçæãã没å
¥æã®ãããã©ããªã¢ãªã¹ãã£ãã¯ãªã¢ãã¿ãŒãäœæããããšã§ã3D ãã£ã©ã¯ã¿ãŒãã¢ãã¡ãŒã·ã§ã³åããŸããããžã¿ã« ãã¥ãŒãã³ AI Blueprint ã¯ã3D ã¢ãã¿ãŒã®ã¬ã³ããªã³ã° ãªãã·ã§ã³ããšããŠãOmniverse ã¬ã³ãã©ãŒãš Unreal-Engine ã¬ã³ãã©ãŒããµããŒãããŠãããéçºè
ãéžæããã¬ã³ããªã³ã° ãªãã·ã§ã³ãæè»ã«çµ±åã§ããããã«ãªããŸããã
ããžã¿ã« ãã¥ãŒãã³ãäŒæ¥ã匷åããæ¹æ³ã«ã€ããŠã¯ã
NVIDIA API ã«ã¿ãã°
ã«ã¢ã¯ã»ã¹ããŠãããŸããŸãªã¢ãã¿ãŒã®ãªãã·ã§ã³ãã芧ãã ããã
ããžã¿ã« ã¢ãã¿ãŒãå§ãã
Audio2Face-2D ãš Unreal Engine NIM ãã€ã¯ããµãŒãã¹ã䜿çšããå®è·µçãªéçºã«ã€ããŠã¯ã
ACE æ©æã¢ã¯ã»ã¹ã«ç³ã蟌ã
ããããžã¿ã« ãã¥ãŒãã³ AI Blueprint ã®
æè¡ããã°
ã«ã¢ã¯ã»ã¹ããŠããã£ããããã ã¢ããªã±ãŒã·ã§ã³ãããŒãœãã©ã€ãºããããã«ããžã¿ã« ãã¥ãŒãã³ ã€ã³ã¿ãŒãã§ã€ã¹ãè¿œå ããæ¹æ³ã«ã€ããŠåŠã¶ããšãã§ããŸãã
1
Gartner®, Hype Cycle for the Future of Work, 2024 by Tori Paulman, Emily Rose McRae, etc., July 2024
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
Enhancing the Digital Human Experience with Cloud Microservices Accelerated by Generative AI
GTC ã»ãã·ã§ã³:
Build a World of Interactive Avatars Based on NVIDIA Omniverse, AIGC, and LLM
NGC ã³ã³ãããŒ:
ACE ãšãŒãžã§ã³ã ãµã³ãã« ããã³ããšã³ã
SDK:
NVIDIA Tokkio
ãŠã§ãããŒ:
How Telcos Transform Customer Experiences with Conversational AI |
https://developer.nvidia.com/blog/ai-ran-goes-live-and-unlocks-a-new-ai-opportunity-for-telcos/ | AI-RAN Goes Live and Unlocks a New AI Opportunity for Telcos | AI is transforming industries, enterprises, and consumer experiences in new ways. Generative AI models are moving towards reasoning,
agentic AI
is enabling new outcome-oriented workflows and
physical AI
is enabling endpoints like cameras, robots, drones, and cars to make decisions and interact in real time.
The common glue between all these use cases is the need for pervasive, reliable, secure, and super-fast connectivity.
Telecommunication networks must prepare for this new kind of AI traffic, which can come directly through the fronthaul wireless access network or be backhauled from the public or private cloud as completely standalone AI inferencing traffic generated by enterprise applications.
Local wireless infrastructure offers an ideal place to process AI inferencing. This is where a new approach to telco networks, AI radio access network (
AI-RAN
), stands out.
Traditional CPU or ASIC-based RAN systems are designed only for RAN use and cannot process AI traffic today. AI-RAN enables a common GPU-based infrastructure that can run both wireless and AI workloads concurrently, turning networks from single-purpose to multi-purpose infrastructures and turning sites from cost-centers to revenue sources.
With a strategic investment in the right kind of technology, telcos can leap forward to become the AI grid that facilitates the creation, distribution, and consumption of AI across industries, consumers, and enterprises. This moment in time presents a massive opportunity for telcos to build a fabric for AI training (creation) and AI inferencing (distribution) by repurposing their central and distributed infrastructures.
SoftBank and NVIDIA fast-forward AI-RAN commercialization
SoftBank has turned the AI-RAN vision into reality, with its
successful outdoor field trial
in Fujisawa City, Kanagawa, Japan, where NVIDIA-accelerated hardware and
NVIDIA Aerial
software served as the technical foundation.
This achievement marks multiple steps forward for AI-RAN commercialization and provides real proof points addressing industry requirements on technology feasibility, performance, and monetization:
World's first outdoor 5G AI-RAN field trial running on an NVIDIA-accelerated computing platform. This is an end-to-end solution based on full-stack, virtual 5G RAN software integrated with a 5G core.
Carrier-grade virtual RAN performance achieved.
AI and RAN multi-tenancy and orchestration achieved.
Energy efficiency and economic benefits validated compared to existing benchmarks.
A new solution to unlock AI marketplace integrated on an AI-RAN infrastructure.
Real-world AI applications showcased, running on an AI-RAN network.
Above all, SoftBank aims to commercially release their own AI-RAN product for worldwide deployment in 2026.
To help other mobile network operators get started on their AI-RAN journey now, SoftBank is also planning to offer a reference kit comprising the hardware and software elements required to trial AI-RAN in a fast and easy way.
End-to-end AI-RAN solution and field results
SoftBank developed their AI-RAN solution by integrating hardware and software components from NVIDIA and ecosystem partners and hardening them to meet carrier-grade requirements. Together, the solution enables a full 5G vRAN stack that is 100% software-defined, running on NVIDIA GH200 (CPU+GPU), NVIDIA BlueField-3 (NIC/DPU), and Spectrum-X for fronthaul and backhaul networking. It integrates with 20 radio units and a 5G core network and connects 100 mobile UEs.
The core software stack includes the following components:
SoftBank-developed and optimized 5G RAN Layer 1 functions such as channel mapping, channel estimation, modulation, and forward-error-correction, using
NVIDIA Aerial CUDA-Accelerated-RAN
libraries
Fujitsu software for Layer 2 functions
Red Hat's OpenShift Container Platform (OCP) as the container virtualization layer, enabling different types of applications to run on the same underlying GPU computing infrastructure
A SoftBank-developed E2E AI and RAN orchestrator, to enable seamless provisioning of RAN and AI workloads based on demand and available capacity
The underlying hardware is the
NVIDIA GH200 Grace Hopper Superchip
, which can be used in various configurations from distributed to centralized RAN scenarios. This implementation uses multiple GH200 servers in a single rack, serving AI and RAN workloads concurrently, for an aggregated-RAN scenario. This is comparable to deploying multiple traditional RAN base stations.
In this pilot, each GH200 server was able to process 20 5G cells using 100-MHz bandwidth when used in RAN-only mode. For each cell, 1.3 Gbps of peak downlink performance was achieved in ideal conditions, and 816 Mbps was demonstrated with carrier-grade availability in the outdoor deployment.
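As a quick sanity check on these figures, the snippet below computes the aggregate downlink throughput one server handled in RAN-only mode; the aggregation is our own arithmetic based on the per-cell numbers above, not a figure reported separately by the trial.

```python
# Aggregate per-server RAN throughput in RAN-only mode (20 cells per GH200 server).
cells_per_server = 20
peak_dl_gbps_per_cell = 1.3       # ideal conditions
outdoor_dl_gbps_per_cell = 0.816  # carrier-grade availability, outdoor deployment

print(f"Peak aggregate:    {cells_per_server * peak_dl_gbps_per_cell:.1f} Gbps per server")
print(f"Outdoor aggregate: {cells_per_server * outdoor_dl_gbps_per_cell:.1f} Gbps per server")
# -> 26.0 Gbps and 16.3 Gbps per server, respectively.
```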
AI-RAN multi-tenancy achieved
One of the first principles of AI-RAN technology is to be able to run RAN and AI workloads concurrently and without compromising carrier-grade performance. This multi-tenancy can be either in time or space: dividing the resources based on time of day or based on percentage of compute. This also implies the need for an orchestrator that can provision, de-provision, or shift workloads seamlessly based on available capacity.
At the Fujisawa City trial, concurrent AI and RAN processing was successfully demonstrated over GH200 based on static allocation of resources between RAN and AI workloads (Figure 1).
Figure 1. AI and RAN concurrency and total GPU utilization
Each NVIDIA GH200 server comprises multiple MIG (Multi-Instance GPU) partitions, which enable a single GPU to be divided into multiple isolated GPU instances. Each instance has its own dedicated resources, such as memory, cache, and compute cores, and can operate independently.
The SoftBank orchestrator intelligently assigns whole GPUs or some MIGs within a GPU to run AI and some to run RAN workloads and switches them dynamically when needed. It is also possible to statically allocate a certain percentage of compute for RAN and AI, for example, 60% for RAN and 40% for AI instead of demand-based allocation.
The goal is to maximize capacity utilization. With AI-RAN, telcos can achieve almost 100% utilization compared to 33% capacity utilization for typical RAN-only networks. This is an increase of up to 3x while still catering to peak RAN loads, thanks to dynamic orchestration and prioritization policies.
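The utilization argument can be illustrated with a short simulation: provision the GPU for peak RAN load and hand each hour's headroom to AI workloads. The 24-hour load profile below is a hypothetical illustration chosen to average close to the 33% figure cited above, not measured data from the trial.

```python
# Hypothetical hourly RAN load as a fraction of peak capacity (24 values).
ran_load = [0.1] * 8 + [0.25] * 8 + [0.5] * 5 + [1.0] * 3

ran_only_utilization = sum(ran_load) / len(ran_load)          # capacity used by RAN alone
ai_share = [1.0 - load for load in ran_load]                  # headroom handed to AI each hour
ai_ran_utilization = sum(r + a for r, a in zip(ran_load, ai_share)) / len(ran_load)

print(f"RAN-only utilization: {ran_only_utilization:.0%}")    # ~35% with this profile
print(f"AI-RAN utilization:   {ai_ran_utilization:.0%}")      # 100%, AI absorbs the idle capacity
print(f"Effective capacity gain: {ai_ran_utilization / ran_only_utilization:.1f}x")  # ~2.9x
```

With this assumed profile the gain works out to roughly 3x, consistent with the figure quoted above; a real orchestrator would additionally reshuffle AI jobs dynamically as RAN load changes.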
Enabling an AI-RAN marketplace
With a new capacity for AI computing now available on distributed AI-RAN infrastructure, the question arises of how to bring AI demand to this AI computing supply.
To solve this, SoftBank used a serverless API powered by NVIDIA AI Enterprise to deploy and manage AI workloads on AI-RAN, with security, scale, and reliability. The NVIDIA AI Enterprise serverless API is hosted on the AI-RAN infrastructure and integrated with the SoftBank E2E AI-RAN orchestrator. It connects to any public or private cloud running the same API, to dispatch external AI inferencing jobs to the AI-RAN server when compute is available (Figure 2).
Figure 2. AI marketplace solution integrated with SoftBank AI-RAN
This solution enables an AI marketplace, helping SoftBank deliver localized, low-latency, secured inferencing services. It also demonstrated the importance of AI-RAN in helping telcos become the AI distribution grid, particularly for external AI inferencing jobs, and opened a new revenue opportunity.
AI-RAN applications showcased
In this outdoor trial, new edge AI applications developed by SoftBank were demonstrated over the live AI-RAN network:
Remote support of autonomous vehicles over 5G
Factory multi-modal AI applications
Robotics applications
Remote support of autonomous vehicles over 5G
The key requirements for real-world adoption of autonomous driving are vehicle safety and reduced operational costs.
At the Fujisawa City trial, SoftBank demonstrated an autonomous vehicle, relaying its front camera video using 5G to an AI-based remote support service hosted on the AI-RAN server. Multi-modal AI models analyzed the video stream, did risk assessment, and sent recommended actions to autonomous vehicles using text over 5G.
This is an example of explainable AI as well, as all the actions of the autonomous vehicle could be monitored and explained through summarized text and logging for remote support.
Factory multi-modal AI applications
In this use case, multi-modal inputs including video, audio, and sensor data, are streamed using 5G into the AI-RAN server. Multiple LLMs, VLMs, retrieval-augmented generation (RAG) pipelines, and NVIDIA NIM microservices hosted on the AI-RAN server are used to coalesce these inputs and make the knowledge accessible through a chat interface to users using 5G.
This fits well for factory monitoring, construction site inspections, and similar complex indoor and outdoor environments. The use case demonstrates how edge AI-RAN enables local data sovereignty by keeping data access and analysis local, secure, and private, which is a mandatory requirement of most enterprises.
Robotics applications
SoftBank demonstrated the benefit of edge AI inferencing for a robot connected over 5G. A robodog was trained to follow a human based on voice and motion.
The demo compared the response time of the robot when the AI inferencing was hosted on the local AI-RAN server with when it was hosted on the central cloud. The difference was obvious: the edge-based inference robodog followed the human's movements instantly, while the cloud-based inference robot struggled to keep up.
Accelerating the AI-RAN business case with the Aerial RAN Computer-1
While the AI-RAN vision has been embraced by the industry, the energy efficiency and economics of GPU-enabled infrastructure remain key requirements, particularly how they compare to traditional CPU- and ASIC-based RAN systems.
With this live field trial of AI-RAN, SoftBank and NVIDIA have not only proven that GPU-enabled RAN systems are feasible and high-performant, but they are also significantly better in energy efficiency and economic profitability.
NVIDIA recently announced the
Aerial RAN Computer-1
based on the next-generation NVIDIA Grace Blackwell superchips as the recommended AI-RAN deployment platform. The goal is to migrate SoftBank 5G vRAN software from NVIDIA GH200 to NVIDIA Aerial RAN Computer-1 based on GB200-NVL2, which is an easier shift given the code is already CUDA-ready.
With
GB200-NVL2
, the available compute for AI-RAN will increase by a factor of 2x. The AI processing capabilities will improve by 5x for Llama-3 inferencing, 18x for data processing, and 9x for vector database search compared to prior H100 GPU systems.
For this evaluation, we compared the target deployment platform, Aerial RAN Computer-1 based on GB200 NVL2, with the latest generation of x86 and the best-in-class custom RAN product benchmarks and validated the following findings:
Accelerated AI-RAN offers best-in-class AI performance
Accelerated AI-RAN is sustainable RAN
Accelerated AI-RAN is highly profitable
Accelerated AI-RAN offers best-in-class AI performance
In 100% AI-only mode, each GB200-NVL2 server generates 25,000 tokens/second, which translates to $20/hr of available monetizable compute per server, or roughly $15K/month per server.
Keeping in mind that the average revenue per user (ARPU) of wireless services today ranges between $5 and $50/month depending on the country, AI-RAN opens a new multi-billion-dollar AI revenue opportunity that is orders of magnitude higher than revenues from RAN-only systems.
The token AI workload used is Llama-3-70B FP4, showcasing that AI-RAN is already capable of running the worldâs most advanced LLM models.
Accelerated AI-RAN is sustainable RAN
In 100% RAN-only mode, GB200-NVL2 server power performance in Watt/Gbps shows the following benefits:
40% less power consumption than the best-in-class custom RAN-only systems today
60% less power consumption than x86-based vRAN
For an even comparison, this assumes the same number of 100-MHz 4T4R cells and 100% RAN-only workload across all platforms.
Figure 3. RAN power consumption and performance (watt/Gbps)
Accelerated AI-RAN is highly profitable
For this evaluation, we used the scenario of covering one district in Tokyo with 600 cells as the common baseline for RAN deployment for each of the three platforms being compared. We then looked at multiple scenarios for AI and RAN workload distribution, ranging from RAN-only to RAN-heavy or AI-heavy.
In the AI-heavy scenario (Figure 4), we used a one-third RAN and two-thirds AI workload distribution:
For every dollar of CapEx investment in accelerated AI-RAN infrastructure based on NVIDIA GB200 NVL2, telcos can generate 5x the revenue over 5 years.
From an ROI perspective, the overall investment delivers a 219% return, considering all CapEx and OpEx costs. This is of course specific to SoftBank, as it uses local, country-specific cost assumptions.
Figure 4. AI-RAN economics for covering one Tokyo district with 600 cells
                              33% AI and 67% RAN    67% AI and 33% RAN
$ of revenue per $ of CapEx   2x                    5x
ROI %                         33%                   219%
Table 1. AI-heavy scenario compared to RAN-heavy results
In the RAN-heavy scenario, we used two-thirds RAN and one-third AI workload distribution and found that revenue divided by CapEx for NVIDIA-accelerated AI-RAN is 2x, with a 33% ROI over 5 years, using SoftBank local cost assumptions.
In the RAN-only scenario, NVIDIA Aerial RAN Computer-1 is more cost-efficient than custom RAN-only solutions, which underscores the benefits of using accelerated computing for radio signal processing.
From these scenarios, it is evident that AI-RAN is highly profitable as compared to RAN-only solutions, in both AI-heavy and RAN-heavy modes. In essence, AI-RAN transforms traditional RAN from a cost center to a profit center.
The profitability per server improves with higher AI use. Even in RAN-only, AI-RAN infrastructure is more cost-efficient than custom RAN-only options.
Key assumptions used for the revenue and TCO calculations include the following:
The respective numbers of platforms, servers, and racks for each platform are calculated using a common baseline of deploying 600 cells on the same frequency, 4T4R.
The total cost of ownership (TCO) is calculated over 5 years and includes the cost of hardware, software, and vRAN and AI operating costs.
For the new AI revenue calculation, we used $20/hr/server based on GB200 NVL2 AI performance benchmarks.
OpEx costs are based on local Japan power costs and aren't extensible worldwide.
ROI % = (new AI revenues - TCO) / TCO (a worked example of this arithmetic follows this list)
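The sketch below applies these assumptions: it derives the monthly per-server revenue from the $20/hr figure and evaluates the ROI formula. The 5-year TCO value used here is a hypothetical placeholder, since the article does not publish SoftBank's actual costs.

```python
# Worked example of the revenue and ROI arithmetic above. The $20/hr figure comes
# from the GB200 NVL2 benchmark cited earlier; the TCO below is hypothetical.
HOURS_PER_MONTH = 24 * 30

def monthly_ai_revenue_per_server(rate_usd_per_hr: float = 20.0) -> float:
    """$20/hr of monetizable compute works out to roughly $15K per server per month."""
    return rate_usd_per_hr * HOURS_PER_MONTH

def roi_percent(new_ai_revenue: float, tco: float) -> float:
    """ROI % = (new AI revenues - TCO) / TCO, expressed as a percentage."""
    return 100.0 * (new_ai_revenue - tco) / tco

print(monthly_ai_revenue_per_server())                 # 14400.0, i.e. ~$15K/month
# With a hypothetical 5-year TCO of $1.0M per server-equivalent, the reported 219% ROI
# would correspond to roughly 3.19x that amount in new AI revenue:
print(roi_percent(new_ai_revenue=3.19e6, tco=1.0e6))   # ~219.0
```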
This validation of AI revenue upside, energy efficiency, and profitability of AI-RAN leaves no doubts about the feasibility, performance, and economic benefits of the technology.
Going forward, exponential gains with each generation of NVIDIA superchips, such as Vera Rubin, will multiply these benefits by orders of magnitude further, enabling the much-awaited business transformation of telco networks.
Looking ahead
SoftBank and NVIDIA are
continuing to collaborate
toward the commercialization of AI-RAN and bringing new applications to life. The next phase of the engagements will entail work on AI-for-RAN to improve spectral efficiency and on NVIDIA Aerial Omniverse digital twins to simulate accurate physical networks in the digital world for fine-tuning and testing.
NVIDIA AI Aerial lays the foundation for operators and ecosystem partners globally to use the power of accelerated computing and software-defined RAN + AI to transform 5G and 6G networks. You can now use NVIDIA Aerial RAN Computer-1 and AI Aerial software libraries to develop your own implementation of AI-RAN.
NVIDIA AI Enterprise is also helping create new AI applications for telcos, hostable on AI-RAN, as is evident from this trial where many NVIDIA software toolkits have been used. This includes NIM microservices for generative AI, RAG, VLMs, NVIDIA Isaac for robotics training, NVIDIA NeMo, RAPIDS, NVIDIA Triton for inferencing, and a serverless API for AI brokering.
The telecom industry is at the forefront of a massive opportunity to become an AI service provider. AI-RAN can kickstart this new renaissance for telcos worldwide, using accelerated computing as the new foundation for wireless networks.
This announcement marks a breakthrough moment for AI-RAN technology, proving its feasibility, carrier-grade performance, superior energy efficiency, and economic value. Every dollar of CapEx invested in NVIDIA-accelerated AI-RAN infrastructure generates 5x revenues, while being 6G-ready.
The journey to AI monetization can start now. | https://developer.nvidia.com/ja-jp/blog/ai-ran-goes-live-and-unlocks-a-new-ai-opportunity-for-telcos/ | AI-RAN ãéä¿¡äºæ¥è
åãã«æ°ãã AI ã®ããžãã¹ ãã£ã³ã¹ããããã | Reading Time:
4
minutes
AI ã¯ãæ¥çãäŒæ¥ãæ¶è²»è
ã®äœéšãæ°ããæ¹æ³ã§å€é©ããŠããŸãã çæ AI ã¢ãã«ã¯æšè«ã«ç§»è¡ãã
ãšãŒãžã§ã³ãå AI
ã¯æ°ããçµæéèŠã®ã¯ãŒã¯ãããŒãå¯èœã«ã
ãã£ãžã«ã« AI
ã«ãããã«ã¡ã©ãããããããããŒã³ãèªåè»ãªã©ã®ãšã³ããã€ã³ãããªã¢ã«ã¿ã€ã ã§ææ決å®ãè¡ãã察話ã§ããããã«ãªããŸãã
ãããã®ãŠãŒã¹ ã±ãŒã¹ã«å
±éããã®ã¯ãæ®åããä¿¡é Œæ§ãé«ããå®å
šã§ãè¶
é«éãªæ¥ç¶ãå¿
èŠã§ããããšã§ãã
éä¿¡ãããã¯ãŒã¯ã¯ãããã³ãããŒã«ç¡ç·ã¢ã¯ã»ã¹ ãããã¯ãŒã¯ãä»ããŠçŽæ¥éä¿¡ããããããšã³ã¿ãŒãã©ã€ãº ã¢ããªã±ãŒã·ã§ã³ã«ãã£ãŠçæããããããªã㯠ã¯ã©ãŠããŸãã¯ãã©ã€ããŒã ã¯ã©ãŠãããã®ããã¯ããŒã«ããã®å®å
šã«ã¹ã¿ã³ãã¢ãã³ã® AI æšè«ãã©ãã£ãã¯ã®ãããªæ°ããçš®é¡ã® AI ãã©ãã£ãã¯ã«åããå¿
èŠããããŸãã
ããŒã«ã« ã¯ã€ã€ã¬ã¹ ã€ã³ãã©ã¹ãã©ã¯ãã£ã¯ãAI æšè«ãåŠçããã®ã«æé©ãªå ŽæãæäŸããŸãã ããã¯ãéä¿¡äŒç€Ÿ ãããã¯ãŒã¯ã«å¯Ÿããæ°ããã¢ãããŒãã§ãã AI ç¡ç·ã¢ã¯ã»ã¹ ãããã¯ãŒã¯ (
AI-RAN
) ã®ç¹åŸŽã§ãã
åŸæ¥ã® CPU ãŸã㯠ASIC ããŒã¹ã® RAN ã·ã¹ãã ã¯ãRAN ã®ã¿ã®ããã«èšèšãããŠãããçŸåšã§ã¯ AI ãã©ãã£ãã¯ãåŠçã§ããŸããã AI-RAN ã¯ãã¯ã€ã€ã¬ã¹ãš AI ã®ã¯ãŒã¯ããŒããåæã«å®è¡ã§ããå
±éã® GPU ããŒã¹ã®ã€ã³ãã©ã¹ãã©ã¯ãã£ãæäŸããŸããããã«ããããããã¯ãŒã¯ãåäžç®çããå€ç®çã€ã³ãã©ã¹ãã©ã¯ãã£ã«å€ããã³ã¹ã ã»ã³ã¿ãŒãããããã£ãã ã»ã³ã¿ãŒã«å€ããããŸãã
é©åãªçš®é¡ã®ãã¯ãããžã«æŠç¥çæè³ãè¡ãããšã§ãéä¿¡äŒç€Ÿã¯æ¥çãæ¶è²»è
ãäŒæ¥ã«ããã£ãŠ AI ã®äœæãé
ä¿¡ã䜿çšã容æã«ããã AI ã°ãªãããžãšé£èºããããšãã§ããŸããä»ãéä¿¡äŒç€Ÿã«ãšã£ãŠãäžå€®éäžçã§åæ£ãããã€ã³ãã©ã¹ãã©ã¯ãã£ãåå©çšããããšã§ãAI ãã¬ãŒãã³ã° (äœæ) ãš AI æšè« (é
ä¿¡) ã®ããã®ãã¡ããªãã¯ãæ§ç¯ãã倧ããªæ©äŒãšãªããŸãã
SoftBank ãš NVIDIA ã AI-RANã®åçšåãé²ãã
SoftBank ã¯ãNVIDIA ã¢ã¯ã»ã©ã¬ãŒã·ã§ã³ ããŒããŠã§ã¢ãš NVIDIA Aerial ãœãããŠã§ã¢ãæè¡åºç€ãšããŠæŽ»çšãã
ç¥å¥å·çè€æ²¢åžã§å±å€
ãã£ãŒã«ã ãã©ã€ã¢ã«ãæåããã
AI-RAN ããžã§ã³ã
çŸå®ã®ãã®ã«ããŸããã
ãã®éæã¯ãAI-RAN ã®åçšåã«åãã倧ããªåé²ã§ããããã¯ãããžã®å®çŸæ§ãããã©ãŒãã³ã¹ãåçåã«é¢ããæ¥çã®èŠä»¶ã«å¯Ÿå¿ããå®èšŒãã€ã³ããæäŸããŸãã
NVIDIA ã®ã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã° ãã©ãããã©ãŒã ã§å®è¡ãããäžçåã®å±å€ 5G AI-RAN ãã£ãŒã«ã ãã©ã€ã¢ã«ã ããã¯ã5G ã³ã¢ãšçµ±åããããã«ã¹ã¿ãã¯ã®ä»®æ³ 5G RAN ãœãããŠã§ã¢ã«åºã¥ããšã³ãããŒãšã³ãã®ãœãªã¥ãŒã·ã§ã³ã§ãã
ãã£ãªã¢ ã°ã¬ãŒãã®ä»®æ³ RAN ã®ããã©ãŒãã³ã¹ãå®çŸã
AI ãš RAN ã®ãã«ãããã³ããšãªãŒã±ã¹ãã¬ãŒã·ã§ã³ãå®çŸã
ãšãã«ã®ãŒå¹çãšçµæžçãªã¡ãªããããæ¢åã®ãã³ãããŒã¯ãšæ¯èŒããŠæ€èšŒãããŸããã
AI-RAN ã€ã³ãã©ã¹ãã©ã¯ãã£ã«çµ±åããã AI ããŒã±ãããã¬ã€ã¹ãæäŸããæ°ãããœãªã¥ãŒã·ã§ã³ã
AI-RAN ãããã¯ãŒã¯ã§å®è¡ãããå®éã® AI ã¢ããªã±ãŒã·ã§ã³ã玹ä»ãããŸãã
äœããããSoftBank ã¯ãäžçäžã«å±éããããã«ãç¬èªã® AI-RAN 補åãåæ¥çã«ãªãªãŒã¹ããããšãç®æããŠããŸãã
ä»ã®éä¿¡äºæ¥è
ãä»ãã AI-RAN ã®å°å
¥ãæ¯æŽããããã«ãSoftBank ã¯ãAI-RAN ãè©Šçšããããã«å¿
èŠãªããŒããŠã§ã¢ãšãœãããŠã§ã¢ã®èŠçŽ ã§æ§æããããªãã¡ã¬ã³ã¹ ãããããç°¡åãã€è¿
éã«æäŸããäºå®ã§ãã
ãšã³ãããŒãšã³ãã® AI-RAN ãœãªã¥ãŒã·ã§ã³ãšãã£ãŒã«ã ãã©ã€ã¢ã«ã®çµæ
SoftBank ã¯ãNVIDIA ãšãšã³ã·ã¹ãã ããŒãããŒã®ããŒããŠã§ã¢ãšãœãããŠã§ã¢ ã³ã³ããŒãã³ããçµ±åãããã£ãªã¢ã°ã¬ãŒãã®èŠä»¶ãæºããããã«åŒ·åããããšã§ãAI-RAN ãœãªã¥ãŒã·ã§ã³ãéçºããŸããã ãã®ãœãªã¥ãŒã·ã§ã³ã¯ãNVIDIA GH200 (CPU+GPU)ãNVIDIA Bluefield-3 (NIC/DPU)ãããã³ãããŒã«ããã³ããã¯ããŒã« ãããã¯ãŒãã³ã°çšã® Spectrum-X ã§å®è¡ããã 100% ãœãããŠã§ã¢ ããã¡ã€ã³ãã®å®å
šãª 5G vRAN ã¹ã¿ãã¯ãå®çŸããŸãã 20 å°ã®ç¡ç·ãŠããããš 5G ã³ã¢ ãããã¯ãŒã¯ãçµ±åãã100 å°ã®ã¢ãã€ã« UE ãæ¥ç¶ããŸãã
ã³ã¢ ãœãããŠã§ã¢ ã¹ã¿ãã¯ã«ã¯ã以äžã®ã³ã³ããŒãã³ããå«ãŸããŠããŸãã
SoftBank ã
NVIDIA Aerial CUDA-Accelerated-RAN
ã©ã€ãã©ãªã䜿çšããŠã 5G RAN ã¬ã€ã€ãŒ 1 ã®ãã£ãã« ãããã³ã°ããã£ãã«æšå®ãå€èª¿ãåæ¹ãšã©ãŒèšæ£ãªã©ã®æ©èœãéçºããæé©åããŸããã
ã¬ã€ã€ãŒ 2 æ©èœåã Fujitsu ãœãããŠã§ã¢
ã³ã³ãããŒã®ä»®æ³åã¬ã€ã€ãŒãšããŠã® Red Hat ã® OpenShift Container Platform (OCP) ã«ãããåãåºç€ãšãªã GPU ã³ã³ãã¥ãŒãã£ã³ã° ã€ã³ãã©ã¹ãã©ã¯ãã£ã§ç°ãªãã¿ã€ãã®ã¢ããªã±ãŒã·ã§ã³ãå®è¡ãããŸã
SoftBank ãéçºãã E2EãAI ãš RAN ãªãŒã±ã¹ãã¬ãŒã¿ãŒãéèŠãšäœ¿çšå¯èœãªå®¹éã«åºã¥ã㊠RAN ãš AI ã®ã¯ãŒã¯ããŒãã®ã·ãŒã ã¬ã¹ãªããããžã§ãã³ã°ãå¯èœã«ããŸãã
åºç€ãšãªãããŒããŠã§ã¢ã¯ã
NVIDIA GH200 Grace Hopper Superchip
ã§ãããåæ£åããéäžå RAN ã·ããªãªãŸã§ãããŸããŸãªæ§æã§äœ¿çšã§ããŸãã ãã®å®è£
ã§ã¯ãéçŽããã RAN ã®ã·ããªãªã®ããã«ã1 ã€ã®ã©ãã¯ã§è€æ°ã® GH200 ãµãŒããŒã䜿çšããAI ãš RAN ã®ã¯ãŒã¯ããŒããåæã«åŠçããŸãã ããã¯ãåŸæ¥ã® RAN åºå°å±ãè€æ°å±éããã®ã«çžåœããŸãã
ãã®ãã€ãããã§ã¯ãRAN ã®ã¿ã®ã¢ãŒãã§äœ¿çšãããå Žåãå GH200 ãµãŒããŒã¯ã100 MHz 垯åå¹
㧠20 åã® 5G ã»ã«ãåŠçããããšãã§ããŸããã åã»ã«ã§ã¯ãçæ³çãªæ¡ä»¶äžã§ 1.3 Gbps ã®ããŒã¯ ããŠã³ãªã³ã¯æ§èœãéæãããå±å€å±éã§ã¯ãã£ãªã¢ã°ã¬ãŒãã®å¯çšæ§ã§ 816 Mbps ãå®èšŒãããŸããã
AI-RAN ã®ãã«ãããã³ããå®çŸ
AI-RAN ãã¯ãããžã®ç¬¬äžã®ååã® 1 ã€ã¯ããã£ãªã¢ã°ã¬ãŒãã®ããã©ãŒãã³ã¹ãæãªãããšãªããRAN ãš AI ã®ã¯ãŒã¯ããŒããåæã«å®è¡ã§ããããšã§ãã ãã®ãã«ãããã³ãã¯ãæéãŸãã¯ç©ºéã®ããããã§å®è¡ã§ããæé垯ãŸãã¯ã³ã³ãã¥ãŒãã£ã³ã°ã®å²åã«åºã¥ããŠãªãœãŒã¹ãåå²ããŸãã ãŸããããã¯ã䜿çšå¯èœãªå®¹éã«åºã¥ããŠãã¯ãŒã¯ããŒããã·ãŒã ã¬ã¹ã«ããããžã§ãã³ã°ãããããžã§ãã³ã°ã®è§£é€ãã·ããã§ãããªãŒã±ã¹ãã¬ãŒã¿ãŒã®å¿
èŠæ§ãæå³ããŸãã
è€æ²¢åžã®å®èšŒå®éšã§ã¯ãRAN ãš AI ã¯ãŒã¯ããŒãéã®ãªãœãŒã¹ã®éçå²ãåœãŠã«åºã¥ããŠãGH200 äžã§ã® AI ãš RAN ã®åæåŠçãå®èšŒãããŸããã (å³ 1)ã
å³ 1. AI ãš RAN ã®åæåŠçãš GPU ã®åèšäœ¿çšç
å NVIDIA GH200 ãµãŒããŒã¯ãè€æ°ã® MIG (ãã«ãã€ã³ã¹ã¿ã³ã¹ GPU) ã§æ§æããã1 ã€ã® GPU ãè€æ°ã®ç¬ç«ãã GPU ã€ã³ã¹ã¿ã³ã¹ã«åå²ã§ããŸãã åã€ã³ã¹ã¿ã³ã¹ã«ã¯ãã¡ã¢ãªããã£ãã·ã¥ãã³ã³ãã¥ãŒãã£ã³ã° ã³ã¢ãªã©ãç¬èªã®å°çšãªãœãŒã¹ããããç¬ç«ããŠåäœã§ããŸãã
SoftBank ãªãŒã±ã¹ãã¬ãŒã¿ãŒã¯ãAI ãå®è¡ããããã« GPU å
šäœãŸã㯠GPU ã®äžéšãã€ã³ããªãžã§ã³ãã«å²ãåœãŠãRAN ã®ã¯ãŒã¯ããŒããå®è¡ããå¿
èŠã«å¿ããŠåçã«åãæ¿ããŸãã éèŠã«åºã¥ãå²ãåœãŠã§ã¯ãªããRAN ãš AI ã«äžå®ã®å²ãåœãŠããRAN ã« 60% ãš AI ã« 40% ã®ã³ã³ãã¥ãŒãã£ã³ã°ãéçã«å²ãåœãŠãããšãã§ããŸãã
ç®æšã¯ã容é䜿çšçãæ倧åããããšã§ãã AI-RAN ã䜿çšãããšãéä¿¡äŒç€Ÿã¯ãéåžžã® RAN ã®ã¿ã®ãããã¯ãŒã¯ã§ã® 33% ã®å®¹é䜿çšçãšæ¯èŒããŠãã»ãŒ 100% ã®äœ¿çšçãå®çŸã§ããŸãã ããã¯ãåçãªãªãŒã±ã¹ãã¬ãŒã·ã§ã³ãšåªå
é äœä»ãããªã·ãŒã®ãããã§ãããŒã¯ã® RAN ã®è² è·ã«å¯Ÿå¿ããªãããæ倧 3 åã®å¢å ã§ãã
AI-RAN ããŒã±ãããã¬ã€ã¹ã®å®çŸ
åæ£å AI-RAN ã€ã³ãã©ã¹ãã©ã¯ãã£ã§ AI ã³ã³ãã¥ãŒãã£ã³ã°ã®æ°ããæ©èœãå©çšã§ããããã«ãªã£ãããããã® AI ã³ã³ãã¥ãŒãã£ã³ã°ã®äŸçµŠã« AI ã®éèŠãã©ã®ããã«åã蟌ãããšããçåãçããŸãã
ãã®åé¡ã解決ããããã«ãSoftBank ã¯ãNVIDIA AI ãšã³ã¿ãŒãã©ã€ãº ã掻çšãããµãŒããŒã¬ã¹ API ã䜿çšããŠãã»ãã¥ãªãã£ãæ¡åŒµæ§ãä¿¡é Œæ§ãåã㊠AI-RAN 㧠AI ã¯ãŒã¯ããŒããå±éãã管çããŸããã NVIDIA AI ãšã³ã¿ãŒãã©ã€ãºã®ãµãŒããŒã¬ã¹ API ã¯ãAI-RAN ã€ã³ãã©ã¹ãã©ã¯ãã£ã§ãã¹ããããSoftBank E2E AI-RAN ãªãŒã±ã¹ãã¬ãŒã¿ãŒãšçµ±åãããŠããŸãã åã API ãå®è¡ãããããªã㯠ã¯ã©ãŠããŸãã¯ãã©ã€ããŒã ã¯ã©ãŠãã«æ¥ç¶ããã³ã³ãã¥ãŒãã£ã³ã°ãå©çšå¯èœã«ãªã£ããšãã«ãå€éšã® AI æšè«ãžã§ãã AI-RAN ãµãŒããŒã«å²ãåœãŠãŸã (å³ 2)ã
å³ 2. SoftBank AI-RAN ãšçµ±åããã AI ããŒã±ãããã¬ã€ã¹ ãœãªã¥ãŒã·ã§ã³
ãã®ãœãªã¥ãŒã·ã§ã³ã«ãã AI ããŒã±ãããã¬ã€ã¹ãå®çŸãããœãããã³ã¯ã¯ããŒã«ã©ã€ãºãããäœé
延ã®å®å
šãªæšè«ãµãŒãã¹ãæäŸã§ããããã«ãªããŸãã ãŸããç¹ã«å€éšã® AI æšè«ã®ä»äºã®ããã«ãéä¿¡äŒç€Ÿã AI é
ä¿¡ã°ãªããã«ãªãã®ãæ¯æŽããäžã§ AI-RAN ã®éèŠæ§ãå®èšŒããæ°ããåçã®æ©äŒãäœããŸãã
AI-RAN ã¢ããªã±ãŒã·ã§ã³ã玹ä»
ãã®å±å€ã®è©Šçšã§ã¯ãSoftBank ãéçºããæ°ãããšããž AI ã¢ããªã±ãŒã·ã§ã³ãã©ã€ã AI-RAN ãããã¯ãŒã¯ã§ãã¢ã³ã¹ãã¬ãŒã·ã§ã³ãããŸããã
5G ãä»ããèªåé転è»ã®ãªã¢ãŒã ãµããŒã
å·¥å Žåºè·æã®ãã«ãã¢ãŒãã«
ãããã£ã¯ã¹ ã¢ããªã±ãŒã·ã§ã³
5G ãä»ããèªåé転è»ã®ãªã¢ãŒã ãµããŒã
èªåé転ã®ç€ŸäŒçå®è£
ã®éèŠãªèŠä»¶ã¯ãè»ã®å®å
šæ§ãšéçšã³ã¹ãã®åæžã§ãã
è€æ²¢åžã®å®èšŒå®éšã§ã¯ããœãããã³ã¯ãèªåé転è»ãå®æŒããåæ¹ã«ã¡ã©ã®æ åã 5G 㧠AI-RAN ãµãŒããŒã«ãã¹ãããã AI ããŒã¹ã®é éãµããŒã ãµãŒãã¹ã«äžç¶ããã ãã«ãã¢ãŒãã« AI ã¢ãã«ã¯ããã㪠ã¹ããªãŒã ãåæãããªã¹ã¯è©äŸ¡ãè¡ãã5G ãä»ããããã¹ãã䜿çšããŠèªåé転è»ã«æšå¥šã®ã¢ã¯ã·ã§ã³ãéä¿¡ããŸããã
ããã¯ã説æå¯èœãª AI ã®äŸã§ããããŸãããªã¢ãŒã ãµããŒãã®ããã®èŠçŽãããããã¹ããšãã°ãéããŠãèªåé転è»ã®ãã¹ãŠã®åäœãç£èŠãã説æããããšãã§ããŸããã
å·¥å Žåºè·æã®ãã«ãã¢ãŒãã«
ãã®ãŠãŒã¹ ã±ãŒã¹ã§ã¯ããããªããªãŒãã£ãªãã»ã³ãµãŒ ããŒã¿ãå«ããã«ãã¢ãŒãã«å
¥åãã5G ã䜿çšã㊠AI-RAN ãµãŒããŒã«ã¹ããªãŒãã³ã°ãããŸãã AI-RAN ãµãŒããŒã§ãã¹ãããããè€æ°ã® LLMãVLMãæ€çŽ¢æ¡åŒµçæ (RAG) ãã€ãã©ã€ã³ãNVIDIA NIM ãã€ã¯ããµãŒãã¹ã¯ããããã®å
¥åãçµ±åãã5G ã䜿çšãããŠãŒã¶ãŒããã£ãã ã€ã³ã¿ãŒãã§ã€ã¹ãä»ããŠæ
å ±ã«ã¢ã¯ã»ã¹ã§ããããã«ããããã«äœ¿çšãããŸãã
ããã¯ãå·¥å Žã®ç£èŠã建èšçŸå Žã®æ€æ»ãåæ§ã®è€éãªå±å
ããã³å±å€ã®ç°å¢ã«æé©ã§ãã ãã®ãŠãŒã¹ ã±ãŒã¹ã§ã¯ããšããž AI-RAN ãããŒã¿ ã¢ã¯ã»ã¹ãšåæãããŒã«ã«ãå®å
šããã©ã€ããŒãã«ä¿ã€ããšã§ãããŒã«ã« ããŒã¿ã®äž»æš©ãå®çŸããæ¹æ³ã瀺ããŠããŸããããã¯ãã»ãšãã©ã®äŒæ¥ã«ãšã£ãŠå¿
é ã®èŠä»¶ã§ãã
ãããã£ã¯ã¹ ã¢ããªã±ãŒã·ã§ã³
SoftBank ã¯ã5G ãä»ããŠæ¥ç¶ãããããããã®ãšããž AI æšè«ã®å©ç¹ãå®èšŒããŸããã ããããã°ã¯ã声ãšåãã«åºã¥ããŠäººéãè¿œãããã«ãã¬ãŒãã³ã°ãããŸããã
ãã®ãã¢ã§ã¯ãAI æšè«ãããŒã«ã« AI-RAN ãµãŒããŒã§ãã¹ãããããšãã®ããããã®å¿çæéãšãã»ã³ãã©ã« ã¯ã©ãŠãã§ãã¹ãããããšãã®å¿çæéãæ¯èŒããŸããã ãã®éãã¯æçœã§ããã ãšããž ããŒã¹ã®æšè« ããããã°ã¯ã人éã®åããå³åº§ã«è¿œè·¡ããŸããããã¯ã©ãŠã ããŒã¹ã®æšè«ããããã¯ãè¿œãã€ãã®ã«èŠåŽããŸããã
Aerial RAN Computer-1 㧠AI-RAN ã®ããžãã¹ ã±ãŒã¹ãé«éå
AI-RAN ããžã§ã³ã¯æ¥çã§åãå
¥ããããŠããŸãããGPU 察å¿ã€ã³ãã©ã¹ãã©ã¯ãã£ã®ãšãã«ã®ãŒå¹çãšçµæžæ§ãç¹ã«åŸæ¥ã® CPU ããã³ ASIC ããŒã¹ã® RAN ã·ã¹ãã ãšã®æ¯èŒã¯äŸç¶ãšããŠéèŠãªèŠä»¶ã§ãã
AI-RAN ã®ãã®ã©ã€ã ãã£ãŒã«ã ãã©ã€ã¢ã«ã«ãããSoftBank ãš NVIDIA ã¯ãGPU 察å¿ã® RAN ã·ã¹ãã ãå®çŸå¯èœã§ãé«æ§èœã§ããããšãå®èšŒããã ãã§ãªãããšãã«ã®ãŒå¹çãšçµæžçãªåçæ§ã倧å¹
ã«åäžããŠããããšãå®èšŒããŸããã
NVIDIA ã¯æè¿ã次äžä»£ NVIDIA Grace Blackwell Superchip ãããŒã¹ã«ãã
Aerial RAN Computer-1
ãæšå¥š AI-RAN å±éãã©ãããã©ãŒã ãšããŠçºè¡šããŸããã ç®çã¯ãGB200-NVL2 ãããŒã¹ãšãã SoftBank 5G vRAN ãœãããŠã§ã¢ã NVIDIA GH200 ãã NVIDIA Aerial RAN Computer-1 ã«ç§»è¡ããããšã§ããããã¯ãã³ãŒãããã§ã« CUDA ã«å¯Ÿå¿ããŠããããã移è¡ã容æã§ãã
ãŸãã
GB200-NVL2
ã䜿çšãããšãAI-RAN ã§å©çšå¯èœãªã³ã³ãã¥ãŒãã£ã³ã°èœåã 2 åã«ãªããŸãã AI åŠçæ©èœã¯ã以åã® H100 GPU ã·ã¹ãã ãšæ¯èŒããŠãLlama-3 æšè«ã 5 åãããŒã¿åŠçã 18 åããã¯ãã« ããŒã¿ããŒã¹æ€çŽ¢ã 9 åã«æ¹åãããŸãã
ãã®è©äŸ¡ã®ããã«ãã¿ãŒã²ããã®å±é ãã©ãããã©ãŒã ãGB200 NVL2 ãããŒã¹ãšãã Aerial RAN Computer-1ãææ°äžä»£ã® x86 ãšã¯ã©ã¹æé«ã®ã«ã¹ã¿ã RAN 補åãã³ãããŒã¯ãæ¯èŒãã以äžã®çµæãæ€èšŒããŸããã
é«éåããã AI-RAN ã¯ãã¯ã©ã¹æé«ã® AI ããã©ãŒãã³ã¹ãæäŸããŸã
é«éåããã AI-RAN ã¯æç¶å¯èœãª RAN
é«éåããã AI-RAN ã¯ãéåžžã«åçæ§ãé«ã
é«éåããã AI-RAN ã¯ãã¯ã©ã¹æé«ã® AI ããã©ãŒãã³ã¹ãæäŸããŸã
100% AI ã®ã¿ã®ã¢ãŒãã§ã¯ãå GB200-NVL2 ãµãŒããŒã¯ãæ¯ç§ 25,000 ããŒã¯ã³ãçæããŸããããã¯ããµãŒã㌠1 å°ã®åçåå¯èœãªã³ã³ãã¥ãŒãã£ã³ã°ã®å©çšçã 20 ãã«/æéããŸãã¯ãµãŒããŒãããã®æ15,000 ãã«ã«æç®ããŸãã
çŸåšã®ã¯ã€ã€ã¬ã¹ ãµãŒãã¹ã®ãŠãŒã¶ãŒ 1 人ã®å¹³ååç (ARPU) ã¯ãåœã«ãã£ãŠã¯æ 5 ïœ 50 ãã«ã®ç¯å²ã§ããããšã«çæããŠãAI-RAN ã¯ãRAN ã®ã¿ã®ã·ã¹ãã ãããæ°åã®é«ããæ°ååãã«èŠæš¡ã® AI åçã®æ©äŒãæäŸããŸãã
䜿çšãããããŒã¯ã³ AI ã¯ãŒã¯ããŒãã¯ãLlama-3-70B FP4 ã§ãããAI-RAN ããã§ã«äžçã§æãé«åºŠãª LLM ã¢ãã«ãå®è¡ã§ããããšãå®èšŒããŸãã
é«éåããã AI-RAN ã¯æç¶å¯èœãª RAN
100% RAN ã®ã¿ã®ã¢ãŒãã§ã¯ãGB200-NVL2 ãµãŒããŒã®é»åããã©ãŒãã³ã¹ã¯ãã¯ãã/Gbps ã§ä»¥äžã®å©ç¹ããããŸãã
ä»æ¥ãã¯ã©ã¹æé«ã®ã«ã¹ã¿ã RAN ã®ã¿ã®ã·ã¹ãã ãšæ¯èŒããŠãæ¶è²»é»åã 40% åæž
x86 ããŒã¹ã® vRAN ãšæ¯èŒããŠãæ¶è²»é»åã 60% åæž
æ¯èŒã®ããã«ãããã¯ãã¹ãŠã®ãã©ãããã©ãŒã ã§åãæ°ã® 100 MHz 4T4R ã»ã«ãšã100% RAN ã®ã¿ã®ã¯ãŒã¯ããŒããæ³å®ããŠããŸãã
å³ 3. RAN ã®æ¶è²»é»åãšããã©ãŒãã³ã¹ (ã¯ãã/Gbps)
é«éåããã AI-RAN ã¯ãéåžžã«åçæ§ãé«ã
ãã®è©äŸ¡ã®ããã«ãæ¯èŒããã 3 ã€ã®ãã©ãããã©ãŒã ã®ãããã㧠RAN å±éã®å
±éã®ããŒã¹ã©ã€ã³ãšããŠãæ±äº¬éœã® 1 å°åºã 600 ã»ã«ã§ã«ããŒããã·ããªãªã䜿çšããŸããã 次ã«ãRAN ã®ã¿ãã RAN ãéããããŸã㯠AI ãéèŠãããŸã§ãAI ãš RAN ã®ã¯ãŒã¯ããŒãååžã®è€æ°ã®ã·ããªãªã調ã¹ãŸããã
AI ãå€ãã·ããªãª (å³ 4) ã§ã¯ãRAN ã 3 åã® 1ãAI ã¯ãŒã¯ããŒãã 3 åã® 2 ãåæ£ããŸããã
NVIDIA GB200 NVL2 ãããŒã¹ãšããé«éåããã AI-RAN ã€ã³ãã©ã¹ãã©ã¯ãã£ãžã®è³æ¬æ¯åº (CapEx) æè³é¡ã®1ãã«ã«å¯ŸããŠãéä¿¡äŒç€Ÿã¯ 5 幎é㧠5 åã®åçãçã¿åºãããšãã§ããŸãã
ROI ã®èŠ³ç¹ãããè³æ¬æ¯åºãšéçšæ¯åºã®ãã¹ãŠã®ã³ã¹ããèæ
®ããŠãæè³å
šäœã¯ 219% ã®ãªã¿ãŒã³ãå®çŸããŸããããã¯ãçŸå°ã®ã³ã¹ãæ³å®ã䜿çšããŠããããããã¡ãã SoftBank ç¹æã®ãã®ã§ãã
å³ 4. 600 ã»ã«ã§ 1 ã€ã®æ±äº¬éœå°åºãã«ããŒãã AI-RAN ã®çµæžæ§
33% AIãš 67% RAN
67% AI ãš 33% RAN
CapEx 1 ãã«ãããã®åç $
2x
5x
ROI %
33%
219%
è¡š 1. AI ãå€çšããã·ããªãªãšæ¯èŒããçµæ
RAN ãå€çšããã·ããªãªã§ã¯ã3 åã® 2 ã RANã3 åã® 1 ã AI ã¯ãŒã¯ããŒãåæ£ã«äœ¿çšããNVIDIA ã¢ã¯ã»ã©ã¬ãŒã·ã§ã³ AI-RAN ã® CapEx ã§å²ã£ãåç㯠2 åã«ãªããSoftBank ã®ããŒã«ã« ã³ã¹ãæ³å®ã䜿çšã㊠5 幎é㧠33% ã® ROI ãåŸãããããšãããããŸããã
RAN ã®ã¿ã®ã·ããªãªã§ã¯ãNVIDIA Aerial RAN Computer-1 ã¯ã«ã¹ã¿ã RAN ã®ã¿ã®ãœãªã¥ãŒã·ã§ã³ãããã³ã¹ãå¹çãé«ããç¡ç·ä¿¡å·åŠçã«ã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã°ã䜿çšãã倧ããªå©ç¹ãšãªããŸãã
ãããã®ã·ããªãªãããAI ãå€çšããã¢ãŒã RAN ãå€çšããã¢ãŒãã®äž¡æ¹ã§ãRAN ã®ã¿ã®ãœãªã¥ãŒã·ã§ã³ãšæ¯èŒããŠãAI-RAN ãé«ãåçæ§ãæããã«ãªããŸãã æ¬è³ªçã«ãAI-RAN ã¯ãåŸæ¥ã® RAN ãã³ã¹ã ã»ã³ã¿ãŒããå©çã»ã³ã¿ãŒã«å€é©ããŸãã
AI ã®äœ¿çšéã®å¢å ã«ããããµãŒããŒãããã®åçæ§ãåäžããŸãã RAN ã®ã¿ã®å Žåã§ããAI-RAN ã€ã³ãã©ã¹ãã©ã¯ãã£ã¯ãã«ã¹ã¿ã RAN ã®ã¿ã®ãªãã·ã§ã³ãããã³ã¹ãå¹çãé«ããªããŸãã
åçãš TCO ã®èšç®ã«äœ¿çšãããäž»ãªåææ¡ä»¶ã«ã¯ã次ã®ãã®ãå«ãŸããŸãã
åãã©ãããã©ãŒã ã®ãã©ãããã©ãŒã ããµãŒããŒãã©ãã¯ã®ããããã®æ°ã¯ãåãåšæ³¢æ°ã§ãã 4T4R 㧠600 ã»ã«ããããã€ããå
±éã®ããŒã¹ã©ã€ã³ã䜿çšããŠèšç®ãããŸãã
ç·ææã³ã¹ã (TCO) ã¯ã5 幎以äžã§èšç®ãããŠãããããŒããŠã§ã¢ããœãããŠã§ã¢ãvRANãAI ã®éçšã³ã¹ããå«ãŸããŠããŸãã
æ°ãã AI åçã®èšç®ã«ã¯ãGB200 NVL2 AI ããã©ãŒãã³ã¹ ãã³ãããŒã¯ã«åºã¥ããŠããµãŒããŒãããã®æé 20 ãã«ã䜿çšããŸããã
éçšæ¯åºã³ã¹ãã¯ãæ¥æ¬ã®çŸå°ã®é»åã³ã¹ãã«åºã¥ããŠãããäžççã«æ¡åŒµããããšã¯ã§ããŸããã
ROI % = (æ°ãã AI åç â TCO) / TCO
AI ã®åçã®åäžããšãã«ã®ãŒå¹çãåçæ§ãåçæ§ã®ãã®æ€èšŒã«ããããã®ãã¯ãããžã®å®çŸæ§ãããã©ãŒãã³ã¹ãçµæžçãªã¡ãªããã«çãã®äœå°ã¯ãããŸããã
ä»åŸãVera Rubin ãªã©ã® NVIDIAã¹ãŒããŒãããã®åäžä»£ãææ°é¢æ°çã«å¢å ããããšã§ããããã®ã¡ãªããã¯ããã«æ¡éãã«å¢å€§ããåŸ
æã®éä¿¡ãããã¯ãŒã¯ã®ããžãã¹å€é©ãå¯èœã«ãªããŸãã
å°æ¥ãèŠæ®ãã
SoftBank ãš NVIDIA ã¯ãAI-RAN ã®åæ¥åãšæ°ããã¢ããªã±ãŒã·ã§ã³ãçã¿åºãããã«ã
ç¶ç¶çã«åå
ããŠããŸãã ãã®å¥çŽã®æ¬¡ã®ãã§ãŒãºã§ã¯ãã¹ãã¯ãã«å¹çãåäžããã AI-for-RAN ã®åãçµã¿ãšããã¡ã€ã³ãã¥ãŒãã³ã°ãšãã¹ãã®ããã«ããžã¿ã« ãããã¯ãŒã¯ãã·ãã¥ã¬ãŒããã NVIDIA Aerial Omniverse ããžã¿ã« ãã€ã³ã®åãçµã¿ãå«ãŸããŸãã
NVIDIA AI Aerial ã¯ãäžçäžã®éä¿¡äºæ¥è
ãšãšã³ã·ã¹ãã ããŒãããŒããã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã°ãšãœãããŠã§ã¢ ããã¡ã€ã³ã RAN + AI ã®ãã¯ãŒã䜿çšããŠã5G ããã³ 6G ãããã¯ãŒã¯ãå€é©ããåºç€ãç¯ããŸãã NVIDIA Aerial RAN Computer-1 ãš AI Aerial ãœãããŠã§ã¢ ã©ã€ãã©ãªã䜿çšããŠãç¬èªã® AI-RAN å®è£
ãéçºã§ããããã«ãªããŸããã
NVIDIA AI ãšã³ã¿ãŒãã©ã€ãº ã¯ãå€ãã® NVIDIA ãœãããŠã§ã¢ ããŒã«ãããã䜿çšããããã®ãã©ã€ã¢ã«ãããæãããªããã«ãAI-RAN ã§ãã¹ãå¯èœãªéä¿¡äºæ¥è
åãã®æ°ãã AI ã¢ããªã±ãŒã·ã§ã³ã®äœæã«ãè²¢ç®ããŠããŸããããã«ã¯ãçæ AI åãã® NIM ãã€ã¯ããµãŒãã¹ãRAGãVLMããããã£ã¯ã¹ ãã¬ãŒãã³ã°çšã® NVIDIA IsaacãNVIDIA NeMoãRAPIDSãæšè«çšã® NVIDIA TritonãAI ãããŒã«ãŒçšãµãŒããŒã¬ã¹ API ãå«ãŸããŸãã
éä¿¡æ¥çã¯ãAI ãµãŒãã¹ ãããã€ããŒã«ãªã倧ããªãã£ã³ã¹ã®æåç·ã«ç«ã£ãŠããŸãã AI-RAN ã¯ãã¯ã€ã€ã¬ã¹ ãããã¯ãŒã¯ã®æ°ããåºç€ãšããŠã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã°ã䜿çšããããšã§ãäžçäžã®éä¿¡äŒç€Ÿã«ãšã£ãŠãã®æ°ããå€é©ãä¿é²ã§ããŸãã
ãã®çºè¡šã¯ãAI-RAN ãã¯ãããžã®ç»æçãªç¬éã§ããããã®å®çŸæ§ããã£ãªã¢ã°ã¬ãŒãã®ããã©ãŒãã³ã¹ãåªãããšãã«ã®ãŒå¹çãçµæžçãªäŸ¡å€ã蚌æããŸããã NVIDIA ã®é«éåããã AI-RAN ã€ã³ãã©ã¹ãã©ã¯ãã£ã«æè³ãããè³æ¬æ¯åº 1 ãã«ã¯ã6G ã«å¯Ÿå¿ããªããã5 åã®åçãçã¿åºããŸãã
AI åçåãžã®åãçµã¿ã¯ãä»ããå§ããããŸãã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
éä¿¡äŒç€Ÿãåœå®¶ AI ã€ã³ãã©ã¹ãã©ã¯ãã£ãšãã©ãããã©ãŒã ãã©ã®ããã«å®çŸããã
GTC ã»ãã·ã§ã³:
çŸä»£ã®éä¿¡äŒç€Ÿ Blueprint: AI ã䜿çšããŠå€é©ãšåçºæ
GTC ã»ãã·ã§ã³:
人工ç¥èœãéä¿¡ãå€é©ãã 3 ã€ã®æ¹æ³
SDK:
Aerial Omniverse ããžã¿ã« ãã€ã³
ãŠã§ãããŒ:
How Telcos Transform Customer Experiences with Conversational AI
ãŠã§ãããŒ:
å€èšèªé³å£° AI ã«ã¹ã¿ãã€ãºããããšãŒãžã§ã³ãã¢ã·ã¹ãã§éä¿¡äŒç€Ÿ ã³ã³ã¿ã¯ã ã»ã³ã¿ãŒ ãšãŒãžã§ã³ãã®åŒ·å |
https://developer.nvidia.com/blog/developing-a-172b-llm-with-strong-japanese-capabilities-using-nvidia-megatron-lm/ | Developing a 172B LLM with Strong Japanese Capabilities Using NVIDIA Megatron-LM | Generative AI has the ability to create entirely new content that traditional machine learning (ML) methods struggle to produce. In the field of natural language processing (NLP), the advent of
large language models (LLMs)
specifically has led to many innovative and creative AI use cases. These include customer support chatbots, voice assistants, text summarization and translation, and more: tasks previously handled by humans.
LLMs continue to evolve through various approaches, including increasing the number of parameters and the adoption of new algorithms like Mixture of Experts (MoE). The application and adaptation of LLMs are anticipated across many industries, including retail, manufacturing, and finance.
However, many models that currently top the LLM leaderboard show insufficient understanding and performance in non-English languages, including Japanese. One of the reasons for this is that the training corpus contains a high proportion of English data. For example,
only 0.11% of the GPT-3 corpus is Japanese data
. Creating LLM models that perform well in Japanese, which has less training data than English, has been immensely challenging.
This post presents insights gained from training an AI model with 172 billion parameters as part of the
Generative AI Accelerator Challenge (GENIAC)
project, using
NVIDIA Megatron-LM
to help address the shortage of high-performance models for Japanese language understanding.
LLM-jp initiatives at GENIAC
The
Ministry of Economy, Trade and Industry (METI)
launched GENIAC to raise the level of platform model development capability in Japan and to encourage companies and others to be creative. GENIAC has provided computational resources, supported matching with companies and data holders, fostered collaboration with global technology companies, held community events, and evaluated the performance of the developed platform models.
The
LLM-jp
project to develop a completely
open model with 172 billion parameters
(available on Hugging Face) with strong Japanese language capabilities was selected for the GENIAC initiative. LLM-jp 172B was the largest model development in Japan at that time (February to August 2024), and it was meaningful to share the knowledge of its development widely.
LLM-jp is an initiative launched by researchers in the fields of natural language processing and computer systems, mainly at NII. Its objective is to accumulate know-how on the mathematical elucidation of training principles, such as how large-scale models acquire generalization performance and how efficiently they learn, through the continuous development of models that are completely open and available for commercial use.
Training the model using NVIDIA Megatron-LM
Megatron-LM
serves as a lightweight research-oriented framework leveraging
Megatron-Core
for training LLMs at unparalleled speed. Megatron-Core, the main component, is an open-source library that contains GPU-optimized techniques and cutting-edge system-level optimizations essential for large-scale training.
Megatron-Core supports various advanced model parallelism techniques, including tensor, sequence, pipeline, context, and MoE expert parallelism. This library offers
customizable building blocks
, training resiliency features such as
fast distributed checkpointing
, and many other innovations such as
Mamba-based hybrid model training
. It's compatible with all NVIDIA Tensor Core GPUs, and includes support for
Transformer Engine (TE)
with FP8 precision introduced with
NVIDIA Hopper architecture
.
Model architecture and training settings
Table 1 provides an overview of the model architecture for this project, which follows
Llama 2 architecture
.
Parameter
Value
Hidden size
12288
FFN intermediate size
38464
Number of layers
96
Number of attention heads
96
Number of query groups
16
Activation function
SwiGLU
Position embedding
RoPE
Normalization
RMSNorm
Table 1. Overview of LLM-jp 172B model architecture
The LLM-jp 172B model is being trained from scratch using 2.1 trillion tokens of a multilingual corpus developed for the project consisting mainly of Japanese and English. The training is performed using NVIDIA H100 Tensor Core GPUs on Google Cloud A3 Instance with FP8 hybrid training using the Transformer Engine. Megatron-Core v0.6 and Transformer Engine v1.4 are used in the experiment.
Table 2 shows hyperparameter settings for training.
Parameter
Value
LR
1E-4
min LR
1E-5
LR WARMUP iters
2000
Weight decay
0.1
Grad clip
1.0
Global batch size
1728
Context length
4096
Table 2. Hyperparameters used for the model training
In addition, z-loss and batch-skipping techniques, which are used in
PaLM
, are incorporated to stabilize the training process, and flash attention is used to further speed up the training process.
To view other training configurations, please see
llm-jp/Megatron-LM
.
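As a rough illustration of how the settings in Tables 1 and 2 translate into a launch configuration, the sketch below maps them onto Megatron-LM-style command-line arguments. The flag names follow the public Megatron-LM pretraining scripts; the data, tokenizer, checkpointing, and parallelism options are intentionally omitted, so this is an assumption-based sketch rather than the exact llm-jp configuration.

```python
# Sketch: Table 1/2 settings expressed as Megatron-LM style launch arguments.
# Flag names follow the public Megatron-LM pretraining scripts; data,
# tokenizer, and parallelism arguments are intentionally left out.
MODEL_ARGS = [
    "--num-layers", "96",
    "--hidden-size", "12288",
    "--ffn-hidden-size", "38464",
    "--num-attention-heads", "96",
    "--group-query-attention",
    "--num-query-groups", "16",
    "--swiglu",
    "--use-rotary-position-embeddings",
    "--normalization", "RMSNorm",
    "--seq-length", "4096",
    "--max-position-embeddings", "4096",
]

OPTIMIZER_ARGS = [
    "--lr", "1e-4",
    "--min-lr", "1e-5",
    "--lr-warmup-iters", "2000",
    "--weight-decay", "0.1",
    "--clip-grad", "1.0",
    "--global-batch-size", "1728",
]

# These lists would be passed to pretrain_gpt.py together with the data,
# tokenizer, checkpointing, and parallelism settings for the actual run.
print(" ".join(MODEL_ARGS + OPTIMIZER_ARGS))
```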
Training throughput and results
Pretraining for the latest LLM-jp 172B model is currently underway, with periodic evaluations every few thousand iterations to monitor training progress and ensure successful accuracy results on Japanese and English downstream tasks (Figure 1). So far, over 80% of the targeted 2.1 trillion tokens has been processed.
Figure 1. Training loss curves for pretraining with 1.7 trillion tokens using Megatron FP8 hybrid training
Notably, there is a sharp increase in TFLOP/s after approximately 7,000 iterations, corresponding to the transition from BF16 to FP8-hybrid precision. In this experiment, BF16 plus TE was used for training before 7,000 iterations, and FP8 hybrid plus TE was used after 7,000 iterations. In Megatron-LM, it is possible to enable hybrid FP8 training with the simple option
--fp8-format hybrid. Note that this feature is experimental, with further optimizations coming soon.
Figure 2. Training throughput (TFLOP/s) when TE is used with BF16 and FP8 hybrid
The reason we started the training with BF16 plus TE and then switched to FP8 hybrid was not only to see the tokens/sec performance difference between BF16 and FP8, but also to make the initial training more stable. In the early stages of training, the learning rate (LR) increases due to the warm-up, leading to unstable training.
We chose to perform the initial training with BF16, and after confirming that there were no problems with the values of training loss, optimizer states, gradient norm, and so on, we switched to FP8 to speed up the training process. FP8 hybrid has improved the training speed. We observed a training speed of 545-553 TFLOP/s with Megatron-LM.
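For reference, the BF16-to-FP8 switch described above corresponds to adding the FP8 options on top of a BF16 plus Transformer Engine configuration when the run is restarted from a checkpoint. The sketch below is hypothetical: the flag names come from the public Megatron-LM documentation, and the amax history and compute settings are illustrative defaults rather than the values used for LLM-jp 172B.

```python
# Sketch only: FP8-hybrid layered on top of a BF16 + Transformer Engine run.
BF16_TE_ARGS = [
    "--bf16",
    "--transformer-impl", "transformer_engine",
]

FP8_HYBRID_ARGS = BF16_TE_ARGS + [
    "--fp8-format", "hybrid",            # E4M3 forward, E5M2 for gradients
    "--fp8-amax-history-len", "1024",    # illustrative, not the project value
    "--fp8-amax-compute-algo", "max",
]
```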
Figure 3. Weak scaling performance based on the results of the main and preliminary experiments of the LLM-jp 172B model training
Conclusion
As mentioned above, the training of LLM-jp 172B is still ongoing using Megatron-LM. Based on the evaluation results of downstream tasks using the current checkpoint data, we suppose that the model has already acquired excellent Japanese language capabilities, but the complete model is expected to be ready early next year. Training time is often a significant challenge in pretraining LLMs, where vast datasets are required. Therefore, efficient training frameworks like Megatron-LM are crucial for accelerating generative AI research and development. For the 172B model trained with
Megatron-LM
, we explored FP8-hybrid training as a potential method for improving training speed, achieving a 1.4x acceleration from 400 TFLOP/s to 550 TFLOP/s. This result suggests that FP8-hybrid could be a valuable approach for enhancing the efficiency of large-scale model pretraining. | https://developer.nvidia.com/ja-jp/blog/developing-a-172b-llm-with-strong-japanese-capabilities-using-nvidia-megatron-lm/ | Megatron-LM ãçšããæ¥æ¬èªã«åŒ·ã 172B 倧èŠæš¡èšèªã¢ãã«ã®éçº | Reading Time:
2
minutes
çæ AI ã¯ããã®åè¶ããèœåã®ãããã§ãåŸæ¥ã®æ©æ¢°åŠç¿ææ³ã§ã¯ã§ããªãã£ãã¿ã¹ã¯ãå®è¡ãã泚ç®ãéããŠããŸããäŸãã°ãèªç¶èšèªåŠçã®åéã§ã¯ã
倧èŠæš¡èšèªã¢ãã« (LLM)
ãç»å Žããããšã§ããã£ãããããã«ããã«ã¹ã¿ã㌠ãµããŒããäŒè°å
容ã®èŠçŽãªã©ããããŸã§äººéãæ
ã£ãŠãã圹å²ã AI ã代ããã«è¡ããªã©å€ãã®é©æ°çã§åµé çãªãŠãŒã¹ ã±ãŒã¹ãçãŸããŠããŸãã
LLM ã¯ããã©ã¡ãŒã¿ãŒæ°ã®å¢å ã MoE (Mixture of Experts) ã®ãããªæ°ããã¢ã«ãŽãªãºã ã®æ¡çšãªã©ãæ§ã
ãªã¢ãããŒããéããŠé²åãç¶ããŠãããå°å£²æ¥ã補é æ¥ãéèæ¥ãªã©ãããŸããŸãªæ¥çãžã®å¿çšãšé©çšãæåŸ
ãããŠããŸãã
ããããçŸåš LLM ãªãŒããŒããŒãã®äžäœã¢ãã«ã®å€ãã¯ãè±èªã«æ¯ã¹ãŠæ¥æ¬èªã®ç解床ãããã©ãŒãã³ã¹ãäœãåŸåã«ãããŸãããã®çç±ã®äžã€ã¯ãåŠç¿ã³ãŒãã¹ã®è±èªããŒã¿ã®å²åã倧ããããšã§ããäŸãã°ã
GPT-3 ã®å Žåãæ¥æ¬èªããŒã¿ã¯ã³ãŒãã¹ã® 0.11% ãããããŸãã
ãæ¥æ¬ã®çæ AI ã®çºå±ã®ããã«ã¯ãéåžžã«å°é£ã§ããè±èªãããåŠç¿ããŒã¿ã®å°ãªãæ¥æ¬èªã§åªããæ§èœãçºæ®ãã LLM ã¢ãã«ãäœæããããšããä¹ãè¶ããã¹ãéèŠãªèª²é¡ã§ãã
æ¬çš¿ã§ã¯ã
GENIAC (Generative AI Accelerator Challenge)
ãããžã§ã¯ãã®äžç°ãšããŠåãçµãã ãMegatron-LM ãçšãã 172B 倧èŠæš¡èšèªã¢ãã«ã®åŠç¿ããåŸãããç¥èŠã玹ä»ããããŒã¿äžè¶³ã®åé¡ãä¹ãè¶ããŠæ¥æ¬èªç解èœåã®é«ãã¢ãã«äœæã«åãçµãã éã®æŽå¯ã«ã€ããŠçŽ¹ä»ããŸãã
GENIAC ã«ããã LLM-jp ã®åãçµã¿
äžèšã§è¿°ã¹ããããªèª²é¡ã解決ããããã«ãçµæžç£æ¥çã¯ãæ¥æ¬åœå
ã®ãã©ãããã©ãŒã ã¢ãã«éçºåã®åäžãšäŒæ¥çã®åµæ工倫ã奚å±ãããããã
Generative AI Accelerator Challenge (GENIAC)
ããç«ã¡äžããŸãããGENIAC ã§ã¯ãèšç®è³æºã®æäŸãäŒæ¥ãšããŒã¿ä¿æè
ãšã®ãããã³ã°æ¯æŽãã°ããŒãã«ããã¯äŒæ¥ãšã®é£æºä¿é²ãã³ãã¥ãã㣠ã€ãã³ãã®éå¬ãéçºããããã©ãããã©ãŒã ã¢ãã«ã®æ§èœè©äŸ¡ãªã©ãçŸåšãç¶ç¶ããŠè¡ã£ãŠããŸãã
ãã®åãçµã¿ã«ã
LLM-jp
ã®æ¥æ¬èªå¯Ÿå¿åã«åªãã
å®å
šãªãŒãã³ãª 172B ã¢ãã«
ã®éçºãšããããŒããéžã°ããŸããã172B ã¯åœæ (2024 幎 2 æãã 8 æ) æ¥æ¬åœå
ã§æ倧èŠæš¡ã®ã¢ãã«éçºã§ããããã®éçºããŠããŠãåºãå
±æããããšã¯éåžžã«ææ矩ãªããšã§ããã
LLM-jp ã¯ã
NII (åœç«æ
å ±åŠç 究æ)
ãäžå¿ãšããèªç¶èšèªåŠçãèšç®æ©ã·ã¹ãã åéã®ç 究è
ãäžå¿ãšãªã£ãŠç«ã¡äžããåãçµã¿ã§ã倧èŠæš¡ã¢ãã«ãæ±åæ§èœãç²åŸããä»çµã¿ãåŠç¿ã®å¹çæ§ãšãã£ãåŠç¿åçã®æ°åŠç解æã«é¢ããããŠããŠããå®å
šã«ãªãŒãã³ã§åçšå©çšå¯èœãªã¢ãã«ã®ç¶ç¶çãªéçºãéããŠèç©ããäºãããã³åŠç¿ã®å¹çæ§ã«é¢ããããŠããŠãèç©ããããšãç®çãšããŠããŸãã
NVIDIA Megatron-LM
Megatron-LM
ã¯ã
Megatron-Core
ã掻çšããŠå€§èŠæš¡èšèªã¢ãã« (LLM) ãæ¯é¡ã®ãªãé床ã§åŠç¿ãã軜éãªç 究æåãã¬ãŒã ã¯ãŒã¯ãšããŠæ©èœããŸããäž»èŠã³ã³ããŒãã³ãã§ãã Megatron-Core ã¯ã倧èŠæš¡ãªåŠç¿ã«äžå¯æ¬ 㪠GPU æé©åæè¡ãšæå
端ã®ã·ã¹ãã ã¬ãã«ã®æé©åãå«ãã©ã€ãã©ãªã§ãã
Megatron-Core ã¯ããã³ãœã«ãã·ãŒã±ã³ã¹ããã€ãã©ã€ã³ãã³ã³ããã¹ããMoE ãšãã¹ããŒã䞊ååŠçãªã©ãããŸããŸãªé«åºŠãªã¢ãã«äžŠååŠçææ³ããµããŒãããŠããŸãããã®ã©ã€ãã©ãªã¯ã
ã«ã¹ã¿ãã€ãºå¯èœãªãã«ãã£ã³ã° ãããã¯
ã
é«éåæ£ãã§ãã¯ãã€ã³ã
ãªã©ã®åŠç¿å埩åæ©èœã
Mamba ããŒã¹ã®ãã€ããªãã ã¢ãã«åŠç¿
ãªã©ã®ä»ã®å€ãã®ã€ãããŒã·ã§ã³ãæäŸããŸãããã¹ãŠã® NVIDIA Tensor ã³ã¢ GPU ãšäºææ§ãããã
NVIDIA Hopper ã¢ãŒããã¯ãã£
ã§å°å
¥ããã FP8 粟床㮠Transformer Engine ãã¯ãããžã®ãµããŒããå«ãŸããŠããŸãã
äžèšã®ãããªæå
端ã®æ©èœãæäŸããããšã§ãMegatron-LM ã¯ãç 究è
ãã¢ãã«éçºè
ã 1,000 åã®ãã©ã¡ãŒã¿ãŒãè¶
ããã¢ãã«ã§ããé«éãªåŠç¿ãšæ°åã® GPU ã¹ã±ãŒã«ãžã®ã¹ã±ãŒã©ããªãã£ãå®çŸã§ããããã«ããŸãã
ã¢ãã« ã¢ãŒããã¯ãã£ãšåŠç¿èšå®
以äžã¯ã
Meta ã® Llama2
ã¢ãŒããã¯ãã£ã«æºæ ãããã®ãããžã§ã¯ãã®ã¢ãã« ã¢ãŒããã¯ãã£ã®æŠèŠã§ãã
ãã©ã¡ãŒã¿ãŒ
å€
Hidden size
12288
FFN Intermediate size
38464
Number of layers
96
Number of attention heads
96
Number of query groups
16
Activation function
SwiGLU
Position embedding
RoPE
Normalization
RMSNorm
è¡š 1. LLM-jp 172B ã¢ãã«ã¢ãŒããã¯ãã£æŠèŠ
ãã® 172B ã¢ãã«ã¯ããããžã§ã¯ãçšã«éçºãããå€èšèªã³ãŒãã¹ (äž»ã«æ¥æ¬èªãšè±èª) ã® 2.1T ããŒã¯ã³ (2.1 å
ããŒã¯ã³) ã䜿çšããŠãŒãããåŠç¿ãããŠããŸããåŠç¿ã¯ãTransformer Engine ã䜿çšãã FP8 ãã€ããªããåŠç¿ã§ãGoogle Cloud ã® A3 ã€ã³ã¹ã¿ã³ã¹äžã® H100 Tensor ã³ã¢ GPU ã䜿çšããŠå®è¡ãããŠããŸããå®éšã§ã¯ã
Megatron-Core
v0.6 ãš
Transformer Engine
v1.4 ã䜿çšãããŠããŸãã
åŠç¿ã®ãã€ããŒãã©ã¡ãŒã¿ãŒèšå®ã¯æ¬¡ã®ãšããã§ãã
ãã©ã¡ãŒã¿ãŒ
å€
LR
1E-4
min LR
1E-5
LR WARMUP iters
2000
Weight Decay
0.1
Grad Clip
1.0
global batch size
1728
context length
4096
è¡š 2. ãã®å®éšã§äœ¿çšãããã€ããŒãã©ã¡ãŒã¿ãŒã®æŠèŠ
詳现ãªèšå®ã«èå³ã®ããæ¹ã¯ãä»ã®åŠç¿èšå®ã
llm-jp/Megatron-LM
ã§ã芧ããã ããŸãã
ãŸãã
PaLM
ã§æ¡çšãããŠãããz-loss ã batch-skipping ãã¯ããã¯ãåãå
¥ããããšã§åŠç¿ããã»ã¹ãå®å®åãããflash attention ãå©çšããããšã§åŠç¿ããã»ã¹ãããã«é«éåãããŠããŸãã
åŠç¿çµæãšã¹ã«ãŒããã
LLM-jp 172B ã¢ãã«ã®äºååŠç¿ã¯ãæ°åã€ãã¬ãŒã·ã§ã³ããšã«ãæ¥æ¬èªãšè±èªã®äžæµã¿ã¹ã¯ã®è©äŸ¡çµæãã¢ãã¿ãŒããåŠç¿ãããŸãé²ãã§ãããã©ããã確èªããªããçŸåšãé²è¡äžã§ãããããŸã§ã®ãšãããç®æšãšãã 2 å
1,000 åããŒã¯ã³ã® 80% 匷ãŸã§å®äºããŠããŸãã
Megatron ã® FP8 ãã€ããªããåŠç¿ãçšãã 1.7 å
ããŒã¯ã³ã®äºååŠç¿ã«ãããåŠç¿æ倱æ²ç·ã以äžã«ç€ºããŸãããã®æ²ç·ã¯ã240,000 ã¹ããããŸã§æ倱ãçå®ã«æžå°ããŠããããšã瀺ããŠããŸãã
å³ 1. 240k ã¹ããããŸã§ã®åŠç¿ãã¹
以äžã®ã°ã©ãã¯ãY 軞㯠TFLOP/sãX 軞ã¯ã€ãã¬ãŒã·ã§ã³åæ°ã瀺ããŠããŸãã泚ç®ãã¹ãã¯ãåŠç¿ã BF16 ãã FP8 ãã€ããªãããžåãæ¿ããçŽ 7,000 åã®ã€ãã¬ãŒã·ã§ã³ã®ã¿ã€ãã³ã°ã§ãTFLOP/s ãæ¥æ¿ã«å¢å ããŠããããšã§ãããã®å®éšã§ã¯ãBF16 + Transformer Engine ã 7,000 å以åã®åŠç¿ã«äœ¿çšãããFP8 ãã€ããªãã + Transformer Engine ã 7000 å以éã®åŠç¿ã«äœ¿çšãããŸãããMegatron-LM ã§ã¯ãåçŽãªãªãã·ã§ã³
--fp8-format
â
hybrid
â 㧠FP8 ãã€ããªããåŠç¿ãæå¹ã«ããããšãã§ããŸãã
å³ 2. Transformer Engine ã BF16 ãš FP8 ã®ãã€ããªããã§äœ¿çšããå Žåã®åŠç¿ã¹ã«ãŒããã (TFLOP/s)
BF16 + Transformer Engine ã§åŠç¿ãéå§ãããã®åŸ FP8 ãã€ããªããã«åãæ¿ããçç±ã¯ãBF16 ãš FP8 ã§ã©ã®çšåºŠã® tokens/sec ã®æ§èœå·®ãããããèŠãããã ãã§ãªããåæåŠç¿ãããå®å®ãããããã§ããããŸããåŠç¿ã®åæ段éã§ã¯ããŠã©ãŒã ã¢ããã«ããåŠç¿ç (LR) ãäžæããåŠç¿ãäžå®å®ã«ãªããŸããããã§ãåæåŠç¿ã¯ BF16 ã§è¡ããåŠç¿æ倱ã®å€ããªããã£ãã€ã¶ãŒã®ç¶æ
ãåŸé
ãã«ã ãªã©ã«åé¡ããªãããšã確èªããåŸãFP8 ã«åãæ¿ããŠåŠç¿ãé«éåããããšã«ããŸãããFP8 ãã€ããªããã«ããåŠç¿é床ãåäžããŠããããšãåãããŸãã
ç§ãã¡ã¯ãæçµçã« Megatron-LM ãçšã㊠545-553TFLOP/s ã®æ§èœãéæã§ããããšã確èªããŸããã以äžã¯ãLLM-jp 172B ã¢ãã«åŠç¿ã®æ¬å®éšãšäºåå®éšã®çµæã«åºã¥ãã匱ã¹ã±ãŒãªã³ã°æ§èœã®ã°ã©ãã§ãããã®ã°ã©ãã§ã¯ãY 軞ã Aggregate Throughput ãè¡šããX 軞ãåŠç¿ã«äœ¿çšãã GPU ã®æ°ãè¡šããŠããŸããLlama2 7BãLlama2 13BãLLM-jp 172B ã®åŠç¿çµæã¯ãç·åœ¢ã¹ã±ãŒãªã³ã°ã瀺ããŠããããšãåãããŸãã
å³ 3. LLM-jp 172B ã¢ãã«å®éšã®åŒ±ã¹ã±ãŒãªã³ã°æ§èœ
ãŸãšã
åè¿°ã®éããLLM-jp 172B ã®åŠç¿ã¯çŸåšã Megatron-LM ãçšããŠé²è¡äžã§ããçŸåšã®ãã§ãã¯ãã€ã³ã ããŒã¿ãçšããäžæµã¿ã¹ã¯ã®è©äŸ¡çµæãããçŸç¶ã§ãæ¢ã«åªããæ¥æ¬èªèœåãç²åŸããŠãããšæšå¯ãããŸãããå®å
šãªã¢ãã«ã¯æ¥å¹Žã®åãã«å®æäºå®ã§ãã
èšå€§ãªããŒã¿ã»ãããå¿
èŠãšãã倧èŠæš¡èšèªã¢ãã«ã®äºååŠç¿ã«ãããŠãåŠç¿æéã¯ãã°ãã°å€§ããªèª²é¡ãšãªããŸãããã®ãããMegatron-LM ã®ãããªå¹ççã«åŠç¿å¯èœãªãã¬ãŒã ã¯ãŒã¯ã¯ãçæ AI ã®ç 究éçºãå éãããçºã«éåžžã«éèŠã§ãã
Megatron-LM
ã§åŠç¿ãã 172B ã¢ãã«ã«ãããŠãFP8-hybrid åŠç¿ãåŠç¿é床ãåäžãããå¹æçãªææ³ã§ããããšãå®èšŒãã1.4 åã®é«éå (400 TFLOP/s â 550 TFLOP/s) ãéæããŸããããã®çµæã¯ãFP8-hybrid ã倧èŠæš¡ã¢ãã«ã®äºååŠç¿ã®å¹çãåäžãããæçšãªã¢ãããŒãã§ããããšã匷調ããŠããŸãã |
https://developer.nvidia.com/blog/5x-faster-time-to-first-token-with-nvidia-tensorrt-llm-kv-cache-early-reuse/ | 5x Faster Time to First Token with NVIDIA TensorRT-LLM KV Cache Early Reuse | In our previous
blog post
, we demonstrated how reusing the key-value (KV) cache by offloading it to CPU memory can accelerate time to first token (TTFT) by up to 14x on x86-based NVIDIA H100 Tensor Core GPUs and 28x on the NVIDIA GH200 Superchip. In this post, we shed light on KV cache reuse techniques and best practices that can drive even further TTFT speedups.
Introduction to KV cache
LLM models are rapidly being adopted for many tasks, including question answering and code generation. To generate a response, these models begin by converting the user's prompt into tokens, which are then transformed into dense vectors. Extensive dot-product operations follow to mathematically model the relationships between the tokens and build a contextual understanding of the user input. The computational cost of generating this contextual understanding increases quadratically with the length of the input sequence.
This resource-intensive process generates keys and values, which are cached to avoid recomputation when generating subsequent tokens. Reusing the KV cache reduces the computational load and time needed to generate additional tokens, leading to a faster and more efficient user experience.
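To make the mechanics concrete, the toy PyTorch sketch below shows why caching helps: at each decode step, only the new token's key and value are computed and appended to the cache, so attention for that step reads the cached tensors instead of reprocessing the whole prompt. This illustrates the general technique only and is not TensorRT-LLM's implementation.

```python
import torch

def decode_step(q_new, k_new, v_new, kv_cache):
    """One decode step with a KV cache.

    q_new, k_new, v_new: (1, d) tensors for the newly generated token.
    kv_cache: dict of previously computed keys/values, each of shape (t, d).
    """
    # Append the new key/value instead of recomputing K and V for the full sequence.
    kv_cache["k"] = torch.cat([kv_cache["k"], k_new], dim=0)
    kv_cache["v"] = torch.cat([kv_cache["v"], v_new], dim=0)

    d = q_new.shape[-1]
    scores = (q_new @ kv_cache["k"].T) / d**0.5   # (1, t+1)
    weights = torch.softmax(scores, dim=-1)
    return weights @ kv_cache["v"]                # (1, d) attention output

# Toy usage: a 4-token prompt already cached, one new token arriving.
d = 8
cache = {"k": torch.randn(4, d), "v": torch.randn(4, d)}
out = decode_step(torch.randn(1, d), torch.randn(1, d), torch.randn(1, d), cache)
print(out.shape)  # torch.Size([1, 8])
```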
When reusing the KV cache, careful attention must be given to how long it remains in memory, which components to evict first when memory is full, and when it can be reused for new incoming prompts. Optimizing these factors can lead to incremental performance improvements in KV cache reuse. NVIDIA TensorRT-LLM offers three key features that specifically address these areas.
Early KV cache reuse
Traditional reuse algorithms require the entire KV cache computation to be completed before any portions of it can be reused with new user prompts. In scenarios such as enterprise chatbots, where system prompts (predefined instructions added to user queries) are essential to direct the LLM's responses in line with enterprise guidelines, this method can be inefficient.
When a surge of users interacts with the chatbot simultaneously, each user would require a separate computation of the system prompt KV cache. With TensorRT-LLM, we can instead reuse the system prompt as it is being generated in real time, enabling it to be shared across all users during the burst, rather than recalculating it for each user. This can significantly accelerate inference for use cases requiring system prompts by up to 5x.
Figure 1. TensorRT-LLM KV cache reuse can speed up TTFT by up to 5x
Flexible KV cache block sizing
In reuse implementations, only entire cache memory blocks can be allocated for reuse. For example, if the cache memory block size is 64 tokens and the KV cache is 80 tokens, only 64 tokens will be stored for reuse, while the remaining 16 tokens will need to be recomputed. However, if the memory block size is reduced to 16 tokens, all 80 tokens can be stored across five memory blocks, eliminating the need for recomputation.
This effect is most pronounced when the input sequences are short. For long input sequences, larger blocks can be more beneficial. As is clear, the more granular the control you have over the KV cache, the better you can optimize it for your specific use case.
TensorRT-LLM provides fine-grained control over KV cache memory blocks, giving developers the ability to split them into smaller blocks of anywhere from 64 down to 2 tokens. This optimizes the usage of allocated memory, increases reuse rates, and improves TTFT. When running Llama 70B on NVIDIA H100 Tensor Core GPUs, TTFT improves by up to 7% in multi-user environments when the KV cache block size is reduced from 64 tokens to 8 tokens.
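In the high-level TensorRT-LLM Python API, block reuse is exposed through the KV cache configuration, while the block granularity is typically fixed when the engine is built. The snippet below is a sketch based on the LLM API documentation at the time of writing; the class and parameter names, as well as the model identifier, are assumptions that may differ across TensorRT-LLM releases.

```python
from tensorrt_llm import LLM, SamplingParams
from tensorrt_llm.llmapi import KvCacheConfig

# Enable KV cache block reuse; free_gpu_memory_fraction bounds the cache pool size.
kv_cache_config = KvCacheConfig(
    enable_block_reuse=True,
    free_gpu_memory_fraction=0.9,
)

# Model name is a placeholder; any supported checkpoint or prebuilt engine works.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", kv_cache_config=kv_cache_config)

# Requests that share a common prefix (for example, a system prompt) can then
# reuse the cached blocks computed for that prefix.
prompts = [
    "You are a support assistant. Answer briefly.\nUser: How do I reset my password?",
    "You are a support assistant. Answer briefly.\nUser: Where do I find my invoices?",
]
for output in llm.generate(prompts, SamplingParams(max_tokens=64)):
    print(output.outputs[0].text)
```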
Figure 2. Impact of changing KV cache block size on inference speedup
Efficient KV cache eviction protocols
Partitioning the KV cache into smaller blocks and evicting unused ones can be effective for memory optimization, but it introduces dependency complexities. When a specific block is used to generate a response, and the result is stored as a new block, it can form a tree-like structure of dependencies.
Over time, the counters tracking the usage of the source blocks (the branches) may become stale as the dependent nodes (the leaves) are reused. Evicting the source block then requires the eviction of all dependent blocks, which would require recalculation of the KV cache for new user prompts, increasing TTFT.
To address this challenge, TensorRT-LLM includes intelligent eviction algorithms that can trace the dependent nodes from their source nodes and evict dependent nodes first, even if they have more recent reuse counters. This ensures more efficient memory management while preventing unnecessary evictions of dependent blocks.
Figure 3. A logical representation of the KV cache eviction algorithm, showing how it can reduce the number of evicted blocks and increase the likelihood of reuse
Getting started with TensorRT-LLM KV cache reuse
Generating KV cache during inference requires a lot of compute and memory resources. Using it efficiently is critical to improving model response, accelerating inference, and increasing system throughput. TensorRT-LLM provides advanced reuse features for developers looking to further optimize TTFT response times for peak performance.
To start using TensorRT-LLM KV cache reuse check out our
GitHub documentation
. | https://developer.nvidia.com/ja-jp/blog/5x-faster-time-to-first-token-with-nvidia-tensorrt-llm-kv-cache-early-reuse/ | NVIDIA TensorRT-LLM ã® KV Cache Early Reuseã§ãTime to First Token ã 5 åé«éå | Reading Time:
2
minutes
以åã®
ããã°èšäº
ã§ã¯ãkey-value (KV) ãã£ãã·ã¥ã CPU ã¡ã¢ãªã«ãªãããŒãããŠåå©çšããããšã§ãæåã®ããŒã¯ã³ãåºåããããŸã§ã®æé (TTFT: Time To First Token) ã x86 ããŒã¹ã® NVIDIA H100 Tensor ã³ã¢ GPU ã§æ倧 14 åãNVIDIA GH200 Superchip ã§æ倧 28 åã«é«éåã§ããæ¹æ³ãã玹ä»ããŸãããæ¬èšäºã§ã¯ãKV ãã£ãã·ã¥ã®åå©çšæè¡ãšãTTFT ã®ãããªãé«éåãå®çŸãããã¹ããã©ã¯ãã£ã¹ã«ã€ããŠè§£èª¬ããŸãã
KV ãã£ãã·ã¥ã®æŠèŠ
LLM ã¢ãã«ã¯ã質ååçãã³ãŒãçæãªã©ãå€ãã®ã¿ã¹ã¯ã§æ¥éã«æ¡çšãããŠããŸããå¿çãçæããã«ãããããããã®ã¢ãã«ã¯ãŸãããŠãŒã¶ãŒã®ããã³ãããããŒã¯ã³ãžå€æãããã®åŸãããã®ããŒã¯ã³ãå¯ãã¯ãã«ãžãšå€æããŸããèšå€§ãªãããç©æŒç®ããã®åŸã«ç¶ãããã®åŸããŒã¯ã³éã®é¢ä¿æ§ãæ°åŠçã«ã¢ãã«åãããŠãŒã¶ãŒå
¥åã«å¯Ÿããæèç解ãæ§ç¯ããŸãããã®æèç解ãçæããããã«ãããèšç®ã³ã¹ãã¯ãå
¥åã·ãŒã±ã³ã¹ã®é·ãã®äºä¹ã«æ¯äŸããŠå¢å ããŸãã
ãã®ãªãœãŒã¹ã倧éã«æ¶è²»ããããã»ã¹ãã key ãšvalue ãçæãããåŸç¶ã®ããŒã¯ã³ãçæãããšãã«å床èšç®ãããªãããã«ãã£ãã·ã¥ãããŸããKV ãã£ãã·ã¥ãåå©çšããããšã§ãè¿œå ã®ããŒã¯ã³ãçæããéã«å¿
èŠãšãªãèšç®è² è·ãšæéã軜æžãããããé«éã§å¹ççãªãŠãŒã¶ãŒäœéšãå®çŸããŸãã
KV ãã£ãã·ã¥ãåå©çšãããšãã«ã¯ããã£ãã·ã¥ãã¡ã¢ãªã«æ®ãæéãã¡ã¢ãªãäžæ¯ã«ãªã£ããšãã«æåã«åé€ããã³ã³ããŒãã³ããããã³æ°ããå
¥åããã³ããã«åå©çšã§ããã¿ã€ãã³ã°ãªã©ã®ç¹ã«çŽ°å¿ã®æ³šæãæãå¿
èŠããããŸãããããã®èŠå ãæé©åããããšã§ãKV ãã£ãã·ã¥ã®åå©çšã«ãããããã©ãŒãã³ã¹ã®æ®µéçãªå¢å ãžãšã€ãªããããšãã§ããŸããNVIDIA TensorRT-LLM ã¯ããããã®åéã«ç¹åãã 3 ã€ã®äž»èŠãªæ©èœãæäŸããŸãã
Early KV cache reuse
åŸæ¥ã®åå©çšã¢ã«ãŽãªãºã ã§ã¯ãKV ãã£ãã·ã¥ããã®äžéšã§ãã£ãŠãæ°ãããŠãŒã¶ãŒ ããã³ããã§åå©çšããããã«ã¯ãäºåã«ãã¹ãŠã® KV ãã£ãã·ã¥ã®èšç®ãå®äºãããŠããå¿
èŠããããŸããããã®æ¹æ³ã¯ãLLM ã®ã¬ã¹ãã³ã¹ãäŒæ¥ã®ã¬ã€ãã©ã€ã³ã«æ²¿ã£ããã®ã«ããããã«ãã·ã¹ãã ããã³ãã (ãŠãŒã¶ãŒã®åãåããã«è¿œå ãããäºåå®çŸ©ã®æ瀺) ãäžå¯æ¬ ãšãªãäŒæ¥åããã£ããããããªã©ã®ã·ããªãªã§ã¯ãéå¹ççã§ããå¯èœæ§ããããŸãã
ãã£ããããããšåæã«ããåããããŠãŒã¶ãŒãæ¥å¢ããå ŽåãåãŠãŒã¶ãŒã«å¯ŸããŠã·ã¹ãã ããã³ãã KV ãã£ãã·ã¥ãåå¥ã«èšç®ããå¿
èŠããããŸããTensorRT-LLM ã§ã¯ããªã¢ã«ã¿ã€ã ã§çæãããã·ã¹ãã ããã³ãããåå©çšããããšãã§ãããããæ¥å¢æã«ã¯ãã¹ãŠã®ãŠãŒã¶ãŒãšå
±æããããšãã§ãããŠãŒã¶ãŒããšã«åèšç®ããå¿
èŠããããŸãããããã«ãããã·ã¹ãã ããã³ãããå¿
èŠãšãããŠãŒã¹ ã±ãŒã¹ã®æšè«ãæ倧 5 åã«ãŸã§é«éåããããšãã§ããŸãã
å³ 1. TensorRT-LLM KV cache reuse ã«ãããTTFT ãæ倧 5 åé«éå
æè»ãª KV ãã£ãã·ã¥ ããã㯠ãµã€ãº
åå©çšãå®è£
ããéã«ã¯ããã£ãã·ã¥ ã¡ã¢ãª ãããã¯å
šäœã®ã¿ãåå©çšã«å²ãåœãŠãããšãã§ããŸããäŸãã°ããã£ãã·ã¥ ã¡ã¢ãª ããã㯠ãµã€ãºã 64 ããŒã¯ã³ã§ãKV ãã£ãã·ã¥ã 80 ããŒã¯ã³ã§ããå Žåãåå©çšã®ããã«ä¿åã§ããã®ã¯ 64 ããŒã¯ã³ã®ã¿ã§ãããæ®ãã® 16 ããŒã¯ã³ã¯åèšç®ããå¿
èŠããããŸããããããªãããã¡ã¢ãª ããã㯠ãµã€ãºã 16 ããŒã¯ã³ã«æžãããšã64 ããŒã¯ã³ãã¹ãŠã 5 ã€ã®ã¡ã¢ãª ãããã¯ã«æ ŒçŽããããšãã§ããåèšç®ã®å¿
èŠæ§ããªããªããŸãã
ãã®å¹æã¯ãå
¥åã·ãŒã±ã³ã¹ãçããšãã«æãé¡èã«çŸããŸããé·ãå
¥åã·ãŒã±ã³ã¹ã®å Žåã¯ããã倧ããªãããã¯ã®æ¹ãããæçã§ããæããã«ãKV ãã£ãã·ã¥ããã现ããå¶åŸ¡ã§ããã°ã§ããã»ã©ãç¹å®ã®ãŠãŒã¹ ã±ãŒã¹ã«åãããæé©åãåäžããŸãã
TensorRT-LLM ã§ã¯ãKV ãã£ãã·ã¥ ã¡ã¢ãª ãããã¯ããã现ããå¶åŸ¡ã§ãããããéçºè
㯠KV ãã£ãã·ã¥ ã¡ã¢ãª ãããã¯ã 64 ãã 2 ããŒã¯ã³ãŸã§ãããå°ããªãããã¯ã«åå²ããããšãã§ããŸããããã«ãããå²ãåœãŠãããã¡ã¢ãªã®äœ¿çšãæé©åãããåå©çšçãäžæããTTFT ãæ¹åãããŸããNVIDIA H100 Tensor ã³ã¢ GPU 㧠LLAMA70B ãå®è¡ããå ŽåãKV ãã£ãã·ã¥ ãããã¯ãµã€ãºã 64 ããŒã¯ã³ãã 8 ããŒã¯ã³ãžãšæžããããšã§ããã«ããŠãŒã¶ãŒç°å¢ã§ TTFT ãæ倧 7% é«éåã§ããŸãã
å³ 2. KV ãã£ãã·ã¥ ããã㯠ãµã€ãºã®å€æŽã«ããæšè«ã®é«éå
å¹çç㪠KV ãã£ãã·ã¥ã®é€å€ (Eviction) ãããã³ã«
KV ãã£ãã·ã¥ãããå°ããªãããã¯ã«åå²ããæªäœ¿çšã®ãããã¯ãé€å€ããããšã¯ãã¡ã¢ãªã®æé©åã«å¹æçã§ãããäŸåé¢ä¿ã«è€éããçãŸããŸããç¹å®ã®ãããã¯ãã¬ã¹ãã³ã¹ã®çæã«äœ¿çšããããã®çµæãæ°ãããããã¯ãšããŠä¿åããããšãäŸåé¢ä¿ã®ããªãŒæ§é ã圢æãããå¯èœæ§ããããŸãã
æéã®çµéãšãšãã«ããœãŒã¹ ããã㯠(ãã©ã³ã) ã®äœ¿çšã远跡ããã«ãŠã³ã¿ãŒã¯ãåŸå±ããŒã (ãªãŒã) ãåå©çšãããã«ã€ããŠå€ããªãå¯èœæ§ããããŸãããœãŒã¹ ãããã¯ãé€å€ããã«ã¯ãåŸå±ãããã¹ãŠã®ãããã¯ãé€å€ããå¿
èŠããããæ°ãããŠãŒã¶ ããã³ããã® KV ãã£ãã·ã¥ãåèšç®ããå¿
èŠãçã㊠TTFT ãå¢å ããŸãã
ãã®èª²é¡ã«å¯ŸåŠããããã«ãTensorRT-LLM ã«ã¯ãåŸå±ããŒãããœãŒã¹ ããŒããã远跡ããåŸå±ããŒããããæè¿ã®åå©çšã«ãŠã³ã¿ãŒãæã£ãŠããå Žåã§ããæåã«åŸå±ããŒããé€å€ããããšãã§ããã€ã³ããªãžã§ã³ããªé€å€ã¢ã«ãŽãªãºã ãå«ãŸããŠããŸããããã«ãããããå¹ççã«ã¡ã¢ãªã管çã§ããããã«ãªããšå
±ã«ãåŸå±ãããã¯ã®äžèŠãªé€å€ãåé¿ã§ããŸãã
å³ 3. KV ãã£ãã·ã¥ã®é€å€ã¢ã«ãŽãªãºã ã®è«çãè¡šçŸããå³ãé€å€ããããããã¯ã®æ°ãæžãããåå©çšã®å¯èœæ§ãé«ããããæ§åã瀺ããŠããŸãã
TensorRT-LLM KV cache reuse ã䜿ãå§ãã
æšè«äžã« KV ãã£ãã·ã¥ãçæããã«ã¯ãå€ãã®èšç®ãšã¡ã¢ãª ãœãŒã¹ãå¿
èŠã«ãªããŸããå¹ççã«äœ¿çšããããšããã¢ãã«å¿çã®æ¹åãæšè«ã®é«éåãã·ã¹ãã ã¹ã«ãŒãããã®åäžã«ã¯äžå¯æ¬ ã§ããTensorRT-LLM ã¯ãããŒã¯æ§èœã®ããã« TTFT å¿çæéãããã«æé©åããããšããéçºè
ã«é«åºŠãªåå©çšæ©èœãæäŸããŸãã
TensorRT-LLM KV cache reuse ã䜿ãå§ããã«ã¯ã
GitHub ã®ããã¥ã¡ã³ã
ãåç
§ããŠãã ããã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
Speeding up LLM Inference With TensorRT-LLM (TensorRT-LLM ã«ãã LLM æšè«ã®é«éå)
GTC ã»ãã·ã§ã³:
Optimizing and Scaling LLMs With TensorRT-LLM for Text Generation (ããã¹ãçæã®ããã® TensorRT-LLM ã䜿çšãã LLM ã®æé©åãšã¹ã±ãŒãªã³ã°)
SDK:
Torch-TensorRT
SDK:
TensorRT
SDK:
TensorFlow-TensorRT |
https://developer.nvidia.com/blog/state-of-the-art-multimodal-generative-ai-model-development-with-nvidia-nemo/ | State-of-the-Art Multimodal Generative AI Model Development with NVIDIA NeMo | Generative AI
has rapidly evolved from text-based models to multimodal capabilities. These models perform tasks like image captioning and visual question answering, reflecting a shift toward more human-like AI. The community is now expanding from text and images to video, opening new possibilities across industries.
Video AI models are poised to revolutionize industries such as robotics, automotive, and retail. In
robotics
, they enhance autonomous navigation in complex, ever-changing environments, which is vital for sectors like manufacturing and warehouse management. In the automotive industry, video AI is propelling autonomous driving, boosting vehicle perception, safety, and predictive maintenance to improve efficiency.
To build image and video foundation models, developers must curate and preprocess a large amount of training data, tokenize the resulting high-quality data at high fidelity, train or customize pretrained models efficiently and at scale, and then generate high-quality images and videos during inference.
Announcing NVIDIA NeMo for multimodal generative AI
NVIDIA NeMo
is an end-to-end platform for developing, customizing, and deploying generative AI models.
NVIDIA just announced the expansion of NeMo to support the end-to-end pipeline for developing multimodal models. NeMo enables you to easily curate high-quality visual data, accelerate
training
and
customization
with highly efficient tokenizers and parallelism techniques, and reconstruct high-quality visuals during inference.
Accelerated video and image data curation
High-quality training data ensures high-accuracy results from an AI model. However, developers face various challenges in building data processing pipelines, ranging from scaling to data orchestration.
NeMo Curator
streamlines the data curation process, making it easier and faster for you to build multimodal generative AI models. Its out-of-the-box experience minimizes the total cost of ownership (TCO) and accelerates time-to-market.
While working with visuals, organizations can easily reach petabyte-scale data processing. NeMo Curator provides an orchestration pipeline that can load balance on multiple GPUs at each stage of the data curation. As a result, you can reduce video processing time by 7x compared to a naive GPU-based implementation. The scalable pipelines can efficiently process over 100 PB of data, ensuring the seamless handling of large datasets.
Figure 1. NVIDIA NeMo Curator video processing speed
NeMo Curator provides reference video curation models optimized for high-throughput filtering, captioning, and embedding stages to enhance dataset quality, empowering you to create more accurate AI models.
For instance, NeMo Curator uses an optimized captioning model that delivers an order of magnitude throughput improvement compared to unoptimized inference model implementations.
NVIDIA Cosmos tokenizers
Tokenizers map redundant and implicit visual data into compact and semantic tokens, enabling efficient training of large-scale generative models and democratizing their inference on limited computational resources.
Today's open video and image tokenizers often generate poor data representations, leading to lossy reconstructions, distorted images, and temporally unstable videos, and placing a cap on the capability of generative models built on top of the tokenizers. Inefficient tokenization processes also result in slow encoding and decoding and longer training and inference times, negatively impacting both developer productivity and the user experience.
NVIDIA Cosmos tokenizers are open models that offer superior visual tokenization with exceptionally large compression rates and cutting-edge reconstruction quality across diverse image and video categories.
Video 1. Efficient Generative AI Tokenizers for Image and Video
These tokenizers are easy to use through a suite of standardized tokenizer models that support vision-language models (VLMs) with discrete latent codes, diffusion models with continuous latent embeddings, and various aspect ratios and resolutions, enabling the efficient management of high-resolution images and videos. This provides you with tools for tokenizing a wide variety of visual input data to build image and video AI models.
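For reference, the tokenizers ship as standalone encoder and decoder checkpoints that can be driven from a few lines of Python. The sketch below follows the usage pattern shown in the /NVIDIA/cosmos-tokenizer repository; the class name, checkpoint layout, and the CV4x8x8 variant are taken from that README as of this writing and may change.

```python
import torch
from cosmos_tokenizer.video_lib import CausalVideoTokenizer

# Continuous-video variant with 4x temporal and 8x8 spatial compression.
model_name = "Cosmos-Tokenizer-CV4x8x8"
encoder = CausalVideoTokenizer(checkpoint_enc=f"pretrained_ckpts/{model_name}/encoder.jit")
decoder = CausalVideoTokenizer(checkpoint_dec=f"pretrained_ckpts/{model_name}/decoder.jit")

# Input video: (batch, channels, frames, height, width), values roughly in [-1, 1].
video = torch.randn(1, 3, 9, 512, 512).to("cuda").to(torch.bfloat16)

(latent,) = encoder.encode(video)        # compact continuous latents
reconstruction = decoder.decode(latent)  # tensor with the input video's shape
print(latent.shape, reconstruction.shape)
```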
Cosmos tokenizer architecture
A Cosmos tokenizer uses a sophisticated encoder-decoder structure designed for high efficiency and effective learning. At its core, it employs 3D
causal convolution blocks
, which are specialized layers that jointly process spatiotemporal information, and uses causal temporal attention that captures long-range dependencies in data.
The causal structure ensures that the model uses only past and present frames when performing tokenization, avoiding future frames. This is crucial for aligning with the causal nature of many real-world systems, such as those in physical AI or multimodal LLMs.
Figure 2. NVIDIA Cosmos tokenizer architecture
The input is downsampled using 3D wavelets, a signal processing technique that represents pixel information more efficiently. After the data is processed, an inverse wavelet transform reconstructs the original input.
This approach improves learning efficiency, enabling the tokenizer encoder-decoder learnable modules to focus on meaningful features rather than redundant pixel details. The combination of such techniques and its unique training recipe makes the Cosmos tokenizers a cutting-edge architecture for efficient and powerful tokenization.
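As a standalone illustration of the causality constraint described above, a temporally causal 3D convolution can be built by padding only on the past side of the time axis, so the output at frame t never depends on frames after t. This minimal PyTorch sketch shows the idea; it is not the Cosmos tokenizer's actual block.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv3d(nn.Module):
    """3D convolution that is causal along the time dimension."""

    def __init__(self, channels, kernel_t=3, kernel_hw=3):
        super().__init__()
        self.pad_t = kernel_t - 1  # pad with past frames only
        self.conv = nn.Conv3d(
            channels, channels,
            kernel_size=(kernel_t, kernel_hw, kernel_hw),
            padding=(0, kernel_hw // 2, kernel_hw // 2),  # no temporal padding here
        )

    def forward(self, x):  # x: (batch, channels, time, height, width)
        # F.pad order for 5D input: (W_left, W_right, H_left, H_right, T_left, T_right)
        x = F.pad(x, (0, 0, 0, 0, self.pad_t, 0))
        return self.conv(x)

frames = torch.randn(1, 8, 5, 32, 32)
print(CausalConv3d(8)(frames).shape)  # torch.Size([1, 8, 5, 32, 32])
```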
During inference, the Cosmos tokenizers significantly reduce the cost of running the model by delivering up to 12x faster reconstruction compared to leading open-weight tokenizers (Figure 3).
Figure 3. Quantitative comparison of reconstruction quality (left) and runtime performance (right) for video tokenizers
The Cosmos tokenizers also produce high-fidelity images and videos while compressing more than other tokenizers, demonstrating an unprecedented quality-compression trade-off.
Figure 4. Continuous tokenizer compression rate compared to reconstruction quality
Figure 5. Discrete tokenizer compression rate compared to reconstruction quality
Although the Cosmos tokenizer regenerates from highly compressed tokens, it is capable of creating high-quality images and videos due to an innovative neural network training technique and architecture.
Figure 6. Reconstructed video frame for continuous video tokenizers
Build Your Own Multimodal Models with NeMo
The expansion of the NVIDIA NeMo platform with at-scale data processing using
NeMo Curator
and high-quality tokenization and visual reconstruction using the Cosmos tokenizer empowers you to build state-of-the-art multimodal, generative AI models.
Join the waitlist
and be notified when NeMo Curator is available. The tokenizer is available now on the
/NVIDIA/cosmos-tokenizer
GitHub repo and
Hugging Face
. | https://developer.nvidia.com/ja-jp/blog/state-of-the-art-multimodal-generative-ai-model-development-with-nvidia-nemo/ | NVIDIA NeMo ã«ããæå
端ã®ãã«ãã¢ãŒãã«çæ AI ã¢ãã«éçº | Reading Time:
2
minutes
çæ AI
ã¯ãããã¹ãããŒã¹ã®ã¢ãã«ãããã«ãã¢ãŒãã«æ©èœãžãšæ¥éã«é²åããŠããŸãããããã®ã¢ãã«ã¯ãç»åã®ãã£ãã·ã§ã³äœæãèŠèŠçãªè³ªååçãªã©ã®ã¿ã¹ã¯ãå®è¡ãããã人éã«è¿ã AI ãžãšã·ããããŠããããšãåæ ããŠããŸãããã®ã³ãã¥ããã£ã¯çŸåšãããã¹ããç»åããåç»ãžãšæ¡å€§ããŠãããããŸããŸãªæ¥çã§æ°ããªå¯èœæ§ãåãéãããŠããŸãã
åç» AI ã¢ãã«ã¯ããããã£ã¯ã¹ãèªåè»ãå°å£²ãªã©ã®æ¥çã«é©åœãèµ·ããããšããŠããŸãã
ãããã£ã¯ã¹
ã§ã¯ã補é æ¥ãå庫管çãªã©ã®åéã«äžå¯æ¬ ãªãè€éã§å€åãç¶ããç°å¢ã«ãããèªåŸçãªããã²ãŒã·ã§ã³ã匷åããŠããŸããèªåè»æ¥çã§ã¯ãåç» AI ãèªåé転ãæšé²ããè»äž¡ã®èªèãå®å
šæ§ãäºç¥ä¿å
šã匷åããå¹çæ§ãé«ããŠããŸãã
ç»åãåç»ã®åºç€ã¢ãã«ãæ§ç¯ããã«ã¯ãéçºè
ã¯å€§éã®åŠç¿ããŒã¿ã®ãã¥ã¬ãŒã·ã§ã³ãšäºååŠçãè¡ããçµæãšããŠåŸãããé«å質ããŒã¿ãé«ãå¿ å®åºŠã§ããŒã¯ã³åããåŠç¿æžã¿ã¢ãã«ãå¹ççã«å€§èŠæš¡ã«åŠç¿ãŸãã¯ã«ã¹ã¿ãã€ãºããŠãæšè«äžã«é«å質ãªç»åãåç»ãçæããå¿
èŠããããŸãã
ãã«ãã¢ãŒãã«çæ AI åãã® NVIDIA NeMo ãçºè¡š
NVIDIA NeMo
ã¯ãçæ AI ã¢ãã«ãéçºãã«ã¹ã¿ãã€ãºããããã€ãããšã³ãããŒãšã³ãã®ãã©ãããã©ãŒã ã§ãã
NVIDIA ã¯ããã«ãã¢ãŒãã« ã¢ãã«éçºåãã®ãšã³ãããŒãšã³ãã®ãã€ãã©ã€ã³ããµããŒããã NeMo ã®æ¡åŒµãçºè¡šããŸãããNeMo ã«ãããé«å質ãªèŠèŠããŒã¿ãç°¡åã«ãã¥ã¬ãŒã·ã§ã³ããé«å¹çãªããŒã¯ãã€ã¶ãŒãšäžŠååŠçæè¡ã§
åŠç¿
ãš
ã«ã¹ã¿ãã€ãº
ãå éããæšè«äžã«é«å質ãªããžã¥ã¢ã«ãåæ§ç¯ããããšãã§ããŸãã
åç»ãšç»åããŒã¿ã®ãã¥ã¬ãŒã·ã§ã³ãå é
é«å質ãªåŠç¿ããŒã¿ã§ã¯ãAI ã¢ãã«ããé«ç²ŸåºŠãªçµæãåŸãããŸããããããéçºè
ã¯ãããŒã¿åŠçãã€ãã©ã€ã³ã®æ§ç¯ã«ãããŠãã¹ã±ãŒãªã³ã°ããããŒã¿ã®ãªãŒã±ã¹ãã¬ãŒã·ã§ã³ãŸã§ãããŸããŸãªèª²é¡ã«çŽé¢ããŠããŸãã
NeMo Curator
ã¯ãããŒã¿ ãã¥ã¬ãŒã·ã§ã³ ããã»ã¹ãåçåããããšã§ããã«ãã¢ãŒãã«çæ AI ã¢ãã«ãããç°¡åãã€è¿
éã«æ§ç¯ããããšãã§ããŸããããã«è©Šãããšãã§ãããããç·ä¿æã³ã¹ã (TCO) ãæå°éã«æããåžå Žæå
¥ãŸã§ã®æéãççž®ããŸãã
ããžã¥ã¢ã«ãæ±ãéã«ã¯ãçµç¹ã¯ãã¿ãã€ãèŠæš¡ã®ããŒã¿åŠçã容æã«å®è¡ã§ããŸããNeMo Curator ã¯ãããŒã¿ ãã¥ã¬ãŒã·ã§ã³ã®å段éã§è€æ°ã® GPU ã«è² è·åæ£ã§ãããªãŒã±ã¹ãã¬ãŒã·ã§ã³ ãã€ãã©ã€ã³ãæäŸããŸãããã®çµæãåçŽãª GPU ããŒã¹ã®å®è£
ãšæ¯èŒããŠãåç»åŠçæéã 7 åã® 1 ã«ççž®ã§ããŸããã¹ã±ãŒã«å¯èœãªãã€ãã©ã€ã³ã¯ã100 PB ãè¶
ããããŒã¿ãå¹ççã«åŠçã§ãã倧èŠæš¡ãªããŒã¿ã»ãããã·ãŒã ã¬ã¹ã«åãæ±ãããšãã§ããŸãã
å³ 1. NVIDIA NeMo Curator ã®åç»åŠçé床
NeMo Curator ã¯ãé«ãã¹ã«ãŒãããã®ãã£ã«ã¿ãªã³ã°ããã£ãã·ã§ã³äœæãåã蟌ã¿ã®å段éã«æé©åããããªãã¡ã¬ã³ã¹ ãã㪠ãã¥ã¬ãŒã·ã§ã³ ã¢ãã«ãæäŸããããŒã¿ã»ããã®å質ãåäžãããããæ£ç¢ºãª AI ã¢ãã«ã®äœæããµããŒãããŸãã
ããšãã°ãNeMo Curator ã¯ãæé©åããããã£ãã·ã§ã³ ã¢ãã«ã䜿çšããæé©åãããŠããªãæšè«ã¢ãã«ã®å®è£
ãšæ¯èŒããŠãæ¡éãã®ã¹ã«ãŒãããã®åäžãå®çŸããŸãã
NVIDIA Cosmos ããŒã¯ãã€ã¶ãŒ
ããŒã¯ãã€ã¶ãŒã¯ãåé·çã§æé»çãªèŠèŠããŒã¿ãã³ã³ãã¯ãã§æå³ã®ããããŒã¯ã³ã«ãããã³ã°ãã倧èŠæš¡ãªçæã¢ãã«ã®å¹ççãªåŠç¿ãå®çŸãã誰ããéãããèšç®ãªãœãŒã¹ã§æšè«ã§ããããã«ããŸãã
ä»æ¥ã®ãªãŒãã³ãªåç»ãç»åã®ããŒã¯ãã€ã¶ãŒã¯ãããŒã¿è¡šçŸãäžååãªããšãå€ããããå£åã®å€ãåæ§ç¯ãæªãã ç»åãäžé£ç¶ãªåç»ã«ã€ãªãããããŒã¯ãã€ã¶ãŒäžã«æ§ç¯ãããçæã¢ãã«ã®èœåã«éçããããããŸããããŒã¯ã³åããã»ã¹ãéå¹çãªããããšã³ã³ãŒãããã³ãŒãã«æéãããããåŠç¿ãæšè«ã®æéãé·ããªããéçºè
ã®çç£æ§ãšãŠãŒã¶ãŒäœéšã®äž¡æ¹ã«æªåœ±é¿ãåãŒããŸãã
NVIDIA Cosmos ããŒã¯ãã€ã¶ãŒã¯ãåªããèŠèŠããŒã¯ã³åãæäŸãããªãŒãã³ãªã¢ãã«ã§ãããŸããŸãªç»åãåç»ã®ã«ããŽãªãŒã§ãé«ãå§çž®çãšæå
端ã®åæ§ç¯å質ãå®çŸããŸãã
é¢æ£çãªæœåšã³ãŒããåããèŠèŠèšèªã¢ãã« (VLM: Vision-language Model)ãé£ç¶ããæœåšçåã蟌ã¿ã«ããæ¡æ£ã¢ãã«ãããŸããŸãªã¢ã¹ãã¯ãæ¯ã解å床ããµããŒãããäžé£ã®ããŒã¯ãã€ã¶ãŒæšæºåã¢ãã«ã䜿çšããŠããããã®ããŒã¯ãã€ã¶ãŒãç°¡åã«äœ¿çšã§ããé«è§£å床ã®ç»åãåç»ãå¹ççã«ç®¡çããããšãã§ããŸããããã«ãããç»åãåç» AI ã¢ãã«ãæ§ç¯ããããã«ãå¹
åºãèŠèŠå
¥åããŒã¿ãããŒã¯ã³åããããŒã«ãæäŸãããŸãã
Cosmos ããŒã¯ãã€ã¶ãŒã®ã¢ãŒããã¯ãã£
Cosmos ããŒã¯ãã€ã¶ãŒã¯ãé«å¹çãã€å¹æçãªåŠç¿åãã«èšèšãããŠãããé«åºŠãªãšã³ã³ãŒã㌠/ ãã³ãŒããŒæ§é ã䜿çšããŠããŸãããã®äžæ žã«ã¯ 3D
Causal Convolution Block
(å æç³ã¿èŸŒã¿ãããã¯) ãæ¡çšããŠããŸããããã¯æ空éæ
å ±ãå
±ååŠçããç¹æ®ãªã¬ã€ã€ãŒã§ãããŒã¿ã®é·æçãªäŸåé¢ä¿ãæãã Causal Temporal Attention (å æçæé泚ææ©æ§) ã䜿çšããŠããŸãã
ãã®å ææ§é ã«ãããããŒã¯ã³åã®å®è¡æã«ã¢ãã«ãéå»ãšçŸåšã®ãã¬ãŒã ã®ã¿ã䜿çšããæªæ¥ã®ãã¬ãŒã ã¯äœ¿çšããŸãããããã¯ãç©ççãªAIããã«ãã¢ãŒãã«LLMãªã©ã®å€ãã®çŸå®äžçã®ã·ã¹ãã ã®å ææ§ã«åãããããã«éèŠã§ãã
å³ 2. NVIDIA Cosmos ããŒã¯ãã€ã¶ãŒã®ã¢ãŒããã¯ãã£
å
¥åã¯ããã¯ã»ã«æ
å ±ãããå¹ççã«è¡šãä¿¡å·åŠçæè¡ã§ãã 3D ãŠã§ãŒãã¬ããã䜿çšããŠããŠã³ãµã³ããªã³ã°ãããŸããããŒã¿åŠçåŸãéãŠã§ãŒãã¬ããå€æã«ãã£ãŠå
ã®å
¥åãåæ§ç¯ãããŸãã
ãã®ã¢ãããŒãã«ãããåŠç¿å¹çãåäžããããŒã¯ãã€ã¶ãŒã®ãšã³ã³ãŒã㌠/ ãã³ãŒããŒã®åŠç¿å¯èœãªã¢ãžã¥ãŒã«ã¯ãåé·ãªãã¯ã»ã«ã®è©³çŽ°ã§ã¯ãªããæå³ã®ããç¹åŸŽã«çŠç¹ãåœãŠãããšãã§ããŸãããã®ãããªæè¡ãšç¬èªã®åŠç¿ã¬ã·ãã®çµã¿åããã«ãããCosmos ããŒã¯ãã€ã¶ãŒã¯ãå¹ççãã€åŒ·åãªããŒã¯ã³åãå®çŸããæå
端ã®ã¢ãŒããã¯ãã£ãšãªã£ãŠããŸãã
æšè«ã®éãCosmos ããŒã¯ãã€ã¶ãŒã¯ãäž»èŠãªãªãŒãã³ãŠã§ã€ãã®ããŒã¯ãã€ã¶ãŒãšæ¯èŒããŠæ倧 12 åé«éãªåæ§ç¯ãå®çŸããã¢ãã«ã®å®è¡ã³ã¹ãã倧å¹
ã«åæžããŸãã (å³ 3)ã
å³ 3. Cosmos ããŒã¯ãã€ã¶ãŒãšäž»èŠãªãªãŒãã³ãŠã§ã€ãã®ããŒã¯ãã€ã¶ãŒãšã®æ¯èŒ
Cosmos ããŒã¯ãã€ã¶ãŒã¯ãä»ã®ããŒã¯ãã€ã¶ãŒãããé«ãå§çž®çãå®çŸããªãããé«ãå¿ å®åºŠã®ç»åãåç»ãçæããåäŸã®ãªãå質ãšå§çž®ã®ãã¬ãŒããªããå®çŸããŠããŸãã
å³ 4. é£ç¶ããŒã¯ãã€ã¶ãŒã®å§çž®çãšåæ§ç¯å質ã®æ¯èŒ
å³ 5. é¢æ£ããŒã¯ãã€ã¶ãŒã®å§çž®çãšåæ§ç¯å質ã®æ¯èŒ
Cosmos ããŒã¯ãã€ã¶ãŒã¯ãé«åºŠã«å§çž®ãããããŒã¯ã³ããåçæãããŸãããé©æ°çãªãã¥ãŒã©ã« ãããã¯ãŒã¯ã®åŠç¿æè¡ãšã¢ãŒããã¯ãã£ã«ãããé«å質ãªç»åãåç»ãäœæããããšãã§ããŸãã
å³ 6. é£ç¶åç»ããŒã¯ãã€ã¶ãŒã§åæ§ç¯ãããåç»ãã¬ãŒã
NeMo ã§ç¬èªã®ãã«ãã¢ãŒãã« ã¢ãã«ãæ§ç¯
NeMo Curator
ã䜿çšãã倧èŠæš¡ãªããŒã¿åŠçãšãCosmos ããŒã¯ãã€ã¶ãŒã䜿çšããé«å質ãªããŒã¯ã³åãããžã¥ã¢ã«åæ§ç¯ãåãããNVIDIA NeMo ãã©ãããã©ãŒã ã®æ¡åŒµã«ãããæå
端ã®ãã«ãã¢ãŒãã«çæ AI ã¢ãã«ãæ§ç¯ããããšãã§ããŸãã
ç»é²
ããŠããã ããšãNeMo Curator ãå©çšå¯èœã«ãªã£ãéã«éç¥ãåãåãããšãã§ããŸããããŒã¯ãã€ã¶ãŒã¯ãçŸåš
/NVIDIA/cosmos-tokenizer
GitHub ãªããžããªããã³
Hugging Face
ã§å©çšããããšãã§ããŸãã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
Large Language Model Fine-Tuning using Parameter Efficient Fine-Tuning (PEFT ã䜿çšãã倧èŠæš¡èšèªã¢ãã«ã®ãã¡ã€ã³ãã¥ãŒãã³ã°)
GTC ã»ãã·ã§ã³:
Large Language Model Fine-Tuning using NVIDIA NeMo (NVIDIA NeMo ã䜿çšãã倧èŠæš¡èšèªã¢ãã«ã®ãã¡ã€ã³ãã¥ãŒãã³ã° â Domino Data Lab æäŸ)
SDK:
NVIDIA NeMo ã«ã¹ã¿ãã€ã¶ãŒ
SDK:
NeMo LLM ãµãŒãã¹
SDK:
NeMo Megatron |
https://developer.nvidia.com/blog/frictionless-collaboration-and-rapid-prototyping-in-hybrid-environments-with-nvidia-ai-workbench/ | Frictionless Collaboration and Rapid Prototyping in Hybrid Environments with NVIDIA AI Workbench | NVIDIA AI Workbench
is a free development environment manager that streamlines data science, AI, and machine learning (ML) projects on systems of choice. The goal is to provide a frictionless way to create, compute, and collaborate on and across PCs, workstations, data centers, and clouds. The basic user experience is straightforward:
Easy setup on single systems:
Click through install in minutes on Windows, Ubuntu, and macOS, with a one-line install on remote systems.
Managed experience for decentralized deployment
: A free, PaaS/SaaS type UX in truly hybrid contexts with no need for a centralized, service-based platform.
Seamless collaboration for experts and beginners:
Friendly Git, container, and application management without limiting customization by power users.
Consistent across users and systems:
Migrate workloads and applications across different systems while maintaining functionality and user experience.
Simplified GPU handling
: Handles system dependencies like
NVIDIA drivers
and the
NVIDIA Container Toolkit
, as well as
GPU-enabled container
runtime configuration.
This post explores highlights of the October release of NVIDIA AI Workbench, which is the most significant since the product launch at GTC 2024 and is a big step closer to the full product vision.
Release highlights
This section will detail the major new capabilities and user-requested updates in the latest release.
Major new capabilities include:
Enhance collaboration through expanded Git support, such as branching, merging, diffs, and finer-grained control for commits and gitignore.
Create complex applications and workflows with multicontainer environments through Docker Compose support.
Simple, fast, and secure rapid prototyping with application sharing with single-user URLs.
User requested updates:
Dark mode for the Desktop App
Improved installation on localized versions of Windows
Expanded Git support
Previously, AI Workbench supported only single, monolithic commits on the main branch. Users had to manage branches and merges manually, and this created various types of confusion, especially around resolving merge conflicts. Now, users can manage branches, merges, and conflicts directly in the Desktop App and the CLI. In addition, they can see and triage individual file diffs for commits. The UI is built to work seamlessly with manual Git operations and will update to reflect relevant changes.
Figure 1. AI Workbench Desktop App tab for Git branching
These features are found in two new tabs on the Desktop App: Changes and Branches.
Changes
: Gives a line-by-line view of the diffs between the working tree and previous commits. Users can now select and commit file changes individually or in bulk based on the visible file diffs and tracked changes (addition, modification, or deletion), and can individually reject a change or add a file to gitignore. The view also updates dynamically to reflect manual Git actions, for example, manually staging a file and then following up with a change to that file in the working tree.
Branches
: Provides branch management, including creation, switching, and merging, as well as visibility for remote branches on a Git server. Merging branches with a conflict initiates a conflict resolution flow that users can do within the UI, or move to a terminal or file editor of their choice.
Learn more about how these advanced Git features work
.
Multicontainer support with Docker Compose stacks
AI Workbench now supports
Docker Compose
. Users can work with multicontainer applications and workflows with the same ease of configuration, reproducibility, and portability that AI Workbench provides for single-container environments.
Figure 2. The Docker Compose feature in the AI Workbench Environment Management tab
The basic idea is to add a Docker Compose-based âstackâ that is managed by AI Workbench and connects to the main development container. To add the stack, a user just needs to add the appropriate Docker Compose file to the project repository and do some configuration in the Desktop App or CLI.
We're using Docker Compose for a few reasons. First, we didn't want to develop in a vacuum, and that's why we've been
collaborating with the Docker team
on features like a
managed Docker Desktop install
.
Second, we want users to be able to work with the multicontainer applications outside of AI Workbench, and Docker Compose is the easiest way to do that. The vision for this feature is to enable streamlined, powerful development and compute for multicontainer applications within AI Workbench that can then be stood up outside of AI Workbench with a simple
docker-compose
up command.
This multicontainer feature is new and will continue to evolve. We would love to get feedback and help you sort out any issues through the
NVIDIA AI Workbench Developer Forum
.
Learn more about how Docker Compose works
.
Web application sharing through secure URLs
AI Workbench enables users to easily spin up managed web applications that are built into a project. The process is fairly simple: create or clone a project with the web app installed, start the project, then start the app, and it appears in your browser.
This approach is great for a developer UX, but it wasn't good for rapid prototyping UX and collaboration. If you wanted another user to access and test your application, you either asked them to install AI Workbench, clone the project and run it, or you had to fully extract the application to run it and make it available to the user. The first is a speed bump for the user, and the second is a speed bump for the developer.
We eliminated these speed bumps with a simple feature that lets you configure a remote AI Workbench for external access and create single-use, secure URLs for running web applications in a project on that remote. You just need to make sure the user has access to port 10000 on the remote, and the application will be directly accessible. All they have to do is click the link and go to the app.
Figure 3. Developers can now give end users direct access to applications running in an AI Workbench Project on a remote through secure, one-time-use URLs
Enabling this kind of access is useful for rapid prototyping and collaboration. That's why various SaaS offerings provide this as a managed service. The difference with AI Workbench is that you can provide this access on your own resources and in your own network, for example on data center resources or a shared server. It doesn't have to be in the cloud.
AI Workbench keeps things secure by restricting this access to a single browser and to a single application that's running in the project. This means a user can't share the URL with someone else, and they are constrained to the web app that you shared with them.
Learn more about how application sharing works.
Dark mode and localized Windows installation
Many users requested a dark mode option because it's easier on the eyes. It's now available and can be selected through the Settings window, which is accessible directly from within the Desktop App.
Learn more about how dark mode works
.
Windows users are by far our main demographic for local installs, but not all of them use the English language pack, and this blocked the AI Workbench installation due to how we handled some WSL commands. In particular, we've had users working in Cyrillic or Chinese who were blocked on Windows. We adjusted how we handle non-English language packs, and it should work well now. If you were previously blocked by this, give it a try now. If it still doesn't work for you, let us know in the
NVIDIA AI Workbench Developer Forum
so we can continue to improve this capability.
New AI Workbench projects
This release introduces new example projects designed to jumpstart your AI development journey, detailed below. An
AI Workbench project
is a structured Git repository that defines a containerized development environment in AI Workbench. AI Workbench projects provide:
Effortless setup and GPU configuration:
Simply clone a project from GitHub or GitLab, and AI Workbench handles the rest with automatic GPU configuration.
Development integrations:
Seamless support for popular development environments such as Jupyter and VS Code, as well as support for user-configured web applications.
Containerized and customizable environments:
Projects are containerized, isolated, and easily modifiable. Adapt example projects to suit your specific needs while ensuring consistency and reproducibility.
Explore NVIDIA AI Workbench example projects
.
Multimodal virtual assistant
example project
This project enables users to build their own virtual assistant using a multimodal
retrieval-augmented generation (RAG)
pipeline with fallback to web search. Users can interact with two RAG-based applications to learn more about AI Workbench, converse with the user documentation, troubleshoot their own installation, or even focus the RAG pipeline to their own, custom product.
Control-Panel:
Customizable Gradio app for working with product documentation that allows uploading webpages, PDFs, images, and videos to a persistent vector store and querying them. For inference, users can select cloud endpoints, such as those on the NVIDIA API Catalog, or use self-hosted endpoints to run their own inference.
Public-Chat:
With product documents loaded, the Gradio app is a simplified, âread-onlyâ chatbot that you can share with end users through the new AI Workbench App Sharing feature.
Figure 4. Using the Public-Chat web app, a read-only, pared-down chat application that is meant to be more consumable and shareable with end users
Competition-Kernel example project
This project provides an easy, local experience when working on Kaggle competitions. You can easily leverage your local machine or a cloud instance to work on competition datasets, write code, build out models, and submit results, all through AI Workbench. The Competition Kernel project offers:
A managed experience for developing and testing on your own GPUs, with setup and customization in minutes.
Easy version control and tracking of code through GitHub or GitLab and very easy collaboration.
The power of using a local, dedicated IDE: robust debugging, intelligent code completion, extensive customization options.
Easy plugin to existing data sources (external or your own).
No Internet? No problem. Develop while offline.
Get started
This release of NVIDIA AI Workbench marks a significant step forward in providing a frictionless experience for AI development across GPU systems. New features from this release, including expanded Git support, support for multicontainer environments, and secure web app sharing, streamline developing and collaborating on AI workloads. Explore these features in the three new example projects available with this release or create your own projects.
To get started with AI Workbench,
install the application from the webpage
. For more information about installing and updating, see the
NVIDIA AI Workbench documentation
.
Explore a range of
NVIDIA AI Workbench example projects
, from data science to RAG.
Visit the
NVIDIA AI Workbench Developer Forum
to report issues and learn more about how other developers are using AI Workbench. | https://developer.nvidia.com/ja-jp/blog/frictionless-collaboration-and-rapid-prototyping-in-hybrid-environments-with-nvidia-ai-workbench/ | NVIDIA AI Workbench ã«ãããã€ããªããç°å¢ã«ãããã¹ã ãŒãºãªã³ã©ãã¬ãŒã·ã§ã³ãšè¿
éãªãããã¿ã€ãã³ã° | Reading Time:
3
minutes
NVIDIA AI Workbench
ã¯ãéžæããã·ã¹ãã ã§ããŒã¿ ãµã€ãšã³ã¹ãAIãæ©æ¢°åŠç¿ (ML) ãããžã§ã¯ããåçåããç¡æã®éçºç°å¢ãããŒãžã£ãŒã§ãã PCãã¯ãŒã¯ã¹ããŒã·ã§ã³ãããŒã¿ ã»ã³ã¿ãŒãã¯ã©ãŠãäžã§ããããã¯ãããããŸããããã¹ã ãŒãºãªäœæãèšç®ãã³ã©ãã¬ãŒã·ã§ã³ãè¡ãããšãç®çãšããŠããŸããåºæ¬çãªãŠãŒã¶ãŒäœéšã¯ã·ã³ãã«ã§ã:
åäžã·ã¹ãã ã§ç°¡åãªã»ããã¢ãã:
WindowsãUbuntuãmacOS ã§ã¯ã¯ãªãã¯æäœã§ã€ã³ã¹ããŒã«ãå®äºãããªã¢ãŒã ã·ã¹ãã ã§ã¯ 1 è¡ã®ã³ãã³ãã§ã€ã³ã¹ããŒã«ããããšãã§ããŸãã
åæ£åãããã€ã®ããã®ç®¡çåãããäœéš
: éäžåã®ãµãŒãã¹ããŒã¹ã®ãã©ãããã©ãŒã ãå¿
èŠãšããªããæ¬åœã®æå³ã§ãã€ããªãããªã³ã³ããã¹ãã«ãããç¡æã® PaaS/SaaS åã®ãŠãŒã¶ãŒäœéšã
ãšãã¹ããŒããšåå¿è
åãã®ã·ãŒã ã¬ã¹ãªã³ã©ãã¬ãŒã·ã§ã³:
ãã¯ãŒ ãŠãŒã¶ãŒã«ããã«ã¹ã¿ãã€ãºãå¶éããããšã®ãªãã䜿ãããã Gitãã³ã³ãããŒãã¢ããªã±ãŒã·ã§ã³ç®¡çã
ãŠãŒã¶ãŒãšã·ã¹ãã éã®äžè²«æ§:
æ©èœãšãŠãŒã¶ãŒäœéšãç¶æããªãããç°ãªãã·ã¹ãã éã§ã¯ãŒã¯ããŒããšã¢ããªã±ãŒã·ã§ã³ã移è¡ã
GPU åŠçã®ç°¡çŽ å
:
NVIDIA ãã©ã€ããŒ
ã
NVIDIA ã³ã³ãã㌠ããŒã«ããã
ãªã©ã®ã·ã¹ãã äŸåé¢ä¿ã
ããã³ GPU 察å¿ã®ã³ã³ãããŒ
ã©ã³ã¿ã€ã æ§æãåŠçã
ãã®èšäºã§ã¯ãGTC 2024 ã§ã®è£œåçºè¡šä»¥æ¥ãæãéèŠãª NVIDIA AI Workbench ã® 10 æã®ãªãªãŒã¹ã«ããããã€ã©ã€ããã玹ä»ããŸãã補åããžã§ã³å®çŸã«åãã倧ããªäžæ©ã§ãã
ãªãªãŒã¹ ãã€ã©ã€ã
ãã®ã»ã¯ã·ã§ã³ã§ã¯ãææ°ãªãªãŒã¹ã§ã®äž»èŠãªæ°æ©èœãšãŠãŒã¶ãŒããèŠæã®ãã£ãæŽæ°ã«ã€ããŠã詳ãã説æããŸãã
äž»ãªæ°æ©èœã«ã¯ä»¥äžãå«ãŸããŸãã
ãã©ã³ããããŒãžãå·®åãã³ããããš gitignore ã®çŽ°ããå¶åŸ¡ãªã©ãGit ã®ãµããŒããæ¡å€§ããã³ã©ãã¬ãŒã·ã§ã³ã匷åããŸãã
Docker Compose ã®ãµããŒããéããŠããã«ãã³ã³ãããŒç°å¢ã§è€éãªã¢ããªã±ãŒã·ã§ã³ãšã¯ãŒã¯ãããŒãäœæããŸãã
ã·ã³ã°ã«ãŠãŒã¶ãŒ URL ã§ã¢ããªã±ãŒã·ã§ã³ãå
±æããããšã§ãã·ã³ãã«ãã€è¿
éãå®å
šãªãããã¿ã€ãã³ã°ãå®çŸããŸãã
ãŠãŒã¶ãŒã®èŠæã«ããã¢ããããŒã:
ãã¹ã¯ããã ã¢ããªã®ããŒã¯ã¢ãŒã
ããŒã«ã©ã€ãºç Windows ã®ã€ã³ã¹ããŒã«æ¹å
Git ãµããŒãã®æ¡åŒµ
ãããŸã§ AI Workbench ã¯ãã¡ã€ã³ ãã©ã³ãã§ã®åäžã®ã¢ããªã·ãã¯ãªã³ãããã®ã¿ããµããŒãããŠããŸããã ãŠãŒã¶ãŒã¯ãã©ã³ããšããŒãžãæåã§ç®¡çããå¿
èŠããããç¹ã«ããŒãžã®ç«¶åã®è§£æ±ºã«é¢ããŠãããŸããŸãªçš®é¡ã®æ··ä¹±ãçããŠããŸããã çŸåšã¯ããã©ã³ããããŒãžã競åãããã¹ã¯ããã ã¢ããªãš CLI ã§çŽæ¥ç®¡çããããšãã§ããŸãã å ããŠãã³ãããã®åã
ã®ãã¡ã€ã«å·®åã確èªããåªå
é äœãä»ããããšãã§ããŸãã ãã® UI ã¯ãæåã® Git æäœãšã·ãŒã ã¬ã¹ã«åäœããããã«æ§ç¯ãããŠãããé¢é£ããå€æŽãåæ ããŠæŽæ°ãããŸãã
å³ 1. Git ãã©ã³ãçšã® AI Workbench Desktop ã¢ã㪠ã¿ã
ãããã®æ©èœã¯ããã¹ã¯ããã ã¢ããªã® 2 ã€ã®æ°ããã¿ã: [Changes (å€æŽ)] ãš [Branches (ãã©ã³ã)] ã«è¡šç€ºãããŸãã
å€æŽ
: äœæ¥ããªãŒãšä»¥åã®ã³ãããéã®å·®åã 1 è¡ãã€è¡šç€ºããŸãã ãŠãŒã¶ãŒã¯ã衚瀺ãããŠãããã¡ã€ã«å·®åã远跡ãããå€æŽ (è¿œå ãä¿®æ£ãåé€) ã«åºã¥ããŠããã¡ã€ã«å€æŽãåå¥ãŸãã¯äžæ¬ã§éžæããã³ãããããããšãã§ããããã«ãªããŸããããŸããgit-ignore ã«ãã¡ã€ã«ãåå¥ã«æåŠããŸãã¯è¿œå ããããšãã§ããŸãã ãã®ãã¥ãŒã¯ãŸããæåã® Git æäœãåæ ããããã«åçã«æŽæ°ãããŸããäŸãã°ããã¡ã€ã«ãæåã§ã¹ããŒãžã³ã°ããäœæ¥ããªãŒå
ã®ãã¡ã€ã«ã«å€æŽãå ããŸãã
ãã©ã³ã
: Git ãµãŒããŒäžã®ãªã¢ãŒã ãã©ã³ããå¯èŠåããã ãã§ãªããäœæãåãæ¿ããããŒãžãªã©ã®ãã©ã³ã管çãæäŸããŸãã競åã®ãããã©ã³ããããŒãžãããšã競å解決ãããŒãéå§ãããŸãããã®ãããŒã¯ããŠãŒã¶ãŒã UI å
ã§å®è¡ããããšããéžæãã端æ«ããã¡ã€ã« ãšãã£ã¿ãŒã«ç§»åããããšãã§ããŸãã
ãããã®é«åºŠãª Git æ©èœã®ä»çµã¿ã®è©³çŽ°ãã芧ãã ãã
ã
Docker Compose ã¹ã¿ãã¯ã«ãããã«ãã³ã³ãããŒã®ãµããŒã
AI Workbench ãã
Docker Compose
ããµããŒãããããã«ãªããŸããã ãŠãŒã¶ãŒã¯ãAI Workbench ãã·ã³ã°ã«ã³ã³ãããŒç°å¢åãã«æäŸããæ§æãåçŸæ§ã移æ€æ§ãšåæ§ã®å®¹æãã§ããã«ãã³ã³ãã㌠ã¢ããªã±ãŒã·ã§ã³ãšã¯ãŒã¯ãããŒãæäœããããšãã§ããŸãã
å³ 2. AI Workbench ç°å¢ç®¡çã¿ãã® Docker Compose æ©èœ
åºæ¬çãªèãæ¹ã¯ãAI Workbench ã«ãã£ãŠç®¡çãããã¡ã€ã³ã®éçºã³ã³ãããŒã«æ¥ç¶ãã Docker Compose ããŒã¹ã®ãã¹ã¿ãã¯ããè¿œå ããããšã§ãã ã¹ã¿ãã¯ãè¿œå ããã«ã¯ããŠãŒã¶ãŒã¯ é©å㪠Docker Compose ãã¡ã€ã«ããããžã§ã¯ã ãªããžããªã«è¿œå ãããã¹ã¯ããã ã¢ããªãŸã㯠CLI ã§ããã€ãã®èšå®ãè¡ãã ãã§ãã
NVIDIA ã§ã¯ãããã€ãã®çç±ããã£ãŠ Docker Compose ã䜿çšããŠããŸãã 1 ã€ç®ã¯ãäœããªãæããéçºãè¡ãããšãæãã§ããªãã£ãããã§ãããã®ããã
管çããã Docker ãã¹ã¯ããã ã€ã³ã¹ããŒã«
ãªã©ã®æ©èœã«ã€ããŠ
Docker ããŒã ãšåå
ããŠããŸããã
2 ã€ç®ã¯ãAI Workbench ã®ä»¥å€ã§ããŠãŒã¶ãŒããã«ãã³ã³ãã㌠ã¢ããªã±ãŒã·ã§ã³ãæäœã§ããããã«ããã«ã¯ãDocker Compose ãæãç°¡åãªæ¹æ³ã ããã§ãããã®æ©èœã®ããžã§ã³ã¯ãAI Workbench å
ã®ãã«ãã³ã³ãã㌠ã¢ããªã±ãŒã·ã§ã³ãåçåããå¹æçãªéçºãšæŒç®åŠçãå¯èœã«ããã·ã³ãã«ãª
docker-compose
up ã³ãã³ã㧠AI Workbench å€ã§èµ·åã§ããããã«ããããšã§ãã
ãã®ãã«ãã³ã³ãããŒæ©èœã¯æ°ããæ©èœã§ãããä»åŸãé²åãç¶ããŸãã
NVIDIA AI Workbench éçºè
ãã©ãŒã©ã
ãéããŠãæ¯éãã£ãŒãããã¯ããå¯ããã ãããåé¡è§£æ±ºããæäŒãããããŸãã
Docker Compose ã®ä»çµã¿ã®è©³çŽ°ãã芧ãã ãã
ã
ã»ãã¥ã¢ãª URL ã«ãããŠã§ã ã¢ããªã±ãŒã·ã§ã³å
±æ
AI Workbench ã«ããããŠãŒã¶ãŒã¯ãããžã§ã¯ãã«çµã¿èŸŒãŸãã管çããããŠã§ã ã¢ããªã±ãŒã·ã§ã³ãç°¡åã«èµ·åããããšãã§ããŸãã ãã®ããã»ã¹ã¯éåžžã«ç°¡åã§ãWeb ã¢ããªãã€ã³ã¹ããŒã«ããããããžã§ã¯ããäœæãŸãã¯è€è£œãããããžã§ã¯ããéå§ããŠããã¢ããªãéå§ãããšããã©ãŠã¶ãŒã«è¡šç€ºãããŸãã
ãã®ã¢ãããŒãã¯éçºè
ã® UX ã«ã¯æé©ã§ãããè¿
éãªãããã¿ã€ãã³ã°ã® UX ãã³ã©ãã¬ãŒã·ã§ã³ã«ã¯é©ããŠããŸããã§ããã ä»ã®ãŠãŒã¶ãŒã«ã¢ããªã±ãŒã·ã§ã³ãžã®ã¢ã¯ã»ã¹ãšãã¹ããè¡ã£ãŠãããå ŽåãAI Workbench ã®ã€ã³ã¹ããŒã«ããããžã§ã¯ãã®è€è£œãå®è¡ãäŸé Œããããã¢ããªã±ãŒã·ã§ã³ãå®å
šã«æœåºããŠå®è¡ãããŠãŒã¶ãŒãå©çšã§ããããã«ããå¿
èŠããããŸããã 1 ã€ç®ã¯ãŠãŒã¶ãŒã®èª²é¡ã§ããã2 ã€ç®ã¯éçºè
ã®èª²é¡ã§ãã
NVIDIA ã§ã¯ããªã¢ãŒãã® AI Workbench ãèšå®ããŠå€éšããã®ã¢ã¯ã»ã¹ãå¯èœã«ãããã®ãªã¢ãŒãäžã®ãããžã§ã¯ãã«ãããŠããŠã§ã ã¢ããªã±ãŒã·ã§ã³ãå®è¡ããããã®äžåéãã®å®å
šãª URL ãäœæããããšãã§ããã·ã³ãã«ãªæ©èœã§ãããã®èª²é¡ãå
æããŸããã ãŠãŒã¶ãŒããªã¢ãŒãã®ããŒã 10000 ã«ã¢ã¯ã»ã¹ã§ããããšã確èªããã ãã§ãã¢ããªã±ãŒã·ã§ã³ã«çŽæ¥ã¢ã¯ã»ã¹ã§ããããã«ãªããŸãã ãªã³ã¯ãã¯ãªãã¯ããŠã¢ããªã«ç§»åããã ãã§ãã
å³ 3. éçºè
ã¯ãäžåéãã®å®å
šãª URL ãéããŠããšã³ããŠãŒã¶ãŒã AI Workbench ãããžã§ã¯ãã§å®è¡ããŠããã¢ããªã±ãŒã·ã§ã³ã«ããªã¢ãŒãã§çŽæ¥ã¢ã¯ã»ã¹ãããããšãã§ããããã«ãªããŸãã
ãã®ãããªã¢ã¯ã»ã¹ãæå¹ã«ããããšã¯ãè¿
éãªãããã¿ã€ãã³ã°ãšã³ã©ãã¬ãŒã·ã§ã³ã«åœ¹ç«ã¡ãŸãã ã ãããããããŸããŸãª SaaS ããããæäŸãããããŒãžã ãµãŒãã¹ãšããŠæäŸããŠããã®ã§ããAI Workbench ãšã®éãã¯ãããŒã¿ ã»ã³ã¿ãŒã®ãªãœãŒã¹ãå
±æãµãŒããŒãªã©ãç¬èªã®ãªãœãŒã¹ãç¬èªã®ãããã¯ãŒã¯ã§ããã®ã¢ã¯ã»ã¹ãæäŸã§ããããšã§ãã ã¯ã©ãŠãã§ããå¿
èŠã¯ãããŸããã
AI Workbench ã¯ãåäžã®ãã©ãŠã¶ãŒãšãããžã§ã¯ãã§å®è¡ãããŠããåäžã®ã¢ããªã±ãŒã·ã§ã³ã«å¯ŸããŠããã®ã¢ã¯ã»ã¹ãå¶éããããšã§ãå®å
šæ§ã確ä¿ããŸãã ã€ãŸãããŠãŒã¶ãŒã¯ URL ãä»ã®ãŠãŒã¶ãŒãšå
±æã§ãããå
±æãããŠã§ã ã¢ããªã«å¶éãããŸãã
ã¢ããªã±ãŒã·ã§ã³å
±æã®ä»çµã¿ã®è©³çŽ°ãã芧ãã ããã
Dark mode and localized Windows installation

Many users asked for a dark mode option that is easier on the eyes. This option is now available and can be selected from a settings window accessible directly from the Desktop App.

Learn more about how dark mode works.
Windows users are the main audience for local installs, but not all Windows users run an English language pack, and the way we handled WSL commands was blocking their AI Workbench installations. In particular, users working in Cyrillic or Chinese on Windows were blocked. We adjusted how non-English language packs are handled, so installation should now just work. If you were blocked before, please give it a try, and if it still doesn't work, let us know on the NVIDIA AI Workbench Developer Forum so NVIDIA can keep improving this feature.
New AI Workbench projects

This release introduces new example projects designed to help you get started with AI development quickly; details are below.

An AI Workbench project is a structured Git repository that defines a containerized development environment in AI Workbench. AI Workbench projects provide:
Easy setup and GPU configuration: Just clone the project from GitHub or GitLab, and AI Workbench handles the rest, including the GPU configuration.

Development integrations: Seamless support for popular development environments such as Jupyter and VS Code, as well as for user-configured web applications.

Containerized, customizable environments: Projects are containerized, isolated, and easy to modify. You can adapt example projects to your specific needs while keeping consistency and reproducibility.
Check out the NVIDIA AI Workbench example projects.
Multimodal virtual assistant example project

This project lets users build their own virtual assistant using a multimodal retrieval-augmented generation (RAG) pipeline with fallback to web search. Users can work with two RAG-based applications to learn about AI Workbench, consult the user documentation, troubleshoot their own installation, or refocus the RAG pipeline on their own custom product.
Control-Panel: A customizable Gradio app for working with product documentation that lets you upload webpages, PDFs, images, and videos to a persistent vector store and query them. For inference, you can choose cloud endpoints such as those in the NVIDIA API Catalog, or use self-hosted endpoints to run your own inference.
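As a rough sketch of what switching between those endpoints involves (this is not the project's actual code), a cloud endpoint from the NVIDIA API Catalog can be called through its OpenAI-compatible interface, while a self-hosted endpoint is typically just a different base URL. The model name below is only an example, and NVIDIA_API_KEY is assumed to hold your key.

```python
# Illustrative only: calling a cloud inference endpoint from an app like Control-Panel.
# Assumes the OpenAI-compatible NVIDIA API Catalog endpoint and a key in NVIDIA_API_KEY.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",   # swap for a self-hosted endpoint URL
    api_key=os.environ["NVIDIA_API_KEY"],
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",               # example model from the API Catalog
    messages=[{"role": "user", "content": "Summarize the product documentation I uploaded."}],
    temperature=0.2,
)
print(response.choices[0].message.content)
```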
Public-Chat: With the product documentation loaded, the Gradio app becomes a simplified, read-only chatbot that you can share with end users through the new AI Workbench app sharing feature.
Figure 4. The Public-Chat web app is a simple, read-only chat application that makes end-user consumption and sharing easy
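To make the read-only chatbot idea concrete, a minimal Gradio chat UI can be as small as the sketch below. Here answer_fn is a hypothetical stand-in for a real RAG pipeline call, not the example project's actual code.

```python
# Minimal read-only chat UI in the spirit of Public-Chat.
# answer_fn is a hypothetical placeholder for a real RAG pipeline call.
import gradio as gr

def answer_fn(message, history):
    # Placeholder: call the RAG pipeline or inference endpoint here.
    return f"(demo) You asked: {message}"

demo = gr.ChatInterface(fn=answer_fn, title="Product Docs Assistant (read-only)")

if __name__ == "__main__":
    # Keep it local; AI Workbench app sharing handles external access.
    demo.launch(server_name="0.0.0.0", server_port=8080, share=False)
```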
Competition-Kernel example project

This project offers a simple, local experience when working on Kaggle competitions. Through AI Workbench, you can use your local machine or a cloud instance to easily work with competition datasets, write code, build models, and submit results; a minimal sketch of that loop follows the list below. The Competition Kernel project provides:
A managed experience for developing and testing on your own GPUs, with setup and customization in minutes.
Easy version control and tracking of code through GitHub or GitLab, making collaboration straightforward.
The power of using a dedicated IDE locally: robust debugging, intelligent code completion, and extensive customization options.
Easy plug-ins to existing data sources, external or your own.
No internet? No problem. You can develop offline.
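The sketch below shows the kind of download-and-submit loop this workflow wraps, assuming the Kaggle CLI is installed and authenticated inside the project environment. The competition name and submission file are placeholders, and this is not the example project's own code.

```python
# Illustrative competition loop using the Kaggle CLI via subprocess.
# Assumes `kaggle` is installed and API credentials are configured.
# "titanic" and submission.csv are placeholders.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Download the competition dataset into ./data
run(["kaggle", "competitions", "download", "-c", "titanic", "-p", "data"])

# ... explore the data, build a model, and write predictions to submission.csv ...

# Submit the predictions
run(["kaggle", "competitions", "submit", "-c", "titanic",
     "-f", "submission.csv", "-m", "baseline submission"])
```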
Get started today

This release of NVIDIA AI Workbench marks a significant step toward providing a frictionless experience for AI development across GPU systems. It includes new features such as expanded Git support, support for multi-container environments, and secure web app sharing to streamline development and collaboration on AI workloads. Try these features with the three new example projects available in this release, or create your own projects.
To get started with AI Workbench, install the application from the webpage. For more information about installing and updating, see the NVIDIA AI Workbench documentation.

A range of NVIDIA AI Workbench example projects is available, from data science to RAG.
Visit the NVIDIA AI Workbench Developer Forum to report issues and see how other developers are using AI Workbench.
Related resources

DLI course: Building Conversational AI Applications
GTC session: Breaking Barriers: How NVIDIA AI Workbench Makes AI Accessible to All
Webinar: Virtual Desktop in the Era of AI
Webinar: Jumpstart AI Development With Virtual Workstations