en_url | en_title | en_content | jp_url | jp_title | jp_content |
---|---|---|---|---|---|
https://developer.nvidia.com/blog/three-building-blocks-for-creating-ai-virtual-assistants-for-customer-service-with-an-nvidia-nim-agent-blueprint/ | Three Building Blocks for Creating AI Virtual Assistants for Customer Service with an NVIDIA AI Blueprint | In today's fast-paced business environment, providing exceptional customer service is no longer just a nice-to-have; it's a necessity. Whether addressing technical issues, resolving billing questions, or providing service updates, customers expect quick, accurate, and personalized responses at their convenience. However, achieving this level of service comes with significant challenges.
Legacy approaches, such as static scripts or manual processes, often fall short when it comes to delivering personalized and real-time support. Additionally, many customer service operations rely on sensitive and fragmented data, which is subject to strict data governance and privacy regulations. With the rise of generative AI, companies aim to revolutionize customer service by enhancing operational efficiency, cutting costs, and maximizing ROI.
Integrating AI into existing systems presents challenges related to transparency, accuracy, and security, which can impede adoption and disrupt workflows. To overcome these hurdles, companies are leveraging generative AI-powered virtual assistants to manage a wide range of tasks, ultimately improving response times and freeing up resources.
This post outlines how developers can use the
NVIDIA AI Blueprint for AI virtual assistants
to scale operations with generative AI. By leveraging this information, including sample code, businesses can meet the growing demands for exceptional customer service while ensuring data integrity and governance. Whether improving existing systems or creating new ones, this blueprint empowers teams to meet customer needs with efficient and meaningful interactions.
Smarter AI virtual assistants with an AI query engine using retrieval-augmented generation
When building an AI virtual assistant, it's important to align with the unique use case requirements, institutional knowledge, and needs of the organization. Traditional bots, however, often rely on rigid frameworks and outdated methods that struggle to meet the evolving demands of today's customer service landscape.
Across every industry, AI-based assistants can be transformational. For example, telecommunications companies and the majority of retail and service providers can use AI virtual assistants to enhance customer experience by offering support 24 hours a day, 7 days a week, handling a wide range of customer queries in multiple languages, and providing dynamic, personalized interactions that streamline troubleshooting and account management. This helps reduce wait times and ensures consistent service across diverse customer needs.
Another example is within the healthcare insurance payor industry, where ensuring a positive member experience is critical. Virtual assistants enhance this experience by providing personalized support to members, addressing their claims, coverage inquiries, benefits, and payment issues, all while ensuring compliance with healthcare regulations. This also helps reduce the administrative burden on healthcare workers.
With the NVIDIA AI platform, organizations can create an AI query engine that uses
retrieval-augmented generation (RAG)
to connect AI applications to enterprise data. The AI virtual assistant blueprint enables developers to quickly get started building solutions that provide enhanced customer experiences. It is built using the following
NVIDIA NIM
microservices:
NVIDIA NIM for LLM:
Brings the power of state-of-the-art large language models (LLMs) to applications, providing unmatched natural language processing with remarkable efficiency.
Llama 3.1 70B Instruct NIM
:
Powers complex conversations with superior contextual understanding, reasoning, and text generation.
NVIDIA NeMo
Retriever NIM:
This collection provides easy access to state-of-the-art models that serve as foundational building blocks for RAG pipelines. These pipelines, when integrated into virtual assistant solutions, enable seamless access to enterprise data, unlocking institutional knowledge via fast, accurate, and scalable answers.
NeMo
Retriever Embedding NIM
:
Boosts text question-answering retrieval performance, providing high-quality embeddings for the downstream virtual assistant.
NeMo
Retriever Reranking NIM
:
Enhances the retrieval performance further with a fine-tuned reranker, finding the most relevant passages to provide as context when querying an LLM.
The blueprint is designed to integrate seamlessly with existing customer service applications without breaking information security mandates. Thanks to the portability of NVIDIA NIM, organizations can integrate data wherever it resides. By bringing generative AI to the data, this architecture enables AI virtual assistants to provide more personalized experiences tailored to each customer by leveraging their unique profiles, user interaction histories, and other relevant data.
A blueprint is a starting point that can be customized for an enterpriseâs unique use case. For example, integrate other NIM microservices, such as the
Nemotron 4 Hindi 4B Instruct
, to enable an AI virtual assistant to communicate in the local language. Other microservices can enable additional capabilities such as synthetic data generation and model fine-tuning to better align with your specific use case requirements. Give the AI virtual assistant a humanlike interface when connected to the digital human AI Blueprint.
With the implementation of a RAG backend over proprietary data (both company data and user profiles with their specific details), the AI virtual assistant can engage in highly contextual conversations, addressing the specifics of each customer's needs in real time. Additionally, the solution operates securely within your existing governance frameworks, ensuring compliance with privacy and security protocols, especially when working with sensitive data.
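To make the flow concrete, the following sketch shows how such a RAG-backed query might be wired against the OpenAI-compatible APIs that NIM microservices expose. It is a minimal illustration, not the blueprint's code: the ports, the embedding and LLM model names, and the vector_store object are placeholder assumptions to adapt to your own deployment.

```python
# Minimal RAG query sketch (not the blueprint's actual code). Endpoints, ports,
# and model names are assumptions; adjust them to your own NIM deployment.
from openai import OpenAI

embedder = OpenAI(base_url="http://localhost:8001/v1", api_key="none")  # NeMo Retriever embedding NIM (assumed port)
llm = OpenAI(base_url="http://localhost:8000/v1", api_key="none")       # Llama 3.1 70B Instruct NIM (assumed port)

def answer(question: str, vector_store) -> str:
    # 1. Embed the query (embedding NIMs expose an OpenAI-compatible /v1/embeddings API).
    emb = embedder.embeddings.create(
        model="nvidia/nv-embedqa-e5-v5",  # assumed embedding model name
        input=[question],
    ).data[0].embedding

    # 2. Retrieve candidate passages from your vector store (interface is illustrative).
    passages = vector_store.search(emb, top_k=4)

    # 3. Ground the LLM response in the retrieved context.
    context = "\n\n".join(passages)
    completion = llm.chat.completions.create(
        model="meta/llama-3.1-70b-instruct",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return completion.choices[0].message.content
```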
Three building blocks for creating your own AI virtual assistant
As a developer, you can build your own AI virtual assistant that retrieves the most relevant and up-to-date information, in real time, with ever-improving humanlike responses. Figure 1 shows the AI virtual assistant architecture diagram which includes three functional components.
Figure 1. The NVIDIA AI Blueprint for AI virtual assistants
1. Data ingestion and retrieval pipeline
Pipeline administrators use the ingestion pipeline to load structured and unstructured data into the databases. Examples of structured data include customer profiles, order history, and order status. Unstructured data includes product manuals, the product catalog, and supporting material such as FAQ documents.
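A minimal ingestion sketch along these lines is shown below; the chunking strategy, embedding model name, and vector_store interface are illustrative assumptions rather than the blueprint's actual implementation.

```python
# Illustrative ingestion sketch: chunk unstructured documents, embed them with an
# embedding NIM, and store the vectors for later retrieval.
from openai import OpenAI

embedder = OpenAI(base_url="http://localhost:8001/v1", api_key="none")  # assumed endpoint

def ingest(documents: list[str], vector_store, chunk_size: int = 800) -> None:
    for doc in documents:
        # Naive fixed-size chunking; production pipelines typically split on document structure.
        chunks = [doc[i:i + chunk_size] for i in range(0, len(doc), chunk_size)]
        vectors = embedder.embeddings.create(
            model="nvidia/nv-embedqa-e5-v5",  # assumed embedding model name
            input=chunks,
        ).data
        for chunk, vec in zip(chunks, vectors):
            vector_store.add(embedding=vec.embedding, text=chunk)  # illustrative store interface
```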
2. AI agent
The AI virtual assistant is the second functional component. Users interact with the virtual assistant through a user interface. An AI agent, implemented in the LangGraph agentic LLM programming framework, plans how to handle complex customer queries and solves recursively. The LangGraph agent uses the tool calling feature of the
Llama 3.1 70B Instruct NIM
to retrieve information from both the unstructured and structured data sources, then generates an accurate response.
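The blueprint implements this as a LangGraph agent; the stripped-down sketch below shows only the underlying tool-calling pattern against the Llama 3.1 70B Instruct NIM, with a hypothetical structured-data lookup tool.

```python
# Tool-calling sketch. The tool name and schema are hypothetical; the full
# blueprint orchestrates these calls inside a LangGraph agent.
from openai import OpenAI

llm = OpenAI(base_url="http://localhost:8000/v1", api_key="none")  # assumed endpoint

tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",  # hypothetical structured-data lookup
        "description": "Look up the status of a customer's order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

response = llm.chat.completions.create(
    model="meta/llama-3.1-70b-instruct",
    messages=[{"role": "user", "content": "Where is order 1234?"}],
    tools=tools,
)

# If the model decided to call a tool, execute it and send the result back
# in a follow-up request so the model can generate the final, grounded answer.
tool_calls = response.choices[0].message.tool_calls
```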
The AI agent also uses short-term and long-term memory functions to enable multi-turn conversation history. The active conversation queries and responses are embedded so they can be retrieved later in the conversation as additional context. This allows more human-like interactions and eliminates the need for customers to repeat information they've already shared with the agent.
Finally, at the end of the conversation, the AI agent summarizes the discussion along with a sentiment determination and stores the conversation history in the structured database. Subsequent interactions from the same user can be retrieved as additional context in future conversations. Call summarization and conversation history retrieval can reduce call time and improve customer experience. Sentiment determination can provide valuable insights to the customer service administrator regarding the agent's effectiveness.
3. Operations pipeline
The customer operations pipeline is the third functional component of the overall solution. This pipeline provides important information and insight to the customer service operators. Administrators can use the operations pipeline to review chat history, user feedback, sentiment analysis data, and call summaries. The analytics microservice, which leverages the Llama 3.1 70B Instruct NIM, can be used to generate analytics such as average call time, time to resolution, and customer satisfaction. The analytics are also leveraged as user feedback to retrain the LLM models to improve accuracy.
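As a rough illustration of the post-call step described above, the following sketch summarizes a transcript and attaches a sentiment label before it would be written to the structured database. The prompt and JSON format are assumptions, not the analytics microservice's actual interface.

```python
# Post-call summarization and sentiment sketch (illustrative prompt and schema).
import json
from openai import OpenAI

llm = OpenAI(base_url="http://localhost:8000/v1", api_key="none")  # assumed endpoint

def summarize_call(transcript: str) -> dict:
    completion = llm.chat.completions.create(
        model="meta/llama-3.1-70b-instruct",
        messages=[
            {"role": "system", "content": (
                "Summarize the support conversation and classify overall customer "
                "sentiment as positive, neutral, or negative. "
                'Reply as JSON: {"summary": ..., "sentiment": ...}'
            )},
            {"role": "user", "content": transcript},
        ],
    )
    # A sketch only: production code should validate that the reply is valid JSON.
    return json.loads(completion.choices[0].message.content)
```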
You can find the complete example of how to get started with this Blueprint on the
NVIDIA AI Blueprint GitHub repository.
Get to production with NVIDIA partners
NVIDIA consulting partners are helping enterprises adopt world-class AI virtual assistants built using NVIDIA accelerated computing and
NVIDIA AI Enterprise software
, which includes NeMo, NIM microservices, and AI Blueprints.
Accenture
The Accenture AI Refinery
built on
NVIDIA AI Foundry
helps design autonomous, intent-driven customer interactions, enabling businesses to tailor the journey to the individual through innovative channels such as digital humans or interaction agents. Specific use cases can be tailored to meet the needs of each industry, for example, telco call centers, insurance policy advisors, pharmaceutical interactive agents or automotive dealer network agents.
Deloitte
Deloitte Frontline AI enhances the customer service experience with digital avatars and LLM agents built with NVIDIA AI Blueprints that are accelerated by NVIDIA technologies such as NVIDIA ACE, NVIDIA Omniverse, NVIDIA Riva, and NIM.
Wipro
Wipro Enterprise Generative AI (WeGA) Studio accelerates industry-specific use cases including contact center agents across healthcare, financial services, retail, and more.
Tech Mahindra
Tech Mahindra is leveraging the NVIDIA AI Blueprint for digital humans to build solutions for customer service. Using RAG and NVIDIA NeMo, the solution provides the ability for a trainee to stop an agent during a conversation by raising a hand to ask clarifying questions. The system is designed to connect with backend microservices and a refined learning management system, and it can be deployed across many industry use cases.
Infosys
Infosys Cortex
, part of
Infosys Topaz
, is an AI-driven customer engagement platform that integrates NVIDIA AI Blueprints and the NVIDIA NeMo, Riva, and ACE technologies for generative AI, speech AI, and digital human capabilities. It delivers specialized, individualized, proactive, and on-demand assistance to every member of a customer service organization, playing a pivotal role in enhancing customer experience, improving operational efficiency, and reducing costs.
Tata Consultancy Services
The Tata Consultancy Services (TCS) virtual agent, powered by NVIDIA NIM and integrated with ServiceNow's IT Virtual Agent, is designed to optimize IT and HR support. This solution uses prompt-tuning and RAG to improve response times and accuracy and to provide multi-turn conversational capabilities. Benefits include reduced service desk costs, fewer support tickets, enhanced knowledge utilization, faster deployment, and a better overall employee and customer experience.
Quantiphi
Quantiphi
is integrating NVIDIA AI Blueprints into its conversational AI solutions to enhance customer service with lifelike digital avatars. These state-of-the-art avatars, powered by NVIDIA Tokkio and ACE technologies,
NVIDIA NIM microservices
and
NVIDIA NeMo
, seamlessly integrate with existing enterprise applications, enhancing operations and customer experiences with increased realism. Fine-tuned NIM deployments for digital avatar workflows have proven to be highly cost-effective, reducing enterprise spending on tokens.
SoftServe
SoftServe Digital Concierge
, accelerated by NVIDIA AI Blueprints and NVIDIA NIM microservices, uses NVIDIA ACE, NVIDIA Riva, and the NVIDIA Audio2Face NIM microservice to deliver a highly realistic virtual assistant. Thanks to the Character Creator tool, it delivers speech and facial expressions with remarkable accuracy and lifelike detail.
With RAG capabilities from NVIDIA NeMo Retriever, SoftServe Digital Concierge can intelligently respond to customer queries by referencing context and delivering specific, up-to-date information. It simplifies complex queries into clear, concise answers and can also provide detailed explanations when needed.
EXL
EXLâs Smart Agent Assist offering is a contact center AI solution leveraging NVIDIA Riva, NVIDIA NeMo, and NVIDIA NIM microservices. EXL plans to augment their solution using the NVIDIA AI Blueprint for AI virtual agents.
This week at
NVIDIA AI Summit India
, NVIDIA consulting partners announced a collaboration with NVIDIA to transform India into a Front Office for AI. Using NVIDIA technologies, these consulting giants can help customers tailor the customer service agent blueprint to build unique virtual assistants using their preferred AI model, including sovereign LLMs from India-based model makers, and run it in production efficiently on the infrastructure of their choice.
Get started
To try the blueprint for free, and to see system requirements, navigate to the
Blueprint Card
.
To start building applications using those microservices, visit the
NVIDIA API catalog
. To
sign in
, you'll be prompted to enter a personal or business email address to access different options for building with NIM. For more information, see the
NVIDIA NIM FAQ
.
This post was originally published on 10/23/2024. | https://developer.nvidia.com/ja-jp/blog/three-building-blocks-for-creating-ai-virtual-assistants-for-customer-service-with-an-nvidia-nim-agent-blueprint/ | NVIDIA AI Blueprint ã§ã«ã¹ã¿ã㌠ãµãŒãã¹åãã® AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ããäœæãã 3 ã€ã®æ§æèŠçŽ | Reading Time:
2
minutes
ä»æ¥ã®ããŸããããããžãã¹ç°å¢ã§ã¯ãåªããã«ã¹ã¿ã㌠ãµãŒãã¹ãæäŸããããšã¯ããã¯ãåã«ãããã°è¯ãããšãã§ã¯ãªãããå¿
èŠäžå¯æ¬ ãªããšãã§ããæè¡çãªåé¡ãžã®å¯Ÿå¿ãè«æ±ã«é¢ãã質åã®è§£æ±ºããµãŒãã¹ã®ææ°æ
å ±ã®æäŸãªã©ã顧客ã¯ãè¿
éãã€æ£ç¢ºã§ã顧客ã®éœåã«ã«ã¹ã¿ãã€ãºããã察å¿ãæåŸ
ããŠããŸãããããããã®ã¬ãã«ã®ãµãŒãã¹ãå®çŸããã«ã¯ã倧ããªèª²é¡ã䌎ããŸãã
ããŒãœãã©ã€ãºããããªã¢ã«ã¿ã€ã ã®ãµããŒããæäŸããã«ã¯ãå€ãã®å Žåãéçãªã¹ã¯ãªãããæäœæ¥ã«ããããã»ã¹ãšãã£ãåŸæ¥ã®ã¢ãããŒãã§ã¯äžååã§ããããã«ãå€ãã®ã«ã¹ã¿ã㌠ãµãŒãã¹æ¥åã§ã¯ãæ©å¯æ§ãé«ããã€æççãªããŒã¿ãåãæ±ãããšã«ãªããå³ããããŒã¿ç®¡çãšãã©ã€ãã·ãŒèŠå¶ã®å¯Ÿè±¡ãšãªããŸããçæ AI ã®å°é ã«ãããäŒæ¥ã¯éçšå¹çã®åäžãã³ã¹ãåæžãROI ã®æ倧åã«ãã£ãŠã«ã¹ã¿ã㌠ãµãŒãã¹ã«é©åœãèµ·ããããšãç®æããŠããŸãã
AI ãæ¢åã®ã·ã¹ãã ã«çµã¿èŸŒãéã«ã¯ãéææ§ã粟床ãã»ãã¥ãªãã£ã«é¢ãã課é¡ã«çŽé¢ããå°å
¥ã劚ããã¯ãŒã¯ãããŒãäžæãããããšããããããããŸãããããããããŒãã«ãå
æããããã«ãäŒæ¥ã¯çæ AI ã掻çšããããŒãã£ã« ã¢ã·ã¹ã¿ã³ããå©çšããŠå¹
åºãã¿ã¹ã¯ã管çããæçµçã«å¿çæéãççž®ããŠããªãœãŒã¹ã解æŸããŠããŸãã
ãã®æçš¿ã§ã¯ãéçºè
ãã
AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã« NVIDIA AI Blueprint
ã䜿çšããŠãçæ AI ã§æ¥åãæ¡åŒµããæ¹æ³ã«ã€ããŠèª¬æããŸãããµã³ãã« ã³ãŒããå«ããã®æ
å ±ã掻çšããããšã§ãäŒæ¥ã¯ãããŒã¿ã®æŽåæ§ãšããŒã¿ ã¬ããã³ã¹ã確ä¿ããªãããåªããã«ã¹ã¿ã㌠ãµãŒãã¹ãžã®é«ãŸãèŠæ±ã«å¿ããããšãã§ããŸããæ¢åã®ã·ã¹ãã ã®æ¹åãŸãã¯æ°ããã·ã¹ãã ã®æ§ç¯ã«ãããããããã® Blueprint ã«ãã£ãŠããŒã ã¯å¹ççã§æå³ã®ãããããšããéããŠé¡§å®¢ã®ããŒãºã«å¯Ÿå¿ããããšãã§ããŸãã
æ€çŽ¢æ¡åŒµçæ (RAG) ã䜿çšãã AI ã¯ãšãª ãšã³ãžã³ã«ããã¹ããŒã㪠AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ã
AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ããæ§ç¯ããå Žåãç¬èªã®ãŠãŒã¹ ã±ãŒã¹èŠä»¶ããã³çµç¹ã®ç¥èãããŒãºã«åãããŠèª¿æŽããããšãéèŠã§ããåŸæ¥ã®ãããã§ã¯ãå€ãã®å Žåãæè»æ§ã®ä¹ãããã¬ãŒã ã¯ãŒã¯ãšæ代é
ãã®ã¡ãœãããå©çšããŠãããä»æ¥ã®ã«ã¹ã¿ã㌠ãµãŒãã¹ã®ãããªåžžã«å€åãç¶ããèŠæ±ã«å¯Ÿå¿ã§ããŸããã
ããããæ¥çã§ãAI ããŒã¹ã®ã¢ã·ã¹ã¿ã³ããé©æ°çãªååšãšãªãåŸãŸããããšãã°ãéä¿¡äŒç€Ÿãå°å£²ããµãŒãã¹ ãããã€ããŒã®å€§å€æ°ã¯ãAI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã䜿çšããŠã24 æé 365 æ¥çšŒåãããµããŒããæäŸããªãããå€èšèªã§å¹
åºã顧客ã®åãåããã«å¯Ÿå¿ãããã©ãã«ã·ã¥ãŒãã£ã³ã°ãã¢ã«ãŠã³ã管çãåçåããããã€ãããã¯ã§ããŒãœãã©ã€ãºããããããšããæäŸããããšã§ã顧客äœéšãåäžããããšãã§ããŸããããã«ãããåŸ
ã¡æéãççž®ããããŸããŸãªé¡§å®¢ããŒãºã«å¯ŸããŠäžè²«ãããµãŒãã¹ãæäŸããããšãã§ããŸãã
ããã²ãšã€ã®äŸãšããŠãå»çä¿éºã®æ¯ææ¥çã§ã¯ãå å
¥è
ã«ãšã£ãŠæºè¶³åºŠã®é«ãäœéšã確å®ã«æäŸããããšãéèŠã§ããããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã¯ãå»çèŠå¶ã®éµå®ã確ä¿ããªãããå å
¥è
ã«ããŒãœãã©ã€ãºããããµããŒããæäŸããè«æ±ãè£åã«é¢ããåãåããã絊ä»éãæ¯æãã«é¢ããåé¡ã«å¯ŸåŠããããšã§ãããããäœéšãåäžããŠããŸããããã«ãããå»çåŸäºè
ã®ç®¡çäžã®è² æ
ã軜æžããããšãã§ããŸãã
NVIDIA AI ãã©ãããã©ãŒã ã䜿çšããããšã§ãäŒæ¥ã¯ã
æ€çŽ¢æ¡åŒµçæ (RAG)
ã䜿çšãã AI ã¯ãšãª ãšã³ãžã³ãäœæããAI ã¢ããªã±ãŒã·ã§ã³ãäŒæ¥ããŒã¿ã«æ¥ç¶ããããšãã§ããŸããAI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã® Blueprint ã«ãããéçºè
ã¯ãããæŽç·Žããã顧客äœéšãæäŸãããœãªã¥ãŒã·ã§ã³ãè¿
éã«æ§ç¯ãéå§ããããšãã§ããŸãããã® Blueprint ã¯ã以äžã®
NVIDIA NIM
ãã€ã¯ããµãŒãã¹ã䜿çšããŠæ§ç¯ãããŸãã
LLM åã NVIDIA NIM:
æå
端ã®å€§èŠæš¡èšèªã¢ãã« (LLM) ã®ãã¯ãŒãã¢ããªã±ãŒã·ã§ã³ã«åãå
¥ãã倧å¹
ã«å¹çåããŠãåè¶ããèªç¶èšèªåŠçãæäŸããŸãã
Llama 3.1 70B Instruct NIM
:
åªããæèç解ãæšè«ãããã¹ãçæã§è€éãªäŒè©±ãå¯èœã§ãã
NVIDIA NeMo
Retriever NIM:
RAG ãã€ãã©ã€ã³ã®åºç€ãšãªãæ§æèŠçŽ ã§ããæå
端ã¢ãã«ã«ç°¡åã«ã¢ã¯ã»ã¹ã§ããŸãããã® RAG ãã€ãã©ã€ã³ã«ãã£ãŠãããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã¯äŒæ¥ããŒã¿ãžã®ã·ãŒã ã¬ã¹ãªã¢ã¯ã»ã¹ãå¯èœã«ãªããè¿
éãã€æ£ç¢ºã§ã¹ã±ãŒã©ãã«ãªåçã§ãçµç¹ã®ç¥èã掻çšã§ããŸãã
NeMo
Retriever Embedding NIM
:
ããã¹ãã® QA æ€çŽ¢ã¿ã¹ã¯ã«ç¹åãããŠãããããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã¯ãã®é«å質ã®ããã¹ãåã蟌ã¿ãå©çšããŸãã
NeMo
Retriever Reranking NIM
:
ãã¡ã€ã³ãã¥ãŒãã³ã°ããããªã©ã³ãã³ã° ã¢ãã«ã§ãããåã蟌ã¿ã¢ãã«ãšäœµçšããããšã§æ€çŽ¢æ§èœãããã«åäžãããããšãã§ããŸããå
¥åæã«æãé¢é£æ§ã®é«ãæç« ãèŠä»ãåºããLLM ã«æèãšããŠæž¡ããŸãã
ãã® Blueprint ã¯ãæ
å ±ã»ãã¥ãªãã£ã«é¢ãã矩åã«åããããšãªããæ¢åã®ã«ã¹ã¿ã㌠ãµãŒãã¹ ã¢ããªã±ãŒã·ã§ã³ãšã·ãŒã ã¬ã¹ã«çµ±åã§ããããã«èšèšãããŠããŸããNVIDIA NIM ã®ç§»æ€æ§ã®ãããã§ãäŒæ¥ã¯ãããŒã¿ãã©ãã«ãã£ãŠãçµ±åããããšãã§ããŸããçæ AI ãããŒã¿ã«åãå
¥ããããšã§ãAI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã¯ã顧客åºæã®ãããã¡ã€ã«ããŠãŒã¶ãŒãšã®å¯Ÿè©±å±¥æŽããã®ä»ã®é¢é£ããŒã¿ãªã©ã掻çšããŠãå顧客ã«åãããããããŒãœãã©ã€ãºãããäœéšãæäŸã§ããããã«ãªããŸãã
Blueprint ã¯ãäŒæ¥ç¬èªã®ãŠãŒã¹ ã±ãŒã¹ã«åãããŠã«ã¹ã¿ãã€ãºãå¯èœãª âåå°â ã®ãããªãã®ã§ããããšãã°ã
Nemotron 4 Hindi 4B Instruct
ãªã©ä»ã® NIM ãã€ã¯ããµãŒãã¹ãçµ±åããã°ãAI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ããçŸå°ã®èšèªã§ã³ãã¥ãã±ãŒã·ã§ã³ã§ããããã«ãªããŸãããã®ä»ã®ãã€ã¯ããµãŒãã¹ã«ãããåæããŒã¿ã®çæãã¢ãã«ã®ãã¡ã€ã³ãã¥ãŒãã³ã°ãªã©ã®è¿œå æ©èœãå¯èœã«ãªããç¹å®ã®ãŠãŒã¹ ã±ãŒã¹èŠä»¶ã«é©åãããããšãã§ããŸãããŸããããžã¿ã« ãã¥ãŒãã³ AI Blueprint ã«æ¥ç¶ãããšãAI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã«äººéã®ãããªã€ã³ã¿ãŒãã§ã€ã¹ãæäŸãããŸãã
ç¬èªã®ããŒã¿ (äŒæ¥ããŠãŒã¶ãŒã®ãããã¡ã€ã«ãç¹å®ã®ããŒã¿) ãåãã RAG ããã¯ãšã³ããå®è£
ããããšã§ãAI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã¯ãæèã«æ²¿ã£ã察話ãè¡ãããªã¢ã«ã¿ã€ã ã§å顧客ã®ããŒãºã®ç¹å®äºé
ã«å¯Ÿå¿ããããšãã§ããŸããããã«ããã®ãœãªã¥ãŒã·ã§ã³ã¯ãã§ã«éçšããŠããã¬ããã³ã¹ ãã¬ãŒã ã¯ãŒã¯å
ã§å®å
šã«éçšãããç¹ã«æ©å¯ããŒã¿ãæ±ãéã«ã¯ããã©ã€ãã·ãŒãšã»ãã¥ãªã㣠ãããã³ã«ã®éµå®ãä¿èšŒããŸãã
ç¬èªã® AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ããæ§ç¯ãã 3 ã€ã®æ§æèŠçŽ
éçºè
ãšããŠãæãé¢é£æ§ã®é«ãææ°ã®æ
å ±ããªã¢ã«ã¿ã€ã ã§ååŸããåžžã«äººéã®ãããªå¿çãã§ããããæ¥ã
é²åããç¬èªã® AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ããæ§ç¯ã§ããŸããå³ 1 ã¯ã3 ã€ã®æ©èœã³ã³ããŒãã³ããå«ã AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã®ã¢ãŒããã¯ãã£å³ã§ãã
å³ 1. AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãåãã® NVIDIA AI Blueprint
1. ããŒã¿ã®åã蟌ã¿ãšæ€çŽ¢ãã€ãã©ã€ã³
ãã€ãã©ã€ã³ç®¡çè
ã¯ãåã蟌㿠(Ingest) ãã€ãã©ã€ã³ã䜿çšããŠãæ§é åããŒã¿ãéæ§é åããŒã¿ãããŒã¿ããŒã¹ã«èªã¿èŸŒãããšãã§ããŸããæ§é åããŒã¿ã®äŸãšããŠã顧客ãããã¡ã€ã«ã泚æå±¥æŽãçºéç¶æ³ãªã©ããããŸããéæ§é åããŒã¿ã«ã¯ã補åããã¥ã¢ã«ã補åã«ã¿ãã°ãFAQ ããã¥ã¡ã³ããªã©ã®ãµããŒãè³æãå«ãŸããŸãã
2. AI ãšãŒãžã§ã³ã
2 ã€ç®ã®æ©èœã³ã³ããŒãã³ã㯠AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ã ã§ãããŠãŒã¶ãŒã¯ããŠãŒã¶ãŒ ã€ã³ã¿ãŒãã§ã€ã¹ãä»ããŠããŒãã£ã« ã¢ã·ã¹ã¿ã³ããšå¯Ÿè©±ããŸãããšãŒãžã§ã³ãå LLM ããã°ã©ãã³ã° ãã¬ãŒã ã¯ãŒã¯ã§ãã LangGraph ã§å®è£
ããã AI ãšãŒãžã§ã³ããã顧客ããã®è€éãªåãåããã«å¯Ÿå¿ããæ¹æ³ãèšç»ãããã®åãåãããååž°çã«è§£æ±ºããŸããLangGraph ãšãŒãžã§ã³ãã¯
Llama3.1 70B Instruct NIM
ã®ããŒã«åŒã³åºãæ©èœã䜿çšããŠãéæ§é åããŒã¿ãšæ§é åããŒã¿ã®äž¡æ¹ããæ
å ±ãååŸããæ£ç¢ºãªå¿çãçæããŸãã
ãŸã AI ãšãŒãžã§ã³ãã«ãããçæã¡ã¢ãªãšé·æã¡ã¢ãªã®æ©èœã䜿çšããŠãã«ãã¿ãŒã³ã®å¯Ÿè©±å±¥æŽãå®çŸã§ããŸããã¢ã¯ãã£ããªäŒè©±ã«å¯Ÿããåãåãããå¿çãåã蟌ãŸããŠãããããäŒè©±ã®åŸåã§è¿œå ã®æèãšããŠæ€çŽ¢ãå©çšã§ããŸããããã«ããããã人éã«è¿ããããšããå¯èœã«ãªãã顧客ããã§ã«ãšãŒãžã§ã³ããšå
±æããæ
å ±ãç¹°ãè¿ãæäŸããå¿
èŠããªããªããŸãã
æçµçã«ãäŒè©±ã®æåŸã« AI ãšãŒãžã§ã³ããææ
ã®å€å®ãšãšãã«è°è«ãèŠçŽããæ§é åããŒã¿ããŒã¹ã«äŒè©±å±¥æŽãä¿åããŸãããŠãŒã¶ãŒãšã®å¯Ÿè©±ã¯ãä»åŸã®äŒè©±ã§è¿œå ã®æèãšããŠæ€çŽ¢ã§ããŸããé話ã®èŠçŽãšäŒè©±å±¥æŽãæ€çŽ¢ããããšã§ãé話æéãççž®ãã顧客äœéšãåäžãããããšãã§ããŸããææ
å€å®ã«ãã£ãŠããšãŒãžã§ã³ãã®æå¹æ§ã«é¢ãã貎éãªæŽå¯ãã«ã¹ã¿ã㌠ãµãŒãã¹ç®¡çè
ã«æäŸã§ããŸãã
3. éçšãã€ãã©ã€ã³
顧客éçšãã€ãã©ã€ã³ã¯ããœãªã¥ãŒã·ã§ã³å
šäœã® 3 ã€ç®ã®æ§æèŠçŽ ã§ãããã®ãã€ãã©ã€ã³ã¯ãã«ã¹ã¿ã㌠ãµãŒãã¹ ãªãã¬ãŒã¿ãŒã«éèŠãªæ
å ±ãšæŽå¯ãæäŸããŸãã管çè
ã¯ãéçšãã€ãã©ã€ã³ã䜿çšããŠããã£ããå±¥æŽããŠãŒã¶ãŒã®ãã£ãŒãããã¯ãææ
åæããŒã¿ãé話ã®èŠçŽã確èªããããšãã§ããŸããLlama 3.1 70B Instruct NIM ã掻çšããåæãã€ã¯ããµãŒãã¹ã䜿çšããŠãå¹³åé話æéã解決ãŸã§ã®æéã顧客æºè¶³åºŠãªã©ã®åæãçæã§ããŸãããŸãåæçµæã¯ããŠãŒã¶ãŒ ãã£ãŒãããã¯ãšããŠã掻çšãããLLM ã¢ãã«ãåãã¬ãŒãã³ã°ããŠç²ŸåºŠãåäžããŸãã
NVIDIA ããŒãããŒãšæ¬çªç°å¢ã«çæ
NVIDIA ã®ã³ã³ãµã«ãã£ã³ã° ããŒãããŒã¯ãåäŒæ¥ããNVIDIA ã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã°ãšãNeMoãNIM ãã€ã¯ããµãŒãã¹ãAI Blueprint ãå«ã
NVIDIA AI Enterprise ãœãããŠã§ã¢
ã§æ§ç¯ãããäžçæ°Žæºã® AI ããŒãã£ã« ã¢ã·ã¹ã¿ã³ããå°å
¥ã§ããããã«æ¯æŽããŠããŸãã
Accenture
NVIDIA AI Foundry
äžã«æ§ç¯ããã
Accenture AI Refinery
ã¯ãèªåŸçã§é¡§å®¢ã®æå³ã«æ²¿ã£ã察話ãèšèšããäŒæ¥ãããžã¿ã« ãã¥ãŒãã³ãã€ã³ã¿ã©ã¯ã·ã§ã³ ãšãŒãžã§ã³ããªã©ã®é©æ°çãªãã£ãã«ãéããŠãå人ã«åãããŠã«ã¹ã¿ãã€ãºã§ããããã«ããŸããç¹å®ã®ãŠãŒã¹ ã±ãŒã¹ã¯ãéä¿¡äŒç€Ÿã®ã³ãŒã« ã»ã³ã¿ãŒãä¿éºå¥çŽã®ã¢ããã€ã¶ãŒãå»è¬åã®ã€ã³ã¿ã©ã¯ãã£ã ãšãŒãžã§ã³ããèªåè»ãã£ãŒã©ãŒã®ãããã¯ãŒã¯ ãšãŒãžã§ã³ããªã©ãåæ¥çã®ããŒãºã«åãããŠã«ã¹ã¿ãã€ãºã§ããŸãã
Deloitte
Deloitte Frontline AI ã¯ãNVIDIA ACEãNVIDIA OmniverseãNVIDIA RivaãNIM ãªã©ã® NVIDIA ã®ãã¯ãããžã«ãã£ãŠå éããã NVIDIA AI Blueprint ãå©çšããŠæ§ç¯ãããããžã¿ã« ã¢ãã¿ãŒã LLM ãšãŒãžã§ã³ãã§ã«ã¹ã¿ã㌠ãµãŒãã¹äœéšãåäžããŠããŸãã
Wipro
Wipro Enterprise Generative AI (WeGA) Studio ã¯ããã«ã¹ã±ã¢ãéèãµãŒãã¹ãå°å£²ãªã©ã®ã³ã³ã¿ã¯ã ã»ã³ã¿ãŒã®ãšãŒãžã§ã³ããå«ãæ¥çåºæã®ãŠãŒã¹ ã±ãŒã¹ãå éããŠããŸãã
Tech Mahindra
Tech Mahindra ã¯ãããžã¿ã« ãã¥ãŒãã³åãã® NVIDIA AI Blueprint ã掻çšããŠãã«ã¹ã¿ã㌠ãµãŒãã¹åãã®ãœãªã¥ãŒã·ã§ã³ãæ§ç¯ããŠããŸããRAG ãš NVIDIA NeMo ã䜿çšãããã®ãœãªã¥ãŒã·ã§ã³ã¯ããã¬ãŒãã³ã°åè¬è
ããäŒè©±äžã«æãæããŠæ確ãªè³ªåãããããšã§ããšãŒãžã§ã³ããæ¢ããæ©èœãæäŸããŸãããã®ã·ã¹ãã ã¯ãå€ãã®æ¥çã®ãŠãŒã¹ ã±ãŒã¹ã§ãããã€ã§ããæŽç·ŽãããåŠç¿ç®¡çã·ã¹ãã ã§ãããã¯ãšã³ãã®ãã€ã¯ããµãŒãã¹ãšæ¥ç¶ããããã«èšèšãããŠããŸãã
Infosys
Infosys Topaz
ã®äžéšã§ãã
Infosys Cortex
ã¯ãAI ã掻çšãã顧客ãšã³ã²ãŒãžã¡ã³ã ãã©ãããã©ãŒã ã§ãããçæ AIãã¹ããŒã AIãããžã¿ã« ãã¥ãŒãã³æ©èœãå®çŸãã NVIDIA AI Blueprint ãš NVIDIA NeMoãRivaãACE æè¡ãçµ±åããã«ã¹ã¿ã㌠ãµãŒãã¹çµç¹ã®ããããã¡ã³ããŒã«å°éçã§å人ã«åãããããã¢ã¯ãã£ããã€ãªã³ããã³ãã®æ¯æŽãæäŸããããšã§ã顧客äœéšã®åäžãéçšå¹çã®æ¹åãã³ã¹ãåæžã«éèŠãªåœ¹å²ãæãããŸãã
Tata Consultancy Services
NVIDIA NIM ãæèŒã ServiceNow ã® IT ä»®æ³ãšãŒãžã§ã³ããšçµ±åããã Tata Consultancy Services (TCS) ã®ä»®æ³ãšãŒãžã§ã³ãã¯ãIT ãš HR ã®ãµããŒããæé©åããããã«èšèšãããŠããŸãããã®ãœãªã¥ãŒã·ã§ã³ã¯ãããã³ãã ãã¥ãŒãã³ã°ãš RAG ã䜿çšããŠãå¿çæéã粟床ãåäžããããã«ãã¿ãŒã³ã®äŒè©±æ©èœãæäŸããŸãããµãŒãã¹ ãã¹ã¯ã®ã³ã¹ãåæžããµããŒã ãã±ããã®æžå°ããã¬ããžæŽ»çšã®åŒ·åãããè¿
éãªãããã€ããããŠåŸæ¥å¡ãšé¡§å®¢ã®å
šäœçãªäœéšã®åäžãªã©ã®ã¡ãªããããããŸãã
Quantiphi
Quantiphi
ã¯ãNVIDIA AI Blueprint ã察話å AI ãœãªã¥ãŒã·ã§ã³ã«çµ±åãããªã¢ã«ãªããžã¿ã« ã¢ãã¿ãŒã§ã«ã¹ã¿ã㌠ãµãŒãã¹ã匷åããŠããŸããNVIDIA Tokkio ãš ACEã
NVIDIA NIM ãã€ã¯ããµãŒãã¹
ã
NVIDIA NeMo
ãæèŒããæå
端ã®ã¢ãã¿ãŒããæ¢åã®ãšã³ã¿ãŒãã©ã€ãº ã¢ããªã±ãŒã·ã§ã³ãšã·ãŒã ã¬ã¹ã«çµ±åãããªã¢ãªãã£ãé«ããªããéçšãšé¡§å®¢äœéšãåäžãããŸããããžã¿ã« ã¢ãã¿ãŒ ã¯ãŒã¯ãããŒã«ãã¡ã€ã³ãã¥ãŒãã³ã°ããã NIM ã®ãããã€ã¯ãè²»çšå¯Ÿå¹æãé«ããäŒæ¥ã®ããŒã¯ã³ã«å¯Ÿããæ¯åºãåæžããããšãå®èšŒãããŠããŸãã
SoftServe
SoftServe Digital Concierge
ã¯ãNVIDIA AI Blueprint ãš NVIDIA NIM ãã€ã¯ããµãŒãã¹ã«ãã£ãŠå éãããŠãããNVIDIA ACEãNVIDIA RivaãNVIDIA Audio2Face NIM ãã€ã¯ããµãŒãã¹ã䜿çšããŠãéåžžã«ãªã¢ã«ãªããŒãã£ã« ã¢ã·ã¹ã¿ã³ããæäŸããŸããCharacter Creator ããŒã«ã䜿çšããããšã§ãé³å£°ãé¡ã®è¡šæ
ãé©ãã»ã©æ£ç¢ºãã€ãªã¢ã«ã«è©³çŽ°ãåçŸã§ããŸãã
NVIDIA NeMo Retriever ã® RAG æ©èœã«ãããSoftServe Digital Concierge ã¯ãæèãåç
§ããç¹å®ã®ææ°æ
å ±ãæäŸããããšã§ã顧客ããã®åãåããã«ã€ã³ããªãžã§ã³ãã«å¯Ÿå¿ã§ããŸããè€éãªåãåãããç°¡çŽ åããæ確ã§ç°¡æœãªåçã«ãŸãšããå¿
èŠã«å¿ããŠè©³çŽ°ãªèª¬æãæäŸããããšãã§ããŸãã
EXL
EXL ã® Smart Agent Assist 補åã¯ãNVIDIA RivaãNVIDIA NeMoãNVIDIA NIM ãã€ã¯ããµãŒãã¹ã掻çšããã³ã³ã¿ã¯ã ã»ã³ã¿ãŒ AI ãœãªã¥ãŒã·ã§ã³ã§ããEXL ã¯ãAI ä»®æ³ãšãŒãžã§ã³ãåãã® NVIDIA AI Blueprint ã䜿çšããŠããœãªã¥ãŒã·ã§ã³ã匷åããäºå®ã§ãã
NVIDIA AI Summit India
ã§ãNVIDIA ã³ã³ãµã«ãã£ã³ã° ããŒãããŒããã€ã³ãã AI ã®ããã³ã ãªãã£ã¹ã«å€é©ããããã«ãNVIDIA ãšã®ã³ã©ãã¬ãŒã·ã§ã³ãçºè¡šããŸãããNVIDIA ãã¯ãããžã䜿çšããããšã§ããããã®ã³ã³ãµã«ãã£ã³ã°å€§æã¯ã顧客ãã«ã¹ã¿ã㌠ãµãŒãã¹ ãšãŒãžã§ã³ãã® Blueprint ãã«ã¹ã¿ãã€ãºãã奜ã¿ã® AI ã¢ãã« (ã€ã³ãã«æ ç¹ã眮ãã¢ãã« ã¡ãŒã«ãŒãæäŸãããœããªã³ LLM ãå«ã) ã䜿çšããŠç¬èªã®ããŒãã£ã« ã¢ã·ã¹ã¿ã³ããæ§ç¯ããåžæã®ã€ã³ãã©ã§å¹ççã«æ¬çªçšŒåã§ããããã«ããŸãã
ä»ããå§ãã
Blueprint ãç¡æã§è©Šããããã·ã¹ãã èŠä»¶ã確èªããã«ã¯ã
Blueprint ã«ãŒã
ããåç
§ãã ããããããã®ãã€ã¯ããµãŒãã¹ã䜿çšããŠã¢ããªã±ãŒã·ã§ã³ã®æ§ç¯ãå§ããã«ã¯ã
NVIDIA API ã«ã¿ãã°
ã«ã¢ã¯ã»ã¹ããŠãã ããã
ãµã€ã³ã€ã³
ããã«ã¯ãNIM ã§æ§ç¯ããããŸããŸãªãªãã·ã§ã³ã«ã¢ã¯ã»ã¹ãããããå人çšãŸãã¯ããžãã¹çšã®ã¡ãŒã« ã¢ãã¬ã¹ãå
¥åããå¿
èŠããããŸãã詳现ã«ã€ããŠã¯ã
NVIDIA NIM FAQ
ãã芧ãã ããã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
éèéšéåãã®å®å
šã§å¹ççãªããŒãã£ã« ã¢ã·ã¹ã¿ã³ã
GTC ã»ãã·ã§ã³:
çæ AI ã®èª²é¡ãžã®å¯Ÿå¿ãšå¯èœæ§ã®æŽ»çš: NVIDIA ã®ãšã³ã¿ãŒãã©ã€ãº ãããã€ããåŸãããæŽå¯
NGC ã³ã³ãããŒ:
retail-shopping-advisor-chatbot-service
NGC ã³ã³ãããŒ:
retail-shopping-advisor-frontend-service
ãŠã§ãããŒ:
éèãµãŒãã¹ ã³ã³ã¿ã¯ã ã»ã³ã¿ãŒåãã® AI é³å£°å¯Ÿå¿ããŒãã£ã« ã¢ã·ã¹ã¿ã³ãã®æ§ç¯ãšå°å
¥æ¹æ³
ãŠã§ãããŒ:
éä¿¡äŒæ¥ã察話å AI ã§é¡§å®¢äœéšãå€é©ããæ¹æ³ |
https://developer.nvidia.com/blog/hymba-hybrid-head-architecture-boosts-small-language-model-performance/ | Hymba Hybrid-Head Architecture Boosts Small Language Model Performance | Transformers, with their attention-based architecture, have become the dominant choice for language models (LMs) due to their strong performance, parallelization capabilities, and long-term recall through key-value (KV) caches. However, their quadratic computational cost and high memory demands pose efficiency challenges. In contrast, state space models (SSMs) like Mamba and Mamba-2 offer constant complexity and efficient hardware optimization but struggle with memory recall tasks, affecting their performance on general benchmarks.
NVIDIA researchers recently proposed
Hymba
, a family of small language models (SLMs) featuring a hybrid-head parallel architecture that integrates transformer attention mechanisms with SSMs to achieve both enhanced efficiency and improved performance. In Hymba, attention heads provide high-resolution recall, while SSM heads enable efficient context summarization.
The novel architecture of Hymba reveals several insights:
Overhead in attention:
Over 50% of attention computation can be replaced by cheaper SSM computation.
Local attention dominance:
Most global attention can be replaced by local attention without sacrificing performance on general and recall-intensive tasks, thanks to the global information summarized by SSM heads.
KV cache redundancy:
Key-value cache is highly correlated across heads and layers, so it can be shared across heads (group query attention) and layers (cross-layer KV cache sharing).
Softmax attention limitation:
Attention mechanisms are constrained to sum to one, limiting sparsity and flexibility. We introduce learnable meta-tokens that are prepended to prompts, storing critical information and alleviating the "forced-to-attend" burden associated with attention mechanisms.
This post shows that Hymba 1.5B performs favorably against state-of-the-art open-source models of similar size, including Llama 3.2 1B, OpenELM 1B, Phi 1.5, SmolLM2 1.7B, Danube2 1.8B, and Qwen2.5 1.5B. Compared to Transformer models of similar size, Hymba also achieves higher throughput and requires 10x less memory to store cache.
Hymba 1.5B is released to the
Hugging Face
collection and
GitHub
.
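For a quick local test of the released checkpoint, something like the following should work with Hugging Face transformers. The repository ID and the trust_remote_code requirement are assumptions based on Hymba being a custom architecture; check the model card for the exact usage.

```python
# Minimal sketch for loading and sampling from the released Hymba checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "nvidia/Hymba-1.5B-Base"  # assumed repository id; see the Hugging Face collection
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```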
Hymba 1.5B performance
Figure 1 compares Hymba 1.5B against sub-2B models (Llama 3.2 1B, OpenELM 1B, Phi 1.5, SmolLM2 1.7B, Danube2 1.8B, Qwen2.5 1.5B) in terms of average task accuracy, cache size (MB) relative to sequence length, and throughput (tok/sec).
Figure 1. Performance comparison of Hymba 1.5B Base against sub-2B models
In this set of experiments, the tasks include MMLU, ARC-C, ARC-E, PIQA, Hellaswag, Winogrande, and SQuAD-C. The throughput is measured on an NVIDIA A100 GPU with a sequence length of 8K and a batch size of 128 using PyTorch. For models that encountered out-of-memory (OOM) issues during throughput measurement, the batch size was halved until the OOM was resolved, in order to measure the maximal achievable throughput without OOM.
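The batch-halving protocol can be approximated with a loop like the one below; this is a simplified sketch of the measurement just described, and the exact benchmarking code may differ.

```python
# Sketch of the throughput measurement: halve the batch size on out-of-memory
# errors until a pass fits, then report tokens processed per second.
import time
import torch

def max_throughput(model, tokenizer, seq_len: int = 8192, batch_size: int = 128) -> float:
    while batch_size >= 1:
        try:
            input_ids = torch.randint(0, tokenizer.vocab_size, (batch_size, seq_len), device=model.device)
            torch.cuda.synchronize()
            start = time.time()
            model.generate(input_ids, max_new_tokens=1)
            torch.cuda.synchronize()
            elapsed = time.time() - start
            return batch_size * seq_len / elapsed  # tokens per second at the largest fitting batch
        except torch.cuda.OutOfMemoryError:
            torch.cuda.empty_cache()
            batch_size //= 2  # halve and retry, as described in the text
    raise RuntimeError("Even batch size 1 does not fit in memory")
```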
Hymba model design
SSMs such as Mamba were introduced to address the quadratic complexity and large inference-time KV cache issues of transformers. However, due to their low-resolution memory, SSMs struggle with memory recall and performance. To overcome these limitations, we propose a road map for developing efficient and high-performing small LMs in Table 1.
| Configuration | Commonsense reasoning (%) ↑ | Recall (%) ↑ | Throughput (token/sec) ↑ | Cache size (MB) ↓ | Design reason |
|---|---|---|---|---|---|
| Ablations on 300M model size and 100B training tokens | | | | | |
| Transformer (Llama) | 44.08 | 39.98 | 721.1 | 414.7 | Accurate recall while inefficient |
| State-space models (Mamba) | 42.98 | 19.23 | 4720.8 | 1.9 | Efficient while inaccurate recall |
| A. + Attention heads (sequential) | 44.07 | 45.16 | 776.3 | 156.3 | Enhance recall capabilities |
| B. + Multi-head heads (parallel) | 45.19 | 49.90 | 876.7 | 148.2 | Better balance of two modules |
| C. + Local / global attention | 44.56 | 48.79 | 2399.7 | 41.2 | Boost compute/cache efficiency |
| D. + KV cache sharing | 45.16 | 48.04 | 2756.5 | 39.4 | Cache efficiency |
| E. + Meta-tokens | 45.59 | 51.79 | 2695.8 | 40.0 | Learned memory initialization |
| Scaling to 1.5B model size and 1.5T training tokens | | | | | |
| F. + Size / data | 60.56 | 64.15 | 664.1 | 78.6 | Further boost task performance |
| G. + Extended context length (2K→8K) | 60.64 | 68.79 | 664.1 | 78.6 | Improve multishot and recall tasks |

Table 1. Design road map of the Hymba model
Fused hybrid modules
Fusing attention and SSM heads in parallel within a hybrid-head module outperforms sequential stacking, according to the ablation study. Hymba fuses attention and SSM heads in parallel within a hybrid head module, enabling both heads to process the same information simultaneously. This architecture improves reasoning and recall accuracy.
Figure 2. The hybrid-head module in Hymba
Efficiency and KV cache optimization
While attention heads improve task performance, they increase KV cache requirements and reduce throughput. To mitigate this, Hymba optimizes the hybrid-head module by combining local and global attention and employing cross-layer KV cache sharing. This improves throughput by 3x and reduces cache by almost 4x without sacrificing performance.
Figure 3. Hymba model architecture
Meta-tokens
Meta-tokens are a set of 128 pretrained embeddings prepended to inputs, functioning as a learned cache initialization that enhances focus on relevant information. These tokens serve a dual purpose:
Mitigating attention drain by acting as backstop tokens, redistributing attention effectively
Encapsulating compressed world knowledge
Figure 4. Interpretation of Hymba from the memory aspect
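Conceptually, meta-tokens can be thought of as a learnable [128, d_model] matrix prepended to every sequence of input embeddings, as in the following sketch (an illustration of the idea, not Hymba's implementation):

```python
# Conceptual sketch of meta-tokens: learnable embeddings prepended to each sequence.
import torch
import torch.nn as nn

class MetaTokenPrepender(nn.Module):
    def __init__(self, num_meta_tokens: int = 128, d_model: int = 2048):
        super().__init__()
        # Learned end to end with no supervision signal of their own.
        self.meta_tokens = nn.Parameter(torch.randn(num_meta_tokens, d_model) * 0.02)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: [batch, seq_len, d_model]
        batch = token_embeddings.size(0)
        meta = self.meta_tokens.unsqueeze(0).expand(batch, -1, -1)
        # Output: [batch, 128 + seq_len, d_model]
        return torch.cat([meta, token_embeddings], dim=1)
```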
Model analysis
This section presents an apples-to-apples comparison across different architectures under the same training settings. We then visualize the attention maps of SSM and Attention in different pretrained models. Finally, we perform head importance analysis for Hymba through pruning. All the analyses in this section help to illustrate how and why the design choices for Hymba are effective.
Apples-to-apples comparison
We performed an apples-to-apples comparison of Hymba, pure Mamba2, Mamba2 with FFN, Llama3 style, and Samba style (Mamba-FFN-Attn-FFN) architectures. All models have 1 billion parameters and are trained from scratch for 100 billion tokens from SmolLM-Corpus with exactly the same training recipe. All results are obtained through lm-evaluation-harness using a zero-shot setting on Hugging Face models. Hymba performs the best on commonsense reasoning as well as question answering and recall-intensive tasks.
Table 2 compares various model architectures on language modeling and recall-intensive and commonsense reasoning tasks, with Hymba achieving strong performance across metrics. Hymba demonstrates the lowest perplexity in language tasks (18.62 for Wiki and 10.38 for LMB) and solid results in recall-intensive tasks, particularly in SWDE (54.29) and SQuAD-C (44.71), leading to the highest average score in this category (49.50).
| Model | Language (PPL) ↓ | Recall intensive (%) ↑ | Commonsense reasoning (%) ↑ |
|---|---|---|---|
| Mamba2 | 15.88 | 43.34 | 52.52 |
| Mamba2 w/ FFN | 17.43 | 28.92 | 51.14 |
| Llama3 | 16.19 | 47.33 | 52.82 |
| Samba | 16.28 | 36.17 | 52.83 |
| Hymba | 14.5 | 49.5 | 54.57 |

Table 2. Comparison of architectures trained on 100 billion tokens under the same settings
In commonsense reasoning and question answering, Hymba outperforms other models in most tasks, such as SIQA (31.76) and TruthfulQA (31.64), with an average score of 54.57, slightly above Llama3 and Mamba2. Overall, Hymba stands out as a balanced model, excelling in both efficiency and task performance across diverse categories.
Attention map visualization
We further categorized elements in the attention map into four types:
Meta:
Attention scores from all real tokens to meta-tokens. This category reflects the model's preference for attending to meta-tokens. In attention maps, they are usually located in the first few columns (for example, 128 for Hymba) if a model has meta-tokens.
BOS:
Attention scores from all real tokens to the beginning-of-sequence token. In the attention map, they are usually located in the first column right after the meta-tokens.
Self:
Attention scores from all real tokens to themselves. In the attention map, they are usually located in the diagonal line.
Cross:
Attention scores from all real tokens to other real tokens. In the attention map, they are usually located in the off-diagonal area.
The attention pattern of Hymba is significantly different from that of vanilla Transformers. In vanilla Transformers, attention scores are more concentrated on BOS, which is consistent with the findings in Attention Sink. In addition, vanilla Transformers also have a higher proportion of Self attention scores. In Hymba, meta-tokens, attention heads, and SSM heads work complementary to each other, leading to a more balanced distribution of attention scores across different types of tokens.
Specifically, meta-tokens offload the attention scores from BOS, enabling the model to focus more on the real tokens. SSM heads summarize the global context, which focuses more on current tokens (Self attention scores). Attention heads, on the other hand, pay less attention to Self and BOS tokens, and more attention to other tokens (that is, Cross attention scores). This suggests that the hybrid-head design of Hymba can effectively balance the attention distribution across different types of tokens, potentially leading to better performance.
Figure 5. Schematics of the attention map of Hymba as a combination of meta-tokens, sliding window attention, and Mamba contributions
Figure 6. Sum of the attention score from different categories in Llama 3.2 3B and Hymba 1.5B
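For readers who want to reproduce this kind of analysis, the sketch below aggregates a single head's attention map into the four categories defined earlier. The index convention (128 meta-token columns, then BOS, then real tokens) follows the description above and is otherwise an assumption.

```python
# Aggregate one head's attention probabilities into Meta / BOS / Self / Cross sums.
import torch

def categorize_attention(attn: torch.Tensor, num_meta: int = 128) -> dict:
    # attn: [seq_len, seq_len] attention probabilities for one head.
    # Rows are query positions, columns are key positions; only real-token queries are counted.
    bos = num_meta                                  # BOS sits right after the meta-tokens
    real = torch.arange(bos + 1, attn.size(0))      # indices of real tokens
    scores = {
        "meta": attn[real, :num_meta].sum(),        # real tokens attending to meta-tokens
        "bos": attn[real, bos].sum(),               # real tokens attending to BOS
        "self": attn[real, real].sum(),             # diagonal: tokens attending to themselves
    }
    total = attn[real, :].sum()
    scores["cross"] = total - scores["meta"] - scores["bos"] - scores["self"]
    return {k: float(v) for k, v in scores.items()}
```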
Heads importance analysis
We analyzed the relative importance of attention and SSM heads in each layer by removing them and recording the final accuracy. Our analysis reveals the following:
The relative importance of attention/SSM heads in the same layer is input-adaptive and varies across tasks, suggesting that they can serve different roles when handling various inputs.
The SSM head in the first layer is critical for language modeling, and removing it causes a substantial accuracy drop to random guess levels.
Generally, removing one attention/SSM head results in an average accuracy drop of 0.24%/1.1% on Hellaswag, respectively.
Figure 7. The achieved accuracy, measured using 1K samples from Hellaswag, after removing the Attention or SSM heads in each layer
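A simplified version of this ablation can be expressed as follows. The module path used for the hook is hypothetical, since Hymba's internal module names are not given here; the sketch only conveys the zero-out-and-re-evaluate procedure.

```python
# Rough sketch of the head-importance ablation: silence one head's output
# channels via a forward hook and re-measure accuracy on a small eval set.
import torch

@torch.no_grad()
def accuracy_drop(model, eval_fn, layer: int, head_slice: slice) -> float:
    baseline = eval_fn(model)  # e.g., accuracy on 1K Hellaswag samples

    def zero_head(module, inputs, output):
        output[..., head_slice] = 0.0  # zero the chosen head's output channels
        return output

    # "layers[layer].mixer.attn_heads" is a hypothetical module path.
    handle = model.layers[layer].mixer.attn_heads.register_forward_hook(zero_head)
    ablated = eval_fn(model)
    handle.remove()
    return baseline - ablated
```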
Model architecture and training best practices
This section outlines key architectural decisions and training methodologies for Hymba 1.5B Base and Hymba 1.5B Instruct.
Model architecture
Hybrid architecture:
Mamba is great at summarization and usually focuses more closely on the current token, while attention is more precise and acts as snapshot memory. Combining them in parallel merges these benefits, whereas standard sequential fusion does not. We chose a 5:1 parameter ratio between SSM and attention heads.
Sliding window attention:
Full attention heads are preserved in three layers (first, last, and middle), with sliding window attention heads used in the remaining 90% of layers.
Cross-layer KV cache sharing:
Implemented between every two consecutive attention layers. It is done in addition to GQA KV cache sharing between heads.
Meta-tokens:
These 128 tokens are learnable with no supervision, helping to avoid entropy collapse problems in large language models (LLMs) and mitigate the attention sink phenomenon. Additionally, the model stores general knowledge in these tokens.
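As a loose conceptual sketch of cross-layer KV cache sharing (not Hymba's actual code), consider a pair of consecutive attention layers in which the second layer consumes the same key/value source as the first, so only one KV cache needs to be stored for the pair:

```python
# Conceptual sketch: two consecutive attention layers share one K/V source,
# so only a single KV cache would be kept for the pair at inference time.
import torch.nn as nn

class SharedKVPair(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.producer = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # The consumer layer keeps its own query projection but no separate K/V projections.
        self.consumer_q = nn.Linear(d_model, d_model)

    def forward(self, x):
        # Producer layer computes (and would cache) K/V from x.
        out1, _ = self.producer(x, x, x, need_weights=False)
        # Consumer layer attends with its own queries against the same K/V source.
        out2, _ = self.producer(self.consumer_q(out1), x, x, need_weights=False)
        return out2
```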
Training best practices
Pretraining:
We opted for two-stage base model training. Stage 1 maintained a constant, large learning rate and used a large, less heavily filtered corpus. The learning rate was then continuously decayed to 1e-5 using high-quality data. This approach enables continuous training and resuming of Stage 1.
Instruction fine-tuning:
Instruct model tuning is performed in three stages. First, SFT-1 provides the model with strong reasoning abilities by training on code, math, function calling, role play, and other task-specific data. Second, SFT-2 teaches the model to follow human instructions. Finally, DPO is leveraged to align the model with human preferences and improve the model's safety.
Figure 8. Training pipeline adapted for the Hymba model family
Performance and efficiency evaluation
With only 1.5T pretraining tokens, the Hymba 1.5B model performs the best among all small LMs and achieves better throughput and cache efficiency than all transformer-based LMs.
For example, when benchmarking against the strongest baseline, Qwen2.5, which is pretrained on 13x more tokens, Hymba 1.5B achieves a 1.55% average accuracy improvement, 1.41x throughput, and 2.90x cache efficiency. Compared to the strongest small LM trained on fewer than 2T tokens, namely h2o-danube2, our method achieves a 5.41% average accuracy improvement, 2.45x throughput, and 6.23x cache efficiency.
| Model | # Params | Train tokens | Tokens per sec | Cache (MB) | MMLU 5-shot | ARC-E 0-shot | ARC-C 0-shot | PIQA 0-shot | Wino. 0-shot | Hella. 0-shot | SQuAD-C 1-shot | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| OpenELM-1 | 1.1B | 1.5T | 246 | 346 | 27.06 | 62.37 | 19.54 | 74.76 | 61.8 | 48.37 | 45.38 | 48.57 |
| Rene v0.1 | 1.3B | 1.5T | 800 | 113 | 32.94 | 67.05 | 31.06 | 76.49 | 62.75 | 51.16 | 48.36 | 52.83 |
| Phi 1.5 | 1.3B | 0.15T | 241 | 1573 | 42.56 | 76.18 | 44.71 | 76.56 | 72.85 | 48 | 30.09 | 55.85 |
| SmolLM | 1.7B | 1T | 238 | 1573 | 27.06 | 76.47 | 43.43 | 75.79 | 60.93 | 49.58 | 45.81 | 54.15 |
| Cosmo | 1.8B | 0.2T | 244 | 1573 | 26.1 | 62.42 | 32.94 | 71.76 | 55.8 | 42.9 | 38.51 | 47.2 |
| h2o-danube2 | 1.8B | 2T | 271 | 492 | 40.05 | 70.66 | 33.19 | 76.01 | 66.93 | 53.7 | 49.03 | 55.65 |
| Llama 3.2 1B | 1.2B | 9T | 535 | 262 | 32.12 | 65.53 | 31.39 | 74.43 | 60.69 | 47.72 | 40.18 | 50.29 |
| Qwen2.5 | 1.5B | 18T | 469 | 229 | 60.92 | 75.51 | 41.21 | 75.79 | 63.38 | 50.2 | 49.53 | 59.51 |
| AMD OLMo | 1.2B | 1.3T | 387 | 1049 | 26.93 | 65.91 | 31.57 | 74.92 | 61.64 | 47.3 | 33.71 | 48.85 |
| SmolLM2 | 1.7B | 11T | 238 | 1573 | 50.29 | 77.78 | 44.71 | 77.09 | 66.38 | 53.55 | 50.5 | 60.04 |
| Llama 3.2 3B | 3.0B | 9T | 191 | 918 | 56.03 | 74.54 | 42.32 | 76.66 | 69.85 | 55.29 | 43.46 | 59.74 |
| Hymba | 1.5B | 1.5T | 664 | 79 | 51.19 | 76.94 | 45.9 | 77.31 | 66.61 | 53.55 | 55.93 | 61.06 |

Table 3. Hymba 1.5B Base model results
Instructed models
The Hymba 1.5B Instruct model achieves the highest performance on an average of all tasks, outperforming the previous state-of-the-art model, Qwen 2.5 Instruct, by around 2%. Specifically, Hymba 1.5B surpasses all other models in GSM8K/GPQA/BFCLv2 with a score of 58.76/31.03/46.40, respectively. These results indicate the superiority of Hymba 1.5B, particularly in areas requiring complex reasoning capabilities.
| Model | # Params | MMLU ↑ | IFEval ↑ | GSM8K ↑ | GPQA ↑ | BFCLv2 ↑ | Avg. ↑ |
|---|---|---|---|---|---|---|---|
| SmolLM | 1.7B | 27.80 | 25.16 | 1.36 | 25.67 | -* | 20.00 |
| OpenELM | 1.1B | 25.65 | 6.25 | 56.03 | 21.62 | -* | 27.39 |
| Llama 3.2 | 1.2B | 44.41 | 58.92 | 42.99 | 24.11 | 20.27 | 38.14 |
| Qwen2.5 | 1.5B | 59.73 | 46.78 | 56.03 | 30.13 | 43.85 | 47.30 |
| SmolLM2 | 1.7B | 49.11 | 55.06 | 47.68 | 29.24 | 22.83 | 40.78 |
| Hymba 1.5B | 1.5B | 52.79 | 57.14 | 58.76 | 31.03 | 46.40 | 49.22 |

Table 4. Hymba 1.5B Instruct model results
Conclusion
The new Hymba family of small LMs features a hybrid-head architecture that combines the high-resolution recall capabilities of attention heads with the efficient context summarization of SSM heads. To further optimize the performance of Hymba, learnable meta-tokens are introduced to act as a learned cache for both attention and SSM heads, enhancing the modelâs focus on salient information. Through the road map of Hymba, comprehensive evaluations, and ablation studies, Hymba sets new state-of-the-art performance across a wide range of tasks, achieving superior results in both accuracy and efficiency. Additionally, this work provides valuable insights into the advantages of hybrid-head architectures, offering a promising direction for future research in efficient LMs.
Learn more about
Hymba 1.5B Base
and
Hymba 1.5B Instruct
.
Acknowledgments
This work would not have been possible without contributions from many people at NVIDIA, including Wonmin Byeon, Zijia Chen, Ameya Sunil Mahabaleshwarkar, Shih-Yang Liu, Matthijs Van Keirsbilck, Min-Hung Chen, Yoshi Suhara, Nikolaus Binder, Hanah Zhang, Maksim Khadkevich, Yingyan Celine Lin, Jan Kautz, Pavlo Molchanov, and Nathan Horrocks. | https://developer.nvidia.com/ja-jp/blog/hymba-hybrid-head-architecture-boosts-small-language-model-performance/ | Hymba ãã€ããªãã ããã ã¢ãŒããã¯ãã£ãå°èŠæš¡èšèªã¢ãã«ã®ããã©ãŒãã³ã¹ãåäž | Reading Time:
4
minutes
Transformer ã¯ããã® Attention ããŒã¹ã®ã¢ãŒããã¯ãã£ã«ããã匷åãªããã©ãŒãã³ã¹ã䞊ååèœåãããã³ KV (Key-Value) ãã£ãã·ã¥ãéããé·æèšæ¶ã®ãããã§ãèšèªã¢ãã« (LM) ã®äž»æµãšãªã£ãŠããŸããããããäºæ¬¡èšç®ã³ã¹ããšé«ãã¡ã¢ãªèŠæ±ã«ãããå¹çæ§ã«èª²é¡ãçããŠããŸããããã«å¯ŸããMamba ã Mamba-2 ã®ãããªç¶æ
空éã¢ãã« (SSMs) ã¯ãè€éããäžå®ã«ããŠå¹ççãªããŒããŠã§ã¢æé©åãæäŸããŸãããã¡ã¢ãªæ³èµ·ã¿ã¹ã¯ãèŠæã§ããã¯äžè¬çãªãã³ãããŒã¯ã§ã®ããã©ãŒãã³ã¹ã«åœ±é¿ãäžããŠããŸãã
NVIDIA ã®ç 究è
ã¯æè¿ãå¹çæ§ãšããã©ãŒãã³ã¹ã®äž¡æ¹ãåäžãããããã«ãTransformer ã® Attention ã¡ã«ããºã ã SSM ãšçµ±åãããã€ããªãã ããã䞊åã¢ãŒããã¯ãã£ãç¹åŸŽãšããå°èŠæš¡èšèªã¢ãã« (SLM) ãã¡ããªã§ãã
Hymba
ãææ¡ããŸãããHymba ã§ã¯ãAttention ããããé«è§£å床ã®èšæ¶èœåãæäŸããSSM ããããå¹ççãªã³ã³ããã¹ãã®èŠçŽãå¯èœã«ããŸãã
Hymba ã®æ°ããªã¢ãŒããã¯ãã£ã¯ãããã€ãã®æŽå¯ãæããã«ããŠããŸãã
Attention ã®ãªãŒããŒããã:
Attention èšç®ã® 50% 以äžããããå®äŸ¡ãª SSM èšç®ã«çœ®ãæããããšãã§ããŸãã
ããŒã«ã« Attention ã®åªäœæ§:
SSM ãããã«ããèŠçŽãããã°ããŒãã«æ
å ±ã®ãããã§ãäžè¬çãªã¿ã¹ã¯ãã¡ã¢ãªæ³èµ·ã«éäžããã¿ã¹ã¯ã®ããã©ãŒãã³ã¹ãç ç²ã«ããããšãªããã»ãšãã©ã®ã°ããŒãã« Attention ãããŒã«ã« Attention ã«çœ®ãæããããšãã§ããŸãã
KV ãã£ãã·ã¥åé·æ§:
Key-value ãã£ãã·ã¥ã¯ããããéãšã¬ã€ã€ãŒéã§é«ãçžé¢æ§ãããããããããé (GQA: Group Query Attention) ããã³ã¬ã€ã€ãŒé (Cross-layer KV ãã£ãã·ã¥å
±æ) ã§å
±æã§ããŸãã
Softmax ã® Attention ã®å¶é:
Attention ã¡ã«ããºã ã¯ãåèšã 1 ã«ãªãããã«å¶éãããŠãããçæ§ãšæè»æ§ã«å¶éããããŸããNVIDIA ã¯ãããã³ããã®å
é ã«åŠç¿å¯èœãªã¡ã¿ããŒã¯ã³ãå°å
¥ããéèŠãªæ
å ±ãæ ŒçŽããAttention ã¡ã«ããºã ã«é¢é£ããã匷å¶çã« Attention ãè¡ããè² æ
ã軜æžããŸãã
ãã®èšäºã§ã¯ãHymba 1.5B ãåæ§ã®èŠæš¡ã§ããæå
端ã®ãªãŒãã³ãœãŒã¹ ã¢ãã«ãLlama 3.2 1BãOpenELM 1BãPhi 1.5ãSmolLM2 1.7BãDanube2 1.8BãQwen2.5 1.5B ãªã©ãšæ¯èŒããŠãè¯å¥œãªããã©ãŒãã³ã¹ãçºæ®ããããšã瀺ãããŠããŸããåçã®ãµã€ãºã® Transformer ã¢ãã«ãšæ¯èŒãããšãHymba ã¯ããé«ãã¹ã«ãŒããããçºæ®ãããã£ãã·ã¥ãä¿åããããã«å¿
èŠãªã¡ã¢ãªã 10 åã® 1 ã§æžã¿ãŸãã
Hymba 1.5B ã¯
Hugging Face
ã³ã¬ã¯ã·ã§ã³ãš
GitHub
ã§å
¬éãããŠããŸãã
Hymba 1.5B ã®ããã©ãŒãã³ã¹
å³ 1 ã¯ãHymba 1.5B ãš 2B æªæºã®ã¢ãã« (Llama 3.2 1BãOpenELM 1BãPhi 1.5ãSmolLM2 1.7BãDanube2 1.8BãQwen2.5 1.5B) ããå¹³åã¿ã¹ã¯ç²ŸåºŠãã·ãŒã±ã³ã¹é·ã«å¯Ÿãããã£ãã·ã¥ ãµã€ãº (MB)ãã¹ã«ãŒããã (tok/sec) ã§æ¯èŒãããã®ã§ãã
å³ 1. Hymba 1.5B Base ãš 2B æªæºã®ã¢ãã«ã®ããã©ãŒãã³ã¹æ¯èŒ
ãã®äžé£ã®å®éšã«ã¯ãMMLUãARC-CãARC-EãPIQAãHellaswagãWinograndeãSQuAD-C ãªã©ã®ã¿ã¹ã¯ãå«ãŸããŠããŸããã¹ã«ãŒãããã¯ãã·ãŒã±ã³ã¹é· 8Kãããã ãµã€ãº 128 㧠PyTorch ã䜿çšã㊠NVIDIA A100 GPU ã§æž¬å®ããŸããã¹ã«ãŒããã枬å®äžã«ã¡ã¢ãªäžè¶³ (OOM: Out of Memory) åé¡ãçºçããã¢ãã«ã§ã¯ãOOM ã解決ããããŸã§ããã ãµã€ãºãååã«ããŠãOOM ãªãã§éæå¯èœãªæ倧ã¹ã«ãŒãããã枬å®ããŸããã
Hymba ã¢ãã«ã®ãã¶ã€ã³
Mamba ã®ãã㪠SSM ã¯ãTransformer ã®äºæ¬¡çãªè€éæ§ãšæšè«æã® KV ãã£ãã·ã¥ã倧ããåé¡ã«å¯ŸåŠããããã«å°å
¥ãããŸãããããããã¡ã¢ãªè§£å床ãäœãããã«ãSSM ã¯èšæ¶æ³èµ·ãšããã©ãŒãã³ã¹ã®ç¹ã§èŠæŠããŠããŸãããããã®å¶éãå
æããããã«ãè¡š 1 ã§å¹ççã§é«æ§èœãªå°èŠæš¡èšèªã¢ãã«ãéçºããããã®ããŒãããããææ¡ããŸãã
æ§æ
åžžèæšè« (%) â
ãªã³ãŒã« (%) â
ã¹ã«ãŒããã (token/sec) â
ãã£ãã·ã¥ ãµã€ãº (MB) â
èšèšçç±
300M ã¢ãã« ãµã€ãºãš 100B ãã¬ãŒãã³ã° ããŒã¯ã³ã®ã¢ãã¬ãŒã·ã§ã³
Transformer (Llama)
44.08
39.98
721.1
414.7
éå¹ççãªããæ£ç¢ºãªèšæ¶
ç¶æ
空éã¢ãã« (Mamba)
42.98
19.23
4720.8
1.9
å¹ççã ãäžæ£ç¢ºãªèšæ¶
A. + Attention ããã (é£ç¶)
44.07
45.16
776.3
156.3
èšæ¶èœåã匷å
B. + è€æ°ããã (䞊å)
45.19
49.90
876.7
148.2
2 ã€ã®ã¢ãžã¥ãŒã«ã®ãã©ã³ã¹ã®æ¹å
C. + ããŒã«ã« / ã°ããŒãã« Attention
44.56
48.79
2399.7
41.2
æŒç® / ãã£ãã·ã¥ã®å¹çãåäž
D. + KV ãã£ãã·ã¥å
±æ
45.16
48.04
2756.5
39.4
ãã£ãã·ã¥å¹çå
E. + ã¡ã¿ããŒã¯ã³
45.59
51.79
2695.8
40.0
åŠç¿ããèšæ¶ã®åæå
1.5B ã¢ãã« ãµã€ãºãš 1.5T ãã¬ãŒãã³ã° ããŒã¯ã³ãžã®ã¹ã±ãŒãªã³ã°
F. + ãµã€ãº / ããŒã¿
60.56
64.15
664.1
78.6
ã¿ã¹ã¯ ããã©ãŒãã³ã¹ã®ãããªãåäž
G. + ã³ã³ããã¹ãé·ã®æ¡åŒµ (2Kâ8K)
60.64
68.79
664.1
78.6
ãã«ãã·ã§ãããšãªã³ãŒã« ã¿ã¹ã¯ã®æ¹å
è¡š 1. Hymba ã¢ãã«ã®ãã¶ã€ã³ ããŒãããã
èååãã€ããªãã ã¢ãžã¥ãŒã«
ã¢ãã¬ãŒã·ã§ã³ç 究ã«ãããšããã€ããªãã ããã ã¢ãžã¥ãŒã«å
㧠Attention ãš SSM ãããã䞊åã«ããŠèåããã»ãããã·ãŒã±ã³ã·ã£ã«ã«ã¹ã¿ããã³ã°ããããåªããŠããããšãåãã£ãŠããŸããHymba ã¯ããã€ããªãã ããã ã¢ãžã¥ãŒã«å
㧠Attention ãš SSM ãããã䞊åã«èåãããäž¡ããããåæã«åãæ
å ±ãåŠçã§ããããã«ããŸãããã®ã¢ãŒããã¯ãã£ã¯ãæšè«ãšèšæ¶ã®æ£ç¢ºããé«ããŸãã
å³ 2. Hymba ã®ãã€ããªãã ããã ã¢ãžã¥ãŒã«
å¹çæ§ãš KV ãã£ãã·ã¥ã®æé©å
Attention ãããã¯ã¿ã¹ã¯ã®ããã©ãŒãã³ã¹ãåäžãããŸãããKV ãã£ãã·ã¥ã®èŠæ±ãå¢å€§ãããã¹ã«ãŒããããäœäžãããŸãããããç·©åããããã«ãHymba ã¯ããŒã«ã«ããã³ã°ããŒãã«ã® Attention ãçµã¿åããã Cross-layer KV ãã£ãã·ã¥å
±æãæ¡çšããããšã§ããã€ããªãã ããã ã¢ãžã¥ãŒã«ãæé©åããŸããããã«ãããããã©ãŒãã³ã¹ãç ç²ã«ããããšãªãã¹ã«ãŒãããã 3 ååäžãããã£ãã·ã¥ãã»ãŒ 4 åã® 1 ã«åæžãããŸãã
å³ 3. Hymba ã¢ãã«ã®ã¢ãŒããã¯ãã£
ã¡ã¿ããŒã¯ã³
å
¥åã®å
é ã«çœ®ããã 128 ã®äºååŠç¿æžã¿ã®åã蟌ã¿ã®ã»ããã§ãããåŠç¿æžã¿ãã£ãã·ã¥ã®åæåãšããŠæ©èœããé¢é£æ
å ±ãžã®æ³šæã匷åããŸãããã®ãããªããŒã¯ã³ã«ã¯ 2 ã€ã®ç®çããããŸãã
ããã¯ã¹ããã ããŒã¯ã³ãšããŠæ©èœããAttention ãå¹æçã«ååé
ããããšã§ Attention ã®æµåºã軜æžãã
å§çž®ãããäžçç¥èãã«ãã»ã«åãã
å³ 4. ã¡ã¢ãªã®åŽé¢ããèŠã Hymba ã®è§£é
ã¢ãã«è§£æ
ãã®ã»ã¯ã·ã§ã³ã§ã¯ãåäžã®ãã¬ãŒãã³ã°èšå®ã«ãããç°ãªãã¢ãŒããã¯ãã£ãæ¯èŒããæ¹æ³ã玹ä»ããŸãããããããSSM ãš Attention ã® Attention ããããç°ãªãåŠç¿æžã¿ã¢ãã«ã§å¯èŠåããæåŸã«ãåªå® (pruning) ãéã㊠Hymba ã®ãããéèŠåºŠåæãè¡ããŸãããã®ã»ã¯ã·ã§ã³ã®ãã¹ãŠã®åæã¯ãHymba ã®ãã¶ã€ã³ã«ãããéžæã®ä»çµã¿ãšããããå¹æçãªçç±ã説æããã®ã«åœ¹ç«ã¡ãŸãã
åäžæ¡ä»¶ã§ã®æ¯èŒ
HymbaãçŽç²ãª Mamba2ãMamba2 ãš FFNãLlama3 ã¹ã¿ã€ã«ãSamba ã¹ã¿ã€ã« (Mamba-FFN-Attn-FFN) ã®ã¢ãŒããã¯ãã£ãåäžæ¡ä»¶ã§æ¯èŒããŸããããã¹ãŠã®ã¢ãã«ã 10 åã®ãã©ã¡ãŒã¿ãŒã§ããŸã£ããåããã¬ãŒãã³ã° ã¬ã·ã㧠SmolLM-Corpus ãã 1,000 åããŒã¯ã³ããŒãããåŠç¿ããŠããŸãããã¹ãŠã®çµæã¯ãHugging Face ã¢ãã«ã§ãŒãã·ã§ããèšå®ã䜿çšã㊠lm-evaluation-harness ãéããŠååŸãããŠããŸããHymba ã¯ãåžžèæšè«ã ãã§ãªãã質åå¿çã¿ã¹ã¯ãèšæ¶æ³èµ·ã¿ã¹ã¯ã§ãæé«ã®ããã©ãŒãã³ã¹ãçºæ®ããŸãã
è¡š 2 ã¯ãèšèªã¢ããªã³ã°ã¿ã¹ã¯ãšèšæ¶æ³èµ·ã¿ã¹ã¯ããã³åžžèæšè«ã¿ã¹ã¯ã«é¢ããããŸããŸãªã¢ãã« ã¢ãŒããã¯ãã£ãæ¯èŒããŠãããHymba ã¯ãã¹ãŠã®è©äŸ¡åºæºã§åè¶ããããã©ãŒãã³ã¹ãéæããŠããŸããHymba ã¯ãèšèªã¢ããªã³ã°ã¿ã¹ã¯ã§æãäœã Perplexity ã瀺ã (Wiki 㧠18.62ãLMB 㧠10.38)ãç¹ã« SWDE (54.29) ãš SQuAD-C (44.71) ã®èšæ¶æ³èµ·ã¿ã¹ã¯ã«ãããŠå
å®ãªçµæã瀺ãããã®ã«ããŽãªã§æé«ã®å¹³åã¹ã³ã¢ (49.50) ãéæããŸããã
ã¢ãã«
èšèªã¢ããªã³ã° (PPL) â
èšæ¶æ³èµ·å (%) â
åžžèæšè« (%) â
Mamba2
15.88
43.34
52.52
Mamba2 ãš FFN
17.43
28.92
51.14
Llama3
16.19
47.33
52.82
Samba
16.28
36.17
52.83
Hymba
14.5
49.5
54.57
è¡š 2. åãèšå®ã§ 1,000 åããŒã¯ã³ã§åŠç¿ãããã¢ãŒããã¯ãã£ã®æ¯èŒ
åžžèæšè«ãšè³ªåå¿çã«ãããŠãHymba ã¯å¹³åã¹ã³ã¢ 54.57 ã§ã SIQA (31.76) ã TruthfulQA (31.64) ãªã©ã®ã»ãšãã©ã®ã¿ã¹ã¯ã§ãLlama3 ã Mamba2 ãããäžåã£ãŠããŸããå
šäœçã«ãHymba ã¯ãã©ã³ã¹ã®åããã¢ãã«ãšããŠéç«ã£ãŠãããå€æ§ãªã«ããŽãªã§å¹çæ§ãšã¿ã¹ã¯ ããã©ãŒãã³ã¹ã®äž¡æ¹ã§åªããŠããŸãã
Attention ãããã®å¯èŠå
ããã«ãAttention ãããã®èŠçŽ ã 4 ã€ã®ã¿ã€ãã«åé¡ããŸããã
Meta:
ãã¹ãŠã®å®ããŒã¯ã³ããã¡ã¿ããŒã¯ã³ãžã® Attention ã¹ã³ã¢ããã®ã«ããŽãªã¯ãã¢ãã«ãã¡ã¿ããŒã¯ã³ã« Attention ãåããåŸåãåæ ãããã®ã§ããAttention ãããã§ã¯ãéåžžãã¢ãã«ã«ã¡ã¿ããŒã¯ã³ãããå Žåãæåã®æ°å (äŸãã° Hymba ã®å Žå㯠128) ã«äœçœ®ããŠããŸãã
BOS:
ãã¹ãŠã®å®ããŒã¯ã³ããã»ã³ãã³ã¹ã®éå§ããŒã¯ã³ãŸã§ã® Attention ã¹ã³ã¢ãAttention ãããã§ã¯ãéåžžãã¡ã¿ããŒã¯ã³ã®çŽåŸã®æåã®åã«äœçœ®ããŸãã
Self:
ãã¹ãŠã®å®ããŒã¯ã³ããããèªèº«ãžã® Attention ã¹ã³ã¢ãAttention ãããã§ã¯ãéåžžã察è§ç·äžã«äœçœ®ããŠããŸãã
Cross:
ãã¹ãŠã®å®ããŒã¯ã³ããä»ã®å®ããŒã¯ã³ãžã® Attention ã¹ã³ã¢ãAttention ãããã§ã¯ãéåžžã察è§ç·å€ã®é åã«äœçœ®ããŠããŸãã
Hymba ã® Attention ãã¿ãŒã³ã¯ãvanilla (å å·¥ãããŠããªã) Transformer ã®ãããšã¯å€§ããç°ãªããŸããvanilla Transformer ã® Attention ã¹ã³ã¢ã¯ BOS ã«éäžããŠãããAttention Sink ã®çµæãšäžèŽããŠããŸããããã«ãvanilla Transformer ã¯ãSelf-Attention ã¹ã³ã¢ã®æ¯çãé«ããªã£ãŠããŸããHymba ã§ã¯ãã¡ã¿ããŒã¯ã³ãAttention ããããSSM ããããäºãã«è£å®ãåãããã«æ©èœããç°ãªãã¿ã€ãã®ããŒã¯ã³éã§ããããã©ã³ã¹ã®åãã Attention ã¹ã³ã¢ã®ååžãå®çŸããŠããŸãã
å
·äœçã«ã¯ãã¡ã¿ããŒã¯ã³ã BOS ããã® Attention ã¹ã³ã¢ããªãããŒãããããšã§ãã¢ãã«ãããå®éã®ããŒã¯ã³ã«éäžã§ããããã«ãªããŸããSSM ãããã¯ã°ããŒãã«ãªã³ã³ããã¹ããèŠçŽããçŸåšã®ããŒã¯ã³ (Self-Attention ã¹ã³ã¢) ã«ããéç¹ã眮ããŸããäžæ¹ãAttention ãããã¯ãSelf ãš BOS ããŒã¯ã³ã«å¯Ÿãã泚æãäœããä»ã®ããŒã¯ã³ (ããªãã¡ãCross Attention ã¹ã³ã¢) ãžã®æ³šæãé«ããªããŸããããã¯ãHymba ã®ãã€ããªãã ããã ãã¶ã€ã³ããç°ãªãã¿ã€ãã®ããŒã¯ã³éã® Attention ååžã®ãã©ã³ã¹ãå¹æçã«åãããšãã§ããããã©ãŒãã³ã¹ã®åäžã«ã€ãªããå¯èœæ§ãããããšã瀺åããŠããŸãã
å³ 5. ã¡ã¿ããŒã¯ã³ãSliding Window AttentionãMamba è²¢ç®ã®çµã¿åããã«ãã Hymba ã® Attention ãããã®æŠç¥å³
å³ 6. Llama 3.2 3B ãš Hymba 1.5B ã®ç°ãªãã«ããŽãªããã® Attention ã¹ã³ã¢ã®åèš
ãããéèŠåºŠåæ
åã¬ã€ã€ãŒã®Attention ãš SSM ãããã®çžå¯ŸçãªéèŠæ§ãåæããããã«ããããããåé€ããŠæçµçãªç²ŸåºŠãèšé²ããŸãããåæã®çµæã以äžã®ããšãæããã«ãªããŸããã
åãã¬ã€ã€ãŒã®Â Attention / SSM ãããã®çžå¯ŸçãªéèŠæ§ã¯å
¥åé©å¿ã§ãããã¿ã¹ã¯ã«ãã£ãŠç°ãªããŸããããã¯ãããŸããŸãªå
¥åã®åŠçã«ãããŠãç°ãªã圹å²ãæããå¯èœæ§ãããããšã瀺åããŠããŸãã
æåã®ã¬ã€ã€ãŒã® SSM ãããã¯èšèªã¢ããªã³ã°ã¿ã¹ã¯ã«äžå¯æ¬ ã§ããããåé€ãããšãã©ã³ãã æšæž¬ã¬ãã«ã«ãŸã§å€§å¹
ã«ç²ŸåºŠãäœäžããŸãã
äžè¬çã«ãAttention / SSM ãããã 1 ã€åé€ãããšãHellaswag ã§ã¯ããããå¹³å 0.24%/1.1% 粟床ãäœäžããŸãã
å³ 7. Hellaswag ã® 1K ãµã³ãã«ã䜿çšããŠæž¬å®ãããåã¬ã€ã€ãŒã® Attention ãŸã㯠SSM ããããåé€ããåŸã®éæ粟床
ã¢ãã« ã¢ãŒããã¯ãã£ãšåŠç¿ã®ãã¹ã ãã©ã¯ãã£ã¹
ãã®ã»ã¯ã·ã§ã³ã§ã¯ãHymba 1.5B Base ãš Hymba 1.5B Instruct ã®äž»èŠã¢ãŒããã¯ãã£äžã®æ±ºå®äºé
ãšåŠç¿æ¹æ³ã®æŠèŠã«ã€ããŠèª¬æããŸãã
ã¢ãã« ã¢ãŒããã¯ãã£
ãã€ããªãã ã¢ãŒããã¯ãã£:
Mamba ã¯èŠçŽã«åªããéåžžã¯çŸåšã®ããŒã¯ã³ã«ããéç¹ã眮ããŸããAttention ã¯ããæ£ç¢ºã§ã¹ãããã·ã§ãã ã¡ã¢ãªãšããŠæ©èœããŸããæšæºçãªã·ãŒã±ã³ã·ã£ã«èåã§ã¯ãªãã䞊åã«çµã¿åãããããšã§å©ç¹ãçµ±åããããšãã§ããŸããSSM ãš Attention ãããéã®ãã©ã¡ãŒã¿ãŒæ¯ã¯ 5:1 ãéžæããŸããã
Sliding Window Attention:
å®å
šãª Attention ããã㯠3 ã€ã®ã¬ã€ã€ãŒ (æåãæåŸãäžé) ã«ç¶æãããæ®ãã® 90% ã®ã¬ã€ã€ãŒã§ Sliding Window Attention ãããã䜿çšãããŸãã
Cross-layer KV ãã£ãã·ã¥å
±æ:
é£ç¶ãã 2 ã€ã® Attention ã¬ã€ã€ãŒéã«å®è£
ãããŸããããã¯ããããéã® GQA KV ãã£ãã·ã¥å
±æã«å ããŠè¡ãããŸãã
ã¡ã¿ããŒã¯ã³:
ãããã® 128 ããŒã¯ã³ã¯æåž«ãªãåŠç¿ãå¯èœã§ããã倧èŠæš¡èšèªã¢ãã« (LLM) ã«ããããšã³ããããŒåŽ©å£ã®åé¡ãåé¿ããAttention Sink çŸè±¡ãç·©åããã®ã«åœ¹ç«ã¡ãŸããããã«ãã¢ãã«ã¯ãããã®ããŒã¯ã³ã«äžè¬çãªç¥èãæ ŒçŽããŸãã
åŠç¿ã®ãã¹ã ãã©ã¯ãã£ã¹
äºååŠç¿:
2 段éã®ããŒã¹ã¢ãã«åŠç¿ãéžæããŸãããã¹ããŒãž 1 ã§ã¯ãäžå®ã®é«ãåŠç¿çãç¶æãããã£ã«ã¿ãªã³ã°ãããŠããªã倧èŠæš¡ãªã³ãŒãã¹ ããŒã¿ã®äœ¿çšããŸãããç¶ããŠãé«å質ã®ããŒã¿ãçšã㊠1e-5 ãŸã§ç¶ç¶çã«åŠç¿çãæžè¡°ãããŸããããã®ã¢ãããŒãã«ãããã¹ããŒãž 1 ã®ç¶ç¶çãªåŠç¿ãšåéãå¯èœã«ãªããŸãã
æ瀺ãã¡ã€ã³ãã¥ãŒãã³ã°:
æ瀺ã¢ãã«ã®èª¿æŽã¯ 3 ã€ã®æ®µéã§è¡ãããŸãããŸããSFT-1 ã¯ãã³ãŒããæ°åŠãé¢æ°åŒã³åºããããŒã« ãã¬ã€ããã®ä»ã®ã¿ã¹ã¯åºæã®ããŒã¿ã§åŠç¿ãå®æœãã匷åãªæšè«èœåãã¢ãã«ã«ä»äžããŸãã次ã«ãSFT-2 ã¯ã¢ãã«ã«äººéã®æ瀺ã«åŸãããšãæããŸããæåŸã«ãDPO ã掻çšããŠãã¢ãã«ã人éã®å¥œã¿ã«åãããã¢ãã«ã®å®å
šæ§ãé«ããŸãã
å³ 8. Hymba ã¢ãã« ãã¡ããªã«é©å¿ããåŠç¿ãã€ãã©ã€ã³
ããã©ãŒãã³ã¹ãšå¹çæ§ã®è©äŸ¡
1.5T ã®äºååŠç¿ããŒã¯ã³ã ãã§ãHymba 1.5B ã¢ãã«ã¯ãã¹ãŠã®å°èŠæš¡èšèªã¢ãã«ã®äžã§æé«ã®æ§èœãçºæ®ããTransformer ããŒã¹ã® LM ãããåªããã¹ã«ãŒããããšãã£ãã·ã¥å¹çãå®çŸããŸãã
äŸãã°ã13 å以äžã®ããŒã¯ã³æ°ã§äºååŠç¿ãããæã匷åãªããŒã¹ã©ã€ã³ã§ãã Qwen2.5 ã«å¯ŸããŠãã³ãããŒã¯ããå ŽåãHymba 1.5B ã¯å¹³å粟床ã 1.55%ãã¹ã«ãŒãããã 1.41 åããã£ãã·ã¥å¹çã 2.90 åã«åäžããŸãã2T æªæºã®ããŒã¯ã³ã§åŠç¿ãããæã匷åãªå°èŠæš¡èšèªã¢ãã«ãããªãã¡ h2o-danube2 ãšæ¯èŒãããšããã®æ¹æ³ã¯å¹³å粟床ã 5.41%ãã¹ã«ãŒãããã 2.45 åããã£ãã·ã¥å¹çã 6.23 åã«åäžããŠããŸãã
ã¢ãã«
ãã©ã¡ãŒã¿ãŒæ°
åŠç¿ããŒã¯ã³
ããŒã¯ã³
(1 ç§ããã)
ãã£ãã·ã¥
(MB)
MMLU 5-
shot
ARC-E 0-shot
ARC-C 0-shot
PIQA 0-shot
Wino. 0-shot
Hella. 0-shot
SQuAD -C
1-shot
å¹³å
OpenELM-1
1.1B
1.5T
246
346
27.06
62.37
19.54
74.76
61.8
48.37
45.38
48.57
Renev0.1
1.3B
1.5T
800
113
32.94
67.05
31.06
76.49
62.75
51.16
48.36
52.83
Phi1.5
1.3B
0.15T
241
1573
42.56
76.18
44.71
76.56
72.85
48
30.09
55.85
SmolLM
1.7B
1T
238
1573
27.06
76.47
43.43
75.79
60.93
49.58
45.81
54.15
Cosmo
1.8B
.2T
244
1573
26.1
62.42
32.94
71.76
55.8
42.9
38.51
47.2
h20dan-ube2
1.8B
2T
271
492
40.05
70.66
33.19
76.01
66.93
53.7
49.03
55.65
Llama 3.2 1B
1.2B
9T
535
262
32.12
65.53
31.39
74.43
60.69
47.72
40.18
50.29
Qwen2.5
1.5B
18T
469
229
60.92
75.51
41.21
75.79
63.38
50.2
49.53
59.51
AMDOLMo
1.2B
1.3T
387
1049
26.93
65.91
31.57
74.92
61.64
47.3
33.71
48.85
SmolLM2
1.7B
11T
238
1573
50.29
77.78
44.71
77.09
66.38
53.55
50.5
60.04
Llama3.2 3B
3.0B
9T
191
918
56.03
74.54
42.32
76.66
69.85
55.29
43.46
59.74
Hymba
1.5B
1.5T
664
79
51.19
76.94
45.9
77.31
66.61
53.55
55.93
61.06
è¡š 2. Hymba 1.5B ããŒã¹ ã¢ãã«ã®çµæ
æ瀺ã¢ãã«
Hymba 1.5B Instruct ã¢ãã«ã¯ãå
šã¿ã¹ã¯å¹³åã§æé«ã®ããã©ãŒãã³ã¹ãéæããçŽè¿ã®æé«æ§èœã¢ãã«ã§ãã Qwen 2.5 Instruct ãçŽ 2% äžåããŸãããç¹ã«ãHymba 1.5B 㯠GSM8K/GPQA/BFCLv2 ã§ããããã 58.76/31.03/46.40 ã®ã¹ã³ã¢ã§ä»ã®ãã¹ãŠã®ã¢ãã«ãäžåã£ãŠããŸãããããã®çµæã¯ãç¹ã«è€éãªæšè«èœåãå¿
èŠãšããåéã«ãããŠãHymba 1.5B ã®åªäœæ§ã瀺ããŠããŸãã
ã¢ãã«
ãã©ã¡ãŒã¿ãŒæ°
MMLU â
IFEval â
GSM8K â
GPQA â
BFCLv2 â
å¹³åâ
SmolLM
1.7B
27.80
25.16
1.36
25.67
-*
20.00
OpenELM
1.1B
25.65
6.25
56.03
21.62
-*
27.39
Llama 3.2
1.2B
44.41
58.92
42.99
24.11
20.27
38.14
Qwen2.5
1.5B
59.73
46.78
56.03
30.13
43.85
47.30
SmolLM2
1.7B
49.11
55.06
47.68
29.24
22.83
40.78
Hymba 1.5B
1.5B
52.79
57.14
58.76
31.03
46.40
49.22
è¡š 3. Hymba 1.5B Instruct ã¢ãã«ã®çµæ
ãŸãšã
æ°ãã Hymba ãã¡ããªã®å°èŠæš¡èšèªã¢ãã«ã¯ããã€ããªãã ããã ã¢ãŒããã¯ãã£ãæ¡çšããAttention ãããã®é«è§£åãªèšæ¶èœåãš SSM ãããã®å¹ççãªã³ã³ããã¹ãã®èŠçŽãçµã¿åãããŠããŸããHymba ã®ããã©ãŒãã³ã¹ãããã«æé©åããããã«ãåŠç¿å¯èœãªã¡ã¿ããŒã¯ã³ãå°å
¥ãããAttention ããããš SSM ãããã®äž¡æ¹ã§åŠç¿æžã¿ãã£ãã·ã¥ãšããŠæ©èœããé¡èãªæ
å ±ã«æ³šç®ããã¢ãã«ã®ç²ŸåºŠã匷åããŸãããHymba ã®ããŒãããããå
æ¬çãªè©äŸ¡ãã¢ãã¬ãŒã·ã§ã³ç 究ãéããŠãHymba ã¯å¹
åºãã¿ã¹ã¯ã«ããã£ãŠæ°ããªæå
端ã®ããã©ãŒãã³ã¹ã確ç«ããæ£ç¢ºããšå¹çæ§ã®äž¡é¢ã§åªããçµæãéæããŸãããããã«ããã®ç 究ã¯ããã€ããªãã ããã ã¢ãŒããã¯ãã£ã®å©ç¹ã«é¢ãã貎éãªæŽå¯ããããããå¹ççãªèšèªã¢ãã«ã®ä»åŸã®ç 究ã«ææãªæ¹åæ§ã瀺ããŠããŸãã
Hybma 1.5B Base
ãš
Hymba 1.5B Instruct
ã®è©³çŽ°ã¯ãã¡ããã芧ãã ããã
è¬èŸ
ãã®ææã¯ãWonmin ByeonãZijia ChenãAmeya Sunil MahabaleshwarkarãShih-Yang LiuãMatthijs Van KeirsbilckãMin-Hung ChenãYoshi SuharaãNikolaus BinderãHanah ZhangãMaksim KhadkevichãYingyan Celine LinãJan KautzãPavlo MolchanovãNathan Horrocks ãªã©ãNVIDIA ã®å€ãã®ã¡ã³ããŒã®è²¢ç®ãªãããŠã¯å®çŸããŸããã§ããã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
Optimizing Large Language Models: An Experimental Approach to Pruning and Fine-Tuning LLama2 7B (倧èŠæš¡èšèªã¢ãã«ã®æé©å: LLama2 7B ã®åªå®ãšãã¡ã€ã³ãã¥ãŒãã³ã°ã®å®éšçã¢ãããŒã)
GTC ã»ãã·ã§ã³:
Accelerating End-to-End Large Language Models System using a Unified Inference Architecture and FP8 (çµ±äžæšè«ã¢ãŒããã¯ãã£ãš FP8 ãçšãããšã³ãããŒãšã³ãã®å€§èŠæš¡èšèªã¢ãã« ã·ã¹ãã ã®é«éå)
NGC ã³ã³ãããŒ:
Llama-3.1-Nemotron-70B-Ins
truct
NGC ã³ã³ãããŒ:
Llama-3-Swallow-70B-Instruct-v0.1
SDK:
NeMo Megatron |
https://developer.nvidia.com/blog/deploying-fine-tuned-ai-models-with-nvidia-nim/ | Deploying Fine-Tuned AI Models with NVIDIA NIM | For organizations adapting AI foundation models with domain-specific data, the ability to rapidly create and deploy fine-tuned models is key to efficiently delivering value with enterprise generative AI applications.
NVIDIA NIM
offers prebuilt, performance-optimized inference microservices for the latest AI foundation models, including
seamless deployment
of models customized using parameter-efficient fine-tuning (PEFT).
In some cases, it's ideal to use methods like continual pretraining, DPO, supervised fine-tuning (SFT), or model merging, where the underlying model weights are adjusted directly in the training or customization process, unlike PEFT with low-rank adaptation (LoRA). In these cases, the inference software configuration for the model must be updated for optimal performance given the new weights.
Rather than burden you with this often lengthy process, NIM can automatically build a
TensorRT-LLM
inference engine performance optimized for the adjusted model and GPUs in your local environment, and then load it for running inference as part of a single-step model deployment process.
In this post, we explore how to rapidly deploy NIM microservices for models that have been customized through SFT by using locally built, performance-optimized TensorRT-LLM inference engines. We include all the necessary commands as well as some helpful options, so you can try it out on your own today.
Prerequisites
To run this tutorial, you need an NVIDIA-accelerated compute environment with access to 80 GB of GPU memory and with
git-lfs
installed.
Before you can pull and deploy a NIM microservice in an NVIDIA-accelerated compute environment, you also need an NGC API key.
Navigate to the
Meta Llama 3 8B Instruct
model listing in the NVIDIA API Catalog.
Choose
Login
at the top right and follow the instructions.
When youâre logged in, choose
Build with this NIM
on the
model page
.
Choose
Self-Hosted API
and follow either option to access NIM microservices:
NVIDIA Developer Program membership with free access to NIM for research, development, and testing only.
The 90-day NVIDIA AI Enterprise license, which includes access to NVIDIA Enterprise Support.
After you provide the necessary details for your selected access method, copy your NGC API key and be ready to move forward with NIM. For more information, see
Launch NVIDIA NIM for LLMs
.
Getting started with NIM microservices
Provide your NGC CLI API key as an environment variable in your compute environment:
export NGC_API_KEY=<<YOUR API KEY HERE>>
You also must point to, create, and modify permissions for a directory to be used as a cache during the optimization process:
export NIM_CACHE_PATH=/tmp/nim/.cache
mkdir -p $NIM_CACHE_PATH
chmod -R 777 $NIM_CACHE_PATH
To demonstrate locally built, optimized TensorRT-LLM inference engines for deploying fine-tuned models with NIM, you need a model that has undergone customization through SFT. For this tutorial, use the
NVIDIA OpenMath2-Llama3.1-8B
model, which is a customization of
Metaâs Llama-3.1-8B
using the
OpenMathInstruct-2
dataset.
The base model must be available as a downloadable NIM for LLMs. For more information about downloadable NIM microservices, see the
NIM Type: Run Anywhere filter
in the NVIDIA API Catalog.
All you need are the weights for this model, which can be obtained in several ways. For this post, clone the model repository using the following commands:
git lfs install
git clone https://huggingface.co/nvidia/OpenMath2-Llama3.1-8B
export MODEL_WEIGHT_PARENT_DIRECTORY=$PWD
Now that you have the model weights collected, move on to the next step: firing up the microservice.
Selecting from available performance profiles
Based on your selected model and hardware configuration, the most applicable inference performance profile available is automatically selected. There are two available performance profiles for local inference engine generation:
Latency:
Focused on delivering a NIM microservice that is optimized for latency.
Throughput:
Focused on delivering a NIM microservice that is optimized for batched throughput.
For more information about supported features, including available precision, see the
Support Matrix
topic in the NVIDIA NIM documentation.
Example using an SFT model
Create a locally built TensorRT-LLM inference engine for OpenMath2-Llama3.1-8B by running the following commands:
docker run -it --rm --gpus all \
--user $(id -u):$(id -g) \
--network=host \
--shm-size=32GB \
-e NGC_API_KEY \
-e NIM_FT_MODEL=/opt/weights/hf/OpenMath2-Llama3.1-8B \
-e NIM_SERVED_MODEL_NAME=OpenMath2-Llama3.1-8B \
-v $NIM_CACHE_PATH:/opt/nim/.cache \
-v $MODEL_WEIGHT_PARENT_DIRECTORY:/opt/weights/hf \
nvcr.io/nim/meta/llama-3.1-8b-instruct:1.3.0
The command is nearly identical to the typical command youâd use to deploy a NIM microservice. In this case, youâve added the extra
NIM_FT_MODEL
parameter, which points to the OpenMath2-Llama3.1-8B model.
With that, NIM builds an optimized inference engine locally. To perform inference using this new NIM microservice, run the following Python code example:
from openai import OpenAI
client = OpenAI(
base_url = "http://localhost:8000/v1",
api_key = "none"
)
completion = client.chat.completions.create(
model="OpenMath2-Llama3.1-8B",
messages=[{"role":"user","content":"What is your name?"}],
temperature=0.2,
top_p=0.7,
max_tokens=100,
stream=True
)
for chunk in completion:
if chunk.choices[0].delta.content is not None:
print(chunk.choices[0].delta.content, end="")
Video 1. How to Deploy Fine-Tuned AI Models
Building an optimized TensorRT-LLM engine with a custom performance profile
On
supported GPUs
, you can use a similar command to spin up your NIM microservice. Follow the
Model Profile
instructions to launch your microservice and determine which profiles are accessible for it.
export IMG_NAME="nvcr.io/nim/meta/llama-3.1-8b-instruct:1.3.0"
docker run --rm --gpus=all -e NGC_API_KEY=$NGC_API_KEY $IMG_NAME list-model-profiles
Assuming youâre in an environment with two (or more) H100 GPUs, you should see the following profiles available:
tensorrt_llm-h100-bf16-tp2-pp1-throughput
tensorrt_llm-h100-bf16-tp2-pp1-latency
Re-run the command and provide an additional environment variable to specify the desired profile:
docker run --rm --gpus=all \
-e NGC_API_KEY \
-e NIM_FT_MODEL=/opt/weights/hf/OpenMath2-Llama3.1-8B \
-e NIM_SERVED_MODEL_NAME=OpenMath2-Llama3.1-8B \
-e NIM_MODEL_PROFILE=tensorrt_llm-h100-bf16-tp2-pp1-latency \
-v $NIM_CACHE_PATH:/opt/nim/.cache \
-v $MODEL_WEIGHT_PARENT_DIRECTORY:/opt/weights/hf \
$IMG_NAME
Now that youâve relaunched your NIM microservice with the desired profile, use Python to interact with the model:
from openai import OpenAI
client = OpenAI(
base_url = "http://localhost:8000/v1",
api_key = "none"
)
completion = client.chat.completions.create(
model="llama-3.1-8b-instruct",
messages=[{"role":"user","content":"What is your name?"}],
temperature=0.2,
top_p=0.7,
max_tokens=100,
stream=True
)
for chunk in completion:
if chunk.choices[0].delta.content is not None:
print(chunk.choices[0].delta.content, end="")
Conclusion
Whether youâre using
PEFT
or SFT methods for model customization, NIM accelerates customized model deployment for high-performance inferencing in a few simple steps. With optimized TensorRT-LLM inference engines built automatically in your local environment, NIM is unlocking new possibilities for rapidly deploying accelerated AI inferencing anywhere.
Learn more and get started today by visiting the NVIDIA
API catalog
and checking out the
documentation
. To engage with NVIDIA and the NIM microservices community, see the NVIDIA
NIM developer forum
. | https://developer.nvidia.com/ja-jp/blog/deploying-fine-tuned-ai-models-with-nvidia-nim/ | NVIDIA NIM ã§ãã¡ã€ã³ãã¥ãŒãã³ã°ããã AI ã¢ãã«ã®ããã〠| Reading Time:
2
minutes
ãã¡ã€ã³åºæã®ããŒã¿ã§ AI åºç€ã¢ãã«ãé©å¿ãããŠããäŒæ¥ã«ãšã£ãŠããã¡ã€ã³ãã¥ãŒãã³ã°ãããã¢ãã«ãè¿
éã«äœæãããããã€ããèœåã¯ãäŒæ¥ã®çæ AI ã¢ããªã±ãŒã·ã§ã³ã§å¹ççã«äŸ¡å€ãæäŸããããã®éµãšãªããŸãã
NVIDIA NIM
ã¯ãParapeter-efficient Fine-tuning (PEFT) ãçšããŠã«ã¹ã¿ãã€ãºããã¢ãã«ã®
ã·ãŒã ã¬ã¹ãªãããã€
ãªã©ãææ°ã® AI åºç€ã¢ãã«åãã«ãã«ããããããã©ãŒãã³ã¹ãæé©åããæšè«ãã€ã¯ããµãŒãã¹ãæäŸããŸãã
å Žåã«ãã£ãŠã¯ãLow-rank Adaptation (LoRA) ã䜿çšãã PEFT ãšã¯ç°ãªããç¶ç¶äºååŠç¿ãDPOãæåž«ãããã¡ã€ã³ãã¥ãŒãã³ã° (SFT: Supervised Fine-tuning)ãã¢ãã« ããŒãžãªã©ã®ææ³ãå©çšããåºç€ãšãªãã¢ãã«ã®éã¿ããã¬ãŒãã³ã°ãã«ã¹ã¿ãã€ãºã®éçšã§çŽæ¥èª¿æŽããã®ãçæ³çã§ãããã®ãããªå Žåãæ°ããéã¿ãèæ
®ããæé©ãªããã©ãŒãã³ã¹ãå®çŸããã«ã¯ãã¢ãã«ã®æšè«ãœãããŠã§ã¢æ§æãæŽæ°ããå¿
èŠããããŸãã
ãã®é·æéãèŠããããã»ã¹ã«è² æ
ãå²ãã®ã§ã¯ãªããNIM ã¯ã調æŽãããã¢ãã«ãš GPU ã«åãããŠæé©åãã
TensorRT-LLM
æšè«ãšã³ãžã³ãããŒã«ã«ç°å¢ã§ãã«ãããããŒããããããåäžã¹ãããã®ã¢ãã« ããã〠ããã»ã¹ã®äžç°ãšããŠæšè«ãå®è¡ã§ããŸãã
ãã®æçš¿ã§ã¯ãããã©ãŒãã³ã¹ãæé©åãã TensorRT-LLM æšè«ãšã³ãžã³ãããŒã«ã«ã§ãã«ãããŠãSFT ã§ã«ã¹ã¿ãã€ãºãããã¢ãã«ã«å¯Ÿãã NIM ãã€ã¯ããµãŒãã¹ãè¿
éã«ãããã€ããæ¹æ³ã説æããŸããå¿
èŠãªã³ãã³ããšäŸ¿å©ãªãªãã·ã§ã³ãã玹ä»ããŸãã®ã§ãæ¯éä»ãããè©Šããã ããã
åææ¡ä»¶
ãã®ãã¥ãŒããªã¢ã«ãå®è¡ããã«ã¯ã80 GB ã® GPU ã¡ã¢ãªãæ〠NVIDIA ã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã°ç°å¢ãš
git-lfs
ã®ã€ã³ã¹ããŒã«ãå¿
èŠã§ãã
NVIDIA ã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã°ç°å¢ã§ãNIM ãã€ã¯ããµãŒãã¹ã pull ããŠãããã€ããã«ã¯ãNGC API ããŒãå¿
èŠã§ãã
NVIDIA API ã«ã¿ãã°ã®ã¢ãã«äžèŠ§ãã
Meta Llama 3 8B Instruct
ã«ç§»åããŸãã
å³äžã®
[Login]
ãéžæããæ瀺ã«åŸã£ãŠãã ããã
ãã°ã€ã³ãããã
ã¢ãã« ããŒãž
ã§
[Build with this NIM]
ãéžæããŸãã
[Self-Hosted API]
ãéžæããããããã®ãªãã·ã§ã³ã«åŸã£ãŠãNIM ãã€ã¯ããµãŒãã¹ãžã¢ã¯ã»ã¹ããŸãã
NVIDIA éçºè
ããã°ã©ã ã®ã¡ã³ããŒã§ããã°ãç 究ãéçºããã¹ãã«éã NIM ã«ç¡æã§ã¢ã¯ã»ã¹ããããšãã§ããŸãã
90 æ¥éã® NVIDIA AI Enterprise ã©ã€ã»ã³ã¹ã«ã¯ãNVIDIA Enterprise ãµããŒããžã®ã¢ã¯ã»ã¹ãå«ãŸããŠããŸãã
éžæããã¢ã¯ã»ã¹æ¹æ³ã«å¿
èŠãªè©³çŽ°æ
å ±ãæäŸããããNGC API ããŒãã³ããŒããŠãNIM ãé²ããæºåãããŸãã詳现ã«ã€ããŠã¯ã
Launch NVIDIA NIM for LLMs
ãåç
§ããŠãã ããã
NIM ãã€ã¯ããµãŒãã¹ãã¯ããã
å©çšäžã®ã³ã³ãã¥ãŒãã£ã³ã°ç°å¢ã®ç°å¢å€æ°ãšããŠãNGC API ããŒãæäŸããŸãã
export NGC_API_KEY=<<YOUR API KEY HERE>>
ãŸããæé©ååŠçäžã«ãã£ãã·ã¥ãšããŠäœ¿çšãããã£ã¬ã¯ããªãäœæããŠãããŒããã·ã§ã³ãå€æŽããŠãæå®ããå¿
èŠããããŸãã
export NIM_CACHE_PATH=/tmp/nim/.cache
mkdir -p $NIM_CACHE_PATH
chmod -R 777 $NIM_CACHE_PATH
NIM ã§ãã¡ã€ã³ãã¥ãŒãã³ã°ãããã¢ãã«ããããã€ããããã«ãæé©ãª TensorRT-LLM æšè«ãšã³ãžã³ãããŒã«ã«ã§ãã«ãããå®èšŒã«ã¯ãSFT ã«ãã£ãŠã«ã¹ã¿ãã€ãºããã¢ãã«ãå¿
èŠã§ãããã®ãã¥ãŒããªã¢ã«ã§ã¯ã
OpenMathInstruct-2
ããŒã¿ã»ããã䜿çšããŠã
Meta ã® Llama-3.1-8B
ãã«ã¹ã¿ãã€ãºãã
NVIDIA OpenMath2-Llama3.1-8B
ã¢ãã«ã䜿çšããŸãã
ããŒã¹ ã¢ãã«ã¯ãããŠã³ããŒãå¯èœãª NIM for LLMs ãšããŠå©çšå¯èœã§ãªããã°ãªããŸãããããŠã³ããŒãå¯èœãª NIM ãã€ã¯ããµãŒãã¹ã®è©³çŽ°ã«ã€ããŠã¯ãNVIDIA API ã«ã¿ãã°ã®ã
NIM Type: Run Anywhere filter
ããåç
§ããŠãã ããã
å¿
èŠãªã®ã¯ãã®ã¢ãã«ã®éã¿ã ãã§ãããã¯ããŸããŸãªæ¹æ³ããããŸãããã®æçš¿ã§ã¯ã以äžã®ã³ãã³ãã䜿çšããŠã¢ãã« ãªããžããªãã¯ããŒã³ããŸãã
git lfs install
git clone https://huggingface.co/nvidia/OpenMath2-Llama3.1-8B
export MODEL_WEIGHT_PARENT_DIRECTORY=$PWD
ããã§ã¢ãã«ã®éã¿ãåéã§ããã®ã§ã次ã®ã¹ãããã®ãã€ã¯ããµãŒãã¹ã®èµ·åã«é²ã¿ãŸãã
å©çšå¯èœãªããã©ãŒãã³ã¹ ãããã¡ã€ã«ããéžæãã
éžæããã¢ãã«ãšããŒããŠã§ã¢ã®æ§æã«åºã¥ããŠãå©çšå¯èœãªãã®ã®äžããæãé©åãªæšè«ããã©ãŒãã³ã¹ ãããã¡ã€ã«ãèªåçã«éžæãããŸããããŒã«ã«æšè«ãšã³ãžã³ã®çæã«ã¯ã以äžã® 2 ã€ã®ããã©ãŒãã³ã¹ ãããã¡ã€ã«ãå©çšã§ããŸãã
ã¬ã€ãã³ã·:
ã¬ã€ãã³ã·ã«æé©åããã NIM ãã€ã¯ããµãŒãã¹ã®æäŸã«éç¹ã眮ããŸãã
ã¹ã«ãŒããã:
ããã ã¹ã«ãŒãããã«æé©åããã NIM ãã€ã¯ããµãŒãã¹ã®æäŸã«éç¹ã眮ããŸãã
å©çšå¯èœãªç²ŸåºŠãªã©ããµããŒãæ©èœã®è©³çŽ°ã«ã€ããŠã¯ãNVIDIA NIM ããã¥ã¡ã³ãã®
ãµããŒãæ
å ±
ã®ãããã¯ãåç
§ããŠãã ããã
SFT ã¢ãã«ã䜿çšããäŸ
以äžã®ã³ãã³ããå®è¡ããŠãããŒã«ã«ç°å¢ã§ãã«ããã OpenMath2-Llama3.1-8B çšã® TensorRT-LLM æšè«ãšã³ãžã³ãäœæããŸãã
docker run -it --rm --gpus all \
--user $(id -u):$(id -g) \
--network=host \
--shm-size=32GB \
-e NGC_API_KEY \
-e NIM_FT_MODEL=/opt/weights/hf/OpenMath2-Llama3.1-8B \
-e NIM_SERVED_MODEL_NAME=OpenMath2-Llama3.1-8B \
-v $NIM_CACHE_PATH:/opt/nim/.cache \
-v $MODEL_WEIGHT_PARENT_DIRECTORY:/opt/weights/hf \
nvcr.io/nim/meta/llama-3.1-8b-instruct:1.3.0
ãã®ã³ãã³ãã¯ãNIM ãã€ã¯ããµãŒãã¹ããããã€ããããã«äœ¿çšããå
žåçãªã³ãã³ããšã»ãŒåãã§ãããã®å Žåãè¿œå ã® NIM_FT_MODEL ãã©ã¡ãŒã¿ãŒãè¿œå ããOpenMath2-Llama3.1-8B ã¢ãã«ãæããŠããŸãã
ããã«ãããNIM ã¯æé©åãããæšè«ãšã³ãžã³ãããŒã«ã«ç°å¢ã§ãã«ãããŸãããã®æ°ãã NIM ãã€ã¯ããµãŒãã¹ã䜿çšããŠæšè«ãè¡ãã«ã¯ã以äžã® Python ã³ãŒã ãµã³ãã«ãå®è¡ããŸãã
from openai import OpenAI
client = OpenAI(
base_url = "http://localhost:8000/v1",
api_key = "none"
)
completion = client.chat.completions.create(
model="OpenMath2-Llama3.1-8B",
messages=[{"role":"user","content":"What is your name?"}],
temperature=0.2,
top_p=0.7,
max_tokens=100,
stream=True
)
for chunk in completion:
if chunk.choices[0].delta.content is not None:
print(chunk.choices[0].delta.content, end="")
åç» 1. ãã¡ã€ã³ãã¥ãŒãã³ã°ããã AI ã¢ãã«ããããã€ããæ¹æ³
ã«ã¹ã¿ã ããã©ãŒãã³ã¹ ãããã¡ã€ã«ã§æé©åããã TensorRT-LLM ãšã³ãžã³ã®ãã«ã
ãµããŒããããŠãã GPU
ãªããåæ§ã®ã³ãã³ãã䜿çšããŠãNIM ãã€ã¯ããµãŒãã¹ãèµ·åã§ããŸãã
ã¢ãã« ãããã¡ã€ã«
ã®æé ã«åŸã£ãŠãã€ã¯ããµãŒãã¹ãèµ·åããã©ã®ãããã¡ã€ã«ã«ã¢ã¯ã»ã¹ã§ãããã確èªããŸãã
export IMG_NAME="nvcr.io/nim/meta/llama-3.1-8b-instruct:1.3.0"
docker run --rm --gpus=all -e NGC_API_KEY=$NGC_API_KEY $IMG_NAME list-model-profiles
H100 GPU ã䜿çšããŠãããšä»®å®ãããšã以äžã®ãããã¡ã€ã«ãå©çšå¯èœã§ããããšãããããŸãã
tensorrt_llm-h100-bf16-tp2-pp1-latency
tensorrt_llm-h100-bf16-tp1-pp1-throughput
ã³ãã³ããåå®è¡ããç®çã®ãããã¡ã€ã«ãæå®ããç°å¢å€æ°ãè¿œå ããŸãã
docker run --rm --gpus=all \
--user $(id -u):$(id -g) \
--network=host \
--shm-size=32GB \
-e NGC_API_KEY \
-e NIM_FT_MODEL=/opt/weights/hf/OpenMath2-Llama3.1-8B \
-e NIM_SERVED_MODEL_NAME=OpenMath2-Llama3.1-8B \
-e NIM_MODEL_PROFILE=tensorrt_llm-h100-bf16-tp2-pp1-latency \
-v $NIM_CACHE_PATH:/opt/nim/.cache \
-v $MODEL_WEIGHT_PARENT_DIRECTORY:/opt/weights/hf \
$IMG_NAME
ç®çã®ãããã¡ã€ã«ã§ NIM ãã€ã¯ããµãŒãã¹ãåèµ·åããã®ã§ãPython ã䜿çšããŠã¢ãã«ãšããåãããŸãã
from openai import OpenAI
client = OpenAI(
base_url = "http://localhost:8000/v1",
api_key = "none"
)
completion = client.chat.completions.create(
model="llama-3.1-8b-instruct",
messages=[{"role":"user","content":"What is your name?"}],
temperature=0.2,
top_p=0.7,
max_tokens=100,
stream=True
)
for chunk in completion:
if chunk.choices[0].delta.content is not None:
print(chunk.choices[0].delta.content, end="")
ãŸãšã
ã¢ãã«ã®ã«ã¹ã¿ãã€ãºã«
PEFT
ãŸã㯠SFT ã䜿çšããŠããå Žåã§ããNIM ã¯ãé«æ§èœãªæšè«ã®ããã«ã«ã¹ã¿ãã€ãºãããã¢ãã«ã®ãããã€ãããããªã¹ãããã§ç°¡åã«é«éåããŸããæé©åããã TensorRT-LLM æšè«ãšã³ãžã³ãããŒã«ã«ç°å¢ã§èªåçã«ãã«ãããããšã§ãNIM ã¯ãé«éåããã AI æšè«ãã©ãã«ã§ãè¿
éã«ãããã€ã§ããããæ°ããªå¯èœæ§ãåŒãåºããŠããŸãã詳现ã«ã€ããŠã¯ã
NVIDIA API ã«ã¿ãã°
ã«ã¢ã¯ã»ã¹ããNVIDIA NIM ããã¥ã¡ã³ãã®
ãã¡ã€ã³ãã¥ãŒãã³ã°ãããã¢ãã«ã®ãµããŒã
ãã芧ãã ããã
NVIDIA NIM éçºè
ãã©ãŒã©ã
ã§ã¯ãNVIDIA ããã³ NIM ãã€ã¯ããµãŒãã¹ ã³ãã¥ããã£ãšã®äº€æµããããšãã§ããŸãã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
Kubernetes çš Oracle ã³ã³ãã㌠ãšã³ãžã³ã䜿çšãã OCI ã® NVIDIA Nemotron LLM ã®ãã¡ã€ã³ãã¥ãŒãã³ã°ãšããã〠(Oracle æäŸ)
GTC ã»ãã·ã§ã³:
äŒæ¥ãå é: 次äžä»£ AI ãããã€ãå®çŸããããŒã«ãšãã¯ããã¯
GTC ã»ãã·ã§ã³:
NVIDIA NeMo ã«ããå€æ§ãªèšèªã§ã®åºç€ãšãªã倧èŠæš¡èšèªã¢ãã«ã®ã«ã¹ã¿ãã€ãº
NGC ã³ã³ãããŒ:
Phind-CodeLlama-34B-v2-Instruct
NGC ã³ã³ãããŒ:
Phi-3-Mini-4K-Instruct
NGC ã³ã³ãããŒ:
Mistral-NeMo-Minitron-8B-Instruct |
https://developer.nvidia.com/blog/mastering-llm-techniques-data-preprocessing/ | Mastering LLM Techniques: Data Preprocessing | The advent of
large language models (LLMs)
marks a significant shift in how industries leverage AI to enhance operations and services. By automating routine tasks and streamlining processes, LLMs free up human resources for more strategic endeavors, thus improving overall efficiency and productivity.
Training and
customizing LLMs
for high accuracy is fraught with challenges, primarily due to their dependency on high-quality data. Poor data quality and inadequate volume can significantly reduce model accuracy, making dataset preparation a critical task for AI developers.
Datasets frequently contain duplicate documents, personally identifiable information (PII), and formatting issues. Some datasets even house toxic or harmful information that poses risks to users. Training models on these datasets without proper processing can result in higher training time and lower model quality. Another significant challenge is the scarcity of data. Model builders are running out of publicly available data to train on, prompting many to turn to third-party vendors or generate synthetic data using advanced LLMs.
In this post, we will describe data processing techniques and best practices for optimizing LLM performance by improving data quality for training. We will introduce
NVIDIA NeMo Curator
and how it addresses these challenges, demonstrating real-world data processing use cases for LLMs.
Text processing pipelines and best practices
Dealing with the preprocessing of large data is nontrivial, especially when the dataset consists of mainly web-scraped data which is likely to contain large amounts of ill-formatted, low-quality data.
Figure 1. Text processing pipelines that can be built using NeMo Curator
Figure 1 shows a comprehensive text processing pipeline, including the following steps at a high-level:
Download the dataset from the source and extract to a desirable format such as JSONL.
Apply preliminary text cleaning, such as Unicode fixing and language separation.
Apply both standard and custom-defined filters to the dataset based on specific quality criteria.
Perform various levels of deduplication (exact, fuzzy, and semantic).
Selectively apply advanced quality filtering, including model-based quality filtering, PII redaction, distributed data classification, and task decontamination.
Blend curated datasets from multiple sources to form a unified dataset.
The sections below dive deeper into each of these stages.
Download and extract text
The initial step in data curation involves downloading and preparing datasets from various common sources such as Common Crawl, specialized collections such as arXiv and PubMed, or private on-premises datasets, each potentially containing terabytes of data.
This crucial phase requires careful consideration of storage formats and extraction methods, as publicly hosted datasets often come in compressed formats (for example, .warc.gz, tar.gz, or zip files) that need to be converted to more manageable formats (such as .jsonl or .parquet) for further processing.
Preliminary text cleaning
Unicode fixing and language identification represent crucial early steps in the data curation pipeline, particularly when dealing with large-scale web-scraped text corpora. This phase addresses two fundamental challenges: improperly decoded Unicode characters, and the presence of multiple languages within the dataset.
Unicode formatting issues often arise from incorrect character encoding or multiple encoding/decoding cycles. Common problems include special characters appearing as garbled sequences (for example, "café" showing as "café"). Language identification and separation are equally important, especially for curators who are interested in curating monolingual datasets. Moreover, some of the data curation steps, such as heuristic filtering and model-based quality classifiers, are language-specific.
This preliminary preprocessing step ensures clean, properly encoded text in identified languages, forming the foundation for all subsequent curation steps.
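As a concrete illustration, the following minimal sketch fixes mojibake and tags each document with a language code. It assumes the third-party ftfy and langdetect packages; these are illustrative choices, not necessarily the tools a production pipeline such as NeMo Curator uses internally:
import ftfy
from langdetect import detect

def clean_and_tag(documents):
    cleaned = []
    for text in documents:
        text = ftfy.fix_text(text)      # repairs mojibake and broken encodings
        try:
            lang = detect(text)         # ISO code such as "en", "vi", "ja"
        except Exception:
            lang = "unknown"            # too little text to identify reliably
        cleaned.append({"text": text, "language": lang})
    return cleaned

print(clean_and_tag(["The cafÃ© is open.", "Xin chào thế giới!"]))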
Heuristic filtering
Heuristic filtering employs rule-based metrics and statistical measures to identify and remove low-quality content.
The process typically evaluates multiple quality dimensions, such as document length, repetition patterns, punctuation distribution, and structural integrity of the text. Common heuristic filters include:
Word count filter:
Filters out snippets that are too brief to be meaningful or suspiciously long.
Boilerplate string filter:
Identifies and removes text containing excessive boilerplate content.
N-gram repetition filter:
Identifies repeated phrases at different lengths and removes documents with excessive repetition that might indicate low-quality or artificially generated content.
For heuristic filtering, the best practice is to implement a cascading approach. This enables more nuanced quality control while maintaining transparency in the filtering process. For improved performance, batch filtering can be implemented to process multiple documents simultaneously, significantly reducing computation time when dealing with large-scale datasets.
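For illustration, here is a minimal sketch of two such filters (word count and repeated n-grams) arranged as a cascade; the thresholds are arbitrary examples rather than recommended values:
def passes_word_count(text, min_words=50, max_words=100_000):
    n = len(text.split())
    return min_words <= n <= max_words

def repeated_ngram_fraction(text, n=3):
    # Fraction of n-grams that are duplicates; high values suggest repetitive, low-quality text.
    words = text.split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return 0.0
    return 1.0 - len(set(ngrams)) / len(ngrams)

def passes_heuristics(text):
    # Cascade: run the cheapest check first, then the more expensive one.
    return passes_word_count(text) and repeated_ngram_fraction(text) < 0.2

good = " ".join(f"token{i}" for i in range(200))   # long and varied
bad = "buy now " * 100                             # long but highly repetitive
print(passes_heuristics(good), passes_heuristics(bad))   # True False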
Deduplication
Deduplication is essential for improving model training efficiency, reducing computational costs, and ensuring data diversity. It helps prevent models from overfitting to repeated content and improves generalization. The process can be implemented through three main approaches: exact, fuzzy, and semantic deduplication. These form a comprehensive strategy for handling different types of duplicates in large-scale datasets, from identical copies to conceptually similar content.
Exact deduplication
Exact deduplication focuses on identifying and removing completely identical documents. This method generates hash signatures for each document and groups documents by their hashes into buckets, keeping only one document per bucket. While this method is computationally efficient, fast and reliable, itâs limited to detecting perfectly matching content and may miss semantically equivalent documents with minor variations.
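A minimal sketch of this hash-and-bucket approach, using MD5 as the signature function, might look like the following:
import hashlib

def exact_dedup(documents):
    seen = set()
    unique_docs = []
    for doc in documents:
        digest = hashlib.md5(doc.encode("utf-8")).hexdigest()   # hash signature per document
        if digest not in seen:                                  # keep one document per bucket
            seen.add(digest)
            unique_docs.append(doc)
    return unique_docs

print(exact_dedup(["same text", "same text", "different text"]))
# ['same text', 'different text']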
Fuzzy deduplication
Fuzzy deduplication addresses near-duplicate content using MinHash signatures and Locality-Sensitive Hashing (LSH) to identify similar documents.
The process involves the following steps:
Compute MinHash signatures for documents.
Use LSH to group similar documents into buckets. One document might belong to one or more buckets.
Compute Jaccard similarity between documents within the same buckets.
Based on the Jaccard similarity, transform the similarity matrix to a graph and identify connected components in the graph.
Documents within a connected component are considered fuzzy duplicates.
Remove identified duplicates from the dataset.
This method is particularly valuable for identifying content with minor modifications, detecting partial document overlaps, and finding documents with different formatting but similar content. It strikes a balance between computational efficiency and duplicate detection capability.
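The sketch below illustrates the MinHash and LSH steps using the open source datasketch package; this library choice is an assumption for illustration (NeMo Curator ships its own GPU-accelerated implementation of the same idea):
from datasketch import MinHash, MinHashLSH

def minhash(text, num_perm=128):
    m = MinHash(num_perm=num_perm)
    for token in set(text.lower().split()):
        m.update(token.encode("utf-8"))
    return m

docs = {
    "a": "the quick brown fox jumps over the lazy dog",
    "b": "the quick brown fox jumped over a lazy dog",    # near duplicate of "a"
    "c": "an entirely unrelated sentence about data curation",
}

signatures = {key: minhash(text) for key, text in docs.items()}
lsh = MinHashLSH(threshold=0.5, num_perm=128)    # groups candidate duplicates into buckets
for key, sig in signatures.items():
    lsh.insert(key, sig)

# Documents bucketed with "a" are fuzzy-duplicate candidates; the estimated
# Jaccard similarity is then used to confirm or reject them.
for key in lsh.query(signatures["a"]):
    print(key, signatures["a"].jaccard(signatures[key]))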
Semantic deduplication
Semantic deduplication represents the most sophisticated approach, employing advanced embedding models to capture semantic meaning combined with clustering techniques to group semantically similar content. Research has shown that semantic deduplication can effectively reduce dataset size while maintaining or improving model performance. Itâs especially valuable for identifying paraphrased content, translated versions of the same material, and conceptually identical information.
Semantic deduplication consists of the following steps:
Each data point is embedded using a pretrained model.
The embeddings are clustered into k clusters using k-means clustering.
Within each cluster, pairwise cosine similarities are computed.
Data pairs with cosine similarity above a threshold are considered semantic duplicates.
From each group of semantic duplicates within a cluster, one representative datapoint is kept and the rest are removed.
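The steps above can be condensed into a small sketch. The embedding model name, cluster count, and similarity threshold below are illustrative assumptions, and sentence-transformers plus scikit-learn stand in for whatever embedding and clustering stack you actually use:
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def semantic_dedup(docs, n_clusters=2, threshold=0.9):
    model = SentenceTransformer("all-MiniLM-L6-v2")          # example pretrained embedder
    emb = model.encode(docs, normalize_embeddings=True)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(emb)

    keep = []
    for cluster in set(labels):
        idx = np.where(labels == cluster)[0]
        sims = cosine_similarity(emb[idx])
        removed = set()
        for i in range(len(idx)):
            if i in removed:
                continue
            keep.append(docs[idx[i]])                        # representative of its duplicate group
            for j in range(i + 1, len(idx)):
                if sims[i, j] > threshold:                   # semantic duplicate of the representative
                    removed.add(j)
    return keep

print(semantic_dedup([
    "The cat sat on the mat.",
    "A cat was sitting on the mat.",
    "Quarterly revenue grew by ten percent.",
]))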
Model-based quality filtering
Model-based quality filtering employs various types of models to evaluate and filter content based on quality metrics. The choice of model type significantly impacts both the effectiveness of filtering and the computational resources required, making it crucial to select the appropriate model for specific use cases.
Different types of models that can be used for quality filtering include:
N-gram based classifiers:
The simplest approach uses n-gram based bag-of-words classifiers like fastText, which excel in efficiency and practicality, as they require minimal training data (100,000 to 1,000,000 samples).
BERT-style classifiers:
BERT-style classifiers represent a middle-ground approach, offering better quality assessment through Transformer-based architectures. They can capture more complex linguistic patterns and contextual relationships, making them effective for quality assessment.
LLMs:
LLMs provide the most sophisticated quality assessment capabilities, leveraging their extensive knowledge to evaluate text quality. While they offer superior understanding of content quality, they have significant computational requirements, so they are best suited for smaller-scale applications, such as fine-tuning datasets.
Reward models:
Reward models represent a specialized category designed specifically for evaluating conversational data quality. These models can assess multiple quality dimensions simultaneously but similar to LLMs, they have significant computational requirements.
The optimal selection of quality filtering models should consider both the dataset scale and available computational resources. For large-scale pretraining datasets, combining lightweight models for initial filtering with advanced models for final quality assessment often provides the best balance of efficiency and effectiveness. For smaller, specialized datasets where quality is crucial, using models like LLMs or reward models becomes more feasible and beneficial.
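As a small illustration of the lightweight end of this spectrum, the sketch below applies a fastText classifier as a quality gate. The model file and its label are hypothetical placeholders for whatever classifier you have trained on your own quality criteria:
import fasttext

model = fasttext.load_model("quality_classifier.bin")   # hypothetical trained classifier

def is_high_quality(text, min_confidence=0.7):
    labels, probs = model.predict(text.replace("\n", " "), k=1)
    return labels[0] == "__label__high_quality" and probs[0] >= min_confidence

docs = ["A well-structured paragraph about astronomy ...", "click here!!! free $$$"]
high_quality_docs = [d for d in docs if is_high_quality(d)]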
PII redaction
Personally Identifiable Information (PII) redaction involves identifying and removing sensitive information from datasets to protect individual privacy and ensure compliance with data protection regulations.
This process is particularly important when dealing with datasets that contain personal information, from direct identifiers like names and social security numbers to indirect identifiers that could be used to identify individuals when combined with other data.
Modern PII redaction employs various techniques to protect sensitive information, including:
Replacing sensitive information with symbols (for example, XXX-XX-1234 for U.S. Social Security Numbers) while maintaining data format and structure.
Substituting sensitive data with non-sensitive equivalents that maintain referential integrity for analysis purposes.
Eliminating sensitive information when its presence is not necessary for downstream tasks.
Overall, PII redaction helps maintain data privacy, comply with regulations, and build trust with users while preserving the utility of their datasets for training and analysis purposes.
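A minimal regex-based sketch of the masking technique looks like the following; real pipelines typically combine such patterns with NER-based detectors, and the two patterns below only cover obvious identifier formats:
import re

PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-(\d{4})\b"), r"XXX-XX-\1"),       # US SSN: keep format, mask leading digits
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),      # email addresses
]

def redact(text):
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# Contact <EMAIL>, SSN XXX-XX-6789.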
Distributed data classification
Data classification plays a vital role in data curation. This process helps organize and categorize data based on various attributes such as domain and quality, ensuring data is well-balanced and representative of different knowledge domains.
Domain classification helps LLMs understand the context and specific domain of input text by identifying and categorizing content based on subject matter. The domain information serves as valuable auxiliary data, enabling developers to build more diverse training datasets while identifying and filtering out potentially harmful or unwanted content. For example, using the AEGIS Safety Model, which classifies content into 13 critical risk categories, developers can effectively identify and filter harmful content from training data.
When dealing with pretraining corpora that often contain billions of documents, running inference for classification becomes computationally intensive and time-consuming. Therefore, distributed data classification is necessary to overcome these challenges. This is achieved by chunking the datasets across multiple GPU nodes to accelerate the classification task in a distributed manner.
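A minimal sketch of that chunk-and-distribute pattern, using Dask as an illustrative scheduler and a placeholder classifier function, might look like this:
import dask.bag as db
from dask.distributed import Client

def classify_domain(doc):
    # Placeholder: call your real domain/quality/safety classifier here (typically GPU-based).
    return {"text": doc, "domain": "finance" if "market" in doc else "other"}

if __name__ == "__main__":
    client = Client()          # or Client("scheduler-address:8786") on a real multinode cluster
    docs = ["markets rallied today", "how to bake sourdough", "GPU kernels explained"]
    bag = db.from_sequence(docs, npartitions=3)        # partitions are processed in parallel
    print(bag.map(classify_domain).compute())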
Task decontamination
After training, LLMs are usually evaluated by their performance on downstream tasks consisting of unseen test data. Downstream task decontamination is a step that addresses the potential leakage of test data into training datasets, which can provide misleading evaluation results. The decontamination process typically involves several key steps:
Identifying potential downstream tasks and their test sets.
Converting test data into n-gram representations.
Searching for matching n-grams in the training corpus.
Removing or modifying contaminated sections while preserving document coherence.
This systematic approach helps ensure the effectiveness of decontamination while minimizing unintended impacts on data quality, ultimately contributing to more reliable model evaluation and development.
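A minimal sketch of the n-gram matching step follows; the n-gram size and the decision to drop whole documents (rather than surgically removing only the overlapping span) are simplifying assumptions:
def ngrams(text, n=13):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def decontaminate(train_docs, test_docs, n=13):
    test_ngrams = set()
    for doc in test_docs:
        test_ngrams |= ngrams(doc, n)
    clean = []
    for doc in train_docs:
        if ngrams(doc, n) & test_ngrams:     # overlap indicates potential test-set leakage
            continue
        clean.append(doc)
    return clean

test = ["what is the capital of france paris is the capital"]
train = ["trivia: what is the capital of france paris is the capital of france",
         "an unrelated training document"]
print(decontaminate(train, test, n=5))   # keeps only the unrelated document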
Blending and shuffling
Data blending and shuffling represent the final steps in the data curation pipeline, combining multiple curated datasets while ensuring proper randomization for optimal model training. This process is essential for creating diverse, well-balanced training datasets that enable better model generalization and performance. Data blending involves merging data from multiple sources into a unified dataset, creating more comprehensive and diverse training data. The blending process is implemented using two approaches:
Online: Data combination occurs during training
Offline: Datasets are combined before training
Each approach offers distinct advantages depending on the specific requirements of the training process and the intended use of the final dataset.
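A minimal sketch of the offline approach, with arbitrary example weights, is shown below:
import random

def blend(sources, weights, total, seed=0):
    rng = random.Random(seed)
    blended = []
    for docs, weight in zip(sources, weights):
        k = int(total * weight)
        blended.extend(rng.choices(docs, k=k))   # sample each source according to its weight
    rng.shuffle(blended)                         # randomize ordering for training
    return blended

web = [f"web_{i}" for i in range(1000)]
code = [f"code_{i}" for i in range(1000)]
papers = [f"paper_{i}" for i in range(1000)]
print(blend([web, code, papers], weights=[0.6, 0.2, 0.2], total=10))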
Synthetic data generation
Having navigated the intricacies of the preprocessing stage, we now confront a formidable challenge in the realm of LLM development: the scarcity of data. The insatiable appetite of LLMs for vast training datasets, even for fine-tuning purposes, frequently outstrips the availability of domain-specific or language-particular data. To this end,
synthetic data generation (SDG)
is a powerful approach that leverages LLMs to create artificial datasets that mimic real-world data characteristics while maintaining privacy and ensuring data utility. This process uses external LLM services to generate high-quality, diverse, and contextually relevant data that can be used for pretraining, fine-tuning, or evaluating other models.
SDG empowers LLMs by enabling adaptation to low-resource languages, supporting domain specialization, and facilitating knowledge distillation across models, making it a versatile tool for expanding model capabilities. SDG has become particularly valuable in scenarios where real data is scarce, sensitive, or difficult to obtain.
Figure 2. General synthetic data generation architecture with NeMo Curator
The synthetic data pipeline encompasses three key stages: Generate, Critique, and Filter.
Generate:
Use prompt engineering to generate synthetic data for various tasks. Taking
Nemotron-4
as an example, SDG is applied to generate training data for five different types of tasks: open-ended QA, closed-ended QA, writing assignments, coding, and math problems.
Critique:
Use methods like LLM reflection, LLM-as-judge, reward model inference, and other agents to evaluate the quality of synthetic data. The evaluation results can be used as feedback to SDG LLM to generate better results or filter out low quality data. A prime example is the
Nemotron-4-340B reward NIM
, which assesses data quality through five key attributes: Helpfulness, Correctness, Coherence, Complexity, and Verbosity. By setting appropriate thresholds for these attribute scores, the filtering process ensures that only high-quality synthetic data is retained, while filtering out low-quality or inappropriate content.
Filter:
Steps like deduplication and PII redaction to further improve SDG data quality.
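To make the three stages concrete, here is a minimal sketch of a generate-critique-filter loop built on OpenAI-compatible endpoints. The base URLs, model names, and the numeric-score prompt are placeholders; point them at whatever generation and judge/reward services you actually deploy:
from openai import OpenAI

generator = OpenAI(base_url="http://localhost:8000/v1", api_key="none")   # generation LLM
judge = OpenAI(base_url="http://localhost:8001/v1", api_key="none")       # judge / reward LLM

def generate_sample(topic):
    resp = generator.chat.completions.create(
        model="generator-llm",
        messages=[{"role": "user", "content": f"Write one question and answer about {topic}."}],
        temperature=0.8,
        max_tokens=200,
    )
    return resp.choices[0].message.content

def critique_sample(sample):
    resp = judge.chat.completions.create(
        model="judge-llm",
        messages=[{"role": "user",
                   "content": f"Rate this Q&A from 1-5 for correctness. Reply with a number only.\n{sample}"}],
        temperature=0.0,
        max_tokens=5,
    )
    text = resp.choices[0].message.content.strip()
    try:
        return int(text.split()[0])
    except ValueError:
        return 0                      # treat unparseable critiques as low quality

synthetic = [generate_sample("basic algebra") for _ in range(5)]
kept = [s for s in synthetic if critique_sample(s) >= 4]   # Filter stage: keep only high-scoring samples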
Note, however, that SDG is not suitable in all cases. Hallucinations from external LLMs can introduce unreliable information, compromising data integrity. Additionally, the generated data's distribution may not align with the target distribution, potentially leading to poor real-world performance. In such cases, using SDG could actually harm the system's effectiveness rather than improve it.
Data processing for building sovereign LLMs
As noted previously, open-source LLMs excel in English but struggle with other languages, especially those of Southeast Asia. This is primarily due to a lack of training data in these languages, limited understanding of local cultures, and insufficient tokens to capture unique linguistic structures and expressions.
To fully meet customer needs, enterprises in non-English-speaking countries must go beyond generic models and customize them to capture the nuances of their local languages, ensuring a seamless and impactful customer experience. For example, using NeMo Curator, Viettel Solutions processed
high-quality Vietnamese data
to increase accuracy by 10%, reduce the dataset size by 60% and accelerate training time by 3x.
The main steps for this use case are:
Download several Vietnamese and multilingual datasets (Wikipedia, Vietnamese news corpus,
OSCAR
, and C4) and convert to Parquet for efficient handling and processing of large datasets.
Combine, standardize, and shard into a single dataset
Apply unicode reformatting, exact deduplication, quality filtering (heuristic and classifier-based).
You can
follow along with the full tutorial
.
Improve data quality with NVIDIA NeMo Curator
So far, we have discussed the importance of data quality in improving the accuracy of LLMs and explored various data processing techniques. Developers can now try these techniques directly through
NeMo Curator
. It provides a customizable and modular interface that enables developers to build on top of it easily.
NeMo Curator uses NVIDIA RAPIDS GPU-accelerated libraries like cuDF, cuML, and cuGraph, and Dask to speed up workloads on multinode multi-GPUs, reducing processing time and scale as needed. For example, by using GPUs to accelerate the data processing pipelines,
Zyphra reduced the total cost of ownership (TCO)
by 50% and processed the data 10x faster (from 3 weeks to 2 days).
To get started, check out the
NVIDIA/NeMo-Curator GitHub repository
and available
tutorials
that cover various data curation workflows, such as:
Data processing for pretraining
Data processing for customization
SDG pipelines
You can also gain access through a
NeMo framework container
and request enterprise support with an
NVIDIA AI Enterprise
license. | https://developer.nvidia.com/ja-jp/blog/mastering-llm-techniques-data-preprocessing/ | LLM ãã¯ããã¯ã®ç¿åŸ: ããŒã¿ã®ååŠç | Reading Time:
2
minutes
倧èŠæš¡èšèªã¢ãã« (LLM)
ã®åºçŸã¯ãäŒæ¥ã AI ã掻çšããŠæ¥åãšãµãŒãã¹ã匷åããæ¹æ³ã«å€§ããªå€åããããããŸãããLLM ã¯æ¥åžžçãªäœæ¥ãèªååããããã»ã¹ãåçåããããšã§ã人çãªãœãŒã¹ãããæŠç¥çãªåãçµã¿ã«å²ãåœãŠãããšã§ãå
šäœçãªå¹çæ§ãšçç£æ§ãåäžãããŸãã
LLM ãé«ç²ŸåºŠã«ãã¬ãŒãã³ã°ããã³
ã«ã¹ã¿ãã€ãº
ããã«ã¯ãé«å質ãªããŒã¿ãå¿
èŠãšãªããããå€ãã®èª²é¡ã䌎ããŸããããŒã¿ã®è³ªãäœããéãååã§ãªããšãã¢ãã«ã®ç²ŸåºŠã倧å¹
ã«äœäžããå¯èœæ§ããããããAI éçºè
ã«ãšã£ãŠããŒã¿ã»ããã®æºåã¯éèŠãªäœæ¥ã® 1 ã€ãšãªã£ãŠããŸãã
ããŒã¿ã»ããã«ã¯åŸã
ã«ããŠéè€ããããã¥ã¡ã³ããå人ãç¹å®ã§ããæ
å ± (PII)ããã©ãŒãããã«é¢ããåé¡ãååšããŸããããŒã¿ã»ããã®äžã«ã¯ããŠãŒã¶ãŒã«ãªã¹ã¯ãããããæ害ãªæ
å ±ãäžé©åãªæ
å ±ãå«ãŸããŠãããã®ãããããŸããé©åãªåŠçãè¡ããã«ãããã£ãããŒã¿ã»ããã§ã¢ãã«ããã¬ãŒãã³ã°ãããšããã¬ãŒãã³ã°æéãé·åŒããããã¢ãã«ã®å質ãäœäžããå ŽåããããŸãããã 1 ã€ã®å€§ããªèª²é¡ã¯ããŒã¿ã®äžè¶³ã§ããã¢ãã«éçºè
ã¯ãã¬ãŒãã³ã°çšã®å
¬éããŒã¿ã䜿ãæããã€ã€ãããå€ãã®äººã
ããµãŒãããŒãã£ã®ãã³ããŒã«äŸé Œããããé«åºŠãª LLM ã䜿çšããŠåæããŒã¿ãçæãããããããã«ãªã£ãŠããŸãã
ãã®èšäºã§ã¯ããã¬ãŒãã³ã°çšã®ããŒã¿ã®å質ãåäžããããšã§ LLM ã®ããã©ãŒãã³ã¹ãæé©åããããã®ããŒã¿åŠçãã¯ããã¯ãšãã¹ã ãã©ã¯ãã£ã¹ã«ã€ããŠèª¬æããŸãããŸãã
NVIDIA NeMo Curator
ã®æŠèŠããã³åè¿°ãã課é¡ãžã®å¯ŸåŠæ¹æ³ã説æããLLM ã®å®éã®ããŒã¿åŠçã®ãŠãŒã¹ ã±ãŒã¹ãã玹ä»ããŸãã
ããã¹ãåŠçãã€ãã©ã€ã³ãšãã¹ã ãã©ã¯ãã£ã¹
倧èŠæš¡ããŒã¿ã®ååŠçã¯å®¹æã§ã¯ãããŸãããç¹ã«ãããŒã¿ã»ãããäž»ã«Web ã¹ã¯ã¬ã€ãã³ã°ãããããŒã¿ã§æ§æãããŠããã倧éã®äžé©åãªãã©ãŒãããã®äœå質ããŒã¿ãå«ãŸããŠããå¯èœæ§ãé«ãå Žåã¯ãªãããã§ãã
å³ 1. NeMo Curator ã䜿çšããŠæ§ç¯ã§ããããã¹ãåŠçãã€ãã©ã€ã³
å³ 1 ã¯ã以äžã®æé ãå«ãå
æ¬çãªããã¹ãåŠçãã€ãã©ã€ã³ã®æŠèŠã瀺ããŠããŸãã
ãœãŒã¹ããããŒã¿ã»ãããããŠã³ããŒãããJSONL ãªã©ã®æãŸãããã©ãŒãããã§æœåºããŸãã
Unicode ã®ä¿®æ£ãèšèªã«ããåé¡ãªã©ãäºåçãªããã¹ã ã¯ãªãŒãã³ã°ãé©çšããŸãã
ç¹å®ã®å質åºæºã«åºã¥ããŠãæšæºçãªãã£ã«ã¿ãŒãšã«ã¹ã¿ã å®çŸ©ã®ãã£ã«ã¿ãŒã®äž¡æ¹ãããŒã¿ã»ããã«é©çšããŸãã
ããŸããŸãªã¬ãã«ã®éè€æé€ (å³å¯ãææ§ãæå³ç) ãå®è¡ããŸãã
ã¢ãã«ããŒã¹ã®å質ãã£ã«ã¿ãªã³ã°ãå人æ
å ± (PII) ã®åé€ã(åæ£åŠçã«ãã) ããŒã¿åé¡ãäžæµã¿ã¹ã¯ã®æ±æé€å»ãªã©ã®é«åºŠãªå質ãã£ã«ã¿ãªã³ã°ãå¿
èŠã«å¿ããŠéžæçã«é©çšããŸãã
è€æ°ã®ãœãŒã¹ããåéããã粟éžãããããŒã¿ã»ãããäžäœåããçµ±åããããŒã¿ã»ãããäœæããŸãã
以äžã®ã»ã¯ã·ã§ã³ã§ã¯ããããã®å段éã«ã€ããŠè©³ãã説æããŸãã
ããã¹ããããŠã³ããŒãããŠæœåº
ããŒã¿ ãã¥ã¬ãŒã·ã§ã³ã®æåã®ã¹ãããã§ã¯ã Common Crawl ã®ãããªãããŸããŸãªäžè¬çãªãœãŒã¹ãarXiv ã PubMed ãªã©ã®å°éçãªã³ã¬ã¯ã·ã§ã³ãèªç€Ÿä¿æã®ãã©ã€ããŒã ããŒã¿ãªã©ããããŒã¿ã»ãããããŠã³ããŒãããŠæºåããŸãããããã®ããŒã¿ã»ããã«ã¯ããããããã©ãã€ãåäœã®ããŒã¿ãå«ãŸããŠããå¯èœæ§ããããŸãã
ãã®éèŠãªãã§ãŒãºã§ã¯ãä¿å圢åŒãšæœåºæ¹æ³ãæ
éã«æ€èšããå¿
èŠããããŸããäžè¬ã«å
¬éãããã¹ããããŠããããŒã¿ã»ããã¯å§çž®åœ¢åŒ (äŸ: .warc.gzãtar.gzãzip ãã¡ã€ã«) ã§æäŸãããããšãå€ããããåŸç¶ã®åŠçã®ããã«ããæ±ããããåœ¢åŒ (.jsonl ã .parquet ãªã©) ã«å€æããå¿
èŠããããŸãã
äºåçãªããã¹ã ã¯ãªãŒãã³ã°
Unicode ã®ä¿®æ£ãšèšèªã«ããåé¡ã¯ãç¹ã«å€§èŠæš¡ãª Web ã¹ã¯ã¬ã€ãã³ã°ã«ããããã¹ã ã³ãŒãã¹ãæ±ãå ŽåãããŒã¿ ãã¥ã¬ãŒã·ã§ã³ ãã€ãã©ã€ã³ã®éèŠãªåæã¹ãããã§ãããã®ãã§ãŒãºã§ã¯ãäžé©åã«ãã³ãŒãããã Unicode æåãšãããŒã¿ã»ããå
ã«è€æ°ã®èšèªãååšãããšãã 2 ã€ã®åºæ¬çãªèª²é¡ã«å¯ŸåŠããŸãã
Unicode 圢åŒã«é¢ããåé¡ã¯ãå€ãã®å Žåãæåãšã³ã³ãŒãã®èª€ããããšã³ã³ãŒã/ãã³ãŒã ãµã€ã¯ã«ãè€æ°åå®è¡ãããããšã«ãã£ãŠçºçããŸããããããåé¡ãšããŠã¯ãç¹æ®æåãæååãããæåå (äŸ:ãcaféãããcaféããšè¡šç€ºããã) ãšããŠè¡šç€ºãããããšãæããããŸããèšèªã®èå¥ãšåé¡ã¯ãç¹ã«åäžèšèªã®ããŒã¿ã»ããã®ãã¥ã¬ãŒã·ã§ã³ã«é¢å¿ã®ããéçºè
ã«ãšã£ãŠã¯åæ§ã«éèŠã§ããããã«ããã¥ãŒãªã¹ãã£ã㯠ãã£ã«ã¿ãªã³ã°ãã¢ãã«ããŒã¹ã®å質åé¡åšãªã©ã®ããŒã¿ ãã¥ã¬ãŒã·ã§ã³ã®ã¹ãããã®äžéšã¯èšèªã«äŸåããŠããŸãã
ãã®äºåçãªååŠçã¹ãããã§ã¯ãèå¥ãããèšèªã§é©åã«ãšã³ã³ãŒããããã¯ãªãŒã³ãªããã¹ãã確ä¿ããããã®åŸã®ãã¥ã¬ãŒã·ã§ã³ã¹ãããã®åºç€ãšãªããŸãã
ãã¥ãŒãªã¹ãã£ã㯠ãã£ã«ã¿ãªã³ã°
ãã¥ãŒãªã¹ãã£ã㯠ãã£ã«ã¿ãªã³ã°ã§ã¯ãã«ãŒã«ããŒã¹ã®è©äŸ¡ææšãšçµ±èšç尺床ã䜿çšããŠãäœå質ãªã³ã³ãã³ããç¹å®ããåé€ããŸãã
ãã®ããã»ã¹ã¯éåžžãããã¥ã¡ã³ãã®é·ããç¹°ãè¿ããã¿ãŒã³ãå¥èªç¹ã®ååžãããã¹ãã®æ§é çæŽåæ§ãªã©ãè€æ°ã®å質åºæºã§è©äŸ¡ãããŸããäžè¬çãªãã¥ãŒãªã¹ãã£ã㯠ãã£ã«ã¿ãŒã«ã¯ä»¥äžã®ãããªãã®ããããŸãã
åèªæ°ãã£ã«ã¿ãŒ:
æå³ããªããªãã»ã©çãããããŸãã¯çãããã»ã©ã«é·ãããããã¹ãããã£ã«ã¿ãªã³ã°ããŸãã
å®åæãã£ã«ã¿ãŒ:
éå°ãªå®åæãå«ãããã¹ããç¹å®ããåé€ããŸãã
N-gram å埩ãã£ã«ã¿ãŒ:
ç°ãªãé·ãã§ç¹°ãè¿ããããã¬ãŒãºãç¹å®ããäœå質ãŸãã¯äººå·¥çã«çæãããã³ã³ãã³ãã§ããå¯èœæ§ãããéå°ãªå埩ãå«ãææžãåé€ããŸãã
ãã¥ãŒãªã¹ãã£ã㯠ãã£ã«ã¿ãªã³ã°ã®å Žåã¯ãã«ã¹ã±ãŒã ã¢ãããŒããæ¡ãã®ãæåã®æ¹æ³ã§ããããã«ããããã£ã«ã¿ãªã³ã° ããã»ã¹ã®éææ§ãç¶æããªãããããç¹çŽ°ãªå質管çãå¯èœã«ãªããŸããåŠçããã©ãŒãã³ã¹ãåäžãããããã«ãããã ãã£ã«ã¿ãªã³ã°ãæ¡çšããŠè€æ°ã®ããã¥ã¡ã³ããåæã«åŠçãããšå€§èŠæš¡ãªããŒã¿ã»ãããæ±ãéã®èšç®æéã倧å¹
ã«ççž®ããããšãã§ããŸãã
éè€æé€
éè€æé€ã¯ãã¢ãã«ã®ãã¬ãŒãã³ã°å¹çã®åäžãèšç®ã³ã¹ãã®åæžãããŒã¿ã®å€æ§æ§ã®ç¢ºä¿ã«äžå¯æ¬ ã§ããç¹°ãè¿ãåºçŸããã³ã³ãã³ãã«ã¢ãã«ãéå°é©åããã®ãé²ããæ±çšæ§ãé«ããŸãããã®ããã»ã¹ã¯ãå³å¯ãææ§ãæå³ãšãã 3 ã€ã®äž»ãªéè€æé€ã¢ãããŒããéããŠå®è£
ã§ããŸãããããã¯ãåäžã®ã³ããŒããæŠå¿µçã«é¡äŒŒããã³ã³ãã³ããŸã§ã倧èŠæš¡ããŒã¿ã»ããå
ã®ç°ãªãã¿ã€ãã®éè€ãåŠçããå
æ¬çãªæŠç¥ã圢æããŸãã
å³å¯ãªéè€æé€
å³å¯ãªéè€æé€ã¯ãå®å
šã«åäžã®ããã¥ã¡ã³ããèå¥ããåé€ããããšã«éç¹ã眮ããŠããŸãããã®æ¹æ³ã§ã¯ãããã¥ã¡ã³ãããšã«ããã·ã¥çœ²åãçæããããã·ã¥ããšã«ããã¥ã¡ã³ããã°ã«ãŒãåããŠãã±ããã«æ ŒçŽãããã±ããããšã« 1 ã€ã®ããã¥ã¡ã³ãã®ã¿ãæ®ããŸãããã®æ¹æ³ã¯èšç®å¹çãé«ããé«éãã€ä¿¡é Œæ§ãé«ãã®ã§ãããå®å
šã«äžèŽããã³ã³ãã³ãã®æ€åºã«éå®ããããããæå³çã«ã¯åçãªã®ã«ãããã«ç°ãªãææžãèŠéãå¯èœæ§ããããŸãã
ææ§ãªéè€æé€
ææ§ãªéè€æé€ã¯ãMinHash 眲åãšå±ææ§éæåããã·ã¥å (LSH: Locality-Sensitive Hashing) ã䜿çšããŠãé¡äŒŒããããã¥ã¡ã³ããèå¥ããã»ãŒéè€ããã³ã³ãã³ãã«å¯ŸåŠããŸãã
ãã®ããã»ã¹ã«ã¯ã以äžã®ã¹ããããå«ãŸããŸãã
ããã¥ã¡ã³ãã® MinHash 眲åãèšç®ããŸãã
LSH ã䜿çšããŠãé¡äŒŒããããã¥ã¡ã³ãããã±ããã«ã°ã«ãŒãåããŸãã1 ã€ã®ããã¥ã¡ã³ãã 1 ã€ä»¥äžã®ãã±ããã«å±ããå ŽåããããŸãã
åããã±ããå
ã®ããã¥ã¡ã³ãé㧠Jaccard é¡äŒŒåºŠãèšç®ããŸãã
Jaccard é¡äŒŒåºŠã«åºã¥ããŠãé¡äŒŒåºŠè¡åãã°ã©ãã«å€æããã°ã©ãå
ã®é£çµæåãç¹å®ããŸãã
é£çµæåå
ã®ããã¥ã¡ã³ãã¯ææ§ãªéè€ãšèŠãªãããŸãã
ç¹å®ããéè€ãããŒã¿ã»ããããåé€ããŸãã
ãã®æ¹æ³ã¯ã軜埮ãªå€æŽãå ããããã³ã³ãã³ãã®ç¹å®ãéšåçãªããã¥ã¡ã³ãã®éè€ã®æ€åºãç°ãªããã©ãŒãããã§ãããé¡äŒŒããã³ã³ãã³ããæã€ããã¥ã¡ã³ãã®æ€çŽ¢ã«ç¹ã«æçšã§ããèšç®å¹çãšéè€æ€åºèœåã®ãã©ã³ã¹ãåããŠããŸãã
æå³çãªéè€æé€
æå³çãªéè€æé€ã¯ãæãæŽç·Žãããã¢ãããŒãã§ãããé«åºŠãªåã蟌ã¿ã¢ãã«ã䜿çšããŠã»ãã³ãã£ãã¯ãªæå³ãæããã¯ã©ã¹ã¿ãªã³ã°æè¡ãšçµã¿åãããŠæå³çã«é¡äŒŒããã³ã³ãã³ããã°ã«ãŒãåããŸããç 究ã§ã¯ãæå³çãªéè€æé€ã¯ãã¢ãã«ã®ããã©ãŒãã³ã¹ãç¶æãŸãã¯æ¹åããªãããããŒã¿ã»ããã®ãµã€ãºãå¹æçã«çž®å°ã§ããããšã瀺ãããŠããŸããèšãæããããã³ã³ãã³ããåãçŽ æã®ç¿»èš³çãæŠå¿µçã«åäžã®æ
å ±ãç¹å®ããã®ã«ç¹ã«æçšã§ãã
æå³ã«ããéè€æé€ã¯ã以äžã®ã¹ãããã§æ§æãããŸãã
åããŒã¿ ãã€ã³ãããäºååŠç¿æžã¿ã¢ãã«ã䜿çšããŠåã蟌ãŸããŸãã
åã蟌ã¿ã¯ãk-means ã䜿çšã㊠k åã®ã¯ã©ã¹ã¿ãŒã«ã°ã«ãŒãåãããŸãã
åã¯ã©ã¹ã¿ãŒå
ã§ããã¢ããšã®ã³ãµã€ã³é¡äŒŒåºŠãèšç®ãããŸãã
éŸå€ãè¶
ããã³ãµã€ã³é¡äŒŒåºŠãæããããŒã¿ ãã¢ã¯ãæå³ã®éè€ãšèŠãªãããŸãã
ã¯ã©ã¹ã¿ãŒå
ã®æå³çãªéè€ã®åã°ã«ãŒãããã1 ã€ã®ä»£è¡šçãªããŒã¿ãã€ã³ããä¿æãããæ®ãã¯åé€ãããŸãã
ã¢ãã«ããŒã¹ã®å質ãã£ã«ã¿ãªã³ã°
ã¢ãã«ããŒã¹ã®å質ãã£ã«ã¿ãªã³ã°ã§ã¯ãããŸããŸãªçš®é¡ã®ã¢ãã«ã䜿çšããŠãå質ææšã«åºã¥ããŠã³ã³ãã³ããè©äŸ¡ããŠãã£ã«ã¿ãªã³ã°ããŸããã¢ãã«ã®çš®é¡ã®éžæã¯ããã£ã«ã¿ãªã³ã°ã®æå¹æ§ãšå¿
èŠãªèšç®ãªãœãŒã¹ã®äž¡æ¹ã«å€§ããªåœ±é¿ãåãŒããããç¹å®ã®ãŠãŒã¹ ã±ãŒã¹ã«é©åãªã¢ãã«ãéžæããããšãéèŠã§ãã
å質ãã£ã«ã¿ãªã³ã°ã«äœ¿çšã§ããã¢ãã«ã«ã¯ã以äžã®çš®é¡ããããŸãã
N-gram ããŒã¹ã®åé¡åš:
æãåçŽãªã¢ãããŒãã¯ãfastText ã®ãã㪠N-gram ããŒã¹ã® Bag-of-Words åé¡åšã䜿çšããæ¹æ³ã§ããå¿
èŠãªãã¬ãŒãã³ã° ããŒã¿ (10 äžïœ100 äžãµã³ãã«) ãæãå°ãªãæžããããå¹çæ§ãšå®çšæ§ã«åªããŠããŸãã
BERT ã¹ã¿ã€ã«ã®åé¡åš:
BERT ã¹ã¿ã€ã«ã®åé¡åšã¯äžéçãªã¢ãããŒãã§ãããTransformer ããŒã¹ã®ã¢ãŒããã¯ãã£ãéããŠãã質ã®é«ãè©äŸ¡ãæäŸããŸããããè€éãªèšèªãã¿ãŒã³ãæèäžã®é¢ä¿ãæããããšãã§ããå質è©äŸ¡ã«å¹æçã§ãã
LLM:
LLM ã¯ãããã¹ãã®å質è©äŸ¡ã«å¹
åºãç¥èã掻çšããæãæŽç·Žãããå質è©äŸ¡æ©èœãæäŸããŸããã³ã³ãã³ãã®å質ãããæ·±ãç解ã§ããŸãããèšç®èŠä»¶ãé«ãããããã¡ã€ã³ãã¥ãŒãã³ã°çšã®ããŒã¿ã»ãããªã©ãå°èŠæš¡ãªã¢ããªã±ãŒã·ã§ã³ã«åããŠããŸãã
å ±é
¬ã¢ãã«:
å ±é
¬ã¢ãã«ã¯ãäŒè©±ããŒã¿ã®å質ãè©äŸ¡ã«ç¹åãèšèšãããå°éã«ããŽãªã§ãããããã®ã¢ãã«ã¯è€æ°ã®å質åºæºãåæã«è©äŸ¡ã§ããŸãããLLM ãšåããé«ãèšç®èŠä»¶ãæ±ããããŸãã
æé©ãªå質ãã£ã«ã¿ãªã³ã° ã¢ãã«ã®éžæã«ã¯ãããŒã¿ã»ããã®èŠæš¡ãšå©çšå¯èœãªèšç®ãªãœãŒã¹ã®äž¡æ¹ãèæ
®ããå¿
èŠããããŸãã倧èŠæš¡ãªäºååŠç¿ããŒã¿ã»ããã®å Žåãåæãã£ã«ã¿ãªã³ã°ã«ã¯è»œéãªã¢ãã«ã䜿çšããæçµçãªå質è©äŸ¡ã«ã¯é«åºŠãªã¢ãã«ãçµã¿åãããããšã§ãå¹çæ§ãšæå¹æ§ã®ãã©ã³ã¹ãåŸãããŸããå質ãéèŠãšãªãå°èŠæš¡ã§å°éçãªããŒã¿ã»ããã®å Žåã¯ãLLM ãå ±é
¬ã¢ãã«ãªã©ã®ã¢ãã«ã䜿çšããããšããããå®çŸçã§æçãšãªããŸãã
PII ã®åé€
å人ãç¹å®ã§ããæ
å ± (PII) ã®åé€ã«ã¯ãå人ã®ãã©ã€ãã·ãŒãä¿è·ããããŒã¿ä¿è·èŠå¶ã«å¯Ÿããéµå®ã確å®ã«ããããã«ãããŒã¿ã»ããå
ã®æ©å¯æ
å ±ãèå¥ããã³åé€ããããšãå«ãŸããŸãã
ãã®ããã»ã¹ã¯ãæ°åã瀟äŒä¿éçªå·ãªã©ã®çŽæ¥çãªèå¥åãããä»ã®ããŒã¿ãšçµã¿åãããããšã§å人ãèå¥ã§ããéæ¥çãªèå¥åãŸã§ãå人æ
å ±ãå«ãããŒã¿ã»ãããæ±ãå Žåã«ã¯ç¹ã«éèŠã§ãã
ææ°ã® PII åé€ã§ã¯ãæ©å¯æ
å ±ãä¿è·ããããã«ã以äžãå«ãããŸããŸãªæè¡ãçšããããŠããŸãã
ããŒã¿åœ¢åŒãšæ§é ãç¶æããªãããæ©å¯æ
å ±ãèšå·ã«çœ®ãæãã (ããšãã°ãç±³åœç€ŸäŒä¿éçªå·ã®å Žå XXX-XX-1234 ã«çœ®ãæãã)ã
åæã®ç®çã§åç
§æŽåæ§ãç¶æããªãããæ©å¯ããŒã¿ãæ©å¯ã§ãªãåçã®ããŒã¿ã«çœ®ãæããã
äžæµã¿ã¹ã¯ã«å¿
èŠã§ãªãå Žåããã®æ©å¯æ
å ±ãåé€ããã
å
šäœãšã㊠PII ã®åé€ã¯ãããŒã¿ã®ãã©ã€ãã·ãŒãä¿è·ããèŠå¶ãéµå®ãããã¬ãŒãã³ã°ãšåæã®ç®çã§ããŒã¿ã»ããã®æçšæ§ãç¶æããªããããŠãŒã¶ãŒãšä¿¡é Œé¢ä¿ãæ§ç¯ããã®ã«åœ¹ç«ã¡ãŸãã
(åæ£åŠçã«ãã) ããŒã¿åé¡
ããŒã¿åé¡ã¯ãããŒã¿ ãã¥ã¬ãŒã·ã§ã³ã«ãããŠéèŠãªåœ¹å²ãæãããŸãããã®ããã»ã¹ã§ã¯ããã¡ã€ã³ãå質ãªã©å€æ§ãªå±æ§ã«åºã¥ããŠããŒã¿ãæŽçããåé¡ããããšã§ããŒã¿ã®ãã©ã³ã¹ãåããããŸããŸãªç¥èãã¡ã€ã³ã代衚ãããã®ãšãªãããã«ããŸãã
ãã¡ã€ã³åé¡ã¯ãäž»é¡ã«åºã¥ããŠã³ã³ãã³ããèå¥ããŠã«ããŽãªãŒåãããããšã§ãLLM ãå
¥åããã¹ãã®ã³ã³ããã¹ããç¹å®ã®ãã¡ã€ã³ãç解ããã®ã«åœ¹ç«ã¡ãŸãããã¡ã€ã³æ
å ±ã¯ãéçºè
ãæœåšçã«æ害ãŸãã¯äžèŠãªã³ã³ãã³ããç¹å®ãããã£ã«ã¿ãªã³ã°ããªãããããå€æ§ãªãã¬ãŒãã³ã° ããŒã¿ã»ãããæ§ç¯ããããšãå¯èœã«ãã貎éãªè£å©çæ
å ±ãšãªããŸããããšãã°ãã³ã³ãã³ãã 13 ã®é倧ãªãªã¹ã¯ ã«ããŽãªã«åé¡ãã AEGIS Safety Model ã䜿çšããããšã§ãéçºè
ã¯ãã¬ãŒãã³ã° ããŒã¿ããæ害ãªã³ã³ãã³ããå¹æçã«èå¥ãããã£ã«ã¿ãªã³ã°ããããšãã§ããŸãã
æ°ååãã®ããã¥ã¡ã³ããå«ãŸããŠããããšãå€ãäºååŠç¿ã³ãŒãã¹ãæ±ãå Žåãåé¡ãè¡ãããã®æšè«ãå®è¡ããã®ã«å€ãã®èšç®åŠçãšæéãå¿
èŠãšãªããŸãããããã£ãŠããããã®èª²é¡ãå
æããã«ã¯ãåæ£åŠçãé©çšã§ããããŒã¿åé¡ãå¿
èŠã§ããããã¯ãããŒã¿ã»ãããè€æ°ã® GPU ããŒãã«åå²ããããšã§ãåé¡ã¿ã¹ã¯ãé«éåããããšã«ãã£ãŠå®çŸãããŸãã
äžæµã¿ã¹ã¯ã®æ±æé€å»
ãã¬ãŒãã³ã°ã®åŸãLLM ã¯éåžžãèŠããªããã¹ã ããŒã¿ã§æ§æãããäžæµã¿ã¹ã¯ã®ããã©ãŒãã³ã¹ã«ãã£ãŠè©äŸ¡ãããŸããäžæµã¿ã¹ã¯ã®æ±æé€å»ã¯ããã¹ã ããŒã¿ããã¬ãŒãã³ã° ããŒã¿ã»ããã«æ··å
¥ãæŒæŽ©ããå¯èœæ§ã«å¯ŸåŠããã¹ãããã§ããããã¯æå³ããªãè©äŸ¡çµæããããããªã¹ã¯ãæããŸããæ±æé€å»ããã»ã¹ã«ã¯ãéåžžã以äžã®äž»èŠãªã¹ããããå«ãŸããŸãã
æœåšçãªäžæµã¿ã¹ã¯ãšãã®ãã¹ã ã»ãããç¹å®ããŸãã
ãã¹ã ããŒã¿ã N-gram è¡šçŸã«å€æããŸãã
ãã¬ãŒãã³ã° ã³ãŒãã¹ã§äžèŽãã N-gram ãæ€çŽ¢ããŸãã
ããã¥ã¡ã³ãã®æŽåæ§ãç¶æããªãããæ±æãããã»ã¯ã·ã§ã³ãåé€ãŸãã¯ä¿®æ£ããŸãã
ãã®äœç³»çãªã¢ãããŒãã¯ãããŒã¿ã®å質ã«å¯Ÿããæå³ããªã圱é¿ãæå°éã«æããªãããæ±æé€å»ã®å¹æã確å®ãªãã®ã«ããŠãæçµçã«ã¯ãããä¿¡é Œæ§ã®é«ãã¢ãã«ã®è©äŸ¡ãšéçºã«è²¢ç®ããŸãã
ãã¬ã³ããšã·ã£ããã«
ããŒã¿ã®ãã¬ã³ããšã·ã£ããã«ã¯ãããŒã¿ ãã¥ã¬ãŒã·ã§ã³ ãã€ãã©ã€ã³ã«ãããæçµã¹ãããã§ãããè€æ°ã®ãã¥ã¬ãŒã·ã§ã³ãããããŒã¿ã»ãããçµã¿åããããšåæã«é©åãªã©ã³ãã æ§ã確ä¿ããæé©ãªã¢ãã« ãã¬ãŒãã³ã°ãå®çŸããŸãããã®ããã»ã¹ã¯ãã¢ãã«ã®äžè¬åãšããã©ãŒãã³ã¹ãåäžããããå€æ§ã§ãã©ã³ã¹ã®åãããã¬ãŒãã³ã° ããŒã¿ã»ãããäœæããäžã§äžå¯æ¬ ã§ããããŒã¿ã®ãã¬ã³ãã§ã¯ãè€æ°ã®ãœãŒã¹ããã®ããŒã¿ãçµ±åããŠåäžã®ããŒã¿ã»ããã«çµåããããå
æ¬çã§å€æ§ãªãã¬ãŒãã³ã° ããŒã¿ãäœæããŸãããã¬ã³ã ããã»ã¹ã¯ã次㮠2 ã€ã®ã¢ãããŒãã䜿çšããŠå®è£
ãããŸãã
ãªã³ã©ã€ã³: ãã¬ãŒãã³ã°äžã«ããŒã¿ãçµåããã
ãªãã©ã€ã³: ãã¬ãŒãã³ã°åã«ããŒã¿ã»ãããçµåããã
ããããã®ã¢ãããŒãã«ã¯ããã¬ãŒãã³ã° ããã»ã¹ã®ç¹å®ã®èŠä»¶ãšæçµçãªããŒã¿ã»ããã®äœ¿çšç®çã«å¿ããŠç°ãªãå©ç¹ããããŸãã
åæããŒã¿ã®çæ
ååŠçãã§ãŒãºã®è€éãªããã»ã¹ãçµããŸããããçŸåšãLLM éçºã®åéã§ã¯ããŒã¿ã®äžè¶³ãšãã倧ããªèª²é¡ã«çŽé¢ããŠããŸããLLM ãåŠç¿çšããŒã¿ã»ããã倧éã«å¿
èŠãšããã®ã¯ããã¥ãŒãã³ã°ãç®çãšããå Žåã§ãåæ§ã§ããããã®é£œããªãèŠæ±ã¯ãç¹å®ã®ãã¡ã€ã³ãèšèªã«ç¹åããããŒã¿ã®å
¥æå¯èœæ§ãäžåãããšãå°ãªããããŸããããã®åé¡ã«å¯ŸåŠãã
åæããŒã¿çæ (SDG: Synthetic Data Generation)
ã¯ãLLM ã掻çšããŠããã©ã€ãã·ãŒã®ä¿è·ãšããŒã¿ã®æçšæ§ã確ä¿ããªãããçŸå®ã®ããŒã¿ç¹æ§ãæš¡å£ãã人工çãªããŒã¿ã»ãããçæãã匷åãªã¢ãããŒãã§ãããã®ããã»ã¹ã§ã¯å€éš LLM ãµãŒãã¹ã䜿çšããŠãäºååŠç¿ããã¡ã€ã³ãã¥ãŒãã³ã°ãä»ã®ã¢ãã«ã®è©äŸ¡ã«äœ¿çšã§ãããé«å質ã§å€æ§ãã€æèçã«é¢é£æ§ã®é«ãããŒã¿ãçæããŸãã
SDG ã¯ãäœãªãœãŒã¹èšèªã« LLM ãé©å¿ã§ããããã«ããããšã§ããã¡ã€ã³ã®å°éæ§ããµããŒãããã¢ãã«éã®ç¥èã®æœåºãä¿é²ããã¢ãã«æ©èœãæ¡åŒµããæ±çšçãªããŒã«ã«ãªããŸããSDG ã¯ãç¹ã«å®ããŒã¿ãäžè¶³ããŠããããæ©å¯ã§ãã£ãããååŸããã®ãå°é£ã ã£ããããã·ããªãªã«ãããŠãéèŠãªååšãšãªã£ãŠããŸãã
å³ 2. NeMo Curator ã«ããäžè¬çãªåæããŒã¿çæã¢ãŒããã¯ãã£
åæããŒã¿ ãã€ãã©ã€ã³ã«ã¯ãçæãæ¹è©ããã£ã«ã¿ãŒã® 3 ã€ã®äž»èŠãªã¹ãããããããŸãã
çæ:
ããã³ãã ãšã³ãžãã¢ãªã³ã°ã䜿çšããŠãããŸããŸãªã¿ã¹ã¯çšã®åæããŒã¿ãçæããŸãã
Nemotron-4
ãäŸã«ãšããšãSDG ã¯ã5 çš®é¡ã®ç°ãªãã¿ã¹ã¯ (èªç±åœ¢åŒ QAãéžæåŒ QAãèšè¿°åŒèª²é¡ãã³ãŒãã£ã³ã°ãæ°åŠåé¡) ã®ãã¬ãŒãã³ã° ããŒã¿ãçæããããã«é©çšãããŸãã
æ¹è©:
LLM ReflectionãLLM-as-judgeãå ±é
¬ã¢ãã«æšè«ããã®ä»ã®ãšãŒãžã§ã³ããªã©ã®ææ³ã䜿çšããŠãåæããŒã¿ã®å質ãè©äŸ¡ããŸããè©äŸ¡çµæ㯠SDG LLM ãžã®ãã£ãŒãããã¯ãšããŠäœ¿çšããããè¯ãçµæãçæããããäœå質ããŒã¿ããã£ã«ã¿ãªã³ã°ãããããããšãã§ããŸãã代衚çãªäŸã¯
Nemotron-4-340B reward NIM
ã§ããããã¯ã5 ã€ã®äž»èŠãªå±æ§ãããªãã¡ Helpfulness (æçšæ§)ãCorrectness (æ£ç¢ºæ§)ãCoherence (äžè²«æ§)ãComplexity (è€éæ§)ãVerbosity (åé·æ§) ãéããŠããŒã¿ã®å質ãè©äŸ¡ããŸãããããã®å±æ§ã¹ã³ã¢ã«é©åãªéŸå€ãèšå®ããããšã§ããã£ã«ã¿ãªã³ã°åŠçã§ã¯ãäœå質ãŸãã¯äžé©åãªã³ã³ãã³ããé€å€ããªãããé«å質ãªåæããŒã¿ã®ã¿ãä¿æãããããã«ãªããŸãã
ãã£ã«ã¿ãŒ:
éè€æé€ã PII ã®åé€ãªã©ã®ã¹ãããã§ãSDG ããŒã¿ã®å質ãããã«åäžãããŸãã
ãã ããSDG ããã¹ãŠã®ã±ãŒã¹ã«é©ããŠããããã§ã¯ãªãããšã«æ³šæããŠãã ãããå€éš LLM ã«ããå¹»èŠã¯ãä¿¡é Œæ§ã®äœãæ
å ±ããããããããŒã¿ã®æŽåæ§ãæãªãå¯èœæ§ããããŸããå ããŠãçæãããããŒã¿ã®ååžãã¿ãŒã²ããã®ååžãšäžèŽããªãå¯èœæ§ããããçŸå®äžçã®ããã©ãŒãã³ã¹ã«æªåœ±é¿ãåãŒãå¯èœæ§ããããŸãããã®ãããªå Žåã¯ãSDG ã䜿çšããããšã§ãã·ã¹ãã ã®å¹çæ§ãæ¹åããã©ãããããããäœäžãããå¯èœæ§ããããŸãã
ãœããªã³ AI LLM æ§ç¯ã®ããã®ããŒã¿åŠç
ãªãŒãã³ãœãŒã¹ LLM ã¯è±èªã§ã¯åªããŠããŸããããã®ä»ã®èšèªãç¹ã«æ±åã¢ãžã¢ã®èšèªã§ã¯èŠæŠããŠããŸãããã®äž»ãªåå ã¯ããããã®èšèªã®ãã¬ãŒãã³ã° ããŒã¿ã®äžè¶³ãçŸå°ã®æåã«å¯Ÿããç解ãéãããŠããããšãç¬èªã®èšèªæ§é ãšè¡šçŸãæããã®ã«ååãªããŒã¯ã³ãäžè¶³ããŠããããšã§ãã
è±èªå以å€ã®åœã
ã®äŒæ¥ã¯ã顧客ã®ããŒãºãå®å
šã«æºãããããæ±çšã¢ãã«ã«ãšã©ãŸãããçŸå°ã®èšèªã®ãã¥ã¢ã³ã¹ãæããããã«ã¢ãã«ãã«ã¹ã¿ãã€ãºããã·ãŒã ã¬ã¹ã§ã€ã³ãã¯ãã®ãã顧客äœéšã確ä¿ããå¿
èŠããããŸããäŸãã°ãViettel Solutions ã¯ãNeMo Curator ã䜿çšããŠã
é«å質ãªãããã èªããŒã¿
ãåŠçãã粟床ã 10% åäžãããããŒã¿ã»ããã®ãµã€ãºã 60% åæžãããã¬ãŒãã³ã°ã 3 åé«éåããŸããã
ãã®ãŠãŒã¹ ã±ãŒã¹ã®äž»ãªæé ã¯æ¬¡ã®ãšããã§ãã
ããã€ãã®ãããã èªããã³å€èšèªããŒã¿ã»ãã (Wikipediaããããã èªãã¥ãŒã¹ ã³ãŒãã¹ã
OSCAR
ãC4) ãããŠã³ããŒããã倧èŠæš¡ãªããŒã¿ã»ãããå¹ççã«åŠçããããã«ãParquet ã«å€æããŸãã
è€æ°ã®ããŒã¿ã»ãããçµåãæšæºåããåäžã®ããŒã¿ã»ããã«ã·ã£ãŒãããŸãã
Unicode ã®åãã©ãŒããããå³å¯ãªéè€æé€ãå質ãã£ã«ã¿ãªã³ã° (ãã¥ãŒãªã¹ãã£ãã¯ããã³åé¡åšããŒã¹) ãé©çšããŸãã
詳现ã¯ããã®
ãã¥ãŒããªã¢ã«
ãåç
§ããŠãã ããã
NVIDIA NeMo Curator ã«ããããŒã¿ã®å質åäž
ãããŸã§ãLLM ã®ç²ŸåºŠåäžã«ãããããŒã¿å質ã®éèŠæ§ã«ã€ããŠããããŠããŸããŸãªããŒã¿åŠçææ³ã«ã€ããŠèª¬æããŠããŸãããéçºè
ã¯ã
NeMo Curator
ãä»ããŠçŽæ¥ãããã®ææ³ãè©Šãããšãã§ããŸããNeMo Curator ã¯ãã«ã¹ã¿ãã€ãºå¯èœãªã¢ãžã¥ãŒã«åŒã®ã€ã³ã¿ãŒãã§ã€ã¹ãæäŸããŠãããããéçºè
ã¯ãããããŒã¹ã«ç°¡åã«æ§ç¯ããããšãã§ããŸãã
NeMo Curator ã¯ãcuDFãcuMLãcuGraphãDask ãªã©ã® NVIDIA RAPIDS GPU ã§é«éåãããã©ã€ãã©ãªã䜿çšããŠããã«ãããŒãããã«ã GPU ã«ãããã¯ãŒã¯ããŒããé«éåããå¿
èŠã«å¿ããŠã¹ã±ãŒã«ãããåŠçæéãåæžã§ããŸããäŸãã°ãGPU ã䜿çšããŠããŒã¿åŠçã®ãã€ãã©ã€ã³ãé«éåããããšã§ã
Zyphra ã¯ç·ææã³ã¹ã (TCO)
ã 50% åæžããããŒã¿ã 10 åé«éã«åŠçããŠããŸã (3 é±éãã 2 æ¥é)ã
ãŸãã¯ã
NVIDIA/NeMo-Curator GitHub ãªããžããª
ãšã以äžã®ããŸããŸãªããŒã¿ ãã¥ã¬ãŒã·ã§ã³ã®ã¯ãŒã¯ãããŒã網çŸ
ããŠãã
ãã¥ãŒããªã¢ã«
ãã芧ãã ããã
äºååŠç¿ã®ããã®ããŒã¿åŠç
ã«ã¹ã¿ãã€ãºã®ããã®ããŒã¿åŠç
SDG ãã€ãã©ã€ã³
ãŸãã
NeMo ãã¬ãŒã ã¯ãŒã¯ ã³ã³ãããŒ
ãä»ããŠã¢ã¯ã»ã¹ãã
NVIDIA AI Enterprise
ã©ã€ã»ã³ã¹ã§ãšã³ã¿ãŒãã©ã€ãº ãµããŒãããªã¯ãšã¹ãããããšãã§ããŸãã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
ã»ãã¥ã¢ãªãšã³ã¿ãŒãã©ã€ãº ããŒã¿ã§ã«ã¹ã¿ã LLM ã¢ããªãæ°åã§æ§ç¯ãã
GTC ã»ãã·ã§ã³:
LLM ã€ã³ãã©ã®æ§ç¯ããã¬ãŒãã³ã°é床ã®é«éåãçæ AI ã€ãããŒã·ã§ã³ã®æšé²ã®ããã®ãšã³ãããŒãšã³ãã®ãœãªã¥ãŒã·ã§ã³ã®èšèš (Aivres æäŸ)
NGC ã³ã³ãããŒ:
genai-llm-playground
NGC ã³ã³ãããŒ:
rag-application-query-decomposition-agent
ãŠã§ãããŒ:
AI ã«ããå»çã¯ãŒã¯ãããŒã®å€é©: CLLM ãæ·±ãæãäžãã |
https://developer.nvidia.com/blog/expanding-ai-agent-interface-options-with-2d-and-3d-digital-human-avatars/ | Expanding AI Agent Interface Options with 2D and 3D Digital Human Avatars | When interfacing with
generative AI
applications, users have multiple communication options: text, voice, or through digital avatars.
Traditional chatbot or copilot applications have text interfaces where users type in queries and receive text-based responses. For hands-free communication, speech AI technologies like
automatic speech recognition
(ASR) and
text-to-speech
(TTS) facilitate verbal interactions, ideal for scenarios like phone-based customer service. Moreover, combining digital avatars with speech capabilities provides a more dynamic interface for users to engage visually with the application. According to Gartner, by 2028, 45% of organizations with more than 500 employees will leverage employee AI avatars to expand the capacity of human capital.
1
Digital avatars can vary widely in style: some use cases benefit from photorealistic 3D or 2D avatars, while other use cases work better with a stylized or cartoonish avatar.
3D Avatars
offer fully immersive experiences, showcasing lifelike movements and photorealism. Developing these avatars requires specialized software and technical expertise, as they involve intricate body animations and high-quality renderings.
2D Avatars
are quicker to develop and ideal for web-embedded solutions. They offer a streamlined approach to creating interactive AI, often requiring artists for design and animation but less intensive in terms of technical resources.
To kickstart your creation of a photo-realistic digital human, the
NVIDIA AI Blueprint on digital humans for customer service
can be tailored for various use cases. This functionality is now included with support for the NVIDIA Maxine
Audio2Face-2D
NIM microservice. Additionally, the blueprint now offers flexibility in rendering for 3D avatar developers to use
Unreal Engine
.
How to add a talking digital avatar to your agent application
In the AI Blueprint for digital humans, a user interacts with an
AI agent
that leverages
NVIDIA ACE
technology (Figure 1).
Figure 1. Architecture diagram for the NVIDIA AI Blueprint for digital humans
The audio input from the user is sent to the ACE agent which orchestrates the communication between various NIM microservices. The ACE agent uses the
Riva Parakeet NIM
to convert the audio to text, which is then processed by a RAG pipeline. The RAG pipeline uses the NVIDIA NeMo Retriever
embedding
and
reranking
NIM microservices, and an
LLM NIM
, to respond with relevant context from stored documents.
Finally, the response is converted back to speech via Riva TTS, animating the digital human using the Audio2Face-3D NIM or Audio2Face-2D NIM.
Considerations when designing your AI agent application
In global enterprises, communication barriers across languages can slow down operations. AI-powered avatars with multilingual capabilities communicate across languages effortlessly. The digital human AI Blueprint provides conversational AI capabilities that simulate human interactions, accommodating users' speech styles and languages through Riva ASR and neural machine translation (NMT), along with intelligent interruption and barge-in support.
One of the key benefits of digital human AI agents is their ability to function as âalways-onâ resources for employees and customers alike. RAG-powered AI agents continuously learn from interactions and improve over time, providing more accurate responses and better user experiences.
For enterprises considering digital human interfaces, choosing the right avatar and rendering option depends on the use case and customization preferences.
Use Case
: 3D avatars are ideal for highly immersive use cases like in physical stores, kiosks or primarily one-to-one interactions, while 2D avatars are effective for web or mobile conversational AI use cases.
Development and Customization Preferences
: Teams with 3D and animation expertise can leverage their skillset to create an immersive and ultra-realistic avatar, while teams looking to iterate and customize quickly can benefit from the simplicity of 2D avatars.
Scaling Considerations:
Scaling is an important consideration when evaluating avatars and corresponding rendering options. Stream throughput, especially for 3D avatars, is highly dependent on the choice and quality of the character asset used and the desired output resolution; the rendering option of choice (Omniverse Renderer or Unreal Engine) also plays a critical role in determining the per-stream compute footprint.
NVIDIA Audio2Face-2D allows creation of lifelike 2D avatars from just a portrait image and voice input. Easy and simple configurations allow developers to quickly iterate and produce target avatars and animations for their digital human use cases. With real-time output and cloud-native deployment, 2D digital humans are ideal for interactive use cases and streaming avatars for interactive web-embedded solutions.
For example, enterprises looking to deploy AI agents across multiple devices and inserting digital humans into web- or mobile-first customer journeys, can benefit from the reduced hardware demands of 2D avatars.
3D photorealistic avatars provide an unmatched immersive experience for use cases demanding highly empathetic user engagement. NVIDIA Audio2Face-3D and Animation NIM microservices animate a 3D character by generating blendshapes along with subtle head and body animation to create an immersive, photorealistic avatar. The digital human AI Blueprint now supports two rendering options for 3D avatars, including Omniverse Renderer and Unreal Engine Renderer, providing developers the flexibility to integrate the rendering option of their choice.
To explore how digital humans can enhance your enterprise, visit the
NVIDIA API catalog
to learn about the different avatar options.
Getting started with digital avatars
For hands-on development with Audio2Face-2D and Unreal Engine NIM microservices,
apply for ACE Early Access
or dive into the digital human AI Blueprint
technical blog
to learn how you can add digital human interfaces to personalize chatbot applications.
1
Gartner®, Hype Cycle for the Future of Work, 2024 by Tori Paulman, Emily Rose McRae, etc., July 2024
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved. | https://developer.nvidia.com/ja-jp/blog/expanding-ai-agent-interface-options-with-2d-and-3d-digital-human-avatars/ | 2D ãš 3D ã®ããžã¿ã« ãã¥ãŒãã³ ã¢ãã¿ãŒã«ãã AI ãšãŒãžã§ã³ã ã€ã³ã¿ãŒãã§ã€ã¹ ãªãã·ã§ã³ã®æ¡åŒµ | Reading Time:
2
minutes
ãŠãŒã¶ãŒã
çæ AI
ã¢ããªã±ãŒã·ã§ã³ã䜿ã£ãŠããåãããéã«ã¯ãããã¹ããé³å£°ãããžã¿ã« ã¢ãã¿ãŒãªã©è€æ°ã®ã³ãã¥ãã±ãŒã·ã§ã³ ãªãã·ã§ã³ãå©çšããããšãã§ããŸãã
åŸæ¥ã®ãã£ããããããã³ãã€ããã ã¢ããªã±ãŒã·ã§ã³ã§ã¯ããŠãŒã¶ãŒãåãåãããå
¥åããããã¹ãããŒã¹ã®å¿çãåä¿¡ããããã¹ã ã€ã³ã¿ãŒãã§ã€ã¹ã䜿çšããŠããŸãããã³ãºããªãŒã®ã³ãã¥ãã±ãŒã·ã§ã³ã§ã¯ã
èªåé³å£°èªè
(ASR: Automatic Speech Recognition) ã
é³å£°åæ
(TTS: Text-To-Speech) ãªã©ã®é³å£° AI æè¡ã«ãããé»è©±ã䜿çšããã«ã¹ã¿ã㌠ãµãŒãã¹ãªã©ã®ã·ããªãªã«æé©ãªå£é ã«ããããåãã容æã«ãªããŸããããã«ãããžã¿ã« ã¢ãã¿ãŒã«é³å£°æ©èœãæãããããšã§ããŠãŒã¶ãŒãã¢ããªã±ãŒã·ã§ã³ãèŠèŠçã«äœ¿çšã§ããããããã€ãããã¯ãªã€ã³ã¿ãŒãã§ã€ã¹ãæäŸã§ããŸããGartner ã«ãããšã2028 幎ãŸã§ã«ãåŸæ¥å¡ 500 å以äžã®çµç¹ã® 45% ãã人çè³æ¬ã®èœåæ¡å€§ã®ããã«ã AI ã¢ãã¿ãŒã®åŸæ¥å¡ã掻çšããããã«ãªãããã§ãã
1
ããžã¿ã« ã¢ãã¿ãŒã®ã¹ã¿ã€ã«ã¯æ§ã
ã§ããã©ããªã¢ãªã¹ãã£ãã¯ãª 3D ãŸã㯠2D ã®ã¢ãã¿ãŒãé©ããŠããã±ãŒã¹ãããã°ãå®ååãããã¢ãã¿ãŒã挫ç»ã®ãããªã¢ãã¿ãŒã®æ¹ãé©ããŠããã±ãŒã¹ããããŸãã
3D ã¢ãã¿ãŒ
ã¯ããªã¢ã«ãªåããšåå®æ§ãåçŸããå®å
šãªæ²¡å
¥äœéšãæäŸããŸãããã®ãããªã¢ãã¿ãŒã®éçºã«ã¯ãè€éãªããã£ãŒ ã¢ãã¡ãŒã·ã§ã³ãé«å質ã®ã¬ã³ããªã³ã°ãå¿
èŠãšãªããããå°éçãªãœãããŠã§ã¢ãæè¡çãªå°éç¥èãå¿
èŠã«ãªããŸãã
2D ã¢ãã¿ãŒ
ã¯éçºãè¿
éã§ãWeb ã«çµã¿èŸŒã¿ãœãªã¥ãŒã·ã§ã³ã«æé©ã§ããã€ã³ã¿ã©ã¯ãã£ã㪠AI ã®äœæã«åççãªã¢ãããŒããæäŸãããã¶ã€ã³ãã¢ãã¡ãŒã·ã§ã³ã«ã¯ã¢ãŒãã£ã¹ããå¿
èŠã«ãªãããšãå€ãã§ãããæè¡çãªãªãœãŒã¹ã®é¢ã¯ããã»ã©è² æ
ã«ãªããŸããã
ãã©ããªã¢ãªã¹ãã£ãã¯ãªããžã¿ã« ãã¥ãŒãã³ã®äœæãå§ããã«ãããã
ã«ã¹ã¿ã㌠ãµãŒãã¹åãããžã¿ã« ãã¥ãŒãã³ã® NVIDIA AI Blueprint
ã¯ãããŸããŸãªãŠãŒã¹ ã±ãŒã¹ã«åãããŠã«ã¹ã¿ãã€ãºããããšãã§ããŸãããã®æ©èœã¯çŸåšãNVIDIA Maxine
Audio2Face-2D
NIM ãã€ã¯ããµãŒãã¹ã®ãµããŒãã«å«ãŸããŠããŸããããã«ããã® Blueprint ã§ã¯ã3D ã¢ãã¿ãŒéçºè
ã
Unreal Engine
ã䜿çšã§ãããããã¬ã³ããªã³ã°ã«æè»æ§ãæãããŠããŸãã
ãšãŒãžã§ã³ã ã¢ããªã±ãŒã·ã§ã³ã«äŒè©±ããããžã¿ã« ã¢ãã¿ãŒãè¿œå ããæ¹æ³
ããžã¿ã« ãã¥ãŒãã³åã AI Blueprint ã§ã¯ããŠãŒã¶ãŒã
NVIDIA ACE
æè¡ã掻çšãã
AI ãšãŒãžã§ã³ã
ãšå¯Ÿè©±ããŸã (å³ 1)ã
å³ 1. ããžã¿ã« ãã¥ãŒãã³åã NVIDIA AI Blueprint ã®ã¢ãŒããã¯ãã£
ãŠãŒã¶ãŒã«ããé³å£°å
¥åã¯ãããŸããŸãª NIM ãã€ã¯ããµãŒãã¹éã®éä¿¡ã調æŽãã ACE ãšãŒãžã§ã³ãã«éä¿¡ãããŸããACE ãšãŒãžã§ã³ãã¯ã
Riva Parakeet NIM
ã䜿çšããŠé³å£°ãããã¹ãã«å€æãããã®ããã¹ã㯠RAG ãã€ãã©ã€ã³ã§åŠçãããŸããRAG ãã€ãã©ã€ã³ã§ã¯ãNIM ãã€ã¯ããµãŒãã¹ã®
åã蟌ã¿
ãš
ãªã©ã³ã¯
ãè¡ã NVIDIA NeMo Retriever ãš
LLM NIM
ã䜿çšããŠãä¿åãããããã¥ã¡ã³ãããé¢é£ããã³ã³ããã¹ããçšããŠå¿çããŸãã
æåŸã«ãRiva TTS ãä»ããŠãã®å¿çãé³å£°ã«å€æããAudio2Face-3D NIM ãŸã㯠Audio2Face-2D NIM ã䜿çšããŠããžã¿ã« ãã¥ãŒãã³ãã¢ãã¡ãŒã·ã§ã³åããŸãã
AI ãšãŒãžã§ã³ã ã¢ããªã±ãŒã·ã§ã³ãèšèšããéã«èæ
®ãã¹ããã€ã³ã
ã°ããŒãã«äŒæ¥ã§ã¯ãèšèªã®å£ã«ããã³ãã¥ãã±ãŒã·ã§ã³ã®é害ãæ¥åã®åŠšããšãªãããšããããŸããå€èšèªæ©èœãåãã AI æèŒã¢ãã¿ãŒã䜿çšããã°ãèšèªã®å£ãè¶
ããåæ»ãªã³ãã¥ãã±ãŒã·ã§ã³ãåãããšãã§ããŸããããžã¿ã« ãã¥ãŒãã³ AI Blueprint ã¯ãRiva ASR ããã¥ãŒã©ã«æ©æ¢°ç¿»èš³ (NMT: Neural Machine Translation) ã«å ããã€ã³ããªãžã§ã³ããªå²ã蟌ã¿ãããŒãžã€ã³æ©èœãåãããŠãŒã¶ãŒã®è©±ãæ¹ãèšèªã«æè»ã«å¯Ÿå¿ã§ããã人éããã察話å AI ãå®çŸããŸãã
ããžã¿ã« ãã¥ãŒãã³ AI ãšãŒãžã§ã³ãã®äž»ãªå©ç¹ã® 1 ã€ã¯ãåŸæ¥å¡ãšé¡§å®¢ã®äž¡è
ã«ãšã£ãŠãåžžæ皌åããããªãœãŒã¹ãšããŠæ©èœã§ããããšã§ããRAG ãæèŒãã AI ãšãŒãžã§ã³ãã¯ããããšãããç¶ç¶çã«åŠç¿ããæéã®çµéãšãšãã«æ¹åããŠãããããããæ£ç¢ºãªå¯Ÿå¿ãšããåªãããŠãŒã¶ãŒäœéšãæäŸããããšãã§ããŸãã
ããžã¿ã« ãã¥ãŒãã³ ã€ã³ã¿ãŒãã§ã€ã¹ãæ€èšããŠããäŒæ¥ã«ãšã£ãŠãé©åãªã¢ãã¿ãŒãšã¬ã³ããªã³ã° ãªãã·ã§ã³ã®éžæã¯ããŠãŒã¹ ã±ãŒã¹ãã«ã¹ã¿ãã€ãºèšå®ã«äŸåããŸãã
ãŠãŒã¹ ã±ãŒã¹
: 3D ã¢ãã¿ãŒã¯ãå®åºèãããªã¹ã¯ (ç¡äººç«¯æ«) ãªã©ã䞻㫠1察 1 ã®ãããšãã®ãããªãéåžžã«æ²¡å
¥æã®é«ããŠãŒã¹ ã±ãŒã¹ã«æé©ã§ããã2D ã¢ãã¿ãŒã¯ãWeb ãã¢ãã€ã«ã®å¯Ÿè©±å AI ãŠãŒã¹ ã±ãŒã¹ã«å¹æçã§ãã
éçºãšã«ã¹ã¿ãã€ãºã®èšå®
: 3D ãã¢ãã¡ãŒã·ã§ã³ã®å°éç¥èãæã€ããŒã ã¯ããã®ã¹ãã«ã掻çšããŠæ²¡å
¥æã®ããè¶
ãªã¢ã«ãªã¢ãã¿ãŒãäœæã§ããŸããäžæ¹ãå埩äœæ¥ãã«ã¹ã¿ãã€ãºãè¿
éã«è¡ãããããŒã ã«ã¯ãã·ã³ãã«ãª 2D ã¢ãã¿ãŒãæå¹ã§ãã
ã¹ã±ãŒãªã³ã°ã®èæ
®ãã¹ããã€ã³ã
: ã¢ãã¿ãŒãšå¯Ÿå¿ããã¬ã³ããªã³ã° ãªãã·ã§ã³ãè©äŸ¡ããéã«ãã¹ã±ãŒãªã³ã°ã¯èæ
®ãã¹ãéèŠãªãã€ã³ãã§ããã¹ããªãŒã ã®ã¹ã«ãŒãããã¯ãç¹ã« 3D ã¢ãã¿ãŒã®å Žåã䜿çšãããã£ã©ã¯ã¿ãŒ ã¢ã»ããã®éžæãšå質ã«ãã£ãŠå€§ããç°ãªããŸããåžæããåºå解å床ãéžæããã¬ã³ããªã³ã° ãªãã·ã§ã³ (Omniverse Renderer ãŸã㯠Unreal Engine) ã¯ãã¹ããªãŒã ãããã®èšç®ãããããªã³ãã決å®ããäžã§éèŠãªåœ¹å²ãæãããŸãã
NVIDIA Audio2Face-2D ã§ã¯ãé¡åçãšé³å£°å
¥åã ãã§ãªã¢ã«ãª 2D ã¢ãã¿ãŒãäœæã§ããŸããç°¡åã§ã·ã³ãã«ãªæ§æã®ãããéçºè
ã¯ããžã¿ã« ãã¥ãŒãã³ã®ãŠãŒã¹ ã±ãŒã¹ã«åãããã¢ãã¿ãŒãã¢ãã¡ãŒã·ã§ã³ãè¿
éã«ç¹°ãè¿ãäœæã§ããŸãããªã¢ã«ã¿ã€ã åºåãšã¯ã©ãŠã ãã€ãã£ãã®ãããã€ã«ããã2D ããžã¿ã« ãã¥ãŒãã³ã¯ãã€ã³ã¿ã©ã¯ãã£ããªãŠãŒã¹ ã±ãŒã¹ããã€ã³ã¿ã©ã¯ãã£ã㪠Web çµã¿èŸŒã¿ãœãªã¥ãŒã·ã§ã³åãã®ã¹ããªãŒãã³ã° ã¢ãã¿ãŒã«æé©ã§ãã
ããšãã°ãè€æ°ã®ããã€ã¹ã« AI ãšãŒãžã§ã³ãããããã€ããWeb ãŸãã¯ã¢ãã€ã« ãã¡ãŒã¹ãã®ã«ã¹ã¿ã㌠ãžã£ãŒããŒã«ããžã¿ã« ãã¥ãŒãã³ãå°å
¥ããããšããŠããäŒæ¥ã«ã¯ã2D ã¢ãã¿ãŒã¯ããŒããŠã§ã¢èŠä»¶ã軜æžããã®ã§ã¡ãªããããããŸãã
3D ã®ãã©ããªã¢ãªã¹ãã£ãã¯ãªã¢ãã¿ãŒã¯ãé«ãå
±æãèŠæ±ããããŠãŒã¶ãŒ ãšã³ã²ãŒãžã¡ã³ããå¿
èŠãšãããŠãŒã¹ ã±ãŒã¹ã«ãæ¯é¡ã®ãªã没å
¥äœéšãæäŸããŸããNVIDIA Audio2Face-3D ãšã¢ãã¡ãŒã·ã§ã³ NIM ãã€ã¯ããµãŒãã¹ã¯ãç¹çŽ°ãªé éšãšèº«äœã®ã¢ãã¡ãŒã·ã§ã³ãšãšãã«ãã¬ã³ãã·ã§ã€ããçæãã没å
¥æã®ãããã©ããªã¢ãªã¹ãã£ãã¯ãªã¢ãã¿ãŒãäœæããããšã§ã3D ãã£ã©ã¯ã¿ãŒãã¢ãã¡ãŒã·ã§ã³åããŸããããžã¿ã« ãã¥ãŒãã³ AI Blueprint ã¯ã3D ã¢ãã¿ãŒã®ã¬ã³ããªã³ã° ãªãã·ã§ã³ããšããŠãOmniverse ã¬ã³ãã©ãŒãš Unreal-Engine ã¬ã³ãã©ãŒããµããŒãããŠãããéçºè
ãéžæããã¬ã³ããªã³ã° ãªãã·ã§ã³ãæè»ã«çµ±åã§ããããã«ãªããŸããã
ããžã¿ã« ãã¥ãŒãã³ãäŒæ¥ã匷åããæ¹æ³ã«ã€ããŠã¯ã
NVIDIA API ã«ã¿ãã°
ã«ã¢ã¯ã»ã¹ããŠãããŸããŸãªã¢ãã¿ãŒã®ãªãã·ã§ã³ãã芧ãã ããã
ããžã¿ã« ã¢ãã¿ãŒãå§ãã
Audio2Face-2D ãš Unreal Engine NIM ãã€ã¯ããµãŒãã¹ã䜿çšããå®è·µçãªéçºã«ã€ããŠã¯ã
ACE æ©æã¢ã¯ã»ã¹ã«ç³ã蟌ã
ããããžã¿ã« ãã¥ãŒãã³ AI Blueprint ã®
æè¡ããã°
ã«ã¢ã¯ã»ã¹ããŠããã£ããããã ã¢ããªã±ãŒã·ã§ã³ãããŒãœãã©ã€ãºããããã«ããžã¿ã« ãã¥ãŒãã³ ã€ã³ã¿ãŒãã§ã€ã¹ãè¿œå ããæ¹æ³ã«ã€ããŠåŠã¶ããšãã§ããŸãã
1
Gartner®, Hype Cycle for the Future of Work, 2024 by Tori Paulman, Emily Rose McRae, etc., July 2024
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
Enhancing the Digital Human Experience with Cloud Microservices Accelerated by Generative AI
GTC ã»ãã·ã§ã³:
Build a World of Interactive Avatars Based on NVIDIA Omniverse, AIGC, and LLM
NGC ã³ã³ãããŒ:
ACE ãšãŒãžã§ã³ã ãµã³ãã« ããã³ããšã³ã
SDK:
NVIDIA Tokkio
ãŠã§ãããŒ:
How Telcos Transform Customer Experiences with Conversational AI |
https://developer.nvidia.com/blog/ai-ran-goes-live-and-unlocks-a-new-ai-opportunity-for-telcos/ | AI-RAN Goes Live and Unlocks a New AI Opportunity for Telcos | AI is transforming industries, enterprises, and consumer experiences in new ways. Generative AI models are moving towards reasoning,
agentic AI
is enabling new outcome-oriented workflows and
physical AI
is enabling endpoints like cameras, robots, drones, and cars to make decisions and interact in real time.
The common glue between all these use cases is the need for pervasive, reliable, secure, and super-fast connectivity.
Telecommunication networks must prepare for this new kind of AI traffic, which can arrive directly through the fronthaul wireless access network or be backhauled from the public or private cloud as standalone AI inferencing traffic generated by enterprise applications.
Local wireless infrastructure offers an ideal place to process AI inferencing. This is where a new approach to telco networks, AI radio access network (
AI-RAN
), stands out.
Traditional CPU- or ASIC-based RAN systems are designed only for RAN use and cannot process AI traffic today. AI-RAN enables a common GPU-based infrastructure that can run both wireless and AI workloads concurrently, turning networks from single-purpose to multi-purpose infrastructures and turning sites from cost-centers to revenue sources.
With a strategic investment in the right kind of technology, telcos can leap forward to become the AI grid that facilitates the creation, distribution, and consumption of AI across industries, consumers, and enterprises. This moment in time presents a massive opportunity for telcos to build a fabric for AI training (creation) and AI inferencing (distribution) by repurposing their central and distributed infrastructures.
SoftBank and NVIDIA fast-forward AI-RAN commercialization
SoftBank has turned the AI-RAN vision into reality, with its
successful outdoor field trial
in Fujisawa City, Kanagawa, Japan, where NVIDIA-accelerated hardware and
NVIDIA Aerial
software served as the technical foundation.
This achievement marks multiple steps forward for AI-RAN commercialization and provides real proof points addressing industry requirements on technology feasibility, performance, and monetization:
World's first outdoor 5G AI-RAN field trial running on an NVIDIA-accelerated computing platform. This is an end-to-end solution based on full-stack, virtual 5G RAN software integrated with 5G core.
Carrier-grade virtual RAN performance achieved.
AI and RAN multi-tenancy and orchestration achieved.
Energy efficiency and economic benefits validated compared to existing benchmarks.
A new solution to unlock AI marketplace integrated on an AI-RAN infrastructure.
Real-world AI applications showcased, running on an AI-RAN network.
Above all, SoftBank aims to commercially release their own AI-RAN product for worldwide deployment in 2026.
To help other mobile network operators get started on their AI-RAN journey now, SoftBank is also planning to offer a reference kit comprising the hardware and software elements required to trial AI-RAN in a fast and easy way.
End-to-end AI-RAN solution and field results
SoftBank developed their AI-RAN solution by integrating hardware and software components from NVIDIA and ecosystem partners and hardening them to meet carrier-grade requirements. Together, the solution enables a full 5G vRAN stack that is 100% software-defined, running on NVIDIA GH200 (CPU+GPU), NVIDIA BlueField-3 (NIC/DPU), and Spectrum-X for fronthaul and backhaul networking. It integrates with 20 radio units and a 5G core network and connects 100 mobile UEs.
The core software stack includes the following components:
SoftBank-developed and optimized 5G RAN Layer 1 functions such as channel mapping, channel estimation, modulation, and forward-error-correction, using
NVIDIA Aerial CUDA-Accelerated-RAN
libraries
Fujitsu software for Layer 2 functions
Red Hatâs OpenShift Container Platform (OCP) as the container virtualization layer, enabling different types of applications to run on the same underlying GPU computing infrastructure
A SoftBank-developed E2E AI and RAN orchestrator, to enable seamless provisioning of RAN and AI workloads based on demand and available capacity
The underlying hardware is the
NVIDIA GH200 Grace Hopper Superchip
, which can be used in various configurations from distributed to centralized RAN scenarios. This implementation uses multiple GH200 servers in a single rack, serving AI and RAN workloads concurrently, for an aggregated-RAN scenario. This is comparable to deploying multiple traditional RAN base stations.
In this pilot, each GH200 server was able to process 20 5G cells using 100-MHz bandwidth when used in RAN-only mode. For each cell, 1.3 Gbps of peak downlink performance was achieved in ideal conditions, and 816 Mbps was demonstrated with carrier-grade availability in the outdoor deployment.
AI-RAN multi-tenancy achieved
One of the first principles of AI-RAN technology is to be able to run RAN and AI workloads concurrently and without compromising carrier-grade performance. This multi-tenancy can be either in time or space: dividing the resources based on time of day or based on percentage of compute. This also implies the need for an orchestrator that can provision, de-provision, or shift workloads seamlessly based on available capacity.
At the Fujisawa City trial, concurrent AI and RAN processing was successfully demonstrated over GH200 based on static allocation of resources between RAN and AI workloads (Figure 1).
Figure 1. AI and RAN concurrency and total GPU utilization
Each NVIDIA GH200 server uses Multi-Instance GPU (MIG), which divides a single GPU into multiple isolated GPU instances. Each instance has its own dedicated resources, such as memory, cache, and compute cores, and can operate independently.
The SoftBank orchestrator intelligently assigns whole GPUs, or individual MIG instances within a GPU, to run AI workloads and others to run RAN workloads, and switches them dynamically when needed. It is also possible to statically allocate a certain percentage of compute to RAN and AI, for example, 60% for RAN and 40% for AI, instead of demand-based allocation.
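To make the allocation model concrete, here is a minimal sketch of that scheduling decision in Python. It is an illustration only, not SoftBank's orchestrator or an NVIDIA API: the slice count per GPU, the 60/40 default split, and the load-based rule are assumptions chosen for the example.

```python
from dataclasses import dataclass, field

@dataclass
class GpuNode:
    """A GH200-class node exposed as a set of MIG slices (illustrative model only)."""
    name: str
    mig_slices: int = 7                              # assumed number of MIG slices per GPU
    assignment: dict = field(default_factory=dict)   # slice index -> "RAN" or "AI"

def allocate_static(node: GpuNode, ran_share: float = 0.6) -> None:
    """Static split: pin a fixed fraction of slices to RAN, lend the rest to AI."""
    ran_slices = round(node.mig_slices * ran_share)
    node.assignment = {i: ("RAN" if i < ran_slices else "AI") for i in range(node.mig_slices)}

def allocate_by_demand(node: GpuNode, ran_load: float) -> None:
    """Demand-based split: size the RAN share to the current cell load (0.0-1.0)."""
    ran_slices = max(1, round(node.mig_slices * min(max(ran_load, 0.0), 1.0)))
    node.assignment = {i: ("RAN" if i < ran_slices else "AI") for i in range(node.mig_slices)}

node = GpuNode("gh200-rack1-node3")
allocate_static(node)                    # e.g., 60% RAN / 40% AI
allocate_by_demand(node, ran_load=0.25)  # off-peak: most slices shift to AI jobs
print(node.assignment)
```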
The goal is to maximize capacity utilization. With AI-RAN, telcos can achieve almost 100% utilization compared to 33% capacity utilization for typical RAN-only networks. This is an increase of up to 3x while still catering to peak RAN loads, thanks to dynamic orchestration and prioritization policies.
Enabling an AI-RAN marketplace
With a new capacity for AI computing now available on distributed AI-RAN infrastructure, the question arises of how to bring AI demand to this AI computing supply.
To solve this, SoftBank used a serverless API powered by NVIDIA AI Enterprise to deploy and manage AI workloads on AI-RAN, with security, scale, and reliability. The NVIDIA AI Enterprise serverless API is hosted on the AI-RAN infrastructure and integrated with the SoftBank E2E AI-RAN orchestrator. It connects to any public or private cloud running the same API, to dispatch external AI inferencing jobs to the AI-RAN server when compute is available (Figure 2).
Figure 2. AI marketplace solution integrated with SoftBank AI-RAN
This solution enables an AI marketplace, helping SoftBank deliver localized, low-latency, secured inferencing services. It also demonstrated the importance of AI-RAN in helping telcos become the AI distribution grid, particularly for external AI inferencing jobs, and opened a new revenue opportunity.
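The dispatch pattern behind this marketplace can be pictured with the short sketch below. The endpoints, payload fields, and the 20% capacity threshold are hypothetical placeholders; the actual NVIDIA AI Enterprise serverless API and the SoftBank orchestrator integration are not reproduced here.

```python
import requests

EDGE_API = "https://ai-ran-site.example.net/v1"     # hypothetical AI-RAN site endpoint
CLOUD_API = "https://central-cloud.example.net/v1"  # hypothetical fallback endpoint

def spare_edge_capacity() -> float:
    """Ask the (hypothetical) orchestrator what fraction of GPU capacity RAN is not using."""
    resp = requests.get(f"{EDGE_API}/capacity", timeout=5)
    resp.raise_for_status()
    return resp.json()["available_fraction"]

def dispatch_inference(job: dict) -> dict:
    """Route the job to the AI-RAN site when spare capacity allows, otherwise to the cloud."""
    target = EDGE_API if spare_edge_capacity() > 0.2 else CLOUD_API
    resp = requests.post(f"{target}/inference-jobs", json=job, timeout=30)
    resp.raise_for_status()
    return resp.json()

result = dispatch_inference({"model": "example-llm", "prompt": "Summarize today's network alarms."})
```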
AI-RAN applications showcased
In this outdoor trial, new edge AI applications developed by SoftBank were demonstrated over the live AI-RAN network:
Remote support of autonomous vehicles over 5G
Factory multi-modal AI applications
Robotics applications
Remote support of autonomous vehicles over 5G
The key requirements of the social implementation of autonomous driving are vehicle safety and reducing operational costs.
At the Fujisawa City trial, SoftBank demonstrated an autonomous vehicle relaying its front-camera video over 5G to an AI-based remote support service hosted on the AI-RAN server. Multi-modal AI models analyzed the video stream, performed risk assessment, and sent recommended actions back to the vehicle as text over 5G.
This is an example of explainable AI as well, as all the actions of the autonomous vehicle could be monitored and explained through summarized text and logging for remote support.
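As a rough sketch of that loop (the endpoint, response fields, and vehicle-side transport below are invented for illustration; the post does not disclose SoftBank's implementation):

```python
import requests

RISK_API = "http://ai-ran-site.example:9000/assess"   # hypothetical remote-support endpoint

def remote_support_step(jpeg_frame: bytes) -> str:
    """Send one front-camera frame for analysis and return the recommended action text."""
    resp = requests.post(
        RISK_API,
        files={"frame": ("frame.jpg", jpeg_frame, "image/jpeg")},
        timeout=10,
    )
    resp.raise_for_status()
    report = resp.json()   # e.g., {"risk": "high", "action": "slow to 20 km/h", "reason": "..."}
    return f'{report["risk"]}: {report["action"]}'

def send_to_vehicle(action_text: str) -> None:
    """Relay the recommendation back to the vehicle over the 5G link (transport omitted here)."""
    print("to vehicle:", action_text)
```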
Factory multi-modal AI applications
In this use case, multi-modal inputs including video, audio, and sensor data, are streamed using 5G into the AI-RAN server. Multiple LLMs, VLMs, retrieval-augmented generation (RAG) pipelines, and NVIDIA NIM microservices hosted on the AI-RAN server are used to coalesce these inputs and make the knowledge accessible through a chat interface to users using 5G.
This fits well for factory monitoring, construction site inspections, and similar complex indoor and outdoor environments. The use case demonstrates how edge AI-RAN enables local data sovereignty by keeping data access and analysis local, secure, and private, which is a mandatory requirement of most enterprises.
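NIM microservices typically expose an OpenAI-compatible HTTP interface, so the chat front end described above could be wired up roughly as follows. The base URL, model name, and retrieved context are placeholders, and the retrieval step is assumed to have already run.

```python
from openai import OpenAI

# Placeholder endpoint for an LLM NIM microservice hosted on the AI-RAN server.
client = OpenAI(base_url="http://ai-ran-server.example:8000/v1", api_key="not-used")

# In the real pipeline this context would come from the RAG retrieval step.
retrieved_context = "Sensor log 14:02 - conveyor 3 vibration above threshold; camera 7 shows a blocked chute."

response = client.chat.completions.create(
    model="example-llm-nim",   # placeholder model name
    messages=[
        {"role": "system", "content": "Answer using only the provided factory context."},
        {"role": "user", "content": f"Context:\n{retrieved_context}\n\nWhat needs attention on the floor?"},
    ],
)
print(response.choices[0].message.content)
```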
Robotics applications
SoftBank demonstrated the benefit of edge AI inferencing for a robot connected over 5G. A robodog was trained to follow a human based on voice and motion.
The demo compared the response time of the robot when the AI inferencing was hosted on the local AI-RAN server to when it was hosted on the central cloud. The difference was immediately apparent: the edge-inference robodog followed the human's movements instantly, while the cloud-inference robot struggled to keep up.
Accelerating the AI-RAN business case with the Aerial RAN Computer-1
While the AI-RAN vision has been embraced by the industry, the energy efficiency and economics of GPU-enabled infrastructure remain key requirements, particularly how they compare to traditional CPU- and ASIC-based RAN systems.
With this live field trial of AI-RAN, SoftBank and NVIDIA have not only proven that GPU-enabled RAN systems are feasible and high-performant, but they are also significantly better in energy efficiency and economic profitability.
NVIDIA recently announced the
Aerial RAN Computer-1
based on the next-generation NVIDIA Grace Blackwell superchips as the recommended AI-RAN deployment platform. The goal is to migrate SoftBank 5G vRAN software from NVIDIA GH200 to NVIDIA Aerial RAN Computer-1 based on GB200-NVL2, which is an easier shift given the code is already CUDA-ready.
With
GB200-NVL2
, the available compute for AI-RAN will increase by a factor of 2x. The AI processing capabilities will improve by 5x for Llama-3 inferencing, 18x for data processing, and 9x for vector database search compared to prior H100 GPU systems.
For this evaluation, we compared the target deployment platform, Aerial RAN Computer-1 based on GB200 NVL2, with the latest generation of x86 and the best-in-class custom RAN product benchmarks and validated the following findings:
Accelerated AI-RAN offers best-in-class AI performance
Accelerated AI-RAN is sustainable RAN
Accelerated AI-RAN is highly profitable
Accelerated AI-RAN offers best-in-class AI performance
In 100% AI-only mode, each GB200-NVL2 server generates 25,000 tokens/second, which translates to $20/hr of available monetizable compute per server, or $15K/month per server.
Keeping in mind that the average revenue per user (ARPU) of wireless services today ranges between $5-50/month depending on the country, AI-RAN opens a new multi-billion-dollar AI revenue opportunity that is orders of magnitude higher than revenues from RAN-only systems.
The token AI workload used is Llama-3-70B FP4, showcasing that AI-RAN is already capable of running the world's most advanced LLMs.
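The arithmetic behind the $20/hr and roughly $15K/month figures can be checked directly; the per-million-token price below is simply what those numbers imply, not a published rate.

```python
tokens_per_second = 25_000
usd_per_hour = 20.0

tokens_per_hour = tokens_per_second * 3600                 # 90,000,000 tokens
implied_usd_per_million_tokens = usd_per_hour / (tokens_per_hour / 1e6)
monthly_revenue_per_server = usd_per_hour * 24 * 30        # ~$14,400, i.e., roughly $15K

print(f"~${implied_usd_per_million_tokens:.2f} per million tokens")   # ~0.22
print(f"~${monthly_revenue_per_server:,.0f} per server per month")
```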
Accelerated AI-RAN is sustainable RAN
In 100% RAN-only mode, GB200-NVL2 server power performance in Watt/Gbps shows the following benefits:
40% less power consumption than the best-in-class custom RAN-only systems today
60% less power consumption than x86-based vRAN
For an even comparison, this assumes the same number of 100-MHz 4T4R cells and 100% RAN-only workload across all platforms.
Figure 3. RAN power consumption and performance (watt/Gbps)
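Figure 3 itself is not reproduced in this text, but the metric is easy to restate. The absolute watt-per-Gbps values below are invented placeholders chosen only so that the stated 40% and 60% savings fall out of the ratios.

```python
# Placeholder watt-per-Gbps figures; only the relative savings match the text above.
watt_per_gbps = {
    "GB200-NVL2 (AI-RAN)": 6.0,
    "best-in-class custom RAN": 10.0,   # GB200-NVL2 is 40% lower than this
    "x86-based vRAN": 15.0,             # GB200-NVL2 is 60% lower than this
}

baseline = watt_per_gbps["GB200-NVL2 (AI-RAN)"]
for platform, value in watt_per_gbps.items():
    saving = 1 - baseline / value
    print(f"{platform:28s} {value:5.1f} W/Gbps  (GB200-NVL2 saving vs. this platform: {saving:.0%})")
```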
Accelerated AI-RAN is highly profitable
For this evaluation, we used the scenario of covering one district in Tokyo with 600 cells as the common baseline for RAN deployment for each of the three platforms being compared. We then looked at multiple scenarios for AI and RAN workload distribution, ranging from RAN-only to RAN-heavy or AI-heavy.
In the AI-heavy scenario (Figure 4), we used a one-third RAN and two-thirds AI workload distribution:
For every dollar of CapEx investment in accelerated AI-RAN infrastructure based on NVIDIA GB200 NVL2, telcos can generate 5x the revenue over 5 years.
From an ROI perspective, the overall investment delivers a 219% return, considering all CapEx and OpEx costs. This is of course specific to SoftBank, as it uses local country cost assumptions.
Figure 4. AI-RAN economics for covering one Tokyo district with 600 cells
                              33% AI / 67% RAN    67% AI / 33% RAN
$ of revenue per $ of CapEx   2x                  5x
ROI %                         33%                 219%
Table 1. AI-heavy scenario compared to RAN-heavy results
In the RAN-heavy scenario, we used two-thirds RAN and one-third AI workload distribution and found that revenue divided by CapEx for NVIDIA-accelerated AI-RAN is 2x, with a 33% ROI over 5 years, using SoftBank local cost assumptions.
In the RAN-only scenario, NVIDIA Aerial RAN Computer-1 is more cost-efficient than custom RAN-only solutions, which underscores the benefits of using accelerated computing for radio signal processing.
From these scenarios, it is evident that AI-RAN is highly profitable as compared to RAN-only solutions, in both AI-heavy and RAN-heavy modes. In essence, AI-RAN transforms traditional RAN from a cost center to a profit center.
The profitability per server improves with higher AI use. Even in RAN-only, AI-RAN infrastructure is more cost-efficient than custom RAN-only options.
Key assumptions used for the revenue and TCO calculations include the following:
The respective numbers of platforms, servers, and racks for each platform are calculated using a common baseline of deploying 600 cells on the same frequency, 4T4R.
The total cost of ownership (TCO) is calculated over 5 years and includes the cost of hardware, software, and vRAN and AI operating costs.
For the new AI revenue calculation, we used $20/hr/server based on GB200 NVL2 AI performance benchmarks.
OpEx costs are based on local Japan power costs and aren't extensible worldwide.
ROI % = (new AI revenues - TCO) / TCO (a worked illustration of this formula follows this list)
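As a worked illustration of that formula: the CapEx and revenue figures below are invented round numbers (not SoftBank's), and the OpEx figure is back-solved so that the stated 5x revenue-per-CapEx and 219% ROI come out mutually consistent.

```python
def roi_percent(new_ai_revenue: float, tco: float) -> float:
    """ROI % = (new AI revenues - TCO) / TCO, as defined above."""
    return (new_ai_revenue - tco) / tco * 100

capex = 100.0                  # arbitrary units
opex_5yr = 56.7                # assumed 5-year operating cost, back-solved for this illustration
tco = capex + opex_5yr
new_ai_revenue = 5 * capex     # the "5x revenue per $ of CapEx" case

print(f"ROI = {roi_percent(new_ai_revenue, tco):.0f}%")   # ~219% with these inputs
```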
This validation of AI revenue upside, energy efficiency, and profitability of AI-RAN leaves no doubts about the feasibility, performance, and economic benefits of the technology.
Going forward, exponential gains with each generation of NVIDIA superchips, such as Vera Rubin, will multiply these benefits by orders of magnitude further, enabling the much-awaited business transformation of telco networks.
Looking ahead
SoftBank and NVIDIA are
continuing to collaborate
toward the commercialization of AI-RAN and bringing new applications to life. The next phase of the engagements will entail work on AI-for-RAN to improve spectral efficiency and on NVIDIA Aerial Omniverse digital twins to simulate accurate physical networks in the digital world for fine-tuning and testing.
NVIDIA AI Aerial lays the foundation for operators and ecosystem partners globally to use the power of accelerated computing and software-defined RAN + AI to transform 5G and 6G networks. You can now use NVIDIA Aerial RAN Computer-1 and AI Aerial software libraries to develop your own implementation of AI-RAN.
NVIDIA AI Enterprise is also helping create new AI applications for telcos, hostable on AI-RAN, as is evident from this trial where many NVIDIA software toolkits have been used. This includes NIM microservices for generative AI, RAG, VLMs, NVIDIA Isaac for robotics training, NVIDIA NeMo, RAPIDS, NVIDIA Triton for inferencing, and a serverless API for AI brokering.
The telecom industry is at the forefront of a massive opportunity to become an AI service provider. AI-RAN can kickstart this new renaissance for telcos worldwide, using accelerated computing as the new foundation for wireless networks.
This announcement marks a breakthrough moment for AI-RAN technology, proving its feasibility, carrier-grade performance, superior energy efficiency, and economic value. Every dollar of CapEx invested in NVIDIA-accelerated AI-RAN infrastructure generates 5x revenues, while being 6G-ready.
The journey to AI monetization can start now. | https://developer.nvidia.com/ja-jp/blog/ai-ran-goes-live-and-unlocks-a-new-ai-opportunity-for-telcos/ | AI-RAN ãéä¿¡äºæ¥è
åãã«æ°ãã AI ã®ããžãã¹ ãã£ã³ã¹ããããã | Reading Time:
4
minutes
AI ã¯ãæ¥çãäŒæ¥ãæ¶è²»è
ã®äœéšãæ°ããæ¹æ³ã§å€é©ããŠããŸãã çæ AI ã¢ãã«ã¯æšè«ã«ç§»è¡ãã
ãšãŒãžã§ã³ãå AI
ã¯æ°ããçµæéèŠã®ã¯ãŒã¯ãããŒãå¯èœã«ã
ãã£ãžã«ã« AI
ã«ãããã«ã¡ã©ãããããããããŒã³ãèªåè»ãªã©ã®ãšã³ããã€ã³ãããªã¢ã«ã¿ã€ã ã§ææ決å®ãè¡ãã察話ã§ããããã«ãªããŸãã
ãããã®ãŠãŒã¹ ã±ãŒã¹ã«å
±éããã®ã¯ãæ®åããä¿¡é Œæ§ãé«ããå®å
šã§ãè¶
é«éãªæ¥ç¶ãå¿
èŠã§ããããšã§ãã
éä¿¡ãããã¯ãŒã¯ã¯ãããã³ãããŒã«ç¡ç·ã¢ã¯ã»ã¹ ãããã¯ãŒã¯ãä»ããŠçŽæ¥éä¿¡ããããããšã³ã¿ãŒãã©ã€ãº ã¢ããªã±ãŒã·ã§ã³ã«ãã£ãŠçæããããããªã㯠ã¯ã©ãŠããŸãã¯ãã©ã€ããŒã ã¯ã©ãŠãããã®ããã¯ããŒã«ããã®å®å
šã«ã¹ã¿ã³ãã¢ãã³ã® AI æšè«ãã©ãã£ãã¯ã®ãããªæ°ããçš®é¡ã® AI ãã©ãã£ãã¯ã«åããå¿
èŠããããŸãã
ããŒã«ã« ã¯ã€ã€ã¬ã¹ ã€ã³ãã©ã¹ãã©ã¯ãã£ã¯ãAI æšè«ãåŠçããã®ã«æé©ãªå ŽæãæäŸããŸãã ããã¯ãéä¿¡äŒç€Ÿ ãããã¯ãŒã¯ã«å¯Ÿããæ°ããã¢ãããŒãã§ãã AI ç¡ç·ã¢ã¯ã»ã¹ ãããã¯ãŒã¯ (
AI-RAN
) ã®ç¹åŸŽã§ãã
åŸæ¥ã® CPU ãŸã㯠ASIC ããŒã¹ã® RAN ã·ã¹ãã ã¯ãRAN ã®ã¿ã®ããã«èšèšãããŠãããçŸåšã§ã¯ AI ãã©ãã£ãã¯ãåŠçã§ããŸããã AI-RAN ã¯ãã¯ã€ã€ã¬ã¹ãš AI ã®ã¯ãŒã¯ããŒããåæã«å®è¡ã§ããå
±éã® GPU ããŒã¹ã®ã€ã³ãã©ã¹ãã©ã¯ãã£ãæäŸããŸããããã«ããããããã¯ãŒã¯ãåäžç®çããå€ç®çã€ã³ãã©ã¹ãã©ã¯ãã£ã«å€ããã³ã¹ã ã»ã³ã¿ãŒãããããã£ãã ã»ã³ã¿ãŒã«å€ããããŸãã
é©åãªçš®é¡ã®ãã¯ãããžã«æŠç¥çæè³ãè¡ãããšã§ãéä¿¡äŒç€Ÿã¯æ¥çãæ¶è²»è
ãäŒæ¥ã«ããã£ãŠ AI ã®äœæãé
ä¿¡ã䜿çšã容æã«ããã AI ã°ãªãããžãšé£èºããããšãã§ããŸããä»ãéä¿¡äŒç€Ÿã«ãšã£ãŠãäžå€®éäžçã§åæ£ãããã€ã³ãã©ã¹ãã©ã¯ãã£ãåå©çšããããšã§ãAI ãã¬ãŒãã³ã° (äœæ) ãš AI æšè« (é
ä¿¡) ã®ããã®ãã¡ããªãã¯ãæ§ç¯ãã倧ããªæ©äŒãšãªããŸãã
SoftBank ãš NVIDIA ã AI-RANã®åçšåãé²ãã
SoftBank ã¯ãNVIDIA ã¢ã¯ã»ã©ã¬ãŒã·ã§ã³ ããŒããŠã§ã¢ãš NVIDIA Aerial ãœãããŠã§ã¢ãæè¡åºç€ãšããŠæŽ»çšãã
ç¥å¥å·çè€æ²¢åžã§å±å€
ãã£ãŒã«ã ãã©ã€ã¢ã«ãæåããã
AI-RAN ããžã§ã³ã
çŸå®ã®ãã®ã«ããŸããã
ãã®éæã¯ãAI-RAN ã®åçšåã«åãã倧ããªåé²ã§ããããã¯ãããžã®å®çŸæ§ãããã©ãŒãã³ã¹ãåçåã«é¢ããæ¥çã®èŠä»¶ã«å¯Ÿå¿ããå®èšŒãã€ã³ããæäŸããŸãã
NVIDIA ã®ã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã° ãã©ãããã©ãŒã ã§å®è¡ãããäžçåã®å±å€ 5G AI-RAN ãã£ãŒã«ã ãã©ã€ã¢ã«ã ããã¯ã5G ã³ã¢ãšçµ±åããããã«ã¹ã¿ãã¯ã®ä»®æ³ 5G RAN ãœãããŠã§ã¢ã«åºã¥ããšã³ãããŒãšã³ãã®ãœãªã¥ãŒã·ã§ã³ã§ãã
ãã£ãªã¢ ã°ã¬ãŒãã®ä»®æ³ RAN ã®ããã©ãŒãã³ã¹ãå®çŸã
AI ãš RAN ã®ãã«ãããã³ããšãªãŒã±ã¹ãã¬ãŒã·ã§ã³ãå®çŸã
ãšãã«ã®ãŒå¹çãšçµæžçãªã¡ãªããããæ¢åã®ãã³ãããŒã¯ãšæ¯èŒããŠæ€èšŒãããŸããã
AI-RAN ã€ã³ãã©ã¹ãã©ã¯ãã£ã«çµ±åããã AI ããŒã±ãããã¬ã€ã¹ãæäŸããæ°ãããœãªã¥ãŒã·ã§ã³ã
AI-RAN ãããã¯ãŒã¯ã§å®è¡ãããå®éã® AI ã¢ããªã±ãŒã·ã§ã³ã玹ä»ãããŸãã
äœããããSoftBank ã¯ãäžçäžã«å±éããããã«ãç¬èªã® AI-RAN 補åãåæ¥çã«ãªãªãŒã¹ããããšãç®æããŠããŸãã
ä»ã®éä¿¡äºæ¥è
ãä»ãã AI-RAN ã®å°å
¥ãæ¯æŽããããã«ãSoftBank ã¯ãAI-RAN ãè©Šçšããããã«å¿
èŠãªããŒããŠã§ã¢ãšãœãããŠã§ã¢ã®èŠçŽ ã§æ§æããããªãã¡ã¬ã³ã¹ ãããããç°¡åãã€è¿
éã«æäŸããäºå®ã§ãã
ãšã³ãããŒãšã³ãã® AI-RAN ãœãªã¥ãŒã·ã§ã³ãšãã£ãŒã«ã ãã©ã€ã¢ã«ã®çµæ
SoftBank ã¯ãNVIDIA ãšãšã³ã·ã¹ãã ããŒãããŒã®ããŒããŠã§ã¢ãšãœãããŠã§ã¢ ã³ã³ããŒãã³ããçµ±åãããã£ãªã¢ã°ã¬ãŒãã®èŠä»¶ãæºããããã«åŒ·åããããšã§ãAI-RAN ãœãªã¥ãŒã·ã§ã³ãéçºããŸããã ãã®ãœãªã¥ãŒã·ã§ã³ã¯ãNVIDIA GH200 (CPU+GPU)ãNVIDIA Bluefield-3 (NIC/DPU)ãããã³ãããŒã«ããã³ããã¯ããŒã« ãããã¯ãŒãã³ã°çšã® Spectrum-X ã§å®è¡ããã 100% ãœãããŠã§ã¢ ããã¡ã€ã³ãã®å®å
šãª 5G vRAN ã¹ã¿ãã¯ãå®çŸããŸãã 20 å°ã®ç¡ç·ãŠããããš 5G ã³ã¢ ãããã¯ãŒã¯ãçµ±åãã100 å°ã®ã¢ãã€ã« UE ãæ¥ç¶ããŸãã
ã³ã¢ ãœãããŠã§ã¢ ã¹ã¿ãã¯ã«ã¯ã以äžã®ã³ã³ããŒãã³ããå«ãŸããŠããŸãã
SoftBank ã
NVIDIA Aerial CUDA-Accelerated-RAN
ã©ã€ãã©ãªã䜿çšããŠã 5G RAN ã¬ã€ã€ãŒ 1 ã®ãã£ãã« ãããã³ã°ããã£ãã«æšå®ãå€èª¿ãåæ¹ãšã©ãŒèšæ£ãªã©ã®æ©èœãéçºããæé©åããŸããã
ã¬ã€ã€ãŒ 2 æ©èœåã Fujitsu ãœãããŠã§ã¢
ã³ã³ãããŒã®ä»®æ³åã¬ã€ã€ãŒãšããŠã® Red Hat ã® OpenShift Container Platform (OCP) ã«ãããåãåºç€ãšãªã GPU ã³ã³ãã¥ãŒãã£ã³ã° ã€ã³ãã©ã¹ãã©ã¯ãã£ã§ç°ãªãã¿ã€ãã®ã¢ããªã±ãŒã·ã§ã³ãå®è¡ãããŸã
SoftBank ãéçºãã E2EãAI ãš RAN ãªãŒã±ã¹ãã¬ãŒã¿ãŒãéèŠãšäœ¿çšå¯èœãªå®¹éã«åºã¥ã㊠RAN ãš AI ã®ã¯ãŒã¯ããŒãã®ã·ãŒã ã¬ã¹ãªããããžã§ãã³ã°ãå¯èœã«ããŸãã
åºç€ãšãªãããŒããŠã§ã¢ã¯ã
NVIDIA GH200 Grace Hopper Superchip
ã§ãããåæ£åããéäžå RAN ã·ããªãªãŸã§ãããŸããŸãªæ§æã§äœ¿çšã§ããŸãã ãã®å®è£
ã§ã¯ãéçŽããã RAN ã®ã·ããªãªã®ããã«ã1 ã€ã®ã©ãã¯ã§è€æ°ã® GH200 ãµãŒããŒã䜿çšããAI ãš RAN ã®ã¯ãŒã¯ããŒããåæã«åŠçããŸãã ããã¯ãåŸæ¥ã® RAN åºå°å±ãè€æ°å±éããã®ã«çžåœããŸãã
ãã®ãã€ãããã§ã¯ãRAN ã®ã¿ã®ã¢ãŒãã§äœ¿çšãããå Žåãå GH200 ãµãŒããŒã¯ã100 MHz 垯åå¹
㧠20 åã® 5G ã»ã«ãåŠçããããšãã§ããŸããã åã»ã«ã§ã¯ãçæ³çãªæ¡ä»¶äžã§ 1.3 Gbps ã®ããŒã¯ ããŠã³ãªã³ã¯æ§èœãéæãããå±å€å±éã§ã¯ãã£ãªã¢ã°ã¬ãŒãã®å¯çšæ§ã§ 816 Mbps ãå®èšŒãããŸããã
AI-RAN ã®ãã«ãããã³ããå®çŸ
AI-RAN ãã¯ãããžã®ç¬¬äžã®ååã® 1 ã€ã¯ããã£ãªã¢ã°ã¬ãŒãã®ããã©ãŒãã³ã¹ãæãªãããšãªããRAN ãš AI ã®ã¯ãŒã¯ããŒããåæã«å®è¡ã§ããããšã§ãã ãã®ãã«ãããã³ãã¯ãæéãŸãã¯ç©ºéã®ããããã§å®è¡ã§ããæé垯ãŸãã¯ã³ã³ãã¥ãŒãã£ã³ã°ã®å²åã«åºã¥ããŠãªãœãŒã¹ãåå²ããŸãã ãŸããããã¯ã䜿çšå¯èœãªå®¹éã«åºã¥ããŠãã¯ãŒã¯ããŒããã·ãŒã ã¬ã¹ã«ããããžã§ãã³ã°ãããããžã§ãã³ã°ã®è§£é€ãã·ããã§ãããªãŒã±ã¹ãã¬ãŒã¿ãŒã®å¿
èŠæ§ãæå³ããŸãã
è€æ²¢åžã®å®èšŒå®éšã§ã¯ãRAN ãš AI ã¯ãŒã¯ããŒãéã®ãªãœãŒã¹ã®éçå²ãåœãŠã«åºã¥ããŠãGH200 äžã§ã® AI ãš RAN ã®åæåŠçãå®èšŒãããŸããã (å³ 1)ã
å³ 1. AI ãš RAN ã®åæåŠçãš GPU ã®åèšäœ¿çšç
å NVIDIA GH200 ãµãŒããŒã¯ãè€æ°ã® MIG (ãã«ãã€ã³ã¹ã¿ã³ã¹ GPU) ã§æ§æããã1 ã€ã® GPU ãè€æ°ã®ç¬ç«ãã GPU ã€ã³ã¹ã¿ã³ã¹ã«åå²ã§ããŸãã åã€ã³ã¹ã¿ã³ã¹ã«ã¯ãã¡ã¢ãªããã£ãã·ã¥ãã³ã³ãã¥ãŒãã£ã³ã° ã³ã¢ãªã©ãç¬èªã®å°çšãªãœãŒã¹ããããç¬ç«ããŠåäœã§ããŸãã
SoftBank ãªãŒã±ã¹ãã¬ãŒã¿ãŒã¯ãAI ãå®è¡ããããã« GPU å
šäœãŸã㯠GPU ã®äžéšãã€ã³ããªãžã§ã³ãã«å²ãåœãŠãRAN ã®ã¯ãŒã¯ããŒããå®è¡ããå¿
èŠã«å¿ããŠåçã«åãæ¿ããŸãã éèŠã«åºã¥ãå²ãåœãŠã§ã¯ãªããRAN ãš AI ã«äžå®ã®å²ãåœãŠããRAN ã« 60% ãš AI ã« 40% ã®ã³ã³ãã¥ãŒãã£ã³ã°ãéçã«å²ãåœãŠãããšãã§ããŸãã
ç®æšã¯ã容é䜿çšçãæ倧åããããšã§ãã AI-RAN ã䜿çšãããšãéä¿¡äŒç€Ÿã¯ãéåžžã® RAN ã®ã¿ã®ãããã¯ãŒã¯ã§ã® 33% ã®å®¹é䜿çšçãšæ¯èŒããŠãã»ãŒ 100% ã®äœ¿çšçãå®çŸã§ããŸãã ããã¯ãåçãªãªãŒã±ã¹ãã¬ãŒã·ã§ã³ãšåªå
é äœä»ãããªã·ãŒã®ãããã§ãããŒã¯ã® RAN ã®è² è·ã«å¯Ÿå¿ããªãããæ倧 3 åã®å¢å ã§ãã
AI-RAN ããŒã±ãããã¬ã€ã¹ã®å®çŸ
åæ£å AI-RAN ã€ã³ãã©ã¹ãã©ã¯ãã£ã§ AI ã³ã³ãã¥ãŒãã£ã³ã°ã®æ°ããæ©èœãå©çšã§ããããã«ãªã£ãããããã® AI ã³ã³ãã¥ãŒãã£ã³ã°ã®äŸçµŠã« AI ã®éèŠãã©ã®ããã«åã蟌ãããšããçåãçããŸãã
ãã®åé¡ã解決ããããã«ãSoftBank ã¯ãNVIDIA AI ãšã³ã¿ãŒãã©ã€ãº ã掻çšãããµãŒããŒã¬ã¹ API ã䜿çšããŠãã»ãã¥ãªãã£ãæ¡åŒµæ§ãä¿¡é Œæ§ãåã㊠AI-RAN 㧠AI ã¯ãŒã¯ããŒããå±éãã管çããŸããã NVIDIA AI ãšã³ã¿ãŒãã©ã€ãºã®ãµãŒããŒã¬ã¹ API ã¯ãAI-RAN ã€ã³ãã©ã¹ãã©ã¯ãã£ã§ãã¹ããããSoftBank E2E AI-RAN ãªãŒã±ã¹ãã¬ãŒã¿ãŒãšçµ±åãããŠããŸãã åã API ãå®è¡ãããããªã㯠ã¯ã©ãŠããŸãã¯ãã©ã€ããŒã ã¯ã©ãŠãã«æ¥ç¶ããã³ã³ãã¥ãŒãã£ã³ã°ãå©çšå¯èœã«ãªã£ããšãã«ãå€éšã® AI æšè«ãžã§ãã AI-RAN ãµãŒããŒã«å²ãåœãŠãŸã (å³ 2)ã
å³ 2. SoftBank AI-RAN ãšçµ±åããã AI ããŒã±ãããã¬ã€ã¹ ãœãªã¥ãŒã·ã§ã³
ãã®ãœãªã¥ãŒã·ã§ã³ã«ãã AI ããŒã±ãããã¬ã€ã¹ãå®çŸãããœãããã³ã¯ã¯ããŒã«ã©ã€ãºãããäœé
延ã®å®å
šãªæšè«ãµãŒãã¹ãæäŸã§ããããã«ãªããŸãã ãŸããç¹ã«å€éšã® AI æšè«ã®ä»äºã®ããã«ãéä¿¡äŒç€Ÿã AI é
ä¿¡ã°ãªããã«ãªãã®ãæ¯æŽããäžã§ AI-RAN ã®éèŠæ§ãå®èšŒããæ°ããåçã®æ©äŒãäœããŸãã
AI-RAN ã¢ããªã±ãŒã·ã§ã³ã玹ä»
ãã®å±å€ã®è©Šçšã§ã¯ãSoftBank ãéçºããæ°ãããšããž AI ã¢ããªã±ãŒã·ã§ã³ãã©ã€ã AI-RAN ãããã¯ãŒã¯ã§ãã¢ã³ã¹ãã¬ãŒã·ã§ã³ãããŸããã
5G ãä»ããèªåé転è»ã®ãªã¢ãŒã ãµããŒã
å·¥å Žåºè·æã®ãã«ãã¢ãŒãã«
ãããã£ã¯ã¹ ã¢ããªã±ãŒã·ã§ã³
5G ãä»ããèªåé転è»ã®ãªã¢ãŒã ãµããŒã
èªåé転ã®ç€ŸäŒçå®è£
ã®éèŠãªèŠä»¶ã¯ãè»ã®å®å
šæ§ãšéçšã³ã¹ãã®åæžã§ãã
è€æ²¢åžã®å®èšŒå®éšã§ã¯ããœãããã³ã¯ãèªåé転è»ãå®æŒããåæ¹ã«ã¡ã©ã®æ åã 5G 㧠AI-RAN ãµãŒããŒã«ãã¹ãããã AI ããŒã¹ã®é éãµããŒã ãµãŒãã¹ã«äžç¶ããã ãã«ãã¢ãŒãã« AI ã¢ãã«ã¯ããã㪠ã¹ããªãŒã ãåæãããªã¹ã¯è©äŸ¡ãè¡ãã5G ãä»ããããã¹ãã䜿çšããŠèªåé転è»ã«æšå¥šã®ã¢ã¯ã·ã§ã³ãéä¿¡ããŸããã
ããã¯ã説æå¯èœãª AI ã®äŸã§ããããŸãããªã¢ãŒã ãµããŒãã®ããã®èŠçŽãããããã¹ããšãã°ãéããŠãèªåé転è»ã®ãã¹ãŠã®åäœãç£èŠãã説æããããšãã§ããŸããã
å·¥å Žåºè·æã®ãã«ãã¢ãŒãã«
ãã®ãŠãŒã¹ ã±ãŒã¹ã§ã¯ããããªããªãŒãã£ãªãã»ã³ãµãŒ ããŒã¿ãå«ããã«ãã¢ãŒãã«å
¥åãã5G ã䜿çšã㊠AI-RAN ãµãŒããŒã«ã¹ããªãŒãã³ã°ãããŸãã AI-RAN ãµãŒããŒã§ãã¹ãããããè€æ°ã® LLMãVLMãæ€çŽ¢æ¡åŒµçæ (RAG) ãã€ãã©ã€ã³ãNVIDIA NIM ãã€ã¯ããµãŒãã¹ã¯ããããã®å
¥åãçµ±åãã5G ã䜿çšãããŠãŒã¶ãŒããã£ãã ã€ã³ã¿ãŒãã§ã€ã¹ãä»ããŠæ
å ±ã«ã¢ã¯ã»ã¹ã§ããããã«ããããã«äœ¿çšãããŸãã
ããã¯ãå·¥å Žã®ç£èŠã建èšçŸå Žã®æ€æ»ãåæ§ã®è€éãªå±å
ããã³å±å€ã®ç°å¢ã«æé©ã§ãã ãã®ãŠãŒã¹ ã±ãŒã¹ã§ã¯ããšããž AI-RAN ãããŒã¿ ã¢ã¯ã»ã¹ãšåæãããŒã«ã«ãå®å
šããã©ã€ããŒãã«ä¿ã€ããšã§ãããŒã«ã« ããŒã¿ã®äž»æš©ãå®çŸããæ¹æ³ã瀺ããŠããŸããããã¯ãã»ãšãã©ã®äŒæ¥ã«ãšã£ãŠå¿
é ã®èŠä»¶ã§ãã
ãããã£ã¯ã¹ ã¢ããªã±ãŒã·ã§ã³
SoftBank ã¯ã5G ãä»ããŠæ¥ç¶ãããããããã®ãšããž AI æšè«ã®å©ç¹ãå®èšŒããŸããã ããããã°ã¯ã声ãšåãã«åºã¥ããŠäººéãè¿œãããã«ãã¬ãŒãã³ã°ãããŸããã
ãã®ãã¢ã§ã¯ãAI æšè«ãããŒã«ã« AI-RAN ãµãŒããŒã§ãã¹ãããããšãã®ããããã®å¿çæéãšãã»ã³ãã©ã« ã¯ã©ãŠãã§ãã¹ãããããšãã®å¿çæéãæ¯èŒããŸããã ãã®éãã¯æçœã§ããã ãšããž ããŒã¹ã®æšè« ããããã°ã¯ã人éã®åããå³åº§ã«è¿œè·¡ããŸããããã¯ã©ãŠã ããŒã¹ã®æšè«ããããã¯ãè¿œãã€ãã®ã«èŠåŽããŸããã
Aerial RAN Computer-1 㧠AI-RAN ã®ããžãã¹ ã±ãŒã¹ãé«éå
AI-RAN ããžã§ã³ã¯æ¥çã§åãå
¥ããããŠããŸãããGPU 察å¿ã€ã³ãã©ã¹ãã©ã¯ãã£ã®ãšãã«ã®ãŒå¹çãšçµæžæ§ãç¹ã«åŸæ¥ã® CPU ããã³ ASIC ããŒã¹ã® RAN ã·ã¹ãã ãšã®æ¯èŒã¯äŸç¶ãšããŠéèŠãªèŠä»¶ã§ãã
AI-RAN ã®ãã®ã©ã€ã ãã£ãŒã«ã ãã©ã€ã¢ã«ã«ãããSoftBank ãš NVIDIA ã¯ãGPU 察å¿ã® RAN ã·ã¹ãã ãå®çŸå¯èœã§ãé«æ§èœã§ããããšãå®èšŒããã ãã§ãªãããšãã«ã®ãŒå¹çãšçµæžçãªåçæ§ã倧å¹
ã«åäžããŠããããšãå®èšŒããŸããã
NVIDIA ã¯æè¿ã次äžä»£ NVIDIA Grace Blackwell Superchip ãããŒã¹ã«ãã
Aerial RAN Computer-1
ãæšå¥š AI-RAN å±éãã©ãããã©ãŒã ãšããŠçºè¡šããŸããã ç®çã¯ãGB200-NVL2 ãããŒã¹ãšãã SoftBank 5G vRAN ãœãããŠã§ã¢ã NVIDIA GH200 ãã NVIDIA Aerial RAN Computer-1 ã«ç§»è¡ããããšã§ããããã¯ãã³ãŒãããã§ã« CUDA ã«å¯Ÿå¿ããŠããããã移è¡ã容æã§ãã
ãŸãã
GB200-NVL2
ã䜿çšãããšãAI-RAN ã§å©çšå¯èœãªã³ã³ãã¥ãŒãã£ã³ã°èœåã 2 åã«ãªããŸãã AI åŠçæ©èœã¯ã以åã® H100 GPU ã·ã¹ãã ãšæ¯èŒããŠãLlama-3 æšè«ã 5 åãããŒã¿åŠçã 18 åããã¯ãã« ããŒã¿ããŒã¹æ€çŽ¢ã 9 åã«æ¹åãããŸãã
ãã®è©äŸ¡ã®ããã«ãã¿ãŒã²ããã®å±é ãã©ãããã©ãŒã ãGB200 NVL2 ãããŒã¹ãšãã Aerial RAN Computer-1ãææ°äžä»£ã® x86 ãšã¯ã©ã¹æé«ã®ã«ã¹ã¿ã RAN 補åãã³ãããŒã¯ãæ¯èŒãã以äžã®çµæãæ€èšŒããŸããã
é«éåããã AI-RAN ã¯ãã¯ã©ã¹æé«ã® AI ããã©ãŒãã³ã¹ãæäŸããŸã
é«éåããã AI-RAN ã¯æç¶å¯èœãª RAN
é«éåããã AI-RAN ã¯ãéåžžã«åçæ§ãé«ã
é«éåããã AI-RAN ã¯ãã¯ã©ã¹æé«ã® AI ããã©ãŒãã³ã¹ãæäŸããŸã
100% AI ã®ã¿ã®ã¢ãŒãã§ã¯ãå GB200-NVL2 ãµãŒããŒã¯ãæ¯ç§ 25,000 ããŒã¯ã³ãçæããŸããããã¯ããµãŒã㌠1 å°ã®åçåå¯èœãªã³ã³ãã¥ãŒãã£ã³ã°ã®å©çšçã 20 ãã«/æéããŸãã¯ãµãŒããŒãããã®æ15,000 ãã«ã«æç®ããŸãã
çŸåšã®ã¯ã€ã€ã¬ã¹ ãµãŒãã¹ã®ãŠãŒã¶ãŒ 1 人ã®å¹³ååç (ARPU) ã¯ãåœã«ãã£ãŠã¯æ 5 ïœ 50 ãã«ã®ç¯å²ã§ããããšã«çæããŠãAI-RAN ã¯ãRAN ã®ã¿ã®ã·ã¹ãã ãããæ°åã®é«ããæ°ååãã«èŠæš¡ã® AI åçã®æ©äŒãæäŸããŸãã
䜿çšãããããŒã¯ã³ AI ã¯ãŒã¯ããŒãã¯ãLlama-3-70B FP4 ã§ãããAI-RAN ããã§ã«äžçã§æãé«åºŠãª LLM ã¢ãã«ãå®è¡ã§ããããšãå®èšŒããŸãã
é«éåããã AI-RAN ã¯æç¶å¯èœãª RAN
100% RAN ã®ã¿ã®ã¢ãŒãã§ã¯ãGB200-NVL2 ãµãŒããŒã®é»åããã©ãŒãã³ã¹ã¯ãã¯ãã/Gbps ã§ä»¥äžã®å©ç¹ããããŸãã
ä»æ¥ãã¯ã©ã¹æé«ã®ã«ã¹ã¿ã RAN ã®ã¿ã®ã·ã¹ãã ãšæ¯èŒããŠãæ¶è²»é»åã 40% åæž
x86 ããŒã¹ã® vRAN ãšæ¯èŒããŠãæ¶è²»é»åã 60% åæž
æ¯èŒã®ããã«ãããã¯ãã¹ãŠã®ãã©ãããã©ãŒã ã§åãæ°ã® 100 MHz 4T4R ã»ã«ãšã100% RAN ã®ã¿ã®ã¯ãŒã¯ããŒããæ³å®ããŠããŸãã
å³ 3. RAN ã®æ¶è²»é»åãšããã©ãŒãã³ã¹ (ã¯ãã/Gbps)
é«éåããã AI-RAN ã¯ãéåžžã«åçæ§ãé«ã
ãã®è©äŸ¡ã®ããã«ãæ¯èŒããã 3 ã€ã®ãã©ãããã©ãŒã ã®ãããã㧠RAN å±éã®å
±éã®ããŒã¹ã©ã€ã³ãšããŠãæ±äº¬éœã® 1 å°åºã 600 ã»ã«ã§ã«ããŒããã·ããªãªã䜿çšããŸããã 次ã«ãRAN ã®ã¿ãã RAN ãéããããŸã㯠AI ãéèŠãããŸã§ãAI ãš RAN ã®ã¯ãŒã¯ããŒãååžã®è€æ°ã®ã·ããªãªã調ã¹ãŸããã
AI ãå€ãã·ããªãª (å³ 4) ã§ã¯ãRAN ã 3 åã® 1ãAI ã¯ãŒã¯ããŒãã 3 åã® 2 ãåæ£ããŸããã
NVIDIA GB200 NVL2 ãããŒã¹ãšããé«éåããã AI-RAN ã€ã³ãã©ã¹ãã©ã¯ãã£ãžã®è³æ¬æ¯åº (CapEx) æè³é¡ã®1ãã«ã«å¯ŸããŠãéä¿¡äŒç€Ÿã¯ 5 幎é㧠5 åã®åçãçã¿åºãããšãã§ããŸãã
ROI ã®èŠ³ç¹ãããè³æ¬æ¯åºãšéçšæ¯åºã®ãã¹ãŠã®ã³ã¹ããèæ
®ããŠãæè³å
šäœã¯ 219% ã®ãªã¿ãŒã³ãå®çŸããŸããããã¯ãçŸå°ã®ã³ã¹ãæ³å®ã䜿çšããŠããããããã¡ãã SoftBank ç¹æã®ãã®ã§ãã
å³ 4. 600 ã»ã«ã§ 1 ã€ã®æ±äº¬éœå°åºãã«ããŒãã AI-RAN ã®çµæžæ§
33% AIãš 67% RAN
67% AI ãš 33% RAN
CapEx 1 ãã«ãããã®åç $
2x
5x
ROI %
33%
219%
è¡š 1. AI ãå€çšããã·ããªãªãšæ¯èŒããçµæ
RAN ãå€çšããã·ããªãªã§ã¯ã3 åã® 2 ã RANã3 åã® 1 ã AI ã¯ãŒã¯ããŒãåæ£ã«äœ¿çšããNVIDIA ã¢ã¯ã»ã©ã¬ãŒã·ã§ã³ AI-RAN ã® CapEx ã§å²ã£ãåç㯠2 åã«ãªããSoftBank ã®ããŒã«ã« ã³ã¹ãæ³å®ã䜿çšã㊠5 幎é㧠33% ã® ROI ãåŸãããããšãããããŸããã
RAN ã®ã¿ã®ã·ããªãªã§ã¯ãNVIDIA Aerial RAN Computer-1 ã¯ã«ã¹ã¿ã RAN ã®ã¿ã®ãœãªã¥ãŒã·ã§ã³ãããã³ã¹ãå¹çãé«ããç¡ç·ä¿¡å·åŠçã«ã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã°ã䜿çšãã倧ããªå©ç¹ãšãªããŸãã
ãããã®ã·ããªãªãããAI ãå€çšããã¢ãŒã RAN ãå€çšããã¢ãŒãã®äž¡æ¹ã§ãRAN ã®ã¿ã®ãœãªã¥ãŒã·ã§ã³ãšæ¯èŒããŠãAI-RAN ãé«ãåçæ§ãæããã«ãªããŸãã æ¬è³ªçã«ãAI-RAN ã¯ãåŸæ¥ã® RAN ãã³ã¹ã ã»ã³ã¿ãŒããå©çã»ã³ã¿ãŒã«å€é©ããŸãã
AI ã®äœ¿çšéã®å¢å ã«ããããµãŒããŒãããã®åçæ§ãåäžããŸãã RAN ã®ã¿ã®å Žåã§ããAI-RAN ã€ã³ãã©ã¹ãã©ã¯ãã£ã¯ãã«ã¹ã¿ã RAN ã®ã¿ã®ãªãã·ã§ã³ãããã³ã¹ãå¹çãé«ããªããŸãã
åçãš TCO ã®èšç®ã«äœ¿çšãããäž»ãªåææ¡ä»¶ã«ã¯ã次ã®ãã®ãå«ãŸããŸãã
åãã©ãããã©ãŒã ã®ãã©ãããã©ãŒã ããµãŒããŒãã©ãã¯ã®ããããã®æ°ã¯ãåãåšæ³¢æ°ã§ãã 4T4R 㧠600 ã»ã«ããããã€ããå
±éã®ããŒã¹ã©ã€ã³ã䜿çšããŠèšç®ãããŸãã
ç·ææã³ã¹ã (TCO) ã¯ã5 幎以äžã§èšç®ãããŠãããããŒããŠã§ã¢ããœãããŠã§ã¢ãvRANãAI ã®éçšã³ã¹ããå«ãŸããŠããŸãã
æ°ãã AI åçã®èšç®ã«ã¯ãGB200 NVL2 AI ããã©ãŒãã³ã¹ ãã³ãããŒã¯ã«åºã¥ããŠããµãŒããŒãããã®æé 20 ãã«ã䜿çšããŸããã
éçšæ¯åºã³ã¹ãã¯ãæ¥æ¬ã®çŸå°ã®é»åã³ã¹ãã«åºã¥ããŠãããäžççã«æ¡åŒµããããšã¯ã§ããŸããã
ROI % = (æ°ãã AI åç â TCO) / TCO
AI ã®åçã®åäžããšãã«ã®ãŒå¹çãåçæ§ãåçæ§ã®ãã®æ€èšŒã«ããããã®ãã¯ãããžã®å®çŸæ§ãããã©ãŒãã³ã¹ãçµæžçãªã¡ãªããã«çãã®äœå°ã¯ãããŸããã
ä»åŸãVera Rubin ãªã©ã® NVIDIAã¹ãŒããŒãããã®åäžä»£ãææ°é¢æ°çã«å¢å ããããšã§ããããã®ã¡ãªããã¯ããã«æ¡éãã«å¢å€§ããåŸ
æã®éä¿¡ãããã¯ãŒã¯ã®ããžãã¹å€é©ãå¯èœã«ãªããŸãã
å°æ¥ãèŠæ®ãã
SoftBank ãš NVIDIA ã¯ãAI-RAN ã®åæ¥åãšæ°ããã¢ããªã±ãŒã·ã§ã³ãçã¿åºãããã«ã
ç¶ç¶çã«åå
ããŠããŸãã ãã®å¥çŽã®æ¬¡ã®ãã§ãŒãºã§ã¯ãã¹ãã¯ãã«å¹çãåäžããã AI-for-RAN ã®åãçµã¿ãšããã¡ã€ã³ãã¥ãŒãã³ã°ãšãã¹ãã®ããã«ããžã¿ã« ãããã¯ãŒã¯ãã·ãã¥ã¬ãŒããã NVIDIA Aerial Omniverse ããžã¿ã« ãã€ã³ã®åãçµã¿ãå«ãŸããŸãã
NVIDIA AI Aerial ã¯ãäžçäžã®éä¿¡äºæ¥è
ãšãšã³ã·ã¹ãã ããŒãããŒããã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã°ãšãœãããŠã§ã¢ ããã¡ã€ã³ã RAN + AI ã®ãã¯ãŒã䜿çšããŠã5G ããã³ 6G ãããã¯ãŒã¯ãå€é©ããåºç€ãç¯ããŸãã NVIDIA Aerial RAN Computer-1 ãš AI Aerial ãœãããŠã§ã¢ ã©ã€ãã©ãªã䜿çšããŠãç¬èªã® AI-RAN å®è£
ãéçºã§ããããã«ãªããŸããã
NVIDIA AI ãšã³ã¿ãŒãã©ã€ãº ã¯ãå€ãã® NVIDIA ãœãããŠã§ã¢ ããŒã«ãããã䜿çšããããã®ãã©ã€ã¢ã«ãããæãããªããã«ãAI-RAN ã§ãã¹ãå¯èœãªéä¿¡äºæ¥è
åãã®æ°ãã AI ã¢ããªã±ãŒã·ã§ã³ã®äœæã«ãè²¢ç®ããŠããŸããããã«ã¯ãçæ AI åãã® NIM ãã€ã¯ããµãŒãã¹ãRAGãVLMããããã£ã¯ã¹ ãã¬ãŒãã³ã°çšã® NVIDIA IsaacãNVIDIA NeMoãRAPIDSãæšè«çšã® NVIDIA TritonãAI ãããŒã«ãŒçšãµãŒããŒã¬ã¹ API ãå«ãŸããŸãã
éä¿¡æ¥çã¯ãAI ãµãŒãã¹ ãããã€ããŒã«ãªã倧ããªãã£ã³ã¹ã®æåç·ã«ç«ã£ãŠããŸãã AI-RAN ã¯ãã¯ã€ã€ã¬ã¹ ãããã¯ãŒã¯ã®æ°ããåºç€ãšããŠã¢ã¯ã»ã©ã¬ãŒããã ã³ã³ãã¥ãŒãã£ã³ã°ã䜿çšããããšã§ãäžçäžã®éä¿¡äŒç€Ÿã«ãšã£ãŠãã®æ°ããå€é©ãä¿é²ã§ããŸãã
ãã®çºè¡šã¯ãAI-RAN ãã¯ãããžã®ç»æçãªç¬éã§ããããã®å®çŸæ§ããã£ãªã¢ã°ã¬ãŒãã®ããã©ãŒãã³ã¹ãåªãããšãã«ã®ãŒå¹çãçµæžçãªäŸ¡å€ã蚌æããŸããã NVIDIA ã®é«éåããã AI-RAN ã€ã³ãã©ã¹ãã©ã¯ãã£ã«æè³ãããè³æ¬æ¯åº 1 ãã«ã¯ã6G ã«å¯Ÿå¿ããªããã5 åã®åçãçã¿åºããŸãã
AI åçåãžã®åãçµã¿ã¯ãä»ããå§ããããŸãã
é¢é£æ
å ±
GTC ã»ãã·ã§ã³:
éä¿¡äŒç€Ÿãåœå®¶ AI ã€ã³ãã©ã¹ãã©ã¯ãã£ãšãã©ãããã©ãŒã ãã©ã®ããã«å®çŸããã
GTC ã»ãã·ã§ã³:
çŸä»£ã®éä¿¡äŒç€Ÿ Blueprint: AI ã䜿çšããŠå€é©ãšåçºæ
GTC ã»ãã·ã§ã³:
人工ç¥èœãéä¿¡ãå€é©ãã 3 ã€ã®æ¹æ³
SDK:
Aerial Omniverse ããžã¿ã« ãã€ã³
ãŠã§ãããŒ:
How Telcos Transform Customer Experiences with Conversational AI
ãŠã§ãããŒ:
å€èšèªé³å£° AI ã«ã¹ã¿ãã€ãºããããšãŒãžã§ã³ãã¢ã·ã¹ãã§éä¿¡äŒç€Ÿ ã³ã³ã¿ã¯ã ã»ã³ã¿ãŒ ãšãŒãžã§ã³ãã®åŒ·å |
https://developer.nvidia.com/blog/developing-a-172b-llm-with-strong-japanese-capabilities-using-nvidia-megatron-lm/ | Developing a 172B LLM with Strong Japanese Capabilities Using NVIDIA Megatron-LM | "Generative AI has the ability to create entirely new content that traditional machine learning (ML)(...TRUNCATED) | https://developer.nvidia.com/ja-jp/blog/developing-a-172b-llm-with-strong-japanese-capabilities-using-nvidia-megatron-lm/ | Megatron-LM ãçšããæ¥æ¬èªã«åŒ·ã 172B 倧èŠæš¡èšèªã¢ãã«ã®éçº | "Reading Time:\n2\nminutes\nçæ AI ã¯ããã®åè¶ããèœåã®ãããã§ãåŸæ¥ã®æ©(...TRUNCATED) |
https://developer.nvidia.com/blog/5x-faster-time-to-first-token-with-nvidia-tensorrt-llm-kv-cache-early-reuse/ | 5x Faster Time to First Token with NVIDIA TensorRT-LLM KV Cache Early Reuse | "In our previous\nblog post\n, we demonstrated how reusing the key-value (KV) cache by offloading it(...TRUNCATED) | https://developer.nvidia.com/ja-jp/blog/5x-faster-time-to-first-token-with-nvidia-tensorrt-llm-kv-cache-early-reuse/ | NVIDIA TensorRT-LLM ã® KV Cache Early Reuseã§ãTime to First Token ã 5 åé«éå | "Reading Time:\n2\nminutes\n以åã®\nããã°èšäº\nã§ã¯ãkey-value (KV) ãã£ãã·ã¥ã C(...TRUNCATED) |
https://developer.nvidia.com/blog/state-of-the-art-multimodal-generative-ai-model-development-with-nvidia-nemo/ | State-of-the-Art Multimodal Generative AI Model Development with NVIDIA NeMo | "Generative AI\nhas rapidly evolved from text-based models to multimodal capabilities. These models (...TRUNCATED) | https://developer.nvidia.com/ja-jp/blog/state-of-the-art-multimodal-generative-ai-model-development-with-nvidia-nemo/ | NVIDIA NeMo ã«ããæå
端ã®ãã«ãã¢ãŒãã«çæ AI ã¢ãã«éçº | "Reading Time:\n2\nminutes\nçæ AI\nã¯ãããã¹ãããŒã¹ã®ã¢ãã«ãããã«ãã¢ãŒ(...TRUNCATED) |
https://developer.nvidia.com/blog/frictionless-collaboration-and-rapid-prototyping-in-hybrid-environments-with-nvidia-ai-workbench/ | Frictionless Collaboration and Rapid Prototyping in Hybrid Environments with NVIDIA AI Workbench | "NVIDIA AI Workbench\nis a free development environment manager that streamlines data science, AI, a(...TRUNCATED) | https://developer.nvidia.com/ja-jp/blog/frictionless-collaboration-and-rapid-prototyping-in-hybrid-environments-with-nvidia-ai-workbench/ | "NVIDIA AI Workbench ã«ãããã€ããªããç°å¢ã«ãããã¹ã ãŒãºãªã³ã©ãã¬ãŒã·(...TRUNCATED) | "Reading Time:\n3\nminutes\nNVIDIA AI Workbench\nã¯ãéžæããã·ã¹ãã ã§ããŒã¿ ãµã€(...TRUNCATED) |