
NVIDIA has announced the H200, a Hopper-architecture GPU equipped with HBM3e memory, along with the HGX H200 platform.

 

Memory bandwidth is 4.8 TB/s and capacity is 141 GB: about 1.4x the bandwidth and nearly double the capacity of the H100. As a result, NVIDIA cites inference speedups of 1.9x on Llama 2 70B and 1.6x on GPT-3 175B.

 

SC23—NVIDIA today announced it has supercharged the world’s leading AI computing platform with the introduction of the NVIDIA HGX™ H200. Based on NVIDIA Hopper™ architecture, the platform features the NVIDIA H200 Tensor Core GPU with advanced memory to handle massive amounts of data for generative AI and high performance computing workloads.

The NVIDIA H200 is the first GPU to offer HBM3e — faster, larger memory to fuel the acceleration of generative AI and large language models, while advancing scientific computing for HPC workloads. With HBM3e, the NVIDIA H200 delivers 141GB of memory at 4.8 terabytes per second, nearly double the capacity and 2.4x more bandwidth compared with its predecessor, the NVIDIA A100.
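The "nearly double the capacity and 2.4x more bandwidth" claims are easy to sanity-check. A minimal sketch, assuming the commonly published spec-sheet figures for the A100 (80 GB HBM2e, ~2.0 TB/s) and H100 SXM (80 GB HBM3, ~3.35 TB/s), which are not stated in this release:

```python
# HBM spec comparison across GPU generations.
# A100/H100 figures below are assumptions from public spec sheets,
# not from this press release; H200 figures are from the release.
specs = {
    "A100": {"capacity_gb": 80,  "bandwidth_tbs": 2.0},
    "H100": {"capacity_gb": 80,  "bandwidth_tbs": 3.35},
    "H200": {"capacity_gb": 141, "bandwidth_tbs": 4.8},
}

def ratio_vs(base: str, new: str = "H200") -> tuple[float, float]:
    """Return (capacity ratio, bandwidth ratio) of `new` over `base`."""
    b, n = specs[base], specs[new]
    return (n["capacity_gb"] / b["capacity_gb"],
            n["bandwidth_tbs"] / b["bandwidth_tbs"])

cap_a, bw_a = ratio_vs("A100")
print(f"vs A100: {cap_a:.2f}x capacity, {bw_a:.1f}x bandwidth")
cap_h, bw_h = ratio_vs("H100")
print(f"vs H100: {cap_h:.2f}x capacity, {bw_h:.2f}x bandwidth")
```

Against the A100 this gives roughly 1.76x capacity ("nearly double") and exactly 2.4x bandwidth, matching the release's wording.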

H200-powered systems from the world’s leading server manufacturers and cloud service providers are expected to begin shipping in the second quarter of 2024.

“To create intelligence with generative AI and HPC applications, vast amounts of data must be efficiently processed at high speed using large, fast GPU memory,” said Ian Buck, vice president of hyperscale and HPC at NVIDIA. “With NVIDIA H200, the industry’s leading end-to-end AI supercomputing platform just got faster to solve some of the world’s most important challenges.”

Perpetual Innovation, Perpetual Performance Leaps
The NVIDIA Hopper architecture delivers an unprecedented performance leap over its predecessor and continues to raise the bar through ongoing software enhancements with H100, including the recent release of powerful open-source libraries like NVIDIA TensorRT™-LLM.

The introduction of H200 will lead to further performance leaps, including nearly doubling inference speed on Llama 2, a 70 billion-parameter LLM, compared to the H100. Additional performance leadership and improvements with H200 are expected with future software updates.

NVIDIA H200 Form Factors
NVIDIA H200 will be available in NVIDIA HGX H200 server boards with four- and eight-way configurations, which are compatible with both the hardware and software of HGX H100 systems. It is also available in the NVIDIA GH200 Grace Hopper™ Superchip with HBM3e, announced in August.

With these options, H200 can be deployed in every type of data center, including on premises, cloud, hybrid-cloud and edge. NVIDIA’s global ecosystem of partner server makers — including ASRock Rack, ASUS, Dell Technologies, Eviden, GIGABYTE, Hewlett Packard Enterprise, Ingrasys, Lenovo, QCT, Supermicro, Wistron and Wiwynn — can update their existing systems with an H200.

Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure will be among the first cloud service providers to deploy H200-based instances starting next year, in addition to CoreWeave, Lambda and Vultr.

Powered by NVIDIA NVLink™ and NVSwitch™ high-speed interconnects, HGX H200 provides the highest performance on various application workloads, including LLM training and inference for the largest models beyond 175 billion parameters.

An eight-way HGX H200 provides over 32 petaflops of FP8 deep learning compute and 1.1TB of aggregate high-bandwidth memory for the highest performance in generative AI and HPC applications.
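The 1.1 TB aggregate figure follows directly from the per-GPU capacity. A quick check of the arithmetic:

```python
# Aggregate HBM3e on an eight-way HGX H200 board: 8 GPUs x 141 GB each.
GPUS_PER_BOARD = 8
HBM3E_PER_GPU_GB = 141

aggregate_gb = GPUS_PER_BOARD * HBM3E_PER_GPU_GB
print(f"{aggregate_gb} GB = {aggregate_gb / 1024:.1f} TB")  # 1128 GB = 1.1 TB
```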

When paired with NVIDIA Grace™ CPUs with an ultra-fast NVLink-C2C interconnect, the H200 creates the GH200 Grace Hopper Superchip with HBM3e — an integrated module designed to serve giant-scale HPC and AI applications.

Accelerate AI With NVIDIA Full-Stack Software
NVIDIA’s accelerated computing platform is supported by powerful software tools that enable developers and enterprises to build and accelerate production-ready applications from AI to HPC. This includes the NVIDIA AI Enterprise suite of software for workloads such as speech, recommender systems and hyperscale inference.

Availability
The NVIDIA H200 will be available from global system manufacturers and cloud service providers starting in the second quarter of 2024.

Watch Buck’s SC23 special address on Nov. 13 at 6 a.m. PT to learn more about the NVIDIA H200 Tensor Core GPU.

 


 

https://nvidianews.nvidia.com/news/nvidia-supercharges-hopper-the-worlds-leading-ai-computing-platform



