NVIDIA has announced the H200, a GPU based on the Hopper architecture with HBM3e memory, along with the HGX H200 platform.

The H200 offers 4.8 TB/s of memory bandwidth and 141 GB of capacity, 1.4x the bandwidth and twice the capacity of the H100. This translates into roughly 1.9x higher inference performance on Llama 2 70B and 1.6x on GPT-3 175B.

 

SC23—NVIDIA today announced it has supercharged the world’s leading AI computing platform with the introduction of the NVIDIA HGX™ H200. Based on NVIDIA Hopper™ architecture, the platform features the NVIDIA H200 Tensor Core GPU with advanced memory to handle massive amounts of data for generative AI and high performance computing workloads.

The NVIDIA H200 is the first GPU to offer HBM3e — faster, larger memory to fuel the acceleration of generative AI and large language models, while advancing scientific computing for HPC workloads. With HBM3e, the NVIDIA H200 delivers 141GB of memory at 4.8 terabytes per second, nearly double the capacity and 2.4x more bandwidth compared with its predecessor, the NVIDIA A100.
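As a back-of-the-envelope check on those claims, the ratios can be recomputed from the published memory specs. The A100 baseline below (the 80 GB SXM variant at roughly 2.0 TB/s) is an assumed reference point, not a figure quoted in the release:

```python
# Sanity check of the H200 vs. A100 memory claims in the release.
# H200 figures come from the press release; the A100 80GB baseline
# (80 GB, ~2.0 TB/s) is an assumption for illustration.
h200_mem_gb, h200_bw_tbs = 141, 4.8
a100_mem_gb, a100_bw_tbs = 80, 2.0

capacity_ratio = h200_mem_gb / a100_mem_gb    # ~1.76x -> "nearly double"
bandwidth_ratio = h200_bw_tbs / a100_bw_tbs   # 2.4x, matching the release

print(f"capacity: {capacity_ratio:.2f}x, bandwidth: {bandwidth_ratio:.1f}x")
```

Run as-is, this reproduces the "nearly double the capacity and 2.4x more bandwidth" wording of the announcement.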

H200-powered systems from the world’s leading server manufacturers and cloud service providers are expected to begin shipping in the second quarter of 2024.

“To create intelligence with generative AI and HPC applications, vast amounts of data must be efficiently processed at high speed using large, fast GPU memory,” said Ian Buck, vice president of hyperscale and HPC at NVIDIA. “With NVIDIA H200, the industry’s leading end-to-end AI supercomputing platform just got faster to solve some of the world’s most important challenges.”

Perpetual Innovation, Perpetual Performance Leaps
The NVIDIA Hopper architecture delivers an unprecedented performance leap over its predecessor and continues to raise the bar through ongoing software enhancements with H100, including the recent release of powerful open-source libraries like NVIDIA TensorRT™-LLM.

The introduction of H200 will lead to further performance leaps, including nearly doubling inference speed on Llama 2, a 70 billion-parameter LLM, compared to the H100. Additional performance leadership and improvements with H200 are expected with future software updates.

NVIDIA H200 Form Factors
NVIDIA H200 will be available in NVIDIA HGX H200 server boards with four- and eight-way configurations, which are compatible with both the hardware and software of HGX H100 systems. It is also available in the NVIDIA GH200 Grace Hopper™ Superchip with HBM3e, announced in August.

With these options, H200 can be deployed in every type of data center, including on premises, cloud, hybrid-cloud and edge. NVIDIA’s global ecosystem of partner server makers — including ASRock Rack, ASUS, Dell Technologies, Eviden, GIGABYTE, Hewlett Packard Enterprise, Ingrasys, Lenovo, QCT, Supermicro, Wistron and Wiwynn — can update their existing systems with an H200.

Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure will be among the first cloud service providers to deploy H200-based instances starting next year, in addition to CoreWeave, Lambda and Vultr.

Powered by NVIDIA NVLink™ and NVSwitch™ high-speed interconnects, HGX H200 provides the highest performance on various application workloads, including LLM training and inference for the largest models beyond 175 billion parameters.

An eight-way HGX H200 provides over 32 petaflops of FP8 deep learning compute and 1.1TB of aggregate high-bandwidth memory for the highest performance in generative AI and HPC applications.
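The aggregate figures follow directly from the per-GPU specs; a minimal sketch below derives them. The per-GPU FP8 throughput (4 petaflops) is inferred from the stated 32-petaflop total divided by eight GPUs, not quoted directly in the release:

```python
# Aggregate specs of an eight-way HGX H200 board, derived from
# per-GPU figures. The per-GPU FP8 number is an inference from the
# stated 32-petaflop total (32 / 8 = 4 PFLOPS each).
gpus = 8
mem_per_gpu_gb = 141            # HBM3e per GPU, from the press release
fp8_per_gpu_pflops = 32 / gpus  # inferred: ~4 PFLOPS per GPU

total_mem_tb = gpus * mem_per_gpu_gb / 1000   # ~1.1 TB aggregate HBM
total_fp8_pflops = gpus * fp8_per_gpu_pflops  # 32 PFLOPS FP8

print(f"{total_mem_tb:.2f} TB HBM, {total_fp8_pflops:.0f} PFLOPS FP8")
```

The 8 × 141 GB = 1,128 GB total is what the release rounds to "1.1TB of aggregate high-bandwidth memory."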

When paired with NVIDIA Grace™ CPUs with an ultra-fast NVLink-C2C interconnect, the H200 creates the GH200 Grace Hopper Superchip with HBM3e — an integrated module designed to serve giant-scale HPC and AI applications.

Accelerate AI With NVIDIA Full-Stack Software
NVIDIA’s accelerated computing platform is supported by powerful software tools that enable developers and enterprises to build and accelerate production-ready applications from AI to HPC. This includes the NVIDIA AI Enterprise suite of software for workloads such as speech, recommender systems and hyperscale inference.

Availability
The NVIDIA H200 will be available from global system manufacturers and cloud service providers starting in the second quarter of 2024.

Watch Buck’s SC23 special address on Nov. 13 at 6 a.m. PT to learn more about the NVIDIA H200 Tensor Core GPU.

 


 

https://nvidianews.nvidia.com/news/nvidia-supercharges-hopper-the-worlds-leading-ai-computing-platform



