
NVIDIA has announced the H200, a Hopper-architecture GPU equipped with HBM3e memory, along with the HGX H200 platform.

The H200 offers 4.8TB/s of memory bandwidth and 141GB of capacity, 1.4x the bandwidth and roughly double the capacity of the H100. As a result, it delivers a 1.9x performance uplift on Llama 2 70B and 1.6x on GPT-3 175B.

SC23—NVIDIA today announced it has supercharged the world’s leading AI computing platform with the introduction of the NVIDIA HGX™ H200. Based on NVIDIA Hopper™ architecture, the platform features the NVIDIA H200 Tensor Core GPU with advanced memory to handle massive amounts of data for generative AI and high performance computing workloads.

The NVIDIA H200 is the first GPU to offer HBM3e — faster, larger memory to fuel the acceleration of generative AI and large language models, while advancing scientific computing for HPC workloads. With HBM3e, the NVIDIA H200 delivers 141GB of memory at 4.8 terabytes per second, nearly double the capacity and 2.4x more bandwidth compared with its predecessor, the NVIDIA A100.
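As a quick sanity check of the multipliers quoted above and in the summary, here is a plain Python sketch. The H100 SXM (80GB HBM3, 3.35TB/s) and A100 80GB (2.0TB/s) baseline figures are assumptions taken from NVIDIA's public spec sheets, not from this release.

# Spec-ratio sanity check. Baselines (H100 SXM, A100 80GB) are assumed
# from NVIDIA's public spec sheets; H200 figures are from this release.
H200_GB, H200_TBPS = 141, 4.8
H100_GB, H100_TBPS = 80, 3.35   # assumed H100 SXM baseline
A100_TBPS = 2.0                 # assumed A100 80GB baseline

print(f"Capacity vs 80GB baseline: {H200_GB / H100_GB:.2f}x")  # ~1.76x, "nearly double"
print(f"Bandwidth vs H100:         {H200_TBPS / H100_TBPS:.2f}x")  # ~1.43x, the "1.4x" figure
print(f"Bandwidth vs A100:         {H200_TBPS / A100_TBPS:.1f}x")  # 2.4x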

H200-powered systems from the world’s leading server manufacturers and cloud service providers are expected to begin shipping in the second quarter of 2024.

“To create intelligence with generative AI and HPC applications, vast amounts of data must be efficiently processed at high speed using large, fast GPU memory,” said Ian Buck, vice president of hyperscale and HPC at NVIDIA. “With NVIDIA H200, the industry’s leading end-to-end AI supercomputing platform just got faster to solve some of the world’s most important challenges.”

Perpetual Innovation, Perpetual Performance Leaps
The NVIDIA Hopper architecture delivers an unprecedented performance leap over its predecessor and continues to raise the bar through ongoing software enhancements with H100, including the recent release of powerful open-source libraries like NVIDIA TensorRT™-LLM.

The introduction of H200 will lead to further performance leaps, including nearly doubling inference speed on Llama 2, a 70 billion-parameter LLM, compared to the H100. Additional performance leadership and improvements with H200 are expected with future software updates.
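For context, a minimal sketch of what Llama 2 70B inference looks like through TensorRT-LLM's high-level Python API. The import path, class names, and Hugging Face model id are assumptions based on the project's recent LLM API documentation, not something specified in this release.

# Minimal TensorRT-LLM inference sketch (names assumed from the project's
# recent LLM API docs; treat this as illustrative, not authoritative).
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-2-70b-hf")  # assumed HF model id; builds or loads a TensorRT engine
params = SamplingParams(max_tokens=64)

for output in llm.generate(["What does HBM3e change for LLM inference?"], params):
    print(output.outputs[0].text)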

NVIDIA H200 Form Factors
NVIDIA H200 will be available in NVIDIA HGX H200 server boards with four- and eight-way configurations, which are compatible with both the hardware and software of HGX H100 systems. It is also available in the NVIDIA GH200 Grace Hopper™ Superchip with HBM3e, announced in August.

With these options, H200 can be deployed in every type of data center, including on premises, cloud, hybrid-cloud and edge. NVIDIA’s global ecosystem of partner server makers — including ASRock Rack, ASUS, Dell Technologies, Eviden, GIGABYTE, Hewlett Packard Enterprise, Ingrasys, Lenovo, QCT, Supermicro, Wistron and Wiwynn — can update their existing systems with an H200.

Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure will be among the first cloud service providers to deploy H200-based instances starting next year, in addition to CoreWeave, Lambda and Vultr.

Powered by NVIDIA NVLink™ and NVSwitch™ high-speed interconnects, HGX H200 provides the highest performance on various application workloads, including LLM training and inference for the largest models beyond 175 billion parameters.

An eight-way HGX H200 provides over 32 petaflops of FP8 deep learning compute and 1.1TB of aggregate high-bandwidth memory for the highest performance in generative AI and HPC applications.
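As a back-of-the-envelope check on those two figures: 8 x 141GB comes to about 1.1TB, and the 32-petaflop number is consistent with H100-class FP8 throughput per GPU. The ~4 PFLOPS-per-GPU (sparsity-enabled) baseline is an assumption carried over from NVIDIA's H100 SXM spec sheet, not a figure given in this release.

# Back-of-the-envelope check of the eight-way HGX H200 figures quoted above.
GPUS = 8
HBM_PER_GPU_GB = 141          # from this release
FP8_PER_GPU_PFLOPS = 4.0      # assumed: H100 SXM-class FP8, sparsity enabled

print(f"Aggregate HBM3e: {GPUS * HBM_PER_GPU_GB / 1000:.2f} TB")   # ~1.13 TB, quoted as 1.1TB
print(f"Aggregate FP8:   {GPUS * FP8_PER_GPU_PFLOPS:.0f} PFLOPS")  # 32 PFLOPS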

When paired with NVIDIA Grace™ CPUs with an ultra-fast NVLink-C2C interconnect, the H200 creates the GH200 Grace Hopper Superchip with HBM3e — an integrated module designed to serve giant-scale HPC and AI applications.

Accelerate AI With NVIDIA Full-Stack Software
NVIDIA’s accelerated computing platform is supported by powerful software tools that enable developers and enterprises to build and accelerate production-ready applications from AI to HPC. This includes the NVIDIA AI Enterprise suite of software for workloads such as speech, recommender systems and hyperscale inference.

Availability
The NVIDIA H200 will be available from global system manufacturers and cloud service providers starting in the second quarter of 2024.

Watch Buck’s SC23 special address on Nov. 13 at 6 a.m. PT to learn more about the NVIDIA H200 Tensor Core GPU.

https://nvidianews.nvidia.com/news/nvidia-supercharges-hopper-the-worlds-leading-ai-computing-platform



