NVIDIA has announced the H200, a Hopper-architecture GPU with HBM3e memory, along with the HGX H200 platform.

Memory bandwidth is 4.8TB/s and capacity is 141GB, roughly 1.4x the bandwidth and nearly double the capacity (1.76x) of the H100's 80GB. NVIDIA cites resulting inference gains of 1.9x on Llama 2 70B and 1.6x on GPT-3 175B.

SC23—NVIDIA today announced it has supercharged the world’s leading AI computing platform with the introduction of the NVIDIA HGX™ H200. Based on NVIDIA Hopper™ architecture, the platform features the NVIDIA H200 Tensor Core GPU with advanced memory to handle massive amounts of data for generative AI and high performance computing workloads.

The NVIDIA H200 is the first GPU to offer HBM3e — faster, larger memory to fuel the acceleration of generative AI and large language models, while advancing scientific computing for HPC workloads. With HBM3e, the NVIDIA H200 delivers 141GB of memory at 4.8 terabytes per second, nearly double the capacity and 2.4x more bandwidth compared with its predecessor, the NVIDIA A100.
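
For concreteness, the memory claims work out as follows. This is a minimal sketch; the A100 and H100 reference figures are the 80GB SXM spec-sheet values, which are assumptions not stated in the release:

```python
# Spec-sheet figures (80GB SXM variants assumed for A100/H100).
gpus = {
    "A100": {"hbm_gb": 80,  "bw_tbps": 2.0},   # HBM2e
    "H100": {"hbm_gb": 80,  "bw_tbps": 3.35},  # HBM3
    "H200": {"hbm_gb": 141, "bw_tbps": 4.8},   # HBM3e
}

h200 = gpus["H200"]
for name in ("A100", "H100"):
    ref = gpus[name]
    print(f"H200 vs {name}: "
          f"{h200['hbm_gb'] / ref['hbm_gb']:.2f}x capacity, "
          f"{h200['bw_tbps'] / ref['bw_tbps']:.2f}x bandwidth")

# H200 vs A100: 1.76x capacity, 2.40x bandwidth  (the "nearly double / 2.4x" claim)
# H200 vs H100: 1.76x capacity, 1.43x bandwidth  (the ~1.4x figure cited above)
```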

H200-powered systems from the world’s leading server manufacturers and cloud service providers are expected to begin shipping in the second quarter of 2024.

“To create intelligence with generative AI and HPC applications, vast amounts of data must be efficiently processed at high speed using large, fast GPU memory,” said Ian Buck, vice president of hyperscale and HPC at NVIDIA. “With NVIDIA H200, the industry’s leading end-to-end AI supercomputing platform just got faster to solve some of the world’s most important challenges.”

Perpetual Innovation, Perpetual Performance Leaps
The NVIDIA Hopper architecture delivers an unprecedented performance leap over its predecessor and continues to raise the bar through ongoing software enhancements with H100, including the recent release of powerful open-source libraries like NVIDIA TensorRT™-LLM.

The introduction of H200 will lead to further performance leaps, including nearly doubling inference speed on Llama 2, a 70 billion-parameter LLM, compared to the H100. Additional performance leadership and improvements with H200 are expected with future software updates.
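
The Llama 2 gains above come from running inference through TensorRT-LLM on the larger, faster HBM3e. As a rough illustration only, here is a minimal serving sketch using the high-level Python `LLM` API that recent TensorRT-LLM releases expose; the model name and parallelism setting are illustrative assumptions, not taken from the announcement:

```python
# Rough TensorRT-LLM serving sketch (high-level LLM API assumed available
# in recent releases; model name and settings are illustrative).
from tensorrt_llm import LLM, SamplingParams

# Shard the 70B model across GPUs; tensor_parallel_size=8 assumes an
# eight-way HGX H100/H200 system.
llm = LLM(model="meta-llama/Llama-2-70b-hf", tensor_parallel_size=8)

params = SamplingParams(max_tokens=128, temperature=0.8)
outputs = llm.generate(["What is high-bandwidth memory?"], params)
print(outputs[0].outputs[0].text)
```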

NVIDIA H200 Form Factors
NVIDIA H200 will be available in NVIDIA HGX H200 server boards with four- and eight-way configurations, which are compatible with both the hardware and software of HGX H100 systems. It is also available in the NVIDIA GH200 Grace Hopper™ Superchip with HBM3e, announced in August.

With these options, H200 can be deployed in every type of data center, including on premises, cloud, hybrid-cloud and edge. NVIDIA’s global ecosystem of partner server makers — including ASRock Rack, ASUS, Dell Technologies, Eviden, GIGABYTE, Hewlett Packard Enterprise, Ingrasys, Lenovo, QCT, Supermicro, Wistron and Wiwynn — can update their existing systems with an H200.

Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure will be among the first cloud service providers to deploy H200-based instances starting next year, in addition to CoreWeave, Lambda and Vultr.

Powered by NVIDIA NVLink™ and NVSwitch™ high-speed interconnects, HGX H200 provides the highest performance on various application workloads, including LLM training and inference for the largest models beyond 175 billion parameters.

An eight-way HGX H200 provides over 32 petaflops of FP8 deep learning compute and 1.1TB of aggregate high-bandwidth memory for the highest performance in generative AI and HPC applications.
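
Those aggregate figures follow directly from the per-GPU specs. A quick check; the ~4 PFLOPS per-GPU FP8 number is the with-sparsity Hopper spec-sheet figure, an assumption not spelled out in the release:

```python
# Aggregate figures for an eight-way HGX H200 baseboard.
num_gpus = 8
fp8_pflops_per_gpu = 4.0    # ~4 PFLOPS FP8 per GPU with sparsity (assumed, rounded)
hbm_gb_per_gpu = 141
bw_tbps_per_gpu = 4.8

print(f"FP8 compute        : {num_gpus * fp8_pflops_per_gpu:.0f} PFLOPS")  # 32 PFLOPS
print(f"Aggregate HBM3e    : {num_gpus * hbm_gb_per_gpu / 1000:.2f} TB")   # 1.13 TB, i.e. ~1.1TB
print(f"Aggregate bandwidth: {num_gpus * bw_tbps_per_gpu:.1f} TB/s")       # 38.4 TB/s
```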

When paired with NVIDIA Grace™ CPUs with an ultra-fast NVLink-C2C interconnect, the H200 creates the GH200 Grace Hopper Superchip with HBM3e — an integrated module designed to serve giant-scale HPC and AI applications.

Accelerate AI With NVIDIA Full-Stack Software
NVIDIA’s accelerated computing platform is supported by powerful software tools that enable developers and enterprises to build and accelerate production-ready applications from AI to HPC. This includes the NVIDIA AI Enterprise suite of software for workloads such as speech, recommender systems and hyperscale inference.

Availability
The NVIDIA H200 will be available from global system manufacturers and cloud service providers starting in the second quarter of 2024.

Watch Buck’s SC23 special address on Nov. 13 at 6 a.m. PT to learn more about the NVIDIA H200 Tensor Core GPU.

 


https://nvidianews.nvidia.com/news/nvidia-supercharges-hopper-the-worlds-leading-ai-computing-platform



