NVIDIA has announced the H200, a Hopper-architecture GPU with HBM3e memory, along with the HGX H200 platform.

 

Memory bandwidth is 4.8TB/s and capacity is 141GB — about 1.4x the bandwidth and nearly double the capacity of the H100. As a result, NVIDIA cites a 1.9x performance gain on Llama 2 70B and 1.6x on GPT-3 175B.
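As a quick sanity check on those headline ratios, the following sketch divides the H200 figures quoted above by H100 SXM figures. Note the H100 baseline numbers (80 GB, 3.35 TB/s) are an assumption taken from NVIDIA's public H100 specifications, not from this post.

```python
# H200 figures from the announcement above.
h200_bw_tbs, h200_mem_gb = 4.8, 141

# Assumed H100 SXM baseline (public datasheet values, not from this post).
h100_bw_tbs, h100_mem_gb = 3.35, 80

bw_ratio = h200_bw_tbs / h100_bw_tbs    # bandwidth improvement
mem_ratio = h200_mem_gb / h100_mem_gb   # capacity improvement

print(f"bandwidth: {bw_ratio:.2f}x, capacity: {mem_ratio:.2f}x")
```

Under those assumptions the ratios come out to roughly 1.43x bandwidth and 1.76x capacity, consistent with the "1.4x / nearly double" wording.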

 

SC23—NVIDIA today announced it has supercharged the world’s leading AI computing platform with the introduction of the NVIDIA HGX™ H200. Based on NVIDIA Hopper™ architecture, the platform features the NVIDIA H200 Tensor Core GPU with advanced memory to handle massive amounts of data for generative AI and high performance computing workloads.

The NVIDIA H200 is the first GPU to offer HBM3e — faster, larger memory to fuel the acceleration of generative AI and large language models, while advancing scientific computing for HPC workloads. With HBM3e, the NVIDIA H200 delivers 141GB of memory at 4.8 terabytes per second, nearly double the capacity and 2.4x more bandwidth compared with its predecessor, the NVIDIA A100.

H200-powered systems from the world’s leading server manufacturers and cloud service providers are expected to begin shipping in the second quarter of 2024.

“To create intelligence with generative AI and HPC applications, vast amounts of data must be efficiently processed at high speed using large, fast GPU memory,” said Ian Buck, vice president of hyperscale and HPC at NVIDIA. “With NVIDIA H200, the industry’s leading end-to-end AI supercomputing platform just got faster to solve some of the world’s most important challenges.”

Perpetual Innovation, Perpetual Performance Leaps
The NVIDIA Hopper architecture delivers an unprecedented performance leap over its predecessor and continues to raise the bar through ongoing software enhancements with H100, including the recent release of powerful open-source libraries like NVIDIA TensorRT™-LLM.

The introduction of H200 will lead to further performance leaps, including nearly doubling inference speed on Llama 2, a 70 billion-parameter LLM, compared to the H100. Additional performance leadership and improvements with H200 are expected with future software updates.

NVIDIA H200 Form Factors
NVIDIA H200 will be available in NVIDIA HGX H200 server boards with four- and eight-way configurations, which are compatible with both the hardware and software of HGX H100 systems. It is also available in the NVIDIA GH200 Grace Hopper™ Superchip with HBM3e, announced in August.

With these options, H200 can be deployed in every type of data center, including on premises, cloud, hybrid-cloud and edge. NVIDIA’s global ecosystem of partner server makers — including ASRock Rack, ASUS, Dell Technologies, Eviden, GIGABYTE, Hewlett Packard Enterprise, Ingrasys, Lenovo, QCT, Supermicro, Wistron and Wiwynn — can update their existing systems with an H200.

Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure will be among the first cloud service providers to deploy H200-based instances starting next year, in addition to CoreWeave, Lambda and Vultr.

Powered by NVIDIA NVLink™ and NVSwitch™ high-speed interconnects, HGX H200 provides the highest performance on various application workloads, including LLM training and inference for the largest models beyond 175 billion parameters.

An eight-way HGX H200 provides over 32 petaflops of FP8 deep learning compute and 1.1TB of aggregate high-bandwidth memory for the highest performance in generative AI and HPC applications.
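The eight-way aggregate figures above can be cross-checked with simple arithmetic; the total FP8 number is taken from the press release itself, and the per-GPU split is just a derived illustration, not an official per-GPU rating.

```python
# Eight-way HGX H200 figures quoted in the press release.
gpus = 8
mem_per_gpu_gb = 141        # HBM3e per H200 GPU
total_fp8_pflops = 32       # platform-level FP8 figure from the release

# 8 x 141 GB = 1128 GB, matching the quoted "1.1TB of aggregate" memory.
aggregate_mem_tb = gpus * mem_per_gpu_gb / 1000

# Derived per-GPU share of the platform FP8 figure (illustrative only).
fp8_per_gpu = total_fp8_pflops / gpus

print(f"{aggregate_mem_tb:.3f} TB aggregate, {fp8_per_gpu:.0f} PFLOPS FP8 per GPU")
```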

When paired with NVIDIA Grace™ CPUs with an ultra-fast NVLink-C2C interconnect, the H200 creates the GH200 Grace Hopper Superchip with HBM3e — an integrated module designed to serve giant-scale HPC and AI applications.

Accelerate AI With NVIDIA Full-Stack Software
NVIDIA’s accelerated computing platform is supported by powerful software tools that enable developers and enterprises to build and accelerate production-ready applications from AI to HPC. This includes the NVIDIA AI Enterprise suite of software for workloads such as speech, recommender systems and hyperscale inference.

Availability
The NVIDIA H200 will be available from global system manufacturers and cloud service providers starting in the second quarter of 2024.

Watch Buck’s SC23 special address on Nov. 13 at 6 a.m. PT to learn more about the NVIDIA H200 Tensor Core GPU.

 


 

https://nvidianews.nvidia.com/news/nvidia-supercharges-hopper-the-worlds-leading-ai-computing-platform

