
NVIDIA has announced the H200, a Hopper-architecture GPU with HBM3e memory, along with the HGX H200 platform.

 

Memory bandwidth is 4.8TB/s and capacity is 141GB, about 1.4x the bandwidth and nearly 2x the capacity of the H100. As a result, performance improves by roughly 1.9x on Llama 2 70B and 1.6x on GPT-3 175B.
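As a rough sanity check on those ratios, the sketch below compares the published H200 figures against the H100 SXM's 80GB / 3.35TB/s spec. The H100 numbers are an assumption taken from NVIDIA's public datasheet, not from this announcement.

```python
# Back-of-envelope comparison of H200 vs. H100 SXM memory specs.
# H100 figures (80 GB, 3.35 TB/s) are assumed from NVIDIA's public datasheet.
h200_capacity_gb, h200_bw_tbs = 141, 4.8
h100_capacity_gb, h100_bw_tbs = 80, 3.35

print(f"bandwidth ratio: {h200_bw_tbs / h100_bw_tbs:.2f}x")            # ~1.43x
print(f"capacity ratio:  {h200_capacity_gb / h100_capacity_gb:.2f}x")  # ~1.76x (nearly double)
```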

 

SC23—NVIDIA today announced it has supercharged the world’s leading AI computing platform with the introduction of the NVIDIA HGX™ H200. Based on NVIDIA Hopper™ architecture, the platform features the NVIDIA H200 Tensor Core GPU with advanced memory to handle massive amounts of data for generative AI and high performance computing workloads.

The NVIDIA H200 is the first GPU to offer HBM3e — faster, larger memory to fuel the acceleration of generative AI and large language models, while advancing scientific computing for HPC workloads. With HBM3e, the NVIDIA H200 delivers 141GB of memory at 4.8 terabytes per second, nearly double the capacity and 2.4x more bandwidth compared with its predecessor, the NVIDIA A100.

H200-powered systems from the world’s leading server manufacturers and cloud service providers are expected to begin shipping in the second quarter of 2024.

“To create intelligence with generative AI and HPC applications, vast amounts of data must be efficiently processed at high speed using large, fast GPU memory,” said Ian Buck, vice president of hyperscale and HPC at NVIDIA. “With NVIDIA H200, the industry’s leading end-to-end AI supercomputing platform just got faster to solve some of the world’s most important challenges.”

Perpetual Innovation, Perpetual Performance Leaps
The NVIDIA Hopper architecture delivers an unprecedented performance leap over its predecessor and continues to raise the bar through ongoing software enhancements with H100, including the recent release of powerful open-source libraries like NVIDIA TensorRT™-LLM.

The introduction of H200 will lead to further performance leaps, including nearly doubling inference speed on Llama 2, a 70 billion-parameter LLM, compared to the H100. Additional performance leadership and improvements with H200 are expected with future software updates.
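One way to see why the extra memory bandwidth translates almost directly into faster LLM inference is a simple roofline-style estimate: in single-batch decoding, each generated token requires streaming roughly the full set of model weights from HBM, so decode throughput is bounded by bandwidth divided by model size. The sketch below assumes Llama 2 70B weights in FP16 (~140 GB) and the H100 SXM's 3.35 TB/s bandwidth; these are illustrative assumptions that ignore KV-cache traffic, batching, and multi-GPU sharding.

```python
# Roofline-style upper bound for memory-bandwidth-bound decoding:
# each generated token streams (approximately) all model weights from HBM once.
# Assumptions: Llama 2 70B in FP16 (~140 GB of weights), single batch, no KV-cache traffic.
WEIGHT_BYTES = 70e9 * 2  # 70B parameters * 2 bytes (FP16) ≈ 140 GB

def max_tokens_per_second(hbm_bandwidth_bytes_per_s: float) -> float:
    """Bandwidth-bound ceiling on decode throughput for a single GPU."""
    return hbm_bandwidth_bytes_per_s / WEIGHT_BYTES

h200 = max_tokens_per_second(4.8e12)   # ~34 tokens/s ceiling
h100 = max_tokens_per_second(3.35e12)  # ~24 tokens/s ceiling
print(f"H200: {h200:.1f} tok/s, H100: {h100:.1f} tok/s, ratio: {h200 / h100:.2f}x")
```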

NVIDIA H200 Form Factors
NVIDIA H200 will be available in NVIDIA HGX H200 server boards with four- and eight-way configurations, which are compatible with both the hardware and software of HGX H100 systems. It is also available in the NVIDIA GH200 Grace Hopper™ Superchip with HBM3e, announced in August.

With these options, H200 can be deployed in every type of data center, including on premises, cloud, hybrid-cloud and edge. NVIDIA’s global ecosystem of partner server makers — including ASRock Rack, ASUS, Dell Technologies, Eviden, GIGABYTE, Hewlett Packard Enterprise, Ingrasys, Lenovo, QCT, Supermicro, Wistron and Wiwynn — can update their existing systems with an H200.

Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure will be among the first cloud service providers to deploy H200-based instances starting next year, in addition to CoreWeave, Lambda and Vultr.

Powered by NVIDIA NVLink™ and NVSwitch™ high-speed interconnects, HGX H200 provides the highest performance on various application workloads, including LLM training and inference for the largest models beyond 175 billion parameters.

An eight-way HGX H200 provides over 32 petaflops of FP8 deep learning compute and 1.1TB of aggregate high-bandwidth memory for the highest performance in generative AI and HPC applications.
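The aggregate figures follow from simple per-GPU multiplication; the quick check below also divides the stated 8-way FP8 total by eight to get roughly 4 petaflops per GPU, which is an inference from the system number rather than a spec quoted in the announcement.

```python
# Quick check on the 8-way HGX H200 aggregate figures quoted above.
gpus = 8
hbm_per_gpu_gb = 141
fp8_total_pflops = 32  # stated 8-way FP8 figure

print(f"aggregate HBM: {gpus * hbm_per_gpu_gb} GB")        # 1128 GB ≈ 1.1 TB
print(f"FP8 per GPU:   {fp8_total_pflops / gpus} PFLOPS")  # ~4 PFLOPS (inferred, not a quoted spec)
```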

When paired with NVIDIA Grace™ CPUs with an ultra-fast NVLink-C2C interconnect, the H200 creates the GH200 Grace Hopper Superchip with HBM3e — an integrated module designed to serve giant-scale HPC and AI applications.

Accelerate AI With NVIDIA Full-Stack Software
NVIDIA’s accelerated computing platform is supported by powerful software tools that enable developers and enterprises to build and accelerate production-ready applications from AI to HPC. This includes the NVIDIA AI Enterprise suite of software for workloads such as speech, recommender systems and hyperscale inference.

Availability
The NVIDIA H200 will be available from global system manufacturers and cloud service providers starting in the second quarter of 2024.

Watch Buck’s SC23 special address on Nov. 13 at 6 a.m. PT to learn more about the NVIDIA H200 Tensor Core GPU.

 


 

https://nvidianews.nvidia.com/news/nvidia-supercharges-hopper-the-worlds-leading-ai-computing-platform



