
NVIDIA has announced the H200, a Hopper-architecture GPU with HBM3e memory, along with the HGX H200 platform.

 

Memory bandwidth is 4.8TB/s and capacity is 141GB — roughly 1.4x the bandwidth and nearly double the capacity of the H100. As a result, NVIDIA quotes inference speedups of about 1.9x on Llama 2 70B and 1.6x on GPT-3 175B.

 

SC23—NVIDIA today announced it has supercharged the world’s leading AI computing platform with the introduction of the NVIDIA HGX™ H200. Based on NVIDIA Hopper™ architecture, the platform features the NVIDIA H200 Tensor Core GPU with advanced memory to handle massive amounts of data for generative AI and high performance computing workloads.

The NVIDIA H200 is the first GPU to offer HBM3e — faster, larger memory to fuel the acceleration of generative AI and large language models, while advancing scientific computing for HPC workloads. With HBM3e, the NVIDIA H200 delivers 141GB of memory at 4.8 terabytes per second, nearly double the capacity and 2.4x more bandwidth compared with its predecessor, the NVIDIA A100.
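As a rough illustration of why the 141GB figure matters for large language models, the weight footprint of a model can be compared against a single H200's memory. This is my own back-of-the-envelope sketch, not a calculation from the release, and it ignores KV cache and activation memory:

```python
# Back-of-the-envelope: memory needed just to hold model weights,
# compared against one H200's 141 GB of HBM3e.
# Illustrative only; real serving also needs KV cache, activations, etc.

H200_MEMORY_GB = 141

def weights_gb(n_params: float, bytes_per_param: int) -> float:
    """Weight storage in GB (1 GB = 1e9 bytes, matching GPU marketing units)."""
    return n_params * bytes_per_param / 1e9

llama2_70b_fp16 = weights_gb(70e9, 2)  # 140.0 GB -> weights just fit on one H200
llama2_70b_int8 = weights_gb(70e9, 1)  # 70.0 GB -> comfortable headroom
print(llama2_70b_fp16 <= H200_MEMORY_GB, llama2_70b_int8 <= H200_MEMORY_GB)
```

In FP16, a 70B-parameter model's weights alone are 140GB — which is why the jump from the H100's 80GB to 141GB changes what can be served on a single GPU.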

H200-powered systems from the world’s leading server manufacturers and cloud service providers are expected to begin shipping in the second quarter of 2024.

“To create intelligence with generative AI and HPC applications, vast amounts of data must be efficiently processed at high speed using large, fast GPU memory,” said Ian Buck, vice president of hyperscale and HPC at NVIDIA. “With NVIDIA H200, the industry’s leading end-to-end AI supercomputing platform just got faster to solve some of the world’s most important challenges.”

Perpetual Innovation, Perpetual Performance Leaps
The NVIDIA Hopper architecture delivers an unprecedented performance leap over its predecessor and continues to raise the bar through ongoing software enhancements with H100, including the recent release of powerful open-source libraries like NVIDIA TensorRT™-LLM.

The introduction of H200 will lead to further performance leaps, including nearly doubling inference speed on Llama 2, a 70 billion-parameter LLM, compared to the H100. Additional performance leadership and improvements with H200 are expected with future software updates.

NVIDIA H200 Form Factors
NVIDIA H200 will be available in NVIDIA HGX H200 server boards with four- and eight-way configurations, which are compatible with both the hardware and software of HGX H100 systems. It is also available in the NVIDIA GH200 Grace Hopper™ Superchip with HBM3e, announced in August.

With these options, H200 can be deployed in every type of data center, including on premises, cloud, hybrid-cloud and edge. NVIDIA’s global ecosystem of partner server makers — including ASRock Rack, ASUS, Dell Technologies, Eviden, GIGABYTE, Hewlett Packard Enterprise, Ingrasys, Lenovo, QCT, Supermicro, Wistron and Wiwynn — can update their existing systems with an H200.

Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure will be among the first cloud service providers to deploy H200-based instances starting next year, in addition to CoreWeave, Lambda and Vultr.

Powered by NVIDIA NVLink™ and NVSwitch™ high-speed interconnects, HGX H200 provides the highest performance on various application workloads, including LLM training and inference for the largest models beyond 175 billion parameters.

An eight-way HGX H200 provides over 32 petaflops of FP8 deep learning compute and 1.1TB of aggregate high-bandwidth memory for the highest performance in generative AI and HPC applications.
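The aggregate figures follow directly from the per-GPU specs. A quick sanity check — note the per-GPU FP8 rate here is inferred by dividing the quoted 32-petaflop aggregate by eight, not a number stated in the release:

```python
# Sanity-check the eight-way HGX H200 aggregate figures quoted above.
GPUS = 8
HBM_PER_GPU_GB = 141
FP8_PER_GPU_PF = 32 / GPUS  # assumed ~4 PF/GPU, inferred from the aggregate

aggregate_memory_tb = GPUS * HBM_PER_GPU_GB / 1000  # 1.128 TB, i.e. "1.1TB"
aggregate_fp8_pf = GPUS * FP8_PER_GPU_PF            # 32 PF FP8
print(f"{aggregate_memory_tb:.3f} TB HBM, {aggregate_fp8_pf:.0f} PF FP8")
```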

When paired with NVIDIA Grace™ CPUs with an ultra-fast NVLink-C2C interconnect, the H200 creates the GH200 Grace Hopper Superchip with HBM3e — an integrated module designed to serve giant-scale HPC and AI applications.

Accelerate AI With NVIDIA Full-Stack Software
NVIDIA’s accelerated computing platform is supported by powerful software tools that enable developers and enterprises to build and accelerate production-ready applications from AI to HPC. This includes the NVIDIA AI Enterprise suite of software for workloads such as speech, recommender systems and hyperscale inference.

Availability
The NVIDIA H200 will be available from global system manufacturers and cloud service providers starting in the second quarter of 2024.

Watch Buck’s SC23 special address on Nov. 13 at 6 a.m. PT to learn more about the NVIDIA H200 Tensor Core GPU.

 


 

https://nvidianews.nvidia.com/news/nvidia-supercharges-hopper-the-worlds-leading-ai-computing-platform



