NVIDIA has announced the H200, a GPU based on the Hopper architecture with HBM3e memory, along with the HGX H200 platform.

Memory bandwidth is 4.8TB/s and capacity is 141GB, about 1.4x the bandwidth and nearly double the capacity of the H100. As a result, NVIDIA quotes a 1.9x performance uplift on Llama 2 70B and 1.6x on GPT-3 175B. (The press release below compares against the A100 instead, hence the 2.4x bandwidth figure there.)

SC23—NVIDIA today announced it has supercharged the world’s leading AI computing platform with the introduction of the NVIDIA HGX™ H200. Based on NVIDIA Hopper™ architecture, the platform features the NVIDIA H200 Tensor Core GPU with advanced memory to handle massive amounts of data for generative AI and high performance computing workloads.

The NVIDIA H200 is the first GPU to offer HBM3e — faster, larger memory to fuel the acceleration of generative AI and large language models, while advancing scientific computing for HPC workloads. With HBM3e, the NVIDIA H200 delivers 141GB of memory at 4.8 terabytes per second, nearly double the capacity and 2.4x more bandwidth compared with its predecessor, the NVIDIA A100.
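
As a quick sanity check, the headline ratios can be recomputed from the published per-GPU specs. The baseline figures below (H100 SXM: 80GB at 3.35TB/s; A100 80GB: 80GB at about 2TB/s) are assumptions taken from NVIDIA's public spec sheets, not from this announcement:

```python
# Recompute the headline memory ratios from per-GPU specs.
# Baseline figures (H100 SXM, A100 80GB) are assumptions from
# NVIDIA's public spec sheets, not stated in this announcement.
h200 = {"capacity_gb": 141, "bandwidth_tbs": 4.8}
h100 = {"capacity_gb": 80, "bandwidth_tbs": 3.35}   # H100 SXM
a100 = {"capacity_gb": 80, "bandwidth_tbs": 2.0}    # A100 80GB

for name, base in (("H100", h100), ("A100", a100)):
    cap = h200["capacity_gb"] / base["capacity_gb"]
    bw = h200["bandwidth_tbs"] / base["bandwidth_tbs"]
    print(f"vs {name}: {cap:.2f}x capacity, {bw:.2f}x bandwidth")

# vs H100: 1.76x capacity, 1.43x bandwidth -> "nearly double", "~1.4x"
# vs A100: 1.76x capacity, 2.40x bandwidth -> matches "2.4x more bandwidth"
```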

H200-powered systems from the world’s leading server manufacturers and cloud service providers are expected to begin shipping in the second quarter of 2024.

“To create intelligence with generative AI and HPC applications, vast amounts of data must be efficiently processed at high speed using large, fast GPU memory,” said Ian Buck, vice president of hyperscale and HPC at NVIDIA. “With NVIDIA H200, the industry’s leading end-to-end AI supercomputing platform just got faster to solve some of the world’s most important challenges.”

Perpetual Innovation, Perpetual Performance Leaps
The NVIDIA Hopper architecture delivers an unprecedented performance leap over its predecessor and continues to raise the bar through ongoing software enhancements with H100, including the recent release of powerful open-source libraries like NVIDIA TensorRT™-LLM.
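
For reference, TensorRT-LLM exposes a high-level Python API for running LLM inference. A minimal sketch, assuming the LLM API shipped in recent TensorRT-LLM releases (class and parameter names may differ by version, and the model ID here is illustrative):

```python
# Minimal TensorRT-LLM inference sketch (assumes the high-level
# LLM API from recent releases; names may vary by version).
from tensorrt_llm import LLM, SamplingParams

# Engine build from a Hugging Face checkpoint happens behind this
# constructor; the model ID below is an illustrative assumption.
llm = LLM(model="meta-llama/Llama-2-70b-hf")

prompts = ["The NVIDIA H200 improves on the H100 by"]
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```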

The introduction of H200 will lead to further performance leaps, including nearly doubling inference speed on Llama 2, a 70 billion-parameter LLM, compared to the H100. Additional performance leadership and improvements with H200 are expected with future software updates.

NVIDIA H200 Form Factors
NVIDIA H200 will be available in NVIDIA HGX H200 server boards with four- and eight-way configurations, which are compatible with both the hardware and software of HGX H100 systems. It is also available in the NVIDIA GH200 Grace Hopper™ Superchip with HBM3e, announced in August.

With these options, H200 can be deployed in every type of data center, including on premises, cloud, hybrid-cloud and edge. NVIDIA’s global ecosystem of partner server makers — including ASRock Rack, ASUS, Dell Technologies, Eviden, GIGABYTE, Hewlett Packard Enterprise, Ingrasys, Lenovo, QCT, Supermicro, Wistron and Wiwynn — can update their existing systems with an H200.

Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure will be among the first cloud service providers to deploy H200-based instances starting next year, in addition to CoreWeave, Lambda and Vultr.

Powered by NVIDIA NVLink™ and NVSwitch™ high-speed interconnects, HGX H200 provides the highest performance on various application workloads, including LLM training and inference for the largest models beyond 175 billion parameters.

An eight-way HGX H200 provides over 32 petaflops of FP8 deep learning compute and 1.1TB of aggregate high-bandwidth memory for the highest performance in generative AI and HPC applications.
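
Those aggregate figures follow directly from the per-GPU specs: eight GPUs at 141GB each give about 1.1TB of HBM, and 32 petaflops across eight GPUs implies roughly 4 petaflops of FP8 per GPU, consistent with the H100's quoted FP8 throughput with sparsity. A quick check:

```python
# Derive the 8-way HGX H200 aggregate numbers from per-GPU specs.
gpus = 8
hbm_per_gpu_gb = 141      # H200 HBM3e capacity per GPU
fp8_total_pflops = 32     # aggregate figure from the announcement

total_hbm_tb = gpus * hbm_per_gpu_gb / 1000
fp8_per_gpu = fp8_total_pflops / gpus

print(f"aggregate HBM: {total_hbm_tb:.2f} TB")    # ~1.13 TB -> "1.1TB"
print(f"FP8 per GPU:  {fp8_per_gpu:.0f} PFLOPS")  # ~4 PFLOPS (with sparsity)
```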

When paired with NVIDIA Grace™ CPUs with an ultra-fast NVLink-C2C interconnect, the H200 creates the GH200 Grace Hopper Superchip with HBM3e — an integrated module designed to serve giant-scale HPC and AI applications.

Accelerate AI With NVIDIA Full-Stack Software
NVIDIA’s accelerated computing platform is supported by powerful software tools that enable developers and enterprises to build and accelerate production-ready applications from AI to HPC. This includes the NVIDIA AI Enterprise suite of software for workloads such as speech, recommender systems and hyperscale inference.

Availability
The NVIDIA H200 will be available from global system manufacturers and cloud service providers starting in the second quarter of 2024.

Watch Buck’s SC23 special address on Nov. 13 at 6 a.m. PT to learn more about the NVIDIA H200 Tensor Core GPU.

 


 

https://nvidianews.nvidia.com/news/nvidia-supercharges-hopper-the-worlds-leading-ai-computing-platform

