KAYTUS Launches All-QLC Flash Storage at AI EXPO 2026 for 10,000-GPU Clusters

Published: 2026-05-10 01:04

KAYTUS’s next-generation all-QLC flash solution delivers fully linear performance scaling for massive GPU clusters, while reducing TCO by 70%, enabling ultra-large-scale computing for the era of agentic AI.

SINGAPORE--(BUSINESS WIRE)--At AI EXPO KOREA 2026, KAYTUS officially launched its All-QLC Flash Storage Solution, engineered to deliver high performance, massive scalability, and cost efficiency for 10,000-GPU clusters. The solution addresses data-delivery bottlenecks in ultra-large-scale AI training, helping maximize GPU resource utilization.

Based on the KR2280 and KR1180 server platforms, the solution is deeply integrated with industry-leading AI-native parallel file systems to eliminate data silos inherent in traditional tiered storage. Purpose-built for read-intensive AI workloads, it overcomes the horizontal scaling limitations of massive clusters. Verified test data show that, at exabyte-scale deployment, the solution delivers 10 TB/s aggregate bandwidth and 100 million IOPS. In addition, it reduces five-year TCO by 70% compared with traditional TLC-based solutions, accelerating model innovation for AI cloud providers and intelligent computing centers.

Limitations in Traditional AI Storage Architectures

The explosive growth of AI is fundamentally transforming enterprise computing and storage requirements. Large-scale AI model training features highly read-intensive workloads that require tens of thousands of GPUs to concurrently access exabyte-scale datasets with sub-millisecond latency. Traditional storage architectures now face three major challenges:

  • Separated Data Silos: Traditional ETL processes require data to be moved from object storage to parallel file systems before training, resulting in time-consuming physical data migration. IDC research indicates that data teams spend 81% of their time on data preparation, slowing business iteration.
  • Workload and Media Mismatch: More than 90% of AI training involves high-frequency concurrent reads. In contrast, traditional TLC flash solutions provide excessive write endurance that is unnecessary for these read-intensive workloads, driving up procurement, space, and power costs for exabyte-scale clusters and resulting in inefficient resource utilization.
  • Scalability Bottlenecks: Traditional file systems were not designed to handle the I/O burst workloads generated by 10,000-GPU clusters. As clusters scale, metadata lock contention and communication overhead introduce latency spikes and degraded overall performance.
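The media mismatch above translates directly into drive counts. A back-of-the-envelope sketch, using the 15.36 TB TLC and 61.44 TB QLC drive capacities cited in the benchmark note later in this release (decimal units, and ignoring RAID and overprovisioning overhead for simplicity):

```python
# Rough drive-count comparison for a 1 EB raw-capacity deployment.
# Drive sizes are the ones cited in this release; the no-overhead
# assumption is a simplification for illustration only.

EB = 10**18  # 1 exabyte in bytes (decimal)
TB = 10**12  # 1 terabyte in bytes (decimal)

tlc_count = EB / (15.36 * TB)  # drives needed with 15.36 TB TLC
qlc_count = EB / (61.44 * TB)  # drives needed with 61.44 TB QLC

print(round(tlc_count))        # ~65,104 drives
print(round(qlc_count))        # ~16,276 drives
print(tlc_count / qlc_count)   # 4.0x fewer drives with QLC
```

Fewer drives means proportionally fewer slots, enclosures, and watts at the same capacity, which is where the rack-space and power savings claimed below come from.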

KAYTUS Solution: All-QLC Flash Storage Delivering High Performance, Scalability, and Cost Efficiency

The next-generation KAYTUS All-QLC Flash Storage Server Solution is purpose-built to unlock the full potential of read-intensive AI training workloads. By tightly integrating flagship compute nodes with industry-leading AI-native parallel file systems, the solution harnesses advanced hardware–software co-design to deliver breakthrough performance, seamless scalability, and superior cost efficiency for ultra-large-scale AI computing environments.

Architectural Innovation: Overcoming AI Training Efficiency Bottlenecks

The KAYTUS solution establishes a unified namespace with native multi-protocol access across file, object, and block storage. By leveraging high-capacity QLC flash pools and NVMe-oF fully shared interconnects, it redefines the unified data plane for AI storage, effectively eliminating the data silos inherent in traditional tiered architectures. Data can now flow on demand to GPU nodes without cross-system migration, enabling sub-millisecond access and significantly improving AI training data retrieval efficiency.

  • Hardware Optimization: Engineered for read-intensive workloads, the solution features a PCIe 5.0 direct-connect architecture that doubles single-node I/O bandwidth compared to the previous generation. Combined with NUMA-balanced optimization, it effectively eliminates internal throughput bottlenecks.
  • Software Synergy: The solution integrates NFS over RDMA and native GPU Direct Storage technology, enabling direct data paths from QLC flash to GPU memory. By leveraging a disaggregated architecture that decouples protocol processing from storage states, it eliminates east-west traffic and achieves fully linear scaling of bandwidth and throughput, from petabyte to exabyte scale.
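The claimed per-node doubling follows from the PCIe generation change itself. A rough per-slot calculation using the standard PCIe link rates (these are generic PCIe figures, not KAYTUS-specific measurements):

```python
# Theoretical per-direction bandwidth of a x16 PCIe link.
# Gen4 runs 16 GT/s per lane, Gen5 runs 32 GT/s per lane,
# both using 128b/130b line encoding.

def x16_bandwidth_gbs(gt_per_s: float) -> float:
    lanes = 16
    encoding = 128 / 130           # 128b/130b line-code efficiency
    bits_per_byte = 8
    return gt_per_s * lanes * encoding / bits_per_byte  # GB/s

gen4 = x16_bandwidth_gbs(16.0)     # ~31.5 GB/s
gen5 = x16_bandwidth_gbs(32.0)     # ~63.0 GB/s
print(gen5 / gen4)                 # 2.0 — raw link bandwidth doubles
```

Real-world throughput lands below these theoretical ceilings, but the 2x ratio between generations holds, which is what the direct-connect architecture exploits.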

10,000-GPU Cluster Benchmarks: Exceptional Performance, Scalability, and Cost Efficiency

In benchmark testing in an exabyte-scale storage environment for a 10,000-GPU data center, the solution—powered by KR2280 and KR1180 nodes and optimized with industry-leading AI-native parallel file systems—demonstrated its capability to scale seamlessly to support computing clusters of up to 10,000 GPUs.

  • Extreme Performance at Scale: The system delivers 10 TB/s sustained aggregate read bandwidth and 100 million random-read IOPS, enabling concurrent access for tens of thousands of GPUs. Performance scales linearly as additional nodes are added, while GPU utilization remains consistently above 95%, with no storage-side lock contention or queuing, effectively eliminating GPU data starvation.
  • Superior Cost Efficiency: Compared with traditional TLC all-flash solutions, the solution reduces five-year TCO by 70% and cuts power and cooling costs by more than 75%, helping enterprises avoid paying for write endurance that read-intensive workloads do not need.

Metric (1 EB Capacity) | TLC SSD Solution | QLC SSD Solution | Difference
CAPEX                  | 1.0              | 0.39             | 65% ↓
Power Cost             | 1.0              | 0.29             | 75% ↓
5-Year TCO             | 1.0              | 0.36             | 70% ↓

(Note: Based on 15.36 TB TLC vs. 61.44 TB QLC drive units)
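The normalized rows are internally consistent if five-year TCO is modeled as a weighted blend of CAPEX and power cost. Solving for the weight that reconciles the three figures (an illustrative model, not KAYTUS's published methodology):

```python
# If 5-year TCO = w * CAPEX + (1 - w) * power_cost, what CAPEX
# weight w makes the three normalized rows above consistent?

capex_ratio = 0.39   # QLC CAPEX relative to TLC
power_ratio = 0.29   # QLC power cost relative to TLC
tco_ratio   = 0.36   # QLC 5-year TCO relative to TLC

# tco = w*capex + (1-w)*power  =>  w = (tco - power) / (capex - power)
w = (tco_ratio - power_ratio) / (capex_ratio - power_ratio)
print(round(w, 2))   # 0.7 — CAPEX would be ~70% of 5-year TCO
```

Under this simple model, hardware acquisition dominates the five-year bill, with power and cooling making up the remaining ~30%.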

 
