SK Hynix plans to increase production of its sixth-generation 10-nanometer DRAM chips roughly eightfold next year, positioning itself to capitalize on the artificial intelligence industry’s pivot from model training to large-scale deployment.
The expansion targets 1c DRAM, which the Icheon-based company developed in 2024 using extreme ultraviolet lithography. The aggressive ramp-up reflects anticipation that AI inference workloads—where trained models make real-time predictions—will soon eclipse training as the primary driver of chip demand.
Market researcher IDC forecasts global investment in AI inference infrastructure will surpass training spending by year-end, marking a fundamental shift in semiconductor requirements. While training demands raw computational power, inference prioritizes memory bandwidth and capacity to serve millions of simultaneous queries.
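A rough back-of-envelope sketch illustrates the distinction. The numbers below are illustrative assumptions rather than figures from SK Hynix or IDC: a hypothetical 70-billion-parameter model served in FP16 on an accelerator with roughly 1,000 TFLOPS of peak compute and 3.35 TB/s of memory bandwidth.

```python
# Illustrative back-of-envelope estimate (assumed figures, not from the article):
# why serving a trained model leans on memory bandwidth rather than raw compute.

params = 70e9                # assumed model size: 70 billion parameters
bytes_per_param = 2          # FP16 weights
flops_per_param = 2          # one multiply-add per parameter per generated token

weight_bytes = params * bytes_per_param     # bytes read per token at batch size 1
flops_per_token = params * flops_per_param  # floating-point operations per token

peak_flops = 1_000e12        # assumed accelerator peak: 1,000 TFLOPS (FP16)
peak_bandwidth = 3.35e12     # assumed memory bandwidth: 3.35 TB/s

compute_time_ms = flops_per_token / peak_flops * 1e3   # ~0.14 ms
memory_time_ms = weight_bytes / peak_bandwidth * 1e3    # ~41.8 ms

print(f"time if compute-bound: {compute_time_ms:.2f} ms per token")
print(f"time if memory-bound:  {memory_time_ms:.2f} ms per token")
# Under these assumptions, moving the weights dominates per-token latency by
# roughly two orders of magnitude, which is why inference buildouts are sized
# around memory bandwidth and capacity rather than peak FLOPS alone.
```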
SK Hynix’s timing aligns with its position as a critical supplier of high-bandwidth memory to Nvidia’s AI processors. The company holds roughly half the HBM market and has secured early qualification for its next-generation HBM products.
Yet questions remain about whether SK Hynix can maintain production quality while scaling so rapidly. The 1c process began volume shipments only this year, leaving limited runway to identify yield issues before the planned expansion. Competitors Samsung and Micron are pursuing similar capacity additions, potentially flooding the market if inference demand fails to materialize as projected.