John Carmack contemplates using a long fiber line as an L2 cache for streaming AI data: the programmer envisions fiber as a substitute for DRAM
Yes, it's delay-line memory, but with lasers!
Anyone can set up an X account and voice their thoughts, but not every opinion deserves attention. When John Carmack tweets, though, people usually listen. His latest musing proposes using a long fiber loop as a kind of L2 cache for AI model weights, offering massive bandwidth and a latency that sequential streaming can hide.
Carmack arrived at the idea after noting that single-mode fiber links have demonstrated 256 Tb/s over a distance of 200 km. With back-of-the-envelope math, he worked out that about 32 GB of data is in flight inside the fiber at any given moment: light in glass covers 200 km in roughly a millisecond, and 256 Tb/s sustained over one millisecond is 32 GB.
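The arithmetic is easy to verify. A quick sanity check in Python, assuming a typical single-mode refractive index of about 1.47 (the tweet doesn't specify one):

```python
# Back-of-the-envelope check of Carmack's figures.
C = 299_792_458          # speed of light in vacuum, m/s
N = 1.47                 # assumed refractive index of single-mode fiber

link_rate_bps = 256e12   # 256 Tb/s demonstrated data rate
length_m = 200e3         # 200 km of fiber

# Light in glass travels at c/n, so 200 km takes about a millisecond.
flight_time_s = length_m / (C / N)
in_flight_bytes = link_rate_bps * flight_time_s / 8

print(f"one-way flight time: {flight_time_s * 1e3:.2f} ms")   # ~0.98 ms
print(f"data in flight: {in_flight_bytes / 1e9:.1f} GB")      # ~31.4 GB
```

That lands within rounding distance of Carmack's 32 GB figure, and dividing 32 GB by the roughly 1 ms loop time gives the 32 TB/s bandwidth he quotes.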
AI model weights can be accessed sequentially for inference, and nearly so for training. The next logical step, then, is to use the fiber loop as a streaming cache that keeps the AI accelerator continuously fed: treat conventional RAM as nothing more than a buffer between SSDs and the processor, and look for ways to shrink it or remove it entirely.
"256 Tb/s data rates over a 200 km distance have been shown on single-mode fiber optic, equating to 32 GB of data in transit, “stored” within the fiber, with a 32 TB/s bandwidth. Neural network inference and training may exhibit deterministic weight reference patterns, making it…" (John Carmack, February 6, 2026)
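To illustrate the buffering pattern Carmack has in mind, here is a minimal double-buffering sketch in Python. The read_chunk and run_layer functions are hypothetical stand-ins for a real storage API and accelerator kernel, not anything from Carmack's posts:

```python
import threading
from queue import Queue

NUM_CHUNKS = 8    # pretend the model has 8 sequential weight chunks
QUEUE_DEPTH = 2   # "RAM" shrinks to a two-chunk staging buffer

def read_chunk(i):
    return bytes(1024)   # hypothetical: read chunk i's weights from storage

def run_layer(chunk):
    pass                 # hypothetical: run inference against the chunk

def prefetch(q):
    for i in range(NUM_CHUNKS):
        q.put(read_chunk(i))   # blocks whenever the staging buffer is full
    q.put(None)                # sentinel: no more weights

q = Queue(maxsize=QUEUE_DEPTH)
threading.Thread(target=prefetch, args=(q,), daemon=True).start()

# Compute overlaps the next read; the buffer never holds the whole model.
while (chunk := q.get()) is not None:
    run_layer(chunk)
```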
The post drew a flood of responses, several from prominent figures. Many noted that the idea is essentially delay-line memory, evoking the mid-century era when tubes of mercury served as the medium and sound waves carried the data. Mercury's temperamental behavior made it difficult to work with; Alan Turing himself reportedly suggested using a gin blend as the medium.
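The principle behind delay-line memory is easy to model: bits circulate through a fixed-length delay, are read as they emerge, and are immediately re-injected (the regeneration step). A toy sketch, with sizes chosen purely for illustration:

```python
from collections import deque

class DelayLine:
    """Toy delay-line memory: a fixed number of bits in flight."""
    def __init__(self, length_bits):
        self.line = deque([0] * length_bits)

    def tick(self, write=None):
        bit = self.line.popleft()                          # bit reaches the tap
        self.line.append(bit if write is None else write)  # regenerate or overwrite
        return bit

line = DelayLine(8)
for b in [1, 0, 1, 1, 0, 0, 1, 0]:       # write a byte, one bit per tick
    line.tick(write=b)
print([line.tick() for _ in range(8)])    # one circulation later: [1, 0, 1, 1, 0, 0, 1, 0]
```

Note the access pattern: you cannot address a bit at random, you wait for it to come around, which is exactly why sequential workloads like streaming model weights suit the scheme.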
The primary real-world advantage of a fiber line is energy: keeping DRAM refreshed takes considerable power, while light propagating through glass takes comparatively little. Light is also deterministic and straightforward to manage. Carmack observes that "fiber transmission may have a better growth trajectory than DRAM," but even setting logistics aside, 200 km of fiber would likely still be quite expensive.
Other observers pointed to constraints beyond the sheer amount of fiber the proposal would demand. Optical amplifiers and DSPs might eat into the energy savings, and DRAM prices should eventually fall anyway. Some, including Elon Musk, even proposed vacuum as the medium (space lasers!), though the feasibility of such a design is questionable.
A follow-up from Carmack hinted at a more practical approach: connecting existing flash memory chips directly to the accelerators, with careful attention to timing. That would require a standardized interface agreed upon by flash and AI accelerator manufacturers, but given the massive sums being invested in AI, that possibility doesn't seem unrealistic at all.
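Some rough math shows why a standard interface would be the hard part. The per-die throughput below is an assumption (modern NAND dies stream very roughly 1 to 2 GB/s), not a published spec:

```python
TARGET_BW = 32e12    # bytes/s: match the fiber loop's 32 TB/s
PER_DIE_BW = 1.6e9   # bytes/s per NAND die -- assumed, not a spec sheet value

dies_needed = TARGET_BW / PER_DIE_BW
print(f"NAND dies needed to match the fiber loop: ~{dies_needed:,.0f}")  # ~20,000
```

Twenty thousand dies is a large array, but not an absurd one by data-center standards; coordinating that many channels is precisely the kind of problem a common flash-to-accelerator interface would have to solve.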
Several research groups have indeed examined variations of the concept, including Behemoth, FlashGNN, and FlashNeuron, all from 2021, and more recently the Augmented Memory Grid. It's easy to picture one or more of these being implemented, assuming they haven't been already.

Comments
Gururu: Assuming current solid state media for L2 cache provide means for verification, I would be curious to know how data in a fiber might be managed free from corruption.
Scott_Tx: The same way any data transmitted over fiber is checked. Some kind of CRC code and if there's an error you read it from the storage again.
Gururu:
Scott_Tx said: "The same way any data transmitted over fiber is checked. Some kind of CRC code and if there's an error you read it from the storage again."
Sounds great!
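A minimal sketch of the check-and-reread pattern Scott_Tx describes, using Python's zlib.crc32; read_from_fiber and read_from_ssd are hypothetical helpers:

```python
import zlib

def read_from_ssd(addr):
    return b"layer-weights"        # authoritative copy on storage

def read_from_fiber(addr):
    data = b"layer-weights"        # copy circulating in the fiber loop
    return data, zlib.crc32(data)  # data plus its stored CRC

def load_chunk(addr):
    data, crc = read_from_fiber(addr)
    if zlib.crc32(data) != crc:    # corrupted in flight?
        data = read_from_ssd(addr) # fall back to storage
    return data
```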
Moores_Ghost: Carmack? Jim Keller came up with a similar idea back in 2024. Nice that John is adding to it, but we need to give credit where credit is due.
mrlaich: So what's stopping this from being a backbone-like structure for nervous-system linings throughout a robot, for "muscle memory" storage?
bit_user:
The article said: "AI model weights can be accessed sequentially for inference"
I think this isn't true for transformer-type models.
bit_user:
Gururu said: "Assuming current solid state media for L2 cache provide means for verification"
Yeah, L2 caches usually include some ECC.
Gururu said: "I would be curious to know how data in a fiber might be managed free from corruption."
Well, PCIe 6.0 added a forward-error-correction (FEC) header. This avoids the need to retransmit in the event of small numbers of errors. It's conceptually similar to ECC, I think. For all I know, protocols used over long-haul fiber might already employ such a mechanism.
JohnyFin: I think the next logical step for storing temporary data runs up against the limits of the speed of light. Error-correction management is a challenge.
abufrejoval: It's one of the reasons I like looking at old computing stuff so much, right from the start in the 1940s. There is a video out there where J. Presper Eckert discusses the ENIAC design (he might also be dissing John von Neumann) that's just so full of insights into those early stages. Likewise, I consider Grace Hopper's lectures a must-see.
I keep thinking that by the late 1960s pretty nearly everything in computing had already been invented, because those people were looking in every direction. The various technologies around RAM were incredibly diverse; they really tried anything that could somehow hold state, expand capacity, or relieve the memory bottleneck.
Whether it was media (delay lines, magnetic core, and then DRAM), virtual memory including fixed-head magnetic drum drives, the Harvard vs. Princeton architecture, or even content-addressable memory, it had all been tried and tested. Architectural ideas like >60-bit designs, single-level store, and capability-based computing also went further than most manage even today.
Even in terms of the mathematical theory of computing, most things were already settled by Turing, Post, Shannon, etc.
Lithography has given the industry incredible growth, but also extreme tunnel vision. Unix has also been a huge throwback compared to Multics.
Spuwho: Bell Labs has already built an optical storage device. They switch laser light in using a high-speed micromirror, and the light then reflects indefinitely between mirrors until the micromirror switches it back out. Some people consider it a time machine, since the state of the light never changes as long as it is inside the apparatus.
But using fiber seems like overkill when light switching seems more practical.