Bill Gates-backed silicon photonics startup develops optical transistors 10,000x smaller than current tech — optical chip can process 1,000 x 1,000 matrix multiplications
Will this let us break free from the limitations of current silicon technology?
Neurophos, an AI chip startup based in Austin, Texas, and backed by Bill Gates’ Gates Frontier Fund, says it has developed an optical processing unit (OPU) that it claims is ten times more powerful than Nvidia’s latest Vera Rubin NVL72 AI supercomputer in FP4 / INT4 compute workloads, while consuming a similar amount of power. According to The Register, the company achieves this by using a larger matrix and a much higher clock speed.
“On chip, there is a single photonic sensor that is 1,000 by 1,000 in size,” Neurophos CEO Patrick Bowen told the publication. That is about 15 times as many elements as the usual 256 x 256 matrix used in most AI GPUs. To fit that many elements on a die, the company says it made its optical transistor around 10,000 times smaller than what’s currently available. “The equivalent of the optical transistor that you get from Silicon Photonics factories today is massive. It’s like 2 mm long,” Bowen added. “You just can’t fit enough of them on a chip in order to get a compute density that remotely competes with digital CMOS today.”
The company’s first-generation accelerator will have "the optical equivalent" of one tensor core, at around 25 square millimeters. That pales in comparison with Nvidia’s Vera Rubin chip, which is reported to have 576 tensor cores, but the raw count is misleading because of how Neurophos uses the photonic die. Beyond its larger 1,000 x 1,000 matrix tile, the startup’s first OPU, which it calls the Tulkas T100, will operate at a cool 56 GHz — far higher than the 9.1 GHz overclocking world record set on an Intel Core i9-14900KF and the 2.6 GHz boost clock of the Nvidia RTX Pro 6000. That combination is how it can beat Nvidia’s AI GPUs despite appearing underpowered on paper.
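To get a feel for why one very large, very fast tile can matter more than a big tensor-core count, here is a rough back-of-the-envelope sketch. It assumes each tile completes one full matrix-vector pass per clock cycle and counts a multiply-accumulate as two operations; both are simplifying assumptions for illustration, not figures Neurophos or Nvidia has published.

```python
# Back-of-the-envelope throughput for a single matrix tile, assuming (purely for
# illustration) that a tile completes one full matrix-vector pass per clock cycle
# and that a multiply-accumulate counts as two operations.

def tile_ops_per_second(rows: int, cols: int, clock_hz: float) -> float:
    """Operations per second for one matrix tile under the assumptions above."""
    return 2 * rows * cols * clock_hz

# Figures from the article: a 1,000 x 1,000 photonic tile at 56 GHz...
opu_tile = tile_ops_per_second(1_000, 1_000, 56e9)
# ...versus a hypothetical 256 x 256 tile at a GPU-like 2.6 GHz boost clock.
gpu_like_tile = tile_ops_per_second(256, 256, 2.6e9)

print(f"Photonic tile:  ~{opu_tile / 1e15:.0f} peta-ops/s")        # ~112
print(f"256x256 tile:   ~{gpu_like_tile / 1e12:.0f} tera-ops/s")   # ~341
print(f"Per-tile ratio: ~{opu_tile / gpu_like_tile:.0f}x")         # ~329
```

A real GPU spreads its work across hundreds of tensor cores that are not organized as 256 x 256 tiles, so the per-tile ratio above is only an intuition for the design trade-off, not a like-for-like benchmark.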
More importantly, Bowen says the company built its optical transistors using current semiconductor fabrication technologies, so it could potentially tap fabs like Intel or TSMC to mass-produce them. Nevertheless, the chips are still in the testing phase and are not expected to enter volume production until 2028. The company also needs to address challenges such as the design's need for a large number of vector processing units and a lot of static RAM (SRAM).
Photonics is a frontier that many companies are paying attention to. Nvidia already uses Spectrum-X Ethernet photonics switch systems in its Rubin platform, while AMD is set to build a $280 million hub specifically geared toward silicon photonics research. Either way, this latest development is just one more wrinkle in the field, and we should expect many more as the technology matures.

bit_user:
Huh. It seems the only part of this chip that's optical is just the tensor core. Everything else is conventional digital logic.
I think they need to find a proper optical equivalent to SRAM, before we might be able to see a true optical CPU.
QuarterSwede:
bit_user said:
Huh. It seems the only part of this chip that's optical is just the tensor core. Everything else is conventional digital logic. I think they need to find a proper optical equivalent to SRAM, before we might be able to see a true optical CPU.
Great point. It seems to be the future, since this can be made with existing fabs, so I'm sure memory will be tackled next. Not sure chip design and memory design see the same scaling, but I'm sure using optical for memory design will lead to much smaller parts as well. 10,000x smaller? That's a LOT of memory.
And not sure why the article writer decided to leave what photonics is till the end of the article. That would have been helpful to know up front for those of us who don't live in this world.
bit_user:
QuarterSwede said:
I'm sure memory will be tackled next.
It's easy to say, but I'm not sure actually creating a purely optical memory is even a solved problem.
QuarterSwede said:
10,000x smaller? That's a LOT of memory.
The article was talking about how they managed to shrink an optical transistor that was previously 2 millimeters. That's where they got the 10k scale factor. Conventional SRAM cells are currently somewhere in the ballpark of 40 nanometers (source: https://semiwiki.com/forum/threads/sram-cell-scaling.12722/ ). So, the point of purely optical memory would be to support building purely optical CPUs, not necessarily improving density over conventional SRAM.
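As a rough sanity check on those numbers, here's a minimal sketch that takes the article's 2 mm and 10,000x figures and the ~40 nm SRAM ballpark above at face value; it compares linear dimensions only, so it says nothing about actual cell density.

```python
# Rough sanity check, taking the article's figures at face value.
old_optical_transistor_m = 2e-3   # ~2 mm optical transistor today, per the article
shrink_factor = 10_000            # ~10,000x smaller, per the article
sram_cell_m = 40e-9               # ~40 nm ballpark for a modern SRAM cell (see link above)

new_optical_transistor_m = old_optical_transistor_m / shrink_factor
print(f"New optical transistor: ~{new_optical_transistor_m * 1e9:.0f} nm")               # ~200 nm
print(f"Still ~{new_optical_transistor_m / sram_cell_m:.0f}x larger than an SRAM cell")  # ~5x
```

Even after the claimed shrink, the optical device would still be several times larger than an SRAM cell in one dimension, which is consistent with the density point above.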
KraakBal:
How do you make a transistor a few atoms big, 10,000x smaller? Why does everyone keep lying?
bit_user:
KraakBal said:
How do you make a transistor a few atoms big, 10,000x smaller?
The article says the size reduction is relative to the 2 millimeter size that optical transistors were, before now.
KraakBal said:
Why does everyone keep lying?
Why does everyone keep thinking they're comparing to electronic transistors? If you see a surprising headline, read the article! Second paragraph:
The article said:
the company was able to make its optical transistor around 10,000 times smaller than what's currently available. “The equivalent of the optical transistor that you get from Silicon Photonics factories today is massive. It’s like 2 mm long,” Bowen added.
abufrejoval:
But can it run Cobol?
It sounds great until you realize that it's little better than quantum computing, only able to accelerate some niche problems, many of them around AI, which isn't designed to empower consumers, but to control them.
bit_user:
abufrejoval said:
But can it run Cobol?
Well, the chip has a lot of digital electronics to feed the tensor core. So, perhaps.
abufrejoval said:
It sounds great until you realize that it's little better than quantum computing, only able to accelerate some niche problems, many of them around AI,
Its main feature is the optical tensor core. I don't know what effective precision that runs at, but it seems to happen in the analog domain. So, if you had a problem where you needed lots of fast matrix multiplies of limited accuracy, then it should do the trick. These days, most problems fitting that mold are neural network inferencing. I could imagine using it for other signal processing use cases, but the limited precision would probably restrict it to image/video processing.
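To illustrate what "fast matrix multiplies of limited accuracy" can look like, here's a minimal sketch that quantizes the operands of a matrix multiply to a limited number of levels and adds a little noise, roughly the way an analog compute element might behave. The 8-bit width and 1% noise level are made-up illustration values, not anything Neurophos has disclosed.

```python
import numpy as np

# Toy model of a low-precision "analog-ish" matrix multiply: quantize the operands
# to a limited number of levels, add a little noise to the result, and compare
# against the exact product. The 8-bit width and 1% noise level are illustrative
# assumptions, not anything Neurophos has disclosed.

rng = np.random.default_rng(0)

def quantize(x: np.ndarray, bits: int = 8) -> np.ndarray:
    """Uniformly quantize x to 2**bits levels spanning its own min/max range."""
    levels = 2 ** bits - 1
    lo, hi = x.min(), x.max()
    step = (hi - lo) / levels
    return lo + np.round((x - lo) / step) * step

A = rng.standard_normal((1000, 1000))   # a 1,000 x 1,000 matrix, matching the tile size
x = rng.standard_normal(1000)

exact = A @ x
approx = quantize(A) @ quantize(x)
approx += rng.standard_normal(approx.shape) * 0.01 * np.abs(approx).mean()  # "analog" noise

rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(f"Relative error of the low-precision result: {rel_err:.1%}")
```

The result tracks the exact answer but with a visible error; neural-network inference generally tolerates that, while most general-purpose linear algebra does not, and the FP4 / INT4 modes mentioned in the article are coarser still and need careful scaling to stay usable.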