Imagine if the cables carrying your internet traffic could think while they transmit. A research team from NTT Inc., Kyushu University, and the University of Tokyo has done just that, building a working neural network that performs computations as light pulses travel through 20 kilometers of fiber optic cable.
The system, described in a paper published November 4 in Advanced Devices & Instrumentation, represents a fundamentally different approach to computing. Instead of the usual architecture where processors, memory, and networks are separate components constantly shuffling data back and forth, this technology merges thinking with transmission.
The researchers call their approach “compute-in-wire.” It sounds almost too simple: why not just do the computing inside the cable itself?
Why Your Data Spends Most of Its Time Waiting
Modern machine learning has a traffic problem. Training massive neural networks requires moving enormous datasets between storage, memory, and processors. That movement consumes both time and energy, often more than the actual computation.
Engineers have tried various workarounds. Compute-in-memory performs calculations directly inside storage chips. Compute-in-sensor processes images right at the camera pixel level, before the data goes anywhere. Each approach tries to eliminate unnecessary trips.
Optical fibers seemed like an unlikely place to put intelligence. We typically think of them as passive pipes, exceptionally good at moving information across long distances with minimal loss. The telecom bands offer more than 20 THz of bandwidth. But the NTT-led team saw potential in those very properties. Light’s inherent parallelism enables tera-scale operations per second while sipping femtojoules to attojoules per operation.
The trick was finding a way to install a neural network inside a single transmission line without requiring hundreds of parallel components. Previous photonic computing approaches needed separate physical channels or wavelengths for each data element. For a neural network with even a modest 1,000 nodes, that would mean 1,000 lasers, 1,000 modulators, and 1,000 detectors. Not exactly practical.
When a Bug Becomes a Feature
The researchers found their solution in dispersion, an optical property that fiber engineers usually try to minimize. When light pulses travel through fiber, different wavelengths move at slightly different speeds. A sharp pulse entering the fiber emerges stretched and blurred at the other end.
Telecommunications companies spend considerable effort compensating for this spreading effect. But lead researcher Mitsumasa Nakajima and his colleagues realized that controlled dispersion could replace the need for multiple delay loops. The spreading naturally creates connections between different time slots in the signal, essentially building a convolutional filter into the physics itself.
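To see why spreading acts like a convolutional filter, here's a toy sketch in plain Python. The kernel weights and slot values are invented for illustration; in real fiber the amount of spreading is set by the dispersion coefficient and the length of the link, not by these numbers.

```python
# Hypothetical illustration: data values encoded in successive time slots.
time_slots = [0.0, 1.0, 0.0, 0.5, 1.0, 0.0]

# Made-up dispersion kernel: a pulse leaks a fraction of its energy into
# the neighboring slots. The weights here are purely illustrative.
kernel = [0.2, 0.6, 0.2]  # [previous slot, same slot, next slot]

def disperse(slots, kernel):
    """Mix each time slot with its neighbors -- a convolution in time."""
    half = len(kernel) // 2
    out = []
    for i in range(len(slots)):
        acc = 0.0
        for j, w in enumerate(kernel):
            k = i + j - half
            if 0 <= k < len(slots):
                acc += w * slots[k]
        out.append(acc)
    return out

print(disperse(time_slots, kernel))
```

Each output slot is now a weighted mix of its neighbors, which is exactly the operation a convolutional layer performs, except here the physics does the mixing for free.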
Instead of treating optical fibers as passive transmission media, compute-in-wire actively performs machine learning computations while data are in transit, eliminating unnecessary data transfers between processing units.
Their “dispersive folded-in-time deep neural network” uses just one feedback loop with one nonlinear device and one modulator. Data encoded in different time slots travel through the loop repeatedly, with each round trip corresponding to one layer of the neural network. The system demonstrated respectable accuracy on standard benchmarks: 95.82% on handwritten digit recognition and 86.3% on clothing images.
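How can one loop stand in for many layers? A minimal sketch, with all signal values and weights invented and `tanh` standing in for the photodetector-plus-amplifier nonlinearity: each round trip mixes neighboring time slots (the dispersion), applies the modulator's per-slot weights, and passes through the single shared nonlinear device.

```python
import math

def round_trip(slots, weights):
    """One pass through the loop = one network layer (illustrative sketch)."""
    # Dispersion: each time slot blends with its neighbors.
    mixed = []
    for i in range(len(slots)):
        left = slots[i - 1] if i > 0 else 0.0
        right = slots[i + 1] if i < len(slots) - 1 else 0.0
        mixed.append(0.2 * left + 0.6 * slots[i] + 0.2 * right)
    # Modulator applies per-slot weights; the photodetector + amplifier
    # stage supplies the nonlinearity (tanh is a stand-in here).
    return [math.tanh(w * m) for w, m in zip(weights, mixed)]

signal = [0.1, 0.8, -0.5, 0.4]              # data in four time slots
layer_weights = [[0.9, -0.4, 0.7, 0.3]] * 5  # one weight set per round trip

for w in layer_weights:  # five round trips = a five-layer network
    signal = round_trip(signal, w)

print(signal)
```

The same three physical components are reused on every pass, which is the "folded-in-time" idea: depth in time replaces depth in hardware.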
Teaching Physics to Learn
Training a neural network built from light presents an interesting chicken-and-egg problem. You need to know how the physical system behaves to optimize it, but the behavior depends on hard-to-measure parameters like exact dispersion coefficients and nonlinear responses.
The team’s solution was building a software doppelganger. They constructed a detailed digital twin that mirrors the physical system’s behavior, including all the messy real-world effects. Train the model digitally using standard algorithms, then transfer the optimized settings to the actual hardware. It’s more cumbersome than pure software training, but it works.
Their experimental setup modulated optical signals at 60 billion samples per second. Data traveled through both a 20-kilometer standard fiber access line and a specialized dispersive element that provided additional controlled spreading. A photodetector and amplifier implemented the nonlinear activation functions that give neural networks their power, while lithium niobate modulators controlled signal intensity in the feedback loop.
The energy numbers look promising. Including all the transmitters and receivers, the system consumes roughly 1.15 picojoules per operation. But here’s where it gets interesting: if you consider the transmitter and receiver as communication infrastructure rather than computing overhead, the actual computational cost drops to 50 femtojoules per operation. Compare that to GPUs, which typically burn through 1,000 femtojoules (one picojoule) per operation at best.
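A quick sanity check on those figures (all three numbers taken straight from the comparison above; the GPU value is the article's stated best case, not a measurement):

```python
# Reported energy figures, per operation.
total_pj_per_op = 1.15     # whole system, including transmitters/receivers
compute_fj_per_op = 50.0   # computation only
gpu_fj_per_op = 1000.0     # ~1 pJ/op, best-case GPU figure cited above

total_fj_per_op = total_pj_per_op * 1000.0  # 1 pJ = 1000 fJ

print(f"compute-only vs GPU: {gpu_fj_per_op / compute_fj_per_op:.0f}x better")
print(f"all-in system vs GPU: {gpu_fj_per_op / total_fj_per_op:.2f}x")
```

So the whole system is roughly on par with a GPU, but the computation itself is about twenty times cheaper, which is why the accounting question of what counts as "communication" matters.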
The Latency Almost Disappears
Adding computation to a transmission line sounds like it might slow things down. In practice, the effect is negligible. The researchers calculate their five-layer network adds about 167 nanoseconds of processing time. Light traveling 20 kilometers through fiber with a refractive index of 1.45 takes roughly 96.7 microseconds. The computing overhead? Just 0.2% additional delay.
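The arithmetic is easy to check from the figures in the text:

```python
C = 299_792_458.0    # speed of light in vacuum, m/s
n = 1.45             # refractive index of silica fiber
length_m = 20_000.0  # 20 km link

propagation_s = length_m * n / C  # light is slowed by the factor n
compute_overhead_s = 167e-9       # five-layer network, per the paper

print(f"propagation: {propagation_s * 1e6:.1f} us")
print(f"overhead:    {compute_overhead_s / propagation_s * 100:.2f} %")
```

Running this reproduces the roughly 96.7-microsecond transit time, with the computation adding well under a fifth of a percent.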
This matters for distributed machine learning, where large models get split across multiple processors connected by optical links. Current systems send data to a remote GPU cluster, wait for processing, then retrieve results. Each step involves latency from digital processing at network nodes and connection establishment protocols. The compute-in-wire approach potentially eliminates those delays by performing at least some of the neural network operations during transmission itself.
The technology has clear limitations. It works best with neural networks that repeat the same structure across layers. Training requires careful physical parameter estimation before optimization can begin. And real-world performance fell somewhat short of simulated results, likely due to optical losses, noise accumulation, and timing misalignments that the digital twin doesn’t fully capture.
Still, the concept opens intriguing possibilities. Imagine portions of a language model’s computation happening right inside the fiber connecting data centers, or image recognition starting during transmission from a remote camera. The researchers suggest their approach could eventually extend from local edge computing to genuinely distributed intelligence woven into the optical infrastructure itself. Whether that vision scales to modern deep learning’s complexity remains to be seen, but they’ve at least demonstrated that fiber optic cables can be more than dumb pipes.
Advanced Devices & Instrumentation: DOI 10.34133/adi.0121
ScienceBlog.com has no paywalls, no sponsored content, and no agenda beyond getting the science right. Every story here is written to inform, not to impress an advertiser or push a point of view.
