CogniFiber, an Israeli startup, raised $6 million in its first investment round earlier this month with the ambitious goal of reinventing "how contemporary computing is handled." After exchanging a few emails with Dr. Eyal Cohen, a co-founder of CogniFiber, we've learned a lot more about what may be the biggest breakthrough in computing technology in decades.
CogniFiber's proprietary Deeplight technology enables fiber-optic cables to run "complicated algorithms" inside the fiber itself before the data reaches the terminal, a major advance in in-fiber computing.
To put it another way, the fiber itself performs most of the work, with only a little help from electronics. CogniFiber intends to have a working proof-of-concept by 2022 and a full-scale system prototype ready to show off in May at CLEO, the international conference dedicated to laser research and photonic applications.
As far as we know, this technology won't be making its way into computers or smartphones any time soon. Data centers and research facilities are where the AI sector will reap most of the benefits.
“A 100-fold increase in performance”
Using MLPerf, the usual benchmark for comparison, Dr. Cohen claims the system can achieve 500M tasks per second, roughly 100 times the throughput of Nvidia's present performance champion, the DGX-A100.
Performance can be scaled along several axes: many-core fibers (up to 100,000 cores per fiber), multiple wavelengths, multiple processors per system (up to 1,800 per rack), and additional racks.
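Because these scaling axes multiply, the aggregate parallelism grows quickly. A back-of-the-envelope sketch, using the article's claimed maxima where given; the wavelength count is an illustrative assumption (the article only says "numerous"), and treating the axes as fully independent multipliers is also an assumption:

```python
# Back-of-the-envelope scaling sketch. Figures marked "claimed" come
# from the article; "assumed" values are illustrative only.

cores_per_fiber = 100_000    # claimed maximum for many-core fibers
wavelengths = 8              # assumed: article says "numerous wavelengths"
processors_per_rack = 1_800  # claimed maximum per rack

# Assumption: the axes compose multiplicatively.
channels_per_rack = cores_per_fiber * wavelengths * processors_per_rack
print(f"{channels_per_rack:,} parallel channels per rack")

# Rack expansion is the final axis: add racks to scale further.
racks = 4
print(f"{channels_per_rack * racks:,} channels across {racks} racks")
```

Even with a conservative wavelength count, the multiplicative structure is what makes claims of orders-of-magnitude speedups at least arithmetically plausible.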
How long before the rest of the field catches up? Dr. Cohen believes silicon-based photonics businesses such as Lightelligence, Lightmatter, celestial.ai, and Luminous will have difficulty matching CogniFiber's performance and energy efficiency. Ring-fenced IP (11 patent applications) may also discourage competitors from pursuing a similar approach.
Another benefit of this system is that it is predicted to require just 500W of power, a quarter of what competitors need. In terms of the all-important TOPS-per-watt metric, that's an improvement of several orders of magnitude. By 2026, the company expects to exceed 100 exa-operations per second at an efficiency of one POPS per watt.
Jitter, latency, and more
Is there any jitter on this system? According to Dr. Cohen, the company "created an FPGA-based synchronisation mechanism to reduce skew and jitter," and with a relatively modest clock (0.2-1 GHz, compared to 10-40 GHz), the output values can be sampled robustly during the stable section of the cycle.
What about clock speeds? "Our alpha prototype targets 50-100 MHz (10x-20x acceleration) and our beta and products 0.5 GHz (100x acceleration); higher speeds will be offered later on," Cohen added.
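The trade-off behind the "relatively modest clock" is straightforward: a slower clock means a longer period, and therefore a wider stable window in which to sample the output. A small sketch of the arithmetic:

```python
# Clock period comparison: CogniFiber's 0.2-1 GHz range vs. the
# 10-40 GHz range mentioned for comparison. A longer period leaves
# more of the cycle stable for robust sampling.

def period_ns(freq_ghz):
    """Clock period in nanoseconds for a frequency given in GHz."""
    return 1.0 / freq_ghz

for f in (0.2, 1.0, 10.0, 40.0):
    print(f"{f:5.1f} GHz -> {period_ns(f):7.3f} ns period")
```

At 0.2 GHz each cycle lasts 5 ns, versus 0.025 ns at 40 GHz: a 200-fold wider window in which the FPGA can sample a settled value.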
Latency comes in two phases. The communication phase (receiving data chunks from clients: 10G alpha, 100G beta, 400G for 2023 products) takes several milliseconds, depending on the data volume and distance, as with any other service provider. The computation phase adds up to 100 ns of delay (FPGA + I/O + optics).
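In other words, end-to-end latency is dominated by moving the data, not by the in-fiber computation. A rough sketch under stated assumptions; the 10 MiB chunk size is illustrative, while the 10G/100G/400G rates and the 100 ns compute delay are the article's figures:

```python
# Latency sketch: link transfer time dominates; the in-fiber
# computation adds at most ~100 ns (FPGA + I/O + optics).

COMPUTE_DELAY_S = 100e-9  # up to 100 ns, per the article

def total_latency_ms(chunk_bytes, link_gbps):
    """Transfer time for one data chunk plus compute delay, in ms."""
    transfer_s = (chunk_bytes * 8) / (link_gbps * 1e9)
    return (transfer_s + COMPUTE_DELAY_S) * 1e3

chunk = 10 * 1024**2  # assumed 10 MiB data chunk (illustrative)
for gbps in (10, 100, 400):
    print(f"{gbps:3d}G link: {total_latency_ms(chunk, gbps):7.3f} ms")
```

Even on a 400G link, shipping a modest chunk takes thousands of times longer than the 100 ns compute phase, which is why the milliseconds quoted for the communication phase set the floor for the service as a whole.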
AI-as-a-service offerings should reach beta by the end of 2023, with the first commercial products, entire systems retailing for about $1M USD, to follow.