Abstract:
Training neural networks is computationally demanding, and the industry is approaching the limits of silicon-based computing, both in transistor size and in chip dimensions. Technologies that perform computation without silicon already exist. A machine learning paradigm with sufficient representational power also exists: Reservoir Computing, which is well suited for adaptation to non-silicon computing devices. In this work, I propose a specific laser-based reservoir computing scheme that builds on, and should improve upon, existing solutions.