Neurons grown on a chip have demonstrated the ability to learn to play the classic arcade game Pong. Learning occurs in a real-time closed loop: patterned electrical stimulation encodes the ball's position relative to the paddle, and the culture receives feedback on the outcome of each rally. The neural cultures self-organize and adapt their activity in response to this feedback, improving their performance over time in a way that suggests goal-directed behavior[2][1].
Within minutes, these neuronal networks learn to track the ball and control the paddle, reportedly exceeding the sample efficiency of traditional deep reinforcement learning algorithms trained on the same task[2][1]. This advance highlights the potential of biological computing systems to learn from and respond dynamically to stimuli in simulated environments.
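The closed-loop idea described above can be illustrated with a toy simulation. This is a hypothetical sketch, not the actual experimental protocol or interface: `ToyCulture`, its electrode weights, and all parameters are invented for illustration. It captures only the feedback asymmetry the sources describe, in which predictable feedback after a hit reinforces the response that produced it, while unpredictable (noisy) feedback after a miss perturbs activity and drives exploration.

```python
import random

class ToyCulture:
    """Hypothetical stand-in for a neural culture on a multielectrode array.
    One weight per stimulation electrode encodes the paddle position the
    'culture' produces when that electrode is stimulated."""

    def __init__(self, n_electrodes=8):
        self.weights = [0.0] * n_electrodes

    def respond(self, electrode):
        # Noisy motor output driven by the stimulated site.
        return self.weights[electrode] + random.uniform(-0.1, 0.1)

    def feedback(self, electrode, paddle, hit):
        if hit:
            # Predictable feedback: reinforce the response that worked.
            self.weights[electrode] += 0.5 * (paddle - self.weights[electrode])
        else:
            # Unpredictable feedback: perturb activity, promoting exploration.
            self.weights[electrode] += random.uniform(-0.4, 0.4)

def play(culture, trials):
    """One closed-loop session: encode ball position as a stimulation site,
    read out a paddle position, and deliver outcome-dependent feedback."""
    hits = []
    for _ in range(trials):
        ball = random.uniform(-1.0, 1.0)                  # ball's y position
        electrode = min(7, int((ball + 1.0) / 2.0 * 8))   # position -> stim site
        paddle = culture.respond(electrode)
        hit = abs(paddle - ball) < 0.3                    # paddle intercepts ball
        culture.feedback(electrode, paddle, hit)
        hits.append(hit)
    return hits

random.seed(0)
culture = ToyCulture()
outcomes = play(culture, 2000)
early = sum(outcomes[:200]) / 200
late = sum(outcomes[-200:]) / 200
print(f"hit rate: first 200 trials {early:.2f}, last 200 trials {late:.2f}")
```

Running the sketch shows the hit rate rising between early and late trials, mirroring (in a very loose way) the rapid within-session improvement reported for the cultures.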