@Adynathos said in Good luck, Tim Berners-Lee:

Do you train spiking nets (how would you even train them?) or create an architecture to see what it does?

Yes, the usual method is STDP (Spike-Timing-Dependent Plasticity). The idea is that if a spike from the presynaptic neuron tends to arrive shortly before the postsynaptic neuron fires, the strength of the connection between the two is increased; if the order is reversed, the strength is decreased. It works pretty well, and is a decent description of how short-term memory works in biological neural networks too. (A rough sketch of the rule is below.)

There's a separate level of plasticity that better describes long-term memory, and that's where the structure of the network itself is modified; it corresponds to the creation of new synapses and the pruning of old ones. There's a (very smart indeed) student at work doing his PhD on making that work in a form that can be simulated at speed.
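
For concreteness, here's a minimal sketch of the classic pair-based STDP update in Python. The amplitudes and time constants are illustrative assumptions (typical orders of magnitude in the literature), not values from any particular simulator or from the work mentioned above:

```python
import math

# Illustrative parameters -- assumed values, not from the post.
A_PLUS = 0.01     # potentiation amplitude
A_MINUS = 0.012   # depression amplitude (often slightly larger, for stability)
TAU_PLUS = 20.0   # potentiation time constant (ms)
TAU_MINUS = 20.0  # depression time constant (ms)

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight change for one pre/post spike pair under pair-based STDP.

    If the presynaptic spike arrives shortly before the postsynaptic one
    (dt > 0), the synapse is strengthened; if it arrives after (dt < 0),
    it is weakened. The effect decays exponentially with |dt|.
    """
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

# Pre fires at 10 ms, post at 15 ms: pre "helped cause" post, so strengthen.
print(stdp_dw(10.0, 15.0))  # positive
# Reversed order: weaken.
print(stdp_dw(15.0, 10.0))  # negative
```

In a real simulator this pairwise comparison is usually replaced by per-neuron exponential traces updated at each spike, which gives the same rule without storing spike histories, but the pairwise form above is the easiest to read.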