You can swap what is adjusted in a neural network: use fixed dot products together with adjustable (parametric) activation functions, e.g. fi(x) = ai·x for x ≥ 0 and fi(x) = bi·x for x < 0, for i = 0 to n−1. The fast (Walsh-)Hadamard transform can serve as a collection of fixed dot products. Such a net is then: transform, functions, transform, ..., transform. To stop the first transform from simply taking a spectrum of the input, apply a fixed, randomly chosen (or sub-random) pattern of sign flips to the net's input. The cost is n·log2(n) add/subtracts and n multiplies per layer, with 2n parameters, where n is the width of the layer.
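A minimal sketch of one such layer in Python/NumPy, assuming the two-slope activation reconstructed above and a power-of-two width; the names (fwht, layer, signs) are illustrative, not from any particular library:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform: n*log2(n) add/subtracts, no multiplies."""
    y = x.copy()
    h, n = 1, len(y)
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                u, v = y[j], y[j + h]
                y[j], y[j + h] = u + v, u - v
        h *= 2
    return y

rng = np.random.default_rng(0)
n = 8                                   # layer width (must be a power of 2)
signs = rng.choice([-1.0, 1.0], n)      # fixed sign-flip pattern for the input
a = rng.normal(size=n)                  # 2n adjustable parameters per layer:
b = rng.normal(size=n)                  # slope a_i for x >= 0, b_i for x < 0

def layer(x):
    y = fwht(x)                              # fixed dot products
    return np.where(y >= 0.0, a * y, b * y)  # parametric activations (n multiplies)

x = rng.normal(size=n)
out = fwht(layer(layer(signs * x)))     # transform, functions, ..., transform
```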
@NerdML 3 years ago
Great understanding
@hoaxuan7074 3 years ago
There is an unwritten book on dot products that neural net researchers should read. In that book there is a chapter on the statistics of the dot product, where you learn that the variance equation for linear combinations of random variables applies, and often the central limit theorem (CLT) too.

Then you can learn how to turn the dot product into a general associative memory with a weight vector. You need a locality sensitive hash with bipolar (+1, -1) outputs; this fits the dot product's preference for non-sparse inputs. To recall, dot the hash with the weight vector. To train, take the recall error and divide it by the number of dimensions, then add or subtract that value to each element of the weight vector according to the +1/-1 hash terms, making the error zero. This adds a small amount of Gaussian noise to all the prior stored memories; to see why, consider what would happen if you used a true hash algorithm, together with the CLT. Anyway, you can remove all the noise by repeatedly storing all the training pairs a few times.

A fixed, randomly chosen pattern of sign flips before the fast Hadamard transform gives a random projection (repeat for better quality). Applying binarization to the random projection gives a locality sensitive hash.
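A sketch of that associative memory, assuming the hash is built exactly as described (fixed sign flips, fast Hadamard transform, binarize to ±1); the helper names are illustrative:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (same routine as in the sketch above)."""
    y = x.copy()
    h, n = 1, len(y)
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                u, v = y[j], y[j + h]
                y[j], y[j + h] = u + v, u - v
        h *= 2
    return y

rng = np.random.default_rng(1)
n = 16                                  # dimension (power of 2)
signs = rng.choice([-1.0, 1.0], n)      # fixed sign flips -> random projection
w = np.zeros(n)                         # the weight vector / associative memory

def lsh(key):
    """Locality sensitive hash with bipolar (+1, -1) outputs."""
    return np.where(fwht(signs * key) >= 0.0, 1.0, -1.0)

def recall(key):
    return np.dot(lsh(key), w)          # dot the hash with the weight vector

def store(key, value):
    err = value - recall(key)           # recall error, divided by the number of
    w[:] += lsh(key) * (err / n)        # dimensions, signed by the ±1 hash terms

# Storing a pair perturbs prior memories with a little Gaussian-like noise;
# repeated passes over all the training pairs remove it.
pairs = [(rng.normal(size=n), rng.normal()) for _ in range(4)]
for _ in range(10):
    for k, v in pairs:
        store(k, v)
print([round(recall(k) - v, 6) for k, v in pairs])  # errors near zero
```

Note that the recall error of the pair just stored goes exactly to zero, because the ±1 hash dotted with itself equals n, which cancels the division by n.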
@NerdML 3 years ago
Can you just provide me the name of the book?
@hoaxuan7074 3 years ago
@@NerdML I am waiting to read that book too!!!! Who will write it? When will it be published? 😎
@NerdML 3 years ago
So you have already read this somewhere else... can you provide me that link? By the way, where are you from?
@hoaxuan7074 3 years ago
@@NerdML There is a blog page with some information on Fast Transform fixed filter bank neural nets. There does seem to be a foundational problem with neural net research, where the low-level 'mechanics' like the dot product have not been properly studied. So you must find things out by your own effort.
@NerdML 3 years ago
Oh I see
@facelessguy2093 4 years ago
You are a genius, bro. Great video.
@NerdML 3 years ago
Thank you so much 😀
@babritbehera4087 4 years ago
Great work ....
@NerdML 4 years ago
Thanks man!!
@babritbehera4087 4 years ago
@@NerdML As a beginner, I do have a few steps for getting into the coding part from scratch, to better understand the code flow as a starter. I prepared some very low-level programming for ML.
@NerdML 4 years ago
That's really great... you can reach out to me through rahulsainipusa@gmail.com