
# Neural Networks

A friend of mine was just over at my house explaining neural networks, and I understood it as well as I could. Here is my own explanation. A neural network first has to run in a loop thousands of times, given its input and expected output. It gradually learns the simplest algorithm that maps that input to that output, using a networked matrix of numbers (weights) that are run through a filter function and compared against the expected output. The algorithm to train a neural network looks like this:

1. Start off with random weights.
2. Provide the input (input layer).
3. Multiply the input by the random weights, along with a static bias weight (hidden layer).
4. Run the results through a sigmoid function as a filter.
5. Take the filtered hidden-layer values, weight them again, and run them through the sigmoid once more (output layer).
6. Calculate how much the network's output disagrees with the provided output.
7. Loop back through the weights, changing each one to a better fit.

So each time we run this function it gets a little bit better, and for a binary function the output drifts closer and closer to 0 or 1. The network just remembers, at the final output layer, what is the closest to 0 and 1 it can get to. IT'S REALLY COOL!

The program below learns its own binary XOR function:

```cpp
#include <iostream>
#include <cstdlib>
#include <ctime>
#include <cmath>

#define BPM_ITER 2000
#define BP_LEARNING (float)(0.5)  // The learning coefficient.

class CBPNet {
public:
    CBPNet();
    ~CBPNet() {};
    float Train(float, float, float);
    float Run(float, float);
private:
    float m_fWeights[3][3];  // Weights for the 3 neurons.
    float Sigmoid(float);    // The sigmoid function.
};

CBPNet::CBPNet() {
    srand((unsigned)(time(NULL)));
    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++) {
            // rand() returns an integer between 0 and RAND_MAX, so
            // divide by RAND_MAX/2 to get a number between 0 and 2,
            // then subtract one to get a number between -1 and 1.
            m_fWeights[i][j] = (float)(rand()) / (RAND_MAX / 2) - 1;
        }
    }
}

float CBPNet::Train(float i1, float i2, float d) {
    // These are all the main variables used in the routine.
    float net1, net2, i3, i4, out;

    // Calculate the net values for the hidden layer neurons.
    net1 = 1 * m_fWeights[0][0] + i1 * m_fWeights[1][0] + i2 * m_fWeights[2][0];
    net2 = 1 * m_fWeights[0][1] + i1 * m_fWeights[1][1] + i2 * m_fWeights[2][1];

    // Use the hard-limiter function - the sigmoid.
    i3 = Sigmoid(net1);
    i4 = Sigmoid(net2);

    // Now, calculate the net for the final output layer.
    net1 = 1 * m_fWeights[0][2] + i3 * m_fWeights[1][2] + i4 * m_fWeights[2][2];
    out = Sigmoid(net1);

    // We have to calculate the deltas for the two layers. Remember, the
    // errors are calculated backwards from the output layer to the
    // hidden layer (thus the name 'BACK-propagation').
    float deltas[3];
    deltas[2] = out * (1 - out) * (d - out);
    deltas[1] = i4 * (1 - i4) * m_fWeights[2][2] * deltas[2];
    deltas[0] = i3 * (1 - i3) * m_fWeights[1][2] * deltas[2];

    // Now, alter the weights accordingly.
    float v1 = i1, v2 = i2;
    for (int i = 0; i < 3; i++) {
        // Change the values for the output layer, if necessary.
        if (i == 2) { v1 = i3; v2 = i4; }
        m_fWeights[0][i] += BP_LEARNING * 1  * deltas[i];
        m_fWeights[1][i] += BP_LEARNING * v1 * deltas[i];
        m_fWeights[2][i] += BP_LEARNING * v2 * deltas[i];
    }
    return out;
}

float CBPNet::Sigmoid(float num) {
    return (float)(1 / (1 + exp(-num)));
}

float CBPNet::Run(float i1, float i2) {
    // Same forward pass as in Train(), without the weight updates.
    float net1, net2, i3, i4;
    net1 = 1 * m_fWeights[0][0] + i1 * m_fWeights[1][0] + i2 * m_fWeights[2][0];
    net2 = 1 * m_fWeights[0][1] + i1 * m_fWeights[1][1] + i2 * m_fWeights[2][1];
    i3 = Sigmoid(net1);
    i4 = Sigmoid(net2);
    net1 = 1 * m_fWeights[0][2] + i3 * m_fWeights[1][2] + i4 * m_fWeights[2][2];
    return Sigmoid(net1);
}

int main() {
    CBPNet bp;
    // Train on the four XOR patterns.
    for (int i = 0; i < BPM_ITER; i++) {
        bp.Train(0, 0, 0);
        bp.Train(0, 1, 1);
        bp.Train(1, 0, 1);
        bp.Train(1, 1, 0);
    }
    std::cout << "0,0 = " << bp.Run(0, 0) << std::endl;
    std::cout << "0,1 = " << bp.Run(0, 1) << std::endl;
    std::cout << "1,0 = " << bp.Run(1, 0) << std::endl;
    std::cout << "1,1 = " << bp.Run(1, 1) << std::endl;
    return 0;
}
```
3 Replies
On 6 Apr 2007, "CoreyWhite" posted in comp.lang.c++ (crossposted to alt.magick, alt.native, alt.2600). Apr 7 '07 #2

It looks neurological ... hey, look at this free book: http://www.relisoft.com/book/index.htm Apr 7 '07 #3

On Apr 7, 5:55 am, "boson boss"

