{"id":4369,"date":"2015-04-13T10:19:20","date_gmt":"2015-04-13T14:19:20","guid":{"rendered":"http:\/\/pages.vassar.edu\/magnes\/?p=4369"},"modified":"2015-04-13T16:16:41","modified_gmt":"2015-04-13T20:16:41","slug":"neural-networks-preliminary-data-brian-deer-and-tewa-kpulun","status":"publish","type":"post","link":"https:\/\/pages.vassar.edu\/magnes\/2015\/04\/13\/neural-networks-preliminary-data-brian-deer-and-tewa-kpulun\/","title":{"rendered":"Neural Networks: Preliminary Data &#8211; Brian Deer and Tewa Kpulun"},"content":{"rendered":"<p>As we began our project, we realized that the Ising model presented in Chapter 8 of Giordano and Nakanishi quickly introduces many aspects that are not relevant to our neural network model, namely the idea of a heat bath at temperature T. \u00a0Our neural network is essentially an Ising model assumed to have a temperature of 0, so much of Chapter 8 is unnecessary for our model. \u00a0So we skipped ahead and went straight to simple neural networks, as described in Section 12.3 of the text.<\/p>\n<p>The essential elements of our model are an NxN stored pattern, an interaction matrix\u00a0J<sub>i,j\u00a0<\/sub>of size N<sup>2<\/sup>x\u00a0N<sup>2<\/sup>, and an input pattern of size NxN. \u00a0After a pattern is stored (by creating the network&#8217;s interaction matrix), an input pattern (usually one similar, but not identical, to the stored pattern) is presented. \u00a0This pattern is then changed according to the Monte Carlo method, and, if things worked correctly, the output is the same as the stored pattern.<\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"text-decoration: underline\">Stored\u00a0Pattern:<\/span><\/p>\n<p>We created a stored\u00a0pattern on MATLAB\u00a0by creating a matrix +1&#8217;s and -1&#8217;s. \u00a0Our first pattern is the letter &#8216;A&#8217;, following the examples in Section 12.3 of Giordano and Nakanishi, on a 10&#215;10 matrix. 
As mentioned in our earlier posts, each element of the matrix represents a neuron: an active neuron has the value +1 and an inactive neuron has the value -1. Our 'A' pattern is thus made up of +/-1's, where the +1's form the letter and the -1's form the background. When these numbers are displayed in the command window the pattern is hard to see, so we use the imagesc command (along with an inverted gray colormap) to better visualize our patterns.

Element Indexing

MATLAB indexes the elements of a matrix with two numbers: the first index refers to the row and the second to the column. For our neural network, we want to be able to access every element in the array (twice, separately) so that every neuron interacts with every other neuron. The easiest way to do this is with a single index i that runs from 1 to N². In our code we use two such indices, i and j. To actually access the elements of our matrices, we still have to use the normal double MATLAB indices, which we call y and x. The index i can be calculated from y and x using equation 12.20 from the text (image: http://pages.vassar.edu/magnes/files/2015/04/Screen-Shot-2015-04-12-at-10.22.24-PM.png).

In our code, we frequently have two sets of nested for loops, with which we loop through the entire pattern twice, once with the index i and once with the index j.
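A sketch of this index conversion, assuming the usual row-major mapping i = (y - 1)N + x for 1-based indices (our reading of equation 12.20; the function names are our own):

```python
N = 10  # side length of the NxN pattern

def to_single(y, x, N):
    # Map 1-based (row, column) indices to a single index i in 1..N^2,
    # assuming a row-major convention for equation 12.20.
    return (y - 1) * N + x

def to_double(i, N):
    # Inverse mapping: recover the 1-based (y, x) pair from i.
    y, x = divmod(i - 1, N)
    return y + 1, x + 1

print(to_single(1, 1, N), to_single(10, 10, N))  # 1 100
print(to_double(55, N))                          # (6, 5)
```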
For this reason, we use the index pairs y_i, x_i and y_j, x_j.

Interaction Energies J(i,j):

The interaction energy matrix describes the interaction of every neuron with every other neuron. So if we start with a P_stored matrix of size NxN, its J matrix will be of size N² x N². The values of J are given by equation 12.16 from the text (image: http://pages.vassar.edu/magnes/files/2015/04/Screen-Shot-2015-04-12-at-11.33.47-PM.png), where s_i is the value of neuron i in pattern m and s_j is the value of neuron j in pattern m. Thus, the J(i,j) matrix is created by multiplying the values of P_stored(y_i,x_i) with the values of P_stored(y_j,x_j). To do this in MATLAB we created two nested for loops in which we hold the y (row) values constant and vary the x (column) values. The program first compares P_stored(1,1) to P_stored(1,2), P_stored(1,3), P_stored(1,4), and so on; after finishing the first row, it compares P_stored(1,1) with P_stored(2,1), P_stored(2,2), ... This continues until P_stored(1,1) has been compared with every other point in the matrix, and then the same is done for P_stored(1,2), P_stored(1,3), and so on.

Example 1:

Below you will see a P_stored matrix and its corresponding J matrix.
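A NumPy sketch of this construction for a single stored pattern, with the explicit double loop described above (the 2x2 example matrix is an illustrative assumption; for one pattern, equation 12.16 reduces to J(i,j) = s_i * s_j):

```python
import numpy as np

def interaction_matrix(P_stored):
    # Build J(i, j) = s_i * s_j for one stored pattern, looping over every
    # pair of neurons exactly as the nested MATLAB loops do.
    s = P_stored.flatten()          # single-index view of the NxN pattern
    n = s.size                      # n = N^2 neurons
    J = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            J[i, j] = s[i] * s[j]
    return J

P_stored = np.array([[1, -1],
                     [-1, 1]])     # a tiny 2x2 stand-in for the 'A' pattern
J = interaction_matrix(P_stored)
print(J)                           # a 4x4 symmetric matrix of +/-1's
```

For a single pattern the loop is equivalent to the outer product of the flattened pattern with itself, which is a useful check on the loop version.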
By using equation 12.20 and the double for loop, you can index the values of i and j, which allows you to build J(i,j).

(Figure 1: http://pages.vassar.edu/magnes/files/2015/04/Screen-Shot-2015-04-12-at-10.40.15-PM.png; Figure 2: http://pages.vassar.edu/magnes/files/2015/04/Screen-Shot-2015-04-12-at-10.40.26-PM.png)

The interaction energy values will later be used to determine the energy it takes to activate or deactivate a neuron.

Neural Network Model

With the interaction energies in hand, our simple neural network now has an operational memory. To exercise it, we need a pattern in P_stored, so we created an image of the letter A by manually entering the +1's that form the letter. We then created a new matrix, P_in, to represent the pattern that we will sweep with the Monte Carlo method. P_in is created by randomly flipping the values of some of the neurons in order to distort our original A pattern.
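A sketch of this distortion step in NumPy (the post does not say what fraction of neurons were flipped, so flip_fraction, the seed, and the all-background stand-in pattern are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded so the distortion is reproducible

def distort(P_stored, flip_fraction=0.1):
    # Build the input pattern P_in by randomly flipping a fraction of the
    # stored neurons (+1 <-> -1), as described above.
    P_in = P_stored.copy()
    mask = rng.random(P_in.shape) < flip_fraction
    P_in[mask] *= -1
    return P_in

P_stored = -np.ones((10, 10), dtype=int)   # stand-in 10x10 pattern
P_in = distort(P_stored)
print((P_in != P_stored).sum(), "neurons flipped")
```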
To make the patterns easier to inspect, we again use the imagesc function to turn the matrix values into an image.

After conducting a Monte Carlo sweep over our distorted A pattern P_in, we should get back the same pattern as our P_stored matrix. Thus, our neural network remembers the pattern we stored earlier. In the figures below, black squares correspond to +1's in the matrix and white squares to -1's.

(Figure 3: http://pages.vassar.edu/magnes/files/2015/04/stored.jpg; Figure 4: http://pages.vassar.edu/magnes/files/2015/04/input.jpg; Figure 5: http://pages.vassar.edu/magnes/files/2015/04/neural-output.jpg)
style=\"text-decoration: underline\">Monte Carlo Method<\/span><\/p>\n<p>The method by which the input pattern is changed into the output pattern is the Monte Carlo method. \u00a0The essence of this method is to calculate, for each neuron in the input pattern, the effective energy of the entire neural network in its current state using Equation 12.14.<\/p>\n<p><a href=\"http:\/\/pages.vassar.edu\/magnes\/files\/2015\/04\/Screen-Shot-2015-04-13-at-1.09.35-AM.png\"><img loading=\"lazy\" decoding=\"async\" class=\" size-full wp-image-4434 aligncenter\" src=\"http:\/\/pages.vassar.edu\/magnes\/files\/2015\/04\/Screen-Shot-2015-04-13-at-1.09.35-AM.png\" alt=\"Screen Shot 2015-04-13 at 1.09.35 AM\" width=\"149\" height=\"53\" \/><\/a><\/p>\n<p>If the total energy of the network with respect to the neuron i is greater than 0, then neuron i&#8217;s value is flipped (+1 -&gt; -1, or vice versa). \u00a0If the total energy of the neural network with respect to neuron i is less than 0, its value is unchanged.<\/p>\n<p>This process is carried out for every neuron i, so that every neuron is given a chance to flip. \u00a0Once every neuron has been checked, one sweep of the Monte Carlo method is complete.<\/p>\n<p>For our current code, we are only using one Monte Carlo sweep, but as we progress forward we will use multiple Monte Carlo sweeps to check if the output pattern is stable, and to see if multiple steps are needed in order to fully recall the stored pattern.<\/p>\n<p>Our full script that includes all the parts discussed above, is attached below.<\/p>\n<p><a href=\"https:\/\/drive.google.com\/a\/vassar.edu\/file\/d\/0B0lqHmJPnFabYzJjcGhzbHhoWWM\/view?usp=sharing\">Brian and Tewa Initial Code for Neural Networks Project<\/a><\/p>\n<p><span style=\"text-decoration: underline\">Future Plans<\/span><\/p>\n<p>Next, we will begin to characterize the properties of our model. 
We will investigate questions such as how different P_in can be from the stored pattern, the maximum number of patterns that can be stored in a network, and how long pattern recall can take. The work for this post was done entirely together as a team; moving forward we will split up a bit more as we tackle independent questions. As we do so, we will break our code into smaller chunks, some of which will be re-defined as functions, for easier interaction.
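Putting the steps of this post together, here is a NumPy sketch of the whole store -> distort -> recall cycle. The tiny 3x3 pattern is an illustrative assumption, and the flip rule uses our reading of equation 12.14 for one stored pattern, E_i = -s_i * sum_j J(i,j) s_j, flipping neuron i when E_i > 0; our actual MATLAB script may differ in detail:

```python
import numpy as np

# --- store: build J from a small illustrative pattern (equation 12.16)
P_stored = np.array([[ 1, -1,  1],
                     [-1,  1, -1],
                     [ 1, -1,  1]])
s_stored = P_stored.flatten()
J = np.outer(s_stored, s_stored)

# --- distort: flip two neurons by hand to make the input pattern P_in
P_in = P_stored.copy()
P_in[0, 0] *= -1
P_in[2, 2] *= -1

# --- recall: one T = 0 Monte Carlo sweep; every neuron gets one chance
#     to flip, and flips whenever its effective energy is positive
s = P_in.flatten()
for i in range(s.size):
    if -s[i] * np.dot(J[i], s) > 0:
        s[i] = -s[i]
P_out = s.reshape(P_stored.shape)

print(np.array_equal(P_out, P_stored))  # True: the stored pattern is recalled
```

With only two neurons flipped, a single sweep is enough here; checking how much distortion a single sweep can repair is one of the questions we plan to study next.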