{"id":892,"date":"2011-04-21T11:50:47","date_gmt":"2011-04-21T15:50:47","guid":{"rendered":"http:\/\/blogs.vassar.edu\/ltt\/?p=892"},"modified":"2011-05-05T14:24:10","modified_gmt":"2011-05-05T18:24:10","slug":"preliminary-data-current-research-in-thought-detection","status":"publish","type":"post","link":"https:\/\/pages.vassar.edu\/ltt\/?p=892","title":{"rendered":"Preliminary Data: Current Research in Thought Detection"},"content":{"rendered":"<div><iframe loading=\"lazy\" title=\"The Present &amp; Future of Mind-Reading Technology\" frameborder=\"0\" width=\"625\" height=\"359\" src=\"https:\/\/geo.dailymotion.com\/player.html?video=x9pzcq&\" allowfullscreen allow=\"autoplay; fullscreen; picture-in-picture; web-share\"><\/iframe><\/div>\n<div><a href=\"http:\/\/pages.vassar.edu\/ltt\/files\/2011\/04\/composite_21.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft size-medium wp-image-897\" src=\"http:\/\/pages.vassar.edu\/ltt\/files\/2011\/04\/composite_21-254x300.jpg\" alt=\"\" width=\"254\" height=\"300\" srcset=\"https:\/\/pages.vassar.edu\/ltt\/files\/2011\/04\/composite_21-254x300.jpg 254w, https:\/\/pages.vassar.edu\/ltt\/files\/2011\/04\/composite_21.jpg 294w\" sizes=\"auto, (max-width: 254px) 100vw, 254px\" \/><\/a><\/div>\n<div>Cerf, M., Thiruvengadam, N., Mormann, F., Kraskov, A., Quiroga, R.Q., Koch, C. and Fried, I. (2010). On-line, voluntary control of human temporal lobe neurons. Nature, 467, 1104-1108.<\/div>\n<div>\n<p>Dr. Moran Cerf and colleagues have been able to connect the activity of single neurons to images on a computer screen. Participants are then able to fade these images in and out based on what they are thinking about. Initially, the study participants looked at hundreds of images so that the researchers could build a database of image-neuron associations. The researchers then instructed participants to focus on one of two superimposed images. 
The researchers were able to determine, with great accuracy, which picture the participants were focusing on based only on their thoughts.<\/p>\n<p>Naselaris, T., Prenger, R.J., Kay, K.N., Oliver, M. and Gallant, J.L. (2009). Bayesian reconstruction of natural images from human brain activity. Neuron, 63(6), 902-915.<\/p>\n<p>This pioneering experiment involved the use of a newly developed machine that can recreate a moving image from the brain activity of a participant watching a video. While the results are grainy and crude, the researchers were able to make out, for instance, the outline of a man in a white shirt from the brain activity of a participant watching a video featuring Steve Martin. Dr. Gallant, the principal researcher, feels that his technology is close to becoming a practical form of mind reading, in which doctors could look into the minds of schizophrenic patients or judges could look into the minds of criminals.<\/p>\n<p>Kamitani, Y. and Tong, F. (2006). Decoding seen and attended motion directions from activity in the human visual cortex. Current Biology, 16(11), 1096-1102.<\/p>\n<p>This research group proposed that the areas of the brain used to deconstruct visual information are also involved in processing memories associated with that information. On this basis, the researchers conducted a study in which volunteers were shown two different patterns and told to remember one of the patterns for a short period after viewing. Based on fMRI brain activity patterns, the researchers were able to detect which pattern a participant was thinking about.<\/p>\n<p>Mitchell, T.M., Shinkareva, S.V., Carlson, A., Chang, K., Malave, V.L., Mason, R.A. and Just, M.A. (2008). Predicting human brain activity associated with the meanings of nouns. Science, 320, 1191-1195.<\/p>\n<p>Mitchell and colleagues designed a computational model that predicts fMRI activation patterns for words whose activation data are not yet available. 
This is one of the first forays into mind reading that does not require the subject to be looking at a picture of what he or she is thinking about. Their model combines a trillion-word text corpus with observed fMRI data for several dozen concrete nouns. From this initial information, the researchers are able to predict, with a high degree of accuracy, fMRI activation for thousands of other concrete nouns.<\/p>\n<p>Haynes, J.D., Sakai, K., Rees, G., Gilbert, S., Frith, C. and Passingham, R.E. (2007). Reading hidden intentions in the human brain. Current Biology, 17(4), 323-328.<\/p>\n<p>Dr. Haynes and colleagues are interested in reading an individual\u2019s intentions rather than simply decoding information about a concrete object. In this study, the researchers allowed the subjects to choose one of two tasks to complete. From fMRI readings, the researchers were able to reliably predict which task a subject would complete before he or she initiated it. These results suggest that covert goals or intentions can be represented by patterns of activity in the prefrontal cortex, which researchers are then able to decode using fMRI.<\/p>\n<p>Samantha Gross, \u201cIntel Shows Off \u2018Mind-Reading\u2019 Brain-Scan Technology,\u201d The Huffington Post, April 8, 2010, accessed April 19, 2011, http:\/\/www.huffingtonpost.com\/2010\/04\/08\/mind-reading-brainscan-so_n_530009.html<\/p>\n<p>Mind-reading research is not confined to academic settings. Large corporations are also researching the technology and developing machines that can decode fMRI information faster than humans can. Intel Corporation showed off its own software that can quickly decode fMRI information and reliably predict what an individual is thinking about based on neuronal activity. 
Researchers from Intel Labs believe this is the first step towards one day being able to control technology with our minds.<\/p>\n<p>(Above video from www.dailymotion.com\/video\/x9pzcq_the-present-future-of-mind-reading_news)<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Cerf, M., Thiruvengadam, N., Mormann, F., Kraskov, A., Quiroga, R.Q., Koch, C. and Fried, I. (2010). On-line, voluntary control of human temporal lobe neurons. Nature, 467, 1104-1108 Dr. Moran Cerf and colleagues have been able to connect the activity of single neurons to images on a computer screen. Participants are then able to fade these [&hellip;]<\/p>\n","protected":false},"author":831,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[5500],"tags":[5516,5595,5515],"class_list":["post-892","post","type-post","status-publish","format-standard","hentry","category-group-13","tag-fmri","tag-group-13-2","tag-mind-reading"],"_links":{"self":[{"href":"https:\/\/pages.vassar.edu\/ltt\/index.php?rest_route=\/wp\/v2\/posts\/892","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pages.vassar.edu\/ltt\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/pages.vassar.edu\/ltt\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/pages.vassar.edu\/ltt\/index.php?rest_route=\/wp\/v2\/users\/831"}],"replies":[{"embeddable":true,"href":"https:\/\/pages.vassar.edu\/ltt\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=892"}],"version-history":[{"count":10,"href":"https:\/\/pages.vassar.edu\/ltt\/index.php?rest_route=\/wp\/v2\/posts\/892\/revisions"}],"predecessor-version":[{"id":1254,"href":"https:\/\/pages.vassar.edu\/ltt\/index.php?rest_route=\/wp\/v2\/posts\/892\/revisions\/1254"}],"wp:attachment":[{"href":"https:\/\/pages.vassar.edu\/ltt\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=892"}],"wp:term":[{"taxonomy":"cate
gory","embeddable":true,"href":"https:\/\/pages.vassar.edu\/ltt\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=892"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/pages.vassar.edu\/ltt\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=892"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}