Current Applications of Artificial Intelligence

Image Unavailable
One of many applications of AI

With technology advancing faster than action potentials travel along axons, neuroscientists and investigators in related fields are beginning to understand the brain better than ever before. As the scientific community grows more innovative, one begins to wonder just how complex the brain actually is. The modeling and creation of artificial intelligence has become an increasingly heated topic in neuroscience, and its importance cannot be overstated. Modeling allows scientists to further understand the hows (pathways) and whats (structures) of the mammalian brain, to generate new hypotheses about brain function, to pinpoint functional processes at the cellular level, and, finally, to assess the feasibility of a biologically validated artificial mammalian brain. Estimates of the degree of complexity required in terms of memory, computation, and communication give scientists an idea of just how many computers are needed to match something as elaborate as the brain[1]. Taking the idea further, the Blue Brain Project attempts to reverse-engineer the mammalian neocortex, hoping eventually to complete an accurate model of the human brain. More remarkably, the University of Waterloo has brought forward an unprecedented model clever enough to complete basic IQ tests, mirror human working memory, and even demonstrate learning.

1.1 Proposition of Artificial Neural Systems

In 2007, Johansson and Lansner proposed that an artificial nervous system with complexity and size similar to the brain could be created. Their target was not a supercomputer that takes up an entire room, but a computational model no larger than an actual human brain. To accomplish this, they envisioned the biophysical recreation of every single neuron in the human brain. Unfortunately, this feat is impossible with the computing power available today, so they narrowed their focus to the mammalian neocortex. As the largest structure in the mammalian brain, and the one bearing the closest resemblance to artificial neural networks and to accepted models such as the connectionist approach[2], the neocortex was deemed worthy of investigation. The neocortex receives sensory information in many modalities, integrates it, and produces the perception and motor action required for survival[3]. Lesion studies have shown that the mammalian neocortex is remarkably flexible: with its capacity to reorganize and its redundant subunits, it can compensate for damage dealt to an injured area[4]. In short, scientists chose the neocortex largely for its homogeneity.

1.1a Reasons for Modeling the Brain

The reasons for modeling the brain, according to Johansson and Lansner, were to further understand the hows (how information is processed in the brain) and whats (what parts of the brain are responsible for these processes), to generate new hypotheses about brain function and characterize processes at the cellular level, to develop and improve techniques for treating brain-related diseases, and to attempt to create a desirable form of artificial intelligence, since the result would be a model of a system that can perform a wide variety of everyday tasks[1].

1.1b Degree of Complexity Required

Abstract model of minicolumns
Image Unavailable
Minicolumns are activated by extrinsic signals

In order to predict the amount of computing power needed for a model of the mammalian neocortex, scientists must first calculate the actual number of neurons and synapses within the neocortex of mammals. It has been determined that the human cortex contains 2x10^10 neurons and 1.5x10^14 synapses[5], while the rat cortex contains 5x10^7 neurons and 4x10^11 synapses[6]. Johansson and Lansner further narrowed their investigation to individual minicolumns of the neocortex. Spanning all layers of the cortex, a minicolumn contains roughly 100 neurons and is considered to be the functional unit of the cortex[7]. The minicolumns work in conjunction, sending both inhibitory and excitatory signals to create a unified signal that is then interpreted. Assuming roughly 100 neurons per minicolumn, the scientists estimated the number of minicolumns in different species (2x10^8 in humans and 1.6x10^5 in rats). They then deduced the total number of connections onto neurons within the neocortex: 2.4x10^13 in humans and 2.4x10^10 in rats. These numbers are crucial for estimating the computing power needed to create an actual biophysical model of the mammalian neocortex. The scientists were then able to provide an abstract model of connections between minicolumns, hypercolumns, and the different layers of the cortex. According to the model, each minicolumn is inhibited by a common modulatory inhibitory synapse, and excitation of the minicolumns stems from the inhibition of that modulator synapse[8]. Because minicolumns are functional units of the cortex and vastly homogeneous, their redundancy means that the death of neurons can be readily compensated.
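The counting step above is simple enough to check directly. A minimal sketch (the function name is ours; the figures are the cited estimates):

```python
# Cited estimates: the human cortex holds about 2x10^10 neurons [5],
# and a minicolumn is assumed to hold roughly 100 neurons [7].
HUMAN_CORTEX_NEURONS = 2e10
NEURONS_PER_MINICOLUMN = 100

def minicolumn_count(total_neurons, per_column=NEURONS_PER_MINICOLUMN):
    """Estimate the number of minicolumns from a total neuron count."""
    return total_neurons / per_column

# 2e10 neurons / 100 neurons per minicolumn = 2e8 minicolumns, matching the text.
print(minicolumn_count(HUMAN_CORTEX_NEURONS))
```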

Researchers found that each synaptic connection in the cortex requires a communication bandwidth of about 115 MB/second, and that emulating it demands a minimum of 1.5 gigaFLOPS of processing power per synaptic connection. FLOPS stands for FLoating point Operations Per Second, a measure of computing performance that counts the number of instructions carried out by a computer[1]. To put this in perspective, a typical quad-core processor in a current computer has a processing power of 10 gigaFLOPS (2.5 gigaFLOPS per core)[1], roughly equivalent to the processing power of 6.5 synapses in the cortex. With that in mind, the model deduced by Johansson and Lansner puts the processing power required for the hypercolumns of the human neocortex at around 10^4 teraFLOPS[1], equivalent to one million modern computers put together.
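These ratios follow from straightforward arithmetic; a small sketch using the figures quoted above (variable names are ours):

```python
GIGA = 1e9
TERA = 1e12

flops_per_synapse = 1.5 * GIGA   # minimum processing per synaptic connection [1]
quad_core_flops = 10 * GIGA      # typical quad-core desktop processor [1]
hypercolumn_flops = 1e4 * TERA   # estimated need for the human neocortex model [1]

# One desktop processor covers roughly 6.5-6.7 cortical synapses.
print(quad_core_flops / flops_per_synapse)

# Meeting the 10^4 teraFLOPS estimate takes about one million such machines.
print(hypercolumn_flops / quad_core_flops)
```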

2.1 Putting Artificial Networks Together: Blue Brain Project

The proposition by Johansson and Lansner was answered by the Blue Brain Project. Founded by the Brain and Mind Institute in Switzerland, the project planned to completely reverse-engineer the mammalian brain down to the molecular level. Its goal, while entirely technologically driven, was not just to create an artificial network but also to simulate individual neurons biologically[9]. Working with IBM, the institute utilized a supercomputer called “BlueGene”. With processing power measured in petaFLOPS, BlueGene is comparable to hundreds of thousands of modern computers, or hundreds of thousands of synapses within the mammalian neocortex[10].

BlueGene Q
Image Unavailable
Processing power at 20 petaFLOPS

2.1a Current Achievements

With the technology currently available, the Blue Brain Project has been able to reverse-engineer one hypercolumn of a rat’s neocortex. Serving as a basic unit of the brain and no larger than the head of a pin, a hypercolumn is the end product of millions of years of evolution[11]. Scientists of the Blue Brain Project view the cortical hypercolumn as a simple yet elegant piece of the puzzle with the potential to explore many unknowns of molecular neuroscience. Within the cortical hypercolumn, the Blue Brain team has isolated over fifty different types of neurons, characterized their biological mechanisms, and translated their signal traffic into algorithms that powerful computers can interpret. The resulting virtual cortical columns can then be “run” and observed as realistic neurons in action. The cortical columns have already demonstrated a high degree of realism, and scientists hope to test neuroscientific principles and models virtually in the near future[11].

2.1b Initial Goal

Initially, in order to completely reverse-engineer a rat’s neocortex, scientists started from the most basic components of the nervous system: neurons and synapses. They then proceeded up the hierarchy, integrating individual neurons into microcircuits, then mesocircuits, and finally macrocircuits[11]. All of these virtual experiments were made possible by a program called “The Builder”. A program that creates computer models of brain structures, the Builder takes experimental data from a working brain and converts it into virtual data that can be divided among the different levels of organization in the brain. First, the Cell Builder took into consideration all the different neurons exhibiting electrical behavior and assigned them to their respective locations. The greatest challenge was to consider every possible combination of neurons that could be integrated, a process called predictive reverse engineering. After all the neurons were put in place, the Microcircuit Builder created a network of virtual neurons that relayed information from cell to cell: taking the composition of cells in a specific area of the virtual brain (assigned by the Cell Builder), it predicted the possible combinations of neurons and created the interconnectivity between the cells, producing virtual minicolumns. Because all the minicolumns in the neocortex are intimately connected, the Mesocircuit Builder then acted as a link between microcircuits, creating virtual hypercolumns containing nearly 10,000 neurons and 10^8 synapses. The scientists’ goal for the near future is to use the Macrocircuit Builder to connect the hypercolumns and reverse-engineer a larger portion of the neocortex, reaching a total of 100 million neurons by 2014[12].
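The staged pipeline described above can be pictured as a chain of builders. The sketch below is purely illustrative: the class and method names echo the stages named in the text, not the Blue Brain Project’s actual software.

```python
class CellBuilder:
    """Assigns model neurons, labeled by type, to locations (illustrative only)."""
    def build(self, records):
        # records: (neuron_type, location) pairs drawn from experimental data
        return [{"type": t, "loc": loc} for t, loc in records]

class MicrocircuitBuilder:
    """Wires placed neurons into a local network, yielding a virtual minicolumn."""
    def build(self, cells):
        # Fully connect distinct cells for illustration; predictive reverse
        # engineering would instead constrain plausible type-to-type contacts.
        links = [(a["type"], b["type"])
                 for i, a in enumerate(cells)
                 for j, b in enumerate(cells) if i != j]
        return {"cells": cells, "links": links}

class MesocircuitBuilder:
    """Links minicolumns together into a virtual hypercolumn."""
    def build(self, minicolumns):
        return {"minicolumns": minicolumns}

cells = CellBuilder().build([("pyramidal", (0, 0)), ("basket", (0, 1))])
minicolumn = MicrocircuitBuilder().build(cells)
hypercolumn = MesocircuitBuilder().build([minicolumn])
```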

Cell Builder
Image Unavailable
Analyzes data and creates virtual neurons

Microcircuit Builder
Image Unavailable
Connects virtual neurons

Mesocircuit Builder
Image Unavailable
Connects virtual microcircuits

2.1c Future Goals

The Blue Brain Project’s long-term goal is to completely reverse-engineer the mammalian brain, in the hope of creating brain simulation facilities that can simulate both healthy and diseased brains at different scales and levels of detail, so that disease mechanisms can be investigated far more accurately. To achieve this ambitious goal, the scientists of the Blue Brain Project first strive to create a biologically validated model of the neocortical columns of young rats. Using principles discovered through this more basic approach, they are hopeful they can build larger, more detailed brain models and develop strategies to fully simulate a complete model of the human brain[11].

3.1 SPAUN (Semantic Pointer Architecture Unified Network)

The previous two topics show that a tremendous amount of computing power is required to simulate the brain, but they say little about AI and behavior. In this section, Eliasmith and colleagues address a challenge faced by many neuroscientists: the lack of a connection between the complex behaviors of animals and the complex activities of their brains. Their solution is SPAUN, a functioning model that brings brain activity and bodily behavior together. SPAUN is a model of 2.5 million neurons, with areas corresponding to cortical regions such as the inferotemporal cortex and motor cortex. It has an artificial arm and eye, receives input through a 28 by 28 pixel pad, and completes tasks such as image recognition, working memory, and even reinforcement learning[13].

3.1a Structure and Mechanism

Structure & Mechanism
Image Unavailable
SPAUN bears close resemblance to a human brain

The scientists who created SPAUN made sure that the virtual brain contained all the structures required for sensory input, decision making, motor function, and modulatory pathways. To name a few, SPAUN’s cortical and subcortical structures include the primary visual cortex, the primary motor cortex, the limbic system, the integrating thalamus, and components of the prefrontal cortex. Excitatory and inhibitory pathways, such as glutamatergic and GABAergic connections, were also laid out in the virtual brain, attempting to emulate cortical mechanisms as precisely as possible[13].

The mechanism of the virtual brain relates closely to the functioning of an actual brain: SPAUN takes in sensory information, selects appropriate responses, and carries out the selected motor actions[14]. This is achieved through the compression of “semantic pointers”. As its name suggests, the Semantic Pointer Architecture Unified Network responds to these signals the way a brain responds to action potentials, and compression is analogous to encoding in the brain. When visual input is received by the bionic eye, the information is compressed into a format compatible with the structure next in the information-processing hierarchy. As the information travels through SPAUN, all of the components needed for interpretation work in conjunction to emulate behavior accurately. Finally, the information is decompressed (retrieved) by SPAUN’s motor structure, and an action is carried out by the bionic arm[13]. For example, when SPAUN is exposed to an image, the information first arrives at the V1 area, where it is compressed; climbs the processing hierarchy (V2, V3, V4, IT); is relayed through the thalamus into the prefrontal area; and is finally decompressed and acted upon by the motor area.
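The compress-then-decompress flow can be caricatured in a few lines. This is only a toy analogy: real semantic pointers are compressed high-dimensional neural representations, not pairwise averages.

```python
def compress(signal):
    """Halve a signal's length by averaging adjacent pairs (toy 'encoding')."""
    return [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]

def decompress(signal):
    """Expand a signal back out by duplicating each element (toy 'retrieval')."""
    return [x for v in signal for x in (v, v)]

# Visual input climbs the hierarchy, compressed at each stage, then the motor
# stage decompresses the final representation into an output command.
image = [1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0]
v1 = compress(image)     # first compression stage
v2 = compress(v1)        # second compression stage
motor = decompress(v2)   # decompressed for motor output
```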

3.1b Task Performances

SPAUN’s ability to complete different tasks sets it apart from all other virtual neural models. With only a bionic eye, a bionic arm, and a 28 by 28 pixel pad, SPAUN was instructed to complete eight tests challenging various cognitive abilities: copy drawing, image recognition, reinforcement learning, serial working memory, counting, question answering, rapid variable creation, and fluid reasoning[13]. While all of these tasks involved only the digits 0 to 9, SPAUN generated its own decisions and correctly passed all eight tests, something no other model had ever achieved.

The tasks that emphasized human behavior the most were reinforcement learning[15] and serial working memory[16]. The reinforcement learning task demonstrated SPAUN’s ability to learn from mistakes and make its own decisions: question marks appear in front of the bionic eye, prompting SPAUN to guess a number between 0 and 3, and SPAUN must find the guess that generates the most reward. The digit “1” indicates a reward, while “0” indicates no reward. During the experiment, after several guesses that returned “0”, SPAUN figured out that the number 2 gave the most reward and proceeded to guess 2 repeatedly until the rewarded number changed. SPAUN recognized and learned from its mistakes and completed the reward task fluently[13]. In the serial working memory task, SPAUN was asked to repeat numbers that appeared on the screen in front of the bionic eye. A short list proved effortless, as it recalled all the numbers quickly and accurately. As the lists grew longer, however, SPAUN was only able to recall the numbers shown most recently or the ones shown first, depending on how many times the list was presented[13]. This was extremely fascinating: SPAUN not only recalled numbers but also demonstrated the primacy and recency effects exhibited in human working memory.
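A toy version of the reward-guessing task conveys the idea. The learner below is a simple epsilon-greedy agent (our construction, not SPAUN’s actual mechanism): it samples each digit once, then mostly exploits whichever digit has paid off best.

```python
import random

def run_bandit(rewarding_digit, trials=60, epsilon=0.1, seed=0):
    """Guess digits 0-3; reward is 1 for the rewarding digit, else 0."""
    rng = random.Random(seed)
    rewards = [0] * 4  # total reward earned per digit
    pulls = [0] * 4    # times each digit has been guessed
    choices = []
    for t in range(trials):
        if t < 4:
            digit = t                        # warm-up: try every digit once
        elif rng.random() < epsilon:
            digit = rng.randrange(4)         # occasionally explore
        else:
            rates = [r / p for r, p in zip(rewards, pulls)]
            digit = rates.index(max(rates))  # exploit the best digit so far
        reward = 1 if digit == rewarding_digit else 0
        rewards[digit] += reward
        pulls[digit] += 1
        choices.append(digit)
    return choices

choices = run_bandit(rewarding_digit=2)
# After the warm-up, the agent settles almost exclusively on digit 2.
```

Changing `rewarding_digit` mid-run (as the original task does) would make the old digit's reward rate decay, so the agent eventually switches, mirroring SPAUN's adaptation when the rewarded number changes.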

1. Johansson, C., Lansner, A. (2007). Towards cortex sized artificial neural systems. Neural Networks, 20(1), 48-61.
2. Feldman, J.A., Ballard, D. H. (1982). Connectionist models and their properties. Cognitive Science, 6(3), 205–254.
3. Drubach, D.A., Makley, M., Dodd, M. L. (2004) Manipulation of central nervous system plasticity: A new dimension in the care of neurologically impaired patients. Mayo Clinic Proceedings, 79(6), 796–800.
4. Kozloski, J., Hamzei-Sichani, F., Yuste R. (2001). Stereotyped position of local synaptic targets in neocortex, Science, 293(5531), 868–872.
5. Korbo, L., Pakkenberg, B., Ladefoged, O., Gundersen, H. J. G., Arlien-Søborg P., Pakkenberg, H. (1990). An efficient method for estimating the total number of neurons in rat brain cortex. Journal of Neuroscience Methods, 31(2), 93–100.
6. Miki, T., Fukui, Y., Itho, M., Hisano, S., Xie, Q., Takeuchi, Y. (1997). Estimation of the numerical densities of neurons and synapses in cerebral cortex. Brain Research Protocols, 2, 9–16
7. Buxhoeveden, D. P., Casanova M. F. (2002). The minicolumn hypothesis in neuroscience. Brain, 125(5), 935–951.
8. Lücke, J., Malsburg, C. V. D. (2004). Rapid processing and unsupervised learning in a model of the cortical macrocolumn. Neural Computation, 16(3), 501–533.
9. Markram, H. (2006). The Blue Brain Project. Nature Reviews Neuroscience, 7, 153-160.
10. "The Blue Gene/Q Computer chip" Retrieved 2013-03-22.
11. "Project Milestones". Blue Brain Project. Retrieved 2013-3-22.
12. "Henry Markram: Simulating the brain; the next decisive years, video 07:00". Retrieved 2013-03-24.
13. Eliasmith, C., Stewart, T. C., Choo, X., Bekolay, T., DeWolf, T., Tang, Yichuan., & Rasmussen, D. (2012). A large-scale model of the functioning brain. Science, 338(6111), 1202-1205.
14. Eliasmith, C., DeWolf, T. (2011). The neural optimal control hierarchy for motor control. Journal of Neural Engineering, 8(6), 065009
15. Schultz, W. (2000). Multiple reward signals in the brain. Nature Reviews Neuroscience, 1, 199.
16. Murdock, B. B. (1993). A model for the storage and retrieval of item, associative, and serial-order information. Psychological Review, 100, 183.