A Robot Student
Machine Learning is the sub-field of Artificial Intelligence primarily concerned with discovering the principles that govern learning and applying them to develop machines/systems that are able to learn. A primary goal of such a machine is to use all available inputs from the environment in order to learn, without being specifically programmed to do so [1]. Currently, systems are able to learn from inputs only within specific parameters. An example of a system that most people use every day is an internet search engine, such as Google. Machine perception (a more “human” goal of machine learning) allows the system/machine to collect sensory input through the faculties of vision, hearing, and touch [2]. Eventually, a machine should be able not only to learn like a human does, but to collect input in a similar way as well. However, one of the most challenging obstacles standing in the way of progress in this field is developing a working model that represents how learning functions in humans and is applicable to a synthetic system [3]. Many models of learning are currently in development, drawing on fields ranging from Philosophy to Neuroscience. These models have had success in a variety of applications, but so far they remain limited in how well they generalize.
Table of Contents
1. Introduction to Machine Learning
2. Artificial Neural Networks
2.1 Modelling the Connectionist Approach
2.2 Links to a Biological Neural Network
3. Reinforcement Learning
4. Applications of Machine Learning
4.1 Everyday Applications
4.2 Applications in Development
4.3 Future Applications
1. Introduction to Machine Learning
Learning, much like intelligence, becomes an increasingly difficult term to define as the processes underlying it continue to be discovered. In the context of machine learning, the primary concern is with the changes that occur in machines/systems as they perform tasks associated with Artificial Intelligence (A.I.). Currently, a machine is said to have learned when it has changed its program based on inputs in order to improve its future performance [4]. However, these changes can affect any component of the system, which means that the learning mechanisms themselves may change as well. This makes it difficult to define a fixed architecture on which an A.I. can be developed. In any case, machine learning aims to create machines that can change themselves according to the input available and the surrounding environment.
2. Artificial Neural Networks
The basic idea of a connectionist approach is that the processes of the mind can be modeled by an interconnected network of simple units. In this manner, the model is analogous to the neurons (simple units) and synapses (connections) of the brain, although the form of the simple units and connections is not limited to these examples. Depending on which connections are activated and on the information being processed, the network can exhibit complex global behavior [5]. Artificial neural networks process the units and connections in parallel, so there is no strict distinction between memory and processing. Connectionism is the most effective framework for explaining an artificial neural network. However, these models offer a simplified picture of the networks found in the biological brain, so it is not yet clear whether artificial neural networks can be used to elucidate the processing that occurs in the human brain.
2.1 Modelling the Connectionist Approach
The simplest example of an artificial neural network involves three layers: an input layer, a hidden layer, and an output layer. The input layer receives information and sends this data to the second, hidden layer of units (the units represent neurons). The hidden layer sends this data to the third, output layer, which converts it into an output [6][7]. This is the simplest case of a neural network; networks become more complicated as layers are added and as parameters weight the inputs and activations differently.
Fig 2.1.1 Connectionist Model of a simple Artificial Neural Network [7]
The network grows in complexity as each simple unit acquires its own function and its own parameters. In other words, each unit may compute a composition of functions, which may themselves be compositions of functions. The connectionist model is still the simplest way to visualize this, and it demonstrates how quickly things can become complicated. Another parameter that may change is the output activation function; the output activation is highly contingent on the input activation and on the learning algorithms (rules) that the system must follow [8].
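To make this concrete, below is a minimal sketch (in Python with NumPy) of a forward pass through a three-layer network like the one in Figure 2.1.1; the layer sizes, the sigmoid activation function, and the random weights are illustrative assumptions rather than part of any specific model discussed here.

```python
import numpy as np

# Illustrative layer sizes: 3 input units, 4 hidden units, 2 output units.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(4, 3))   # connection weights: input -> hidden
W_output = rng.normal(size=(2, 4))   # connection weights: hidden -> output

def sigmoid(z):
    # A common activation function; each unit applies it to its weighted input.
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    # Propagate an input vector through the hidden layer to the output layer.
    hidden = sigmoid(W_hidden @ x)        # hidden units: functions of the inputs
    output = sigmoid(W_output @ hidden)   # output units: functions of those functions
    return output

print(forward(np.array([0.5, -1.0, 2.0])))  # example input vector
```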
The most interesting aspect of artificial neural networks, and possibly the one that has garnered the most attention, is the possibility for the machine/system to learn within this model. A machine is said to have learned when it has used the data presented to it to solve a problem optimally; this also involves the ability of the machine to use existing knowledge to solve a problem. In a neural network, learning is usually facilitated by a learning algorithm. The algorithm used depends on the type of input available and the output activation desired; in general, it changes the weights of the interconnections in response to the input so that the machine learns [9].
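As a hedged illustration of how a learning algorithm changes the interconnection weights in response to input, the sketch below applies the classic delta rule to a single sigmoid unit; the learning rate, training input, and target value are invented for this example and are not taken from any source cited here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
weights = rng.normal(size=3)   # the interconnection weights to be learned
learning_rate = 0.1            # assumed step size of the learning algorithm

def train_step(x, target):
    # One delta-rule update: nudge the weights in proportion to the output error.
    global weights
    prediction = sigmoid(weights @ x)
    error = target - prediction
    weights += learning_rate * error * prediction * (1 - prediction) * x
    return error

x = np.array([1.0, 0.5, -0.5])
for _ in range(200):
    train_step(x, target=1.0)
print(sigmoid(weights @ x))    # the output has moved toward the target of 1.0
```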
There is much debate between the connectionist and symbolic schools of machine learning, but as time progresses the two communities have increasingly integrated their ideas [11]. Frameworks for learning have been developed that combine features of both schools of thought; these take into account that, in order to learn effectively, a machine should be able to make efficient use of already existing knowledge [12]. In the amalgamated framework shown in Figure 2.1.2, symbolic information is first put into the neural network, training examples are then used to amend that initial information, and refined information can finally be extracted from the enhanced neural network [10]. A small code sketch of this refine-by-examples idea follows the figure.
Fig 2.1.2 Integrated Symbolic/Connectionist Neural Network Framework [10]
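The following toy sketch, built entirely on assumptions made for illustration, mimics the spirit of this framework: a hand-written symbolic rule is encoded as the initial weights of a single unit, training examples then refine those weights, and the refined weights could afterwards be read back as an updated rule. It is not an implementation of the specific framework in Figure 2.1.2.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed symbolic rule: "positive if feature 0 AND feature 1 are active",
# encoded as initial weights and a bias instead of being learned from scratch.
weights = np.array([4.0, 4.0, 0.0])
bias = -6.0

# Invented training examples; the third one contradicts the initial rule,
# so the examples amend the symbolic knowledge during training.
examples = [(np.array([1.0, 1.0, 0.0]), 1.0),
            (np.array([1.0, 1.0, 1.0]), 1.0),
            (np.array([0.0, 1.0, 1.0]), 1.0),
            (np.array([0.0, 0.0, 0.0]), 0.0)]

for _ in range(500):
    for x, target in examples:
        y = sigmoid(weights @ x + bias)
        grad = (target - y) * y * (1 - y)   # gradient-descent (delta-rule) update
        weights += 0.5 * grad * x
        bias += 0.5 * grad

print(weights, bias)  # the refined weights encode the amended rule
```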
2.2 Links to a Biological Neural Network
Artificial neural networks are systems adapted from biological neural networks, originally proposed after examination of the human central nervous system. The relationship is easy to see in connectionist models: in an artificial neural network there is a series of interconnections between the elements in each layer, while in a biological neural network the specific activation of a series of interconnected neurons represents a recognizable pathway [13]. This similarity has led researchers to propose that artificial neural networks are simple models of biological neurons within the brain, and it has sparked research into how closely such networks can approximate the processes underlying learning in actual, living brains [10].
3. Reinforcement Learning
Reinforcement learning concerns itself with what actions a machine should take in a given environment in order to maximize some type of reward [14]. This notion is a simplification, however, since in most cases the machine or system does not know how its actions will affect the environment and the inputs it will receive. In reinforcement learning, a machine must predict how its actions will affect future inputs and how they will lead to an optimized reward. The machine discovers this optimization through trial and error, since no existing knowledge is assumed on behalf of the agent. This differs from supervised learning, where the agent is presented with the correct input-output pairings.
Fig 3.1 Basic Framework for Reinforcement Learning [10]
A basic, formalized framework for reinforcement learning is shown in Figure 3.1. An input vector (X) informs the machine which state (S) the environment is in. The machine decides which action to take from a set of actions (A). An action performed has an effect on the environment, which puts it into a new state, and the new state produces a new input vector. The expected reward (R) depends on the actions taken and the state of the environment, so at time t = i the reward can be written as R_i = R(X_i, A_i) [15].
The goal of the machine (the learner in this case) is to develop a policy that maximizes the cumulative reward obtained by optimizing the mapping between input vectors and actions.
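One common way to learn such a policy is Q-learning; the framework above does not specify a particular algorithm, so the sketch below, with its toy environment, reward, and parameters, is purely an illustrative assumption.

```python
import numpy as np

n_states, n_actions = 4, 2
rng = np.random.default_rng(2)
Q = np.zeros((n_states, n_actions))      # estimated value of each (state, action) pair
alpha, gamma, epsilon = 0.1, 0.9, 0.1    # learning rate, discount factor, exploration rate

def step(state, action):
    # Hypothetical environment: action 1 moves toward state 3, which pays a reward.
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

state = 0
for _ in range(5000):
    # Trial and error: mostly follow the current policy, occasionally explore.
    if rng.random() < epsilon:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Q-learning update: move the estimate toward reward + discounted future value.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = 0 if next_state == n_states - 1 else next_state   # restart after reaching the goal

print(Q)  # the learned policy (argmax of each row) prefers action 1
```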
4. Applications of Machine Learning
Machine learning invites the possibility of machines that can change their own programming in response to the available inputs and the surrounding environment. This opens new and exciting possibilities for future technology and has already provided great benefit to existing technologies.
4.1 Everyday Applications
Perhaps surprisingly, instances of machine learning are used and seen every day. Search engines such as Google, Yahoo, and Bing are prime examples, relying on pattern detection and data mining [16]. For example, the ads placed beside search-engine results are chosen based on relationships that the system has discovered in data collected about you. An even simpler everyday application is an e-mail service: spam filters learn to differentiate between junk and non-junk mail, specific to each user.
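As a hedged sketch of how such a junk-mail filter might be built, the snippet below uses scikit-learn's bag-of-words features with a naive Bayes classifier; the tiny example messages and labels are invented for illustration, and real services likely use far more elaborate pipelines.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented training data: 1 = junk, 0 = non-junk.
emails = ["win a free prize now", "meeting moved to friday",
          "free money click here", "lunch with the project team"]
labels = [1, 0, 1, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)   # word-count features for each message

classifier = MultinomialNB()
classifier.fit(X, labels)              # learn per-class word statistics

new_email = ["claim your free prize"]
print(classifier.predict(vectorizer.transform(new_email)))  # likely [1], i.e. junk
```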
4.2 Applications in Development
The applications of Machine Learning become increasingly exciting as knowledge of the topic continues to grow. Neural networks have shown notable performance on more “human”, “real-world” tasks such as speech understanding [17], optical character recognition [18], control of dynamic systems [19], and language learning [20]. Developments such as the Blue Brain Project, SPAUN, and IBM's Brain Chip are particularly notable, as they are among the most advanced A.I. brain models.
Machine Learning applications also have potential within the medical community. A system's algorithm can be modified to take almost any input and produce a specific output. In recent developments, machine learning has been used to help diagnose patients and prescribe care with a higher rate of success. Many teams have also been using machine learning for sequencing and analysis of proteins and genomes, in an effort to develop more effective medicines and methods of treatment [21].
4.3 Future Applications
While nothing about the future of Machine Learning is certain, one goal researchers continue to work towards is creating a machine that learns and thinks in a way similar to humans. With no obvious limit on the power of these systems, however, future A.I. may work much faster and better than humans. Given the direction that current research in machine learning applications is taking, the next few decades may be full of super-powered robots that surpass humans in every quality. What a comforting thought.