Retinal coding is a subfield of population coding, the study of how stimulus information is represented at the neuronal level; specifically, it concerns how visual information (color, shape, depth, etc.) is encoded by retinal ganglion cells. It is particularly important in vision science and prosthetics: modern prosthetics, which focus on resolution, allow one to perceive spots of light and sharply contrasted edges, but not whole images. Moreover, there is an inherent limit to improving resolution, and understanding the retinal code would allow visual prosthetics to pass beyond this barrier, conveying images as a whole rather than mere contrast. Research in this area focuses on measuring neural activity in retinal ganglion cells, analyzing their responses to given visual stimuli, determining how precise those responses are, and specifying the degree of plasticity in stimulus-response relationships.
Table of Contents
1) Classification of Retinal Ganglion Cells
There are multiple theories of retinal coding, but nearly all of them focus on retinal ganglion cells. This is because the retinal ganglion cell is where visual information from the bipolar and amacrine cells is integrated and transmitted to various areas of the brain for analysis, notably cortical area V1. However, retinal ganglion cells are unique in that they do not respond uniformly to visual stimuli; instead, an array of responses has been observed, showing a mixture of both linear and nonlinear stimulus-response transformations. Statistical analysis of the different retinal ganglion cell types, using the parameters of response latency, response duration, relative amplitude of ON/OFF responses, and degree of nonlinearity in the stimulus-response transformation, has led to five classes of ganglion cells: short-latency transient ON cells, short-latency sustained ON cells, short-latency ON-OFF cells, short-latency OFF cells, and long-latency cells. Classifying retinal ganglion cells allows one to correctly analyze cell-information relationships and to build neural-pathway circuits showing the flow of information.
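The classification idea can be illustrated with a toy rule-based sketch. The feature values and cut-offs below are hypothetical, chosen only to show how response latency, duration, and ON/OFF amplitude ratio could separate the five classes; real classifications are derived statistically from recorded responses.

```python
import numpy as np

# Hypothetical response features for six recorded cells:
# columns = [latency_ms, duration_ms, on_off_amplitude_ratio]
features = np.array([
    [35.0,  80.0, 5.0],   # short-latency transient ON
    [38.0, 400.0, 4.0],   # short-latency sustained ON
    [40.0, 120.0, 1.0],   # short-latency ON-OFF
    [36.0, 100.0, 0.2],   # short-latency OFF
    [120.0, 150.0, 3.0],  # long latency
    [37.0,  90.0, 4.8],   # short-latency transient ON
])

def classify(latency, duration, on_off_ratio,
             latency_cut=60.0, duration_cut=200.0):
    """Toy rule-based classifier; cut-offs are illustrative, not from the literature."""
    if latency >= latency_cut:
        return "long latency"
    if on_off_ratio > 2.0:      # ON response dominates
        return "sustained ON" if duration >= duration_cut else "transient ON"
    if on_off_ratio < 0.5:      # OFF response dominates
        return "OFF"
    return "ON-OFF"             # balanced ON and OFF responses

labels = [classify(*row) for row in features]
```

In practice the class boundaries emerge from clustering the measured parameters rather than from fixed thresholds, but the principle is the same: a small set of response features partitions the cells into discrete types.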
2) Retinal Code Theories
a) Predictive Coding
One theory of the retinal code is known as predictive coding (PC). Predictive coding theory postulates that the brain holds an internal representation of the world which generates predictions about stimuli. Visual stimuli and sensory information are then compared to these predictions through an information-processing pathway, and the residual error is calculated and propagated throughout the visual system. Predictive coding grew from the observation that objects in the physical world reflect light with fairly uniform intensity across image points that are close in space and time. This redundancy allows for efficient encoding of the stimuli: given the intensity at one image point, nearby retinal circuits predict the intensity at adjacent image points and compare the prediction to the actual intensity. The encoded image sent from the retina to the cortical areas is therefore the difference between the predicted image and the actual visual stimulus.
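The core computation can be sketched in a few lines: predict each image point from its neighbours and transmit only the residual. The 1-D "image" below is an illustrative toy example, not data from the literature.

```python
import numpy as np

# A small 1-D "image" of intensities; nearby points are similar (redundant),
# except for one sharp edge in the middle.
image = np.array([10.0, 10.0, 11.0, 30.0, 30.0, 29.0])

# Predict each interior point as the mean of its two neighbours,
# then encode only the prediction error (the residual).
predicted = 0.5 * (image[:-2] + image[2:])
residual = image[1:-1] - predicted
```

The residual is near zero wherever the image is locally uniform and large only at the edge, which is exactly the redundancy reduction the theory describes.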
b) Filter Encoding
Recent breakthroughs in vision prosthetics focus on how retinal ganglion cells encode a visual stimulus using filter encoding. Filter encoding theory draws on information theory, applying a systems-theoretic approach and quantifying stimulus-response relationships as a series of linear filters. In principle, this allows one to work both ends of the relationship: transforming stimuli into the neural code, and the neural code back into stimuli. Previous research on reconstructing visual stimuli from retinal ganglion cell activity shows that reconstruction quality increases with the number and variety of ganglion cells, because different types of ganglion cells carry non-overlapping information. In addition, retinal ganglion cells were found to act both as independent encoders and as correlational encoders. Moreover, building on the finding that linear reconstruction of stimuli was as effective as neural-network reconstruction, vision was restored to blind mice using prosthetics.
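A minimal sketch of linear stimulus reconstruction, assuming a toy linear encoding model with hypothetical random filters (not the filters or data from the actual prosthetics work): decoding filters are fit by least squares from response-stimulus pairs, then applied to a new response.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: each of 4 "ganglion cells" responds linearly to a
# 3-pixel stimulus through a fixed filter bank, plus a little noise.
filters = rng.normal(size=(4, 3))          # hypothetical encoding filters
stimuli = rng.normal(size=(200, 3))        # training stimuli
responses = stimuli @ filters.T + 0.01 * rng.normal(size=(200, 4))

# Linear decoding: least-squares filters mapping responses back to stimuli.
decoder, *_ = np.linalg.lstsq(responses, stimuli, rcond=None)

# Reconstruct a new stimulus from its (noiseless) population response.
test_stim = rng.normal(size=3)
reconstruction = (test_stim @ filters.T) @ decoder
```

With enough cells relative to the stimulus dimensionality, the linear decoder recovers the stimulus closely, mirroring the finding that linear reconstruction can rival more complex decoders.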
3) Model: The Virtual Retina
[Image: Perception of the world in binary]
The latest research in this area, by Bomash and Nirenberg, addresses the lack of a testable model in vision science, a gap that stems from a number of still-unanswered essential questions, ranging from the roles of different cell types to the exceedingly large space of stimuli to explore. They generated a virtual retina, a data-driven model of retinal input/output relationships, which they then tested to determine whether it behaves like a real retina. The results show that the information it produces is highly reliable: the virtual retina not only carries the same quality and quantity of information as a real retina, but can also predict the functions of real retinal cells. This promises rapid progress in population-coding theories of vision, since it allows accelerated analysis of the roles of different cell types while minimizing animal-model experiments.
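A minimal sketch of what one model cell in such a virtual retina might look like, using the common linear-nonlinear (LN) form: a temporal filter applied to the stimulus, followed by a static nonlinearity that yields a firing rate. The filter and nonlinearity here are illustrative stand-ins, not the fitted parameters of the actual Bomash and Nirenberg model.

```python
import numpy as np

rng = np.random.default_rng(1)

def ln_cell_rate(stimulus, temporal_filter, max_rate=100.0):
    """Linear-nonlinear model cell: linear filtering of the stimulus,
    then a sigmoid nonlinearity mapping drive to firing rate (spikes/s)."""
    drive = np.convolve(stimulus, temporal_filter, mode="valid")
    return max_rate / (1.0 + np.exp(-drive))

stimulus = rng.normal(size=50)                  # white-noise stimulus frames
temporal_filter = np.array([0.5, 1.0, -0.5])    # hypothetical biphasic filter
rates = ln_cell_rate(stimulus, temporal_filter)
```

A virtual retina is, in effect, a bank of such fitted input/output models, one per recorded cell type, which can then be probed with arbitrary stimuli far faster than a biological preparation.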
4) Problems with the Neural Code
a) The Unit of Information: Coarse vs Fine Coding
A problem that most neural code theories face is determining the unit of information. It is well established that the neural code is composed of action potential trains, but the basic unit of information is still up for debate. The difficulty mainly revolves around whether the neural code uses a coarse or a fine coding system. In a coarse coding system, individual neurons carry little information, but a population of neurons together carries enough to be sufficient. The weakness of this system is that, in order to make accurate judgments about stimuli, downstream neurons must wait for all the neurons to transmit their information before beginning analysis; its strength is that downstream neurons need not go to any great effort to decode the action potential train. The second option is a fine coding system, in which individual neurons carry a substantial amount of information. Its strengths and weaknesses are the inverse of coarse coding's: a single neuron carries enough information to guide downstream behavior, but downstream neurons have far more information to unpack from each action potential train. A further consideration is whether the nervous system uses a variety of codes at each stage of analysis to offset the weaknesses inherent in any one coding system. These questions are still debated; at the retinal level, however, comparison of the two systems has led to the conclusion that fine coding is the neural network's preferred system for maximizing the information gathered.
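The trade-off can be made concrete with a back-of-the-envelope capacity calculation; the state counts below are hypothetical and serve only to contrast the two schemes.

```python
import math

# Coarse code: each neuron is nearly binary (2 reliable states), so one
# neuron carries ~1 bit and information comes only from pooling many neurons.
coarse_bits_per_neuron = math.log2(2)
coarse_population_bits = 10 * coarse_bits_per_neuron   # 10 independent neurons

# Fine code: a single neuron's spike train has many reliably distinguishable
# patterns (say 256), so it carries substantial information on its own.
fine_bits_single_neuron = math.log2(256)
```

The coarse population reaches its capacity only after all ten neurons report, while the fine-coded neuron delivers comparable information alone, at the cost of a harder decoding problem downstream.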
b) Redundancy
A further problem the neural code presents is redundancy. If the retinal neural code does use a fine coding system, the loss of a single neuron's information could be devastating to the entire downstream neuron cascade. It is therefore highly probable that the nervous system has built redundancy into the code so that the information survives even when some of it is lost. Redundancy can be achieved in many ways, one of them being the repetition of words (recurring spike patterns). Research on retinal code redundancy has approached the problem at multiple levels and has found an approximately 10-fold overrepresentation of information: redundancy between nearby pairs of cells is moderate, but each ganglion cell shares information with many of its neighbors, so the population as a whole overrepresents the information roughly 10-fold. This raises questions about the balance between efficiency and redundancy; it is interesting that the population as a whole is abundantly redundant while pairs of cells are not.
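The 10-fold figure can be illustrated with a toy calculation, assuming a 1-bit stimulus feature and cells that each convey that full bit; the numbers are chosen only to mirror the reported ratio, not measured from data.

```python
import math

# A 1-bit stimulus feature, e.g. light vs dark at one location.
stimulus_entropy_bits = math.log2(2)

# Ten ganglion cells that each, on their own, convey that full bit.
bits_per_cell = 1.0
num_cells = 10
summed_information = num_cells * bits_per_cell

# The population can never convey more than the stimulus entropy, so the
# ratio of summed single-cell information to the unique information
# measures the overrepresentation (redundancy) in the code.
overrepresentation = summed_information / stimulus_entropy_bits
```

The same stimulus bit being recoverable from many cells is what makes the code robust to the loss of any single neuron.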
5) Information Theory
a) Shannon's Mutual Information
A figure such as a 10-fold overrepresentation of information means little without some knowledge of information theory. Information theory begins with the mathematician Claude Shannon, who developed it in an attempt to quantify communication. The theory itself is general: its applications range from neural networks to phone lines. Shannon introduced a unit of measure known as the bit. It is now used to describe digital storage capacity, but it can just as easily quantify neural information, such as information about visual stimuli. This broad applicability comes from the fact that the bit is a unit of measurement that carries no semantic meaning: information can be quantified as a series of symbols and their correlates, and in the visual system those symbols are the spikes that represent the stimulus. Shannon's mutual information, also known as the Shannon test, is thus the most widely used measure in trying to determine the neural code; it functions as an index for testing the viability of candidate neural codes in the retinal system.
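A minimal sketch of computing Shannon mutual information in bits for a toy stimulus-response channel; the joint probabilities are invented for illustration.

```python
import math

# Joint probabilities p(stimulus, response) for a toy binary channel:
# a "light" or "dark" stimulus and a "spike" / "no spike" response.
joint = {
    ("light", "spike"): 0.4, ("light", "no spike"): 0.1,
    ("dark", "spike"): 0.1,  ("dark", "no spike"): 0.4,
}

# Marginal distributions p(s) and p(r).
p_stim, p_resp = {}, {}
for (s, r), p in joint.items():
    p_stim[s] = p_stim.get(s, 0.0) + p
    p_resp[r] = p_resp.get(r, 0.0) + p

# I(S; R) = sum over (s, r) of p(s, r) * log2( p(s, r) / (p(s) * p(r)) ).
mutual_info = sum(p * math.log2(p / (p_stim[s] * p_resp[r]))
                  for (s, r), p in joint.items() if p > 0)
```

A mutual information near zero would mean the spikes tell a downstream observer almost nothing about the stimulus; the closer it gets to the stimulus entropy (here 1 bit), the more faithfully the candidate code carries the stimulus.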
b) Bayesian Decoder
However, the Shannon test is not entirely ideal for this purpose, since it can miss several neural code variants that are inviable given the retina's organization. To push our knowledge of the retinal neural code further, it is best to turn to the Bayesian decoder. The Bayesian model arose from the understanding that guiding behavior through efficient use of sensory information from the world requires the brain to factor in uncertainty. The success of Bayesian methods led to the Bayesian coding hypothesis, which postulates that the brain represents sensory information probabilistically, that is, as a probability distribution; however, there is little to no neurophysiological data on an organization of neuron populations that supports this hypothesis. In essence, a Bayesian perceptual system represents an attribute of an object, e.g., depth, as a conditional probability density function p( Z | I ), which specifies the probability that the object lies at each possible depth Z given the sensory information I.
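The conditional distribution p( Z | I ) can be sketched with Bayes' rule on a toy depth-from-measurement problem; the flat prior, Gaussian likelihood, and candidate depth values are all assumptions made for illustration.

```python
import math

# Candidate depths Z and a flat prior over them (assumed for illustration).
depths = [1.0, 2.0, 3.0]
prior = {z: 1.0 / len(depths) for z in depths}

def likelihood(i, z, sigma=0.5):
    """p(I | Z): toy Gaussian model of a noisy sensory measurement of depth."""
    return math.exp(-0.5 * ((i - z) / sigma) ** 2)

def posterior(i):
    """p(Z | I) via Bayes' rule: prior times likelihood, normalised."""
    unnorm = {z: prior[z] * likelihood(i, z) for z in depths}
    total = sum(unnorm.values())
    return {z: p / total for z, p in unnorm.items()}

post = posterior(2.1)   # sensory measurement falls near depth 2
```

The output is a full probability distribution over depths rather than a single estimate, which is exactly the representation of uncertainty the Bayesian coding hypothesis attributes to the brain.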
c) Comparison of the Shannon Test and the Bayesian Decoder
This is not, however, to say that the Shannon test is unreliable; it simply has specific strengths and weaknesses. In a comparison between the Bayesian decoder and Shannon mutual information, the Shannon test was found to outperform Bayesian methods on the problem of inferring interaction networks among genes, but fell short when it came to eliminating inviable codes, where Bayesian indices performed best. One reason for this is simply that Shannon mutual information and the Bayesian decoder are optimized for different goals: the Shannon test is optimized for identifying the simplest relationship graph, while the Bayesian decoder is better at identifying which particular graphs can be excluded. There are other reasons behind this difference, but explaining them is beyond the scope of this wiki, since it would require an in-depth understanding of information theory and its mathematical theorems. As with most neural techniques, there is no single gold-standard test for uncovering the neural code; rather, a combination of techniques is required for progress.