TY - JOUR
T1 - Neural Network Models of Perceptual Learning of Angle Discrimination
AU - Mato, G.
AU - Sompolinsky, H.
PY - 1996/2/15
Y1 - 1996/2/15
N2 - We study neural network models of discriminating between stimuli with two similar angles, using the two-alternative forced choice (2AFC) paradigm. Two network architectures are investigated: a two-layer perceptron network and a gating network. In the two-layer network all hidden units contribute to the decision at all angles, while in the other architecture the gating units select, for each stimulus, the appropriate hidden units that will dominate the decision. We find that both architectures can perform the task reasonably well for all angles. Perceptual learning has been modeled by training the networks to perform the task, using unsupervised Hebb learning algorithms with pairs of stimuli at fixed angles θ and θ + δθ. Perceptual transfer is studied by measuring the performance of the network on stimuli with θ′ ≠ θ. The two-layer perceptron shows a partial transfer for angles that are within a distance a from θ, where a is the angular width of the input tuning curves. The change in performance due to learning is positive for angles close to θ, but for |θ - θ′| ≈ a it is negative, i.e., its performance after training is worse than before. In contrast, negative transfer can be avoided in the gating network by limiting the effects of learning to hidden units that are optimized for angles that are close to the trained angle.
AB - We study neural network models of discriminating between stimuli with two similar angles, using the two-alternative forced choice (2AFC) paradigm. Two network architectures are investigated: a two-layer perceptron network and a gating network. In the two-layer network all hidden units contribute to the decision at all angles, while in the other architecture the gating units select, for each stimulus, the appropriate hidden units that will dominate the decision. We find that both architectures can perform the task reasonably well for all angles. Perceptual learning has been modeled by training the networks to perform the task, using unsupervised Hebb learning algorithms with pairs of stimuli at fixed angles θ and θ + δθ. Perceptual transfer is studied by measuring the performance of the network on stimuli with θ′ ≠ θ. The two-layer perceptron shows a partial transfer for angles that are within a distance a from θ, where a is the angular width of the input tuning curves. The change in performance due to learning is positive for angles close to θ, but for |θ - θ′| ≈ a it is negative, i.e., its performance after training is worse than before. In contrast, negative transfer can be avoided in the gating network by limiting the effects of learning to hidden units that are optimized for angles that are close to the trained angle.
UR - http://www.scopus.com/inward/record.url?scp=0030584268&partnerID=8YFLogxK
U2 - 10.1162/neco.1996.8.2.270
DO - 10.1162/neco.1996.8.2.270
M3 - Article
C2 - 8581884
AN - SCOPUS:0030584268
SN - 0899-7667
VL - 8
SP - 270
EP - 299
JO - Neural Computation
JF - Neural Computation
IS - 2
ER -