Neurons in the visual cortex are typically selective along several stimulus dimensions, so the response level of a single neuron relates ambiguously to the stimulus values. It is shown that a multi-dimensional stimulus may nevertheless be coded reliably by an ensemble of neurons using a weighted-average population coding model. Each neuron's contribution to the population signal for a given dimension is the product of its response magnitude and its preferred value on that dimension; the sum of these products is normalized by the sum of the ensemble responses. Simulation results show that representation accuracy increases as the square root of the number of units, irrespective of the number of dimensions. Comparing a specific 2D case of this population code, for orientation and spatial frequency, to behavioral discrimination levels indicates that 10^3 to 10^4 neurons are needed to reach psychophysical performance. Each additional dimension requires about 1.7 times as many neurons in the ensemble to reach the same level of accuracy. This result suggests that neurons may be selective for only 3 to 5 dimensions, and it provides another rationale for the existence of parallel processing streams in vision.
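The weighted-average readout described above can be sketched as follows for a single stimulus dimension. The Gaussian tuning curves, the tuning width `sigma`, the noise level, and the ensemble sizes are illustrative assumptions, not parameters taken from the paper; the sketch only shows the normalization step and the qualitative improvement of accuracy with ensemble size.

```python
import numpy as np

rng = np.random.default_rng(0)

def population_estimate(stimulus, n_units, noise_sd=0.1):
    """Weighted-average population code for a 1-D stimulus in [0, 1].

    Assumed model: Gaussian tuning curves of width `sigma` with
    preferred values evenly spaced on [0, 1] (hypothetical choices).
    """
    sigma = 0.15
    preferred = np.linspace(0.0, 1.0, n_units)       # preferred value per unit
    # Noisy responses: tuning-curve output plus additive noise, rates >= 0.
    responses = np.exp(-(stimulus - preferred) ** 2 / (2 * sigma ** 2))
    responses = np.clip(responses + rng.normal(0.0, noise_sd, n_units), 0.0, None)
    # Each unit contributes response * preferred value;
    # the sum is normalized by the total ensemble response.
    return (responses * preferred).sum() / responses.sum()

# RMS estimation error shrinks as the ensemble grows.
for n in (16, 64, 256):
    errs = [population_estimate(0.5, n) - 0.5 for _ in range(500)]
    print(n, np.sqrt(np.mean(np.square(errs))))
```

In this sketch the error falls with the number of units, consistent with the square-root scaling reported in the abstract; a multi-dimensional version would simply carry one preferred value per dimension for each unit and normalize each dimension's sum by the same total response.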