Traditionally, the modeling of sensory neurons has focused on characterizing and/or learning input-output relations. Motivated by the view that different neurons impose different partitions on the stimulus space, we propose instead to learn the structure of the stimulus space, as imposed by the cell, by learning a cell-specific distance function or kernel. Metaphorically speaking, this direction attempts to bypass the syntactic question of "how the cell speaks" by focusing instead on the semantic and more fundamental question of "what the cell says". Here we consider neural data from both the inferotemporal cortex (ITC) and the prefrontal cortex (PFC) of macaque monkeys. We learn a cell-specific distance function over the stimulus space as induced by the cell's response; the goal is to learn a function such that the distance between stimuli is large when the responses they evoke are very different, and small when the responses they evoke are similar. Our main result shows that after training, when given new stimuli, our ability to predict their similarity to previously seen stimuli is significantly improved. We attempt to exploit this ability to predict the response of the cell to a novel stimulus using KNN over the learned distances. Furthermore, using our learned kernel we obtain a partitioning of the stimulus space which is more similar to the partition induced by the cell's responses, as revealed by low-dimensional embedding, and thus, in some cases, we are able to peek at the semantic partition induced by the cell.
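The prediction step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the learned metric takes the common Mahalanobis-style form d(x, y) = ||Lx - Ly|| for some learned linear map L (here a hypothetical placeholder), and predicts a cell's response to a novel stimulus as the mean response over its k nearest training stimuli under that metric.

```python
import numpy as np

def learned_dist(x, y, L):
    """Distance under a learned linear map L: d(x, y) = ||Lx - Ly||.
    L is a stand-in for whatever metric the training procedure produced."""
    d = L @ (x - y)
    return np.sqrt(d @ d)

def knn_predict(x_new, X_train, r_train, L, k=3):
    """Predict the response to a novel stimulus x_new as the mean response
    of its k nearest training stimuli under the learned metric."""
    dists = np.array([learned_dist(x_new, x, L) for x in X_train])
    nearest = np.argsort(dists)[:k]  # indices of the k closest stimuli
    return r_train[nearest].mean()
```

With L set to the identity this reduces to ordinary Euclidean KNN; the point of metric learning is that a well-trained L pulls stimuli evoking similar responses together, so the same KNN rule generalizes better to novel stimuli.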