It is generally accepted that, in the cognitive and neural sciences, there are both computational and mechanistic explanations. We ask how computational explanations can be integrated into the mechanistic hierarchy. The problem stems from the fact that the implementation and mechanistic relations have different forms. The implementation relation, from the states of an abstract computational system (e.g., an automaton) to the physical states that implement them, is a homomorphism. The mechanistic relation, by contrast, is one of part/whole: the explaining features in a mechanistic explanation are the components of the explanandum phenomenon and their causal organization. Moreover, each component at one level of mechanism is constituted and explained by components at an underlying level of mechanism. Hence, it seems, computational variables and functions cannot be mechanistically explained by the medium-dependent states and properties that implement them. How, then, do the computational and the implementational integrate to create the mechanistic hierarchy? After explicating the general problem (Sect. 2), we demonstrate it further through a concrete example from the cognitive and neural sciences, reinforcement learning (Sects. 3 and 4). We then examine two possible solutions (Sect. 5). On one solution, the mechanistic hierarchy embeds computational and implementational properties at the same levels. This picture fits with the view that computational explanations are mechanistic sketches. On the other solution, there are two separate hierarchies, one computational and one implementational, related by the implementation relation. This picture fits with the view that computational explanations are functional and autonomous. It is less clear how these solutions fit with the view that computational explanations are full-fledged mechanistic explanations.
Finally, we argue that both pictures are consistent with the reinforcement learning example, but that scientific practice does not align with the view that computational models are merely mechanistic sketches (Sect. 6).
Funding information:
We thank Matteo Colombo, Nir Fresco, Arnon Levy, Corey J. Maley, Marcin Miłkowski, Gualtiero Piccinini, Mark Sprevak, the referees for Synthese, and the members of the GIF project "Causation and computation in cognitive neuroscience" (Ori Hacohen, Jens Harbecke, Shahar Hechtlinger, Vera Hoffmann-Kolss, Jan Philipp Köster, and Carlos Zednik), as well as the participants in the IACAP 2017 conference, the EPSA17 symposium on "The Computational Mind," and the Third Jerusalem-MCMP Workshop in the Philosophy of Science, for helpful comments that greatly improved this manuscript. This paper was also presented at colloquium seminars at Tel-Hai College and Ben-Gurion University. We also thank Zehava Cohen for creating the original figures in this paper. This research was supported by a grant from the GIF, the German-Israeli Foundation for Scientific Research and Development. Lotem Elber-Dorozko is grateful to the Azrieli Foundation for the award of an Azrieli Fellowship.
© 2019, Springer Nature B.V.
Keywords:
- Cognitive neuroscience
- Computational explanations
- Mechanistic explanations
- Mechanistic hierarchy
- Mechanistic levels