The Explanatory Force of Network Models

Carl F. Craver, Preprint, Jan 10, 2017
Commentary by Stephen Downes

Interesting paper that looks at network analyses of things (including brains) and asks about their explanatory power. A quick case in point (my own example): if we say a person knows P because she has network configuration C, does C explain why she knows P? Or does P explain why she has configuration C? This may seem trivial, but if we want to produce P in a person, the direction of explanation matters, as it (maybe) tells us what causes what. The author's thesis (stated in the abstract and again in the third paragraph) is rather awkwardly worded, but the conclusion is clear: network analyses do not redefine the norms of explanation, and they face the same methodological puzzles as other forms of explanation. Worth reading for the lucid discussion of graph theory as it relates to neural networks. Preprint on Carl F. Craver's website, found via Philosophical Progress. Image: Wikipedia.

