Can There Be a Microscope of the Mind?

Michael Feldstein, e-Literate, Apr 17, 2017
Commentary by Stephen Downes
[Image: fMRI scan during working memory tasks]

This is a valuable post because it brings together and explains a number of elements of what we might call a cognitivist theory of mind. From where I sit, though, it brings together a lot of nonsense, and the overall theory of mind proposed here is seriously flawed. 

Here's the theory, in a nutshell: cognitive processes (like encoding, planning, solving) are mirrored by brain processes (or, reductively, cognitive processes are brain processes). These processes can be observed using Functional Magnetic Resonance Imaging (fMRI). fMRI is limited, but various types of machine learning (ML) are used to analyze the images "to create a fingerprint of brain activity that is distinctively correlated with a particular mental state." Knowing that these are cognitive processes, we can now fill in the gaps in a sequence of images using prior probability.
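To make the theory concrete, here is a toy sketch (with invented voxel values, state labels, and transition priors — none of this is real fMRI data or Feldstein's actual method): a nearest-centroid "fingerprint" classifier assigns a cognitive state to each scan, and a missing scan in the sequence is filled in using an assumed prior over state transitions.

```python
import math

# Toy "scans": each is a vector of voxel activations (invented numbers);
# labels are the hypothesized cognitive states from the theory.
training = {
    "encoding": [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]],
    "planning": [[0.1, 0.9, 0.3], [0.2, 0.8, 0.2]],
    "solving":  [[0.2, 0.3, 0.9], [0.1, 0.2, 0.8]],
}

def centroid(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

# The "fingerprint" of each state is just the centroid of its examples.
fingerprints = {state: centroid(vs) for state, vs in training.items()}

def classify(scan):
    # Assign whichever state's fingerprint is closest (Euclidean distance).
    return min(fingerprints, key=lambda s: math.dist(fingerprints[s], scan))

# A sequence of scans with a gap (None = missing or unusable frame).
sequence = [[0.85, 0.15, 0.2], None, [0.15, 0.25, 0.85]]
labels = [classify(s) if s is not None else None for s in sequence]

# Fill the gap with the state most probable a priori after the previous
# state (transition probabilities are assumed for illustration).
transition_prior = {
    ("encoding", "planning"): 0.6,
    ("encoding", "solving"): 0.2,
    ("encoding", "encoding"): 0.2,
}
prev = labels[0]
labels[1] = max(fingerprints, key=lambda s: transition_prior.get((prev, s), 0.0))
print(labels)  # ['encoding', 'planning', 'solving']
```

The point of the sketch is that the pipeline only ever works *within* the set of states it was given: the classifier and the prior both presuppose the cognitive categories rather than discover them.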

So, why do I say this is nonsense? There is no good reason to suppose cognitive processes are mirrored by brain processes. What Feldstein describes here is equivalent to using heat maps of hard drives to understand the narrative structure of Moby-Dick. Nothing in the former bears any resemblance to the latter. This is because 'narrative structure' is an interpretation of the data, and not inherent in the data. It seems to us that Ahab is obsessed with the great whale, but no study of the hard drive will ever uncover that obsession.

And the key to why this is nonsense is actually found in the statement of the theory. When we process fMRI images, why don't we use a sequence of 'encoding, planning, solving...'? Because there's no way to actually do that; the data underdetermines our choice of cognitive structure. That's why we use machine learning. But suppose humans themselves use machine learning — after all, machine learning is based on neural networks! Then the cognitive processes the fMRI analysis supposedly reveals don't actually exist. It's as if we were studying clouds, asked our software to find images of bunnies in the clouds, and then concluded "we have discovered that clouds contain bunnies."
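The cloud-bunnies objection can itself be sketched (again with invented data): train a classifier on pure noise with arbitrary labels imposed on it, and it will confidently assign one of those labels to every new input — not because the structure is in the data, but because those are the only answers it can give.

```python
import random

random.seed(0)

# "Clouds": random noise vectors with no real structure at all.
def cloud():
    return [random.random() for _ in range(5)]

# Arbitrary labels imposed on the noise -- the "bunnies" we ask for.
shapes = ["bunny", "dragon", "ship"]
training = [(cloud(), random.choice(shapes)) for _ in range(30)]

def classify(x):
    # 1-nearest-neighbour: always returns one of the imposed labels.
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(training, key=lambda pair: dist(pair[0], x))[1]

# Every new cloud gets labelled a bunny, dragon, or ship, even though
# the data contains none of these categories.
predictions = [classify(cloud()) for _ in range(10)]
print(predictions)
```

Every prediction is drawn from the imposed label set; nothing in the output distinguishes categories that are really in the data from categories the analyst supplied.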
