Thread by Gualtiero Piccinini
- Tweet
- Aug 24, 2022
- #MentalModel
Thread
There are a lot of misconceptions about mental representations.
Mental representations are neural states that are routinely observed and manipulated in the lab, have a viable semantics, and explain intentionality.
A thread.
1/
Many people, pro and con, think mental representations are theoretical posits analogous either to public representations (e.g., maps) or to digital representations within computers. This is an outdated and inadequate conception.
2/
Representations were initially posited by neuroscientists to explain behavior. They figured there must be inner states corresponding to external stimuli and inner causes corresponding to behaviors. This happened centuries ago, way before the cognitive revolution!
3/
Eventually neuroscientists found the states they were looking for: spike trains that are caused by stimuli and in turn cause behavior. States without which neurocognitive systems malfunction.
4/
They found that neural representations tend to be topographically organized to match the structure of the stimuli and behaviors. They found myriad ways to detect and manipulate them.
5/
It's like Mendel positing genes before anyone observed them and then molecular biologists finding DNA. Many details have changed and much more has been learned but the basic idea was correct.
6/
Ok, but are these states really representational? Many people, pro and con, think that neural representations are not really representations, or if they are they need a homunculus to interpret them, or at least that there is no viable story for how they get semantic content.
7/
On the contrary, in recent years there’s been remarkable convergence among philosophers of neuroscience (names below) towards a viable neurosemantics. Neural representations are a kind of structural representation that is (mostly?) acquired by neurocognitive systems ...
8/
along with the computations that process them. They need no homunculus. They represent what they carry information about (indicative representations) or what they are aimed to change in the world (imperative representations).
9/
Ok, but what if we are interested in _mental_ representations? Surely mental representations are at a “higher level” than neural representations?
10/
On the contrary, there are many levels in neurocognitive systems, and they are all neural. Mental representations are neural representations, possibly specialized for certain jobs. Neural “software” is still neural!
11/
There’s even the beginning of a story for how neural representations explain intentionality (e.g., Piccinini, “Nonnatural Mental Representations”), although many details remain to be worked out. If any ambitious philosophy grad student wants to work on this, get in touch.
12/
Some people whose work on neural representation I recommend: @Jonny_CW_Lee, @Eric__Thomson, @russpoldrack, R. Millikan, K. Neander, Nick Shea, @MilekPl, Bill Ramsey, Pawel Gładziejewski, Krystyna Bielecka, @cameronjbuckner, Oron Shagrir, Frankie Egan, @DLBarack, @blamlab
13/13