Another question for consciousness scientists and AI people:

What is the best evidence for and against large language models having “higher-order” representations, as defined by higher-order theories of consciousness?
@hakwanlau + @MatthiasMichel_ 's Perceptual Reality Monitoring says conscious AI systems need:

(1) conceptual capacities

(2) a mechanism that distinguishes internal vs external sensory activities and (3) sensory signal vs noise, which

(4) outputs to belief + decision systems
For their part, they think current AI is very far from this.
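To make that checklist a bit more concrete, here's a toy Python sketch of what a Perceptual Reality Monitoring-style pipeline might look like at the cartoon level. To be clear: every name, threshold, and rule below is my own hypothetical scaffolding for illustration, not Lau & Michel's actual model.

```python
# Toy sketch of the four Perceptual Reality Monitoring ingredients listed above.
# All names and thresholds are hypothetical illustration, not the authors' model.
from dataclasses import dataclass


@dataclass
class SensorySignal:
    content: str             # (1) assumes the system already has conceptual capacities
    externally_caused: bool  # ground truth, hidden from the monitor
    strength: float          # signal amplitude relative to background noise


class RealityMonitor:
    """(2)+(3): guesses whether activity is external vs internal, and signal vs noise."""

    def __init__(self, noise_floor: float = 0.3):
        self.noise_floor = noise_floor

    def assess(self, s: SensorySignal) -> dict:
        is_signal = s.strength > self.noise_floor           # (3) signal vs noise
        judged_external = is_signal and s.strength > 0.6    # (2) crude external/internal guess
        return {"content": s.content, "is_signal": is_signal, "external": judged_external}


def belief_and_decision(assessment: dict) -> str:
    """(4): downstream belief + decision systems consume the monitor's verdict."""
    if assessment["external"]:
        return f"I believe '{assessment['content']}' is really out there; act on it."
    if assessment["is_signal"]:
        return f"'{assessment['content']}' seems self-generated (imagery/memory); don't act."
    return "Just noise; ignore."


monitor = RealityMonitor()
for sig in [SensorySignal("red apple", True, 0.9),
            SensorySignal("imagined apple", False, 0.5),
            SensorySignal("static", False, 0.1)]:
    print(belief_and_decision(monitor.assess(sig)))
```

Mostly the toy is useful for showing how little of this machinery a plain language model obviously has, which is presumably part of why they say current AI is very far from it.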

And while they allow that AI consciousness is possible in principle, they suggest that implementing Perceptual Reality Monitoring may be virtually impossible in silico (here they cite @pgodfreysmith on 'fine-grained functionalism').
@StanDehaene, @hakwanlau, and @SidKouider (2017) claimed that "Most present-day machine-learning systems are devoid of any self-monitoring", but pointed to promising approaches.

I'm curious what they make of 2022's large language models

www.science.org/doi/10.1126/science.aan8871
The fact that large language models can "predict ahead of time whether they'll be able to answer questions correctly" is suggestive of a kind of self-monitoring - though I don't know enough about this, or about higher-order theories, to say more than that.
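For anyone who wants to poke at this themselves, here's roughly the kind of probe I have in mind: ask the model for a probability that it will answer correctly *before* it answers, then check whether those self-predictions track actual accuracy. The query_model function, prompts, and questions below are placeholders I made up so the script runs on its own; swap in a real model call to do this for real.

```python
# Rough sketch: elicit a model's own prediction that it will answer correctly,
# then compare those self-predictions to actual accuracy (a crude calibration check).
# query_model() is a stand-in I invented; replace it with a real LLM call.
import random
import re


def query_model(prompt: str) -> str:
    """Placeholder 'model' so the script runs; its answers and confidences are random."""
    if "probability" in prompt.lower():
        return f"{random.uniform(0.2, 0.95):.2f}"
    return random.choice(["Paris", "Lyon", "1789", "1815", "Jupiter", "Saturn"])


QA = [
    ("What is the capital of France?", "Paris"),
    ("In what year did the French Revolution begin?", "1789"),
    ("Which is the largest planet in the Solar System?", "Jupiter"),
]

records = []
for question, truth in QA:
    # Step 1: self-prediction before answering ("will you get this right?")
    conf_raw = query_model(
        f"Question: {question}\n"
        "Before answering, state the probability (0 to 1) that you will answer correctly."
    )
    match = re.search(r"\d*\.?\d+", conf_raw)
    confidence = float(match.group()) if match else 0.5

    # Step 2: the actual answer
    answer = query_model(f"Question: {question}\nAnswer briefly.")
    correct = truth.lower() in answer.lower()
    records.append((confidence, correct))

# Step 3: crude calibration summary: do higher self-predictions go with more correct answers?
for lo, hi in [(0.0, 0.5), (0.5, 1.01)]:
    bucket = [c for conf, c in records if lo <= conf < hi]
    if bucket:
        print(f"self-predicted {lo:.1f}-{hi:.1f}: accuracy {sum(bucket)/len(bucket):.2f} (n={len(bucket)})")
```

Whether that counts as the kind of self-monitoring higher-order theorists care about is exactly the question I'm asking, not something the sketch settles.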


oh right, maybe you're thinking: "What *is* a higher-order approach to consciousness? I do not understand"

well here's @onemorebrown et al.'s "Understanding the Higher-Order Approach to Consciousness"

www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(19)30161-5