Mentions
Andrej Karpathy (@karpathy) · Nov 17, 2022
Good post. A lot of interest atm in wiring up LLMs to a wider compute infrastructure via text I/O (e.g. calculator, python interpreter, google search, scratchpads, databases, ...). The LLM becomes the "cognitive engine" orchestrating resources, its thought stack trace in raw text
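As a concrete illustration of the pattern this mention describes, here is a minimal, self-contained sketch of such a text-I/O orchestration loop. The model call is stubbed out (`scripted_model` stands in for a real LLM), and the `CALC:`/`ANSWER:`/`OBSERVATION:` text protocol and helper names are hypothetical conventions chosen for this example, not any particular system's API.

```python
# Illustrative sketch only: a toy "cognitive engine" loop in which a language
# model drives external tools purely through text I/O, and the full transcript
# (the model's "thought stack trace") accumulates as raw text.

import ast
import operator

# Safe arithmetic evaluator used as the "calculator" tool.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expr: str) -> str:
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return str(ev(ast.parse(expr, mode="eval")))

def scripted_model(transcript: str) -> str:
    # Stand-in for an LLM call: a real system would feed the transcript so far
    # to the model and let it emit the next text action.
    if "OBSERVATION:" not in transcript:
        return "THOUGHT: I need the product first.\nCALC: 12 * 7"
    return "ANSWER: 12 * 7 = 84"

def run(question: str, max_steps: int = 5) -> str:
    transcript = f"QUESTION: {question}\n"
    for _ in range(max_steps):
        step = scripted_model(transcript)
        transcript += step + "\n"
        if step.startswith("ANSWER:"):          # model decided it is done
            return transcript
        for line in step.splitlines():
            if line.startswith("CALC:"):        # dispatch to the calculator tool
                result = calculator(line[len("CALC:"):].strip())
                transcript += f"OBSERVATION: {result}\n"
    return transcript

if __name__ == "__main__":
    print(run("What is 12 times 7?"))
```

The key design point matches the mention: every exchange between the model and its tools happens in plain text appended to one running transcript, so the same mechanism extends to a Python interpreter, search, scratchpads, or databases by adding more tool prefixes.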