Micah R Ledbetter /mɹ̩ˈled/
@mrled
A weird parallel between LLMs and humans: introspection isn't standard. An LLM only knows things like "how many tokens can you accept?" if its own documentation is in the training set; humans (sometimes) only know things like "why are you feeling angry?" after lots of psychological work.