
1. westur (OP) 2023-11-20 21:00:55
Q: Is this a valid argument? "The structure that allows the LLM to realistically 'mimic' human communication is its intelligence." From https://g.co/bard/share/a8c674cfa5f4:

> [...]

> Premise 1: LLMs can realistically "mimic" human communication.

> Premise 2: LLMs are trained on massive amounts of text data.

> Conclusion: The structure that allows LLMs to realistically "mimic" human communication is its intelligence.

"If P then Q" is the Material conditional: https://en.wikipedia.org/wiki/Material_conditional

Does it do logical reasoning or inference before presenting text to the user?

That's a lot of waste heat.

(Edit) Or, with next-word prediction, the prediction just is it.
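To illustrate what "the prediction just is it" means mechanically, here's a toy sketch: a hypothetical hand-written bigram table stands in for the model (a real LLM replaces it with a neural net over its whole vocabulary), and generation is nothing but repeatedly sampling the next token, with no separate inference pass before the text is emitted.

    import random

    # Hypothetical toy bigram table standing in for a real model's
    # next-token distribution; purely illustrative.
    BIGRAMS = {
        "llms": {"mimic": 0.6, "are": 0.4},
        "mimic": {"human": 1.0},
        "human": {"communication": 1.0},
        "are": {"trained": 1.0},
        "trained": {"on": 1.0},
        "on": {"text": 1.0},
    }

    def generate(prompt, max_tokens=6):
        tokens = prompt.lower().split()
        for _ in range(max_tokens):
            dist = BIGRAMS.get(tokens[-1])
            if not dist:
                break
            words, probs = zip(*dist.items())
            tokens.append(random.choices(words, weights=probs)[0])  # sample the next token
        return " ".join(tokens)

    print(generate("LLMs"))  # e.g. "llms mimic human communication"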

"LLMs cannot find reasoning errors, but can correct them" >>38353285

"Misalignment and Deception by an autonomous stock trading LLM agent" https://news.ycombinator.com/item?id=38353880#38354486
