Kevin Ferron
Apr 16, 2024

I don't disagree with the idea that "nevers" and "always" are dangerous.

At the fundamental level of the technology LLMs are built on, there is no path forward for eliminating hallucinations, which makes them unfit for mission-critical operational use.
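To make the "fundamental level" point concrete: a decoder-only LLM generates text by sampling from a next-token probability distribution, and nothing in that step consults a source of truth. Here is a minimal toy sketch of that mechanism (the distribution and the Eiffel Tower example are invented purely for illustration, not taken from any real model):

```python
import random

# Toy next-token distribution for the prompt
# "The Eiffel Tower opened in ...". The model assigns probability
# by fluency/plausibility of the continuation, not by factual accuracy,
# so false-but-plausible completions still get probability mass.
next_token_probs = {
    "1889": 0.55,  # the true completion
    "1887": 0.30,  # plausible but false
    "1901": 0.15,  # plausible but false
}

def sample_next_token(probs: dict) -> str:
    """Sample a token in proportion to the model's probabilities.

    No step here checks the output against ground truth; any fluent
    falsehood with nonzero mass will eventually be emitted.
    """
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Over many generations, roughly 45% of completions in this toy case
# are confident statements of the wrong year: hallucination as a
# property of the sampling mechanism itself, not an occasional bug.
samples = [sample_next_token(next_token_probs) for _ in range(1000)]
wrong = sum(t != "1889" for t in samples)
print(f"{wrong / len(samples):.0%} of sampled completions are false")
```

Lowering the temperature or taking the argmax only suppresses the symptom in this toy example; whenever the highest-probability continuation is itself false, the mechanism emits it just as confidently.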

Whether some kind of AI will someday achieve the current fantasy of what people believe LLMs to be, who can say. But I can say that LLMs will not be the way.
