Kevin Ferron
2 min read · Apr 24, 2024


Oh, I don’t think you’re gaining power and money from this kind of essay.

Your points are about potential applications, and I agree it’s fun to explore how AI might someday solve real problems.

The issue is that LLMs are horribly inaccurate, sometimes producing complete and total hallucinations, and sometimes, more insidiously, partial hallucinations that are subtler and harder to catch. This makes them unsuitable for anything mission-critical.

The belief that hallucinations will somehow be solved, or that these models are getting better and better all the time, is part of the big lie being sold, and one you are unwittingly selling. The people who gain from this ‘verge of AGI’ narrative are counting on people just like you to do their job for them: filling in all the gaps this current technology actually has and extrapolating, with magical thinking, about how things are ‘going to be, very soon.’

It’s not that there aren’t interesting things happening, but there is no demonstrated path forward that makes LLMs meaningfully less faulty. There are, however, plenty of plans to extract capital from investors who believe there is.

So while you’re patting yourself on the back for being above the Luddites, you’re also missing the serious roadblocks and dead ends the technology actually has. That leaves you bedazzled, playing out imaginary scenarios that aren’t realistic given what exists today.

There are plenty of real problems in the world, but when it comes to AI today, the primary one seems to be our detachment from reality, and the willingness of hucksters and con artists to profit from that detachment.


Written by Kevin Ferron

Founder, Kevin Ferron Tech Consultancy & Digital Agency
