This Reddit post is perhaps the best I’ve seen about AI’s failings in some time.
ChatGPT is essentially no different from predictive text completion: it tries to produce an answer that sounds like something someone would type in response to a prompt. Predictive text completion is a feature I always turn off on my phone.
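To make that concrete, here is a minimal sketch of the idea, with a toy bigram model standing in for the vastly larger neural network. The corpus and function names are my own illustration, not anything from ChatGPT, but the objective is the same in kind: pick a statistically plausible next word, with no model of truth anywhere.

```python
import random
from collections import defaultdict

# Build a bigram table from a tiny corpus: for each word, record
# which words followed it and how often.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def complete(prompt_word, length=8):
    """Emit words that merely *sound* plausible after the prompt."""
    word, out = prompt_word, []
    for _ in range(length):
        candidates = following.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # a statistically likely next word
        out.append(word)
    return " ".join(out)

print(complete("the"))   # e.g. "cat sat on the rug . the dog"
```

The output is fluent-looking precisely because it is assembled from things people have already typed, not because anything was understood.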
The software that translates natural-language prompts into code is perhaps interesting, but we should assume it is just as flawed as the text-generation engine itself.
ChatGPT is, therefore, completely unsuitable for the applications where it is most likely to be deployed: replacing human customer service, analytics, writing, and so on.
When giving anyone a task, if you cannot afford the time to validate the result, it is best not to give that person the task at all. Now imagine you validate the result once, or even ten times, and it looks right. Can you trust that person to do it correctly the next 1,000 times? When you are dealing with only an illusion of mental awareness, you cannot.
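To put numbers on that intuition (the 99% per-task accuracy below is an assumed figure for illustration, not a measured one):

```python
# If a system is correct on any single task with probability p,
# the chance it survives n independent tasks with no error is p**n.
p = 0.99  # assumed per-task accuracy; a generous guess
for n in (1, 10, 100, 1000):
    print(f"{n:5d}  {1 - p**n:.5f}")  # probability of at least one failure
#     1  0.01000
#    10  0.09562
#   100  0.63397
#  1000  0.99996
```

Even a system that looks right ten times in a row is nearly certain to fail somewhere in the next thousand runs, and nothing inside it will tell you where.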
What has happened in the AI industry is that companies deliberately create the false impression that their product is dangerous, holding meetings and forming teams and summits about “AI safety” and so on, all to create the impression that it is powerful. In reality, we have a technology that is not far removed from Eliza. The various people who have made statements about “fear” of their AI systems are essentially working to increase the marketability of their systems. The only “fear” should be that these systems have no quality.
AI image generation is somewhat interesting, and seemingly magical at times, but it has a similar problem with intelligence. It only seems smart because we do not understand the underlying math.
However, it’s much easier to make image generation “glitch” than to find holes in a text system. Either way, with any “machine learning” approach the system cannot tell you what it knows; it can only do. This is the approach’s greatest flaw. Human intelligence is different: we can generally learn something and then tell you what we learned. If you ask a ChatGPT-like AI what it knows, it will only text-complete based on information in its training set, which has absolutely no relation to what it actually knows.
With image generation, it is very hard to describe one object doing something to another object; what the system really does is merge pictures that match keywords. It cannot create a truly novel composition that does not infringe upon its data sources. It has very few “expert systems” that understand anything it is asked to do, for instance, what clothes are or how objects may relate to one another.
Stable Diffusion prompt using the Juggernaut XL model: “A chicken delivering three pizzas to a trout with an rabbit on his head standing in a park with a large round fountain in front of a duck pond” … the result is very wrong.
“A dolphin playing chess with a duck” … this one is a little better, but note the distorted chess pieces, the 6x9 board, and the strange region under the duck. Look at the dolphin’s mouth for a rather serious error.
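For anyone who wants to reproduce these glitches, here is a minimal sketch using Hugging Face’s diffusers library. The model id is an assumption (point it at whichever Juggernaut XL checkpoint you actually use), and the prompt is kept verbatim from above, typo included.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Assumed model id; substitute your own Juggernaut XL checkpoint.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "RunDiffusion/Juggernaut-XL-v9",
    torch_dtype=torch.float16,
).to("cuda")

# Prompt kept verbatim from the post, "an rabbit" and all.
prompt = ("A chicken delivering three pizzas to a trout with an rabbit "
          "on his head standing in a park with a large round fountain "
          "in front of a duck pond")
image = pipe(prompt).images[0]
image.save("chicken_trout.png")  # count how many of the relations survived
```

Run it a few times and check how many of the prompt’s object relationships (delivering, on his head, in front of) actually appear in any output.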
In both the “strawberry” example and these images, there is nothing in the training set that addresses what the “model” was asked. Further, even if there were an answer, it would likely be rendered incorrect by confusion with other data in the model, or by the way the data was tagged when it was assimilated.
Machine learning techniques can be used for anomaly detection, and that is quite valid.
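For contrast with the uses criticized above, here is a minimal sketch of that kind of valid use, with scikit-learn’s IsolationForest on synthetic data; everything here (the data, the injected outliers) is illustrative only.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))  # routine observations
spikes = np.array([[8.0, 8.0], [-7.0, 9.0]])            # injected anomalies

clf = IsolationForest(random_state=0).fit(normal)
print(clf.predict(spikes))      # [-1 -1]: both flagged as anomalous
print(clf.predict(normal[:3]))  # mostly 1: inliers
```

The difference is that the output here is a narrow, checkable flag handed to a human, not a simulation of understanding.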
When we see AI used in applications affecting cloud security or IT automation, in particular, we should be incredibly suspicious of the intelligence of anyone working at such a company. It is a sign that they do not think the customer is intelligent enough to understand they are being misled.
All questions should come back to: What is the training set? Is the training set safe? What tags and metadata exist about the training set? Am I being fooled because I don’t know how the system works?
The mere name “artificial intelligence” is an attempt to mislead, used much as “synergy,” or sticking “e-” or “cyber-” in front of a phrase, was used around the year 2000. Where we see the term, we should increase our suspicion, not our reverence.