Limits of language AI: ChatGPT between efficiency and wishful thinking

The current debates about language models such as ChatGPT give the impression that there are hardly any limits to machine language processing. Whether annual report, newspaper article, homework or novel – chatbots seem to master everything. Many voices assume that machine-generated texts, not human-written ones, will soon be the norm. Expectations when reading texts would change as a result: AI products would be the rule, human texts the exception. They would stand out, the way a handwritten postcard does today.

However, the systems still have serious shortcomings. Anyone experimenting with ChatGPT quickly encounters errors. The basic problem is that the AI does not understand what it is talking about. What the human brain grasps as semantic signs that convey meaning, the software processes as syntactic and mathematical relationships derived from training data and statistical calculation. Systems like ChatGPT bet that the level of meaning in language can be accessed via numerical relations. So are speaking and writing functions that can be calculated, like routes on Google Maps, matches on Parship, or the weather forecast?
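To make the contrast concrete, here is a minimal sketch in Python of the statistical principle the paragraph describes, reduced to counted word pairs. It is emphatically not ChatGPT's actual architecture, which uses neural networks trained on vastly larger contexts; it only illustrates how "meaning" is replaced by probabilities derived from training data.

from collections import Counter, defaultdict

# A tiny training corpus standing in for the web-scale data real models use.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word_probabilities(word):
    """Turn raw counts into probabilities for the word most likely to come next."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(next_word_probabilities("sat"))  # {'on': 1.0}

The system "knows" that "on" follows "sat" only because that sequence occurred in its data, not because it understands sitting or surfaces. Scaled up enormously, this is the bet the article describes: that semantics can be approximated by such numerical relations.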

The leap in performance that language models have made over the past few years is astonishing. It shows how far the approach of stringing together words and word sequences on the basis of statistical probabilities can go. Yet there is no evidence that semantics can be fully derived from syntactic relations. The opposite cannot be proven either. So is it only a matter of time before such systems master the level of meaning, whether by developing genuine language understanding or by simulating it sufficiently well? Or does the AI have categorical limits beyond which it cannot grow?
