Will ChatGPT replace our jobs?
AI-generated content (AIGC) has finally caught the attention of the general public. ChatGPT, an artificial intelligence chatbot developed by OpenAI, has surged in popularity, surpassing 100 million users in just two months. This has sparked widespread discussion in the media and on forums about whether ChatGPT will eventually replace certain professions, rendering many human jobs obsolete. The fear has only been compounded by recent news that ChatGPT passed the Google Level 3 Engineer interview, putting the engineering profession directly in the spotlight.
When the program can produce polished text and even simple code from nothing more than a string of keywords, everyone arrives at the same first question:
Will ChatGPT replace my job?
There is no point in overreacting at this stage. Recently, there has been a wealth of articles exploring both the benefits and limitations of generative artificial intelligence. The technology behind ChatGPT is the large language model (LLM). Just in the past two days, two prominent AI experts, Yann LeCun and Gary Marcus, have joined forces to outline the numerous limitations of LLMs. LeCun made the following points:
• LLMs can currently only be used as an aid in writing
• LLMs are "passive" and lack logic
• LLMs will occasionally speak nonsense
• LLM errors can be mitigated by human feedback, but cannot be completely eliminated
• Better systems will eventually appear
LeCun also pointed out the fatal flaw of LLMs:
"Language only records a small part of all human knowledge; most of the human knowledge or animal knowledge is non-linguistic."
According to LeCun, ChatGPT does not possess a genuine understanding of the language it uses and is unable to comprehend human knowledge. Language as a mode of communication is a specialized but limited means of conveying knowledge. Whether in the form of natural language, code or symbols, these forms of representation aim to express individual objects, attributes and their relationships in an abstract manner.
Language conveys limited information. Compared to non-linguistic forms of representation such as images and charts, it is an inferior means of conveying information. People often use language to compress large amounts of information into concise sentences, which require context and inference to recover their true meaning.
This highlights the fundamental problem with LLMs. Language has never been an unambiguous mode of communication and will not be the ultimate means of training artificial intelligence. Hence, LLMs should not be considered true AI: they are, in effect, vast databases that regurgitate the information fed to them without truly understanding any of it.
While using ChatGPT, we saw a significant advance in the LLM’s ability to comprehend context. However, when we attempted to shift perspectives, it became evident that ChatGPT resets itself to accommodate the change rather than offering a counterargument, resulting in a fragmented worldview.
This phenomenon explains the feeling of vague understanding that ChatGPT gives users. Although the language model does grant ChatGPT access to a portion of human knowledge, it is still limited by what natural language can express.
This article does not aim to dispute the groundbreaking innovation that LLMs represent. However, it must be acknowledged that we are still far from achieving true artificial intelligence, and that we will not be “replaced” by LLMs overnight.