ChatGPT: should we be excited or worried?
ChatGPT has been in the news for several weeks. Still at the prototype stage, it is a kind of conversational super bot developed by OpenAI.
Its objective is to answer questions on any subject: law, computer code, general knowledge, and so on.
For those who don’t know ChatGPT yet, I invite you to open a free account on the OpenAI website to test it. I’m sure you’ll be amazed like I was.
So I tested ChatGPT by first giving it some logic exercises.
- A first example is the black-and-white ball puzzle. Three people sit in a triangle in a room, and each has on their head (unseen by them) a ball that is either black or white, drawn from a pool of three white and two black. If I am one of these people and I see that the other two have white balls on their heads, then, after waiting a while and seeing that nobody reacts, I can say with certainty that I too have a white ball on my head. But what is the reasoning? ChatGPT was only able to solve this puzzle partially.
- I also gave it a problem very similar to the Monty Hall problem, with the caveat that the host does not know which door hides the prize. ChatGPT managed to give me the right probability and, even better, explained how my puzzle differed from Monty Hall’s… I have to tell you right away: this first set of questions and answers really impressed me.
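In this variant (sometimes called “Monty Fall”), the ignorant host may accidentally reveal the prize; conditioned on a goat being revealed, switching wins only 1/2 of the time rather than the classic 2/3. A quick Monte Carlo check, written in Python as a sketch of my own (not ChatGPT’s answer):

```python
import random

def monty_fall_trial(rng):
    """One round where the host opens a random unchosen door.

    Returns (revealed_prize, switch_wins): if the host accidentally
    reveals the prize the round is discarded; otherwise we record
    whether switching would have won.
    """
    prize = rng.randrange(3)
    choice = rng.randrange(3)
    # Ignorant host: picks at random among the two doors not chosen.
    host = rng.choice([d for d in range(3) if d != choice])
    if host == prize:
        return True, False  # prize revealed: round discarded
    switch = next(d for d in range(3) if d not in (choice, host))
    return False, switch == prize

def switch_win_rate(n=100_000, seed=0):
    rng = random.Random(seed)
    wins = valid = 0
    for _ in range(n):
        revealed, win = monty_fall_trial(rng)
        if not revealed:
            valid += 1
            wins += win
    return wins / valid

print(round(switch_win_rate(), 2))  # ≈ 0.5, versus 2/3 in the classic puzzle
```

The difference comes from conditioning: because the ignorant host sometimes exposes the prize, the rounds that survive no longer favor switching.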
Then I asked it to rewrite this function in COBOL, then in Assembler. No problem for ChatGPT. Finally, I spotted a bug in its code: when the parameter of the function is null, ChatGPT does not test for it. I asked it: “What would happen if the parameter is null?” ChatGPT apologized (again) and immediately corrected the code of its function. This is extremely impressive!
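The function in question is not reproduced here, but the class of bug is easy to illustrate. A hypothetical Python version of the kind of fix ChatGPT applied (a guard on a null parameter) might look like this:

```python
def average(values):
    """Return the arithmetic mean of a list of numbers.

    Guard added after spotting the bug: a null (None) or empty
    parameter previously caused a crash instead of a clean error.
    """
    if values is None:
        raise ValueError("parameter must not be null")
    if not values:
        raise ValueError("parameter must not be empty")
    return sum(values) / len(values)

print(average([2, 4, 6]))  # 4.0
```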
Limits of ChatGPT
As you will have gathered, these first tests of ChatGPT left me seriously impressed. We are a very long way from Alexa, Siri and other assistants.
Nevertheless, this application has a number of limitations, and here are the most important ones for me:
- ChatGPT is based on an advanced version of OpenAI’s GPT-3 model. GPT-3 is currently one of the largest language models ever trained, with 175 billion parameters, and that is just the base model (probably more for the version used by ChatGPT). To put that number in perspective: it has been estimated that simply holding the model in a computer’s RAM would require about 700 GB, so forget about running it on a Raspberry Pi! Still on the subject of orders of magnitude, consider the training of GPT-3 (the main step in building the model). It has been calculated that a single V100 machine (the reference hardware for accelerating the training of models like GPT-3) would need 288 years to complete this training. Microsoft therefore made a large number of V100s available on Azure so that OpenAI could parallelize the computation, reducing the training time to a few months. The financial cost of building the model is estimated at around 4.6 million dollars in Azure computing power alone. Two conclusions follow from these orders of magnitude: rebuilding a model like GPT-3 is out of reach for almost everyone, and the carbon footprint of building even a single model is colossal.
- No sooner created than already obsolete. GPT-3 was trained on a dataset (web, social networks, Wikipedia, etc.) that, for the version used by ChatGPT, stops in 2021. The knowledge baked into GPT-3 therefore stops at 2021, and events since then are simply not taken into account. Ask ChatGPT who the current CEO of Twitter is, and it naturally replies that it is Jack Dorsey; when told that this is no longer true, it admits that its knowledge stops at 2021. The limit here is the financial and computational cost of building a newer, more up-to-date version, which cannot be done in near real time the way Google indexes websites.
- The last limit of ChatGPT, and in my humble opinion one of the most important, is that it sometimes makes mistakes, or lies by pretending to know things it does not. Yes, ChatGPT suffers from a form of ultracrepidarianism (cf. Étienne Klein in Le goût du vrai). The problem lies in the quality of the input data: it is very easy to bias a so-called “artificial” intelligence by teaching it false things. Even though OpenAI has taken precautions over the quality of the data used to train its models, ChatGPT is still sometimes wrong, or lies. This can become a problem if this kind of application turns into a reference source of information for everyone, because ChatGPT can potentially produce false information, even fake scientific articles, which can then act as an echo chamber. Recent years have unfortunately shown how much our societies can be weakened by false information. It is therefore crucial, at a time when research in these fields is advancing spectacularly, to also devote resources to the safety of these algorithms. We saw this again recently, when Google fired the researchers responsible for ethics and safety in the application of its own algorithms.
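Returning to the first limitation above, the 700 GB figure follows directly from the parameter count; assuming 32-bit (4-byte) floating-point weights, a common back-of-envelope assumption:

```python
params = 175_000_000_000   # GPT-3 base model: 175 billion parameters
bytes_per_param = 4        # assuming 32-bit floating-point weights

# Gigabytes needed just to hold the weights in memory, before any
# activations, optimizer state, or serving overhead.
ram_gb = params * bytes_per_param / 1e9
print(ram_gb)  # 700.0 GB, matching the estimate quoted above
```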
The ChatGPT competition
Since we are talking about Google, I keep hearing that ChatGPT will soon replace Google, or that Google is very worried about ChatGPT. As the limitations above make clear, ChatGPT cannot, to date, replace Google at its job of indexing the web: the financial and computational cost of its training method rules out anything close to real-time updates.
Google, for its part, has been working for several years on its own counterpart to ChatGPT, called LaMDA. The important difference is that Google has understood that the cost of training is a wall in itself. Its teams are therefore working on the PATHWAYS architecture, whose main distinction is the ability to update its models as data is collected: several models coexist to form a whole capable of adapting dynamically. At the time of writing I do not know how close Google’s approach is, functionally, to ChatGPT’s, but Google knows that its future will be decided here. Google Search vitally needs these technologies to improve its users’ experience.
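PATHWAYS itself is far more sophisticated than anything shown here, but the core contrast — a frozen snapshot that must be rebuilt from scratch versus a model that absorbs new data as it arrives — can be sketched in a few lines of Python (purely illustrative, not Google’s design):

```python
from collections import Counter

class IncrementalModel:
    """Toy 'model' (word counts) that absorbs new data without a full rebuild."""

    def __init__(self):
        self.counts = Counter()

    def update(self, documents):
        # Incremental update: each new batch is folded into the existing state,
        # unlike GPT-3-style training, which starts over from the whole dataset.
        for doc in documents:
            self.counts.update(doc.lower().split())

    def most_common(self, n=1):
        return self.counts.most_common(n)

model = IncrementalModel()
model.update(["the ceo of twitter is jack dorsey"])    # 2021-era snapshot
model.update(["the ceo of twitter has since changed"]) # newer data, no retraining
```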
And where does Agora Software fit into all this?
Rest assured, we are not comparing Agora to ChatGPT; that would be pretentious on our part, and we do not have the same objectives.
ChatGPT’s goal is to offer an “omniscient” application able to answer all your questions and solve many of your problems.
At Agora, our goal is to create targeted conversational application interfaces. For several years we have been working on our core technology, GAEL, our own NLP (Natural Language Processing) engine.
We have developed expertise in enabling software companies, platforms, communities and enterprises to create bots that translate natural language into API (application programming interface) calls to a given application. GAEL has the originality of not being centralized: it can be deployed and distributed in containers dedicated to our customers’ projects.
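GAEL’s internals are of course not public, and the following is purely illustrative; but the general idea of translating natural language into API calls can be sketched with a toy intent router (all endpoint names are hypothetical):

```python
import re

# Toy intent table: one regex per intent, each mapped to a builder that
# produces a (hypothetical) API call description.
INTENTS = [
    (re.compile(r"turn (on|off) the (\w+)"),
     lambda m: {"endpoint": "/devices/{}/power".format(m.group(2)),
                "method": "POST", "body": {"state": m.group(1)}}),
    (re.compile(r"what is the temperature in (\w+)"),
     lambda m: {"endpoint": "/sensors/{}/temperature".format(m.group(1)),
                "method": "GET", "body": None}),
]

def to_api_call(utterance):
    """Map a natural-language utterance to an API call description."""
    for pattern, build in INTENTS:
        m = pattern.search(utterance.lower())
        if m:
            return build(m)
    return None  # out of scope: the bot's competence stops with its APIs

print(to_api_call("Turn off the heater"))
```

The `None` branch is the point made below: a targeted bot simply declines anything outside the application it is connected to, rather than improvising an answer.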
We have also organized GAEL into learning classes, a concept specific to Agora Software that lets you supervise your projects directly from our administration console. Each customer can thus produce its own training data to create a unique Agora class, which is then tested and applied in the application context dedicated to that customer’s project.
Finally, GAEL is lightweight: the learning process to create a class takes less than a minute, and the model fits easily on a Raspberry Pi. As you can imagine, with such a small memory and CPU footprint, the carbon footprint is significantly reduced.
Under these conditions, our NLP is not intended to be “omniscient” and answer every question: its field of competence stops where the application GAEL is connected to stops, functionally.
The strength of our solution is the freedom it gives our customers, while remaining in a totally dedicated and secure environment that keeps data sharing between projects to a minimum.
Agora Software is based on a decentralized and multi-tenant cloud platform, making it possible to connect your project to any collaborative platform and to link your different users in multiple languages, whether they interact on MS Teams, Google Chat, WhatsApp, Messenger or by SMS, via an Agora conversational interface.
We propose an omnichannel, multilingual approach, in the spirit of reusing as much as possible the tools our clients have already deployed (collaborative platforms, social networks, SMS, etc.), with the advantage of communicating with users directly where they already are.
ChatGPT is undoubtedly the state of the art in this field. The results it produces are spectacular and show the incredible technical progress made in recent years.
Nevertheless, ChatGPT is not really capable of replacing a human being. It has been trained only on words and sentence corpora, whereas fully understanding a sentence often requires deeper consideration of its context of occurrence as well as non-verbal elements.
ChatGPT and other similar systems are great tools that will increase productivity in many areas, or help diagnose this or that problem. These systems reproduce, more or less faithfully, what they have learned, within the limits of what these technologies can learn today. They will probably change our lives, but without ever replacing us. And, for better or for worse, a hammer is also a tool: one with which we can drive nails, or hit our fingers.
To quote Jacob Browning and Yann LeCun (cf. AI and the Limits of Language): “it is clear that these systems are doomed to a superficial understanding that will never come close to the complete thought that we see in humans”.
I would like to point out that this article was not written by ChatGPT 😊, and I invite you to consult the sources referenced throughout the text.
If you are interested in this article, please visit our worktech projects page, and do not hesitate to contact us if you wish to discuss a project with us.
Join us on our LinkedIn page to follow our news!
Want to understand how our conversational AI platform optimizes your users’ productivity and engagement by effectively complementing your enterprise applications?