Agora Software

Emergence

Or how is it possible that a next-word prediction machine (GPT) is able to answer (almost) all our questions? And by the way, what is it actually answering?

Emergence of ChatGPT

GPT: who, or what, is it?

GPT’s designers have found a way of transcoding a huge textual knowledge base (the Web) into another equally huge, but digital, one.

This makes it easy for computers to manipulate. To give a few orders of magnitude:

  • Number of books in Borges’ Library of Babel: 10^5000,
  • Number of atoms in the observable universe: 10^80,
  • Number of words on the Web: 10^15,
  • Number of GPT parameters: 10^12,
  • Number of neurons in a human brain: 10^11 (many more in cats, as we know).

So GPT concentrates the Web by a factor of about 1,000 (10^15 words into 10^12 parameters), which is both a lot and not all that surprising.

GPT’s principle is that of a “next-word prediction machine” (i.e., it predicts the most likely next word), based on a context made up of:

  • A database of ordered words (the Web),
  • A more or less precise question (the “prompt”),
  • Its algorithmic model (the details of which are not revealed).

As it happens, the power of the model and the richness of the information base mean that its answers are most often relevant.
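To make the principle concrete, here is a deliberately naive sketch of such a loop in Python. The tiny bigram table stands in for GPT’s billions of parameters; everything here is illustrative, not GPT’s actual mechanism:

```python
# Toy "next word prediction machine": repeatedly append the most likely
# next word given the current context. Purely illustrative.

# Hypothetical miniature "model": next-word probabilities given the last word.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"<end>": 1.0},
}

def generate(prompt: str, max_new_words: int = 10) -> str:
    words = prompt.split()
    for _ in range(max_new_words):
        probs = BIGRAMS.get(words[-1], {"<end>": 1.0})
        next_word = max(probs, key=probs.get)  # pick the most likely word
        if next_word == "<end>":
            break
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # -> "the cat sat"
```

GPT does the same thing in spirit, except that its “table” is a neural network with some 10^12 parameters, conditioned on the whole preceding text rather than on the last word alone.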

GPT responds with words that make sense! Semantics emerge from signs, and knowledge emerges from a simple string of words. Ouch!

What does GPT tell us?

It’s not telling us anything. Or rather, it doesn’t understand what it’s saying. No more than a CD player or a streaming site understands the music or film it’s playing.

GPT is a parrot: not a learned one, but one gifted with a formidable talent for finding needles (the answer to a prompt) in a haystack (the Web).

Nothing comes from it. But it’s surprisingly good at finding the information we’re looking for, and it has the language to say so.

And it’s this apparent closeness to us that surprises us the most.

Aren’t we tempted to talk to it politely and to thank it, as if it really cared?

"A
DALL·E 2023-05-20 08.42.20 – A blonde-haired woman dressed in 1950s clothing, on a train, looking at the desert in the background, by Edward Hopper (2 versions). Generated in a matter of seconds on a mass-market application, these images are still a little naive, but it’s easy to see how close we are to getting a Hopper as real as life (or more so?).

GPT is magic, so “there’s a trick”

The thing is, human language is a game of conventions: a vague consensus, perfected year after year by those who use it.

It’s this decoupling of the signified and the signifier, combined with the invention of the alphabet, that makes it so powerful. It enables us to use just a few dozen signs (letters) to form hundreds of thousands of combinations (words). These can then be combined to form an unlimited number of concepts and meanings. A radical economy of means at the service of infinite possibilities.

The meaning underlying the combinations of signs is therefore only latent in the text. It resides in the mind of the writer, who starts from the meaning he or she wishes to share, and who hopes that readers will be able to make the journey backward: from signs to meaning.

So it’s the reader (human or machine, for that matter) who brings out the meaning of the words and “statistical” phrases constructed by GPT.

Garbage in, garbage out

GPT “knows” nothing but has access to everything (the texts). The knowledge that emerges and surprises us when we interact with this computer program is the knowledge residing in the zillions of interconnected files that make up the Web.

Devoid of common sense, GPT may also ingest information that is (statistically) too inaccurate. So we shouldn’t be surprised to see GPT turn conspiratorial and denialist. Or even argue that the Earth is shaped like a corkscrew. Or that blue-eyed people are aliens who will devour us at the first opportunity, Mars Attacks! style.

Le Nœud noir – Georges Seurat – The structure of the drawing only emerges when we step back. The painter transcribes the idea into signs (the elements of the drawing); our eye picks up the signs, and our (pre-trained) brain makes the artist’s original idea emerge anew.

GPT, what's next?

Is GPT conscious?

Obviously not. Nor is it intelligent, even a little.

And that’s what troubles us when we question it. What we capture in the answers of this seemingly magical tool is only the intelligence of its designers, and that of all the contributors to the Web: their collective intelligence, reflected back at us.

And that’s no mean feat.

Is it possible to enhance its capabilities by giving it some form of intelligence?

That may not be as impossible as it sounds. And it is undoubtedly being studied in the laboratories of AI players, well-intentioned or otherwise.

For example, by developing a layer of algorithms capable of using the universal memory that is GPT, dynamically proposing the right prompts to serve a larger task.

The addition of recursive structures and a dash of reinforcement learning could boost the capabilities of this future AI tenfold, taking us with it into unexplored territories of human-machine cohabitation.
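As a purely hypothetical sketch of what such a layer could look like: a larger task is split into sub-questions, each answered by a prompt, and the partial answers are merged, recursively. The recursion strategy, the prompts, the gpt-4o-mini model name, and the use of the OpenAI Python client are all assumptions for this example, not a description of any actual product:

```python
# Hypothetical orchestration layer over a GPT-like model: decompose a task
# into sub-questions, answer each one, and merge the answers recursively.
# Assumes the official OpenAI Python client and an OPENAI_API_KEY in the
# environment; the model name and prompts are illustrative choices.
from openai import OpenAI

client = OpenAI()

def ask_llm(prompt: str) -> str:
    """One round-trip to the model."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def solve(task: str, depth: int = 0, max_depth: int = 2) -> str:
    """Recursively split a task, solve the parts, then combine them."""
    if depth >= max_depth:
        return ask_llm(task)
    plan = ask_llm(f"List two or three sub-questions needed to solve:\n{task}")
    partial = [solve(q.strip(), depth + 1, max_depth)
               for q in plan.splitlines() if q.strip()]
    return ask_llm(f"Task: {task}\nPartial answers:\n"
                   + "\n".join(partial)
                   + "\nCombine these into one final answer.")
```

On top of such a loop, a reward signal scoring the quality of the final answers is roughly where the “dash of reinforcement learning” mentioned above would come in.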

We can infer that adaptive properties, specific to what we call intelligence, could then emerge from such capacities, just as a form of knowledge emerges from the “next-word prediction machine” that is GPT.

To reassure ourselves, can we say that the knowledge base, the foundation of this whole edifice, is human?

And that future “hyper GPTs” will somehow remain in our image?

And for how long?

How did we integrate GPT into our solution?

Agora could not stand apart from this major transformation of the natural language processing landscape.

We will keep pace with this evolution as it progresses, while preserving the qualities of sovereignty, explainability, and confidentiality that characterize our solution.

At this stage, the Agora solution uses GPT to perform the following two tasks:

  • The construction of learning corpora for specific intent recognition networks. No more head-scratching to find a sufficient quantity of natural language examples; what’s more, you can build this corpus in the language of your choice (see the sketch after this list),
  • The refinement of intent recognition within user-generated sentences, when these fall outside the functional scope defined by our customers’ applications.
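As an illustration of the first task, here is a minimal sketch of corpus bootstrapping with a GPT-style API. The prompt wording, the gpt-4o-mini model name, the intent label, and the build_corpus helper are all assumptions for this example, not Agora’s actual pipeline:

```python
# Illustrative only: using a GPT-style API to generate training examples
# for an intent recognition network, in the language of your choice.
# Assumes the official OpenAI Python client and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

def build_corpus(intent: str, n: int = 20, language: str = "English") -> list[str]:
    """Ask the model for n user-style sentences expressing one intent."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": (f"Write {n} short {language} sentences a user might "
                        f"type to express the intent '{intent}'. "
                        f"One sentence per line, no numbering."),
        }],
    )
    return [line.strip()
            for line in resp.choices[0].message.content.splitlines()
            if line.strip()]

# Hypothetical intent label, for illustration:
examples = build_corpus("schedule_machine_maintenance", n=10, language="French")
```

The generated sentences can then feed directly into the training of the intent recognition network.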

References

GPT has inspired a lot of literature that we won’t try to quote, except for the following two sources:

Many thanks to Michael McTear of Ulster University in Belfast, for the recent and friendly discussions on automatic language processing, conversational interfaces, and of course the contributions and limits of GPT.

If you are interested in this article, please visit our projects page, and contact us if you wish to discuss vision, technology, and projects!

Join us on our LinkedIn page to follow our news!

Want to understand how our conversational AI platform optimizes your users’ productivity and engagement by effectively complementing your enterprise applications?