Naming things is important, not only when programming, but also in the real world. How you name something shapes how you think about it, and by extension limits it. A well-known real-world instance where you can experience the power of names first-hand is lawmaking. Think about it: what sounds better?
The law that allows law enforcement to tap your communications without a court order, or: “The Patriot Act”?
The first name is factual: it states the effects of the law. The second is anything but; it removes any substance and fills the hole with something far more powerful than a fact could ever be: feelings.
A discussion using the first name would probably be brief. A law eroding the privacy of citizens would be unpopular, plain and simple. But rename the same law to the Patriot Act and it is an entirely different beast: suddenly any discussion revolving around it is framed by the word patriot, which, in the context of the U.S.A., is seen as something positive. Who could not be a patriot? Aren’t you proud to be an American? How can you be against America? Your opposition to this law is unpatriotic, anti-American; do you want them to win? How could you?
It’s a simple trick, but a simple trick can be powerful nevertheless.
Another, far more recent, example is the discussion revolving around large language models, such as ChatGPT and GPT-4.
Those models are framed as artificial intelligence, a loaded term that most people have an intuitive idea about. It could be HAL from “2001: A Space Odyssey”, or the computer Joaquin Phoenix’s character falls in love with in “Her”. Even if you have never consumed any science fiction, you know intelligence as that abstract quality people possess, that you have.
Large language models work by being trained on massive amounts of text produced by humans on the internet. From that text they estimate how likely each word is given the preceding stream of words. At the end of this process you have a computer program that takes in text and predicts the most likely next word, a bit like the iOS keyboard suggesting the next word as you type.
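To make the idea concrete, here is a minimal sketch of next-word prediction using simple bigram counts over a toy corpus. This is an illustration of the statistical principle only: real large language models use neural networks trained on billions of words, not raw word-pair counts, and the tiny corpus here is invented for the example.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "massive amounts of text from the internet"
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word (a bigram model)
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word after `word`, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often here, so: cat
```

The program has no concept of cats or mats; it only knows which word most frequently followed which. Scaling this idea up, with far longer contexts and learned statistics instead of counts, is essentially what the text describes.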
As humans we are conditioned to make sense of things, so when we are presented with text that follows the sentence structure of our language we may be misled into thinking that some intelligent entity was at work. But make no mistake: at the end it is just a computer, a deterministic machine, the same kind you use to watch YouTube videos and do your taxes.
Those models are impressive from a technical perspective; ingesting billions of words and building a statistical model from them is no easy feat, requiring a huge amount of compute power and clever engineering. Kudos to those working on them.
But framing large language models as some kind of artificial intelligence is a smart trick and ultimately a disservice to the whole debate. Any debate around those models will fail because intelligence is such a fuzzy concept.
For example, pointing out that those models don’t have a concept of anything and just predict the next word is easily countered with the claim that human intelligence may be just the same: some neurons in your brain reacting to chemicals and electrical impulses. By framing the discussion around intelligence, the shortcomings and actual technical details of those models are overshadowed by the far more emotionally loaded term, something fuzzy, something anybody can relate to.
That doesn’t mean that those models can’t be put to good use, but let’s keep the debate around them honest.