Artificial Intelligence

So Orbit has been going quite well, although it is taking some time. Just for a change of scene: I've now seen one too many posts on LinkedIn along the lines of "is AI going to destroy us?".


I think there may be a discrepancy between what people think of as "AI" (as opposed to "ML") and what we're actually seeing at the moment from the likes of ChatGPT. I saw an interview recently with an "AI programmer" who was pressed with "so what can we actually use it for today?". His response was "well, any kind of research, so long as the answer doesn't 'have' to be correct".

I'm struggling to see (especially after trying it out) how something that is after all described as a "large language model" is being equated to "an artificial intelligence that could wipe us out".

What LLMs are not (IMHO):

  • Skynet, a Terminator or anything else from the Terminator films
  • Bishop from the Alien films
  • HAL from 2001
  • KITT from Knight Rider

What we may have is something more like:

  • WOPR from "WarGames"

Now if anyone is stupid enough to give nuclear launch codes and a big red button to ANY of the above, and you insist on calling said system an "AI", then yes, I would agree that AI could wipe us out. If that is the benchmark, however, it seems just as likely that humanity could be wiped out by a hungry toddler (!)

It's quite possible that the general public view AI through the lens of what we see on the big screen (I know I do), and when tech firms refer to LLMs as AI, or when engineers come out and say "I think it's self-aware", they may be inadvertently pushing a misconception. When the same people say "we need regulation", again there is a tendency to think "Oh, this must be to stop them from developing something that could destroy us". However, the regulation that's coming down the tracks (at speed) is regulation on how data is captured, what data language models are allowed to use, how breaches of copyright will play out, and how to make LLMs comply with current laws in the context of things like "the right to be forgotten". I'm not sure anybody is tabling a regulation along the lines of "nobody is allowed to create an AI that will destroy us".

For example, it doesn't seem like LLMs have the ability to "unlearn" anything, any more than humans do. So once you've told one (for example) how to generate Windows license keys, there is potentially an issue. Not a dissimilar issue, I guess, to the publication of 3D-print schematics for projectile weapons.

Is ChatGPT handing out free Windows license keys? No. But, yes. But, no.
Chat GPT: The P doesn’t stand for piracy

Then there was the story about the staff who thought it would be a good idea to use an LLM to convert meeting notes into a presentation ...

Samsung to ban staff from using ChatGPT after ‘code leak’
Others also blocked as company works on its own generative AI tech

And then there are "hallucinations". This appears to be the technical term for an LLM giving the wrong answer, then doubling down on its error by generating fictitious references to real authors and real publications in order to support its, well, I want to call it a lie, but as above, hallucination. For example:

New York lawyers sanctioned for using fake ChatGPT cases in legal brief
A U.S. judge on Thursday imposed sanctions on two New York lawyers who submitted a legal brief that included six fictitious case citations generated by an artificial intelligence chatbot, ChatGPT.

I've only spent a few hours with ChatGPT, and while it was interesting, towards the end of my research it hallucinated, generating fictitious references to particle physics papers. It quoted real authors and journals, but none of the references were actually real and none of the URLs linked to valid pages.

When I asked it "these are not real, why did you lie to me?", it answered:

"I can't lie, I'm just a Large Language Model .... ".

I resisted the temptation to add "Dave". Please don't circle back and draw a comparison to "I'm sorry Dave, I'm afraid I can't do that" - it's really not the same! (it just feels like it)

Don't believe me? Do check out the user-level technical videos on how Large Language Models work. My understanding is that, once you boil it down, an LLM essentially takes a "best guess" at the next word it comes out with. This is how it hallucinates: the references it generated were its best guess based on its previous answers, together with the assumption that those answers were right. i.e. if its previous answer(s) had been right, those references probably would have existed! There's a deliberately crude sketch of the idea below.
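To make that "best guess" idea concrete, here's a toy sketch in Python. To be clear, this is my own illustration, not how any real LLM is built; real models use neural networks over tokens rather than word-pair counts, and the tiny "corpus" here is invented. But the loop has the same shape: guess the likeliest next word, then build on the guess.

```python
# A deliberately crude "best guess next word" generator. This is an
# illustration only, not how a real LLM works internally (real models
# use neural networks over tokens, not word-pair counts).
from collections import Counter, defaultdict

# A tiny, invented training text.
corpus = (
    "the paper was published in the journal of physics "
    "the paper was cited by the journal of physics"
).split()

# Count which word tends to follow each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        # The "best guess": the statistically most likely next word,
        # with no notion of whether the resulting sentence is true.
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))
# Prints a fluent-looking sentence whether or not any such paper exists.
# A fabricated reference is just a very plausible sequence of words.
```

Real systems add sampling, huge context windows and vastly better statistics, but (as far as I can tell) none of that adds a fact-checker.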

In essence it does have a lot of value, but it's just a tool!
(and some users might agree with the double entendre)

Just to finish off, a computing legend recently said (and I paraphrase):

Show AI 80,000 pictures of dogs and it can recognize a dog faster than any human. My pre-school granddaughter sees a dog and recognizes a dog; the difference being, she knows what a dog "is".