Author Topic:   ChatGPT
AZPaul3
Member
Posts: 8551
From: Phoenix
Joined: 11-06-2006
Member Rating: 4.9


Message 28 of 152 (910563)
04-26-2023 9:46 PM


TalkyBots
ChatGPT is a very nice toy that, with some efficacy improvements, could become a power tool in R&D. Both Percy and Modulous are impressed with ChatGPT's programming prowess. If you look you can find some horror stories about results in other fields. Some of the philosophy stuff I've seen out of this thing is quite dumb. I'm sure these will be trained out of the next versions. These things only get stronger the more they experience and train.
The problems for academia (copyright, plagiarism, citation standards and so on) all need to be worked out. Looks like neat stuff.
But, I’m an old fart. This kinda technology stuff has been a major part of my life and I should have been first to sign up to play. I may have been. That was weeks ago. What I found during signup was the requirement to reveal personal information beyond what I feel is necessary for use of the product.
Right now it is, for me, just an attractive-looking game and I’m on enough mailing lists already. I pass.
I wanted to ask it to print a list of the first 2000 customer last name and phone number entries in OpenAI's database of authorized ChatGPT users.

Stop Tzar Vladimir the Condemned!

Replies to this message:
 Message 38 by Percy, posted 04-27-2023 5:58 PM AZPaul3 has not replied

  
AZPaul3
Member
Posts: 8551
From: Phoenix
Joined: 11-06-2006
Member Rating: 4.9


(3)
Message 36 of 152 (910588)
04-27-2023 2:46 PM


How?
In case someone wants to see the details of how neural nets like ChatGPT actually work, I offer a detailed and boring video of the layout of a simple neural net.
If you think of each number you see in the vid as a cell in a spreadsheet, then you can see where changing values will change the outputs. With a specific target output in mind, the cells' values are changed until the target value is achieved. How each cell is changed is a weighted programming consideration, an algorithm, derived from the perceived relationships among the nodes and the function the net is intended to achieve.
Now imagine the hidden middle layer shown in the video being millions of layers of billions of words each, with the weighted values between them being changed dynamically with each use by algorithms that are set by the values in the nodes themselves, calculating the probability of a stronger or weaker association with other words depending on the order of the past words in the target dialogue. The node (word, symbol) with the highest probability becomes the next word put in the sequence. Then the process runs again … the entire million layers of billions of words is run through again … and again. One word, symbol, phrase at a time.
So ChatGPT is a big set of very fast computers with millions of the biggest damn spreadsheets (programmatic neural nets) you've ever seen, driven by millions of variable algorithms, all outputting at millions of iterations per millisecond in real time. It will write your term paper in a few seconds.
No one should be surprised it screws up. No one should be surprised you don’t get the same output twice. No one can know what it’s doing. By the time an anomaly is located there is no use trying to trace through the algorithms that have changed a million times since. The reasoning for the glitch is lost. All you can do is train the system to recognize the issue (changes the weights in the algorithms), filter the most egregious stuff out and pray for the intercession of the spirit of Marvin Minsky.
There is no genie. There is no bottle. Right now there is only brute force computing on a rather clever database arrangement.
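If you want the spreadsheet analogy in concrete form, here is a toy sketch in Python. Everything in it (the tiny vocabulary, the random weights, the one-word context) is invented for illustration; the real system conditions on a much longer context and vastly more parameters, but the loop is the same idea: score every candidate next word, pick the most probable one, append it, repeat.

```python
# Toy sketch only: a "spreadsheet" of weights that scores which word comes next.
# The vocabulary, weights, and one-word context window are all made up here.
import numpy as np

vocab = ["we", "visited", "the", "Sistine", "Chapel", "in", "Rome", "."]
rng = np.random.default_rng(0)

# One weight per (previous word, candidate next word) pair: the cells of the spreadsheet.
# Training would nudge these numbers toward the desired outputs; here they are random.
W = rng.normal(size=(len(vocab), len(vocab)))

def next_word(prev: str) -> str:
    """Score every candidate next word given the previous word and pick the most probable."""
    scores = W[vocab.index(prev)]                  # one row of the spreadsheet
    probs = np.exp(scores) / np.exp(scores).sum()  # turn raw scores into probabilities
    return vocab[int(np.argmax(probs))]            # highest probability wins

# Generate a few words, one at a time, re-running the whole "spreadsheet" each step.
word = "we"
sequence = [word]
for _ in range(5):
    word = next_word(word)
    sequence.append(word)
print(" ".join(sequence))
```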

Stop Tzar Vladimir the Condemned!

  
AZPaul3
Member
Posts: 8551
From: Phoenix
Joined: 11-06-2006
Member Rating: 4.9


Message 39 of 152 (910595)
04-27-2023 6:06 PM
Reply to: Message 37 by Percy
04-27-2023 5:52 PM


Re: ChatGPT Just Blew Me Away
quote:
But yeah, ChatGPT is scary good at programming.
That makes sense. It was initially trained by programmers. I wonder if they used earlier versions to help program later ones.

Stop Tzar Vladimir the Condemned!

This message is a reply to:
 Message 37 by Percy, posted 04-27-2023 5:52 PM Percy has seen this message but not replied

  
AZPaul3
Member
Posts: 8551
From: Phoenix
Joined: 11-06-2006
Member Rating: 4.9


Message 44 of 152 (910625)
04-29-2023 10:51 AM
Reply to: Message 42 by Phat
04-29-2023 8:52 AM


Re: Bringing AI Specific Conversation Over Here
When you say AI, what are you talking about?
What image of this thing do you hold in your mind? What are its physical attributes? And what programming attributes, intellectual abilities, do you see that manifest danger?

Stop Tzar Vladimir the Condemned!

This message is a reply to:
 Message 42 by Phat, posted 04-29-2023 8:52 AM Phat has replied

Replies to this message:
 Message 46 by Phat, posted 05-04-2023 8:26 AM AZPaul3 has not replied
 Message 48 by Phat, posted 05-04-2023 8:34 AM AZPaul3 has not replied

  
AZPaul3
Member
Posts: 8551
From: Phoenix
Joined: 11-06-2006
Member Rating: 4.9


(1)
Message 78 of 152 (911605)
07-18-2023 8:49 PM


Ahh Haa haa haa

Stop Tzar Vladimir the Condemned!

  
AZPaul3
Member
Posts: 8551
From: Phoenix
Joined: 11-06-2006
Member Rating: 4.9


Message 84 of 152 (912153)
08-18-2023 10:54 AM
Reply to: Message 82 by Percy
08-18-2023 9:54 AM


Re: ChatGPT Properly Described
quote:
Okay, if the public wants to think of tools like ChatGPT as true AI, so be it.
And if the public wants to think of evolution as supernatural, then so be it?
As I said back in April (Message 36), ChatGPT is nothing but brute-force computing on a clever database design. There is nothing intelligent about it. Accepting ChatGPT as AI is another falsehood straight out of 1984's newspeak.

Stop Tzar Vladimir the Condemned!

This message is a reply to:
 Message 82 by Percy, posted 08-18-2023 9:54 AM Percy has seen this message but not replied

  
AZPaul3
Member
Posts: 8551
From: Phoenix
Joined: 11-06-2006
Member Rating: 4.9


Message 100 of 152 (912505)
09-07-2023 1:32 PM
Reply to: Message 97 by Granny Magda
09-07-2023 8:48 AM


Re: The Difference Between Reasoning and Looking Stuff Up
The way these systems work is brute-force computing of next-word probability. Given the context (the words already determined prior to this one), each candidate next word has a probability determined by training and usage experience. A malapropism is a word that does not belong; it is misused. There is zero probability that ChatGPT would build the sentence "We visited the sixteen chapel in Rome." There is no way for ChatGPT's programming to select a nonsense word choice.
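To put some toy numbers on that claim (my invented scores, not anything pulled from ChatGPT): given the context "We visited the ___ Chapel", a trained model assigns each candidate word a score, the scores are turned into probabilities, and a greedy decoder takes the top one. The nonsense word can only win if something explicitly boosts it.

```python
# Toy illustration with invented scores: softmax over a handful of candidate words
# for the blank in "We visited the ___ Chapel", then greedy selection of the winner.
import math

scores = {"Sistine": 9.1, "famous": 4.0, "old": 2.5, "sixteen": -6.0}  # made-up logits

z = sum(math.exp(s) for s in scores.values())
probs = {w: math.exp(s) / z for w, s in scores.items()}

for word, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{word:>8}: {p:.6f}")

# Greedy decoding picks the highest-probability word, so "sixteen", whose probability
# is effectively zero here, never surfaces unless the prompt deliberately asks for it.
print("next word:", max(probs, key=probs.get))
```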

Stop Tzar Vladimir the Condemned!

This message is a reply to:
 Message 97 by Granny Magda, posted 09-07-2023 8:48 AM Granny Magda has not replied

Replies to this message:
 Message 101 by Tangle, posted 09-07-2023 1:40 PM AZPaul3 has replied

  
AZPaul3
Member
Posts: 8551
From: Phoenix
Joined: 11-06-2006
Member Rating: 4.9


Message 102 of 152 (912507)
09-07-2023 1:58 PM
Reply to: Message 101 by Tangle
09-07-2023 1:40 PM


Re: The Difference Between Reasoning and Looking Stuff Up
Silly and nonsensical in construction, as you say, based on examples in the database. But not a malapropism in sight. It can't do it. It has no way of knowing a malaprop. The silly juxtapositions and described scenes use the words properly. That is what makes them silly. I guess ChatGPT knows silly. But the "sixteen chapel" is beyond its error-creating capabilities.
I liked the use of alliteration. That shows a major depth to the database. Good show.
This deserves a standing ovulation.

Stop Tzar Vladimir the Condemned!

This message is a reply to:
 Message 101 by Tangle, posted 09-07-2023 1:40 PM Tangle has replied

Replies to this message:
 Message 103 by Tangle, posted 09-08-2023 4:35 AM AZPaul3 has replied

  
AZPaul3
Member
Posts: 8551
From: Phoenix
Joined: 11-06-2006
Member Rating: 4.9


Message 104 of 152 (912513)
09-08-2023 10:14 AM
Reply to: Message 103 by Tangle
09-08-2023 4:35 AM


Re: The Difference Between Reasoning and Looking Stuff Up
Q: Can you create malapropisms?
No. The examples came from the literature.
16 Famous Malapropism Examples | Reader's Digest
It didn't create anything. It knows the definition and can regurgitate examples, but it cannot make such an original mistake. That is the point I was making to Granny Magda. ChatGPT is still too structured in its programming to make such a human error.
In all of ChatGPT's responses, ever, has anyone reported such an error, such an occurrence, without specifically requesting one? It can't make such an error except under deliberate instruction.

Stop Tzar Vladimir the Condemned!

This message is a reply to:
 Message 103 by Tangle, posted 09-08-2023 4:35 AM Tangle has replied

Replies to this message:
 Message 105 by Tangle, posted 09-08-2023 10:52 AM AZPaul3 has replied

  
AZPaul3
Member
Posts: 8551
From: Phoenix
Joined: 11-06-2006
Member Rating: 4.9


(1)
Message 108 of 152 (912519)
09-08-2023 1:29 PM
Reply to: Message 105 by Tangle
09-08-2023 10:52 AM


Re: The Difference Between Reasoning and Looking Stuff Up
Rocket salad. Very good. Seems ChatGPT can somewhat simulate a sense of humor. Rocket salad is a real actual physical thing btw.
Just a moment... Nip it.
Responses are not unique. Most don't even qualify as malaprops. I give it an E for effort though.
It is trying to satisfy, but a malaprop in reality is not a planned or requested item. It is (usually) a spontaneous misunderstanding and confusion of syntax and/or pronunciation. ChatGPT doesn't know from syntax and pronunciation. It doesn't know from confusion. All it knows is which word its algorithms register as the one that most probably should appear next. It will not confuse "ovation" with "ovulation" or "Sistine" with "sixteen".
If the request is to cite examples of malaprops, that's all it can do. If the request is to create original malaprops ... it is not capable of making these errors. Its algorithms select the next word based on word probability within context and are incapable of this kind of inadvertent, spontaneous syntax error. Putting two unrelated words together is not a malaprop.
Such errors must be directed or programmed to occur, but why, other than for a discussion like this, would anyone want this thing programmed to make inadvertent spontaneous syntax errors? It already makes enough math errors.
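Even granting that the decoder samples at random rather than always taking the top word, the blunder stays buried. Here is a toy sketch (invented scores again, nothing measured from ChatGPT) of temperature sampling for the word after "standing": the near-zero-probability "ovulation" essentially never comes up on its own.

```python
# Toy sketch: sampling (not greedy) decoding still almost never surfaces a word whose
# probability is vanishingly small. Scores are invented for illustration.
import math
import random

random.seed(1)
scores = {"ovation": 8.0, "applause": 5.5, "cheer": 4.0, "ovulation": -7.0}

def sample(scores: dict, temperature: float = 0.8) -> str:
    """Softmax with temperature, then draw one word at random according to its probability."""
    z = sum(math.exp(s / temperature) for s in scores.values())
    probs = {w: math.exp(s / temperature) / z for w, s in scores.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

draws = [sample(scores) for _ in range(100_000)]
print({w: draws.count(w) for w in scores})  # "ovulation" should show up ~0 times
```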

Stop Tzar Vladimir the Condemned!

This message is a reply to:
 Message 105 by Tangle, posted 09-08-2023 10:52 AM Tangle has not replied

Replies to this message:
 Message 109 by Percy, posted 09-08-2023 3:57 PM AZPaul3 has not replied
 Message 110 by dwise1, posted 09-08-2023 7:48 PM AZPaul3 has replied

  
AZPaul3
Member
Posts: 8551
From: Phoenix
Joined: 11-06-2006
Member Rating: 4.9


Message 111 of 152 (912526)
09-09-2023 9:03 AM
Reply to: Message 110 by dwise1
09-08-2023 7:48 PM


Psychedelic Chat
We will have to ask Percy, but I don't think there is a programmatic equivalent to THC that would allow my kind of error to appear in ChatGPT's work. Psychedelics of the AI kind are possible, as evidenced by its poem for Tangle, but grammatical errors of the AZPaul3 kind are beyond its structure to produce even under the influence, whereas I succumb readily.

Stop Tzar Vladimir the Condemned!

This message is a reply to:
 Message 110 by dwise1, posted 09-08-2023 7:48 PM dwise1 has not replied

Replies to this message:
 Message 113 by Percy, posted 09-09-2023 9:47 AM AZPaul3 has replied

  
AZPaul3
Member
Posts: 8551
From: Phoenix
Joined: 11-06-2006
Member Rating: 4.9


Message 112 of 152 (912527)
09-09-2023 9:44 AM
Reply to: Message 107 by Percy
09-08-2023 11:27 AM


Re: ChatGPT Fails Vicar Problem
quote:
I asked it to find the error and it came up with the same wrong answer.
From my reading, even though it exists as thousands of computers, ChatGPT is strictly a language algorithm and does not have access to an ALU. I'm thinking it is so blind to anything but the result of the algorithm that it cannot recognize there is a math component to the issue. It doesn't know from issue. It doesn't know from math? Numbers are just weighted words in the algorithm?
This is a glaring hole, and you have to know OpenAI is working to resolve it. It must be super difficult in their present architecture or it would have been resolved already. Max Tegmark cannot be happy, since his math, which is the creator of all structure in the universe, is reduced to verbose prose. And broken prose at that.
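For what it's worth, the obvious patch is to stop asking the language algorithm to predict digits at all and hand the arithmetic to real computation. This is only my sketch of that idea (the "answer" helper and its regex are hypothetical, not OpenAI's architecture), but it shows the shape of the fix: detect the math component, compute it exactly, and let the language side do the rest.

```python
# Hypothetical sketch of delegating arithmetic to real computation instead of letting
# a next-word predictor guess the digits. Not OpenAI's implementation.
import re

def answer(prompt: str) -> str:
    """Find a simple 'a op b' expression in the prompt and evaluate it exactly."""
    match = re.search(r"(\d+)\s*([+\-*/])\s*(\d+)", prompt)
    if match:
        a, op, b = int(match.group(1)), match.group(2), int(match.group(3))
        result = {"+": a + b, "-": a - b, "*": a * b,
                  "/": a / b if b else float("nan")}[op]
        return f"The arithmetic part evaluates to {result}."
    return "No arithmetic detected; fall back to the language model."

print(answer("If the vicar is 48 + 17 years old, how old is he?"))  # -> 65
```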

Stop Tzar Vladimir the Condemned!

This message is a reply to:
 Message 107 by Percy, posted 09-08-2023 11:27 AM Percy has not replied

Replies to this message:
 Message 115 by AZPaul3, posted 09-09-2023 10:18 AM AZPaul3 has not replied

  
AZPaul3
Member
Posts: 8551
From: Phoenix
Joined: 11-06-2006
Member Rating: 4.9


Message 114 of 152 (912529)
09-09-2023 9:50 AM
Reply to: Message 113 by Percy
09-09-2023 9:47 AM


Re: Psychedelic Chat
Percy, I think the stick things are chopsticks, the most common eating utensil in the world.

Stop Tzar Vladimir the Condemned!

This message is a reply to:
 Message 113 by Percy, posted 09-09-2023 9:47 AM Percy has not replied

Replies to this message:
 Message 116 by nwr, posted 09-09-2023 10:20 AM AZPaul3 has not replied

  
AZPaul3
Member
Posts: 8551
From: Phoenix
Joined: 11-06-2006
Member Rating: 4.9


Message 115 of 152 (912530)
09-09-2023 10:18 AM
Reply to: Message 112 by AZPaul3
09-09-2023 9:44 AM


Re: ChatGPT Fails Vicar Problem
A follow-up on the math issues.
quote:
Moreover, math problems often require a deeper understanding of concepts and a step-by-step reasoning process to arrive at accurate solutions. While ChatGPT may excel at generating plausible responses, it may struggle to produce accurate mathematical results due to a lack of formal understanding and the absence of a mechanism to perform mathematical computations.
Just a moment...
And what is this "Just a moment..." instead of the article title or url?

Stop Tzar Vladimir the Condemned!

This message is a reply to:
 Message 112 by AZPaul3, posted 09-09-2023 9:44 AM AZPaul3 has not replied

  
AZPaul3
Member
Posts: 8551
From: Phoenix
Joined: 11-06-2006
Member Rating: 4.9


Message 119 of 152 (912544)
09-09-2023 4:54 PM
Reply to: Message 118 by nwr
09-09-2023 3:09 PM


Re: YoRe: Psychedelic Chat
Percy is not used to speaking dumb hillbilly. He didn't recognise it. You and I, on the other hand ...

Stop Tzar Vladimir the Condemned!

This message is a reply to:
 Message 118 by nwr, posted 09-09-2023 3:09 PM nwr has seen this message but not replied

Replies to this message:
 Message 120 by Percy, posted 09-09-2023 5:09 PM AZPaul3 has not replied

  