EvC Forum thread: ChatGPT
AZPaul3 (Member, Posts: 8551, From: Phoenix, Member Rating: 4.9)
ChatGPT is a very nice toy that, with some efficacy improvements, could become a power tool in R&D. Both Percy and Modulous are impressed with ChatGPT's programming prowess. If you look, you can find some horror stories about results in other fields. Some of the philosophy stuff I've seen out of this thing is quite dumb. I'm sure these will be trained out of the next versions. These things only get stronger the more they experience and train.

The problems for academia, copyright, plagiarism, citation standards, and so on, all need to be worked out.

Looks like neat stuff. But I'm an old fart. This kind of technology has been a major part of my life, and I should have been first to sign up to play. I may have been; that was weeks ago. What I found during signup was the requirement to reveal personal information beyond what I feel is necessary for use of the product. Right now it is, for me, just an attractive-looking game, and I'm on enough mailing lists already. I pass.

I wanted to ask it to print a list of the first 2000 customer last-name and phone-number entries in OpenAI's authorized ChatGPT user database.

Stop Tzar Vladimir the Condemned!
AZPaul3 (Member, Posts: 8551, From: Phoenix, Member Rating: 4.9)
In case someone wants to see the details of how neural nets like ChatGPT actually work, I provide a detailed and boring video of the layout of a simple neural net.

If you think of each number you see in the video as a cell in a spreadsheet, then you can see where changing values will change the outputs. With a specific target output in mind, the cell values are changed until the target value is achieved. How each cell is changed is a weighted programming consideration, an algorithm, derived from the perceived relationships among the nodes and the intended function to be achieved.
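The "spreadsheet cell" picture above can be sketched in a few lines of Python: a single weight is nudged repeatedly until the output matches a target value. This is my toy illustration of weight training, not OpenAI's code; the learning rate and step count are arbitrary choices.

```python
# Toy sketch of training one "cell": nudge a weight until w * x hits a target.

def train_single_weight(x, target, lr=0.1, steps=200):
    """Adjust one weight w so that w * x approaches target."""
    w = 0.0
    for _ in range(steps):
        output = w * x
        error = output - target   # how far off we are
        w -= lr * error * x       # nudge w against the error (a gradient step)
    return w

w = train_single_weight(x=2.0, target=6.0)
print(round(w * 2.0, 3))  # prints 6.0 -- the output now matches the target
```

A real network does this for billions of weights at once, but each individual update is just this kind of nudge.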
Now imagine the hidden middle layer shown in the video being millions of layers of billions of words each, with the weighted values between them changed dynamically with each use by algorithms that are set by the values in the nodes themselves: calculating probabilities of a stronger or weaker association with other words depending on the order of past words in the target dialogue. The node (word, symbol) with the highest probability becomes the next word in the sequence. Then the process runs again ... the entire million layers of billions of words is run through again ... and again. One word, symbol, or phrase at a time.

So ChatGPT is a big set of very fast computers with millions of the biggest damn spreadsheets (programmatic neural nets) you've ever seen, a set of millions of variable algorithms, all outputting at millions of iterations per millisecond in real time. It will write your term paper in a few seconds.

No one should be surprised it screws up. No one should be surprised you don't get the same output twice. No one can know what it's doing. By the time an anomaly is located, there is no use trying to trace through the algorithms, which have changed a million times since. The reasoning for the glitch is lost. All you can do is train the system to recognize the issue (which changes the weights in the algorithms), filter the most egregious stuff out, and pray for the intercession of the spirit of Marvin Minsky.

There is no genie. There is no bottle. Right now there is only brute force computing on a rather clever database arrangement.
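The one-word-at-a-time loop described above can be sketched as a toy bigram counter (my illustration, vastly simpler than a real transformer): count which word follows which in a tiny invented corpus, then repeatedly emit the most probable next word.

```python
# Toy next-word generation: count word-follows-word frequencies, then
# greedily emit the most probable continuation, one word at a time.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# follows["the"] ends up as Counter({"cat": 2, "mat": 1})
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=4):
    words = [start]
    for _ in range(length):
        best = follows[words[-1]].most_common(1)
        if not best:          # no known continuation; stop
            break
        words.append(best[0][0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the"
```

A real model conditions on the whole prior context rather than just the last word, and samples rather than always taking the single top word, which is one reason you don't get the same output twice.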
AZPaul3 (Member, Posts: 8551, From: Phoenix, Member Rating: 4.9)
But yeah, ChatGPT is scary good at programming. That makes sense: it was initially trained by programmers. I wonder if they used initial versions to help program later ones.
AZPaul3 (Member, Posts: 8551, From: Phoenix, Member Rating: 4.9)
When you say AI, what are you talking about? What image of this thing do you hold in your mind? What are its physical attributes? And what programming attributes, what intellectual abilities, do you see that manifest danger?
AZPaul3 (Member, Posts: 8551, From: Phoenix, Member Rating: 4.9)
Okay, if the public wants to think of tools like ChatGPT as true AI, so be it. And if the public wants to think of evolution as supernatural, then so be it? As I said back in April in Message 36, ChatGPT is nothing but brute force computing on a clever database design. There is nothing intelligent about it. To accept ChatGPT as AI is another falsehood out of 1984's newspeak.
AZPaul3 (Member, Posts: 8551, From: Phoenix, Member Rating: 4.9)
The way these systems work is brute force computing of next-word probability. Given the context (the other words determined prior to this one), there are probabilities for the next word to appear, as determined by training and usage experience. A malapropism is a word that does not belong; it is misused. There is zero probability that ChatGPT would build the sentence "We visited the sixteen chapel in Rome." There is no way for the programming in ChatGPT to select a nonsense word choice.
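The zero-probability claim can be illustrated with a toy counts-based model (my sketch; the corpus sentences are invented for the example). A pairing never seen in training, such as "sixteen" followed by "Chapel", gets probability zero here. One hedge: a real transformer's softmax assigns a tiny nonzero probability to every token, so "zero" is an approximation, but the mechanism still makes such a pairing effectively unreachable.

```python
# Toy counts-based next-word model: unseen pairings get probability zero.
from collections import Counter, defaultdict

# Invented training text: "Sistine" precedes "Chapel"; "sixteen" only
# ever precedes ordinary nouns like "apples".
corpus = ("we visited the Sistine Chapel in Rome "
          "she bought sixteen apples and sixteen pears").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def prob(prev, nxt):
    """P(next word | previous word) from raw counts, no smoothing."""
    total = sum(follows[prev].values())
    return follows[prev][nxt] / total if total else 0.0

print(prob("Sistine", "Chapel"))   # 1.0 -- always seen together
print(prob("sixteen", "Chapel"))   # 0.0 -- never observed, so never chosen
```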
AZPaul3 (Member, Posts: 8551, From: Phoenix, Member Rating: 4.9)
Silly and nonsensical in construction, as you say, based on examples in the database. But not a malapropism in sight. It can't do it; it has no way of knowing a malaprop. The silly juxtapositions and described scenes use the words properly. That is what makes them silly. I guess ChatGPT knows silly. But the "sixteen chapel" is beyond its error-creating capabilities.

I liked the use of alliteration. That shows a major depth to the database. Good show. This deserves a standing ovulation.
AZPaul3 (Member, Posts: 8551, From: Phoenix, Member Rating: 4.9)
quote:
Q: Can you create malapropisms?

No. The examples came from the literature: 16 Famous Malapropism Examples | Reader's Digest. It didn't create anything. It knows the definition and can regurgitate examples, but it cannot make such an original mistake. That is the point to Granny Magda. ChatGPT is still too structured in its programming to make such a human error. In all of ChatGPT's responses over forever, has anyone reported such an error, such an occurrence, without specifically requesting one? It can't make such an error except at deliberate instruction.
AZPaul3 (Member, Posts: 8551, From: Phoenix, Member Rating: 4.9)
Rocket salad. Very good. Seems ChatGPT can somewhat simulate a sense of humor. Rocket salad is a real, actual, physical thing, btw.

Just a moment...

Nip it. Responses are not unique. Most don't even qualify as malaprops. I give an E for effort, though. It is trying to satisfy, but a malaprop in reality is not a planned or requested item. It is (usually) a spontaneous misunderstanding and confusion of syntax and/or pronunciation. ChatGPT doesn't know from syntax and pronunciation. It doesn't know from confusion. All it knows is which word its algorithms register as most probably appearing next. It will not confuse "ovation" with "ovulation" or "Sistine" with "sixteen".

If the request is to cite examples of malaprops, that's all it can do. If the request is to create original malaprops ... it is not capable of making these errors. Its algorithms select the next word based on word probability within context and are incapable of this kind of inadvertent, spontaneous syntax error. Putting two unrelated words together is not a malaprop. Such a thing must be directed or programmed to occur, but why, other than for a discussion like this, would anyone want this thing programmed to make inadvertent spontaneous syntax errors? It already makes enough math errors.
AZPaul3 (Member, Posts: 8551, From: Phoenix, Member Rating: 4.9)
We will have to ask Percy, but I don't think there is a programmatic equivalent to THC that would allow my kind of error to appear in ChatGPT's work. Psychedelics of the AI kind are possible, as evidenced by its poem for Tangle, but grammatical errors of the AZPaul3 kind are beyond its structure to produce even under the influence, whereas I succumb readily.
AZPaul3 (Member, Posts: 8551, From: Phoenix, Member Rating: 4.9)
I asked it to find the error and it came up with the same wrong answer. From my reading, even though it exists as thousands of computers, ChatGPT is strictly a language algorithm and does not have access to an ALU. I'm thinking it is so blind to anything but the result of the algorithm that ChatGPT cannot recognize there is a math component to the issue. It doesn't know from issue. It doesn't know from math. Numbers are just weighted words in the algorithm. This is a glaring hole, and you have to know OpenAI is working to resolve it. It must be super difficult in their present architecture, or it would have been resolved already. Max Tegmark cannot be happy, since his math, the creator of all structure in the universe, is reduced to verbose prose. And broken prose at that.
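The "no ALU" point above can be illustrated with a deliberately crude contrast (my sketch, not ChatGPT's actual architecture): a pattern-matcher that has merely memorized equation strings versus code that actually computes. The memorized examples here are hypothetical.

```python
# Contrast: memorized string patterns vs. actual arithmetic.

memorized = {"2+2=": "4", "3+3=": "6"}   # hypothetical "training" strings

def language_model_answer(prompt):
    # No ALU: it can only echo patterns it has seen before.
    return memorized.get(prompt, "???")

def alu_answer(prompt):
    # Actual computation: parse the operands and add them.
    left, right = prompt.rstrip("=").split("+")
    return str(int(left) + int(right))

print(language_model_answer("2+2="))    # "4"   -- seen in training
print(language_model_answer("17+25="))  # "???" -- never seen, no way to compute
print(alu_answer("17+25="))             # "42"  -- works for any operands
```

A real LLM generalizes far better than a lookup table, but the underlying weakness is the same: arithmetic is handled as token prediction, not as computation, which is why tool use (handing the math to a calculator) is the usual fix.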
AZPaul3 (Member, Posts: 8551, From: Phoenix, Member Rating: 4.9)
Percy, I think the stick things are chopsticks, the most common eating utensil in the world.
AZPaul3 (Member, Posts: 8551, From: Phoenix, Member Rating: 4.9)
A follow-up on the math issues.

quote:
Just a moment...

And what is this "Just a moment..." instead of the article title or URL?
AZPaul3 (Member, Posts: 8551, From: Phoenix, Member Rating: 4.9)
Percy is not used to speaking dumb hillbilly. He didn't recognize it. You and I, on the other hand ...
Copyright 2001-2023 by EvC Forum, All Rights Reserved
Version 4.2
Innovative software from Qwixotic © 2024