Topic: Generating information in a neural network playing chess
dwise1, Member, Posts: 6059, Member Rating: 7.8
WookieeB, I am a retired software engineer with four decades of professional experience.
What is it about programming that you do not understand?
WookieeB, Member (Idle past 150 days), Posts: 190
dwise1 writes:
What is it about programming that you do not understand?

I don't think I'm having any problem understanding what programming does. (BTW - I also have experience with software development.) But perhaps you can explain, then, how a program does or creates something it was not programmed to do? The claim seems to be that this program did something, or created new information, that was not accounted for in the programming. Perhaps the problem is that those making the claim do not understand what "information" is in the context of the claim. (Hint: information does not equal data.)
ringo, Member (Idle past 612 days), Posts: 20940, From: frozen wasteland
WookieeB writes:
But perhaps you can explain then how a program does or creates something it was not programmed to do?

That's the whole point of AI - getting the computer to figure things out for itself instead of being programmed with every possibility.

"I'm Fallen and I can't get up!"
WookieeB, Member (Idle past 150 days), Posts: 190
WookieeB writes:
But perhaps you can explain then how a program does or creates something it was not programmed to do?

ringo writes:
That's the whole point of AI - getting the computer to figure things out for itself instead of being programmed with every possibility.

Well, for one, you didn't answer the question. Second, you answer with a false dichotomy.

"Getting the computer to figure things out for itself" - getting the computer to figure things out is essentially what all programming does. What does "for itself" mean? The computer has no sense of self, and will only do what it is programmed to do.

"Instead of being programmed with every possibility" - which happens... never, and is certainly not the case for this chess program (nor was that claim ever made). I'm not even sure what this is supposed to mean.

As for AI, that is a very vague term. But if you are in any way referring to a computer performing any function independent of its programming, then that has so far happened... never.
ringo, Member (Idle past 612 days), Posts: 20940, From: frozen wasteland
WookieeB writes:
The computer has no sense of self, and will only do what it is programmed to do.

Again, the whole point of AI is to get the computer to do what it is not specifically programmed to do.

WookieeB writes:
But if you are in any way referring to a computer performing any function independent of its programming, then that has so far happened... never.

A machine was given the goal of not crashing into anything. It decided that the way to achieve that goal was to not move at all. Nobody programmed it to do nothing. It figured that out for itself.

"I'm Fallen and I can't get up!"
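As a toy illustration of the kind of result ringo describes (every detail below is made up for the sketch - this is not the actual robot's code), here is a Python snippet where the only programmed goal is "don't crash", yet "do nothing" emerges as the winning policy:

    import random

    ACTIONS = ["move", "stay"]

    def reward(action):
        # Made-up world: moving crashes 30% of the time; staying never crashes.
        if action == "move":
            return 0.0 if random.random() < 0.3 else 1.0
        return 1.0

    # Estimate each action's value by trial and error, then pick the best.
    estimates = {a: sum(reward(a) for _ in range(1000)) / 1000 for a in ACTIONS}
    print(max(estimates, key=estimates.get))   # prints "stay"

The goal ("don't crash") is programmed in; the policy ("stay") is selected by the feedback loop rather than written in as a rule.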
Stile, Member (Idle past 244 days), Posts: 4295, From: Ontario, Canada
I don't think you see the difference.
Let's try using your example:
WookieeB writes:
For example, I could develop a computer program to take the addresses of the roughly 327 million people in the USA and then filter out those living in Arizona. Even before I gave it an initial data set (the 327m people), the algorithm is created. Now I feed the program the initial data set and it spits output of about 7.2 million names.

Normal programming, as you understand it:

WookieeB writes:
By your logic it was the computer program itself that developed how to get a list of Arizona dwellers. That is silly.

And you are correct, for normal programming. AI programming, however, is not normal programming. Here is an example of AI programming in line with your example:

- Feed in the addresses of the 327 million people in the USA.
- Feed in moving rates (selling your house, buying a new one) for the last 50 years.
- Write an AI algorithm to take the first year's data and predict the second - look up the answer and adjust its own algorithm.
- Then take the first two years' data and predict the third - look up the answer and adjust its own algorithm.
- Then take the first 3 years' data and predict the 4th - look up the answer and adjust its own algorithm.
- Ask any person to predict the moving rate for next year, especially those who are our "current experts" on it.
- Ask the AI program to predict the moving rate for next year.
- When next year comes, the AI program had a "closer/better" prediction than any other person.
- Upon inspection of the AI program, it used a method to predict the moving rate that no person has ever thought of using (possibly because it's too complicated, and/or possibly because no one thought it was applicable).

The AI program "invented" a way to predict the moving rate that didn't exist before. Yes, people invented the AI - but that's missing the point. Just like the Chess AI "invented" a way to play chess that's better than any person or any "normally programmed" chess computer. This way to play chess didn't exist before. No one knew about it - certainly not the programmers.

Normal programming - people invent an algorithm knowing how it will solve a problem - a computer can just do it faster.
AI programming - people invent an algorithm without knowing if there's a better solution or not - the computer comes up with the "how to solve it", and it can be better than anything people have ever thought of before.

In both scenarios, people invent the computer and the programming to get the answer. But if you don't see the difference between people knowing the "how to" beforehand - just being unable to practically do it - and people learning the "how to" from the computer program... then you're just being silly.

Edited by Stile: Clarified last sentence to better match the idea attempting to be explained.
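To make the contrast concrete, here is a minimal Python sketch of both styles (the function names, the toy numbers, and the simple linear learning rule are illustrative assumptions, not actual code from any program discussed in this thread):

    # Normal programming: the programmer specifies the complete "how".
    def arizona_residents(addresses):
        # addresses: a list of (name, state) pairs
        return [name for name, state in addresses if state == "AZ"]

    # AI-style programming (a toy of the moving-rate example): the programmer
    # specifies only an adjustment rule; the final predictor (the value of
    # `weight`) is worked out by the program from the data.
    def train_predictor(yearly_rates, lr=0.01):
        weight = 0.0
        for year in range(1, len(yearly_rates)):
            prediction = weight * yearly_rates[year - 1]    # predict next year
            error = yearly_rates[year] - prediction         # "look up the answer"
            weight += lr * error * yearly_rates[year - 1]   # "adjust its own algorithm"
        return weight

    rates = [5.0, 5.2, 5.1, 5.4, 5.3]    # made-up yearly moving rates
    print(train_predictor(rates))         # the learned "method"

In the first function a person typed in the whole rule; in the second, only the adjustment procedure was typed in, and the resulting predictor depends on the data it saw.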
WookieeB, Member (Idle past 150 days), Posts: 190
Apologies for the length of this post, but the devil is in the details.
ringo writes:
A machine was given the goal of not crashing into anything. It decided that the way to achieve that goal was to not move at all. Nobody programmed it to do nothing. It figured that out for itself.

I call BS. Show me the code. When you say "it decided", I'll bet there was some procedure in the code that allowed for, or was a direct result of, it not moving at all.
Stile writes:
AI programming, however, is not normal programming. ... Normal programming - people invent an algorithm knowing how it will solve a problem - a computer can just do it faster. AI programming - people invent an algorithm without knowing if there's a better solution or not - the computer comes up with the "how to solve it", and it can be better than anything people have ever thought of before.

Your definition of normal programming is sufficient. You have a problem or goal and you create a set of rules to address the problem. Fine. The only advantage of the computer is speed and/or efficiency. (Take out the direct reference to a computer and that definition fits just about any endeavor a person undertakes.)

The AI description is odd, though, because you use a different standard. Instead of indicating that AI programming is also about solving a problem, you make the standard whether it is "better" than something else. What makes it "better" you don't define, but I assume the metric is comparison with human capabilities. So the moment a program does something "better" than a human, it becomes AI? Until it performs better, is it just "normal" programming? And if an AI program does better than a human, but later a human comes along and does better than the program, does it go from being AI to not-AI (normal)?

Secondly, the computer itself is not coming up with "how to solve it". You cannot just feed data into a computer, say "figure this out", and not give it instructions on how to figure it out. Computers don't know how to do anything except what they are told to do. And that really is the key! Even if a computer performs a task "better" than a human, it is still relying on the programming of a human to do anything. The output or result of "how to solve it" might be something a human hasn't thought of before, but the "how to solve it" method is ALL due to human instructions. This is a category difference you are not seeing.
Stile writes:
Here is an example of AI programming in line with your example.
In your example, which is just a hypothetical one, you end up making the same category error that you think justifies a claim that the computer itself created something new. Let's break it down.
Stile writes:
Write an AI algorithm to take the first year's data and predict the second - look up the answer and adjust its own algorithm.

Let's call this CYCLE 1. So the first function of the program has an algorithm (code/instructions/rules to do something) to make a prediction from some data it is fed (1st year). This algorithm no doubt has some search function, some filter to sift through the data, and actions to take on the results of the search, which together constitute the "prediction".

Then there is a second function of the program that takes an "answer" and does something with it. You don't specify what the "answer" is, so I have to make an educated guess. Do you mean the actual data from real life after a specified time period (2nd year)? If so, this is more data fed to the computer... which is then compared against (I assume) the prediction data. Then, based on more programming (another algorithm), the first function's algorithm is adjusted to be more accurate based upon the comparison between the prediction and the real data. I suspect this falls under machine-learning programming. The success of this process depends on the amount of data given to it.
All of this is intelligently designed, goal-directed programming. There is no property of the computer itself that could come up with results other than what the instructions given to it stipulate.
Stile writes:
Then take the first two years' data and predict the third - look up the answer and adjust its own algorithm.

This is CYCLE 2. Pretty much the same as CYCLE 1, except now (I'm assuming) its ingested data includes data that was already partly processed in the prior cycle (2nd-year results compared against the prediction to provide data for the adjustment algorithm). Still... there is no property of the computer itself that could come up with results other than what the instructions given to it stipulate.
Stile writes:
Then take the first 3 years' data and predict the 4th - look up the answer and adjust its own algorithm.

CYCLE 3 - same thing as CYCLE 2, with more data. The computer is following instructions... still.
Stile writes:
Ask any person to predict the moving rate for next year, especially those who are our "current experts" on it.
This should be irrelevant, but I think you are here trying to set up the new category to apply to your computer. So, we ask human "experts" to predict results for year 5.....
Stile writes:
Ask the AI program to predict the moving rate for next year.
...and the computer is doing the same thing in performing the first function of CYCLE 4 (if following the pattern of prior cycles, it's using data from years 1-4 plus its thrice-updated "prediction" function to predict 5th-year results). Now comes the trick...
Stile writes:
When next year comes, the AI program had a "closer/better" prediction than any other person.
No quantitative description is given here. Just "closer/better". I'm not sure why this should be significant. After all, isn't the goal of the program to accurately predict results faster or better than humans are normally capable of? What about the prior years' predictions - how do the results of CYCLEs 1-3 compare? If those results were not "better", was it not considered AI yet?
Stile writes:
Upon inspection of the AI program, it used a method to predict the moving rate that no person has ever thought of using...
And now we have the core of your argument! It is essentially summed up in one word: method. Unsurprisingly, there is no detail given of what this "method" would be. Nevertheless, supposedly in this hypothetical scenario, whatever the first-function algorithm of the cycle did - its "method" - was significant because it was what "no person has ever thought of using". But why is that significant? Is doing some task that a human wouldn't or couldn't do the standard for calling a computer... intelligent? Having its own sense of intentionality? Special? Or is it significant because the "method" resulted in something "closer/better" than the "experts"?

You see, you've erected a threshold defined by whatever the human "experts" could do, and when the computer exceeded that threshold, you dubbed it as having accomplished something on its own. That is the category error you are making. What you are failing to see is that a human mind did actually think of the "method", albeit indirectly. The programmers gave it everything it needed to develop its "method" without having to know what the end result would be. Said another way, there is nothing in the "method" that could not be accounted for by the original programming.
Stile writes:
...(possibly because it's too complicated, and/or possibly because no one thought it was applicable.)
Both cases are irrelevant. I'm not completely sure what you mean by "complicated", but if it has to do with the processing capability of a computer vs a human... so what? If you took a human and gave them the rules of the program to follow, it might take a LOT longer, but the human would arrive at the exact same results. As for being applicable, that acknowledges that someone HAD thought of the "method" and it was subjectively dismissed... again, so what?
The AI program "invented" a way to predict the moving rate that didn't exist before.
If "invented" means followed rules given to it by programmers that led to an analysis that was more accurate than other human "experts" had obtained through other means... OK. But it was all still based on programming by intelligent minds.Yes, people invented the AI - but that's missing the point. People invented the AI... yes, that is the point. People, not the AI computer itself independent of anything the people gave it, are responsible for developing the nebulous "method".
Stile writes:
Just like the Chess AI "invented" a way to play chess that's better than any person or any "normally programmed" chess computer.
I'm not sure how you justify these statements, as the chess program so far hasn't played a person (which is irrelevant anyway), and your qualifying of the other chess programs as "normally programmed" is without support.

Stile writes:
This way to play chess didn't exist before. No one knew about it - certainly not the programmers.

The program may have developed a "style" of play not seen before, but that is a totally subjective observation. I don't think you can claim it is a new way to play; after all, it is still playing by the rules... that were given to the program. Also, I find it very telling that none of the creators of the chess program have claimed the program came up with something new apart from its programming, akin to what was proposed on this thread.

Stile writes:
But if you don't see the difference: people knowing the "how to" beforehand - just being unable to practically do it, or people learning the "how to" from the computer program... then you're just being silly.

If the people you are referring to are the programmers, the first part is true. But the second part is not. The programmers knew the how-to - they programmed the how-to in their code. They may not be able to specify the results/output at any specific moment in time because they cannot process the data as fast as a computer. But that doesn't mean that the computer is acting in any way other than what they programmed it to do.
Stile, Member (Idle past 244 days), Posts: 4295, From: Ontario, Canada
Stile writes:
When next year comes, the AI program had a "closer/better" prediction than any other person.

WookieeB writes:
Now comes the trick... No quantitative description is given here. Just "closer/better". I'm not sure why this should be significant. After all, isn't the goal of the program to accurately predict results faster or better than humans are normally capable of? What about the prior years' predictions - how do the CYCLE 1-3 results compare? If the results were not "better", was it not considered AI yet?

That is, indeed, "the trick" you are missing. It is a fact that AI programming can produce better results than humans:
- they can play chess better (by winning more games)
- they can make predictions better (by being more accurate once the real data is obtained)

If the result is not "better than humans", the programming is still considered AI - it's just also considered a failure: the AI did not work out something better than humans have already identified. The thing is, this doesn't always happen. Sometimes the AI does come up with a result that is better than humans. Like winning at chess better than any other known method. Or making predictions better, as in my example.

WookieeB writes:
And now we have the core of your argument! It is essentially summed up in one word: method. Unsurprisingly, there is no detail given of what this "method" would be.

That's the thing - sometimes the "method" cannot be identified because the AI created the method and we don't even understand it. Sometimes the "method" can be identified - and we learn that the AI found an interesting way to solve a problem we never knew about. Check this out:

Humans: "You shouldn't be able to create an oscillator without a capacitor."
AI program: "You can create an oscillator without a capacitor if you use radio signals."
- AI is teaching humans
- AI is teaching the programmers of the AI

Is it possible for humans to know/learn this? Of course. Just as it is possible for humans to learn/win at chess as the AI can. The point is, though, that the AI did it first and taught the humans - even the human programmers didn't know about the method the AI created.

WookieeB writes:
If the people you are referring to are the programmers, the first part is true. But the second part is not. The programmers knew the how to - they programmed the how to in their code. They may not be able to specify the results/output from any specific moment in time because they cannot process the data as fast as a computer. But that doesn't mean that the computer is acting any way other than what they programmed it to do.

There are things that AI can teach people that no person knew about (not even the AI programmers) before the AI figured it out. The point isn't "is the AI doing something it isn't programmed to do?" The point is "are humans teaching the program? (normal programming) or is the program teaching humans? (AI programming)". If you can't identify that difference, or don't think it's significant - you're just being silly.
WookieeB, Member (Idle past 150 days), Posts: 190
@Stile
You are still missing the point.
Stile writes:
It is a fact that AI programming can produce better results than humans.

Nobody is contesting this. I agree that some programs can do tasks better than a human, and as I acknowledged in the comment you quoted, that is often the goal of programs. As an example, advanced chess programs have been beating humans consistently since the turn of the century. Ever heard of Deep Blue? But it is an irrelevant observation.

The question has NOT been: "Can a program/algorithm do something better than humans?" The question has been: "Did a non-intelligent process, specifically the AlphaZero chess program, generate new information?" 'Doing something better than a human' does not equate to 'new information'.

The fact that the hypothetical prediction program made a better prediction than the 'expert' humans is a wonderful thing. But that is what it was designed to do. Even if the solution/output/goal reached was beyond what a human had yet thought of, the way to get to that output WAS thought of... it was the program given to the computer. The computer did nothing on its own.

For the AlphaZero chess program, even though it can beat the pants off the best human players (I don't think this has been tested, but there is no reason to doubt it), and even though it can beat other chess programs, it is still just following the instructions given to it by the programmers. Even if the output is a style of play not recognized before by human players, the output is still a result of the programming given to it. The AlphaZero program did not create anything "new" itself.

You see, all the information on how to play the game, the rules (algorithm) to compare and give weight to moves in order to win, and the procedure to update the weights (update the main algorithm) were all provided to it by intelligent mind(s). All the information - the billions+ of possible games, however many moves deep - was in essence provided by the programmers for the program. All the information on how to sift through and find the 'best' choices with the goal of winning (or drawing) a game was given to the computer.

If you think about it, if you had given a human the task of unthinkingly following the programming, they would come up with the exact same results as the program. The only difference is the speed of computation that the computer has vs a human. What would take a human (probably) years to do, the computer can accomplish in a few seconds.

I think where the problem comes in is that people are confusing what should be called the "method" - the algorithm - with the results/output of processing the "method". The output might be something that a human had not thought of, seemingly something novel. But in reality it is not anything new, as that possible output was accounted for in the programming. All the program did was search through all the possible data points (of which that 'novel' result was a member) and select that particular output, based on programming instructions.
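A toy Python sketch of the picture WookieeB paints here - every "style" the program can output lies inside a space the programmers fixed in advance. (The feature names and the stand-in scoring function are assumptions for illustration; the real AlphaZero uses a neural network trained by self-play and is far more elaborate.)

    import random

    # The programmers choose the features; every possible "style" is just a
    # setting of these weights, i.e. a point in a predefined space.
    FEATURES = ["material", "mobility", "king_safety"]

    def random_style():
        return {f: random.uniform(-1.0, 1.0) for f in FEATURES}

    def tournament_score(style):
        # Stand-in for actually playing games with these weights.
        return sum(style.values()) + random.gauss(0.0, 0.1)

    # "Self-improvement" here is a search within that predefined space.
    best_style = max((random_style() for _ in range(1000)), key=tournament_score)
    print(best_style)   # may look novel to a human, yet lies inside the designed space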
Stile writes:
If you can't identify that difference, or don't think it's significant - you're just being silly.
I see the difference, but I do not think it is significant. You have not justified why it is.
Humans: "You shouldn't be able to create an oscillator without a capacitor." AI program: "You can create an oscillator without a capacitor if you use radio signals." -AI is teaching humans -AI is teaching the programmers of the AI I'd like to see the reference for this story. But again, it's somewhat irrelevant.That a human didn't think an oscillator could be created without a capacitor is not the issue. What would be interesting is whether the computer program was told you can't build an oscillator without a capacitor. I'll bet there was nothing in the programming that prevented it, and instead that eventuality was allowed in the programming. sometimes the "method" cannot be identified because the AI created the method and we don't even understand it.
Nope. Don't buy it. The output might be something a human doesn't understand, but the method getting to that output is understood - a programmer provided it.
The point is "are humans teaching the program? (normal programming) or is the program teaching humans? (AI programming.)"
You're creating your own definitions here for normal vs AI programming, and it seems like circular reasoning to me. Because, again, would a program not be considered AI if it didn't "teach" something to humans? You seemed to indicate "No" to that question in your post.
Stile, Member (Idle past 244 days), Posts: 4295, From: Ontario, Canada
WookieeB writes:

Stile writes:
Humans: "You shouldn't be able to create an oscillator without a capacitor."
AI program: "You can create an oscillator without a capacitor if you use radio signals."
- AI is teaching humans
- AI is teaching the programmers of the AI

I'd like to see the reference for this story.

Page 153 of the book Superintelligence: Paths, Dangers and Strategies by Nick Bostrom. The book just describes the scenario, but it includes a footnote to look up the original: Bird and Layzell (2002) and Thompson (1997); also Yaeger (1994, 13-14).

If we look up Bird and Layzell, you can get to this page:
Radio Emerges from the Electronic Soup

Here's another quote from that article:
quote:
To pick up a radio signal you need other elements such as an antenna. After exhaustive testing they found that a long track in the circuit board had functioned as the antenna. But how the circuit figured out that this would work is not known.

The AI was programmed to make an oscillator out of transistors. No one told it to make an antenna. No one told it what an antenna is, or how an antenna works. The AI figured out that, if you only have transistors (no capacitor), the best way to make an oscillator is to make a radio. Bird and Layzell didn't know that. No one did. Until the AI figured it out and showed them.

WookieeB writes:
The output might be something a human doesn't understand, but the method getting to that output is understood - a programmer provided it.

This is the point of AI - allowing the program to develop the method, which we sometimes don't understand. Quoted again, this time bolded:
WookieeB writes:
The output might be something a human doesn't understand, but the method getting to that output is understood - a programmer provided it.

Sure, "the method getting to that output is understood" - it's iterative based learning they programmed into the AI.
- no one cares about this point

What you keep missing is "sometimes we can't figure out the method the AI created during the iterative based learning, and that method is better than anything humans were able to previously identify."
- this is "AI creating new information" that humans did not create beforehand

You can muddle definitions all you want - this idea isn't going away.
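For reference, the kind of search Bird and Layzell ran is an evolutionary (genetic) algorithm. A bare-bones Python skeleton of the idea follows; the bitstring genome and the stand-in fitness function are assumptions for illustration, since their real fitness score came from voltages measured on live hardware:

    import random

    GENOME_BITS = 32   # e.g. which transistor switches are open or closed

    def random_genome():
        return [random.randint(0, 1) for _ in range(GENOME_BITS)]

    def fitness(genome):
        # Stand-in for "how strongly does the measured output oscillate?"
        return sum(genome)

    def mutate(genome, rate=0.05):
        return [bit ^ (random.random() < rate) for bit in genome]

    population = [random_genome() for _ in range(50)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]                       # keep the best circuits
        children = [mutate(random.choice(parents)) for _ in range(40)]
        population = parents + children
    print(max(fitness(g) for g in population))

The loop itself is fully specified by the programmer; which genome survives it is not, and that gap is what the two posters are disagreeing over.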
WookieeB writes:
You're creating your own definitions here for normal vs AI programming, and it seems like circular reasoning to me. Because, again, would a program not be considered AI if it didn't "teach" something to humans? You seemed to indicate "No" to that question in your post.

No. AI is, in a few words, "iterative based learning."

This can sometimes develop things humans don't understand, to solve problems humans can't solve.
- That's the interesting part that everyone talks about.

This can sometimes develop things humans already understand, or things not as good as solutions already discovered by humans.
- This isn't interesting, so no one talks about it, but it's still AI.

The fact you can't make go away is that it's possible for AI to develop ideas that humans didn't program in (because "iterative based learning" creates ideas). Many times these ideas are useless or worse. Sometimes they're better. Sometimes they're new. Sometimes they're new and better.
Dogmafood, Member (Idle past 549 days), Posts: 1815, From: Ontario Canada
I don't know much about programming but I drink lots of beer.
WookieeB writes:
The question has been: "Did a non-intelligent process, specifically the AlphaZero chess program, generate new information?"

What would qualify as new information? You are saying that a program never does anything that it wasn't forced to do by the programmer. That seems right until the program has no boundaries.
WookieeB, Member (Idle past 150 days), Posts: 190
Stile writes:
Page 153 of the book Superintelligence: Paths, Dangers and Strategies by Nick Bostrom. The book just describes the scenario, but it includes a footnote to look up the original: Bird and Layzell (2002) and Thompson (1997); also Yaeger (1994, 13-14).

Ahh yes. The devil is in the details. I looked up the experiment, and it turns out that you have mischaracterized what happened. If you were relying on Bostrom's account, it might be excused, as his description lacked detail and he was focusing on one aspect.
Bostrom writes:
Another search process, tasked with creating an oscillator, was deprived of a seemingly even more indispensable component, the capacitor.

The word "seemingly" is key. It initially gives the impression that the capacitor is a must-have item, but in truth it is not, and the experimenters knew this.

Stile writes:
Humans: "You shouldn't be able to create an oscillator without a capacitor."
No, that was not the impression. Layzell specifically set up the scenario for the system to find solutions, via the "evolutionary algorithm" given to it, for developing an oscillator with the switches and transistors it had available. The parameters of the setup were extremely sensitive, but that sensitivity is what allowed solutions to be found. And they did find multiple solutions, not just the "radio" version.
AI program: "You can create an oscillator without a capacitor if you use radio signals."
You can also make an oscillator without a capacitor and with just transistors, which the AI did find. This was all designed into the experiment.
Stile writes:
The AI was programmed to make an oscillator out of transistors.
Which it did.
Stile writes:
No one told it to make an antenna.
And the AI didn't make an antenna. The designers did, albeit without initially knowing it. It was just an artifact of physics: the printed circuit board tracks (not the AI) naturally amplified radio signals in the air (in this case generated by nearby PC monitors). The AI was, by design, already tracking voltages. So when it detected changes in the impedance, it just made use of that environmental factor in its normal course of action toward developing an oscillator. It still didn't know what an antenna is, or how it works - none of that mattered to the AI.

Stile writes:
No one told it what an antenna is, or how an antenna works.

In the same experiments, they noted that they had some successful oscillator runs that were due to a soldering iron being plugged in nearby. When they unplugged the iron, the oscillations failed. This was due to a slight variance in input voltage that the plugged-in iron caused for the experiment. This just shows the high sensitivity of the components and how unforeseen outside influences could affect the results.
Stile writes:
The AI figured out that, if you only have transistors (no capacitor) - the best way to make an oscillator is to make a radio.
For one, the radio version was not the "best" way. They had other successful oscillation runs not using the radio signals. Secondly, crediting the AI with figuring this out is like crediting a dentist with creating a radio just because one of their patients heard radio signals in their head after getting metal fillings or braces.
quote:
To pick up a radio signal you need other elements such as an antenna. After exhaustive testing they found that a long track in the circuit board had functioned as the antenna. But how the circuit figured out that this would work is not known.
How the AI "figured this out" IS known. But of course, this all depends on what is meant by the (loaded) wording of "figure out". If one is just referring to the AI, per its programmed direction, measuring changing voltages (and other things), sensing the changes the radio waves caused in the circuit's impedance, and then making use of those varying voltages per its programming... then OK, it "figured out" how to use radio waves. But if you think that the AI somehow became aware of what radio is and decided to produce 'something' new apart from its instructions, you're kidding yourself. The radio effect was there before the AI did anything, and all it did was make use of an input that it was already designed to detect and modify. The AI had no idea whether radio was involved or not. It has no idea what "radio" even means, in any form.

Stile writes:
Sure "the method getting to that output is understood" - it's iterative based learning they programmed into the AI. .... What you keep missing is "sometimes we can't figure out the method the AI created during the iterative based learning method ....

You're contradicting yourself here.
Stile writes:
AI is, in a few words, "iterative based learning."

And another name for "iterative based learning" is brute-force processing. And this is really the only advantage a computer has over a human: the speed of calculating through an iterative process is orders of magnitude greater for a computer than for a human. Otherwise, a human could come to the exact same conclusions (output) as a computer program. In both cases, it's merely plodding through a set of rules.

All of the above is frankly academic, but it doesn't address the original contention - how did the AlphaZero program create new information?
WookieeB, Member (Idle past 150 days), Posts: 190
Dogmafood writes:
You are saying that a program never does anything that it wasn't forced to do by the programmer. That seems right until the program has no boundaries.

Yes, that is essentially correct. In the case of the AlphaZero program in relation to chess (it was programmed for other games besides chess), everything from how to play chess, to analyzing moves and sets of moves, to weighing moves to achieve a winning outcome, was given to the computer. Add a lot of memory and processing power and let it go. The outcome? A set of moves that can beat the next best chess program out there. Everything it did was by design!

But I wonder what you mean by 'a program that has no boundaries'? Boundaries are inherent to programming, so you'd need to define what you mean.
Dogmafood writes:
What would qualify as new information?
Put simply, anything produced outside the 'bounds' of the programming. So I would expect AlphaZero to be able to play any chess game and excel, because that is what the bounds of the program allow. But could it produce output not related to chess? How about output applicable to another game, like checkers, or a chess variant (3D chess)? Even if you gave it the move rules for another game, unless somebody redesigned the weighing and updating algorithm, I bet it would fall flat on its face when playing anyone halfway competent.
Stile, Member (Idle past 244 days), Posts: 4295, From: Ontario, Canada
WookieeB writes:
And the AI didn't make an antenna. The designers did, albeit without initially knowing it.

Nope - you're confusing two different ideas again. As I said before:

Sure, "the (general) method getting to that output is understood" - it's iterative based learning they programmed into the AI.
- no one cares about this point
- this is how you're saying "the designers made the AI make the antenna without knowing about it"
- again - this is accepted, but it's irrelevant as it's silly
- again - no one cares about this point because it's too silly to try and make it seem significant

What you keep missing is "sometimes we can't figure out the (specific) method the AI created during the iterative based learning, and that method is better than anything humans were able to previously identify."
- this is "AI creating new information" that humans did not create beforehand

Again, from the article:
quote:
But how the circuit figured out that this would work is not known.

- this is the really cool and interesting point
- the program itself created an algorithm to create an antenna; the AI didn't even know an antenna would specifically help (as you said - there were multiple other solutions that didn't involve an antenna at all)
- but the program figured this out, made a "decent enough" antenna and ran with it - all without the programmers having any idea at all that it could possibly go in this direction - and the programmers STILL can't figure out how the AI made these connections and moved forward

Do you understand this? We have all the code for the AI available to us - and we STILL can't read its mind! You can muddle definitions all you want - this idea isn't going away.
WookieeB writes:
And another statement for "iterative based learning" is brute force processing.

This is not true at all. Perhaps you simply do not understand programming. "Iterative based learning" and "brute force processing" are basically two extremely opposite programming methods... like "left" and "right" on the political spectrum. If you think they are the same thing, I then understand why you keep missing the other key points.

"Brute force" is used when the algorithm is completely understood, but the act of "doing the calculations" would take too long for a human. Set a computer to do the "calculations" and it spits out the answer.
- there is no teaching of an algorithm from program to humans here, only teaching of the answer

"Iterative based learning" is used when the algorithm is completely unknown (even unknown if an algorithm exists or not) - a program is written (AI) to "create algorithms" and try them out. Sometimes none can be found. Sometimes many are found and all are previously understood anyway. Sometimes many are found and some of those are new information that humans didn't understand before: like the ability to beat the existing "best chess" computer, or the ability to create an oscillator with transistors by making a radio with an antenna.
- there is the possibility here for the program to teach humans an algorithm they were previously unaware of, and the answer may or may not be identified in the end
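The mechanical difference being argued here can be shown in a few lines of Python (the scoring function over four binary switches is made up purely for illustration):

    import random
    from itertools import product

    def score(cfg):
        # A made-up objective over 4 binary switches.
        return cfg[0] + 2 * cfg[1] - cfg[2] + 3 * cfg[3]

    # "Brute force": the procedure is fully known; enumerate every case.
    best_bf = max(product([0, 1], repeat=4), key=score)

    # "Iterative learning": start anywhere, keep feedback-driven improvements;
    # the sequence of changes taken is not written out anywhere in advance.
    cfg = [random.randint(0, 1) for _ in range(4)]
    for _ in range(100):
        trial = list(cfg)
        trial[random.randrange(4)] ^= 1     # flip one switch and test it
        if score(trial) >= score(cfg):
            cfg = trial                     # keep changes that do at least as well
    print(best_bf, tuple(cfg))

Both arrive at an answer here; the dispute is over whether the second route counts as the program finding a "method" of its own.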
WookieeB, Member (Idle past 150 days), Posts: 190
Stile writes:
What you keep missing is "sometimes we can't figure out the (specific) method the AI created during the iterative based learning method"
No, this is not true. They can ALWAYS figure out the method, because they programmed the method. You are confusing method and algorithm with the output data of what is being sought. Programmers put a method, an algorithm, to find something out. Even if they program a changeable parameter (a variable) within the method, they still understand the method completely because they made it. The programmers may not know ahead of time what the output will ultimately look like, but that output will ALWAYS be constrained by the parameters of the method/algorithm that they created. Even if the method/algorithm is being updated by a random or variable parameter, that is still a specification that the programmers put in. It is all BY DESIGN.
this is "AI creating new information" that humans did not create beforehand
No, it is not. You are not understanding the concept of information as it relates to computer programming.
quote:
But how the circuit figured out that this would work is not known.
It didn't figure anything out. The designers had their hardware board that contained an arrangement of transistors and switches. Before even running the experiment, the designers KNEW that they could produce, or likely produce, an oscillator from their hardware (without a capacitor). They maybe didn't know the precise or best configuration of switching transistors that would produce the desired output, but they knew that within the constraints of their design it was (likely) possible. Put simply, their "AI" program was programmed to take an action among the controlled switches, monitor the output for oscillation (yes/no), and then make a note of it.

In the 'radio' environment, even before they involved the AI, there was a 'radio' functioning on the hardware. There was likely a random pre-configuration of the switches that was conducive to all of this. Then there was some effect from the electromagnetic waves in the environment (in this case from computer monitors) reacting with the wiring (as an antenna) that caused a shifting of voltages on the physical copper - i.e. a radio. Whether or not the designers realized this ahead of time is irrelevant. It was a physical property that existed.

Now run the AI. Does the computer program detect "radio"? NO! ALL the computer program identifies is an oscillation of voltages, because that is what it was designed to detect. That it happens to be coming from a physical effect of 'radio' waves on the hardware is something the computer is blind to. It has no understanding of, nor any capability to understand, what radio is. All it sees is 'oscillating: yes/no?' and the ability to tweak its switching to push that output toward a pre-specified target. The computer AI has no more understanding of "radio" than it does of the "soldering iron" that produced some oscillations when plugged in nearby (which went away when the iron was unplugged), or of the "human hand" that was found to affect some voltages when it came nearer to the board.

The only surprise in all of this was that the designers were not aware of the radio effect on the wiring. When they looked at a configuration that the AI had identified as Oscillator:YES, they had to investigate why something that normally should not have worked was working. The AI certainly did not tell them "radio" in any fashion; all it said was: configuration:xyz is Oscillator:YES. It was the minds of the designers that discovered "radio" was what worked. The AI or circuit didn't figure anything out. So....
Stile writes:
the program itself created an algorithm to create an antenna
No, it didn't. The antenna was already there before the AI did anything. The AI made use of an alternating voltage, but it was designed to do so. That an antenna caused that alteration the AI never knew, nor did the designers realize it until after the DESIGNERS investigated.
Stile writes:
but the program figured this out, made a "decent enough" antenna and ran with it - all without the programmers having any idea at all that it could possibly go in this direction - and the programmers STILL can't figure out how the AI made these connections and moved forward.
The program detected a voltage oscillation, which it was designed to look for and react to. That it was caused by an antenna was invisible to the program. Whether or not the programmers had an idea that it could go this direction is irrelevant. That a program could modify switches to further affect voltage oscillations WAS an idea the designers had, and that is what the program did to the voltages caused by the 'radio' effect. The programmers of course DID figure out how it all worked; otherwise we wouldn't be hearing about it.
Stile writes:
We have all the code for the AI available to us - and we STILL can't read its mind!
Overstate things much? This is BS. A computer and/or computer program is not a mind.
Stile writes:
You can muddle definitions all you want
You're the one muddling definitions.

Stile writes:
"Iterative based learning" and "brute force processing" are basically two extremely opposite programming methods

Iteration basically means a repetition of a process. When it comes to programming, it is basically a self-contained process that repeats itself until some variable causes a break in the repetition. Brute force is basically the same thing, just said in a different manner: you repeat a process multiple times, with some slight change in the input (one variable), until you get a desired output (another variable) that stops the repetitive process. You are just hung up on whatever meaning you have for "learning" in your "iterative based learning" phrase.
"Brute force" is used when the algorithm is completely understood - but the act of "doing the calculations" would take too long for a human. "Iterative based learning" is used when the algorithm is completely unknown (even unknown if an algorithm exists or not) - a program is written (AI) to "create algorithms" and try them out. The first part is all true. The second part is just crap. "Program" and "Algorithm" are pretty much synonymous. If you want to quibble about the definitions, then an "algorithm" is the set of rules to be followed in calculations, and that at the least would be in the program given to a computer. So to say that the algorithm is unknown is foolish. Even if there are iterations and variables allowed in the program that can change what is considered the "algorithm", it is all still within the bounds of the program and is KNOWN by the designers.
Stile writes:
Sometimes many are found and some of those are new information that humans didn't understand before: like the ability to beat the existing "best chess" computer, or the ability to create an oscillator with transistors by making a radio with an antenna.
- there is the possibility here for the program to teach humans an algorithm they were previously unaware of, and the answer may or may not be identified in the end

You are confusing the algorithm with what the algorithm is designed to find: some output. The algorithm is the processing rules. Those rules are what the designers decided, and they are within the bounds of the program. The result of running that algorithm, the OUTPUT, might be something that the designers have never seen before... but finding that output is what the algorithm was designed for.

It is not the algorithm that teaches humans. It is the result of the algorithm, the output, that can teach humans. And what it teaches the humans is totally outside what the algorithm does. In the oscillator experiment, the algorithm was designed to find an oscillator given the hardware provided. That one of those oscillators came from the effect of radio waves was not something recognized by the program - all it saw was oscillator:yes. It was the humans that discovered the knowledge that it was a "radio". In the chess program, the algorithm was designed to play chess and output a winning strategy. That it was recognized as an unusual style of play was an observation made not by the program but by an independent human mind, outside any involvement of the program.

Thus, it is always the OUTPUT that teaches humans, after humans reflect on it. The computer NEVER does that.