EvC Forum: Understanding through Discussion
Author Topic:   Generating information in a neural network playing chess
Posts: 190
Joined: 01-18-2019

Message 3 of 33 (871870)
02-14-2020 12:27 AM
Reply to: Message 1 by JonF
02-03-2020 10:10 AM

A sticking point among a lot of ID/Creationists seems to be whether information can be generated by a non-intelligent process
A bit of a vague, non-committal, and not necessarily true statement. Nevertheless, it has no bearing here, because even if you classify what was generated as information, it was generated by an intelligent process: the AlphaZero program and its algorithms were made by intelligent minds.

This message is a reply to:
 Message 1 by JonF, posted 02-03-2020 10:10 AM JonF has replied

Replies to this message:
 Message 4 by JonF, posted 02-14-2020 9:42 AM WookieeB has replied


Message 5 of 33 (871928)
02-16-2020 12:54 PM
Reply to: Message 4 by JonF
02-14-2020 9:42 AM

The Alpha Zero program wrote its own "algorithms".
LOL, no it didn't. It was programmed with an MCTS (Monte Carlo tree search), as opposed to the alpha-beta search of other chess engines, looking for slightly different outcomes. Add in plenty of memory and compute power and you end up with the best chess program to date.
But it's intelligently designed all the way down.

This message is a reply to:
 Message 4 by JonF, posted 02-14-2020 9:42 AM JonF has replied

Replies to this message:
 Message 6 by JonF, posted 02-16-2020 3:43 PM WookieeB has replied
 Message 7 by RAZD, posted 02-16-2020 4:39 PM WookieeB has replied


Message 10 of 33 (871999)
02-17-2020 5:20 PM
Reply to: Message 6 by JonF
02-16-2020 3:43 PM

It wrote its own algorithms.
No, it didn't. Did you even look at the paper referenced about the program? The one named "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm", which describes the general idea of what was developed as the program, and what instructions and concepts were programmed in by the programmers?

This message is a reply to:
 Message 6 by JonF, posted 02-16-2020 3:43 PM JonF has not replied


Message 11 of 33 (872001)
02-17-2020 5:27 PM
Reply to: Message 7 by RAZD
02-16-2020 4:39 PM

Re: Evolved algorithms
No, it didn't "evolve" its own algorithm. It was programmed with the rules of chess (and other games, as it turns out), provided with formulas for weighing different moves in the game, and given memory and the ability to compare different outcomes so that it could use what it had previously determined and build from there. All the instructions for it to do what it was going to do, the algorithm, were given to it by the programmers. The program did not learn any new tasks or abilities that were not given to it.

This message is a reply to:
 Message 7 by RAZD, posted 02-16-2020 4:39 PM RAZD has replied

Replies to this message:
 Message 12 by RAZD, posted 02-22-2020 2:15 PM WookieeB has replied


Message 13 of 33 (872264)
02-23-2020 10:15 PM
Reply to: Message 12 by RAZD
02-22-2020 2:15 PM

Re: Evolved algorithms
It was programmed with the rules of evolution...
A bit of a generous description, don't you think? After all, there was no mention of the word "evolution" or "evolve", and only one instance of "selection" that had nothing to do with what could be inferred as a "rule of evolution". But hey, if you want to say that, I'll give that to ya.
And yet even with that sort of description, the results end up being totally within the parameters of a designed system and show nothing akin to what biological evolution is claimed to accomplish. I'm curious what you think "evolved" in the system.
The search space of moves is immense and could not be fully searched in detail with modern technology. And yet that space all falls within the parameters of the rules of chess. The programmers gave the chess program a way to heuristically search through the space, find patterns, and apply weights to what it searched, to build a statistically more-likely-to-win method of playing the game. It was intelligently designed all the way down, and in the end, no matter how well the program performed its job (surviving, in the analogy), it was still just 'playing' chess... as designed. It didn't learn or develop anything new on its own.
If the whole point was to take the chess algorithm and analogize it to something like natural selection, OK. Nobody, including any ID proponent, is seriously debating that natural selection is a real thing. But I disagree with the title of the PT article, in that there was no complex information created, nor was the process random (claims the paper on the chess program never even hinted at). Any information the program dealt with, and the process itself, were programmed in already: i.e., chess rules, search methods, and the algorithm(s) for weighing results. Beyond doing what it was programmed to do in a way that exceeded prior programs, there's nothing there to blow a horn about.
Edited by WookieeB, : cleanup

This message is a reply to:
 Message 12 by RAZD, posted 02-22-2020 2:15 PM RAZD has replied

Replies to this message:
 Message 14 by RAZD, posted 02-25-2020 11:24 AM WookieeB has replied


Message 15 of 33 (872336)
02-25-2020 4:58 PM
Reply to: Message 14 by RAZD
02-25-2020 11:24 AM

Re: Evolved algorithms
I'm curious what you think "evolved" in the system.
The winning Strategy.
How so? What is your definition of "evolved" then in this instance?
It was not programmed with the winning strategy. So no, it was not intelligently designed all the way down; it had to develop that strategy through trial and error.
What an odd statement! Of course the program was not given the explicit "winning strategy". If the programmers had 'the winning strategy', they wouldn't need to program anything. They did, though, program how to find a winning strategy. Everything the program did was according to the rules (i.e. the algorithm) that it was programmed with. Even the resulting weights and best choices for playing the game were a result of the program. That's what is meant by it being "intelligently designed all the way down", a term you apparently had no clue how to understand.
Your logic makes no sense. You are implying that the output of any search application is the application itself generating information and/or learning to do some task on its own. But that is just not the case.
For example, I could develop a computer program to take the addresses of the roughly 327 million people in the USA and then filter out those living in Arizona. Even before I give it the initial data set (the 327M people), the algorithm is already created. Now I feed the program the initial data set and it spits out about 7.2 million names. By your logic, it was the computer program itself that developed how to get a list of Arizona dwellers. That is silly.
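A toy version of that filter makes the point concrete (the names and record layout are invented for illustration; Python is used as a neutral choice):

```python
# Toy records standing in for the ~327 million-person address data set.
people = [
    {"name": "Ada", "state": "AZ"},
    {"name": "Ben", "state": "CA"},
    {"name": "Cara", "state": "AZ"},
    {"name": "Dev", "state": "TX"},
]

def arizona_residents(records):
    # The selection rule (state == "AZ") is fixed by the programmer
    # before any data is seen; the output follows mechanically from it.
    return [r["name"] for r in records if r["state"] == "AZ"]

print(arizona_residents(people))  # ['Ada', 'Cara']
```

However large the real data set, nothing in the output goes beyond the rule written in advance.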
It learned how to win using an algorithm that was not part of the programming.
The hell it didn't!!! Put simply, an algorithm is a set of rules to follow to solve some question. The program was basically told how to play chess, and how to make the "best" move in a particular situation. What counts as "best" was not initially specified, but through further rules provided by the programmers, the program was given the ability to randomly and efficiently search through the chess-game-moves search space, apply weights to various actions, and filter options to get what would (according to the rules) be the most statistically winning results when playing. Put simply, the programmers told the program what would constitute a "win" and the program went out and did it. The program did nothing on its own that wasn't defined beforehand by a programmer.
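A stripped-down sketch of "apply weights, pick the statistically best move" (the moves and weights below are invented; this is not AlphaZero's actual code):

```python
def best_move(weighted_moves):
    # Pick the move whose weight is highest. The weights themselves
    # would be produced by the search-and-update rules the programmers
    # supplied; the selection rule here (max) is also supplied.
    return max(weighted_moves, key=weighted_moves.get)

# Hypothetical weights after some amount of self-play search.
weights = {"e4": 0.52, "d4": 0.51, "Nf3": 0.49, "a3": 0.33}
print(best_move(weights))  # e4
```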
Which is meaningless, as (A) this is not a defined biological term, (B) it is not quantifiable by any means I know, and (C) evolution does not have to do anything but evolution.
Funny how you'll quibble about this while not even knowing the meaning of words you used previously, like "algorithm". And since we're talking about information in a computer program, not a biological thing, your complaint is hollow. Then you end with a tautology? Pot, meet kettle.
But if you want to still quibble, drop the word "complex" since you have no understanding of the meaning, use the title as is, and my statement still stands.
Edited by WookieeB, : cleanup

This message is a reply to:
 Message 14 by RAZD, posted 02-25-2020 11:24 AM RAZD has seen this message but not replied

Replies to this message:
 Message 16 by dwise1, posted 02-26-2020 1:04 AM WookieeB has replied
 Message 21 by Stile, posted 02-27-2020 4:32 PM WookieeB has replied


Message 17 of 33 (872367)
02-26-2020 6:02 PM
Reply to: Message 16 by dwise1
02-26-2020 1:04 AM

Re: Evolved algorithms
What is it about programming that you do not understand?
I don't think I'm having any problem understanding what programming does. (BTW, I also have experience with software development.)
But perhaps you can explain then how a program does or creates something it was not programmed to do? The claim seems to be this program did something or created new information that was not accounted for in the programming.
Perhaps the problem is that those making the claim do not understand what "information" is in context of the claim. (hint: Information does not equal data)

This message is a reply to:
 Message 16 by dwise1, posted 02-26-2020 1:04 AM dwise1 has not replied

Replies to this message:
 Message 18 by ringo, posted 02-27-2020 11:03 AM WookieeB has replied


Message 19 of 33 (872447)
02-27-2020 1:17 PM
Reply to: Message 18 by ringo
02-27-2020 11:03 AM

Re: Evolved algorithms
But perhaps you can explain then how a program does or creates something it was not programmed to do?
That's the whole point of AI - getting the computer to figure things out for itself instead of being programmed with every possibility.
Well for one, you didn't answer the question.
Second, you answer with a false dichotomy.
"getting the computer to figure things out for itself" - well, getting the computer to figure things out is essentially what all programming does. What does "for itself" mean? The computer has no sense of self, and will only do what it is programmed to do.
"instead of being programmed with every possibility" - which happens... never, and is certainly not the case for this chess program (nor was that claim ever made). I'm not even sure what this is supposed to mean.
As for AI, that is a very vague term. But if you are in any way referring to a computer performing a function independent of its programming, then that has so far happened... never.

This message is a reply to:
 Message 18 by ringo, posted 02-27-2020 11:03 AM ringo has replied

Replies to this message:
 Message 20 by ringo, posted 02-27-2020 3:50 PM WookieeB has not replied


Message 22 of 33 (872605)
02-29-2020 3:37 PM
Reply to: Message 21 by Stile
02-27-2020 4:32 PM

Re: Evolved algorithms
Apologies for the length of this post, but the devil is in the details.
A machine was given the goal of not crashing into anything. It decided that the way to achieve that goal was to not move at all.
Nobody programmed it to do nothing. It figured that out for itself.
I call BS. Show me the code. When you say "It decided" I'll bet there was some procedure in the code that allowed for or was a direct result of it not moving at all.
Stile writes:
AI programming, however, is not normal programming.
Normal programming - people invent an algorithm knowing how it will solve a problem - a computer can just do it faster
AI programming - people invent an algorithm without knowing if there's a better solution or not - the computer comes up with the "how to solve it" and it can be better than anything people have ever thought of before.
Your definition of Normal programming is sufficient. You have a problem or goal and you create a set of rules to address the problem. Fine. The only advantage of the computer is speed and/or efficiency. (Take out the direct reference to a computer and that definition fits just about any endeavor a person does)
The AI description is odd though, because you use a different standard. Instead of indicating that AI programming is also about solving a problem, you make the standard whether it is "better" than something else. What makes it "better" you don't define, but I would assume whatever the metric is, "better" means compared to human capabilities. So the moment a program does something "better" than a human, it becomes AI? But until it does perform better, is it just "normal" programming? Or if an AI program does better than a human, but later a human comes along and does better than the program, does it go from being AI to not-AI (normal)?
Secondly, the computer itself is not coming up with "how to solve it". You cannot just feed data into a computer and simply say "figure this out" without giving it instructions on how to figure it out. Computers don't know how to do anything except what they are told to do. And that really is the key! Even if a computer performs a task "better" than a human, it is still relying on the programming of a human to do anything. The output or result of the "how to solve it" question might be something a human hasn't thought of before, but the "how to solve it" method is ALL due to human instructions. This is a category difference you are not seeing.
Here is an example of AI programming in line with your example
In your example, which is just a hypothetical one, you end up making the same category error that you think justifies a claim that the computer itself created something new. Let's break it down.
-write AI algorithm to take the first year's data and predict the second - look up the answer and adjust it's own algorithm
Let's call this CYCLE 1.
So the first function of the program has an algorithm (code/instructions/rules to do something) to make a prediction off of some data it is fed (1st year). This algorithm no doubt has some search function, some filter to sift through the data, and actions to take on the results of the search which would constitute the "prediction".
Then there is a second function of the program that takes an "answer" and does something with it. You don't specify what the "answer" is, so I have to make an educated guess. Do you mean the actual data from real life after a specified time period (2nd year)? If so, this is more data fed to the computer... which is then compared against (I assume) the prediction data. Then, based on more programming (another algorithm), the first function's algorithm is adjusted to be more accurate, based upon the comparison between the prediction and the real data. I suspect this would fall under machine learning. The success of this process depends on the amount of data given to it.
All of this is intelligently designed, goal-directed, programming. There is no property of the computer itself that could come up with results other than what the instructions were given to it stipulate.
-then take the first two years data and predict the third - look up the answer and adjust it's own algorithm
This is CYCLE 2. Pretty much the same as CYCLE 1, except now (I'm assuming) its ingested data includes data that was somewhat already pre-processed in the prior cycle (2nd-year results compared against the prediction to provide data for the adjustment algorithm).
Still.....there is no property of the computer itself that could come up with results other than what the instructions given to it stipulate.
-then take the first 3 years data and predict the 4th - look up the answer and adjust it's own algorithm
CYCLE3 - same thing as CYCLE 2 with more data.
Computer following instructions....still.
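The cycle structure described above (predict, look up the answer, adjust) reduces to a loop like the following. The yearly rates and the proportional update rule are invented for illustration:

```python
# Hypothetical yearly moving rates; each cycle predicts the next year.
rates = [10.0, 11.0, 12.1, 13.3]

weight = 1.0         # the adjustable parameter inside the fixed method
learning_rate = 0.5  # how strongly each error nudges the parameter

for year in range(1, len(rates)):
    prediction = rates[year - 1] * weight                # fixed prediction rule
    error = rates[year] - prediction                     # "look up the answer"
    weight += learning_rate * error / rates[year - 1]    # fixed update rule

next_year_prediction = rates[-1] * weight
```

The prediction rule, the error measure, and the update rule are all specified ahead of time; only the numeric value of `weight` changes as the data arrives.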
-ask any person to predict the moving rate for next year; especially those who are our "current experts" on it
This should be irrelevant, but I think you are here trying to set up the new category to apply to your computer. So, we ask human "experts" to predict results for year 5.....
-ask the AI program to predict the moving rate for next year
...and the computer is doing the same thing in performing the first function of CYCLE 4 (if following the pattern of prior cycles, it's using data from years 1-4 plus its thrice-updated "prediction" function to predict 5th-year results)
Now comes the trick...
-When next year comes, the AI program had a "closer/better" prediction than any other person
No quantitative description is given here, just "closer/better". I'm not sure why this should be significant. After all, isn't the goal of the program to accurately predict results faster or better than humans are normally capable of? What about the prior years' predictions; how do the CYCLE 1-3 results compare? If those results were not "better", was it not considered AI yet?
-Upon inspection of the AI program, it used a method to predict the moving rate that no person has ever thought of using...
And now we have the core of your argument! It is essentially summed up in one word: method. Unsurprisingly, there is no detail given of what this "method" would be. Nevertheless, supposedly in this hypothetical scenario, whatever the first function algorithm of the cycle did, its "method", was significant because it was what "no person has ever thought of using". But why is that significant? Is doing some task that a human wouldn't or couldn't do the standard for calling a computer... intelligent? Having its own sense of intentionality? Special? Or is it significant because the "method" produced results "closer/better" than the "experts"?
You see, you've erected some threshold defined by whatever the human "experts" could do, and when the computer exceeded that threshold, you then dubbed it as having accomplished something on its own. That is the category error you are making.
But what you are failing to see is that a human mind did actually think of the "method", albeit indirectly. The programmers gave it everything it needed to develop its "method" without having to know what the end result would be.
Saying it another way, there is nothing in the "method" that could not be accounted for by the original programming.
...(possibly because it's too complicated, and/or possibly because no one thought it was applicable.)
Both cases are irrelevant. I'm not completely sure what you mean by "complicated", but if it has to do with the processing capability of a computer vs a human... so what? If you took a human and gave them the rules of the program to follow, it might take a LOT LONGER to accomplish, but the human would arrive at the exact same results. As for being applicable, that acknowledges that someone HAD thought of the "method" and it was subjectively dismissed... again, so what?
The AI program "invented" a way to predict the moving rate that didn't exist before.
Yes, people invented the AI - but that's missing the point.
If "invented" means followed rules given to it by programmers that led to an analysis that was more accurate than other human "experts" had obtained through other means... OK. But it was all still based on programming by intelligent minds.
People invented the AI... yes, that is the point. People, not the AI computer itself independent of anything the people gave it, are responsible for developing the nebulous "method".
Just like the Chess AI "invented" a way to play chess that's better than any person or any "normally programmed" chess computer.
This way to play chess didn't exist before. No one knew about it - certainly not the programmers.
I'm not sure how you justify these statements, as the chess program so far hasn't played a person (which is irrelevant anyway), and how you are qualifying the other chess programs as "normally programmed" is without support.
The program may have developed a "style" of play not seen before, but that is a totally subjective observation. I don't think you can claim it is a new way to play; after all, it is still playing by the rules... that were given to the program.
Also, I find it very telling that none of the creators of the chess program have made claims of the program coming up with something new apart from its programming akin to what was proposed on this thread.
But if you don't see the difference: people knowing the "how to" beforehand - just being unable to practically do it, or people learning the "how to" from the computer program... then you're just being silly.
If the people you are referring to are the programmers, the first part is true. But the second part is not. The programmers knew the how to - they programmed the how to in their code. They may not be able to specify the results/output from any specific moment in time because they cannot process the data as fast as a computer. But that doesn't mean that the computer is acting any way other than what they programmed it to do.

This message is a reply to:
 Message 21 by Stile, posted 02-27-2020 4:32 PM Stile has replied

Replies to this message:
 Message 23 by Stile, posted 03-04-2020 12:09 PM WookieeB has replied


Message 24 of 33 (872839)
03-05-2020 2:23 PM
Reply to: Message 23 by Stile
03-04-2020 12:09 PM

Re: Evolved algorithms
You are still missing the point.
It is a fact that AI programming can produce better results than humans.
Nobody is contesting this. I agree that some programs can do tasks better than a human. And as I acknowledged in the comment that you quoted, that is often the goal of such programs. As an example, advanced chess programs have been beating humans consistently since the turn of the century. Ever heard of Deep Blue?
But it is an irrelevant observation.
The question has NOT been: "Can a program/algorithm do something better than humans?"
The question has been: "Did a non-intelligent process, specifically the AlphaZero chess program, generate new information?"
'Doing something better than a human' does not equate to 'new information'.
The fact that the hypothetical prediction program made a better prediction than the 'expert' humans did is a wonderful thing. But that is what it was designed to do. Even if the solution/output/goal reached was beyond what a human had yet thought of, the way to get to that output WAS thought of... it was the program given to the computer. The computer did nothing on its own.
For the AlphaZero chess program, even though it can beat the pants off of the best human players (I don't think this has been tested, but there is no reason to doubt it), and even though it can beat other chess programs, it is still just following the instructions given to it by the programmers. Even if the output is a style of play not recognized before by human players, the output is still a result of the programming given to it by the programmers. The AlphaZero program did not create anything "new" itself.
You see, all the information on how to play the game, any rules (the algorithm) to compare and weight moves in order to win a game, and the procedure to update the weights (update the main algorithm) were all provided to it by intelligent mind(s). All the information, the billions-plus of possible games, however many moves deep, was in essence provided by the programmers for the program. All the information on how to sift through and find the 'best' choices, with the goal of winning (or drawing) a game, was given to the computer.
If you think about it, if you had given a human the task to unthinkingly follow the programming, they would come up with the exact same results as the program. The only difference is the speed of computation that the computer has vs a human. What would take a human (probably) years to do, the computer could accomplish in a few seconds.
I think where the problem is coming in is that people are confusing what should be called the "method" - the algorithm, with the results/output of processing the "method". The output might be something that a human had not thought of, seemingly something novel. But in reality it is not anything new, as that possible output was accounted for in the programming. All the program did was search through all the possible data points (of which that 'novel' result was a member) and select that particular output, based on programming instructions.
If you can't identify that difference, or don't think it's significant - you're just being silly.
I see the difference, but I do not think it is significant. You have not justified why it is.
Humans: "You shouldn't be able to create an oscillator without a capacitor."
AI program: "You can create an oscillator without a capacitor if you use radio signals."
-AI is teaching humans
-AI is teaching the programmers of the AI
I'd like to see the reference for this story. But again, it's somewhat irrelevant.
That a human didn't think an oscillator could be created without a capacitor is not the issue. What would be interesting is whether the computer program was told you can't build an oscillator without a capacitor. I'll bet there was nothing in the programming that prevented it, and instead that eventuality was allowed in the programming.
sometimes the "method" cannot be identified because the AI created the method and we don't even understand it.
Nope. Don't buy it. The output might be something a human doesn't understand, but the method getting to that output is understood - a programmer provided it.
The point is "are humans teaching the program? (normal programming) or is the program teaching humans? (AI programming.)"
You're creating your own definitions here for normal vs AI programming. But it seems like circular reasoning to me. Because, again, would a program not be considered AI if it didn't "teach" something to humans? You seemed to indicate "No" to that question in your post.

This message is a reply to:
 Message 23 by Stile, posted 03-04-2020 12:09 PM Stile has replied

Replies to this message:
 Message 25 by Stile, posted 03-05-2020 3:16 PM WookieeB has replied
 Message 26 by Dogmafood, posted 03-05-2020 10:58 PM WookieeB has replied


Message 27 of 33 (873172)
03-10-2020 5:24 PM
Reply to: Message 25 by Stile
03-05-2020 3:16 PM

Re: Evolved algorithms
Page 153 of the book Superintelligence: Paths, Dangers, and Strategies by Nick Bostrom.
The book just describes the scenario, but it includes a footnote to look up the original:
Bird and Layzell (2002) and Thompson (1997); also Yaeger (1994, 13-14)
Ahh yes. The devil is in the details.
I looked up the experiment, and it turns out that you have mis-characterized what happened. If you were relying on Bostrom's account, it might be excused as his description lacked detail and he was focusing on one aspect.
Bostrom writes:
Another search process, tasked with creating an oscillator, was deprived of a seemingly even more indispensable component, the capacitor.
The word "seemingly" is key. It initially gives the impression that the capacitor is a must-have item, but in truth it is not, and the experimenters knew this.
Humans: "You shouldn't be able to create an oscillator without a capacitor."
No, that was not the impression. Layzell specifically set up the scenario for the system to find solutions, via the "evolutionary algorithm" given to it, to develop an oscillator with the switches and transistors it had available. The parameters of the setup were extremely sensitive, but that sensitivity is what allowed solutions to be found. And they did find multiple solutions, not just the "radio" version.
AI program: "You can create an oscillator without a capacitor if you use radio signals."
You can also make an oscillator without a capacitor and with just transistors, which the AI did find. This was all designed into the experiment.
The AI was programmed to make an oscillator out of transistors.
Which it did.
No one told it to make an antenna.
No one told it what an antenna is, or how an antenna works.
And the AI didn't make an antenna. The designers did, albeit without initially knowing it. It was just an artifact of physics, where the printed circuit board tracks (not the AI) naturally amplified radio signals in the air (in this case generated by nearby PC monitors). The AI was, by design, already tracking voltages. So when it detected changes in the impedance, it just made use of that environmental factor in its normal course of action in developing an oscillator. It still didn't know what an antenna is, or how one works; none of that mattered to the AI.
In the same experiments, they noted that they had some successful oscillator runs that were due to a soldering iron being plugged in nearby. When they unplugged the iron, the oscillations failed. This was due to a slight variance in input voltage that the plugged in iron caused for the experiment. This just shows the high sensitivity of the components and how unforeseen outside influences could affect the results.
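For reference, the search loop in experiments like Layzell's follows the standard evolutionary-algorithm shape. In the sketch below, a toy bit-matching score stands in for the physical voltage measurement (which is exactly the channel through which ambient radio or a plugged-in soldering iron could leak into the fitness score); the real rig is far more involved.

```python
import random

random.seed(0)

TARGET = [1, 0, 1, 1, 0, 1, 0, 0]  # invented stand-in for "oscillates well"

def fitness(config):
    # Toy stand-in for "how well does this configuration oscillate".
    # In the real experiment this score came from measuring voltages
    # on physical hardware, environment included.
    return sum(a == b for a, b in zip(config, TARGET))

def mutate(config):
    # Flip one randomly chosen switch setting.
    i = random.randrange(len(config))
    child = config[:]
    child[i] ^= 1
    return child

# A simple hill-climbing variant of the loop, fixed in advance:
# propose a variation, keep it if it scores at least as well.
best = [0] * len(TARGET)
for _ in range(200):
    candidate = mutate(best)
    if fitness(candidate) >= fitness(best):
        best = candidate
```

Every step (the representation, the mutation rule, the acceptance rule) is written by the experimenters; the search only exploits whatever the fitness measurement happens to reward.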
The AI figured out that, if you only have transistors (no capacitor) - the best way to make an oscillator is to make a radio.
For one, the radio version was not the "best" way. They had other successful oscillation runs not using the radio signals.
Secondly, crediting the AI with figuring this out is like crediting a dentist with creating a radio just because one of their patients has heard radio signals in their head after getting metal fillings/braces in their mouth.
To pick up a radio signal you need other elements such as an antenna. After exhaustive testing they found that a long track in the circuit board had functioned as the antenna. But how the circuit figured out that this would work is not known.
How the AI "figured this out" IS known. But of course, this all depends on what is meant by the (loaded) wording of "figure out".
If one is just referring to the AI, in its programmed role of measuring changing voltages (among other things), sensing the changes the radio waves caused on the circuits via changing impedance, and then making use of those varying voltages per its programming... then OK, it "figured out" how to use radio waves.
But if you think that the AI somehow became aware of what radio is and decided to produce 'something' new apart from its instructions to make use of radio waves, you're kidding yourself. The radio effect was there before the AI did anything, and all it did was make use of an input that it was already designed to detect and modify. The AI had no idea whether radio was involved or not. It has no idea what "radio" even means, in any form.
Sure "the method getting to that output is understood" - it's iterative based learning they programmed into the AI.
What you keep missing is "sometimes we can't figure out the method the AI created during the iterative based learning method ....
You're contradicting yourself here.
AI is, in a few words, "iterative based learning."
And another statement for "iterative based learning" is brute force processing. And this is really the only advantage a computer has over a human. The speed of calculating through an iterative process for a computer vs human is orders of magnitude better. Otherwise, a human could come to the exact same conclusions (output) as a computer program. In both cases, it's merely plodding through a set of rules.
All that above is frankly academic, but doesn't address the original contention - how did the AlphaZero program create new information?

This message is a reply to:
 Message 25 by Stile, posted 03-05-2020 3:16 PM Stile has replied

Replies to this message:
 Message 29 by Stile, posted 03-11-2020 8:53 AM WookieeB has replied


Message 28 of 33 (873174)
03-10-2020 6:07 PM
Reply to: Message 26 by Dogmafood
03-05-2020 10:58 PM

Re: Evolved algorithms
You are saying that a program never does anything that it wasn't forced to do by the programmer. That seems right until the program has no boundaries
Yes, that is essentially correct. In the case of the AlphaZero program and chess (it was programmed for other games besides chess), everything it needed, from how to play chess, to analyzing moves and sets of moves, to weighing moves toward a winning outcome, was given to the computer. Add a lot of memory and processing power and let it go. The outcome? A set of moves that can beat the next best chess program out there. Everything it did was by design!
But I wonder what you mean by 'a program that has no boundaries'? Boundaries are inherent to programming, so you'd need to define what you mean.
What would qualify as new information?
Put simply, anything produced outside the 'bounds' of the programming.
So I would expect AlphaZero to be able to play any chess game and excel, because that is what the bounds of the program allow. But could it produce output not related to chess? How about output applicable to another game, like checkers, or a chess variant (3-D chess)? Even if you gave it the move rules for another game, unless somebody redesigned the weighing and updating algorithms, I bet it would fall flat on its face against anyone halfway competent.
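Here is a minimal sketch of what "bounds" means (the function names and weights are mine, purely illustrative, not AlphaZero's actual design): the program can only ever emit a move its rules generator produces, ranked by the evaluation it was handed.

```python
# Hypothetical sketch: the output of a move-picking search is always
# confined to (a) the moves the rules generator yields and (b) the ranking
# the evaluation function imposes. Nothing outside those bounds can appear.

def legal_moves(position):
    return position["moves"]            # stand-in for the game's rules

def evaluate(position, move):
    return position["weights"][move]    # stand-in weighing function

def pick_move(position):
    return max(legal_moves(position), key=lambda m: evaluate(position, m))

pos = {"moves": ["e4", "d4", "Nf3"],
       "weights": {"e4": 0.6, "d4": 0.5, "Nf3": 0.4}}
print(pick_move(pos))  # -> e4
```

Swap in checkers moves without redesigning `evaluate` and the ranking is meaningless: the bounds of the design determine everything the output can be.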

This message is a reply to:
 Message 26 by Dogmafood, posted 03-05-2020 10:58 PM Dogmafood has not replied


Message 30 of 33 (873609)
03-17-2020 3:36 PM
Reply to: Message 29 by Stile
03-11-2020 8:53 AM

What you keep missing is "sometimes we can't figure out the (specific) method the AI created during the iterative based learning method
No, this is not true. They can ALWAYS figure out the method, because they programmed the method. You are confusing the method, the algorithm, with the output data of what is being sought. Programmers write a method, an algorithm, to find something out. Even if they program a changeable parameter (a variable) within the method, they still understand the method completely, because they made it. The programmers may not know ahead of time what the output will ultimately look like, but that output will ALWAYS be constrained by the parameters of the method/algorithm that they created. Even if the method/algorithm is being updated by a random or variable parameter, that is still a specification that the programmers put in. It is all BY DESIGN.
this is "AI creating new information" that humans did not create beforehand
No, it is not. You are not understanding the concept of information as it relates to computer programming.
But how the circuit figured out that this would work is not known.
It didn't figure anything out.
The designers had their hardware board, which contained an arrangement of transistors and switches. Before even running the experiment, the designers KNEW that they could produce, or likely produce, an oscillator from their hardware (without a capacitor). They may not have known the precise or best configuration of switching transistors that would produce the desired output, but they knew that within the constraints of their design, it was (likely) possible.
Put simply, their "AI" program was programmed to take an action among the controlled switches, monitor the output for oscillation (Yes/No), and then make a note of it.
In the 'radio' environment, even before they involved the AI, there was a 'radio' functioning on the hardware. There was likely a random pre-configuration of the switches that was conducive to all of this. Then electromagnetic waves from the environment (in this case, nearby computer monitors) reacted with the wiring (acting as an antenna) and caused shifting voltages on the physical copper: i.e., a radio. Whether or not the designers realized this ahead of time is irrelevant. It was a physical property that existed.
Now run the AI. Does the computer program detect "radio"? NO! ALL the computer program identifies is an oscillation of voltages, because that is what it was designed to detect. That the oscillation happens to come from the physical effect of 'radio' waves on the hardware is something the computer is blind to. It has no understanding of, nor any capability to understand, what radio is. All it sees is 'Oscillating: yes/no?' and the ability to tweak its switching to push that output toward a pre-specified target.
The computer AI has no more understanding of "radio" than it would of a "soldering iron" if some oscillations appeared when a soldering iron was plugged in nearby and went away when the iron was unplugged. It would likewise have no idea of the "human hand" found to affect some voltages whenever one came near the board.
The only surprise in all of this was that the designers were not aware of the radio effect on the wiring. When they looked at a configuration that the AI had identified as Oscillator:YES, they had to investigate why something that normally should not have worked was working like that. The AI certainly did not tell them "radio" in any fashion, all it said was: configuration:xyz is Oscillator:YES. It was the minds of the designers that discovered "radio" was what worked.
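The blind search described above can be sketched in a few lines (all names here are hypothetical stand-ins; the real experiment evolved FPGA configurations against a physical measurement). The point is that the loop only ever sees a score, never "radio".

```python
import random

# Hypothetical sketch of the blind search: flip one switch at a time and
# keep any configuration that scores at least as well on the single
# measurement. The loop never knows WHY a configuration scores well.

def oscillation_score(config):
    # stand-in for the physical measurement; in the real experiment the
    # score came from hardware, radio pickup and all
    return sum(config) / len(config)

def search(n_switches=16, steps=200, seed=0):
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(n_switches)]
    for _ in range(steps):
        trial = list(best)
        trial[rng.randrange(n_switches)] ^= 1        # flip one switch
        if oscillation_score(trial) >= oscillation_score(best):
            best = trial                             # keep the improvement
    return best

print(oscillation_score(search()))  # climbs toward 1.0; the loop never learns why
```

Whatever physical quirk makes `oscillation_score` report a high number, the program exploits it identically; discovering the quirk is left entirely to the humans.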
The AI or circuit didn't figure anything out. So....
the program itself created an algorithm to create an antennae
No, it didn't. The antenna was there already, before the AI did anything. The AI made use of an alternating voltage, but it was designed to do so. That an antenna caused that alternation was something the AI never knew, and the designers did not realize it either until they investigated.
but the program figured this out, made it a "decent enough" antennae and ran with it - all without the programmers having any idea at all that it could possibly go in this direction - and the programmer STILL can't figure out how the AI made these connections and moved forward.
The program detected a voltage oscillation, which it was designed to look for and react to. That the oscillation was caused by an antenna was invisible to the program. Whether or not the programmers had any idea it could go this direction is irrelevant. That the program could modify switches to further affect voltage oscillations WAS an idea the designers had, and that is what the program did to the voltages caused by the 'radio' effect. The programmers of course DID figure out how it all worked; otherwise we wouldn't be hearing about it.
We have all the code for the AI available to us - and we STILL can't read it's mind!
Overstate things much? This is BS. A computer and/or computer program is not a mind.
You can muddle definitions all you want
"Iterative based learning" and "brute force processing" are basically two extremely opposite programming methods
You're the one muddling definitions.
Iteration basically means a repetition of a process. When it comes to programming it is basically a self-contained process that repeats itself until some variable causes a break in the repetition.
Brute force is basically the same thing, just said in a different manner: you repeat a process multiple times, with some slight change in the input (one variable), until you get a desired output (another variable) that stops the repetitive process.
You are just hung up on whatever meaning you attach to "learning" in your "iterative based learning" phrase.
"Brute force" is used when the algorithm is completely understood - but the act of "doing the calculations" would take too long for a human.
"Iterative based learning" is used when the algorithm is completely unknown (even unknown if an algorithm exists or not) - a program is written (AI) to "create algorithms" and try them out.
The first part is all true.
The second part is just crap. "Program" and "Algorithm" are pretty much synonymous. If you want to quibble about the definitions, then an "algorithm" is the set of rules to be followed in calculations, and that at the least would be in the program given to a computer. So to say that the algorithm is unknown is foolish. Even if there are iterations and variables allowed in the program that can change what is considered the "algorithm", it is all still within the bounds of the program and is KNOWN by the designers.
Sometimes many are found and some of those are new information that humans didn't understand before: Like the ability to beat the existing "best chess" computer or the ability to create an oscillator with transistors by making a radio with an antennae.
-there is the possibility here for the program to teach humans an algorithm they were previously unaware of, and the answer may or may not be identified in the end.
You are confusing the algorithm with what the algorithm is designed to find: some output. The algorithm is the set of processing rules. Those rules are what the designers decided on, and they stay within the bounds of the program. The result of running that algorithm, the OUTPUT, might be something the designers have never seen before....... but finding that output is exactly what the algorithm was designed for.
It is not the algorithm that teaches humans. It is the result of the algorithm, the output, that can teach humans, and what it teaches them is entirely outside what the algorithm does. In the oscillator experiment, the algorithm was designed to find an oscillator given the hardware provided. That one of those oscillators came from the effect of radio waves was not something recognized by the program; all it saw was oscillator:yes. It was the humans who discovered that a "radio" was involved. In the chess program, the algorithm was designed to play chess and output a winning strategy. That the result was an unusual style of play was an observation made not by the program but by an independent human mind, outside any involvement of the program. Thus it is always the OUTPUT that teaches humans, and only after humans reflect on it. The computer NEVER does that.
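The algorithm-versus-output distinction fits in one toy example (again hypothetical, not from either experiment): the method below is completely transparent to whoever wrote it, yet running it can still teach the author something.

```python
# The algorithm -- trial division -- is fully known before it ever runs.
# The OUTPUT is what the author learns from running it.

def factor_pairs(n):
    return [(d, n // d) for d in range(1, n + 1)
            if n % d == 0 and d <= n // d]

print(factor_pairs(2021))  # -> [(1, 2021), (43, 47)]
```

The programmer may be surprised to learn that 2021 = 43 x 47, but nobody would say the program "created an algorithm" the programmer cannot understand.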

This message is a reply to:
 Message 29 by Stile, posted 03-11-2020 8:53 AM Stile has replied

Replies to this message:
 Message 31 by Stile, posted 03-17-2020 3:54 PM WookieeB has replied


Message 32 of 33 (874143)
03-25-2020 6:46 PM
Reply to: Message 31 by Stile
03-17-2020 3:54 PM

The result, and the facts, are the same.
I suppose they would seem so to you, since you are making up your own facts.
With iterative-based learning ("AI") programming - the program/algorithm is capable of creating additional/new/more algorithms, and it's possible that these can be taught to the human programmers who were not aware they were possible until the AI did it and showed them.
Except that is not what happened with either the chess program or the oscillator program.
For the chess app, there is no report that anyone was surprised at what the program did. It accomplished exactly what it was programmed to do, and the programmers know what process (algorithm) was going on with it. The results may have been a surprise to some, but coming up with different results than any other program was the point of the exercise.
For the oscillator app, again, the developers didn't express any surprise at the process. They expressed surprise at some of the results, including the 'radio' version, but how the algorithm settled on that solution is understood.
Name me any algorithm that any program has developed on its own that is not understood by the programmers!
It is not always known what the created algorithms will be - sometimes the AI can teach algorithms to the programmers, and sometimes the algorithms used by the program are unteachable to the programmers - (the programmers can't figure them out) - and the solution still works.
Basically the same statement as above, but please: where has an algorithm ever been unteachable to the programmers? It has never happened.
Proven by the chess AI - that can beat the best "normal" program (which has beaten the best human players.)
With your definitions, where are you getting that it beat a "normal" program? The chess apps that AlphaZero beat were also considered chess AI programs. You're equivocating on your definitions.
Proven by the oscillator-building-AI - that can create an oscillator using an algorithm that is still not understood by the programmers.
Sorry dude. The programmers know the algorithm perfectly well. You should look up their paper. If you're going off the popular article, you should reconsider it; such articles routinely embellish the information.
Proven by any other AI doing amazing things - such is the power of "creating algorithms."

This message is a reply to:
 Message 31 by Stile, posted 03-17-2020 3:54 PM Stile has replied

Replies to this message:
 Message 33 by Stile, posted 03-26-2020 12:54 PM WookieeB has not replied


Copyright 2001-2023 by EvC Forum, All Rights Reserved

™ Version 4.2
Innovative software from Qwixotic © 2024