

Understanding through Discussion




Thread Details

Author Topic:   Generating information in a neural network playing chess
Stile
Member
Posts: 4295
From: Ontario, Canada
Joined: 12-02-2004


(1)
Message 21 of 33 (872455)
02-27-2020 4:32 PM
Reply to: Message 15 by WookieeB
02-25-2020 4:58 PM


Re: Evolved algorithms
I don't think you see the difference.
Let's try using your example:
For example, I could develop a computer program to take the addresses of the roughly 327 million people in the USA and then filter out those living in Arizona. Even before I gave it an initial data set (the 327m people), the algorithm is created. Now I feed the program the initial data set and it spits output of about 7.2 million names.
Normal programming, as you understand it:
By your logic it was the computer program itself that developed how to get a list of Arizona dwellers. That is silly
And you are correct, for normal programming.
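For contrast, that "normal programming" version of the example can be sketched in a few lines (the data layout and field names here are invented for illustration):

```python
# "Normal programming": the filtering rule is fully written by the
# programmer before any data is seen; the computer only runs it faster.
def arizona_residents(people):
    """Return everyone whose address says Arizona."""
    return [p for p in people if p["state"] == "AZ"]

# Tiny stand-in for the 327-million-person data set:
people = [
    {"name": "Alice", "state": "AZ"},
    {"name": "Bob", "state": "CA"},
    {"name": "Carol", "state": "AZ"},
]

print([p["name"] for p in arizona_residents(people)])  # ['Alice', 'Carol']
```

Nothing here teaches the programmer anything - the method was known in full before the program ever ran.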
AI programming, however, is not normal programming.
Here is an example of AI programming in line with your example:
-Feed in the addresses of the 327 million people in the USA.
-Feed in moving rates (selling your house, buying a new one) for the last 50 years.
-Write an AI algorithm to take the first year's data and predict the second - look up the answer and adjust its own algorithm.
-Then take the first two years' data and predict the third - look up the answer and adjust its own algorithm.
-Then take the first three years' data and predict the fourth - look up the answer and adjust its own algorithm.
-Ask any person to predict the moving rate for next year; especially those who are our "current experts" on it.
-Ask the AI program to predict the moving rate for next year.
-When next year comes, the AI program had a "closer/better" prediction than any other person.
-Upon inspection of the AI program, it used a method to predict the moving rate that no person has ever thought of using (possibly because it's too complicated, and/or possibly because no one thought it was applicable).
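The loop above can be sketched as code. This is a minimal, hypothetical illustration: the yearly rates are invented, and the "model" is a single adjustable parameter nudged by its own prediction error each year - real AI systems adjust millions of parameters by the same basic scheme.

```python
# Hypothetical yearly moving rates (% of households), one per year:
rates = [11.2, 11.0, 10.8, 10.1, 9.8, 9.6, 9.3]

weight = 0.0  # the model's one adjustable parameter (estimated yearly drift)

for year in range(1, len(rates)):
    prediction = rates[year - 1] + weight  # predict from the prior data
    error = rates[year] - prediction       # "look up the answer"
    weight += 0.5 * error                  # adjust its own algorithm

# Predict the year that hasn't happened yet:
next_year_prediction = rates[-1] + weight
print(round(next_year_prediction, 2))
```

The programmer wrote the adjustment rule, but never wrote the final value of `weight` - the program arrived at that from the data.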
The AI program "invented" a way to predict the moving rate that didn't exist before.
Yes, people invented the AI - but that's missing the point.
Just like the Chess AI "invented" a way to play chess that's better than any person or any "normally programmed" chess computer.
This way to play chess didn't exist before. No one knew about it - certainly not the programmers.
Normal programming - people invent an algorithm knowing how it will solve a problem - a computer can just do it faster.
AI programming - people invent an algorithm without knowing if there's a better solution or not - the computer comes up with the "how to solve it," and it can be better than anything people have ever thought of before.
In both scenarios: People invent the computer and the programming to get the answer.
But if you don't see the difference: people knowing the "how to" beforehand - just being unable to practically do it, or people learning the "how to" from the computer program... then you're just being silly.
Edited by Stile, : Clarified last sentence to better match the idea attempting to be explained.

This message is a reply to:
 Message 15 by WookieeB, posted 02-25-2020 4:58 PM WookieeB has replied

Replies to this message:
 Message 22 by WookieeB, posted 02-29-2020 3:37 PM Stile has replied

  
Stile
Member
Posts: 4295
From: Ontario, Canada
Joined: 12-02-2004


Message 23 of 33 (872786)
03-04-2020 12:09 PM
Reply to: Message 22 by WookieeB
02-29-2020 3:37 PM


Re: Evolved algorithms
WookieeB writes:
Now comes the trick...
Stile writes:
-When next year comes, the AI program had a "closer/better" prediction than any other person
No quantitative description is given here. Just "closer/better". I'm not sure why this should be significant. After all, isn't the goal of the program to accurately predict results faster or better than humans are normally capable of? What about the prior year's predictions, how do CYCLEs 1-3 results compare? If the results were not "better" was it not considered AI yet?
That is, indeed, "the trick" you are missing.
It is a fact that AI programming can produce better results than humans.
-they can play chess better (by winning more games)
-they can make predictions better (by being more accurate once the real data is obtained)
If the result is not "better than humans" - the programming is still considered AI, it's just also considered a failure - the AI did not work out something better than humans have already identified.
The thing is - this doesn't always happen.
Sometimes the AI does come up with a result that is better than humans.
Like winning at chess better than any other known method.
Or making predictions better as in my example.
And now we have the core of your argument! It is essentially summed up in one word: method. Unsurprisingly, there is no detail given of what this "method" would be.
That's the thing - sometimes the "method" cannot be identified, because the AI created the method and we don't even understand it.
Sometimes the "method" can be identified - and we learn that the AI found an interesting way to solve a problem we never knew about.
Check this out:
quote:
Another search process, tasked with creating an oscillator, was deprived of a seemingly even more indispensable component, the capacitor. When the algorithm presented its successful solution, the researchers examined it and at first concluded that it should not work. Upon more careful examination, they discovered that the algorithm had, MacGyver-like, reconfigured its sensor-less motherboard into a makeshift radio receiver, using the printed circuit board tracks as an aerial to pick up signals generated by personal computers that happened to be situated nearby in the laboratory. The circuit amplified this signal to produce the desired oscillating output.
Humans: "You shouldn't be able to create an oscillator without a capacitor."
AI program: "You can create an oscillator without a capacitor if you use radio signals."
-AI is teaching humans
-AI is teaching the programmers of the AI
Is it possible for humans to know/learn this? Of course. Just as it is possible for humans to learn/win at chess as the AI can.
The point is, though - that the AI did it first, and taught the humans - even the human programmers didn't know about the method the AI created.
If the people you are referring to are the programmers, the first part is true. But the second part is not. The programmers knew the how to - they programmed the how to in their code. They may not be able to specify the results/output from any specific moment in time because they cannot process the data as fast as a computer. But that doesn't mean that the computer is acting any way other than what they programmed it to do.
There are things that AI can teach people that no person knew about (not even the AI programmers) before the AI figured it out.
The point isn't "is the AI doing something it isn't programmed to do?"
The point is "are humans teaching the program? (normal programming) or is the program teaching humans? (AI programming.)"
If you can't identify that difference, or don't think it's significant - you're just being silly.

This message is a reply to:
 Message 22 by WookieeB, posted 02-29-2020 3:37 PM WookieeB has replied

Replies to this message:
 Message 24 by WookieeB, posted 03-05-2020 2:23 PM Stile has replied

  
Stile
Member
Posts: 4295
From: Ontario, Canada
Joined: 12-02-2004


(2)
Message 25 of 33 (872848)
03-05-2020 3:16 PM
Reply to: Message 24 by WookieeB
03-05-2020 2:23 PM


Re: Evolved algorithms
WookieeB writes:
Stile writes:
Humans: "You shouldn't be able to create an oscillator without a capacitor."
AI program: "You can create an oscillator without a capacitor if you use radio signals."
-AI is teaching humans
-AI is teaching the programmers of the AI
I'd like to see the reference for this story.
Page 153 of the book Superintelligence: Paths, Dangers, Strategies by Nick Bostrom.
The book just describes the scenario, but it includes a footnote to look up the original:
Bird and Layzell (2002) and Thompson (1997); also Yaeger (1994, 13-14)
If you look up Bird and Layzell, you can get to this page:
Radio Emerges from the Electronic Soup
Here's another quote from that article:
quote:
To pick up a radio signal you need other elements such as an antenna. After exhaustive testing they found that a long track in the circuit board had functioned as the antenna. But how the circuit figured out that this would work is not known.
The AI was programmed to make an oscillator out of transistors.
No one told it to make an antenna.
No one told it what an antenna is, or how an antenna works.
The AI figured out that, if you only have transistors (no capacitor) - the best way to make an oscillator is to make a radio.
Bird and Layzell didn't know that.
No one did.
Until the AI figured it out and showed them.
The output might be something a human doesn't understand, but the method getting to that output is understood - a programmer provided it.
This is the point of AI - allowing the program to develop the method, which we sometimes don't understand.
Quoted again, this time bolded:
quote:
But how the circuit figured out that this would work is not known.
Sure "the method getting to that output is understood" - it's iterative based learning they programmed into the AI.
-no one cares about this point.
What you keep missing is "sometimes we can't figure out the method the AI created during the iterative based learning method and that method is better than anything humans were able to previously identify."
-this is "AI creating new information" that humans did not create beforehand
You can muddle definitions all you want - this idea isn't going away.
You're creating your own definition here for normal vs AI programming. But it seems circular reasoning to me. Cause, again, would a program not be considered AI if it didn't "teach" something to humans? You seemed to indicate "No" to that question in your post.
No.
AI is, in a few words, "iterative based learning."
This can sometimes develop things humans don't understand to solve problems humans can't solve.
-That's the interesting part that everyone talks about
This can sometimes develop things humans already understand or things not-as-good as solutions already discovered by humans.
-This isn't interesting, so no one talks about it, but it's still AI.
The fact you can't make go away is that it's possible for AI to develop ideas that humans didn't program in (because "iterative based learning" creates ideas).
Many times these ideas are useless or worse.
Sometimes they're better.
Sometimes they're new.
Sometimes they're new and better.
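Here's a toy sketch of that "iterative based learning creates ideas" claim (everything in it - the operations, the target, the search loop - is invented for illustration): the programmer writes only a generic try-score-keep loop over tiny "algorithms," and whatever sequence of operations comes out was found by the loop, not written in by hand.

```python
import random

random.seed(1)

# Tiny "algorithms" are sequences of these operations, applied to 0:
OPS = {"+3": lambda x: x + 3, "*2": lambda x: x * 2, "-1": lambda x: x - 1}

def run(seq, x=0):
    for name in seq:
        x = OPS[name](x)
    return x

target = 21  # reachable, e.g. via *2, +3, +3, +3, *2, +3

best = [random.choice(list(OPS)) for _ in range(6)]
start_error = abs(run(best) - target)

for _ in range(3000):
    trial = best[:]
    trial[random.randrange(6)] = random.choice(list(OPS))  # mutate one step
    if abs(run(trial) - target) <= abs(run(best) - target):
        best = trial  # keep anything at least as good

print(best, run(best))  # the final sequence was discovered, not hand-coded
```

The loop never gets worse, and usually lands on a working sequence - one the programmer did not specify anywhere in the code.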

This message is a reply to:
 Message 24 by WookieeB, posted 03-05-2020 2:23 PM WookieeB has replied

Replies to this message:
 Message 27 by WookieeB, posted 03-10-2020 5:24 PM Stile has replied

  
Stile
Member
Posts: 4295
From: Ontario, Canada
Joined: 12-02-2004


(1)
Message 29 of 33 (873192)
03-11-2020 8:53 AM
Reply to: Message 27 by WookieeB
03-10-2020 5:24 PM


Re: Evolved algorithms
WookieeB writes:
And the AI didn't make an antenna. The designers did, albeit without initially knowing it.
Nope - you're confusing two different ideas again.
As I said before:
Sure "the (general) method getting to that output is understood" - it's iterative based learning they programmed into the AI.
-no one cares about this point.
-this is how you're saying "the designers made the AI make the antenna without knowing about it."
-again - this is accepted, but it's irrelevant as it's silly
-again - no one cares about this point because it's too silly to try and make it seem significant.
What you keep missing is "sometimes we can't figure out the (specific) method the AI created during the iterative based learning method and that method is better than anything humans were able to previously identify."
-this is "AI creating new information" that humans did not create beforehand
Again, from the article:
quote:
But how the circuit figured out that this would work is not known.
-this is the really cool and interesting point
-the program itself created an algorithm to create an antenna; the AI didn't even know an antenna would specifically help (as you said - there were multiple other solutions that didn't involve an antenna at all.)
-but the program figured this out, made it a "decent enough" antenna and ran with it - all without the programmers having any idea at all that it could possibly go in this direction - and the programmers STILL can't figure out how the AI made these connections and moved forward. Do you understand this? We have all the code for the AI available to us - and we STILL can't read its mind!
You can muddle definitions all you want - this idea isn't going away.
And another statement for "iterative based learning" is brute force processing.
This is not true at all.
Perhaps you simply do not understand programming.
"Iterative based learning" and "brute force processing" are essentially opposite programming methods... like "left" and "right" on the political spectrum.
If you think they are the same thing - I then understand why you keep missing the other key points.
"Brute force" is used when the algorithm is completely understood - but the act of "doing the calculations" would take too long for a human.
Set a computer to do the "calculations" - and it spits out the answer.
-there is no teaching of an algorithm from program to humans here, only teaching of the answer
"Iterative based learning" is used when the algorithm is completely unknown (it may even be unknown whether an algorithm exists at all) - a program (AI) is written to "create algorithms" and try them out. Sometimes none can be found. Sometimes many are found and all were previously understood anyway. Sometimes many are found and some of those are new information that humans didn't understand before: like the ability to beat the existing "best chess" computer, or the ability to create an oscillator from transistors by making a radio with an antenna.
-there is the possibility here for the program to teach humans an algorithm they were previously unaware of, and the answer may or may not be identified in the end.
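The contrast can be made concrete with a toy problem (all details invented for illustration): both approaches seek an 8-bit string that maximizes a score, but brute force enumerates every candidate with a fully specified method, while the iterative learner only gets a generic flip-and-keep loop and arrives at the answer by trial.

```python
import itertools
import random

def score(bits):
    # Toy objective: bit i is worth i + 1 points when set.
    return sum(b * (i + 1) for i, b in enumerate(bits))

# Brute force: the method is completely understood up front; the
# computer just checks all 256 candidates faster than a person could.
brute_best = max(itertools.product([0, 1], repeat=8), key=score)

# Iterative learning: only the try/score/keep loop is specified;
# improvements are discovered by trial, not enumerated.
random.seed(0)
guess = [random.randint(0, 1) for _ in range(8)]
for _ in range(2000):
    trial = guess[:]
    trial[random.randrange(8)] ^= 1  # flip one random bit
    if score(trial) >= score(guess):
        guess = trial                # keep it if no worse

print(list(brute_best), guess)  # both converge on all ones (score 36)
```

On a problem this small the two are interchangeable; the difference matters when the search space is too large to enumerate and no one knows the method in advance.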

This message is a reply to:
 Message 27 by WookieeB, posted 03-10-2020 5:24 PM WookieeB has replied

Replies to this message:
 Message 30 by WookieeB, posted 03-17-2020 3:36 PM Stile has replied

  
Stile
Member
Posts: 4295
From: Ontario, Canada
Joined: 12-02-2004


(1)
Message 31 of 33 (873612)
03-17-2020 3:54 PM
Reply to: Message 30 by WookieeB
03-17-2020 3:36 PM


WookieeB writes:
The second part is just crap. "Program" and "Algorithm" are pretty much synonymous. If you want to quibble about the definitions, then an "algorithm" is the set of rules to be followed in calculations, and that at the least would be in the program given to a computer. So to say that the algorithm is unknown is foolish. Even if there are iterations and variables allowed in the program that can change what is considered the "algorithm", it is all still within the bounds of the program and is KNOWN by the designers.
You can use whatever language you want, it doesn't matter.
The result, and the facts, are the same.
With brute-force ("normal") programming - the entire program/algorithm is written/known by the programmers and nothing is ever taught to the programmers except for the answer itself.
It is always known that the answer is "possible" - it's just not known what the answer actually is.
With iterative-based learning ("AI") programming - the program/algorithm is capable of creating additional/new/more algorithms, and it's possible that these can be taught to the human programmers who were not aware they were possible until the AI did it and showed them.
It is not always known if the answer is possible or not - sometimes the AI can teach the programmers that the answer is possible.
It is not always known what the created algorithms will be - sometimes the AI can teach algorithms to the programmers, and sometimes the algorithms used by the program are unteachable to the programmers - (the programmers can't figure them out) - and the solution still works.
Proven by the chess AI - that can beat the best "normal" program (which has beaten the best human players.)
Proven by the oscillator-building-AI - that can create an oscillator using an algorithm that is still not understood by the programmers.
Proven by any other AI doing amazing things - such is the power of "creating algorithms."
You can't make this go away.
Edited by Stile, : Just spelling stuffs.

This message is a reply to:
 Message 30 by WookieeB, posted 03-17-2020 3:36 PM WookieeB has replied

Replies to this message:
 Message 32 by WookieeB, posted 03-25-2020 6:46 PM Stile has replied

  
Stile
Member
Posts: 4295
From: Ontario, Canada
Joined: 12-02-2004


Message 33 of 33 (874200)
03-26-2020 12:54 PM
Reply to: Message 32 by WookieeB
03-25-2020 6:46 PM


Have fun!

This message is a reply to:
 Message 32 by WookieeB, posted 03-25-2020 6:46 PM WookieeB has not replied

  


Copyright 2001-2023 by EvC Forum, All Rights Reserved
