  
Author Topic:   Percy's Alife Project
Rei
Member (Idle past 7264 days)
Posts: 1546
From: Iowa City, IA
Joined: 09-03-2003


Message 46 of 63 (60695)
10-13-2003 6:08 AM
Reply to: Message 44 by Percy
10-10-2003 9:36 AM


Re: On Space
For Percy:
I think you're really overreaching for your first alife algorithm, but I'll give you a couple of tips anyway:
1. Go with a sparse memory structure. Store coordinates for each particle in memory. Adjacency tests are easy if you set things up properly: break the world into octants in a hash table, those octants into octants, and so on, down to a specified depth. As an example, if your depth is only 1, you'd have 8 hash table entries (one for each octant of the 3D world), and each one would have a list of all elements inside it. When an element changes octants, you need to update the lists, of course. At depth 2, you'd have 64 hash table entries, for comparison. You'll probably find it optimal to choose a depth where there are around 2-10 elements per octant (depending on how you do your algorithms). It's roughly an O(log N) algorithm.
2. Realistic temperature and fluid kinematics calculations can be very hard - do you really want to go there?
3. If you do, I would recommend modelling them on a far, far lower precision grid than the locations of your "objects" can be at - perhaps 40x40x40. I wouldn't, however, recommend having them be at arbitrary locations; that will make it too complicated.
4. If you want this to be a massive simulation, you're going to want to design it for grid computing.
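The octant hashing in tip 1 can be sketched as follows, assuming a power-of-two world size and integer coordinates (all names here are mine, purely illustrative):

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hierarchical octant key for a cubic world whose side is a power of
// two. At depth d the world is cut into 8^d octants; elements sharing
// a key are candidates for adjacency tests. depth must not exceed
// log2(worldSize).
uint64_t octantKey(uint32_t x, uint32_t y, uint32_t z,
                   uint32_t worldSize, int depth) {
    uint64_t key = 0;
    uint32_t half = worldSize / 2;
    for (int d = 0; d < depth; ++d) {
        // One bit per axis selects the child octant at this level.
        uint64_t oct = (uint64_t(x >= half) << 2)
                     | (uint64_t(y >= half) << 1)
                     |  uint64_t(z >= half);
        key = (key << 3) | oct;
        x %= half; y %= half; z %= half;   // descend into that octant
        half /= 2;
    }
    return key;
}

// The sparse structure itself: element ids bucketed by octant key.
using OctantTable = std::unordered_map<uint64_t, std::vector<int>>;
```

An element's key changes only when it crosses an octant boundary, so list updates are cheap; per the tip above, `depth` would be tuned so buckets hold roughly 2-10 elements.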
I hate to say it, but there is absolutely no simple solution for what you're talking about. Even a 3D Conway's Game of Life, using only 1 bit per cell and then gzipped at 10x compression, on your million by million by million world, would take 12.5 petabytes of memory. You could never even dream of modelling realistic grid-based kinematics in such a world on a modern computer.
I think as you think about it more, you'll probably decide on a simpler design. Remember: simpler designs run faster, too!
------------------
"Illuminant light,
illuminate me."

This message is a reply to:
 Message 44 by Percy, posted 10-10-2003 9:36 AM Percy has not replied

  
Peter
Member (Idle past 1730 days)
Posts: 2161
From: Cambridgeshire, UK.
Joined: 02-05-2002


Message 47 of 63 (60698)
10-13-2003 7:23 AM
Reply to: Message 43 by Dr Jack
10-10-2003 9:18 AM


I see the problem .... would it help if you re-ordered
your object lists based somehow on coordinates though?
Then you only have to check for proximity rather than
adjacency, and the re-ordering goes into the movement overheads.
Moving on an object-by-object basis would require at most
swapping two array (say) locations per movement.
...not sure you could do it with a 3D grid ... hmmm ...
Maybe still too much though, depending on data structures
(and language constraints).

This message is a reply to:
 Message 43 by Dr Jack, posted 10-10-2003 9:18 AM Dr Jack has replied

Replies to this message:
 Message 48 by Dr Jack, posted 10-13-2003 8:54 AM Peter has replied

  
Dr Jack
Member (Idle past 126 days)
Posts: 3514
From: Immigrant in the land of Deutsch
Joined: 07-14-2003


Message 48 of 63 (60709)
10-13-2003 8:54 AM
Reply to: Message 47 by Peter
10-13-2003 7:23 AM


I see the problem .... would it help if you re-ordered
your object lists based somehow on coordinates though?
There is no natural ordering on 3-space (or even 2-space). So you'd need to construct a search tree - probably some kind of octree, or binary space partition - to keep things fast. This can be done easily enough, but it'd still be slower than an array-based system.
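One standard workaround (my addition, not something proposed in the thread) is a Morton, or Z-order, key: interleave the coordinate bits so that sorting on the single key tends to keep spatial neighbours adjacent in the list, giving octree-like locality without an explicit tree:

```cpp
#include <cassert>
#include <cstdint>

// Spread the low 21 bits of v so there are two zero bits between each
// original bit (standard 3D Morton-code bit spreading).
uint64_t spread3(uint64_t v) {
    v &= 0x1fffff;                               // keep 21 bits
    v = (v | v << 32) & 0x001f00000000ffffULL;
    v = (v | v << 16) & 0x001f0000ff0000ffULL;
    v = (v | v << 8)  & 0x100f00f00f00f00fULL;
    v = (v | v << 4)  & 0x10c30c30c30c30c3ULL;
    v = (v | v << 2)  & 0x1249249249249249ULL;
    return v;
}

// Interleave x, y, z into one sortable 64-bit key.
uint64_t mortonKey(uint32_t x, uint32_t y, uint32_t z) {
    return spread3(x) | spread3(y) << 1 | spread3(z) << 2;
}
```

Sorting objects by `mortonKey` gives the "ordering based somehow on coordinates" Peter asked about, at the cost that a few spatially close pairs still end up far apart in the list.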

This message is a reply to:
 Message 47 by Peter, posted 10-13-2003 7:23 AM Peter has replied

Replies to this message:
 Message 49 by Peter, posted 10-13-2003 10:30 AM Dr Jack has not replied

  
Peter
Member (Idle past 1730 days)
Posts: 2161
From: Cambridgeshire, UK.
Joined: 02-05-2002


Message 49 of 63 (60715)
10-13-2003 10:30 AM
Reply to: Message 48 by Dr Jack
10-13-2003 8:54 AM


You could narrow the number of objects to search,
e.g. by distance from a set point in space,
and while I see that it would always be slower, you
could have an 'infinite' universe in finite memory.
Depends which might be the more useful ....
Slower assuming that the time to check an empty cell
is negligible. If you have 10^18 cells and it takes
10^-8 secs per cell, that's 10^10 seconds (over 300 years)
per scan ... never really thought about this
before so I won't be offended if I'm way off in my thinking
[This message has been edited by Peter, 10-13-2003]

This message is a reply to:
 Message 48 by Dr Jack, posted 10-13-2003 8:54 AM Dr Jack has not replied

  
Percy
Member
Posts: 22937
From: New Hampshire
Joined: 12-23-2000
Member Rating: 6.9


Message 50 of 63 (60865)
10-14-2003 10:59 AM


I just wanted to let people know that I appreciate the responses and hope to reply soon. I seem to be in a busy period at both home and work, but though I haven't had time to respond I have been able to give a little thought to the feedback. I'm giving the cautions about being overly aggressive serious consideration, and I'm thinking about the design and implementation of the genetic algorithms and of sparse arrays.
One brief comment about performance. I've been associated with many projects where performance was a significant concern, and some are still revenue products. Experience says you can eke out a performance improvement of about an order of magnitude every 4 or 5 years. This isn't necessarily good news for Cellscape: because performance is a critical comparison criterion in the product area I work in, most of the effort goes into performance improvements, while for Cellscape we probably want to focus our effort elsewhere. For this reason it makes sense to make design decisions that keep performance as small an issue as possible while maintaining other design goals.
--Percy

Replies to this message:
 Message 52 by Peter, posted 10-16-2003 6:24 AM Percy has not replied

  
Void
Inactive Member


Message 51 of 63 (60870)
10-14-2003 12:14 PM
Reply to: Message 1 by Percy
10-06-2003 6:59 PM


-How will the genome of cells be represented?
-How will the genomes express themselves?
-What mutation functions will be used?
All of these questions are critical to the core of your alife program and should be addressed as soon as you have decided what your world will be - which you have.
The issues of genome representation and expression are the *most problematic* in evolving alife worlds. If you don't get them right, or make them too "simple", then evolution will not happen. Good examples of this are a number of my failed projects!
I attempted ambitious projects like this when I first got into alife and they all failed due to the genome representation and expression. May I suggest, in a friendly spirit, that you attempt a small simulation of this system first with a small 2D world before you move it into such a colossal 3D universe. The principles of your genetic model should be able to operate in 2D and 3D, and in small and large universes alike. Get it working in a small 2D world and it will work in a big 3D one. Basically, get the genetic algorithm working first is my suggestion.
Currently I am working on a relatively simple alife project to do with evolving structures of 2D plants (which don't move, so it's easier).
I will watch this topic eagerly and wish you the best of luck. Don't worry if this project runs out of steam; lots of mine have, and I just pick myself up, learn from the problems, and try a new one.

This message is a reply to:
 Message 1 by Percy, posted 10-06-2003 6:59 PM Percy has not replied

  
Peter
Member (Idle past 1730 days)
Posts: 2161
From: Cambridgeshire, UK.
Joined: 02-05-2002


Message 52 of 63 (61149)
10-16-2003 6:24 AM
Reply to: Message 50 by Percy
10-14-2003 10:59 AM


Just another thought on the environment model ---
do you need to wrap at each extremity of your cube or
should some be boundaries ?
Depends what you are modelling I guess.

This message is a reply to:
 Message 50 by Percy, posted 10-14-2003 10:59 AM Percy has not replied

  
Percy
Member
Posts: 22937
From: New Hampshire
Joined: 12-23-2000
Member Rating: 6.9


Message 53 of 63 (61417)
10-17-2003 7:46 PM


Hi, All!
I apologize that this email isn't a reply to anyone specifically. It touches on issues raised by many of the responses.

2D Versus 3D

Regarding 2D versus 3D, 2D is just 3D with Z=1. We should be able to design the system so that it works for Z=n, including when n=1. This allows us to debug in the simpler 2D environment and then move to 3D to add greater variability and complexity - or not if we find that 2D is sufficient for our needs.

Object Types

I'm leaning toward having a single C++ object type whose actual type is the sum of its algorithms and properties. Objects divide by fission into two objects. Both have the possibility of mutation (this is asexual reproduction, where there is really no such thing as parent and offspring).
An object's type is the sum of its mutational history. For example, if an object's type is "a", then if one of them mutates during fission then the new type would be "a.a". If "a" mutates again during a subsequent fission then the new type would be "a.b". If "a.b" mutates during fission then the new type would be "a.b.a".
A note about the type notation. The letters are unique at each level, just like the sections of a manual. For example, it's as if someone said to read section 1.1.3.1 - you know that each "1" is at a different level of organization. The same is true of my "a.a". The two a's are at different levels of organization and merely represent convenient unique names. I like the "a.a" form as opposed to the 1.1 form because I prefer representing object types as strings rather than variable-length integer arrays or, worse, linked lists of some class type.
The notation records the history of descent. Object "a.d.c.a" is descended from object "a.d.c". This allows us to easily tell the degree of relatedness of two objects. Object "a.d.c.a" and object "a.d.c.b" are descended from the same "a.d.c" object type. It might be helpful to think of types as species, or perhaps as subspecies, though I think this has the potential to be misleading since actual cells are comprised of many objects, and it is cell types that would be more analogous to species. Of course, a cell is merely the sum of its objects, and since each cell is a potentially unique combination of objects, each cell could be a unique cell type. Deciding when two cells are the same species could be difficult initially, but I expect that populations of cell species will eventually emerge.
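With this notation, degree of relatedness is just the length of the common dotted prefix of two type names. A small sketch (function names are mine, not a proposed API):

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Split a type name like "a.d.c.a" into its per-level labels.
std::vector<std::string> levels(const std::string& type) {
    std::vector<std::string> out;
    std::stringstream ss(type);
    std::string label;
    while (std::getline(ss, label, '.')) out.push_back(label);
    return out;
}

// Number of shared ancestral levels, e.g. "a.d.c.a" vs "a.d.c.b" -> 3,
// since both descend from the "a.d.c" object type.
int sharedAncestry(const std::string& t1, const std::string& t2) {
    std::vector<std::string> a = levels(t1), b = levels(t2);
    size_t i = 0;
    while (i < a.size() && i < b.size() && a[i] == b[i]) ++i;
    return static_cast<int>(i);
}
```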
This approach permits an efficiency. Since all objects of the same type are identical with respect to algorithms and properties, each object type's algorithms and properties can be represented centrally and therefore never need be represented more than once, not only saving space but reducing the possibility of cache misses and going virtual. Naturally each individual object possesses unique parameters, such as available energy, temperature, mass, etc.

Algorithms

Regarding algorithms, I've given some careful thought to this and I think I have some good ideas. Let me describe them and see what people think. What I describe here *does* represent a change, however, and so I begin by first explaining the reason for the change.
Our universe is a 3D array where each grid location is a cube. Each cube may be empty or may contain an object. Conceptually one object completely fills a single cube in our universe.
Our objects have to be able to sense other objects. While movement can be random, at least some movement must be directed. For example, we want to be able to move toward food. We can view our object's position as lying in the center location of a 3x3x3 array. We need to choose one of the 26 cubes adjacent to the center to move to, and we want to move to one which is adjacent to food (assume that none of these 26 cubes in our 3x3x3 array contains food).
An algorithm to do this could easily be too complex to be practical. For example, here's an algorithm that looks for food two locations away on the X-axis and then moves to a location directly or diagonally adjacent:
  if (ContainsFood(xLoc+2, yLoc, zLoc)) {
      if (Unoccupied(xLoc+1, yLoc, zLoc)) {
          MovXPos;
      } else if (Unoccupied(xLoc+1, yLoc+1, zLoc)) {
          MovXPos; MovYPos;
      } else if (Unoccupied(xLoc+1, yLoc-1, zLoc)) {
          MovXPos; MovYNeg;
      } else if (Unoccupied(xLoc+1, yLoc, zLoc+1)) {
          MovXPos; MovZPos;
      } else if (Unoccupied(xLoc+1, yLoc, zLoc-1)) {
          MovXPos; MovZNeg;
      } else if (Unoccupied(xLoc+1, yLoc+1, zLoc+1)) {
          MovXPos; MovYPos; MovZPos;
      } else if (Unoccupied(xLoc+1, yLoc+1, zLoc-1)) {
          MovXPos; MovYPos; MovZNeg;
      } else if (Unoccupied(xLoc+1, yLoc-1, zLoc+1)) {
          MovXPos; MovYNeg; MovZPos;
      } else if (Unoccupied(xLoc+1, yLoc-1, zLoc-1)) {
          MovXPos; MovYNeg; MovZNeg;
      }
  }
This is a big algorithm, and this is only for food two locations away on the X-axis. We need to check for food at all 98 cubes adjacent to the outside of our 3x3x3 cube, so we have to repeat this 97 more times.
There's also the problem of the check for unoccupied cubes - there's no particular imperative for that check to emerge through random mutation, and most times it won't, so there would need to be an overseer function checking the result of object functions to make sure they don't move objects into occupied squares, or displace objects over ridiculous distances in a single step.
Not only are there these problems, but we need such algorithms to *evolve* naturally from a much simpler starting point by random mutation. While this would happen eventually, we want something that simulates before the next millennium. Our operations are too primitive. They're analogous to simulating at the sub-atomic instead of the chemical level. We need higher level primitive operations.
What would these primitive operations be? For moving toward food we could have this:
MoveTowardFood;
It turns out that cell wall objects are the most complex in our universe in the way they move because they have to form themselves into cell walls. We can't have a cell wall object move off towards food while ignoring that it fills a critical role as part of the cell's containment vessel. Therefore what we really want for movement is this:
Move;
In other words, I'm proposing that the very complex algorithm represented above be reduced to just a single function call: Move. Calling this function moves the object one grid unit, but in which direction? The Cellscape software implements the Move function, but what criteria should it use to decide where to move the object?
For the purpose of making such decisions I introduce the concept of properties. Properties are set to values that allow functions like Move to make their decisions. Some proposed properties:
    foodAdj     = 10;  // Add 10 to "goodness" of each cube adjacent to food
    energyAdj   = 20;  // Add 20 to "goodness" of each cube adjacent to energy
    cellwallAdj = 10;  // Add 10 to "goodness" of each cube adjacent to a cell wall object
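A sketch of how Move might combine such properties into a "goodness" score for each candidate cube; the world-query predicates and all names here are placeholders for whatever Cellscape eventually provides, not a committed design:

```cpp
#include <cassert>
#include <functional>

struct Cube { int x, y, z; };

// Score each of the 26 neighboring cubes by summing adjacency bonuses,
// then return the best unoccupied one. The predicates stand in for
// world queries the real system would supply.
Cube bestMove(Cube from,
              std::function<bool(Cube)> unoccupied,
              std::function<bool(Cube)> adjacentToFood,
              int foodAdj) {
    Cube best = from;
    int bestScore = -1;
    for (int dx = -1; dx <= 1; ++dx)
        for (int dy = -1; dy <= 1; ++dy)
            for (int dz = -1; dz <= 1; ++dz) {
                if (dx == 0 && dy == 0 && dz == 0) continue;
                Cube c{from.x + dx, from.y + dy, from.z + dz};
                if (!unoccupied(c)) continue;       // overseer rule built in
                int score = adjacentToFood(c) ? foodAdj : 0;
                if (score > bestScore) { bestScore = score; best = c; }
            }
    return best;  // == from if every neighbor is occupied
}
```

Further properties (energyAdj, cellwallAdj, per-face terms) would simply add more terms to `score`, which is the point: mutation can adjust numeric weights far more gracefully than it can rewrite branching code.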
And we need to introduce some complexity to perturb these decisions further. There are good reasons for this. For example, we want cell wall objects to move toward food when the food is outside the cell, but to ignore the food when it is inside the cell (the food will be consumed by metabolic objects). In the real world the nature of the outer cell wall is different from that of the inner cell wall, and we need the equivalent in Cellscape. Our cell wall objects must be asymmetric.
So we introduce the concept of orientation. For a cube there are 24 unique orientations. Each of the six faces of a cube can have different properties. Now we're starting to get into some of the complex relationships that I was hoping would accompany 3D. Proposals for those properties:
    Sensation - ability to detect what is nearby.
Adherence - ability to stick to what it comes in contact with
(good for cell wall, but also hints of eventual multicellularity?)
So we now see that objects have three primary components:
  1. Algorithms that consist of calls to relatively high level functions like Move.
  2. Properties, where properties can be attached to the entire object, or just to certain faces of the object. Properties have fixed values across all objects of the same type. A better term might be global properties.
  3. Parameters, where just as with properties, parameters can be attached to the entire object, or just to certain faces of the object. Parameters can have different values for each object of the same type. A better term might be local properties.
Since each object possesses a type, we can define food (indeed, any property or parameter) in terms of object type, e.g.:
    Food = "b.d.a.b";
FoodAdj = 10;
This means that the "goodness" of any cube is incremented by 10 by the Move function if it contains an object of type "b.d.a.b". However, objects of type "b.d.a" and "b.d.a.b.*" would also cause an increment in "goodness", but adjusted downward by some amount since the match isn't exact. This is like a wolf wanting to prey on deer, but a cow will do.
Properties and parameters can, of course, mutate. For example, say the value of Food mutates:
    Food = "b.e.a.b";
FoodAdj = 10;
There may be no such object type as "b.e.a.b", which would make this a bad mutation. Or maybe "b.e.a.b" is a very common object type, making this a good mutation (unless "b.e.a.b" is a feared predator object which will turn the tables and make you food).

Emergent Properties and Parameters

Both properties and parameters can be thought of as name/value pairs. The only difference between them is that property values are global across all objects of a type, while parameter values are potentially unique for each object of a type. Clearly the above discussion assumes a number of predetermined properties and parameters, but what about allowing properties and parameters not anticipated by the programmers, ones with no real-world analogues except by accident?
I don't have any ideas on this to propose right now, but I throw it out there for comment.
--Percy
[Fix too-long line, grammar, and garbled presentation of ideas on algorithms. --Percy]
[This message has been edited by Percipient, 10-18-2003]
[This message has been edited by Percipient, 10-18-2003]

Replies to this message:
 Message 54 by Rei, posted 10-17-2003 8:37 PM Percy has replied
 Message 55 by TheoMorphic, posted 10-18-2003 2:05 AM Percy has replied

  
Rei
Member (Idle past 7264 days)
Posts: 1546
From: Iowa City, IA
Joined: 09-03-2003


Message 54 of 63 (61426)
10-17-2003 8:37 PM
Reply to: Message 53 by Percy
10-17-2003 7:46 PM


quote:
Regarding 2D versus 3D, 2D is just 3D with Z=1. We should be able to design the system so that it works for Z=n, including when n=1. This allows us to debug in the simpler 2D environment and then move to 3D to add greater variability and complexity - or not if we find that 2D is sufficient for our needs.
I... wouldn't recommend that. Virtually everything will be a special case for z=1 (actually, typical worlds are either zero-indexed, or indexed over a wide numeric range in both directions. For non-floating point worlds, I prefer zero-indexed, as you can use inherent wraparound of your variables).
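Rei's "inherent wraparound" remark can be illustrated like this, assuming a power-of-two world size so that masking does the wrapping (my example, not code from the thread):

```cpp
#include <cassert>
#include <cstdint>

// Toroidal wraparound for a power-of-two world size: stepping off one
// edge re-enters from the opposite edge, branch-free, via masking.
// (With a 256-wide world, a plain uint8_t coordinate wraps by itself.)
uint32_t wrap(uint32_t coord, int32_t delta, uint32_t worldSize) {
    return (coord + static_cast<uint32_t>(delta)) & (worldSize - 1);
}
```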
quote:
An object's type is the sum of its mutational history. For example, if an object's type is "a", then if one of them mutates during fission then the new type would be "a.a". If "a" mutates again during a subsequent fission then the new type would be "a.b". If "a.b" mutates during fission then the new type would be "a.b.a".
Are you proposing to have it remember every evolutionary pathway of every object in the already memory-intensive universe? Also, I hope that you're not proposing that you do that through C++ inheritance - if so, you're talking about recompilation of the code in real-time - incredibly, incredibly slow, and not really accomplishing much of a purpose.
quote:
This approach permits an efficiency. Since all objects of the same type are identical with respect to algorithms and properties, each object type's algorithms and properties can be represented centrally and therefore never need be represented more than once, not only saving space but reducing the possibility of cache misses and going virtual. Naturally each individual object possesses unique parameters, such as available energy, temperature, mass, etc
Just use a hash table.
quote:
if (ContainsFood(xLoc+2, yLoc, zLoc)) {
if (Unoccupied(xLoc+1, yLoc, zLoc)) {
MovXPos;
} else if (Unoccupied(xLoc+1, yLoc+1, zLoc)) {
MovXPos; MovYPos;
} else if (Unoccupied(xLoc+1, yLoc-1, zLoc)) {
MovXPos; MovYNeg;
} else if (Unoccupied(xLoc+1, yLoc, zLoc+1)) {
MovXPos; MovZPos;
} else if (Unoccupied(xLoc+1, yLoc, zLoc-1)) {
MovXPos; MovZNeg;
} else if (Unoccupied(xLoc+1, yLoc+1, zLoc+1)) {
MovXPos; MovYPos; MovZPos;
} else if (Unoccupied(xLoc+1, yLoc+1, zLoc-1)) {
MovXPos; MovYPos; MovZNeg;
} else if (Unoccupied(xLoc+1, yLoc-1, zLoc+1)) {
MovXPos; MovYNeg; MovZPos;
} else if (Unoccupied(xLoc+1, yLoc-1, zLoc-1)) {
MovXPos; MovYNeg; MovZNeg;
}
}
If you do things this way, do you realize how much code you're going to be accumulating, and having to process each cycle? You can't spell things out so flatly - you need fuzziness, such as a "direction vector" and "velocity vector". Unless you want some sort of cellular automaton, in which case you're thinking on *far* too high a level.
quote:
This is a big algorithm, and this is only for food two locations away on the X-axis. We need to check for food at all 56 cubes adjacent to the outside of our 3x3x3 cube, so we have to repeat this 55 more times.
Do you realize how much CPU you're talking about here? With fast function calls, the above algorithm may take, say, 20 CPU cycles. That's 1120 cycles just for food detection per organism. What happens when there's nothing in neighboring cells? Because, with the size of the world you proposed, unless you have terabytes worth of RAM to use for storing organisms, virtually all of the neighboring cells will be empty. And have you thought about how you would make such an algorithm mutable (important if you want interesting behavior)? Do you recompile (incredibly slow time to create), or is it a virtual machine (a lot of work, and slow processing time)?
quote:
foodAdj = 10; // Add 10 to "goodness" of each cube adjacent to food
energyAdj = 20; // Add 20 to "goodness" of each cube adjacent to energy
cellwallAdj = 10; // Add 10 to "goodness" of cubes adjacent to cell wall object
Are you proposing arbitrary metadata, or a preset list of possibilities? How many bits are you thinking about for each? Are they dynamically stored, or statically? Your proposal is going to gobble up memory like crazy, so everything you add to every object makes it that much more of a memory eater.
Why use this grid of cells at all? Unless you're storing your data as a fixed-point simulation of floating-point arithmetic (in which case you want to process at a much higher level, dividing the universe into quadrants), what is the purpose? So you can basically make a really slow version of a cellular automaton? Complex behavior comes from simple rules, too; you don't need all of this stuff. Why do your actions at the level of components of organisms when your sample functions are organism-level? Are you going to program in physics, so that when one part of an organism moves, the others move with it (so that your cell wall doesn't take off and ditch the rest of your cell), or is each part of each organism going to have to have the same code in it (running many, many times for the identical effect)? What about resistance? Reactions inducing opposite reactions? "Spring" effects versus rigid bodies? Breaking points?
Again, I think you're really overreaching for a first attempt here. I would recommend that you start with something easy - a custom cellular automaton or simple particle sim or something. If you want a "virtual zoo" type alife sim, I recommend that you have the organisms exist at spatial coordinates instead of in a grid of cells. I would recommend that you not try to model components of cells like this; just model the whole thing. If you want a component sim, don't set the actions that components can take to be things at the cellular level, like "moving". And I would recommend that you have behavior work through a neural net - or even something that just acts like a neural net: stimulation of some kind of sensor -> check the activation level of the neuron -> if greater than the activation level, fire the neuron -> the neuron stimulates other neurons: ones that cause activity, other sensory neurons, or neurons that are for neither senses nor action. Repeat. You don't have to have it be able to learn - no backtracing or anything of the sort - you can simply develop the nets through natural selection. They would effectively be taking the place of the complex chemical interactions that exist in a cell.
------------------
"Illuminant light,
illuminate me."

This message is a reply to:
 Message 53 by Percy, posted 10-17-2003 7:46 PM Percy has replied

Replies to this message:
 Message 58 by Percy, posted 10-18-2003 11:45 AM Rei has not replied

  
TheoMorphic
Inactive Member


Message 55 of 63 (61453)
10-18-2003 2:05 AM
Reply to: Message 53 by Percy
10-17-2003 7:46 PM


Ok, wow this is getting me excited too. Percy, please sign me up for potential programmers when you’re ready to start fleshing out code. I’m familiar with C++ but at a pretty basic level. I’m versed up until about pointers, but am not familiar with object oriented code yet (OO right?). I also have experience with assembly language.
Ok, just real quick: in regards to computing time, could we follow SETI's lead and divide the computing power up between many different computers? When the program is first run (in its simple form: 2D or a small 3D cube) hopefully a single computer will be able to handle it, and progress can be made on just one computer. But when actual full-on simulations are desired, is there some way we can split up the actual processing between many different computers? Granted, the actual implementation of that is WAY off, but it's just an idea to keep the super complex 3D space and not have to worry (as much) about the limitations of one computer. Does anyone know why this fundamentally wouldn't be possible?
Ok now about the program. I think having the location of each thing (cell, or food, or waste whatever) in the virtual world contained with the thing would be much more cost effective. That way the entire world will not have to be considered each and every cycle, only the cells that have something interesting happening in them.
Here's an algorithm that is more efficient than an Object checking all 26 spaces around itself. Order all the Objects in the world from smallest to largest in a pointer list according to their X value. Now take the first Object's X value and compare it to the next Object's X value. If the difference between the X values is within its sensing range, then a little switch is flipped in both Objects (basically the switch says an Object can sense something else OR is sensed by something else). After the switch is flipped, the current position is advanced to the next Object without its switch flipped. Ok, there need to be 3 things remembered: Current Object, Last Object, and Next Object. The Current Object's X value is compared to the Last and Next Objects' X values, and if the difference is within the sensing range then the switch is flipped for the Current Object and whatever Object it sensed. If at any time the Current Object is compared to the Next and Last and nothing is within its sensing range, then that Object is removed from the list, and the process continues. (If two X values are too far apart, it doesn't matter what the Y and Z values are - the two Objects will be too far apart to sense each other.)
After the list is progressed through, the Objects remaining in the list are re-ordered according to their Y value, the switches are reset, and the process is repeated. Y values too far from any other Y values are eliminated, and Y values close enough get their switches turned on.
There will be some complications when we get to the Z value (namely an object sensing multiple objects, and being detected by multiple objects) and objects with different sensing distances, but so far I think this will be more efficient than checking the 26 spaces around each object. Checking each space around an object will take something along the lines of 26*N checks (where N is the number of objects; actually more than 26 each if the sensing distance is more than 1). The algorithm I just described will have at most 6*N comparisons, regardless of how far an object can sense. Maybe the 6 will be more depending on how the final Z comparison works.
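For what it's worth, the X-axis pass described above is close to what collision-detection literature calls "sweep and prune". A single-axis sketch, simplified to candidate-pair collection (the struct and function names are mine):

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

struct Obj { int id; double x, y, z; };

// One axis of the sweep: sort by X and emit only the pairs whose X
// separation is within `range`. Pairs far apart on X are skipped
// without ever looking at Y or Z; the Y and Z passes would repeat
// this on the survivors.
std::vector<std::pair<int, int>> candidatePairsX(std::vector<Obj> objs,
                                                 double range) {
    std::sort(objs.begin(), objs.end(),
              [](const Obj& a, const Obj& b) { return a.x < b.x; });
    std::vector<std::pair<int, int>> pairs;
    for (size_t i = 0; i < objs.size(); ++i)
        for (size_t j = i + 1; j < objs.size()
                 && objs[j].x - objs[i].x <= range; ++j)
            pairs.push_back({objs[i].id, objs[j].id});
    return pairs;
}
```

The sort dominates at O(N log N); if objects move only slightly per step, keeping the list nearly sorted and patching it up (as suggested above) makes subsequent passes close to linear.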
Has what I described made sense so far? Is it a better solution? Suggestions to improve?
edit: I forgot an important part
[This message has been edited by TheoMorphic, 10-18-2003]

This message is a reply to:
 Message 53 by Percy, posted 10-17-2003 7:46 PM Percy has replied

Replies to this message:
 Message 56 by Rei, posted 10-18-2003 3:08 AM TheoMorphic has replied
 Message 59 by Percy, posted 10-18-2003 8:09 PM TheoMorphic has replied

  
Rei
Member (Idle past 7264 days)
Posts: 1546
From: Iowa City, IA
Joined: 09-03-2003


Message 56 of 63 (61456)
10-18-2003 3:08 AM
Reply to: Message 55 by TheoMorphic
10-18-2003 2:05 AM


quote:
Ok, just real quick, in regards to computing time could we follow SETI's lead, and divide the computing power up between many different computers?
No, you can't. SETI processes packets of data - small amounts of data transfer with large computing time in between. Processing a 3d world virtually always requires very large bandwidth for sharing what is going on in the "border regions". Unless you want bandwidth to be a very serious bottleneck, you're going to need a grid computing system with at least gigabit ethernet.
As computers get more powerful overall, this problem does decrease - the larger the world you can process and the faster you can go through it, the lower the ratio of surface area to volume (and consequently, the ratio of border-region data to transfer versus data to compute). However, due to latency, you're never going to be able to process something like this at a reasonable speed over the internet, unless we can get around that pesky speed of light.
quote:
Here's an algorithm that is more efficient than an Object checking all 26 spaces around itself. Order all the Objects in the world from smallest to largest in a pointer list according to their X value.
Size is not the key. Geography is. You need a geographic hash table. It's not that hard, really. You can get, depending on how you decide to compute, between O(N) and O(log N). What you advocated sounds like a bubble sort, O(N^2). You could do 3 merge sorts or qsorts instead, but why?
------------------
"Illuminant light,
illuminate me."

This message is a reply to:
 Message 55 by TheoMorphic, posted 10-18-2003 2:05 AM TheoMorphic has replied

Replies to this message:
 Message 57 by TheoMorphic, posted 10-18-2003 4:50 AM Rei has not replied

  
TheoMorphic
Inactive Member


Message 57 of 63 (61466)
10-18-2003 4:50 AM
Reply to: Message 56 by Rei
10-18-2003 3:08 AM


Rei writes:
You can get, depending on how you decide to compute, between O(N) and O(log N). What you advocated sounds like a bubble sort, O(N^2). You could do 3 merge sorts or qsorts instead, but why?
I didn't go in depth on how the algorithm should sort the various X, Y, and Z values. Really it's not important... whatever is optimal for speed (i'm thinking start with qsort, but then as the Objects move just re-arrange the list... instead of re-sorting).
I don't see how the rest of the algorithm falls in the O(N^2) range, since each Object is not being compared to every other Object. Each Object is just being compared to one (or 2) Objects.
i'm not familiar with what a hash table is, so if you'll excuse me i'll find out right now.

This message is a reply to:
 Message 56 by Rei, posted 10-18-2003 3:08 AM Rei has not replied

  
Percy
Member
Posts: 22937
From: New Hampshire
Joined: 12-23-2000
Member Rating: 6.9


Message 58 of 63 (61491)
10-18-2003 11:45 AM
Reply to: Message 54 by Rei
10-17-2003 8:37 PM


Hi, Rei, thanks for the feedback. Much appreciated.
First I have to apologize. Evidently while changing the order of presentation I left out a paragraph or two that set the stage for presenting my algorithm ideas. I've already edited the post and it reads better than it did before I garbled it. What got lost was that I was only presenting the ridiculously complex algorithm in order to justify a move to higher level functions that would permit increased simplicity. I proposed replacing the complex algorithm with a single function call, Move, which draws upon properties and parameters of the object.
quote:
Regarding 2D versus 3D, 2D is just 3D with Z=1. We should be able to design the system so that it works for Z=n, including when n=1. This allows us to debug in the simpler 2D environment and then move to 3D to add greater variability and complexity - or not if we find that 2D is sufficient for our needs.
I... wouldn't recommend that. Virtually everything will be a special case for z=1 (actually, typical worlds are either zero-indexed, or indexed over a wide numeric range in both directions. For non-floating point worlds, I prefer zero-indexed, as you can use inherent wraparound of your variables).
Not sure what you mean by zero-indexed in the context of sparse arrays - I'm probably being dense. As described in earlier posts, I'm planning to use a sparse array, but feedback (that I haven't replied to yet but will soon) convinced me there are better sparse approaches than what I originally had in mind. Typical sparse arrays assume a fairly random scattering of occupied locations around the array space, while the situation in Cellscape is much different. It will frequently be the case that there will be both densely and sparsely populated regions. An occupied cube means it is highly likely that adjacent cubes are also occupied, and this can be taken into account in the implementation. This will still be a sparse array, but I now plan to implement it using a hierarchical clumping approach. Someone mentioned octants - I haven't investigated this yet, but just guessing at what it means the concept seems good.
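For what it's worth, the hierarchical-octant idea is usually implemented as an octree: each level splits the current region into 8 half-size boxes, and only occupied boxes get table entries. A sketch of the addressing scheme (function name and world size are illustrative, not from the thread):

```python
def octant_path(x, y, z, world_size, depth):
    """Octant index (0-7) at each level locating the point (x, y, z).
    Each level halves the region; the index packs the three half-choices."""
    path = []
    ox = oy = oz = 0            # origin of the current region
    size = world_size
    for _ in range(depth):
        size //= 2
        ix = 1 if x >= ox + size else 0
        iy = 1 if y >= oy + size else 0
        iz = 1 if z >= oz + size else 0
        path.append(ix * 4 + iy * 2 + iz)
        ox, oy, oz = ox + ix * size, oy + iy * size, oz + iz * size
    return tuple(path)
```

A hash table keyed by these paths then acts as the sparse array: only occupied regions get entries, and clustered objects share long path prefixes, which matches the observation that occupied cubes tend to have occupied neighbors.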
quote:
An object's type is the sum of its mutational history. For example, if an object's type is "a", then if one of them mutates during fission then the new type would be "a.a". If "a" mutates again during a subsequent fission then the new type would be "a.b". If "a.b" mutates during fission then the new type would be "a.b.a".
Are you proposing to have it remember every evolutionary pathway of every object in the already memory-intensive universe? Also, I hope that you're not proposing that you do that through C++ inheritance - if so, you're talking about recompilation of the code in real-time - incredibly, incredibly slow, and not really accomplishing much of a purpose.
What I'd like to do is represent every unique evolutionary pathway. Oftentimes many objects would have the same pathway, and all such objects would include a pointer to a class holding both the pathway name and the functions and parameters representing that object type. I'm not planning to have any compilation step as part of the simulation.
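This shared-type scheme is essentially the flyweight pattern. A sketch (all names mine) of a type table keyed by pathway, using the naming rule from the quoted paragraph:

```python
# Sketch: all objects of a type share one TypeInfo record; each object
# would store only a reference to it plus its per-object parameters.

class TypeInfo:
    def __init__(self, pathway):
        self.pathway = pathway   # e.g. "a.b.a" -- the mutational history
        self.children = 0        # used to name the next mutated descendant

_type_table = {}                 # pathway -> TypeInfo, one entry per lineage

def get_type(pathway):
    """Intern a pathway so every object of a type shares one record."""
    if pathway not in _type_table:
        _type_table[pathway] = TypeInfo(pathway)
    return _type_table[pathway]

def mutate(parent):
    """New type named by appending the parent's next child letter,
    per the naming scheme in the quoted post."""
    suffix = chr(ord('a') + parent.children)
    parent.children += 1
    return get_type(parent.pathway + '.' + suffix)
```

With this layout each object carries only a pointer plus its own parameters (energy, temperature, mass), and the per-type functions live in exactly one place.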
quote:
This approach permits an efficiency. Since all objects of the same type are identical with respect to algorithms and properties, each object type's algorithms and properties can be represented centrally and therefore never need be represented more than once, not only saving space but reducing the possibility of cache misses and going virtual. Naturally each individual object possesses unique parameters, such as available energy, temperature, mass, etc
Just use a hash table.
I was planning on using an associative array, but they're implemented using hash tables, so we're thinking along the same lines.
Are you proposing arbitrary metadata, or a preset list of possibilities? How many bits are you thinking about for each? Are they dynamically stored, or statically? Your proposal is going to gobble up memory like crazy, so everything you add to every object makes it that much more of a memory eater.
I'm proposing both predefined parameters and arbitrary metadata (I like that term). Near the end of my post I think I indicated my desire for including arbitrary metadata (I called them emergent properties), but that I didn't have any specific implementation ideas along those lines at present.
I probably wasn't clear that I plan to have both properties (same value for all objects of type) and parameters (unique value per object). Perhaps better terms would be global and local properties.
My experience has been that careful design allows you to move more efficient implementations beneath well designed interfaces. I work in an industry whose products tend to have hundreds of thousands of lines of code, and which are used on machines with gigabytes of RAM and terabytes of disk, and which nonetheless oftentimes take a week to run. Adding efficiency and performance to a new feature after initial introduction is something we often do. I share the memory and performance concerns, but would prefer not to let such considerations overly influence the initial specification.
Why use this grid of cells at all? Unless you're storing your data as a fixed point simulation of floating point arithmetic (in which case, you want to process at a much higher level - dividing the universe into quadrants), what is the purpose? So you can basically make a really slow version of a cellular automata?
You're making me feel like I just proposed a perpetual motion machine! Let me know if my responses indicate I'm just as delusional.
While I don't find the field of cellular automata particularly illuminating for what I'm trying to do, the similarity of my approach has not escaped me. It's interesting that complex behaviors and structures can emerge from simple rules, and that these have in many cases been found to recreate behaviors and structures found in real life, but the fact of the matter is that nothing about real life is simple. That simple rules can replicate some things about real life doesn't mean that real life is following anything like the same rules. The positive results of cellular automata in helping us understand some aspects of life notwithstanding, I personally don't believe that the immense variety and sheer richness of real-life cell behavior emerges from cellular automata.
Why do your actions at the level of components of organisms when your sample functions are organism-level? Are you going to program in physics, so that when one part of an organism moves, the others move with it (so that your cell wall doesn't take off and ditch the rest of your cell)...
What a wonderful image, and this is exactly the problem I've been struggling with. I think the approach I've developed while wrestling with this problem will be valuable in helping attain an emergent result. The idea is to let behaviors and structures emerge, and not to design them in ahead of time.
I *would* like to implement *some* physics. Temperature is one possibility I find appealing, since organisms can evolve behavior to follow temperature gradients, be affected by temperature, and can die at the extremes.
That being said, I should probably repeat something I said in an earlier post, that what I'm proposing is "a life", not "the life". In other words, I hope that evolving lifeforms and a self-sustaining ecology will result, but I'm not expecting nor attempting lifeforms as realized by life on earth.
So by the same token, while I would like to have *a* physics, it wouldn't be *the* physics. And it is important that properties of compounds emerge and not be predetermined. I don't know how to do this yet, but then I haven't given it much thought either. It doesn't seem impossible, and it has probably even been done before.
Again, I think you're really overreaching for a first attempt here.
Hopefully my correction to the algorithm section of my earlier post alleviates some of this concern. The feeling of complexity may also be because the approach has evolved during the discussion on this thread, which is a good thing. The very useful feedback I'm getting from you and others is not what I anticipated when I originally began the thread. I actually thought there was a strong possibility it would be ignored. Anyway, the current approach feels pretty simple to me at this point, and when I finally am able to release the base object definitions I think that will be pretty clear.
Thanks again for the feedback - this is really helpful.
--Percy

This message is a reply to:
 Message 54 by Rei, posted 10-17-2003 8:37 PM Rei has not replied

  
Percy
Member
Posts: 22937
From: New Hampshire
Joined: 12-23-2000
Member Rating: 6.9


Message 59 of 63 (61533)
10-18-2003 8:09 PM
Reply to: Message 55 by TheoMorphic
10-18-2003 2:05 AM


TheoMorphic writes:
Ok, wow this is getting me excited too. Percy, please sign me up for potential programmers when you’re ready to start fleshing out code.
Will do! Current project staff, in alphabetical order:
  • awinkisas
  • IrishRockHound
  • Mr Jack
  • NosyNed
  • Percy
  • TheoMorphic
Ok, just real quick, in regards to computing time could we follow SETI’s lead, and divide the computing power up between many different computers?
I haven't touched on this yet, but in the back of my mind I've been assuming that truly interesting simulations could only be carried out using distributed computing. I believe one of the early alife programs, maybe it was Tierra, had a distributed Internet version, and there must be others as well. I share Rei's concern about internode bandwidth, and to be honest I hadn't thought about an Internet version so much as just a network version. My company has 3000 PCs on a dedicated high-bandwidth network, and I thought it would be kind of neat to use them. After all, the PCs don't do much at night - almost all our nightly regression tests run on workstations. One important factor would be dividing the simulated universe at boundaries where bandwidth requirements are minimal. It feels very similar to the problem of distributed simulation of digital systems where there are certain rules of thumb that you follow, such as not splitting the simulation at the instruction cache bus. So I guess you would divide simulated universes at the equivalent of mountain ranges and deserts.
Ok now about the program. I think having the location of each thing (cell, or food, or waste whatever) in the virtual world contained with the thing would be much more cost effective...Here’s an algorithm that is more efficient than an Object checking all 26 spaces around itself. Order all the Objects in the world from smallest to largest in a pointer list according to their X value. Now take the first object’s X value and compare it to the next object’s X value. If the difference between the X values is within it’s sensing range, then a little switch is flipped in both objects...
I can see a couple problems with this approach:
  1. It doesn't address checking the adjacency of an unoccupied cube. When assessing whether the adjacent unoccupied cube is a good candidate to move to, you have to determine whether food or danger or whatever is adjacent to that cube.
  2. It only works for direct adjacency and not diagonal adjacency.
Let me know if you think of ways around these issues.
I'll make a short comment about my current thinking (notice I say "thinking" and not "conclusions") about representing the universe. Someone earlier made reference to something they called an octant. I don't really know what this is, but my guess is that it's an 8x8x8 cube. And it's a real array, not a sparse array, which makes it very efficient, as opposed to a sparse array where every location has to be referenced through the hash table, even if it's an adjacent location. And you could have a hierarchy of octants, where you could have an 8x8x8 array of octants, in this case sparsely occupied and probably implemented with a hash table.
My thinking hasn't gone beyond trying to imagine what was meant by octants (a quick Google wasn't helpful) - I haven't made any serious effort to assess how successful this approach might be, but it *feels* promising and worth assessing.
--Percy

This message is a reply to:
 Message 55 by TheoMorphic, posted 10-18-2003 2:05 AM TheoMorphic has replied

Replies to this message:
 Message 60 by TheoMorphic, posted 10-18-2003 9:41 PM Percy has not replied

  
TheoMorphic
Inactive Member


Message 60 of 63 (61543)
10-18-2003 9:41 PM
Reply to: Message 59 by Percy
10-18-2003 8:09 PM


Percipient writes:
1. It doesn't address checking the adjacency of an unoccupied cube. When assessing whether the adjacent unoccupied cube is a good candidate to move to, you have to determine whether food or danger or whatever is adjacent to that cube.
Basically any movement will be valid, as long as it doesn't move it into the same cell as an object is that it's already detected (assuming an object can't move farther than it can detect). It won't have to check to make sure a cell is unoccupied because it will already know the locations of all the Objects around it. The only thing that will have to be done is a comparison between the place it wants to move, and the location of the Objects around itself.
Percipient writes:
2. It only works for direct adjacency and not diagonal adjacency.
i'm pretty sure it does work for diagonals too. Imagine 2 Objects, one located at (0,0,0) and one located at (1,1,1). The detecting range for both Objects is 1. The X values are compared first. 1-0 is less than or equal to 1, so the comparison between these two Objects continues. So Y is compared... again 1-0 is less than or equal to 1. Finally Z is compared... again 1-0 is less than or equal to 1, so both Objects are alerted to each other's presence.
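The per-axis comparison described above amounts to a Chebyshev-distance test, which does indeed catch diagonals. A minimal sketch (names mine; note it gives a cube-shaped sensing volume rather than a sphere):

```python
def in_range(a, b, sense):
    """True if b is within sensing range of a on every axis
    (Chebyshev distance <= sense, i.e. a cube-shaped sensing volume)."""
    return all(abs(ca - cb) <= sense for ca, cb in zip(a, b))

print(in_range((0, 0, 0), (1, 1, 1), 1))   # diagonal neighbor: True
print(in_range((0, 0, 0), (2, 1, 1), 1))   # out of range on X: False
```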
I'm sure this isn't the best way to go about representing the virtual world, but i believe it's better than checking each cell surrounding each object. If someone could explain what a hash table is... and how it would be a better representation...
edited to add:
to clarify: all objects (including food and waste) will have stored with them their own location. Granted, some objects won't react to anything that’s around them (like food and waste). To represent this they can be assigned a sensing power of 0... or they just won't react to anything they detect; however, other Objects can detect them, and so they will be assigned a location.
[This message has been edited by TheoMorphic, 10-18-2003]

This message is a reply to:
 Message 59 by Percy, posted 10-18-2003 8:09 PM Percy has not replied

Replies to this message:
 Message 61 by Rei, posted 10-19-2003 3:12 AM TheoMorphic has not replied

  
Copyright 2001-2023 by EvC Forum, All Rights Reserved
