Ok, wow, this is getting me excited too. Percy, please sign me up as a potential programmer when you're ready to start fleshing out code. I'm familiar with C++, but at a pretty basic level: I know it up through pointers, but I'm not familiar with object-oriented code yet (OO, right?). I also have experience with assembly language.
Ok, just real quick, regarding computing time: could we follow SETI's lead and divide the computing power up between many different computers? When the program is first run (in its simple form: 2D or a small 3D cube), hopefully a single computer will be able to handle it, and progress can be made on just one machine. But when actual full-on simulations are desired, is there some way we can split the actual processing between many different computers? Granted, the actual implementation of that is WAY off, but it's just an idea to keep the super complex 3D space and not have to worry (as much) about the limitations of one computer. Does anyone know why this fundamentally wouldn't be possible?
Ok, now about the program. I think having the location of each thing (cell, food, waste, whatever) in the virtual world stored inside the thing itself would be much more cost effective. That way the entire world won't have to be considered each and every cycle, only the places where something interesting is actually happening.
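Just to make that concrete, here's a rough C++ sketch of what I mean (all the names like Thing and kind are placeholders I made up, not final code):

[code]
#include <vector>

// Each thing in the world stores its own location; there is no
// big 3D array of mostly empty cells to scan every cycle.
struct Thing {
    double x, y, z;   // position lives inside the Thing itself
    int kind;         // e.g. cell, food, waste (placeholder)
};

int main()
{
    std::vector<Thing> world;              // only what actually exists
    world.push_back({1.0, 2.0, 3.0, 0});   // a cell at (1,2,3)
    world.push_back({4.0, 5.0, 6.0, 1});   // some food at (4,5,6)

    // One simulation cycle: visit each Thing, never the empty space
    // between them, so the cost scales with the number of Things,
    // not with the volume of the world.
    for (Thing& t : world) {
        (void)t;  // ... update t here ...
    }
}
[/code]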
Here's an algorithm that is more efficient than each Object checking all 26 spaces around itself. Order all the Objects in the world from smallest to largest in a pointer list according to their X value. Now take the first Object's X value and compare it to the next Object's X value. If the difference between the X values is within its sensing range, then a little switch is flipped in both Objects (basically the switch says an Object can sense something else OR is sensed by something else). After the switch is flipped, the current position is advanced to the next Object without its switch flipped. Three things need to be remembered: the Current Object, the Last Object, and the Next Object. The Current Object's X value is compared to the Last and Next Objects' X values, and if the difference is within the sensing range, then the switch is flipped for the Current Object and whatever Object it sensed. If at any time the Current Object is compared to the Next and Last and nothing is within its sensing range, then that Object is removed from the list and the process continues. (If two X values are too far apart, it doesn't matter what the Y and Z values are; the two Objects will be too far apart to sense each other.)
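Here's a rough C++ sketch of one pass of that sweep, in case it helps (the Object struct and all the names are placeholders, not final code). Since the list is sorted, comparing each Object to its immediate neighbor covers both the Last and Next checks: if even the nearest neighbor is out of range, everything further away is too. For the different-sensing-distances complication, I just took the larger of the two ranges for each pair, which is one possible way to handle it:

[code]
#include <algorithm>
#include <cstddef>
#include <vector>

struct Object {
    double pos[3];      // pos[0] = X, pos[1] = Y, pos[2] = Z
    double senseRange;  // how far this Object can sense
    bool flagged;       // the "switch": senses or is sensed by something
};

// One pruning pass along a single axis (0 = X, 1 = Y, 2 = Z).
void sweepAxis(std::vector<Object*>& objs, int axis)
{
    // Order the pointer list from smallest to largest on this axis.
    std::sort(objs.begin(), objs.end(),
              [axis](const Object* a, const Object* b) {
                  return a->pos[axis] < b->pos[axis];
              });

    for (Object* o : objs)
        o->flagged = false;   // reset all the switches

    // Compare each Object to its immediate neighbor in sorted order.
    for (std::size_t i = 0; i + 1 < objs.size(); ++i) {
        double gap = objs[i + 1]->pos[axis] - objs[i]->pos[axis];
        double range = std::max(objs[i]->senseRange,
                                objs[i + 1]->senseRange);
        if (gap <= range) {
            objs[i]->flagged = true;      // flip both switches
            objs[i + 1]->flagged = true;
        }
    }

    // Remove Objects whose switch never flipped: they can't possibly
    // sense (or be sensed by) anything this cycle.
    objs.erase(std::remove_if(objs.begin(), objs.end(),
                              [](const Object* o) { return !o->flagged; }),
               objs.end());
}
[/code]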
After the list has been worked through, the Objects remaining in it are re-ordered according to their Y value, the switches are reset, and the process is repeated: Objects whose Y values are too far from every other Y value are eliminated, and Objects close enough get their switches turned on.
There will be some complications when we get to the Z value (namely an Object sensing multiple Objects and being sensed by multiple Objects) and with Objects that have different sensing distances, but so far I think this will be more efficient than checking the 26 spaces around each Object. Checking each space around an Object takes something on the order of 26*N checks (where N is the number of Objects; actually more than 26 per Object if the sensing distance is more than 1). The algorithm I just described needs at most about 6*N comparisons regardless of how far an Object can sense (two neighbor checks per Object on each of the three axes), plus the cost of sorting the list each pass. Maybe the 6 will grow depending on how the final Z comparison ends up working.
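Putting the passes together, using the sweepAxis() sketch from above, would look something like this; whatever survives all three passes is the short list of Objects that might actually sense something:

[code]
// Run the X, Y, and Z passes in turn; each pass shrinks the list.
std::vector<Object*> pruneWorld(const std::vector<Object*>& allObjects)
{
    std::vector<Object*> candidates = allObjects;  // copy of the full list
    for (int axis = 0; axis < 3; ++axis)
        sweepAxis(candidates, axis);

    // 'candidates' now holds only the Objects that were within sensing
    // range of some neighbor on every axis.  A real 3D distance check
    // still has to pair them up and sort out who senses whom (the
    // multiple-senses complication from the Z pass).
    return candidates;
}
[/code]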
Does what I've described make sense so far? Is it a better solution? Any suggestions to improve it?
edit: i forgot an important part
[This message has been edited by TheoMorphic, 10-18-2003]