I am trying to figure out how to go about enabling an application to
distribute intensive computations across a network of computers. For
example, a number of 3D rendering packages can spread the rendering work
across the available computers on a network.

1. Is this approach completely different from having multiple CPUs in one
computer (and, I guess, hyperthreading)?
2. Is this also different from the "clusters" of computers that IBM and the
like have lately been promoting?
3. From a programming perspective, if we have many separate routines, I can
understand why one would want to use multiple threads. However, suppose,
for simplicity, that we have one very large loop that performs some very
intensive computations (a toy example follows this list). How would we
take advantage of CPUs across a network (or multiple CPUs in one computer,
or clusters)? I can't see how we could break the loop up and make use of
multiple threads...
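To make question 3 concrete, here is a toy C version of the kind of loop I
mean (expensive_computation is made up; the real per-iteration work would be
much heavier):

#include <stdio.h>
#include <math.h>

/* Stand-in for the real per-iteration work (made-up example). */
static double expensive_computation(long i)
{
    return sin((double)i) * cos((double)i);
}

int main(void)
{
    const long N = 10000000L;   /* very large iteration count */
    double total = 0.0;

    /* One big loop; iterations interact only through the running sum. */
    for (long i = 0; i < N; i++)
        total += expensive_computation(i);

    printf("result: %f\n", total);
    return 0;
}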

Finally, what sort of performance could one expect if the above details were
solved? I.e., if I have a network of ten 3 GHz P4 computers that are used
for nothing but carrying out the intensive computations of that one
application, would the speed be on the order of 2x, 3x, etc. of the base
case of having only one computer?
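In case it helps frame an answer, here is my naive mental model in C: a
fraction p of the work parallelizes perfectly over n machines and the rest
stays serial (I believe this is what Amdahl's law describes; the value of p
below is a pure guess):

#include <stdio.h>

int main(void)
{
    double p = 0.95;  /* assumed (guessed) parallelizable fraction */

    /* Speedup on n machines if the serial part (1 - p) never shrinks. */
    for (int n = 1; n <= 10; n++)
        printf("%2d machine(s) -> %.2fx\n", n, 1.0 / ((1.0 - p) + p / n));
    return 0;
}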

-Ara