How to create a Linux cluster for running physics simulations in Java?

I am developing a scientific application used to perform physical simulations. The algorithms used are O(n³), so a large data set takes a very long time to process. The application runs a single simulation in around 17 minutes, and I have to run around 25,000 simulations. That is around one year of processing time.

The good news is that the simulations are completely independent from each other, so I can easily change the program to distribute the work among multiple computers.

There are multiple solutions I can see to implement this:

  • Get a multi-core computer and distribute the work among all the cores. Not enough for what I need to do.
  • Write an application that connects to multiple "processing" servers and distributes the load among them.
  • Get a cluster of cheap Linux computers, and have the program treat everything as a single entity.

Option number 2 is relatively easy to implement, so I'm not looking so much for suggestions on how to implement it (it can be done just by writing a program that waits on a given port for the parameters, processes the values, and returns the result as a serialized file). That would be a good example of Grid Computing.
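
For what it's worth, here is a minimal sketch of such a "processing" server in plain Java. It is not a full implementation: the parameters and results are represented as a bare double[] rather than the application's real Serializable classes, and runSimulation is a stand-in for the existing simulation code.

    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class SimulationWorker {

        public static void main(String[] args) throws Exception {
            int port = args.length > 0 ? Integer.parseInt(args[0]) : 9000;
            try (ServerSocket server = new ServerSocket(port)) {
                while (true) {
                    // Handle one request at a time; run one worker process per core.
                    try (Socket socket = server.accept()) {
                        ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
                        // Flush the stream header so the client's ObjectInputStream
                        // constructor does not block waiting for it.
                        out.flush();
                        ObjectInputStream in = new ObjectInputStream(socket.getInputStream());
                        double[] params = (double[]) in.readObject();
                        out.writeObject(runSimulation(params));
                        out.flush();
                    }
                }
            }
        }

        // Stand-in for the real O(n³) simulation.
        private static double[] runSimulation(double[] params) {
            return params;
        }
    }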

However, I wonder about the possibilities of the last option, a traditional cluster. How difficult is it to run a Java program on a Linux grid? Will all the separate computers be treated as a single computer with multiple cores, making it easy to adapt the program? Are there any good pointers to resources that would allow me to get started? Or am I making this over-complicated, and am I better off with option number 2?

EDIT: As extra info, I am interested in how to implement something like the setup described in this article from Wired Magazine: scientists replaced a supercomputer with a PlayStation 3 Linux cluster. Definitely number two sounds like the way to go... but the coolness factor.

EDIT 2: The calculation is very CPU-bound. Basically there are a lot of operations on large matrices, such as inversion and multiplication. I tried to look for better algorithms for these operations, but so far I've found that the operations I need are O(n³) (in the libraries that are normally available). The data set is large (for such operations), but it is created on the client based on the input parameters.


I see now that I had a misunderstanding of how a computer cluster under Linux works. I had assumed it would work in such a way that all the processors in all the computers would simply appear to be available, just as if you had a single computer with multiple cores, but that doesn't seem to be the case. It seems that all these supercomputers work by having nodes that execute tasks distributed by some central entity, and that there are several different libraries and software packages that make it easy to perform this distribution.

So, as there is no such thing as number 3, the question really becomes: what is the best way to create a clustered Java application?

Hexastich answered 2/12, 2008 at 10:58 Comment(1)
Is the simulation CPU-bound or I/O-bound? Is there a large input dataset? – Zaporozhye

I would very highly recommend the Java Parallel Processing Framework (JPPF), especially since your computations are already independent. I did a good bit of work with it as an undergraduate and it works very well. The work of doing the implementation is already done for you, so I think this is a good way to achieve the goal in "number 2".

http://www.jppf.org/
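
To give a feel for the programming model, here is a rough sketch of a JPPF task and the client that submits a job. This is written from memory of the JPPF documentation, so treat the specific class and method names (JPPFClient, JPPFJob, AbstractTask, the submit call) as assumptions to verify against the version you download; they have changed between JPPF releases.

    import java.util.List;

    import org.jppf.client.JPPFClient;
    import org.jppf.client.JPPFJob;
    import org.jppf.node.protocol.AbstractTask;
    import org.jppf.node.protocol.Task;

    public class JppfRunner {

        // One simulation wrapped as a JPPF task: the framework serializes it,
        // ships it to an idle node in the grid, runs it there, and collects
        // the result.
        public static class SimulationTask extends AbstractTask<double[]> {
            private final double[] params;

            public SimulationTask(double[] params) {
                this.params = params;
            }

            @Override
            public void run() {
                // The existing simulation code goes here; this stub just
                // echoes the parameters back as the "result".
                setResult(params);
            }
        }

        public static void main(String[] args) throws Exception {
            try (JPPFClient client = new JPPFClient()) {
                JPPFJob job = new JPPFJob();
                for (int i = 0; i < 25000; i++) {
                    job.add(new SimulationTask(new double[] { i }));
                }
                // Blocking submit: returns when all tasks have completed.
                // The exact method name (submit vs. submitJob) depends on
                // the JPPF version.
                List<Task<?>> results = client.submit(job);
                System.out.println("Finished " + results.size() + " simulations");
            }
        }
    }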

Tatman answered 2/12, 2008 at 11:3 Comment(1)
Interesting framework, that would surely make it simpler to implement number two – Hort

Number 3 isn't difficult to do. It requires developing two distinct applications, the client and the supervisor. The client is pretty much what you have already: an application that runs a simulation. However, it needs altering so that it connects to the supervisor using TCP/IP or whatever and requests a set of simulation parameters. It then runs the simulation and sends the results back to the supervisor.

The supervisor listens for requests from the clients and, for each request, gets an unallocated simulation from a database and updates the database to indicate the item is allocated but unfinished. When the simulation is finished, the supervisor updates the database with the result. If the supervisor stores the data in an actual database (MySQL, etc.) then the database can be easily queried for the current state of the simulations. This should scale well up to the point where the time taken to provide the simulation data to all the clients equals the time required to perform a simulation.
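
Purely as an illustration of the supervisor's allocation step (my sketch, not part of the answer), the following assumes a hypothetical simulations table with id and state columns in MySQL; the transaction and the FOR UPDATE lock ensure that two clients asking at the same time cannot claim the same row.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class Supervisor {

        // Claims one unallocated simulation and marks it as allocated.
        // Returns the id of the claimed simulation, or -1 if none are left.
        static long claimNextSimulation(Connection conn) throws SQLException {
            conn.setAutoCommit(false);
            try (PreparedStatement select = conn.prepareStatement(
                    "SELECT id FROM simulations WHERE state = 'pending' LIMIT 1 FOR UPDATE");
                 ResultSet rs = select.executeQuery()) {
                if (!rs.next()) {
                    conn.rollback();
                    return -1; // nothing left to hand out
                }
                long id = rs.getLong("id");
                try (PreparedStatement update = conn.prepareStatement(
                        "UPDATE simulations SET state = 'allocated' WHERE id = ?")) {
                    update.setLong(1, id);
                    update.executeUpdate();
                }
                conn.commit();
                return id;
            }
        }
    }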

Poor answered 2/12, 2008 at 11:29 Comment(1)
This is still number two, however it is more or less what I envisioned, except using files instead of databases. – Hort

The simplest way to distribute computation on a Linux cluster is to use MPI. I'd suggest you download and look at MPICH2. It's free; their home page is here.

If your simulations are completely independent, you don't need most of the features of MPI. You might have to write a few lines of C to interface with MPI and kick off execution of your script or Java program.

Nudity answered 5/12, 2008 at 20:49 Comment(2)
Also consider OpenMPI (open-mpi.org) – Evapotranspiration
I wouldn't say MPI is the "simplest" way. JPPF is pretty good in my view. Although, I'm a bit biased toward it. :) – Tatman

You should check out Hazelcast, the simplest peer-to-peer (no centralized server) clustering solution for Java. Try Hazelcast's distributed ExecutorService for executing your code on the cluster.
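
A minimal sketch of what that looks like, written against a recent Hazelcast API (Hazelcast.newHazelcastInstance, getExecutorService and IExecutorService are from current Hazelcast releases and may differ from the versions available when this answer was written):

    import java.io.Serializable;
    import java.util.concurrent.Callable;
    import java.util.concurrent.Future;

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IExecutorService;

    public class HazelcastRunner {

        // A simulation as a distributable task: it must be Serializable so
        // Hazelcast can ship it to whichever cluster member executes it.
        public static class SimulationTask implements Callable<double[]>, Serializable {
            private final double[] params;

            public SimulationTask(double[] params) {
                this.params = params;
            }

            @Override
            public double[] call() {
                return params; // stand-in for the real simulation
            }
        }

        public static void main(String[] args) throws Exception {
            // Every machine that starts an instance with the same network
            // config joins the same cluster; there is no central server.
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();
            IExecutorService executor = hz.getExecutorService("simulations");
            Future<double[]> future = executor.submit(new SimulationTask(new double[] { 1.0 }));
            System.out.println("Result length: " + future.get().length);
            hz.shutdown();
        }
    }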

Regards,

-talip

Drawee answered 15/5, 2009 at 6:9 Comment(0)

You already suggested it, but disqualified it: multiple cores. You could go for multi-core, if you had enough cores. One hot topic at the moment is GPGPU computing. Especially NVIDIA's CUDA is a very promising approach if you have many independent tasks which have to do the same computation. A GTX 280 delivers 240 cores, which can compute up to 1120 - 15360 threads simultaneously. A pair of them could solve your problem. Whether it's really implementable depends on your algorithm (data flow vs. control flow), because all scalar processors operate in a SIMD fashion.

Drawback: it would be C/C++, not Java.

Bibeau answered 2/12, 2008 at 11:59 Comment(2)
How much money are we talking about for one of these? – Hort
Look at your favourite hardware dealer. The NVIDIA GTX 280 is of course top notch and costs 400 EUR, but you don't need the state-of-the-art card. CUDA supports almost all recent cards, e.g. the performant 8800 GTS, which you can get for less than 80 EUR. But very important for this: what does your algorithm look like? – Bibeau

How optimized are your algorithms? Are you using native BLAS libraries? You can get about an order of magnitude performance gain by switching from naive libraries to optimized ones. Some, like ATLAS, will also automatically spread the calculations over multiple CPUs on a system, so that covers bullet 1 automatically.
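
As a purely illustrative example of switching libraries (my addition, not the answer's): EJML is one freely available Java library with optimized dense-matrix routines, and JAMA-style code maps over almost directly. Note the use of solve instead of forming an explicit inverse, which is generally both faster and more numerically accurate.

    import java.util.Random;

    import org.ejml.simple.SimpleMatrix;

    public class MatrixOps {

        public static void main(String[] args) {
            int n = 500;
            Random rnd = new Random(42);
            SimpleMatrix a = new SimpleMatrix(n, n);
            SimpleMatrix b = new SimpleMatrix(n, n);
            for (int i = 0; i < n; i++) {
                for (int j = 0; j < n; j++) {
                    a.set(i, j, rnd.nextDouble());
                    b.set(i, j, rnd.nextDouble());
                }
            }
            SimpleMatrix product = a.mult(b); // optimized multiply
            // Solving A*X = B directly is usually faster and more accurate
            // than the explicit inverse: a.invert().mult(b).
            SimpleMatrix x = a.solve(b);
            System.out.println(product.get(0, 0) + " " + x.get(0, 0));
        }
    }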

AFAIK clusters usually aren't treated as a single entity. They are usually treated as separate nodes and programmed with stuff like MPI and ScaLAPACK to distribute the elements of matrices onto multiple nodes. This doesn't really help you all that much if your data set fits in memory on one node anyway.

Diffusivity answered 2/12, 2008 at 15:32 Comment(1)
I am using JAMA, which is not an optimized library as far as I know. It seems then that the only option is number 2. I was hoping that a Linux cluster would be treated like a single entity. – Hort

Have you looked at Terracotta?

For work distribution you'll want to use the Master/Worker framework.

Lodged answered 27/1, 2009 at 7:44 Comment(0)

Ten years ago, the company I worked for looked at a similar virtualization solution. Sun, Digital and HP all supported it at the time, but only on state-of-the-art supercomputers with hardware hot-swap and the like. Since then, I've heard that Linux supports the type of virtualization you're looking for in solution #3, but I've never used it myself.

Java primitives and performance

However, if you do matrix calculations you'd want to do them in native code, not in Java (assuming you're using Java primitives). Cache misses in particular are very costly, and interleaving in your arrays will kill performance. Non-interleaved chunks of memory in your matrices, plus native code, will get you most of the speedup without additional hardware.
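
To illustrate the memory-layout point in Java terms: a double[][] is an array of separately allocated row arrays that can end up scattered across the heap, while a single flat double[] keeps the whole matrix in one contiguous block, which is far friendlier to the CPU cache (and is the layout native BLAS code expects). A minimal sketch:

    public class FlatMatrix {

        // An n x n matrix stored row-major in one contiguous double[].
        private final int n;
        private final double[] data;

        public FlatMatrix(int n) {
            this.n = n;
            this.data = new double[n * n];
        }

        public double get(int row, int col) {
            return data[row * n + col];
        }

        public void set(int row, int col, double value) {
            data[row * n + col] = value;
        }
    }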

Parallelogram answered 15/6, 2009 at 17:33 Comment(0)
