Force explicit deletion of a Java object

I'm working on a Java server that handles a LOT of very dense traffic. The server accepts packets from clients (often many megabytes) and forwards them to other clients. The server never explicitly stores any of the incoming/outgoing packets, yet it continually runs into OutOfMemoryError.

I added System.gc() to the message-passing component of the server, hoping that memory would be freed. Additionally, I set the JVM's heap size to a gigabyte. I'm still getting just as many errors.

So my question is this: how can I make sure that the megabyte-sized messages aren't being queued indefinitely (despite not being needed)? Is there a way for me to call "delete" on these objects to guarantee they are not using my heap space?

        try
        {
            while (true)
            {
                int r = generator.nextInt(100); // random number in [0, 100)
                Object o = readFromServer.readObject();
                sum++;
                // if the random number is larger than the drop rate, forward
                // the object to the client; otherwise it is dropped
                if (r > dropRate)
                {
                    writeToClient.writeObject(o);
                    writeToClient.flush();
                    numOfSend++;
                    System.out.printf("No. %d sent\n", sum);
                }
            }
        }
        catch (IOException | ClassNotFoundException e) // not in the posted fragment; added so the snippet compiles
        {
            e.printStackTrace();
        }
Lacto answered 1/2, 2010 at 16:1 Comment(6)
To answer some of the answers here: my code does not store any references. The server is a SOCKS proxy that passes objects. I have a while loop that reads an object from an incoming stream and writes it to an outgoing stream. That's it. Looking into memory profilers now.Lacto
you read the chapter from "Effective Java" very quickly ;)Barytes
do you close those streams? Give some codeBarytes
No, we don't close streams. They are alive for the entire lifecycle of the program. I'll post some of the code in question. writeToClient is an ObjectOutputStream. So is readFromServer.Lacto
I mean readFromServer is an ObjectInputStream.Lacto
You need to close the stream to allow its content to get garbage collected. Also, modern garbage collectors may ignore the System.gc() call entirely (it's only a hint), so you can't rely on it doing anything.Agrology
20

Object streams hold references to every object written to or read from them. This is because the serialization protocol allows back-references to objects that appeared earlier in the stream. You might still be able to use this design, but use writeUnshared/readUnshared instead of writeObject/readObject. I think, but am not sure, that this will prevent the streams from keeping a reference to the object.

As Cowan says, the reset() method is also in play here. The safest thing to do is probably to use writeUnshared() immediately followed by reset() when writing to your ObjectOutputStreams.
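
A minimal sketch of that combination (not the poster's actual loop; it just borrows the question's stream names):

    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;

    // Forward one object without letting either stream pin it in its handle table.
    static void forwardOne(ObjectInputStream readFromServer, ObjectOutputStream writeToClient)
            throws IOException, ClassNotFoundException {
        Object o = readFromServer.readUnshared();  // read without recording a back-reference
        writeToClient.writeUnshared(o);            // write without recording one either
        writeToClient.reset();                     // discard any handles already accumulated
        writeToClient.flush();
        // 'o' is now unreachable from both streams and eligible for GC
    }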

Ferrel answered 1/2, 2010 at 16:32 Comment(2)
Can't upvote enough, because this is almost certainly the answer. Couple of extra things: yes, writeUnshared will do exactly what you say. Oddly, it's not guaranteed by the Javadoc, but Sun's implementation certainly works as you describe. Secondly, the other option (especially if you want limited back-referencing, i.e. you write out a 'cluster' of objects which refer to each other but then the next 'cluster' is separate) is to use ObjectOutputStream.reset() between 'batches', which explicitly 'disregard[s] the state of any objects already written to the stream'.Wiedmann
Here's one more +1 to finally end higher than my answer :)Bellamy
11

When the JVM is on the edge of an OutOfMemoryError, it will already have run the GC: the error is only thrown after a full garbage collection fails to free enough memory.

So calling System.gc() yourself beforehand isn't going to fix the problem. The problem has to be fixed elsewhere. There are basically two ways:

  1. Write memory efficient code and/or fix memory leaks in your code.
  2. Give JVM more memory.

Using a Java Profiler may give a lot of information about memory usage and potential memory leaks.
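
If attaching a profiler to the live process is awkward, a common alternative (standard HotSpot flags; server.jar is just a placeholder here) is to capture a heap dump at the moment of failure and analyze it offline:

    java -Xmx1g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -jar server.jar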

Update: as per your edit with more information about the code causing this problem, have a look at Geoff Reedy's answer in this topic, which suggests using ObjectInputStream#readUnshared() and ObjectOutputStream#writeUnshared() instead. The (linked) Javadoc also explains it pretty well.

Bellamy answered 1/2, 2010 at 16:5 Comment(5)
Personally, I'd reorder those. :)Corpulence
@mmyers, 2 is certainly easier but if you've got a leak you're just delaying the problem slightly ;)Desuetude
@Paolo: He edited after I commented but before the 5-minute grace period ended. Honest.Corpulence
Indeed, I was disturbed for some minutes between the edit and comment in.Bellamy
and the curious thing is that mmyers' comment was voted up both before and after the edit.Barytes
4

System.gc() is only a recommendation to the Java Virtual Machine. You call it, and the JVM may or may not run a garbage collection.

The OutOfMemoryError may be caused by two things: either you keep (unwanted) references to your objects, or you are accepting too many packets.

The first case can be analyzed with a profiler, where you try to find out which references are still live. A good indication of a memory leak is growing memory consumption on your server: if every additional request makes your Java process grow a little, chances are you are keeping references somewhere (jconsole might be a good start).

If you are accepting more data than you can handle, you will have to block additional requests until others are completed.
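
A minimal sketch of that second option (hypothetical names, not from the question): a bounded hand-off queue whose put() blocks the accepting thread when the backlog is full, so memory use stays capped.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Backpressure between the accepting thread and the forwarding thread.
    class BoundedHandoff {
        private final BlockingQueue<Object> pending = new ArrayBlockingQueue<>(16);

        // Called by the reader thread: blocks while 16 packets are already queued.
        void accept(Object packet) throws InterruptedException {
            pending.put(packet);
        }

        // Called by the forwarder thread: blocks until a packet is available.
        Object next() throws InterruptedException {
            return pending.take();
        }
    }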

Sick answered 1/2, 2010 at 16:9 Comment(0)
3

You can't call explicit garbage collection. But this is not the problem here. Perhaps you are storing references to these messages. Trace where they are handled and make sure no object holds a reference to them after they are used.

To get a better idea of what the best practices are, read Effective Java, chapter 2 - it's about "Creating and Destroying Objects"
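
A tiny illustration of the kind of lingering reference that chapter warns about (a hypothetical buffer, not the poster's code):

    // A LIFO buffer with an "obsolete reference" bug, in the spirit of
    // Effective Java's advice on eliminating obsolete object references.
    class MessageBuffer {
        private final Object[] slots = new Object[100];
        private int size = 0;

        void push(Object msg) { slots[size++] = msg; }

        // Leaky: slots[size] still references the message after "removal",
        // so the GC can never reclaim it.
        Object popLeaky() { return slots[--size]; }

        // Fixed: null out the slot so the message becomes eligible for GC.
        Object pop() {
            Object msg = slots[--size];
            slots[size] = null;
            return msg;
        }
    }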

Barytes answered 1/2, 2010 at 16:4 Comment(2)
You can call explicit garbage collection, although it's not guaranteed to run.Fibrous
Heh, I think he meant "You can explicitly call GC." The call is explicit, the GC is not... ;)Serigraph
3

Looking at your code: are your ObjectInput/OutputStream instances newly created each time a packet arrives or is sent, and if so, are they closed properly? If not, do you call reset() after each read/write? The object stream classes keep a reference to all objects they have seen (in order to avoid resending the same object each time it is referred), preventing them from being garbage collected. I had that exact problem about 10 years ago - actually the first time I had to use a profiler to diagnose a memory leak...

Schizo answered 1/2, 2010 at 17:3 Comment(3)
Hi there... if I may ask, could this be applicable to my previously asked question: #13489393Attenweiler
My program stops every time it's about to write the Excel file... could the output stream be the reason why it's stopping? Thanks!Attenweiler
@ides: no, can't be the same issue - your code creates a new output stream and closes it afterwards.Schizo
2

You cannot explicitly force deletion, but you CAN ensure that messages stay collectible by keeping only one direct (strong) reference to each in memory, and using Reference objects to hold garbage-collectible references everywhere else.

What about using a (small, bounded-size) queue for messages to process, fed by a secondary queue of SoftReferences? This way you guarantee that processing will proceed, BUT also that you won't get out-of-memory errors if messages are too big (the softly-referenced backlog will get dumped in that case).
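
A single-threaded sketch of that two-queue idea (names are made up): the small bounded queue holds messages being processed; the overflow queue holds only SoftReferences, which the GC is allowed to clear under memory pressure, so an oversized backlog is dropped instead of causing an OutOfMemoryError.

    import java.lang.ref.SoftReference;
    import java.util.ArrayDeque;
    import java.util.Queue;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    class TwoLevelQueue {
        private final BlockingQueue<Object> processing = new ArrayBlockingQueue<>(8);
        private final Queue<SoftReference<Object>> overflow = new ArrayDeque<>();

        void offer(Object message) {
            if (!processing.offer(message)) {                // bounded queue is full...
                overflow.add(new SoftReference<>(message));  // ...so hold it only softly
            }
        }

        Object poll() {
            Object next = processing.poll();
            refill();
            return next;
        }

        // Move surviving overflow entries into the bounded queue.
        private void refill() {
            SoftReference<Object> ref;
            while (processing.remainingCapacity() > 0 && (ref = overflow.poll()) != null) {
                Object m = ref.get();  // null if the GC already cleared it
                if (m != null) {
                    processing.offer(m);
                }
            }
        }
    }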

Kayekayla answered 1/2, 2010 at 16:5 Comment(0)
2

You can tune garbage collection in Java, but you cannot force it.

Oligarchy answered 1/2, 2010 at 16:6 Comment(0)
1

If you're getting OutOfMemoryErrors, something is clearly still holding references to these objects. You can use a tool such as jhat to find out where those references are sticking around.
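
For instance (assuming a JDK with jmap and jhat on the PATH, and <pid> standing in for your server's process id):

    jmap -dump:format=b,file=heap.hprof <pid>   # capture a heap dump of the running server
    jhat heap.hprof                             # browse the dump at http://localhost:7000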

Ferrel answered 1/2, 2010 at 16:5 Comment(0)
1

You need to find out if you are holding onto objects longer than necessary. The first step would be to get a profiler on the case and look at the heap and see why objects aren't being collected.

Although you've given the JVM 1 GB, it may be that your young generation is too small: if lots of objects are being created very quickly, they get promoted into the older generations, where they won't be collected as quickly.
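
For example (illustrative sizes only, and server.jar is a placeholder), standard HotSpot flags let you enlarge the young generation and log what the collector is doing:

    java -Xms1g -Xmx1g -Xmn256m -verbose:gc -XX:+PrintGCDetails -jar server.jar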

Some useful info on GC tuning: http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html

Desuetude answered 1/2, 2010 at 16:6 Comment(0)
1

The server accepts packets from clients (often many megabytes) and forwards them to other clients.

Your code probably receives the "packets" completely before forwarding them. This means it needs enough memory to store all packets entirely until they've been forwarded completely, and when those packets are "many megabytes" large, that means you need a lot of memory indeed. It also results in unnecessary latency.

It's possible that you have a memory leak as well, but if the above is true, this "store and forward" design is your biggest problem. You can probably cut memory usage by 95% if you redesign the app to not receive packets completely and instead stream them directly to the clients, i.e. read only a small part of the packet at a time and transmit that to the clients immediately. It's not difficult to do this in a way that looks exactly the same to the clients as store-and-forward does.
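
Roughly like this sketch (hypothetical names; it assumes your framing tells you each packet's length up front):

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    // Copy one packet in small chunks so at most 8 KB of it is in memory at a time.
    static void pipePacket(InputStream in, OutputStream out, long packetLength)
            throws IOException {
        byte[] buffer = new byte[8192];
        long remaining = packetLength;
        while (remaining > 0) {
            int n = in.read(buffer, 0, (int) Math.min(buffer.length, remaining));
            if (n < 0) throw new IOException("stream ended mid-packet");
            out.write(buffer, 0, n);
            remaining -= n;
        }
        out.flush();
    }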

Schizo answered 1/2, 2010 at 16:19 Comment(2)
Normally I would have gone with this design, but it's important for my application to drop a certain percentage of the "packets". The SOCKS proxy application is supposed to emulate a harsh network environment. I'm running forward erasure coding schemes through this proxy (each "packet" is a block of LT codes).Lacto
@Curious George: I don't understand why you have to drop packets unless you're writing tests, but even then, couldn't you decide on whether or not to drop one particular packet when it arrives, and in that case either return an error to the source client, or simply exhaust the input stream without storing the data?Schizo
0

Manually triggering System.gc() is not a good answer, as others have posted here. It's not guaranteed to run, and it triggers a full GC, which is likely to hang your server for a long time while it runs (>1 sec if you're giving your server a GB of RAM; I've seen several-minute pauses on larger systems). You could tune your GC, which will certainly help, but won't completely fix the problem.

If you're reading objects from one stream and then writing them out to another, then there's a point at which you're holding the entire object in memory. If these objects are, as you state, large, then that could be your problem. Try to rewrite your I/O so that you read bytes from one stream and write them to another without ever explicitly holding the complete object (although I can't see how this would work with object serialization/deserialization if you need to verify/validate the objects).

Fibrous answered 1/2, 2010 at 16:25 Comment(0)
0

Just to add to all the previous replies: System.gc() is not a command to the JVM to initiate garbage collection; it is a meek suggestion and does not guarantee that anything will happen. The JVM specification leaves it to vendors to decide what to do on gc calls. Vendors may even choose to do nothing at all!

Jackscrew answered 1/2, 2010 at 16:26 Comment(2)
FWIW, whenever I've called System.gc() in both the Sun and IBM JVMs, it's done it. It's not a synchronous call, my guess is that it just puts the GC thread on the run queue and returns. The GC thread then runs and suspends itself again. Yes, technically it doesn't have to, and I certainly wouldn't depend on it, but it's definitely worth trying as a debugging step. Oh, and finalizers tend to get run, too. Same caveats, but same observations, too.Serigraph
All I'm saying is that it's not dependable. Sun and IBM may have implemented it; some other vendor may not have.Jackscrew
0

You mention you explicitly need the whole received packet before you can send it? Well, that doesn't mean you need to store it all in memory, does it? Is it a feasible architectural change to save received packets to an external store (maybe a ram-disk, or a DB if even an SSD is too slow) and then pipe them directly to the recipient without ever loading them fully into memory?
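
A rough sketch of that idea (hypothetical method; it assumes the input stream delivers exactly one packet, and the temp directory could sit on a ram-disk):

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    // Stage the packet on disk instead of the heap, then stream it back out.
    static void spoolAndForward(InputStream in, OutputStream out) throws IOException {
        Path spool = Files.createTempFile("packet", ".bin");
        try {
            Files.copy(in, spool, StandardCopyOption.REPLACE_EXISTING); // heap holds only a small copy buffer
            Files.copy(spool, out);                                     // stream the staged packet onward
            out.flush();
        } finally {
            Files.deleteIfExists(spool);
        }
    }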

Assist answered 1/2, 2010 at 16:33 Comment(0)
0

If your server runs for at least a few minutes before it dies, you might want to try running it under VisualVM. You'll at least get a better idea of how fast the heap is growing and what kind of objects are in it.
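
For reference, VisualVM ships with the JDK (from Java 6 update 7 onwards); assuming the JDK's bin directory is on your PATH, attaching is just:

    jvisualvm   # pick your server's process, then watch the Monitor and Sampler tabs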

Serigraph answered 1/2, 2010 at 17:2 Comment(0)
