- 07-16-2007, 04:57 PM #1
Member
- Join Date
- Jul 2007
- Posts
- 26
- Rep Power
- 0
java.lang.OutOfMemoryError: Java heap space
I'm working on a small application that runs multiple threads and can potentially get caught up in a very deep recursive process...
Everything works fine and, as expected, the execution time increases as the recursion gets deeper...
Each time the recursive method is called it sends output to a text area that can be saved at the user's request.
Once the program fills the text area with approximately 212,000 lines it crashes and gives me a
Java Code: java.lang.OutOfMemoryError: Java heap space
While googling for a solution I came across a page that gives an example of an OutOfMemoryError warning system. That sounds like exactly what I'm looking for, but I can't get it to work...it's not much of a tutorial.
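This is roughly what I'm trying to get working, pieced together from the standard java.lang.management API (the 80% threshold and the class name are my own choices, not from that page):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryNotificationInfo;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import javax.management.NotificationEmitter;

public class LowMemoryWarning {
    public static void main(String[] args) {
        // Find a heap pool that supports usage thresholds.
        MemoryPoolMXBean heapPool = null;
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP && pool.isUsageThresholdSupported()) {
                heapPool = pool;
            }
        }
        // Ask to be notified once the pool is 80% full (arbitrary choice).
        long max = heapPool.getUsage().getMax();
        heapPool.setUsageThreshold((long) (max * 0.8));

        // The MemoryMXBean emits a notification when the threshold is crossed.
        NotificationEmitter emitter = (NotificationEmitter) ManagementFactory.getMemoryMXBean();
        emitter.addNotificationListener((notification, handback) -> {
            if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED
                    .equals(notification.getType())) {
                System.err.println("Warning: heap usage crossed 80% -- save output now");
            }
        }, null, null);
    }
}
```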
Thanks
- 07-25-2007, 08:07 PM #2
Member
- Join Date
- Jul 2007
- Posts
- 55
- Rep Power
- 0
Increase the heap size, for example:
-Xms256m -Xmx512m -XX:PermSize=64M -XX:MaxPermSize=1000M
These values must be passed to the VM when it starts.
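For example, on the command line (MyApp stands in for your own main class; note that the PermGen flags -XX:PermSize/-XX:MaxPermSize only exist on Java 7 and earlier, as PermGen was removed in Java 8):

```shell
# 256 MB initial heap, 512 MB maximum heap
java -Xms256m -Xmx512m MyApp

# Java 7 and earlier: additionally size the permanent generation
java -Xms256m -Xmx512m -XX:PermSize=64m -XX:MaxPermSize=256m MyApp
```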
- 06-12-2010, 08:44 AM #3
Member
- Join Date
- Jun 2010
- Posts
- 7
- Rep Power
- 0
- 06-12-2010, 11:59 AM #4
Member
- Join Date
- Jun 2010
- Location
- Berlin
- Posts
- 22
- Rep Power
- 0
Maybe you should have a look at the Runtime class, which lets you check the current amount of free memory (among other things).
The amount of memory available to the VM has to be set when starting the VM, the way Seemster mentioned before.
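A quick sketch of what that looks like (all figures are in bytes; the class name is my own):

```java
public class MemoryStats {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();       // upper heap limit (-Xmx)
        long total = rt.totalMemory();   // heap currently reserved by the VM
        long free = rt.freeMemory();     // unused part of the reserved heap
        long used = total - free;        // memory actually in use
        System.out.printf("used=%d MB, reserved=%d MB, max=%d MB%n",
                used >> 20, total >> 20, max >> 20);
    }
}
```

Printing these values at intervals from your recursive method would show how close you are to the limit before the crash.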
- 06-12-2010, 03:50 PM #5
Member
- Join Date
- Jun 2010
- Posts
- 7
- Rep Power
- 0
Thanks a lot for your reply. I don't know how to use that class, but in my program I define a hashtable on the heap and then want to put more than 30 million elements into it; I think that's the cause of the problem.
So I need a tool to monitor the memory bottleneck in my program....
Another question:
Can I define a hashtable on the hard disk instead of in heap space?
- 06-12-2010, 03:51 PM #6
Member
- Join Date
- Jun 2010
- Posts
- 7
- Rep Power
- 0
I increased the heap space, but unfortunately it didn't work...
- 06-12-2010, 03:59 PM #7
Member
- Join Date
- Jun 2010
- Location
- Berlin
- Posts
- 22
- Rep Power
- 0
So what? Have you read the page I linked? Is there some part you don't understand? If so, just ask. Otherwise I have to guess that you don't want to read it yourself, but want someone to google for you.
Of course you can put data onto the hard disk, but the hashtable has to be kept on the heap while you want to access it. You could create some kind of table that is partitioned into pages, saves the unneeded pages to the hard disk, and reads them back when needed, and so on. But to be honest, that is not easy and won't make much sense for a big hashtable.
Fortunately there are some solutions already available (and even for free), which you might already have heard about: databases. A database system (e.g. Postgres) manages really big amounts of data using some very intelligent mechanisms. So you should check whether it wouldn't be more useful to use a database instead of a hashtable.
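To illustrate the paging idea, here is a toy sketch using a LinkedHashMap that evicts its eldest entries to a spill file (everything here, class name and cache size included, is illustrative, not a production design):

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.LinkedHashMap;
import java.util.Map;

// Keeps at most CACHE_SIZE entries on the heap; evicted entries are
// appended to a file. Reading a spilled entry back would require scanning
// the file -- which is exactly why a real database is the better tool.
public class DiskSpillingMap extends LinkedHashMap<String, String> {
    static final int CACHE_SIZE = 1000;  // arbitrary choice for the sketch
    private final File spillFile;

    public DiskSpillingMap(File spillFile) {
        super(16, 0.75f, true);  // access-order iteration, i.e. LRU eviction
        this.spillFile = spillFile;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
        if (size() > CACHE_SIZE) {
            try (FileWriter w = new FileWriter(spillFile, true)) {
                w.write(eldest.getKey() + "\t" + eldest.getValue() + "\n");
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
            return true;  // drop the eldest entry from the heap
        }
        return false;
    }
}
```

This bounds the heap usage, but a database gives you the same spill-to-disk behaviour plus indexed lookups for free.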
- 06-12-2010, 04:16 PM #8
Member
- Join Date
- Jun 2010
- Posts
- 7
- Rep Power
- 0
Thanks a lot. About reading that page: what I meant is that I don't need the total memory space, I need a tool to show me the memory bottleneck in my program....
About the database: I think interaction with a database is slower than a hashtable, and I need a fast structure.
I'm very thankful for your useful help.
- 06-12-2010, 04:31 PM #9
Member
- Join Date
- Jun 2010
- Location
- Berlin
- Posts
- 22
- Rep Power
- 0
You can get the free memory of your VM, too (which is your bottleneck).
Well, I can't guess your constraints. How is "fast" defined? Every system has its limits. Of course interacting with a database adds some overhead, but its organisation of the data reduces the time needed to get a result. A database isn't inherently fast or slow; that depends on the database structure and the operations used (e.g. lots of updates versus mostly reads). Databases know about hashtables too and use them for indexing (among a lot of other structures; bitmap indexes are even faster for some workloads!).
I think you should describe what your operation is all about and which constraints you're thinking of; that would help in suggesting a solution.
- 06-12-2010, 04:50 PM #10
Member
- Join Date
- Jun 2010
- Posts
- 7
- Rep Power
- 0
OK, thank you, dear friend, your answers really help me. Let me explain my operation: I want to collect about 40 million URLs from the web (by sending queries to a search engine and parsing the result pages) under a specific condition. I store those URLs in a hashtable to filter out repeated URLs.
My operation is meant to be compared against the crawling method, so I need to do it fast enough to retrieve URLs faster than a crawler would.
That's all there is to my operation.
Thanks in advance.
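If the hashtable is only there to detect repeats, one way to shrink its footprint is to store a 64-bit hash of each URL instead of the string itself. A sketch (names are my own; note that with any hash set of this kind two different URLs can, very rarely, collide and be wrongly treated as duplicates):

```java
import java.nio.charset.StandardCharsets;
import java.util.HashSet;
import java.util.Set;

public class UrlDeduper {
    private final Set<Long> seen = new HashSet<>();

    // FNV-1a 64-bit hash: cheap to compute and stable across runs.
    static long hash64(String s) {
        long h = 0xcbf29ce484222325L;
        for (byte b : s.getBytes(StandardCharsets.UTF_8)) {
            h ^= (b & 0xffL);
            h *= 0x100000001b3L;
        }
        return h;
    }

    /** Returns true the first time a URL is seen, false for repeats. */
    public boolean add(String url) {
        return seen.add(hash64(url));
    }
}
```

At roughly 50-60 bytes per entry (boxed Long plus set overhead) this is still heap-hungry at 40 million entries, but far lighter than keeping every URL string; a database UNIQUE constraint removes the heap limit entirely.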
- 06-12-2010, 05:09 PM #11
Member
- Join Date
- Jun 2010
- Location
- Berlin
- Posts
- 22
- Rep Power
- 0
And you're quite welcome!
Well, I think that is already the bottleneck. The overhead of receiving 40 million URLs from the web is much bigger than that of putting them into a database. Even the creation of 40 million strings dominates the time spent inserting them into the database. To get the application really fast, you should think about something like an object cache (the new operation is quite expensive!).
I had a problem at work that came down to iterating over some thousands of objects and checking a single condition. Creating each object and checking the condition took 12 seconds (which was way too much). After switching to a database-based solution, the whole process now takes about 170 ms.
If you use a connection pool, or at least make sure you keep one established connection, the database approach is really fast. Most of the overhead is initial (and I think it doesn't matter whether your application starts up in 1 s or 1.2 s, does it?).
So if you want to save time, just compare the time needed to create 1 million objects (like String or URL) with the time needed to create, say, 1 thousand and reuse them. That is where the time gets lost! And if you parse the strings, that is the next place where you might lose time (depending on the algorithm the parsing is based on). Again, databases can be a good solution (their developers have invested a lot of time in making string operations fast). Just give it a try and compare the results.
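The comparison suggested above can be sketched like this (a rough micro-benchmark, with names of my own choosing; real measurements need warm-up runs and a proper harness, since the JIT can optimise either variant):

```java
public class AllocationCost {
    public static void main(String[] args) {
        final int N = 1_000_000;

        // Variant 1: allocate a fresh builder for every item.
        long t0 = System.nanoTime();
        long len1 = 0;
        for (int i = 0; i < N; i++) {
            StringBuilder fresh = new StringBuilder();
            fresh.append("http://example.com/page/").append(i);
            len1 += fresh.length();
        }
        long freshNs = System.nanoTime() - t0;

        // Variant 2: reuse one builder and just reset it each iteration.
        long t1 = System.nanoTime();
        long len2 = 0;
        StringBuilder reused = new StringBuilder();
        for (int i = 0; i < N; i++) {
            reused.setLength(0);
            reused.append("http://example.com/page/").append(i);
            len2 += reused.length();
        }
        long reusedNs = System.nanoTime() - t1;

        System.out.printf("fresh=%d ms, reused=%d ms%n",
                freshNs / 1_000_000, reusedNs / 1_000_000);
    }
}
```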
- 06-12-2010, 05:30 PM #12
Member
- Join Date
- Jun 2010
- Posts
- 7
- Rep Power
- 0
OK, you really explained everything completely, and your explanation was very useful.
You are right; I will consider a database, because right now with the hashtable I have run out of memory: I receive an OutOfMemoryError: Java heap space.
I hope that by using a database I won't have memory problems.
You are also right that most of the wasted time was due to the connection and the parsing.
Thanks a lot, dear friend.
You really helped me today.
Excuse me for taking your time.