I have a very large data set acquired from a USB logic analyzer. A typical set is 500 packets (it can be more than 1000) of 512000 bytes each, where each bit belongs to one of 8 channels. That translates to 2048000000 samples, spread across 8 rows. My problem is how to render the data efficiently and quickly. Right now I simply calculate how many samples to skip over for the image width. This works well, but memory usage is very high when navigating (panning, zooming). My question is whether there is some sort of library built for navigating such large data sets.
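For reference, this is roughly what the skip-based rendering does now (the method and parameter names here are illustrative, not from my actual code; it assumes one byte per sample with bit k being channel k):

```java
// Decimate one channel down to imageWidth pixel columns.
// Assumed layout: one byte per sample, bit k = channel k.
static boolean[] decimate(byte[] samples, int channel, int imageWidth) {
    boolean[] column = new boolean[imageWidth];
    double step = (double) samples.length / imageWidth;
    int mask = 1 << channel;
    for (int x = 0; x < imageWidth; x++) {
        // Scan the whole bucket (not just one sample) so short pulses still show.
        int start = (int) (x * step);
        int end = Math.min((int) ((x + 1) * step) + 1, samples.length);
        boolean high = false;
        for (int i = start; i < end; i++) {
            if ((samples[i] & mask) != 0) { high = true; break; }
        }
        column[x] = high;
    }
    return column;
}
```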
Also, how would I go about zooming toward the cursor? Right now it basically stretches the image (by dividing the sample step by 2 for a zoom of 2), so the center drifts to the right. Is there a better way to zoom (e.g. render a large image and "dive" into it)?
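One approach I've seen described elsewhere is to keep the sample under the cursor fixed while changing the zoom level, rather than stretching from the left edge. A minimal sketch of the idea (the `View` class and its fields are hypothetical):

```java
// Zoom in around the cursor: the sample under the mouse stays at the same
// pixel before and after the zoom, so the view doesn't drift sideways.
class View {
    long startSample;        // sample shown at pixel x = 0
    double samplesPerPixel;  // current zoom level

    void zoomAt(int cursorX, double factor) {
        // Which sample is currently under the cursor?
        double anchor = startSample + cursorX * samplesPerPixel;
        samplesPerPixel /= factor;   // factor > 1 means zoom in
        // Shift the window so the anchor maps back to cursorX.
        startSample = Math.round(anchor - cursorX * samplesPerPixel);
    }
}
```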
As for storage, what is the best way to persist the data? Manually writing a file and then re-importing it, or is serializing an object fast (and size-efficient) enough? I am also going to look into compression. Right now I compress the data as a series of longs, each telling how many samples pass before the state changes. Depending on the data acquired, however, this can end up taking more room than the raw samples.
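The run-length scheme I described looks roughly like this (sketch only; the helper name is made up). The worst case is a signal that toggles every sample, where one long is stored per sample and the result is much larger than the raw data:

```java
import java.util.ArrayList;
import java.util.List;

// Run-length encode one channel: store the length of each run of identical
// samples. The initial state would be stored separately alongside the runs.
static List<Long> runLengthEncode(boolean[] channel) {
    List<Long> runs = new ArrayList<>();
    if (channel.length == 0) return runs;
    long run = 1;
    for (int i = 1; i < channel.length; i++) {
        if (channel[i] == channel[i - 1]) {
            run++;             // same state: extend the current run
        } else {
            runs.add(run);     // state flipped: emit run length, start over
            run = 1;
        }
    }
    runs.add(run);             // emit the final run
    return runs;
}
```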
This is what the rendering looks like: http://i.imgur.com/Zt70k.png
To my knowledge no such library exists, but there are so many third-party libraries out there that I'd be surprised if one isn't buried somewhere in the bowels of Google.
Your approach sounds pretty reasonable: calculate what data you need, and read only that data. I would take the same approach for zooming (it helps if the file is open as a RandomAccessFile for all of this). Use whatever user gesture you have come up with for zooming, calculate which region of the file it corresponds to, and load just that portion. If you're having memory problems, limit the amount of buffer space you use by reusing buffers as necessary. You're in the realm of hard problems; I've done tons of this stuff, and tweaking performance out of it always ends up using every trick in the book.
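A minimal sketch of the seek-and-read idea with a reused buffer (class and method names are mine; it assumes the same one-byte-per-sample layout, so the file offset equals the sample index):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Read only the visible window from disk, reusing one buffer across
// repaints so panning/zooming doesn't allocate fresh arrays each time.
class WindowReader {
    private final RandomAccessFile file;
    private byte[] buffer = new byte[0];

    WindowReader(File f) throws IOException {
        file = new RandomAccessFile(f, "r");
    }

    byte[] read(long startSample, int count) throws IOException {
        if (buffer.length < count) {
            buffer = new byte[count];   // grow once, then reuse
        }
        file.seek(startSample);         // 1 byte per sample => offset == index
        file.readFully(buffer, 0, count);
        return buffer;
    }
}
```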