Question
· Sep 9, 2023

Compacting and Truncating very large datasets

Does anyone have experience compacting and truncating IRIS datasets greater than 10 TB in size? How long did it take, and how large was the dataset? Did you run into any issues?

Thanks.

Diane

Product version: IRIS 2020.1
Discussion (4)

First, my experience is with IRIS.DAT files that were 2 TB in size, and your experience will vary widely, since where the free space sits in the file is what matters.

There are two different compact operations:
Compact globals in a database
Compact free space in a database

If all you want to do is compact free space, things are easier.
Do it in small steps with ^DATABASE.
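For reference, ^DATABASE runs from the %SYS namespace, and the free-space option then walks you through prompts like the ones below (the exact prompt text may differ slightly between versions):

%SYS>do ^DATABASE
(then choose the "Compact free space in a database" option)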

Current Size: 2307950 MB
Total freespace: 8564 MB
Freespace at end of file: 18 MB

Target freespace at end of file, in MB (18-8564):200

One great feature is that you can cancel if you need to:
Press 'Q' to return, 'P' to pause, 'C' to cancel:
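If you would rather script the small steps, SYS.Database has a programmatic equivalent. This is a minimal sketch, assuming FileCompact() takes (directory, target free space at end in MB); the directory is hypothetical, so check the class reference for your version:

 ; run in the %SYS namespace
 set dir="/iris/mgr/mydb/"                          ; hypothetical database directory
 set sc=##class(SYS.Database).FileCompact(dir,200)  ; ask for ~200 MB of free space at the end
 if 'sc write $system.Status.GetErrorText(sc)       ; report any error

You can loop that with a small target, just like stepping through the interactive prompt.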

Truncation can be done the same way, in steps.
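Programmatically, truncation maps to ReturnUnusedSpace() in SYS.Database. Again a sketch, not gospel: I believe the second argument is the target file size in MB (empty to shrink as much as possible), but verify the signature in the class reference before using it:

 ; run in the %SYS namespace
 set sc=##class(SYS.Database).ReturnUnusedSpace("/iris/mgr/mydb/","",.newsize)  ; hypothetical directory
 if 'sc write $system.Status.GetErrorText(sc)
 write "new size: ",newsize," MB"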

If you have one, I suggest you do this on your DR instance first, fail over to it, and then do it on what was production.

If you are using 8 KB blocks for IRIS, you should start thinking about the 32 TB limit for 8 KB databases.
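For context, that limit is presumably 32-bit block addressing: 2^32 blocks x 8 KB per block = 32 TB per IRIS.DAT.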

You need to run an integrity check after you are done.
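You can run it interactively with do ^Integrity from the %SYS namespace, or script it with the CheckList^Integrity entry point. A sketch, assuming CheckList^Integrity accepts a $LISTBUILD of database directories as its second argument (the directory is hypothetical):

 ; run in the %SYS namespace
 set dblist=$listbuild("/iris/mgr/mydb/")    ; hypothetical directory, check just this database
 set sc=$$CheckList^Integrity(,dblist)       ; run the integrity check
 if 'sc write $system.Status.GetErrorText(sc)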

If you want to run compact globals, that is much more complex.

If the underlying storage does deduplication, you should make sure you have extra space at that layer, since compacting rewrites a lot of blocks and can temporarily reduce your deduplication savings.

The only way I can think of to get a time estimate is to run the compact against a SAN snapshot on similar hardware.
The free blocks could be anywhere in the file, and that plays a big role.

Here are some numbers:
Current Size: 2332950 MB
Freespace at end of file: 300 MB
Total freespace: 8957 MB
Blocks Scanned: 10850564
It took 1 hour 44 minutes.
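For scale, 10,850,564 blocks in 104 minutes works out to roughly 104,000 blocks per minute, or about 0.8 GB/min if these were 8 KB blocks (my assumption).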

The truncation was almost instant.