In that particular case, we have 16k blocks, due to past issues with caching big string blocks over ECP.
But I think there are a few ways integrity checking can be improved for such cases. I see at least two reasons why we should check integrity periodically:
- To confirm that the database contains no errors that could cause a system failure.
- To confirm that the database has no issues and that our data is fully available.
I once faced an issue where an error at the pointer level caused problems with the Write Daemon, and our system simply died when the application tried to access the data. It took some time to figure out what had happened, even though there were no issues with the database itself, only with ECP. That was on version 2012.2. I would also like to be able to set how deeply the scan goes; say, skip data blocks and scan only pointer blocks. I don't have exact proportions, but I'm sure that in most cases there are far more data blocks than pointer blocks, so such a check would return results much faster.
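To illustrate why a pointer-only scan should be faster, here is a minimal sketch. This is not Caché's actual ^Integrity utility; the block layout, type names, and checks are invented for illustration, assuming a tree where pointer blocks reference child blocks by id.

```python
# Hypothetical model: a database as a dict of blocks, where pointer blocks
# carry child references and data blocks carry payload we never inspect.
POINTER, DATA = "pointer", "data"

def check_pointers_only(blocks):
    """Walk only pointer blocks and verify every child reference resolves.

    Data block contents are skipped entirely, so the scan cost is
    proportional to the (much smaller) number of pointer blocks.
    Returns (pointer blocks scanned, list of dangling references).
    """
    errors = []
    scanned = 0
    for block_id, block in blocks.items():
        if block["type"] != POINTER:
            continue  # skip data blocks
        scanned += 1
        for child in block["children"]:
            if child not in blocks:
                errors.append((block_id, child))  # dangling pointer
    return scanned, errors

# A tiny tree: one healthy pointer block over three data blocks,
# plus a second pointer block with a reference to a missing block.
blocks = {
    1: {"type": POINTER, "children": [2, 3, 4]},
    2: {"type": DATA}, 3: {"type": DATA}, 4: {"type": DATA},
    5: {"type": POINTER, "children": [99]},  # block 99 does not exist
}
print(check_pointers_only(blocks))  # → (2, [(5, 99)])
```

Only two of the five blocks are visited; in a real database, where data blocks dominate, the saving would be far larger.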
I know quite well how the database looks inside, but I haven't yet managed to look at how database backups work, incremental backups being the most interesting. As I understand it, backup works with blocks, so maybe there is a way to do incremental integrity checks as well. That would not catch issues in unchanged blocks caused by hardware faults, but it could confirm that recently changed data is OK.
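The idea above can be sketched as follows. This is a hypothetical scheme, not how Caché backups actually track blocks: I assume a dirty set (like an incremental backup's changed-block bitmap) and a per-block checksum recorded at write time, both invented for illustration.

```python
# Hypothetical sketch of an "incremental" integrity check: like incremental
# backup, only blocks marked dirty since the last check are re-verified.
import zlib

def write_block(store, bid, payload):
    """Store a payload together with a checksum computed at write time."""
    store[bid] = (payload, zlib.crc32(payload))

def incremental_check(store, dirty):
    """Verify checksums only for blocks changed since the last check.

    Unchanged blocks are skipped, so hardware corruption in them goes
    unnoticed (the limitation noted above), but recently written data
    is confirmed OK at a fraction of a full scan's cost.
    Returns the ids of dirty blocks whose checksum no longer matches.
    """
    return [bid for bid in sorted(dirty)
            if zlib.crc32(store[bid][0]) != store[bid][1]]

store = {}
for bid, data in [(1, b"root"), (2, b"alpha"), (3, b"beta")]:
    write_block(store, bid, data)

# Simulate on-disk corruption of block 2 after it was written.
store[2] = (b"alpxa", store[2][1])
print(incremental_check(store, dirty={2, 3}))  # → [2]
```

Block 1 is never re-read, block 3 verifies cleanly, and the corruption in block 2 is caught, which matches the trade-off described above.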