Recently I came across a situation where moving a lot of data around on a machine with a 3Ware RAID card ultimately killed it.
Testing the hardware in advance for this kind of failure requires testing both:
- The individual drives which make up the RAID array.
- The filesystem which is layered on top of them.
The former can be done with badblocks, etc. The latter requires a simple tool to create a bunch of huge files with "random" contents, then later verify they have the contents you expected.
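For the drive-level check a read-only badblocks scan is often enough; something like this, where /dev/sdX is just a placeholder for the device you want to scan:

badblocks -sv /dev/sdX

(-s shows progress, -v is verbose; neither writes to the disk.)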
With that in mind:
dt --files=1000 --size=100M [--no-delete|--delete]
This does the following (a rough code sketch follows the list):
- Creates, in turn, 1000 files.
- Each created file will be 100MB long.
- Each created file will have random contents written to it, and be closed.
- Once closed, the file will be re-opened and the MD5 sum computed:
  - Both in my code and by calling /usr/bin/md5sum.
- If these sums mismatch, indicating a data error, we abort.
- Otherwise we delete the file and move on.
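To make the loop concrete, here is a rough Python sketch of the same idea. It is not the actual dt code; the file names, chunk size, and constants are my own assumptions:

#!/usr/bin/env python3
# Rough sketch of what dt does - not the real implementation.
import hashlib, os, subprocess, sys

FILES = 1000              # --files
SIZE = 100 * 1024 * 1024  # --size=100M
DELETE = True             # --delete vs --no-delete
CHUNK = 1024 * 1024       # write/read in 1MB pieces

for i in range(FILES):
    name = "dt-test-%d" % i

    # Create the file, fill it with random contents, then close it.
    with open(name, "wb") as fh:
        written = 0
        while written < SIZE:
            block = os.urandom(min(CHUNK, SIZE - written))
            fh.write(block)
            written += len(block)

    # Re-open the file and compute the MD5 sum in our own code ...
    internal = hashlib.md5()
    with open(name, "rb") as fh:
        for block in iter(lambda: fh.read(CHUNK), b""):
            internal.update(block)

    # ... and again by calling /usr/bin/md5sum.
    external = subprocess.check_output(["/usr/bin/md5sum", name])
    external = external.split()[0].decode()

    # A mismatch indicates a data error: abort. Otherwise delete and move on.
    if internal.hexdigest() != external:
        sys.exit("MD5 mismatch on %s - data error" % name)
    if DELETE:
        os.remove(name)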
Adding "--no-delete" and "--files=100000" allows you to continue testing until your drive is full and you've tested every part of the filesystem.
Trivial toy, or possibly useful to sanity-check a filesystem? You decide. Or just:
hg clone http://dt.repository.steve.org.uk/
(dt == disk test)
ObQuote: "Stand back boy! This calls for divine intervention! " - "Brain Dead"
Tags: drives, filesystems, misc, tools
At my former job we used to stress test all servers before releasing them to customers. We tested them with "stress" (just apt-get install it), which I think can also generate big files. Just FYI; maybe it helps someone.