Unless you've been living under a rock, or in a tent (which would make me slightly jealous), you'll have heard about the recent Heartbleed attack many times by now.
The upshot of that attack is that a lot of noise was made about hardening things, and there is now a new fork of OpenSSL being developed. Many people have commented about "hardening Debian" in particular, as well as offering random musings on hardening software. One or two brave souls have even made noises about auditing code.
Once upon a time I tried to set up a project to audit Debian software. You can still see the Debian Security Audit Project webpages if you look hard enough for them.
What did I learn? There are tons of easy security bugs, but finding the hard ones is hard.
(If you get bored some time, just pick your favourite editor, which will be emacs, and look at how /tmp is abused during the build-process or in random libraries such as tramp [tramp-uudecode].)
These days I still poke at source code, and I still report bugs, but my enthusiasm has waned considerably. I tend to only commit to auditing a package if it is a new one I'm installing in production, which limits my efforts considerably, but makes me feel like I'm not taking steps into the dark. It looks like I reported only three security issues this year, and before that you have to go back to 2011 to find something I bothered to document.
What would I do if I had copious free time? I wouldn't audit code. Instead I'd write test-cases for code.
Many large projects have rudimentary test-cases at best, and zero coverage at worst. I appreciate that writing test-cases is hard, because a lot of the time it is hard to test things "for real". For example I once wrote a filesystem using FUSE, and it has some built-in unit-tests; I was pretty pleased that you could launch the filesystem with a --test argument and it would invoke the unit-tests on itself. No separate steps or source code required: if it was installed you could use it, and you could test it in-situ. Beyond that I also put together a simple filesystem-stress script, which read/wrote/found random files, computed MD5 hashes of contents, etc. I've since seen similar random-filesystem-stress-test projects, and if they had existed then I'd have used them. Testing filesystems is hard.
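To give a flavour of what such a stress script might look like, here is a minimal sketch in Python: it writes random files, records their MD5 hashes, then re-reads them and verifies the contents survived. All the names here (the stress function, file naming, sizes) are my own illustration, not the actual script described above.

```python
import hashlib
import os
import random
import tempfile


def stress(root, count=20, max_size=4096):
    """Write random files beneath root, then re-read and verify MD5s.

    Returns the number of files whose contents did not match on re-read
    (0 means the filesystem round-tripped everything correctly).
    """
    expected = {}
    for i in range(count):
        data = os.urandom(random.randint(1, max_size))
        path = os.path.join(root, "file-%04d" % i)
        with open(path, "wb") as fh:
            fh.write(data)
        expected[path] = hashlib.md5(data).hexdigest()

    failures = 0
    for path, digest in expected.items():
        with open(path, "rb") as fh:
            if hashlib.md5(fh.read()).hexdigest() != digest:
                failures += 1
    return failures


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as root:
        print(stress(root))  # 0 on a healthy filesystem
```

A real stress-test would also rename, truncate, and delete files concurrently, but even a loop this simple will catch gross corruption in a new filesystem implementation.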
I've written kernel modules that have only a single implicit test case: it compiles. (OK, that's harsh; I'd usually ensure the kernel didn't die when they were inserted, and that a new node appeared in /dev ;)
I've written a mail client, and beyond some trivial test-cases to prove my MIME-handling wasn't horrifically bad, there are zero tests. How do you simulate all the mail that people will get, and the funky things they'll do with it?
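The trivial MIME test-cases mentioned above might look something like this sketch, which uses Python's standard email module: build a multipart message with a binary attachment, serialise it, re-parse it, and check the attachment survives intact. The function names and addresses are hypothetical, purely for illustration.

```python
import email
from email.message import EmailMessage


def build_multipart():
    """Construct a small multipart message with one binary attachment."""
    msg = EmailMessage()
    msg["From"] = "alice@example.com"
    msg["To"] = "bob@example.com"
    msg["Subject"] = "round-trip test"
    msg.set_content("plain body")
    msg.add_attachment(b"\x00\x01binary", maintype="application",
                       subtype="octet-stream", filename="blob.bin")
    return msg


def test_mime_roundtrip():
    """Serialise, re-parse, and verify structure and attachment bytes."""
    raw = build_multipart().as_bytes()
    parsed = email.message_from_bytes(raw)
    assert parsed.is_multipart()
    parts = parsed.get_payload()
    assert parts[1].get_filename() == "blob.bin"
    assert parts[1].get_payload(decode=True) == b"\x00\x01binary"


test_mime_roundtrip()
```

Round-trip tests like this won't prove your client handles the funky mail real users receive, but they do catch the embarrassing cases: mangled base64, lost filenames, broken multipart boundaries.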
But that said, if you're keen, if you're eager, if you want internet-points, I'd suggest writing test-cases/test-harnesses would be more useful than randomly auditing source code.
Still, what would I know? I don't even have a beard...
http://www.boxheap.net/ddaniels/blog
Hi Steve,
Sorry to hear you're less involved with auditing.
Even with the audit project seemingly stalled, there was some progress with http://qa.debian.org/daca/ but that seems to have stalled in 2011 too.
Jenkins is one tool that has been used more for automating the running of test suites, but tests seem to be mostly run by hand.
I like unit tests. It's time-consuming to write all the unit tests, and granularity can be an issue. It's nice when you can include code coverage in the results of tests.
Fuzzing seems to be the easiest way to get results these days, though it can have low code coverage, thus leaving many hidden "hard" bugs.
vsftpd showed a great approach: simply re-writing code for security.
Thanks,
Drew Daniels