
 

Entries tagged api

Detecting fraudulent signups?

21 November 2016 21:50

I run a couple of different sites that allow users to sign up and use various services. In each of these sites I have some minimal rules in place to detect bad signups, but these are a little ad hoc, because the nature of "badness" varies on a per-site basis.

I've worked in a couple of places where there are in-house tests for bad signups, and these usually boil down to some naive, overly-broad rules (a couple of which are sketched in code below):

  • Does the phone number's (international) prefix match the country of the user?
  • Does the postal address supplied even exist?

Some places penalise users based upon location too:

  • Does the IP address the user submitted from come from TOR?
  • Does the geo-IP country match the user's stated location?
  • Is the email address provided by a "free" provider?
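
To make a couple of those concrete, here is a minimal sketch in Python. The prefix map and the provider list are tiny, illustrative samples, not anything I actually run:

    # Map of international dialling prefixes to ISO country codes (sample only).
    PHONE_PREFIXES = {
        "+358": "FI",
        "+44": "GB",
        "+49": "DE",
    }

    # A handful of "free" email providers (sample only).
    FREE_PROVIDERS = {"gmail.com", "hotmail.com", "yahoo.com", "mail.ru"}

    def phone_matches_country(phone: str, country: str) -> bool:
        """Does the phone number's international prefix match the stated country?"""
        for prefix, code in PHONE_PREFIXES.items():
            if phone.startswith(prefix):
                return code == country
        return False  # Unknown prefix: treat as a mismatch.

    def is_free_email(email: str) -> bool:
        """Is the email address provided by a known "free" provider?"""
        domain = email.rsplit("@", 1)[-1].lower()
        return domain in FREE_PROVIDERS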

At the moment I've got a simple HTTP-server which receives a JSON POST of a new user's details, and returns "200 OK" or "403 Forbidden" based on some very, very simple criteria. This is modelled on the blog-comment spam-detection service I use - something that is itself becoming less useful over time. (Perhaps time to kill that? A decision for another day.)
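
The server itself is nothing more exciting than the following kind of thing. This is a hypothetical sketch using Python's standard library, not the real code; the field name and the single rule are purely illustrative:

    #!/usr/bin/env python3
    # Hypothetical sketch of the signup-checking service: accept a JSON POST
    # of the new user's details, reply "200 OK" or "403 Forbidden".
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def looks_bad(user: dict) -> bool:
        """Apply the (very simple) rules; True means reject the signup."""
        email = user.get("email", "")
        # Illustrative rule: reject one throwaway-mail domain outright.
        return email.lower().endswith("@mailinator.com")

    class SignupHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            try:
                user = json.loads(self.rfile.read(length))
            except ValueError:
                self.send_response(400)  # Not valid JSON at all.
                self.end_headers()
                return
            self.send_response(403 if looks_bad(user) else 200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), SignupHandler).serve_forever()

Each site just POSTs the new user's details at it, and acts upon the status-code it gets back.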

Unfortunately this whole approach is very reactive, as it takes human eyeballs to detect new classes of problems. Code can't guess in advance that it should block usernames which could collide with official ones - "admin", "help", or "support", for example.
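
Once a human has noticed that class of problem the rule itself is trivial to add; something like this sketch (the list is illustrative):

    # Usernames that could collide with official accounts (sample list).
    RESERVED = {"admin", "administrator", "help", "root", "support"}

    def username_allowed(username: str) -> bool:
        return username.lower() not in RESERVED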

I'm certain that these systems have been written a thousand times, as I've seen at least five such systems, and they're all very similar. The biggest flaw in all these systems is that they try to classify users in advance of them doing anything. We're trying to say "Block users who will use stolen credit cards", or "Block users who'll submit spam", by correlating that behaviour with other things. In an ideal world you'd judge users only by the actions they take, not how they signed up. And yet .. it is better than nothing.

For the moment I'm continuing to make the best of things; at least by centralising the rules I cut down on duplicate code. I'll pretend I'm being cool, modern, and sexy, and call this a micro-service! (Ignore the lack of containers for the moment!)

| No comments

 

Removing my last server?

5 February 2022 09:00

In the past I used to run a number of virtual machines, or dedicated hosts. Currently I've cut things down to only a single machine, which I'm planning to remove.

Email

Email used to be hosted via dovecot, and then read with mutt-ng on the host itself. Later I moved to reading mail with my own console-based email client.

Eventually I succumbed, and now I pay for Google's Workspace product.

Git Repositories

I used to use gitbucket for hosting a bunch of (mostly private) git repositories. A bad shutdown/reboot of my host trashed the internal database so that was broken.

I replaced the use of gitbucket, which was very pretty, with gitolite to perform access-control, and to avoid the need for a binary database.

I merged a bunch of repositories, removed the secret things where possible, and finally threw them on a second github account, with GPG-encryption added where appropriate.

Static Hosts

Static websites I used to host upon my own machine are now hosted via netlify.

There aren't many of them, and they are rarely updated; I guess I care less.

Dynamic Hosts

That leaves only dynamic hosts. I used to have a couple of these, most notably debian-administration.org, but that was archived, and the final commercial thing I did was retired in January.

I now have only one dynamic site up and running, https://api.steve.fi/, which provides two dynamic endpoints:

  • One to return data about trams coming to the stop near my house.
  • One to return the current temperature.

Both of these are used by my tram-display device. Running these two services locally, in Docker, would probably be fine.
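
For illustration, the consuming side is about this simple, assuming the endpoints return JSON; the paths here are invented, only the hostname is real:

    #!/usr/bin/env python3
    # Hypothetical sketch of what the tram-display does with the two endpoints.
    # The paths are invented for illustration.
    import json
    import urllib.request

    BASE = "https://api.steve.fi"

    def fetch_json(path: str):
        """GET a JSON document from the API and decode it."""
        with urllib.request.urlopen(BASE + path, timeout=10) as response:
            return json.loads(response.read())

    if __name__ == "__main__":
        print(fetch_json("/tram/"))         # Upcoming trams at the nearby stop.
        print(fetch_json("/temperature/"))  # The current temperature.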

However there is a third "secret" API - blog-comment submission.

When a comment is received upon this blog it is written to the local filesystem, and an email is sent to me. The next time my blog is built, rsync is used to fetch the remote comments and add them to the blog. (Spam is deleted first, of course.)

Locally the comments are added into the git-repository this blog is built from, and the remote files are deleted now and again.

Maybe I should just stop writing the blog-comment to disk, and include all the meta-data in the email instead? I don't wanna go connecting to Gmail via IMAP, but I could probably copy and paste from the email to my local blog-repository.

I can stop hosting the tram-APIs publicly, but the blog-comment part is harder. I guess I just need to receive an incoming FORM-submission, and send an email.
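
Something like the following hypothetical sketch would do, using Python's standard library; the field names and addresses are invented, and a real version would need spam-filtering and rate-limiting too:

    #!/usr/bin/env python3
    # Hypothetical sketch: accept a blog-comment form submission and send it
    # onwards as an email, rather than writing it to disk.  Field names and
    # addresses are invented for illustration.
    import smtplib
    from email.message import EmailMessage
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs

    class CommentHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            form = parse_qs(self.rfile.read(length).decode("utf-8"))

            msg = EmailMessage()
            msg["Subject"] = "Blog comment: " + form.get("title", ["?"])[0]
            msg["From"] = "comments@example.com"
            msg["To"] = "me@example.com"
            # Include all the meta-data in the body; nothing is written to disk.
            msg.set_content("\n".join(f"{key}: {values[0]}" for key, values in form.items()))

            with smtplib.SMTP("localhost") as smtp:
                smtp.send_message(msg)

            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8000), CommentHandler).serve_forever()

The open question is where something like that should run: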

  • Maybe I host the existing container on fly.io, for free?
  • Maybe I write an AWS lambda function to do the necessary thing?

Or maybe I drop blog-comments and sidestep the problem entirely? After all I wrote five posts in the whole of last year ..

| 2 comments