
Entries tagged mysql

It's a lot like life

3 January 2008 21:50

Assume for a moment that you have 148 hosts logging, via syslog-ng, to a central host. That host is recording all log entries into a MySQL database. Assume that each of these machines is producing a total of 4698816 lines per day.

(Crazy random numbers pulled from thin air; obviously).

Now the question: How do you process, read, or pay attention to those logs?

Here is what we've done so far:

syslog-ng

All the syslog-ng client machines are logging to a central machine, which inserts the records into a database.

This database may be queried using the php-syslog-ng script. Unfortunately searching it is relatively slow, and the user interface is appallingly bad: it allows only searches, with no view of the most recent logs, no auto-refreshing via AJAX, and so on.

rss feeds

To remedy the slowness and poor usability of the PHP front-end to the database I wrote a quick hack which produces RSS feeds from queries against that same database, accessed via URIs such as:

  • http://example.com/feeds/DriveReady
  • http://example.com/feeds/host/host1

The first query returns an RSS feed of log entries containing the given term. The second shows all recent entries from the machine host1.
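The hack itself isn't shown here, but the shape of it is roughly this - a sketch only, with the table and column names (logs, host, ts, msg) and the connection details invented:

# Sketch of a "query -> RSS" hack; schema and credentials are made up.
import cgi
import MySQLdb

def feed(title, where, args):
    conn = MySQLdb.connect(user="syslog", passwd="secret", db="syslog")
    cur = conn.cursor()
    cur.execute("SELECT host, ts, msg FROM logs WHERE " + where +
                " ORDER BY ts DESC LIMIT 50", args)
    items = ["<item><title>%s %s</title><description>%s</description></item>"
             % (host, ts, cgi.escape(msg))
             for host, ts, msg in cur.fetchall()]
    return ('<?xml version="1.0"?><rss version="2.0"><channel><title>%s</title>'
            '%s</channel></rss>' % (cgi.escape(title), "".join(items)))

# /feeds/DriveReady  ->  feed("DriveReady", "msg LIKE %s", ("%DriveReady%",))
# /feeds/host/host1  ->  feed("host1", "host = %s", ("host1",))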

That works nicely for a fixed set of patterns, but the problem with this approach, and that of php-syslog-ng in general, is that it will only show you things that you look for - it won't volunteer trends, patterns, or news.

The fundamental problem is a lack of notion in either system of "recent messages worth reading" (on a global or per-machine basis).

To put that into perspective, given a logfile from one host containing, say, 3740 lines, there are only approximately 814 unique lines if you ignore the date + timestamp.

Reducing log entries by that amount (a 78% decrease) is a significant saving, but even so you wouldn't want to read the remaining 22% of our original 4698816 lines of logs, as that is still over a million log entries.

I guess we could trim the results down further via a pipe through logcheck or similar, but I can't help thinking that still isn't going to give us enough interesting things to view.

To reiterate I would like to see:

  • per-machine anomalies.
  • global anomalies.

To that end I've been working on something, but I'm not too sure yet if it will go anywhere... In brief you take the logfiles and tokenize them, then you record the token frequencies as groups within a given host's prior records. Unique pairings == logs you want to see.

(i.e. token frequency analysis on things like "<auth.info> yuling.example.com sshd[28427]: Did not receive identification string from 1.3.3.4".)
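A very rough sketch of that idea, with the tokenisation, the rarity threshold, and the in-memory storage of prior records all invented for illustration:

# Per-host token-frequency sketch; everything here is illustrative.
import re
from collections import defaultdict

TOKEN = re.compile(r"[A-Za-z][A-Za-z0-9._-]+")   # skips timestamps, pids, IPs

def tokens(line):
    return set(TOKEN.findall(line))

def interesting(host, line, seen, threshold=1):
    """True if this line contains tokens rarely seen from this host before."""
    rare = [t for t in tokens(line) if seen[host][t] <= threshold]
    for t in tokens(line):
        seen[host][t] += 1
    return bool(rare)

seen = defaultdict(lambda: defaultdict(int))
# for host, line in log_stream():       # log_stream() is not defined here
#     if interesting(host, line, seen):
#         print host, line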

What do other people do? There must be a huge market for this? Even amongst people who don't have more than 20 machines!

| 17 comments

 

You think we just work at a comic book store for our folks, huh?

19 July 2008 21:50

I'm only a minimal MySQL user, but I've got a problem with a large table full of data and I'm hoping for tips on how to improve it.

Right now I have a table which looks like this:

CREATE TABLE `books` (
  `id` int(11) NOT NULL auto_increment,
  `owner` int(11) NOT NULL,
  `title` varchar(200) NOT NULL,
  ....
  PRIMARY KEY  (`id`),
  KEY( `owner`)
)  ;

This allows me to look up all the BOOKS a USER has - because the user table has an ID and the books table has an owner attribute.

However I've got hundreds of users, and thousands of books. So I'm thinking I want to be able to find the list of books a user has.

Initially I thought I could use a view:

CREATE VIEW view_steve AS SELECT * FROM books WHERE owner=73;

But that suffers from a problem - the view has discontinuous IDs, inherited from the books table, and I'd love to be able to work with them in steps of 1. (Also having to create a view for each user is an overhead I could live without. Perhaps some stored procedure magic is what I need?)

Is there a simple way that I can create a view/subtable which would allow me to return something like:

| id | book_id | owner | title       | ....|
| 0  | 17      | Steve | Pies        | ..  |
| 1  | 32      | Steve | Fly Fishing | ..  |
| 2  | 21      | Steve | Smiles      | ..  |
| 3  | 24      | Steve | Debian      | ..  |

Where the "id" is a consecutive, incrementing number, such that "paging" becomes trivial?

ObQuote: The Lost Boys

Update: without going into details, the requirement for known, static, and ideally consecutive identifiers is related to doing correct paging.
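One way to fake consecutive identifiers on the application side - a sketch only, assuming the `books` table above and the python-mysqldb driver, with made-up connection details:

# Number the rows in the application: the database id can stay sparse,
# the per-page counter is consecutive.  Credentials are invented.
import MySQLdb

def books_page(owner, page, per_page=10):
    conn = MySQLdb.connect(user="books", passwd="secret", db="site")
    cur = conn.cursor()
    cur.execute("SELECT id, title FROM books WHERE owner = %s "
                "ORDER BY id LIMIT %s OFFSET %s",
                (owner, per_page, page * per_page))
    start = page * per_page
    # (start + n) is the consecutive "id"; the real primary key is book_id.
    return [(start + n, book_id, title)
            for n, (book_id, title) in enumerate(cur.fetchall())]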

| 14 comments

 

Hey, Ash, where are we?

17 September 2008 21:50

I'm currently fighting with MySQL. The following takes too long:

mysql> SELECT COUNT(id) FROM q_archive;
+-----------+
| COUNT(id) |
+-----------+
|   2738048 |
+-----------+
1 row in set (17.95 sec)

I would like it to take significantly less time; even with memcached in use the query gets hit too often. I've added an index to the table - but I didn't expect that to help, and I wasn't disappointed.

Ho hum.

Maybe another case where flat-files are best. Sure counting them would take a while, but once I've counted them I can just `cat sum`.

This is probably a case where tweaking MySQL's memory settings would help, but I'm fairly certain that if I start messing with that I'll get into trouble with other parts of my site.
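The other standard workaround would be a one-row counter table maintained alongside the inserts, so the count becomes a single-row read. A minimal sketch, assuming an invented table created as CREATE TABLE q_archive_count (total INT NOT NULL):

# Sketch only: q_archive_count is an invented one-row table holding the total.
import MySQLdb

def bump_count(cur, delta=1):
    """Run in the same transaction as each INSERT/DELETE on q_archive."""
    cur.execute("UPDATE q_archive_count SET total = total + %s", (delta,))

def total_rows(cur):
    """Cheap replacement for SELECT COUNT(id) FROM q_archive."""
    cur.execute("SELECT total FROM q_archive_count")
    return cur.fetchone()[0]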

ObFilm: The Evil Dead

| 16 comments

 

I'm fireproof, you're not

7 May 2009 21:50

I've mostly been avoiding the computer this evening, but I did spend the last hour working on attempt #2 at distributed monitoring.

The more I plot, plan & ponder the more appealing the notion becomes.

Too many ideas to discuss, but in brief:

My previous idea of running tests every few minutes on each node scaled badly when the number of host+service pairs to be tested grew.

This led to the realisation that as long as some node tests each host+service pair you're OK. Every node need not check each host on every run - this was something I knew, and had discussed, but I'd assumed it would be a nice optimisation later rather than something which is almost mandatory.

My previous idea of testing for failures on other nodes after seeing a local failure was similarly flawed. It introduces too many delays:

  • Node 1 starts all tests - notices a failure. Records it
    • Fetches current results from all neighbour nodes.
    • Sees they are OK - the remote server only just crashed. Oops.
  • Node 2 starts all tests - notices a failure. Records it.
    • Fetches current results from all neighbour nodes.
    • Sees they are OK - the remote server only just crashed. Oops.

In short you have a synchronisation problem which, coupled with the delay of making a large number of tests, soon grows. Given a testing period of five minutes, ten testing nodes, and 150 monitored hosts+services, you're looking at delays of 8-15 minutes, on average. (It largely depends on where in the cycle the failing host is, and how many nodes must see a failure prior to alerting.)

So round two has each node picking tests at "random" (making sure no host+service pair goes untested for more than five minutes), and at the point a failure is detected the neighbour nodes are immediately instructed to test and report their results (via XML::RPC).
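The real code isn't shown here, but the flow is roughly the following - a Python-flavoured sketch in which the node URLs, the remote test_now method, and the test()/alert() callbacks are all invented:

# Round-two flow: random stale pair, local test, neighbours confirm on failure.
import random
import time
import xmlrpclib

RECHECK = 5 * 60                      # never re-test a pair within five minutes
NEIGHBOURS = ["http://node2:8000/", "http://node3:8000/"]
last_tested = {}                      # (host, service) -> unix timestamp

def pick_pair(pairs):
    now = time.time()
    stale = [p for p in pairs if now - last_tested.get(p, 0) > RECHECK]
    return random.choice(stale) if stale else None

def run_once(pairs, test, alert):
    pair = pick_pair(pairs)
    if pair is None:
        return
    last_tested[pair] = time.time()
    if test(*pair):
        return                        # fine locally; nothing more to do
    # Local failure: ask the neighbours to re-test right now instead of
    # waiting for their own cycles to come around.
    agree = 0
    for url in NEIGHBOURS:
        try:
            if not xmlrpclib.ServerProxy(url).test_now(pair[0], pair[1]):
                agree += 1
        except Exception:
            pass                      # a dead neighbour shouldn't block alerting
    if agree:                         # at least one other node sees the failure
        alert(pair, agree)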

The new code is simpler, more reliable, and scales better. Plus it doesn't need Apache/CGI.

Anyway bored now. Hellish day. MySQL blows goats.

ObFilm: Hellboy

| 2 comments

 

Dwayne, I think you might be colorblind.

13 April 2010 21:50

It is unfortunate that most server-packages don't separate out their init scripts into separate packages:

  • foo
    • Contains the server binary, associated config files, and libraries.
  • foo-run or foo-server
    • Contains the init script(s).

Right now it's a real pain to have to modify things like /etc/init.d/ssh to launch two daemons, running on two different ports, with two different configuration files.

Running multiple copies of SMTP daemons, databases, and similar things is basically more complex than it has to be, because our packages aren't set up for it.

If you maintain a daemon please do consider this; failing that, honoring a flag such as "DISABLED=true" in /etc/default/foo would allow people to use their own /etc/init.d/foo.local initscript. (That's not perfect, but it is a step in the right direction.)

ObFilm: Little Miss Sunshine.

| 12 comments

 

Images transitioned, and mysql solicitations.

16 May 2011 21:50

Images Moved

So I've retired my old picture hosting sub-domain, and moved all the files which were hosted by the dynamic system into a large web-root.

This means no more uploads are possible, but each existing link continues to work.

Happily the system generated "random" links, and it was just a matter of moving each uploaded file into that static location, then removing the CGI application.

The code for the new site has now been made public, although I suspect there will need to be some email-pong if anybody wishes to use it. Comments welcome.

MySQL replacements?

Let's pretend I work for a company which has dealings with many MySQL users.

Let's pretend that, even though it is true, so that I don't have to get into specifics.

Let us pretend that we have many, many hundreds of users who are very happy with MySQL, but that we have a few users who have "issues". Those might be:

  • mysqld segfaulting every few months, with no real idea why.
    • Transactions are involved. So are stored procedures.
    • MySQL paid support might have been perfect, or it might have led to "yup, it's a bug. good luck rebuilding with this patch. let us know how it turns out kthxbai."
    • Alternatively it might not have been reproducible.
  • Master-Master and Master-Slave setups being "unreliable" such that data inconsistencies arise despite MySQL regarding them as being in sync.
    • Good luck resolving that when you have two almost-identical "mysqldump" outputs which are 6GB each and which cause "diff" to exit with "out of memory" even on a 64GB host. (A streaming comparison helps a little; see the sketch after this list.)
    • Is it possible to view differences in table-data, via the binary records? That'd be a fun project .. for a masochist.
  • Poor usage of resources.
  • Heavy concurrency caused by poorly developed applications in a load-balanced environment, leading to stalling queries. (WordPress + poor WordPress plugins, I'm looking at you; you're next on my killfile.)
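Back to the oversized-mysqldump problem for a moment: streaming both files in parallel at least avoids diff's memory blow-up, though it only reports line-level differences. A sketch, with nothing assumed beyond the two file paths on the command line:

# Walk two huge dump files in parallel and report differing lines,
# without ever holding either file in memory.
import itertools
import sys

def stream_diff(path_a, path_b):
    with open(path_a) as a, open(path_b) as b:
        for n, (la, lb) in enumerate(itertools.izip_longest(a, b), 1):
            if la != lb:
                yield n, la, lb

if __name__ == "__main__":
    for n, la, lb in stream_diff(sys.argv[1], sys.argv[2]):
        print "line %d differs:" % n
        print "  <", (la or "").rstrip()
        print "  >", (lb or "").rstrip()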

To compound this problem some of these installations may or may not be running Etch. Let us pretend they are not, just to simplify things. (They mostly aren't these days, but I'm sure I could think of one or two if I tried.)

So, in this hypothetical situation what would you recommend?

I know there are new forks aplenty of MySQL. Drizzle et al. I suspect most of the forks will be short-lived - lots of this stuff is hard and non-sexy. I suspect the long-lived forks are probably concentrating on edge-cases we've not hit (yet), or on sexy exciting things like new storage engines and going nosql like all the cool kids.

Realistically going down the postgresql road is liable to lead to wholly different sets of problems, and a significant re-engineering of several sites, applications and tools with no proof of stability.

Without wanting to jump ship entirely, what, if any, are our options?

PS. MySQL I still mostly love you, but my two most recent applications were written to use redis instead. Just a coincidence... I swear. No, put down that axe. Please can't we just talk about it?

ObQuote: "I've calculated your chance of survival, but I don't think you'll like it." - Hitchhikers Film.

| 2 comments

 

Goodbye mysql ..

22 November 2011 21:50

Yesterday evening I updated my server to remove MySQL:

steve:~# dpkg --purge mysql-client-5.1 \
                      mysql-common     \
                      mysql-server-5.1 \
                      mysql-server-core-5.1 \
                      python-mysqldb        \
                      libdbd-mysql-perl     \
                      libdatetime-format-mysql-perl

Until last month I had two databases in use, one each for a pair of web-applications. As of now one is using redis - which I'm already using for my image hosting - and the other application is using SQLite.

Until recently I had a high opinion of SQLite; although that has now been downgraded a little, it is still a thoroughly excellent piece of software. I was just surprised at the little things it was missing, to the extent I had to rewrite my application's SQL.

Still, one less service is a good thing, and the migration wasn't so painful..

In more productive news I recently acquired a nice external flash - the Yongnuo YN-460 II is (very) cheap and cheerful, and it can be fired remotely with my triggers, so I've had a lot of fun opportunistically taking pictures and experimenting with lighting.

Most of the results are NSFW, but there are some other examples lurking around including the first time I managed to successfully capture a falling water-drop. (Not the best picture, not the most explicit effect, but fun regardless. I both can and will do better next time!)

Somebody recently asked me to write about "camera stuff under linux" and happily I declined.

Why decline? Because there are so many good tools, applications, and utilities. (I use local tools for organisation and duplicate detection, rawtherapee for RAW conversion and GIMP for touchups). Having many available options is fantastic though, and something hard to appreciate for "newcomers" to Linux.

(Yeah I waited 90 seconds - if I remembered to add -nojava - for Netscape Navigator to start, under X10, with 8Mb of RAM. Happier days are here. Sure DRM is bad, secure boot .. an open question, but damn we have it good compared to almost any previous point in time!)

ObQuote: "Yeah, obviously it is only a tactical party. I'm only having a party to eventually get sex." - Peep Show

| 2 comments