I've really enjoyed reading some of Matthew Garrett's entries about legacy PC hardware features - specifically the cute hack involving A20 and the keyboard controller. Reading things like that really takes me back.
I remember when the Z80 was cutting edge, and I discovered you could switch to a whole new set of shadow registers via "exx" and "ex af,af'". I remember using undocumented opcodes, and even now I can assemble and disassemble simple Z80 machine code. (Don't get me started on the Speedlock protectors and their fiendish use of the R register; that'll stick in your mind if you ever cracked it. I did.)
I remember being introduced to the PC, seeing subdirectories appear in MS-DOS 2.0, network redirectors appearing in DOS 3.x, and support for big hard drives appearing in MS-DOS 4.0.
I remember the controversy over the AARD code in the Windows 3.1 betas, and the attention it was given in the book "Undocumented DOS", which mostly focussed upon the "list of lists". (At that time I'd have been running GEM on an IBM XT with Hercules (monochrome) graphics.)
I remember learning that a .COM file was a flat image, limited to 64k, which loaded at offset 100h to accommodate the PSP (Program Segment Prefix) for compatibility with CP/M (something I've never seen, never used, and know nothing about. I just know you could use the file control blocks to get simple wildcard handling for your programs "for free").
I wrote simple viral code which exploited this layout, appending new code to the end of the binary and replacing the first three bytes with a jump to the freshly added code. Afterwards you could restore those three bytes to their original contents and jump back to 100h. (I even got the pun in the name of 40Hex magazine.)
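The patch arithmetic is simple enough to sketch. Here's a hypothetical illustration in Python (with a made-up four-byte "program" standing in for a real .COM file) of the offset calculation such an appender performs - not the original code, which was of course assembly:

```python
# Sketch of the classic .COM appender patch (illustrative only).
# A .COM image loads at offset 100h; a near JMP rel16 is opcode E9
# followed by a 16-bit displacement relative to the *next* instruction.

def patch_com(image: bytes, payload: bytes) -> bytes:
    """Return a new image: payload appended, entry jump redirected."""
    saved = image[:3]            # the original first three bytes
    target = len(image)          # file offset where the payload will land
    # The JMP sits at 100h and is 3 bytes long, so the displacement is
    # target_offset - 3 (file offsets and load addresses differ by the
    # constant 100h, which cancels out of the subtraction).
    disp = (target - 3) & 0xFFFF
    jump = bytes([0xE9, disp & 0xFF, disp >> 8])
    # A real payload would restore `saved` at 100h and jump back there;
    # here we simply carry the saved bytes along for illustration.
    return jump + image[3:] + saved + payload

original = bytes([0xB4, 0x4C, 0xCD, 0x21])   # mov ah,4Ch / int 21h
patched = patch_com(original, b"\x90\x90")   # two NOPs as a stand-in payload
print(hex(patched[0]), patched[1] | (patched[2] << 8))
```

The restore-and-jump-back step is what made infected programs still run normally, which is why the three saved bytes have to travel with the payload.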
I recall when COM files started to go out of fashion and you had to deal with relocation and segment fixups. When MZ was an important figure.
I even remember further back, to when switching to protected mode was a hell of triple faults, switching back from protected mode to real mode was "impossible", and the delight to be had with the discovery and exploitation of "unreal mode".
All these things I remember .. and yet .. they're meaningless to most people now, merely old history. Adults today might have reached the age of 20 having always had Windows 95 - never having seen, used, or enjoyed MS-DOS.
How far have we come since then? In some ways not far at all. In other ways the rise of technology has been exponential.
Back then I'd never have dreamed I could update a webpage from a mobile phone whilst trapped on a stalled train.
There are now more mobile phones than people in the UK. In some ways that's frightening - people lose the clear separation between home & work, for example - but in other ways .. liberation.
I have no predictions for the future, but it's amazing how far we've come just in my lifetime; and largely without people noticing.
The industrial revolution? Did that happen with people mostly not noticing? Or was there more conscious awareness? Food for thought.
Tags: history, ms-dos, nostalgia, pc, z80
24 November 2009 21:50
Some technology evolves very quickly, for example the following things are used by probably 80% of readers of this page:
- A web browser.
- A mail client.
- A webserver.
But other technology is stuck in the past and only sees lacklustre updates and innovations (not that innovation is mandatory or automatically a good thing).
Right now I'm looking at my webserver logs, trying to see who is viewing my sites, where they came from, and what their favourite pie is.
In the free world we have the choice of awstats, webalizer, and visitors (possibly more that I'm unaware of). In the commercial world everybody and their dog uses Google Analytics.
On the face of it a web analysis package is trivial:
- Read in some access.log files.
- Process it into some internal database representation.
- Generate static/dynamic HTML output from your intermediate form, optionally including graphs, images, and pie-charts.
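A toy version of those three steps only takes a few lines. This is a rough Python sketch, assuming Apache's standard "combined" log format; the field names and regex are my own, not any real package's schema:

```python
import re
from collections import Counter

# Step 1 & 2: parse "combined" format lines into a tiny in-memory
# aggregate.  The regex names the usual combined-format fields.
LINE = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<when>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+ '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"')

def summarise(lines):
    """Count hits per requested path, skipping unparseable lines."""
    hits = Counter()
    for line in lines:
        m = LINE.match(line)
        if m:
            hits[m.group("path")] += 1
    return hits

# Step 3: render the aggregate as a trivial static HTML table.
def to_html(hits):
    rows = "".join(f"<tr><td>{p}</td><td>{n}</td></tr>"
                   for p, n in hits.most_common())
    return f"<table><tr><th>Path</th><th>Hits</th></tr>{rows}</table>"

sample = [
    '1.2.3.4 - - [24/Nov/2009:21:50:00 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
    '1.2.3.4 - - [24/Nov/2009:21:50:01 +0000] "GET /blog HTTP/1.1" 200 1024 "-" "Mozilla/5.0"',
    '5.6.7.8 - - [24/Nov/2009:21:50:02 +0000] "GET / HTTP/1.1" 304 0 "-" "Opera/9.80"',
]
print(to_html(summarise(sample)))
```

Everything hard about the real problem - referrer grouping, bot filtering, making the output pretty - lives outside this skeleton, which is rather the point.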
So in conclusion: why are my web statistics so dull, boring, and less educational than I desire?
I'd be tempted to experiment, but I suspect this is a problem which has subtle issues I'm overlooking and requires an artistic slant to make pretty.
(ObLink: asql is my semi-solution to logfile analysis.)
Tags: apache, software that should exist, webstats
27 November 2009 21:50
A couple of days ago I was lamenting the state of webstats, although I was a little vague as to my purpose. Specifically I was wanting to find out about the screen resolutions and user-agents viewing a couple of sites.
Of course there are drawbacks:
- You cannot capture everything; the HTTP status code, for example, isn't available.
Still, I put together a small piece of client-side code which:
- Finds the screen resolution.
- Finds the HTTP referer.
- Finds the current page's title.
- Then submits that data to a server-side collection script, via a 1x1-pixel IMG request.
The script that receives the data writes it out to a small per-domain SQLite database, which I can then use to generate prettiness. However I suck at being pretty, in most ways, so I've only got something functional.
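The receiving side can be sketched in a few lines of Python - the table layout and field names here are my invention, not what the real script uses:

```python
import sqlite3
from urllib.parse import parse_qs

# Hypothetical sketch of the collection script behind the 1x1 pixel:
# pull the beacon's fields out of the IMG request's query string and
# append them to a per-domain SQLite database.

def record_hit(db, query_string):
    """Append one beacon hit to an open SQLite connection."""
    fields = parse_qs(query_string)
    db.execute("""CREATE TABLE IF NOT EXISTS hits
                  (resolution TEXT, referer TEXT, title TEXT)""")
    db.execute("INSERT INTO hits VALUES (?, ?, ?)",
               tuple(fields.get(k, [""])[0]
                     for k in ("resolution", "referer", "title")))

db = sqlite3.connect(":memory:")   # a per-domain file in real use
record_hit(db, "resolution=1280x1024&referer=http%3A%2F%2Fexample.com%2F&title=Home")
print(db.execute("SELECT * FROM hits").fetchall())
```

One database per domain keeps each site's stats self-contained, at the cost of having to open the right file per request.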
All of this is dynamic, and most of the data is anchored to "today", as that's proof of concept enough. Were Piwik not written in vile PHP I'd use that - I don't see anything similar out there written in Perl..
The big decision is now "Keep it dynamic" vs. "Output static pages". (vs. call off the experiment now I know that I'm safe to assume "big resolutions").
(Naming software is hard. Recent stuff I've done has had an skx prefix, primarily for google-juice. Randomly I notice that if you search for my personal site on Google's UK engine I come top. Cool.)
ObSubject: The Bourne Identity