Do you get an itch you can't scratch? No, not that kind of itch!

You know how it is. You get an itching, tickling sensation, somewhere in the middle of your back, and you can't quite reach just the right place. Perhaps it's between your shoulder-blades, or just below, or to one side. Or, if you think you can reach, it's never entirely satisfactory wherever you scrape or rub. Do you want to know what causes that itch, the one you just can't seem to scratch?

I know the reason why you itch. If you're sure you want to know too, read on.

I'm working as a Research Assistant. You know, one of those underpaid and overworked kids with lank hair and poor complexions, to be found in some numbers in their natural environment - the quieter and darker corners of the science faculty buildings. The faculty itself is part of one of those red-brick universities founded in an act of Victorian philanthropy, which has grown over time almost organically. The Uni has gradually displaced the back-to-back terraces and narrow alleys that surrounded it with newer buildings, which were probably supposed to be soaring white edifices of glass and stone, but seem to have ended up as irregular piles of water-stained grey concrete.

Like Mycroft's, my life runs on rails. During the day, I try to find enough time to make a dent in the seemingly endless task of completing my PhD thesis, between bouts of sleeping and eating from the nearby takeaway kebab shop known affectionately as the 'Armpit'. I spend the minimum possible amount of time in my room in a rented house I share with several other postgrads - which is just as well, since it is cold, squalid and damp.

At night, I'm working on computer models of brain function - a task as large and complex as the Human Genome project, although we're a long way off that kind of successful completion. This is one of those crossover subject areas between AI and Robotics (which has been the Wave of the Future for more decades than I've been alive) and Bio-informatics (sponsorship home of the big pharmaceutical and healthcare companies). Basically, some of us have finally realised that we really don't know enough about creating smart systems - we need to know more about existing intelligences before it makes sense to attempt to build artificial ones.

Of course, brain function mapping has all sorts of potential spin-offs, which is why Big Pharma and the healthcare consortia are interested in what we do. So much of human behaviour is determined by our hard-to-predict reactions to external stimuli, and there's so much we could do with a deterministic model of the machine between our ears - everything from improved anti-depressants (which is a pretty big market these days) to a better contraceptive with no side-effects. Yes, ladies, you might just be able to think yourself not pregnant!

Selling these big ideas to the big companies, and gathering in the resulting big research grants, is of course the responsibility of my university supervisor and his professor, leaving me the menial task of actually making the technology work.

So, I'm steadily fumbling my way towards constructing a highly-abstracted model of total brain function. It has to be a hugely simplified abstraction - even the immense supercomputer in the basement (supplied at an extremely cut price by Big Blue, who really know how to woo the Big Pharma marketplace) is theoretically capable of representing only a tiny fraction of human mental activity.

Really, I'm refining a nearly automated process. I've been developing a suite of programs, including a library of rapidly-reconfigurable heuristics, which is capable of a statistical analysis of a huge number of brain scans. We've a library of recorded scans from NHS hospitals all over the country, all completely anonymous of course, as well as access to the results of stimulus-response experiments from all over the world. With static, structural information available in increasingly detailed form from CAT scans and the like, and dynamic information from the experiments, there's a wealth of data in there which just needs a structure to pull it out.

So, my heuristics take the raw brain function data, map it to a set of conceptual ideas of brain function, and then compile it into an abstracted model in a form that can be executed directly on the thousand-odd processors of the machine in the basement.

In short, I've built a brain capable of being run on a supercomputer. You can't really tell what it's thinking, or even if it is thinking in any real way, but you can tell if the model's responses to stimuli correspond to the measured responses in a real brain. There's just enough complexity in the model to show genuinely emergent behaviour and detectable emotional reactions.

Of course, this takes vast amounts of computer power, both to compile the model itself and to execute it. It takes an hour or so of all those processors crunching away to simulate the effect of five seconds worth of what I can loosely call thinking.

Naturally enough, most of this work is done in the middle of the night, when no-one else wants to use the machine. A few uninterrupted sessions in the wee hours, when the building is dark and quiet, are exceptionally productive. The whole process is directed from the networked workstation in the corner of the office I share (if I were ever here during the day) with two other RAs and an indeterminate number (it seems different every week) of research students.

Now, a large part of our brains is associated with processing optical inputs - there are other inputs as well, of course, but we are, fundamentally, visual creatures. So, part of the model itself, one of those conceptual ideas of brain function I mentioned, involves stimulating the optic nerves and modelling the corresponding movements of the eyes themselves. This coordination of eye movement and the inputs from the smallish number of high-resolution optical sensors in the retina is one of the novel features of this model, and it seems to successfully overcome some of the limitations in previous attempts to build a truly effective visual parser.

It's well-known that we use only a small fraction of our brain. Actually, that's not really true - it's more of an urban myth. More sophisticated measurements and less intrusive techniques have allowed recent experiments to detect neuron dynamics in regions of the brain previously thought to be redundant. Still, there do seem to be some areas with no discernible purpose, and part of the research is to find out more about unused brain cells.

Basically, I showed pictures to the model. Some of these came from a library assembled specifically for this purpose, but I found I got some interesting reactions, and in particular some dynamic behaviour in regions thought to be inert, by using images with distinctly emotive contexts. Some images were already available online, whilst others I simply scanned using the multi-function printer-copier down the hall.

All was going well until I started showing the model pictures of naked people. Look, fine, this is the kind of thing you do when you're working all alone in the middle of the night, at a task which requires occasional flashes of insight, a few minutes of concerted effort and several hours of boredom. Besides, I knew about this collection of well-thumbed magazines hidden away in the back of the filing cabinet.

Of course, I expected some emotional reactions - perhaps some analogue of prudery and embarrassment in the higher regions, and some pretty direct sexual responses in more primitive areas. What I actually got was a curious mixture of disgust and loathing, even fear, and a distinctive activation of the 'fight-or-flight' reaction. If it were a real person, it would be feeling some horrific combination of stomach-turning revulsion and stomach-knotting fright.

I just had to investigate, although I've now come to seriously regret that decision. It's fairly easy to find out what part of an image the model is concentrating on, since it is, in essence, moving its eyes as it scans and comprehends the scene in front of it. I'm sure you can guess which body parts I had expected to attract its attention. I was wrong. Over the course of an hour's run, the model's simulated eye movement ignored the external genitalia and various wobbly bits, and focussed almost entirely on a small area between the shoulder-blades.

You know, I believe this might have been the moment I first started itching in that exact place.

I carefully checked for image defects and scanner problems, and found nothing. The model's reaction to images of people with their clothes on was unsurprising, and completely consistent with its response to other, less emotive, contexts. On closer investigation - yes, I really did download all those pictures from the Internet for scientific reasons - I found that the model would display plausibly randy reactions to pictures where the back and shoulders were not visible, but fear-and-loathing when presented with shoulder-blades.
