Autumn 1990 sees me back in higher education, having spent more than ten years working in film and television. The recent past has seen me working in a technical capacity on a range of productions for Central TV, the BBC, local authorities and corporate bodies, but there wasn’t enough freelance work to survive. Meanwhile, no more Channel 4 commissions were filtering through to the Nottingham-based group of filmmakers, and C4 itself has changed. The initial excitement over its brief for innovation has gone, innovation being less apparent following the channel’s response to financial constraints and political pressure. C4 was criticised by government ministers for its coverage of the miners’ strike (“too left-wing”), but the pressure to be more balanced was just one factor prompting change. A number of other factors – changes of personnel, of direction, of scheduling – have had their consequences too, and the end result is that C4 is now part of the mainstream. For innovation, one has to look elsewhere.
I’ve just completed a postgraduate course in IT at Loughborough University, having decided that I needed a change and wanted to know more about the state of the art. I’m now working on a research project at the Loughborough University of Technology Computer-Human Interface Research Centre (LUTCHI for short): a picture interpretation system, and a problem put to me by Stephen Scrivener, a practitioner and academic whose work bridges the gap between fine art and computer science. The project falls into the area of cognitive science, incorporating artificial intelligence and cognitive psychology. The picture interpretation system aims to replicate human systems of perception, with the low levels of the system written in the C programming language and the higher levels written in PROLOG. I’m working on the low-level algorithms that are meant to distinguish between a straight line and a curve.
Computer graphics use algorithms that translate a designer’s depiction of a straight line, or a curve, into pixels on a computer screen, but translating in the reverse direction is not as straightforward. That’s to say, if you present a photograph of a circuit diagram to a computer, can it identify what are straight lines, what are curves, and then go on to tell you – using its memory store and deductive reasoning – that the photograph represents a circuit diagram? At the basic level, can a computer recognise a straight line? That’s the question I’m trying to answer. The current results are not very accurate, and finding a solution means studying the C code and identifying any bugs therein.
There’s an air of excitement about LUTCHI, and innovation too. I’m working long days, like everybody else, but the people here have a commitment and a genuine passion for what they’re doing, as well as a sense of being on the verge of great things. Sean Clark shows me how to design fractals, and I sit and watch and become hypnotised by a psychedelic display that’s a welcome distraction from debugging C code. As for my low-level algorithms, I find out that the essential one is derived from Pythagoras’ theorem, and design a version that produces more accurate results. It’s a satisfying achievement, but just as satisfying is the knowledge that technology has moved on. Computers are no longer the size of giant washing machines, and artificial intelligence is something else. Makes you wonder…
[Photographic assistance: Sean Clark.]