My PhD research formed part of the Rendering on Demand (RoD) project, funded by 3CR.
The goal of the project as a whole was to investigate techniques for reducing the computation time of high-fidelity computer graphics.
Many applications require computer graphics that are more than pretty pictures. Archaeologists and architects, for example, need a visualisation that reflects (pardon the pun) the lighting within a scene accurately. This is achieved by physically modelling the transport of light and its interaction with each surface it strikes, a process known as ray tracing. Ray tracing not only produces some stunning images; because the images are physically accurate, luminance readings can be taken at any point in them, and as an added benefit they are authentic High Dynamic Range images. The disadvantage is that they take considerably longer to render than traditional OpenGL/DirectX systems.
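To illustrate what "physically accurate" means in practice, here is a minimal sketch (not code from the project) of shading a single diffuse surface point. The point is that the result is an unclamped radiance value in physical units, which is exactly what allows luminance readings and High Dynamic Range output:

```python
import math

def direct_radiance(normal, to_light, light_intensity, distance, albedo):
    """Radiance leaving a Lambertian surface toward the eye.

    A stand-in for one step of a ray tracer: the light's intensity
    falls off with the inverse square of distance, is weighted by the
    cosine of the incident angle, and is scattered by the Lambertian
    BRDF (albedo / pi). The result is a physical quantity, not a
    display value clamped to 0..255.
    """
    cos_theta = max(0.0, sum(n * l for n, l in zip(normal, to_light)))
    irradiance = light_intensity * cos_theta / (distance ** 2)
    return (albedo / math.pi) * irradiance

# A bright light easily produces values far above 1.0 -- hence HDR.
print(direct_radiance((0.0, 1.0, 0.0), (0.0, 1.0, 0.0), 1000.0, 2.0, 0.5))
# → about 39.79
```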
A large part of the project was dedicated to exploiting flaws in the human visual system. Such flaws allow us to generate an image that is perceptually identical to a fully rendered solution, but in fact contains regions of differing quality. This technique is known as 'Selective Rendering'. As well as the geometric model, the selective renderer takes a 'map' as input: a 2D image specifying the quality at which each pixel should be rendered. All that remains is to discover and validate metrics for calculating these maps. Our group has successfully used Task maps and Saliency maps to this end. The end result of my research is an 'Ego-motion Map'.
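The idea can be sketched in a few lines. This is an illustrative toy, not the project's renderer: the names (trace_pixel, render_selective) are made up, and the quality map simply scales the number of samples traced per pixel.

```python
import random

def trace_pixel(x, y):
    # Stand-in for a full ray-traced evaluation of one sample.
    return random.random()

def render_selective(width, height, quality_map, max_samples=16):
    """quality_map[y][x] in [0, 1]: 1.0 means full quality.

    Pixels with a high map value receive many samples; pixels the
    viewer is unlikely to scrutinise receive as few as one.
    """
    image = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            n = max(1, round(quality_map[y][x] * max_samples))
            image[y][x] = sum(trace_pixel(x, y) for _ in range(n)) / n
    return image
```

A Task map or Saliency map would supply the quality_map values; the renderer itself does not care where they came from.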
The inset image shows a frame from the mineshaft animation I have used for many of my experiments. My ego-motion map determines the proportion of the screen that should be rendered in high quality according to the amount of motion perceived by the vestibular system.
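As a rough sketch of the principle (the thresholds below are illustrative, not measured values from my experiments): the faster the perceived camera motion, the smaller the fraction of the frame that needs full-quality rendering.

```python
def high_quality_fraction(angular_velocity, v_min=10.0, v_max=180.0):
    """Map perceived angular velocity (deg/s) to the fraction of the
    screen to render in high quality (1.0 = the whole frame).

    Below v_min the viewer can scrutinise everything, so the whole
    frame is rendered in full quality; above v_max only a small core
    is kept; in between the fraction falls off linearly. The constants
    are hypothetical placeholders.
    """
    if angular_velocity <= v_min:
        return 1.0
    if angular_velocity >= v_max:
        return 0.1  # always keep a small high-quality core
    t = (angular_velocity - v_min) / (v_max - v_min)
    return 1.0 - 0.9 * t
```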
For my Masters Project and at various times during my PhD, I have been involved in laser scanning archaeological sites and objects.
In the first few months of my PhD I went to Delphi to work with Kevin Cain, scanning the Dancers' Column (their paper). Two weeks of scanning and registering millions of data points taught me a great deal about problem solving, diplomacy (with the locals!) and, of course, laser scanning.
A couple of months after this, our research group was asked to do some work for Durham University's Archaeology department.
We scanned the Castlerigg Stone Circle, the Long Meg sarsen stone and the Chapel Stile engravings. The illustration shows the Long Meg scans with natural texture applied, with no texture, and with a mosaic pattern showing a different colour for each individual scan.
I coordinated the Bristol team on this project, which was spread over several months and collected 50 GB of data on Castlerigg alone. The project was unique, to our knowledge, in coordinating accurate GPS alignments with long-range scanning (accurate to a couple of centimetres), which was in turn aligned with our sub-millimetre scanning. I would like to have spent the rest of my PhD developing a system for viewing the massive data set we collected; at the moment we have to settle for looking at 1/100th of the data at a time.