Hyperphotography
Hyperphotography refers to a number of techniques for "photography plus", where we capture more information than could be stored in a conventional "flat" 2D photographic image. This can then be used to produce a variety of virtual model types.
Versions
Stereophotography
A pair of photographs are taken, side-by-side, to mimic the two views seen by a pair of human eyes. Equipment is then used to show a different image to the user's left and right eyes, allowing their brain to apply its natural 3D reconstruction and "see" the view as three-dimensional.
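For illustration, a minimal NumPy sketch that merges an already-aligned left/right pair into a red-cyan anaglyph (only one of several ways of presenting a stereo pair; the function name and channel assignment are illustrative assumptions):

```python
import numpy as np

def anaglyph(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
    """Combine a left/right image pair into a red-cyan anaglyph.

    Both inputs are assumed to be H x W x 3 uint8 arrays of the same size
    and already aligned. The left view supplies the red channel, the right
    view the green and blue channels, so red-cyan glasses route one view
    to each eye.
    """
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]    # red   <- left eye
    out[..., 1] = right_rgb[..., 1]   # green <- right eye
    out[..., 2] = right_rgb[..., 2]   # blue  <- right eye
    return out
```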
Film/video/movies
- A slightly obvious (!) application: using a series of still photographs to show how a scene varies over time.
Photo plus depth map
- A crude depth map can be saved as a greyscale image, where brightness maps to distance from the camera. Depth maps do not store information on hidden surfaces that are obscured or that point away from the camera location. Very rough depth maps or bump maps can be produced by eye that are good enough for creating contoured photographs that will only really be seen from one angle.
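As a rough sketch of how such a depth map might be used, the following assumes an 8-bit greyscale map in which brighter means nearer (the opposite convention also exists) and a purely illustrative linear distance range:

```python
import numpy as np

def depth_map_to_points(photo_rgb, depth_grey, near=0.5, far=5.0):
    """Turn a photo plus 8-bit greyscale depth map into a coloured point cloud.

    Assumes brighter pixels are nearer the camera and a simple linear
    mapping from grey value to distance between `near` and `far`
    (illustrative values, arbitrary units).
    """
    h, w = depth_grey.shape
    z = far - (depth_grey.astype(float) / 255.0) * (far - near)  # distance per pixel
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    points = np.stack([xs, ys, z], axis=-1).reshape(-1, 3)
    colours = photo_rgb.reshape(-1, 3)
    return points, colours
```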
Sub-pixel
- A set of photographs is taken with the camera or object fractionally moved between shots, so that essentially the same view is captured multiple times but with different pixel boundaries (different "pixelation"). When the multiple images are superimposed and aligned for best fit with subpixel accuracy, the partial overlaps between square pixels generate additional subpixel detail. This technique lets us achieve the "impossible" outcome of taking photographs at a higher resolution than that of the camera sensor.
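A minimal "shift and add" sketch in NumPy, assuming the per-frame sub-pixel offsets are already known (finding them by registration is a separate problem, not shown here):

```python
import numpy as np

def shift_and_add(images, offsets, scale=2):
    """Naive sub-pixel 'shift and add' onto a finer grid.

    `images` are equally sized greyscale frames; `offsets` gives each
    frame's estimated (dy, dx) shift in original-pixel units, assumed
    known and smaller than one pixel. Each frame is block-upsampled by
    `scale`, dropped onto the fine grid at its rounded sub-pixel position,
    and the overlaps are averaged, recovering detail finer than one
    sensor pixel.
    """
    h, w = images[0].shape
    acc = np.zeros((h * scale + scale, w * scale + scale))
    hits = np.zeros_like(acc)
    for img, (dy, dx) in zip(images, offsets):
        oy, ox = int(round(dy * scale)), int(round(dx * scale))
        up = np.repeat(np.repeat(img.astype(float), scale, axis=0), scale, axis=1)
        acc[oy:oy + h * scale, ox:ox + w * scale] += up
        hits[oy:oy + h * scale, ox:ox + w * scale] += 1
    return acc / np.maximum(hits, 1)
```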
"Drizzle"
- A variation on subpixel photography that exploits a known relative motion between camera and subject. This technique is used by amateur (and professional!) astronomers: a telescope can be pointed at a fixed section of sky, and the Earth's rotation then causes the image of the starfield to drift slowly across the camera sensor as it continues to take regularly-timed pictures (like "drizzling" runny honey onto a slice of bread). Because the drift is known, individual images can be placed in the composite with extreme subpixel accuracy.
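Continuing the sketch above, the known drift can supply those offsets directly; the drift rate and frame count below are invented purely for illustration:

```python
import numpy as np

# Hypothetical numbers: a starfield drifting 0.23 pixels/frame horizontally
# and 0.05 pixels/frame vertically across 8 regularly-timed exposures.
drift_per_frame = np.array([0.05, 0.23])                   # (dy, dx) in pixels
frame_index = np.arange(8)                                 # regularly-timed frames
offsets = (frame_index[:, None] * drift_per_frame) % 1.0   # sub-pixel part only

# `offsets` can then be fed straight into the shift_and_add() sketch above,
# with the whole-pixel part of the drift removed by ordinary registration.
```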
HDR
- HDR or High Dynamic Range photography produces images that capture detail over a wider range of brightnesses than is possible with a given sensor and a single image (where dark regions might come out as solid black or "bitty", and bright areas might be overexposed to solid white). HDR takes a set of images with different exposures, so that all interesting detail is captured in at least one image, and then superimposes them to create a slightly surreal "hyper-real" image with dense detail everywhere.
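A toy exposure-fusion sketch, one of several ways to combine such a bracketed stack (real HDR pipelines also recover the camera response curve and tone-map the result, both skipped here):

```python
import numpy as np

def fuse_exposures(images):
    """Very simple exposure fusion for a bracketed stack.

    `images` is a list of aligned greyscale frames scaled to 0..1, taken
    at different exposures. Each pixel is weighted by how far it sits
    from the clipped extremes (a Gaussian centred on mid-grey), so each
    region of the result is dominated by whichever exposure captured it
    best.
    """
    stack = np.stack([img.astype(float) for img in images])
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * stack).sum(axis=0)
```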
Retrospective focus
- RF takes a simultaneous cluster of sharp images from fractionally different positions, and then superimposes them. Because objects will appear in the image cluster at slightly different positions depending on their distance, it is impossible to get all the detail in all the images to line up exactly: if the image alignment is optimised for objects at a particular distance, those details will appear sharp in the composite, but nearer and farther objects will appear progressively more out of alignment and fuzzy. Changing the alignment then brings details at different distances in and out of "focus" in the final composite. The process can also be used with software to automatically generate depth maps.
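A shift-and-add sketch of the idea, assuming the relative camera positions are known; the names, units and the wrap-around handling at the image edges are illustrative shortcuts:

```python
import numpy as np

def refocus(images, positions, alpha):
    """Synthetic refocusing of a cluster of views by shift-and-add.

    `images` are greyscale views from slightly different camera positions,
    `positions` the (dy, dx) of each camera relative to a reference view
    (arbitrary units), and `alpha` the shift applied per unit of baseline.
    Shifting each view by alpha * position and averaging aligns (and so
    sharpens) objects at one particular depth; everything nearer or
    farther stays misaligned and blurs. Sweeping `alpha` moves the plane
    of 'focus'. np.roll wraps at the edges, which is fine for a sketch.
    """
    acc = np.zeros_like(images[0], dtype=float)
    for img, (py, px) in zip(images, positions):
        dy, dx = int(round(alpha * py)), int(round(alpha * px))
        acc += np.roll(np.roll(img.astype(float), dy, axis=0), dx, axis=1)
    return acc / len(images)
```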
Stacked focus
- Where it is difficult to get everything in focus in an image due to narrow depth of field (an issue in microphotography), multiple images can be taken with different areas in focus, and the images automatically composited by software that selects the sharpest areas from the different images. "Stacking" can also be used for automatically taking a series of shots with some other parameter varying, so that the best shot can be selected or composited later (HDR is "stacked exposure" photography).
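A crude sketch of the selection step, using a discrete Laplacian as the sharpness measure; production stacking software also smooths the selection map to avoid seams, which this omits:

```python
import numpy as np

def stack_focus(images):
    """Crude focus stacking: keep the locally sharpest pixel from each frame.

    `images` is a list of aligned greyscale frames focused at different
    depths. Sharpness is estimated per pixel with a discrete Laplacian
    (strong response at in-focus edges); each output pixel is copied from
    whichever frame is sharpest there.
    """
    stack = np.stack([img.astype(float) for img in images])
    lap = np.abs(
        4 * stack
        - np.roll(stack, 1, axis=1) - np.roll(stack, -1, axis=1)
        - np.roll(stack, 1, axis=2) - np.roll(stack, -1, axis=2)
    )
    best = lap.argmax(axis=0)                      # sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```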
Retrospective lighting
- The object to be photographed can be "shot" multiple times with a fixed camera and different lighting effects (e.g. as a good default image with uniform lighting, and then as a series of "dark" images each lit by a differently-positioned spotlight, or by one spotlight moved between shots). When the images are composited, the operator can "fade" different spotlights in and out "virtually" by altering the mixture of source images. This approach is useful when studio time is limited but artistic decisions about lighting need to be taken later (for instance, when the basic car image is agreed, but different types of lighting may work better in a series of different poster layouts, which might not be ready in time for the photoshoot).
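Because light adds linearly, the "virtual faders" can be a simple weighted sum of frames, as in this sketch (which assumes the spotlight frames were shot with all other lights off; names and the 8-bit range are illustrative):

```python
import numpy as np

def relight(base, spot_images, spot_gains):
    """Virtual relighting by mixing a stack of differently lit shots.

    `base` is the uniformly lit reference image, `spot_images` the "dark"
    shots each lit by a single spotlight (all from the same fixed camera),
    and `spot_gains` the fader value chosen later for each spotlight.
    Summing the weighted frames approximates what the real lights would
    have done at those levels; values are clipped back to 8-bit range.
    """
    out = base.astype(float)
    for img, gain in zip(spot_images, spot_gains):
        out += gain * img.astype(float)
    return np.clip(out, 0, 255).astype(np.uint8)
```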
Controlled-environment multi-point lighting
- With this method, a small object is covered with a special black hemispherical hood, studded with point light-sources that are fired off in sequence as the camera at the apex of the hood takes images. This gives a stack of images of the object illuminated from every possible direction (subject only to the number of sources and images used). Multi-point lighting is useful for small objects with shallow detail such as coins or medals, where, when handling the real-life object, one would tend to tilt it against the light to be able to see the shallow details. With a photoset and viewer software, a user can "virtually" alter the lighting angle by switching or fading between images. A photostack can also be published as an animated GIF, with the user able to change the lighting by selecting a single frame.
- Multi-point lighting with known lighting positions can also sometimes be used by specialist software to calculate the surface angle at each point from how brightly that point reflects light arriving from different known directions, in order to recreate a 3D model. The advantage of this over conventional photogrammetry is that it allows the photo-stack to be taken in a few minutes with a single fixed camera. This works best with simple "flattish" objects where only one side is to be modelled at a time ... again, mainly coins or medals.
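The calculation alluded to here is essentially photometric stereo; a minimal least-squares sketch, assuming a matte (Lambertian) surface and known unit light-direction vectors, might look like this:

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel surface normals from a multi-point lighting stack.

    `images` are greyscale shots from one fixed camera, each lit from a
    single known direction; `light_dirs` is an (N, 3) array of those unit
    direction vectors. For a matte surface, brightness is proportional to
    the dot product of light direction and surface normal, so a per-pixel
    least-squares solve recovers the normal (and albedo).
    """
    h, w = images[0].shape
    I = np.stack([img.astype(float).ravel() for img in images])   # (N, H*W)
    L = np.asarray(light_dirs, dtype=float)                       # (N, 3)
    G, *_ = np.linalg.lstsq(L, I, rcond=None)                     # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```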
Multi-point lighting rings
- A different form of multipoint lighting uses a lighting ring with a set of LEDs that are fired in sequence, in order to quickly capture a set of images of, say, a human face (which has no concealed or hidden surfaces of interest).
Smart light
- With more advanced multipoint lighting, a video camera is supplemented by a projector that sweeps focused or laser light across the subject from different angles. The horizontal position of the illuminated spot in the video image then gives the distance of the surface from the camera. This method generates a depth map (and perhaps also some side detail, depending on the baseline between projector and camera) that the photographic image can be wrapped onto.
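A bare-bones triangulation sketch of the depth calculation, with an invented camera/projector geometry (camera at the origin looking along Z, projector offset to one side); all names and parameters are illustrative assumptions, not any particular device's calibration:

```python
import numpy as np

def spot_depth(spot_x_px, focal_px, baseline_m, beam_angle_rad):
    """Distance to the lit spot from its horizontal position in the image.

    The camera ray through the spot satisfies x = z * (spot_x_px / focal_px),
    where `spot_x_px` is the spot's offset from the image centre and
    `focal_px` the focal length in pixels. The projector beam, leaving a
    point `baseline_m` to the camera's side at `beam_angle_rad` from
    straight ahead, satisfies x = baseline_m + z * tan(beam_angle_rad).
    Intersecting the two rays gives the depth z.
    """
    tan_cam = spot_x_px / focal_px
    tan_beam = np.tan(beam_angle_rad)
    return baseline_m / (tan_cam - tan_beam)
```

Sweeping the beam while recording video gives one such depth per illuminated pixel, which assembles into the depth map described above.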