Digital photogrammetry is made easy by many software packages (commercial or free) that let you reconstruct the whole surface of archaeological layers or, generally speaking, artefacts, with quite accurate (sometimes measurable) results. The following is a quick overview of some free software: from their requirements (in terms of computer hardware, cameras and procedures) to their outputs (unprocessed and post-processed). Indeed, it is important to decide from the beginning which solution you want to use, since each software has its own guidelines for photo capturing and processing, slightly different from one another.
Here are the main examples:
| Software | Server | User | Main requirements | Registration |
| --- | --- | --- | --- | --- |
| X | X | | Consistent zoom, lens and light. Works better with artifacts | Yes (free) |
| Insight3d | | X | Consistent zoom, lens and light. Works better with architecture | No |
| Bundler + PMVS/CMVS | | X | May also work with photos taken with different cameras at different times | No |
| VisualSFM | | X | Requires CUDA-compatible hardware | No |
| Autodesk PhotoScene (now 123D Catch) | X | X | Better with wide-angle lenses (20, 24 or 28 mm) and with the same camera and zoom ratio for the entire project | Yes (free) |
| MS PhotoSynth | X | X | Better with wide-angle lenses (20, 24 or 28 mm) and with the same camera and zoom ratio for the entire project | Yes (free) |
| 3DTubeMe | X | | EXIF from well-known cameras/smartphones. At least 5 photos. Better without zoom | Yes (free) |
| Areoscan | X | | Bundler based | Yes (free) |
| hypr3d | X | | Bundler based | Yes (free) |
| X | X | | More than 2 MP resolution, but more than 5 MP is useless | Yes (free) |
| CMP SfM Web Service | X | X | Available as a server (cloud-computing) version and a local user version | Yes (online) |
| SFMToolkit | | X | Bundler based | No |
For a basic comparison of some of the above solutions, it may be useful to take a look at the following technical report by Yuan-Fang Wang – VisualSize:
"A Comparison Study of Five 3D Modeling Systems Based on the SfM Principles"
Although every software requires specific photo shots, it is good to keep in mind that all of them work (or produce better results) with photos taken in a sensible order and with the EXIF information preserved in the JPG files. This means that with some software (mainly Arc3D) you can crop or resize a photo to put the subject in the centre of the frame and cut out any surrounding noise, whereas with others this may result in unsuccessful output. Another important consideration: although we may want to take as many photos as possible of the object we want to document in 3D, more photos do NOT always mean a better result.
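Since cropping or resizing may strip the metadata some of these tools depend on, a quick way to verify that a processed JPG still carries its EXIF block is to look for the APP1 segment in the file. The following is a minimal sketch in Python; the function name and the fake byte stream are purely illustrative, not part of any of the tools above:

```python
import struct

def has_exif(jpeg_bytes: bytes) -> bool:
    """Return True if the JPEG byte stream contains an EXIF APP1 segment."""
    if jpeg_bytes[:2] != b"\xff\xd8":      # SOI marker: not a JPEG at all
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:          # lost sync with the marker stream
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                 # SOS: image data starts, no EXIF found
            break
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True                    # APP1 segment holding EXIF metadata
        i += 2 + length                    # skip marker (2 bytes) + segment
    return False

# A minimal hand-built "JPEG" with an empty EXIF APP1 segment, for illustration.
fake = b"\xff\xd8" + b"\xff\xe1" + struct.pack(">H", 8) + b"Exif\x00\x00" + b"\xff\xd9"
print(has_exif(fake))  # True
```

In practice you would pass `open("photo.jpg", "rb").read()` and re-shoot (or re-export with metadata) any photo that comes back `False`.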
A good general guideline is the Rule of 3: every part of the scene you want to reconstruct should be visible in at least 3 photos. In a way, all photogrammetric software works like a laser scanner: whatever is not directly visible in at least 2 to 3 photos will result in a gap of information (which may be interpolated by the software, but far less accurately than the rest of the scene). Almost all the photogrammetric software available nowadays (as C++ or Linux source code, or as Windows binaries) is based on Structure from Motion principles (for a quick overview, see http://en.wikipedia.org/wiki/Structure_from_motion). For this reason, the best reconstructions come from at least a triplet of photos.
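The Rule of 3 can be checked in a rough way if you keep track of which photos cover each part of the scene. A minimal sketch of the idea (the point names and photo labels are hypothetical, not produced by any of the software above):

```python
# Which photos observe each part of the scene we want to reconstruct.
visibility = {
    "corner_A": {"img01", "img02", "img03"},
    "corner_B": {"img02", "img03"},
    "trench_floor": {"img04"},
}

def undercovered(vis, min_views=3):
    """Parts seen by fewer than `min_views` photos: likely gaps in the model."""
    return sorted(part for part, photos in vis.items() if len(photos) < min_views)

print(undercovered(visibility))  # ['corner_B', 'trench_floor']
```

Anything flagged this way is a candidate for an extra photo pass before leaving the site, which is much cheaper than discovering the gap during processing.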
Several interesting projects are focusing on fully exploiting photos taken by unskilled users or with amateur-quality cameras (see http://www.visualsize.com/), or even sparse web photos (http://grail.cs.washington.edu/projects/rome/), and these try to go further than just a point-cloud. Indeed, the most profitable output of such a technology is a full 3D reconstruction of the whole scene. While almost all the aforementioned software produces a point-cloud, a professional-level (but free) solution is available to convert the sparse points into a 3D surface and to process it for several purposes. This solution is MeshLab (Visual Computing Lab of ISTI – CNR): an open-source (http://meshlab.sourceforge.net/), portable and extensible system for the processing and editing of unstructured 3D triangular meshes.
It is an essential tool for editing and finalizing the output of many software or even direct devices outputs.
It can easily load and handle huge amounts of data and complex point-clouds, and it can produce meshes with the Poisson reconstruction filter (to mention just one of its many useful algorithms). The created mesh can then incorporate colour information from the point-cloud and export it (for further improvement) as a traditional texture file, together with the mesh itself.
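The colour transfer step can be thought of as a nearest-neighbour lookup from the coloured point-cloud to the mesh vertices. The following is a toy sketch of that idea only, not MeshLab's actual implementation, and the tiny cloud and mesh data are invented for illustration:

```python
import math

# Hypothetical tiny coloured point cloud: (x, y, z, (r, g, b)).
cloud = [
    (0.0, 0.0, 0.0, (200, 180, 150)),
    (1.0, 0.0, 0.0, (90, 80, 70)),
]

# Vertices of the reconstructed mesh, which have positions but no colour yet.
mesh_vertices = [(0.1, 0.0, 0.0), (0.9, 0.1, 0.0)]

def transfer_colours(vertices, points):
    """Give each mesh vertex the colour of its nearest point-cloud point."""
    colours = []
    for v in vertices:
        nearest = min(points, key=lambda p: math.dist(v, p[:3]))
        colours.append(nearest[3])
    return colours

print(transfer_colours(mesh_vertices, cloud))
```

A real implementation would use a spatial index (k-d tree or grid) instead of a linear scan, since point-clouds from photogrammetry easily reach millions of points.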
A complete manual is unfortunately still missing, but quick help is available next to every command, and more and more video tutorials are appearing on YouTube (e.g. http://www.youtube.com/user/MrPMeshLabTutorials#g/p).
Feel free also to contact the author of this article for further assistance with MeshLab in archaeological contexts.