Meeting Abstract
When we look at images and search for certain objects in them, such as cats or trees, our brain makes use of the mental images we have of those objects. The better we know the objects and the clearer our mental images, the easier it is to spot them. Mental representations are one of the reasons our visual system is often better at image-processing tasks than even very advanced image analysis tools. Here, we will discuss approaches to teaching such representations to computers in order to facilitate automated 3D image analysis of very large numbers of objects from tomographic (microCT, ET) data, drawing on examples from projects on quantifying subcellular structures, tessellated cartilage, corals, and other structures. One approach uses rather general information, such as “the object is roundish” or “the object is flat”, which is then incorporated into the image analysis process. The second approach describes the object more explicitly, for example by using a geometric shape model derived from several objects of the same kind and represented by the mean shape of these objects together with their possible variations. This approach is particularly suitable when the object’s shape is rather conserved, which is usually true for anatomical structures. A third approach, which we will address only briefly here, is to use deep neural networks to identify shapes. This can give remarkable results but requires large amounts of training data. We will also present how the shape models of the second approach can be applied to study how structures and materials change over the course of evolution. Of particular interest are shape models that are generative in the sense that they can generate new objects rather than only reproduce input data. One use could be the reconstruction of fossils that are only partially preserved.
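
The abstract does not spell out the mathematical form of such a model; as an illustrative sketch, a standard linear statistical shape model (one common reading of “mean shape together with possible variations”, e.g. a point distribution model) can be written as below, where the symbols are assumptions rather than notation taken from the talk:

```latex
% Sketch of a linear statistical shape model (assumed form, not from the talk).
% \bar{\mathbf{s}}: mean of the aligned training shapes;
% \boldsymbol{\phi}_k: leading eigenvectors of their covariance matrix;
% b_k: mode coefficients controlling the variation.
\[
  \mathbf{s} \;=\; \bar{\mathbf{s}} \;+\; \sum_{k=1}^{m} b_k\,\boldsymbol{\phi}_k,
  \qquad b_k \sim \mathcal{N}(0, \lambda_k)
\]
% Sampling the coefficients from the per-mode variances \lambda_k makes the
% model generative: new, plausible shapes can be produced beyond the input data.
```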
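In the same spirit, a minimal, hypothetical Python sketch of building such a model and sampling new shapes from it is given below. It assumes pre-aligned landmark correspondences (e.g. after Procrustes alignment); all function names, parameters, and data are illustrative and not part of any pipeline described in the abstract.

```python
# Illustrative sketch only: a minimal PCA-based statistical shape model,
# assuming each training shape is given as N corresponding, pre-aligned
# 3D landmark points. Names and parameters are hypothetical.
import numpy as np

def build_shape_model(shapes):
    """shapes: array of shape (n_samples, n_points * 3), aligned landmarks."""
    mean_shape = shapes.mean(axis=0)
    centered = shapes - mean_shape
    # Principal modes of variation via SVD of the centered data matrix.
    _, singular_values, modes = np.linalg.svd(centered, full_matrices=False)
    # Eigenvalues of the sample covariance (per-mode variances).
    variances = singular_values**2 / (len(shapes) - 1)
    return mean_shape, modes, variances

def sample_new_shape(mean_shape, modes, variances, n_modes=5, rng=None):
    """Generate a plausible new shape by sampling mode coefficients."""
    rng = np.random.default_rng(rng)
    b = rng.normal(0.0, np.sqrt(variances[:n_modes]))
    return mean_shape + b @ modes[:n_modes]

# Usage with synthetic data: 20 training shapes, 100 3D landmarks each.
training = np.random.rand(20, 300)
mean_s, modes, var = build_shape_model(training)
new_shape = sample_new_shape(mean_s, modes, var).reshape(-1, 3)
```

Fitting the mode coefficients to the observed parts of a specimen, rather than sampling them, is one way such a generative model could be used to complete partially preserved data.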