
Computer Graphics

Hauptseminar (Advanced Seminar), Summer Semester 2005

Organization

SS 05, 0/2/0, HS-MI

Lecturer

Prof. Dr. S. Gumhold
sgumhold@mpi-sb.mpg.de
with CC to Mr. Neidhold
bn4@inf.tu-dresden.de

Seminar

Wednesdays, 2nd double period (DS), room GRU 370

Content

The seminar covers current topics in computer graphics published at SIGGRAPH 2004. The articles available for selection can be accessed online. In addition to the article itself, its three to four most important references must be located and read. Each participant gives a 45-minute talk followed by a discussion. A written report of 15-20 pages must be submitted by the end of the semester. Owing to the limited number of available presentation dates, at most 12 of the 16 topics will be assigned. Registration is by e-mail to
 

Materials

Introductory slides

Literature list

Dates

11.5.2005, Guest lecture: Philipp Jenke, MPI Saarbrücken: Validation of a Modern Fluid Simulation Using a 3D Animation Scanner
25.5.2005, Alexander Schmidt-Rechenberg, Volumetric Reconstruction and Interactive Rendering of Trees from Photographs, written report
1.6.2005, Martin Manzer, Cross-Parameterization and Compatible Remeshing of 3D Models, written report
8.6.2005, Aldo Mühlhause, Brook for GPUs: Stream Computing on Graphics Hardware, written report
15.6.2005, Robert Bürger, Painting Detail
22.6.2005, Maik Lathan, Polycube-Maps, written report

Topic List

Note: All headings are links to the articles, which are available online.

1. Making Papercraft Toys From Meshes Using Strip-Based Approximate Unfolding

Jun Mitani, Hiromasa Suzuki (The University of Tokyo)

We propose a new method for producing unfolded papercraft patterns of rounded toy animal figures from triangulated meshes by means of strip-based approximation. Although in principle a triangulated model can be unfolded simply by retaining as much as possible of its connectivity while checking for intersecting triangles in the unfolded plane, creating a pattern with tens of thousands of triangles is unrealistic. Our approach is to approximate the mesh model by a set of continuous triangle strips with no internal vertices. Initially, we subdivide our mesh into parts corresponding to the features of the model. We segment each part into zonal regions, grouping triangles which lie at similar topological distances from the part boundary. We generate triangle strips by simplifying the mesh while retaining the borders of the zonal regions and additional cut-lines. The pattern is then created simply by unfolding the set of strips. The distinguishing feature of our method is that we approximate a mesh model by a set of continuous strips, not by other ruled surfaces such as parts of cones or cylinders. Thus, the approximated unfolded pattern can be generated using only mesh operations and a simple unfolding algorithm. Furthermore, a set of strips can be crafted just by bending the paper (without breaking edges) and can represent smooth features of the original mesh models.

2. Energy-Minimizing Splines in Manifolds

Michael Hofer, Helmut Pottmann (Technische Universität Wien)

Variational interpolation in curved geometries has many applications, so there has always been demand for geometrically meaningful and efficiently computable splines in manifolds. We extend the definition of the familiar cubic spline curves and splines in tension, and we show how to compute these on parametric surfaces, level sets, triangle meshes, and point samples of surfaces. This list is more comprehensive than it looks, because it includes variational motion design for animation, and allows the treatment of obstacles via barrier surfaces. All these instances of the general concept are handled by the same geometric optimization algorithm, which minimizes an energy of curves on surfaces of arbitrary dimension and codimension.
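To make the minimized energies concrete, they can be written as follows (a sketch using the common definitions of cubic splines and splines in tension; the paper's exact notation may differ):

```latex
% Cubic spline energy: among curves c(t) on the surface/manifold that
% interpolate the data points, minimize the squared second derivative
E_2(c) = \int \|\ddot{c}(t)\|^2 \, dt
% Spline in tension: add a first-derivative (stretching) term, weight \lambda > 0
E_\tau(c) = \int \left( \|\ddot{c}(t)\|^2 + \lambda \|\dot{c}(t)\|^2 \right) dt
```

The constraint that c(t) stay on the manifold is what distinguishes this from ordinary spline fitting and motivates the geometric optimization algorithm.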

3. Defining Point-Set Surfaces

Nina Amenta, Yong Kil (University of California, Davis)

The MLS surface [Levin 2003], used for modeling and rendering with point clouds, was originally defined algorithmically as the output of a particular meshless construction. We give a new explicit definition in terms of the critical points of an energy function on lines determined by a vector field. This definition reveals connections to research in computer vision and computational topology. Variants of the MLS surface can be created by varying the vector field and the energy function. As an example, we define a similar surface determined by a cloud of surfels (points equipped with normals), rather than points. We also observe that some procedures described in the literature to take points in space onto the MLS surface fail to do so, and we describe a simple iterative procedure which does.
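For orientation, the energy-on-lines definition can be sketched as follows (an assumption of this summary, following Levin's weighted least-squares energy; notation may differ from the paper):

```latex
% Energy of a point y along a direction a, for a point cloud {p_i},
% with a decaying weight such as a Gaussian \theta(r) = e^{-r^2/h^2}:
e(y, a) = \sum_i \langle a,\; y - p_i \rangle^2 \, \theta(\|y - p_i\|)
% The surface is the set of points y at which e(\,\cdot\,, n(y)) attains a
% local minimum along the line through y in direction n(y), where n is the
% vector field mentioned in the abstract.
```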

4. GrabCut - Interactive Foreground Extraction Using Iterated Graph Cuts

Carsten Rother, Andrew Blake, Vladimir Kolmogorov (Microsoft Research Limited)

The problem of efficient, interactive foreground/background segmentation in still images is of great practical importance in image editing. Classical image segmentation tools use either texture (colour) information, e.g. Magic Wand, or edge (contrast) information, e.g. Intelligent Scissors. Recently, an approach based on optimization by graph-cut has been developed which successfully combines both types of information. In this paper we extend the graph-cut approach in three respects. First, we have developed a more powerful, iterative version of the optimization. Secondly, the power of the iterative algorithm is used to simplify substantially the user interaction needed for a given quality of result. Thirdly, a robust algorithm for “border matting” has been developed to estimate simultaneously the alpha-matte around an object boundary and the colours of foreground pixels. We show that for moderately difficult examples the proposed method outperforms competitive tools.
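The energy that the iteration decreases can be sketched in simplified form (hedged; this omits the contrast term's distance normalization and follows the usual presentation of GrabCut's Gibbs energy):

```latex
% Labels \alpha_n \in \{0,1\} (background/foreground), GMM components k_n,
% GMM parameters \theta, pixel colours z_n:
E(\alpha, k, \theta, z) = \sum_n D(\alpha_n, k_n, \theta, z_n)
  + \gamma \sum_{(m,n) \in C} [\alpha_n \neq \alpha_m] \, e^{-\beta \|z_m - z_n\|^2}
% D is the negative log-likelihood of colour z_n under the GMM selected by
% (\alpha_n, k_n); C is the set of neighbouring pixel pairs. Alternating GMM
% re-estimation with a min-cut over \alpha monotonically decreases E.
```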

5. Spacetime Faces: High-Resolution Capture for Modeling and Animation

Li Zhang, Keith Noah Snavely, Brian Curless, Steve M. Seitz (University of Washington)

We present an end-to-end system that goes from video sequences to high resolution, editable, dynamically controllable face models. The capture system employs synchronized video cameras and structured light projectors to record videos of a moving face from multiple viewpoints. A novel spacetime stereo algorithm is introduced to compute depth maps accurately and overcome over-fitting deficiencies in prior work. A new template fitting and tracking procedure fills in missing data and yields point correspondence across the entire sequence without using markers. We demonstrate a data-driven, interactive method for inverse kinematics that draws on the large set of fitted templates and allows for posing new expressions by dragging surface points directly. Finally, we describe new tools that model the dynamics in the input sequence to enable new animations, created via key-framing or texture-synthesis techniques.

6. Mesh Editing With Poisson-Based Gradient Field Manipulation

Yizhou Yu (University of Illinois at Urbana-Champaign), Kun Zhou (Microsoft Research Asia), Dong Xu, Xiaohan Shi (Microsoft Research Asia and Zhejiang University), Baining Guo, Heung-Yeung Shum (Microsoft Research Asia)

In this paper, we introduce a novel approach to mesh editing with the Poisson equation as the theoretical foundation. The most distinctive feature of this approach is that it modifies the original mesh geometry implicitly through gradient field manipulation. Our approach can produce desirable and pleasing results for both global and local editing operations, such as deformation, object merging, and smoothing. With the help from a few novel interactive tools, these operations can be performed conveniently with a small amount of user interaction. Our technique has three key components, a basic mesh solver based on the Poisson equation, a gradient field manipulation scheme using local transforms, and a generalized boundary condition representation based on local frames. Experimental results indicate that our framework can outperform previous related mesh editing techniques.
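The Poisson foundation can be stated briefly (a sketch in standard notation; the paper works with discrete mesh operators):

```latex
% Given a manipulated gradient field w over the mesh domain \Omega, solve for
% the new coordinate function f in the least-squares sense:
\min_f \int_\Omega \|\nabla f - w\|^2 \, dA
% whose Euler-Lagrange equation is the Poisson equation
\Delta f = \nabla \cdot w
% subject to boundary conditions that pin down the unedited region.
```

Because the geometry is recovered from a modified gradient field rather than edited directly, local transforms of the gradients propagate smoothly over the whole edited region.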

7. Modeling by Example

Thomas Funkhouser, Michael Kazhdan, Philip Shilane (Princeton University), Patrick Min (Universiteit Utrecht), William Kiefer (Princeton University), Ayellet Tal (Technion - Israel Institute of Technology), Szymon Rusinkiewicz, David Dobkin (Princeton University)

In this paper, we investigate a data-driven synthesis approach to constructing 3D geometric surface models. We provide methods with which a user can search a large database of 3D meshes to find parts of interest, cut the desired parts out of the meshes with intelligent scissoring, and composite them together in different ways to form new objects. The main benefit of this approach is that it is both easy to learn and able to produce highly detailed geometric models – the conceptual design for new models comes from the user, while the geometric details come from examples in the database. The focus of the paper is on the main research issues motivated by the proposed approach: (1) interactive segmentation of 3D surfaces, (2) shape-based search to find 3D models with parts matching a query, and (3) composition of parts to form new models. We provide new research contributions on all three topics and incorporate them into a prototype modeling system. Experience with our prototype system indicates that it allows untrained users to create interesting and detailed 3D models.

8. Volumetric Reconstruction and Interactive Rendering of Trees from Photographs

Alex Reche (REVES/INRIA and CSTB), Ignacio Martin (GGG and Universitat de Girona), George Drettakis (REVES/INRIA)

Reconstructing and rendering trees is a challenging problem due to the geometric complexity involved, and the inherent difficulties of capture. In this paper we propose a volumetric approach to capture and render trees with relatively sparse foliage. Photographs of such trees typically have single pixels containing the blended projection of numerous leaves/branches and background. We show how we estimate opacity values on a recursive grid, based on alpha-mattes extracted from a small number of calibrated photographs of a tree. This data structure is then used to render billboards attached to the centers of the grid cells. Each billboard is assigned a set of view-dependent textures corresponding to each input view. These textures are generated by approximating coverage masks based on opacity and depth from the camera. Rendering is performed using a view-dependent texturing algorithm. The resulting volumetric tree structure has low polygon count, permitting interactive rendering of realistic 3D trees. We illustrate the implementation of our system on several different real trees, and show that we can insert the resulting model in virtual scenes.

9. Brook for GPUs: Stream Computing on Graphics Hardware (project page)

Ian Buck, Tim Foley, Daniel Horn, Jeremy Sugerman, Pat Hanrahan (Stanford University)

In this paper, we present Brook for GPUs, a system for general-purpose computation on programmable graphics hardware. Brook extends C to include simple data-parallel constructs, enabling the use of the GPU as a streaming coprocessor. We present a compiler and runtime system that abstracts and virtualizes many aspects of graphics hardware. In addition, we present an analysis of the effectiveness of the GPU as a compute engine compared to the CPU, to determine when the GPU can outperform the CPU for a particular algorithm. We evaluate our system with five applications, the SAXPY and SGEMV BLAS operators, image segmentation, FFT, and ray tracing. For these applications, we demonstrate that our Brook implementations perform comparably to hand-written GPU code and up to seven times faster than their CPU counterparts.
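The streaming model can be illustrated with a minimal sketch, here in Python rather than Brook's C extensions: a kernel is a side-effect-free function applied elementwise across input streams, which is what lets the runtime map it onto the GPU. The function names below are illustrative, not Brook's API.

```python
def apply_kernel(kernel, *streams):
    """Apply a data-parallel kernel elementwise across input streams.

    In Brook this mapping is implicit in the kernel call and is executed
    by the GPU; here it is modeled as a plain elementwise loop.
    """
    return [kernel(*elems) for elems in zip(*streams)]

def saxpy_kernel(a):
    # SAXPY (result = a*x + y) is one of the BLAS operators the paper benchmarks.
    return lambda x, y: a * x + y

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
result = apply_kernel(saxpy_kernel(2.0), x, y)
# result == [12.0, 24.0, 36.0]
```

The key property is that the kernel touches only its per-element arguments, so the computation is trivially parallel across stream elements.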

10. Shader Algebra

Michael McCool, Stefanus Du Toit, Tiberiu Popa, Bryan Chan, Kevin Moule (University of Waterloo)

An algebra consists of a set of objects and a set of operators that act on those objects. We treat shader functions as first-class objects and define two operators: connection and combination. Connection is functional composition: the outputs of one shader are fed into the inputs of another. Combination concatenates the input channels, output channels, and computations of two shaders. Similar operators can be used to manipulate streams and apply computational kernels expressed as shaders to streams. Connecting a shader to a stream applies that shader to all elements of the stream; combining streams concatenates the record definitions of those streams. In conjunction with an optimizing compiler, these operators can manipulate shaders in many useful ways, including shader specialization, without modifying the original shaders. We demonstrate these operators in Sh, a metaprogramming shading language embedded in C++.
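The two operators can be sketched in Python, modeling a shader as a function from an input tuple to an output tuple. This is a hedged toy, not Sh's API: in Sh the channel counts live in the shader's type, whereas here `n_f` must be passed explicitly, and the example shaders are hypothetical.

```python
def connect(f, g):
    """Connection: feed f's outputs into g's inputs (functional composition)."""
    return lambda *inputs: g(*f(*inputs))

def combine(f, g, n_f):
    """Combination: concatenate the input and output channels of two shaders.

    n_f is the number of input channels consumed by f (an assumption of this
    sketch; Sh tracks this in the shader's signature).
    """
    def combined(*inputs):
        return f(*inputs[:n_f]) + g(*inputs[n_f:])
    return combined

# Hypothetical example shaders: diffuse lighting and a simple tone-mapping pass.
light = lambda n_dot_l, albedo: (n_dot_l * albedo,)
tonemap = lambda c: (c / (1.0 + c),)

# Connection builds a lighting-then-tonemap pipeline without editing either shader:
shaded = connect(light, tonemap)
```

Shader specialization falls out of the same idea: connecting a shader to a constant-producing shader fixes some of its inputs, and the optimizing compiler can then fold those constants through.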

11. Synthetic Aperture Confocal Imaging

Marc Levoy, Billy Chen, Vaibhav Vaish, Mark Horowitz (Stanford University), Ian McDowall, Mark Bolas (Fakespace Labs)

Confocal microscopy is a family of imaging techniques that employ focused patterned illumination and synchronized imaging to create cross-sectional views of 3D biological specimens. In this paper, we adapt confocal imaging to large-scale scenes by replacing the optical apertures used in microscopy with arrays of real or virtual video projectors and cameras. Our prototype implementation uses a video projector, a camera, and an array of mirrors. Using this implementation, we explore confocal imaging of partially occluded environments, such as foliage, and weakly scattering environments, such as murky water. We demonstrate the ability to selectively image any plane in a partially occluded environment, and to see further through murky water than is otherwise possible. By thresholding the confocal images, we extract mattes that can be used to selectively illuminate any plane in the scene.

12. Painting Detail

Nathan A. Carr, John C. Hart (University of Illinois at Urbana-Champaign)

Surface painting is a technique that allows a user to paint a texture directly onto a surface, usually with a texture atlas: a 1:1 mapping between the surface and its texture image.  Many good automatic texture atlas generation methods exist that evenly distribute texture samples across a surface based on its area and/or curvature, and some are even sensitive to the frequency spectrum of the input texture. However, during the surface painting process, the texture can change non-uniformly and unpredictably and even the best atlases are static and can thus fail to reproduce sections of finely painted detail such as surface illustration.  We present a new texture atlas algorithm that distributes initial texture samples evenly according to surface area and texture frequency, and, more importantly, maintains this distribution as the texture signal changes during the surface painting process.

13. Polycube-Maps

Marco Tarini, Kai Hormann, Paolo Cignoni, Claudio Montani (Istituto di Scienza e Tecnologie dell'Informazione)

Standard texture mapping of real-world meshes suffers from the presence of seams that need to be introduced in order to avoid excessive distortions and to make the topology of the mesh compatible to the one of the texture domain. In contrast, cube maps provide a mechanism that could be used for seamless texture mapping with low distortion, but only if the object roughly resembles a cube. We extend this concept to arbitrary meshes by using as texture domain the surface of a polycube whose shape is similar to that of the given mesh. Our approach leads to a seamless texture mapping method that is simple enough to be implemented in currently available graphics hardware.

14. Cross-Parameterization and Compatible Remeshing of 3D Models

Vladislav Kraevoy, Alla Sheffer (The University of British Columbia)

Many geometry processing applications, such as morphing, shape blending, transfer of texture or material properties, and fitting template meshes to scan data, require a bijective mapping between two or more models. This mapping, or cross-parameterization, typically needs to preserve the shape and features of the parameterized models, mapping legs to legs, ears to ears, and so on. Most of the applications also require the models to be represented by compatible meshes, i.e. meshes with identical connectivity, based on the cross-parameterization. In this paper we introduce novel methods for shape preserving cross-parameterization and compatible remeshing. Our cross-parameterization method computes a low-distortion bijective mapping between models that satisfies user prescribed constraints. Using this mapping, the remeshing algorithm preserves the user-defined feature vertex correspondence and the shape correlation between the models. The remeshing algorithm generates output meshes with significantly fewer elements compared to previous techniques, while accurately approximating the input geometry. As demonstrated by the examples, the compatible meshes we construct are ideally suitable for morphing and other geometry processing applications.

15. Context-Based Surface Completion

Andrei Sharf (Tel Aviv University), Marc Alexa (Technische Universität Darmstadt), Daniel Cohen Or (Tel Aviv University)

Sampling complex, real-world geometry with range scanning devices almost always yields imperfect surface samplings. These 'holes' in the surface are commonly filled with a smooth patch that conforms with the boundary. We introduce a context-based method: the characteristics of the given surface are analyzed, and the hole is iteratively filled by copying patches from valid regions of the given surface. In particular, the method needs to, first, identify regions of space that presumably contain parts of the surface but lack a sufficient number of sample points, second, determine best matching patches, and, third, fit imported patches by transforming them to be aligned with the surrounding surface. The completion process works top down, where details refine intermediate coarser approximations. To align an imported patch with the existing surface, we apply a rigid transformation followed by an iterative closest point procedure with non-rigid transformations. The surface is essentially treated as a point set, and local implicit approximations aid in measuring the similarity between two point set patches. We demonstrate the method on several point-sampled surfaces, where the holes either result from imperfect sampling during range scanning or manual removal operations.
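The iterative-closest-point step mentioned above can be sketched in miniature. This is a rigid 2D toy (the paper aligns 3D patches and additionally uses non-rigid transformations): alternate nearest-neighbour matching with a closed-form least-squares rigid fit.

```python
import math

def best_rigid_2d(P, Q):
    """Least-squares rotation + translation mapping point list P onto Q (2D),
    assuming P[i] corresponds to Q[i]. Uses the closed-form 2D solution."""
    n = len(P)
    cpx = sum(p[0] for p in P) / n; cpy = sum(p[1] for p in P) / n
    cqx = sum(q[0] for q in Q) / n; cqy = sum(q[1] for q in Q) / n
    s_cos = s_sin = 0.0
    for (px, py), (qx, qy) in zip(P, Q):
        ax, ay = px - cpx, py - cpy   # P centred
        bx, by = qx - cqx, qy - cqy   # Q centred
        s_cos += ax * bx + ay * by    # dot terms
        s_sin += ax * by - ay * bx    # cross terms
    theta = math.atan2(s_sin, s_cos)
    c, s = math.cos(theta), math.sin(theta)
    tx = cqx - (c * cpx - s * cpy)
    ty = cqy - (s * cpx + c * cpy)
    return lambda p: (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

def icp(P, Q, iterations=20):
    """Alternate nearest-neighbour matching and rigid fitting (classic ICP)."""
    for _ in range(iterations):
        matches = [min(Q, key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)
                   for p in P]
        T = best_rigid_2d(P, matches)
        P = [T(p) for p in P]
    return P
```

As usual with ICP, convergence depends on a reasonable initial alignment, which is why the method applies a rigid pre-alignment before refining.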

16. Variational Shape Approximation

David Cohen-Steiner (Duke University), Pierre Alliez (INRIA), Mathieu Desbrun (University of Southern California)

Achieving efficiency in mesh processing often demands that overly verbose 3D datasets be reduced to more concise, yet faithful representations. Despite numerous applications ranging from geometry compression to reverse engineering, concisely capturing the geometry of a surface remains a tedious task. In this paper, we present both theoretical and practical contributions that result in a novel and versatile framework for geometric approximation of surfaces. We depart from the usual strategy by casting shape approximation as a variational geometric partitioning problem. Using the concept of geometric proxies, we drive the distortion error down through repeated clustering of faces into best-fitting regions. Our approach is entirely discrete and error-driven, and does not require parameterization or local estimations of differential quantities. We also introduce a new metric based on normal deviation, and demonstrate its superior behavior at capturing anisotropy.
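The normal-deviation metric can be written compactly (a sketch in the usual notation for this paper's proxies; details hedged):

```latex
% A proxy P_i = (X_i, N_i) is an average point and normal for region R_i.
% The L^{2,1} metric measures how well N_i matches the surface normals:
L^{2,1}(R_i, P_i) = \iint_{x \in R_i} \|\mathbf{n}(x) - N_i\|^2 \, dx
% Clustering faces to minimize this error naturally produces elongated,
% near-planar regions, which is the anisotropic behaviour the paper reports.
```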

Last updated: 1.2.2010, 14:56
Author: Dipl.-Inf. (FH) Benjamin Neidhold