Image-Based Rendering is the task of generating novel views from existing images. Given a set of images showing a scene from different viewpoints, the main goal is to compute images for new viewpoints. In this thesis, several new methods for solving this problem are presented. These methods are designed to fulfil specific goals such as scalability and interactive rendering performance. First, the theory of the Plenoptic Function is introduced as the mathematical foundation of image formation. Then a new taxonomy is introduced to categorise existing methods, and an extensive overview of known approaches is given. This is followed by a detailed analysis of the design goals and the requirements with regard to input data. It is concluded that geometry information about the scene is necessary for perspectively correct image generation from a sparse spatial sampling. This leads to the design of three different Image-Based Rendering methods, called View-Dependent Geometry, Multiple Local Models, and Geometry Guided Plane-Sweep. Several subtypes of these methods can be defined; in total, six prototypes have been implemented and analysed in detail for this thesis.
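To illustrate the core idea behind plane-sweep approaches such as the Geometry Guided Plane-Sweep named above, the following is a minimal sketch, not the thesis's actual method: for a rectified two-view setup, each candidate fronto-parallel depth plane corresponds to a fixed disparity, and the sweep warps the second view by each candidate disparity, scores photo-consistency against the reference view, and keeps the best-scoring plane per pixel. The function name `plane_sweep_disparity` and the toy images are illustrative assumptions.

```python
import numpy as np

def plane_sweep_disparity(ref, other, candidate_disps):
    """Winner-takes-all plane sweep for rectified views.

    For each candidate disparity (one fronto-parallel plane per
    candidate), shift `other` towards `ref`, measure absolute
    photo-consistency error, and return the per-pixel disparity
    with the lowest cost. Illustrative sketch only.
    """
    h, w = ref.shape
    costs = np.empty((len(candidate_disps), h, w))
    for i, d in enumerate(candidate_disps):
        warped = np.roll(other, -d, axis=1)  # shift columns left by d
        diff = np.abs(ref - warped)
        if d > 0:
            diff[:, -d:] = np.inf            # invalidate wrapped columns
        costs[i] = diff
    best = np.argmin(costs, axis=0)          # cheapest plane per pixel
    return np.asarray(candidate_disps)[best]

# Toy example: the second view is the reference shifted by 3 pixels,
# i.e. the whole scene lies on the disparity-3 plane.
rng = np.random.default_rng(0)
ref = rng.random((8, 32))
other = np.roll(ref, 3, axis=1)
disp = plane_sweep_disparity(ref, other, candidate_disps=[0, 1, 2, 3, 4])
```

In the interior columns (where no candidate shift wraps around), the sweep recovers the ground-truth disparity of 3. Real plane-sweep renderers replace the per-pixel shift with per-plane homography warps and aggregate the cost over a window, but the select-the-most-photo-consistent-plane structure is the same.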