Parallel Architectures and Parallel Algorithms for Integrated Vision Systems

Authors
Publisher Springer, Berlin
Year
Pages 158
Version paperback
Language English
ISBN 9781461288251
Categories Image processing

Book description

Computer vision is one of the most complex and computationally intensive problems. As with other computationally intensive problems, parallel processing has been suggested as an approach to solving the problems in computer vision. Computer vision employs algorithms from a wide range of areas such as image and signal processing, advanced mathematics, graph theory, databases, and artificial intelligence. Hence, not only are the computing requirements for solving vision problems tremendous, but they also demand computers that are efficient at solving problems exhibiting vastly different characteristics. With recent advances in VLSI design technology, Single Instruction Multiple Data (SIMD) massively parallel computers have been proposed and built. However, such architectures have been shown to be useful for solving only a very limited subset of the problems in vision. Specifically, algorithms from low-level vision that involve computations closely mimicking the architecture and that require only simple control and computations are suitable for massively parallel SIMD computers. An Integrated Vision System (IVS) involves computations from low- to high-level vision that must be executed in a systematic fashion and repeatedly. The interaction between computations and the information-dependent nature of the computations suggest that the architectural requirements of computer vision systems cannot be satisfied by massively parallel SIMD computers.
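
The low-level operations the description refers to, such as the 2-D convolution covered in Chapter 4, are regular, data-parallel computations with simple control flow, which is why they map well onto SIMD arrays. The Python sketch below is illustrative only (it is not taken from the book): it shows a zero-padded 2-D convolution in which every output pixel is computed independently by the same simple loop body.

# Minimal sketch (illustrative, not from the book): zero-padded 2-D convolution.
# Each output pixel depends only on a small neighborhood and uses identical,
# simple control flow, so each (y, x) could be assigned to one SIMD processing
# element in a mesh-connected array.

def convolve2d(image, mask):
    """Convolve a 2-D image (list of lists) with a small 2-D mask.
    Pixels outside the image border are treated as zero."""
    h, w = len(image), len(image[0])
    mh, mw = len(mask), len(mask[0])
    oy, ox = mh // 2, mw // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for i in range(mh):
                for j in range(mw):
                    yy, xx = y + i - oy, x + j - ox
                    if 0 <= yy < h and 0 <= xx < w:
                        # Flip the mask indices so this is a true convolution.
                        acc += image[yy][xx] * mask[mh - 1 - i][mw - 1 - j]
            out[y][x] = acc
    return out

if __name__ == "__main__":
    img = [[float((x + y) % 4) for x in range(8)] for y in range(8)]
    smooth = [[1 / 9.0] * 3 for _ in range(3)]  # 3x3 averaging mask
    print(convolve2d(img, smooth)[4][4])

By contrast, the higher-level IVS stages discussed in the book (feature matching, scheduling, and load balancing) have data-dependent control flow and irregular communication, which is the source of the mismatch with pure SIMD machines.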

Table of contents

1. Introduction
1.1. Computational Complexities in Vision
1.2. Review of Multiprocessor Architectures
1.2.1. Mesh connected computers
1.2.2. Pyramid computers
1.2.3. Hypercube multiprocessors
1.2.4. Shared memory machines
1.2.5. Systolic arrays
1.2.6. Partitionable and hierarchical architectures
1.3. Organization
2. Model of Computation
2.1. Parallelism in IVSs
2.2. Data Dependencies
2.3. Features and Capabilities of Parallel Architectures for IVSs
2.4. Examples of Integrated Vision Systems
2.4.1. Image understanding benchmark system
2.4.2. Motion estimation and object recognition
3. Architecture of NETRA
3.1. Processor Clusters
3.1.1. Crossbar design
3.1.2. Scalability of crossbar
3.2. The DSP Hierarchy
3.3. Global Memory
3.4. Global Interconnection
3.4.1. Interconnection network
3.4.2. Global bus
3.5. IVS Computation Requirements and NETRA
3.6. Comparison of NETRA with Other Architectures
4. Parallel Algorithms on a Cluster
4.1. Classification of Common Vision Algorithms
4.2. Issues in Mapping an Algorithm
4.3. Performance Evaluation of Parallel Algorithms
4.3.1. 2-D convolution
4.3.2. Separable convolution
4.3.3. Two-dimensional FFT
4.3.4. Hough transform
4.4. Parallel Implementation Results
4.4.1. 2-D FFT
4.4.2. Separable convolution
4.4.3. Benchmark algorithms
4.5. Summary
5. Inter-Cluster Communication in NETRA
5.1. Alternatives for Inter-cluster Communication
5.1.1. Multistage interconnection network and global memory
5.1.2. DSP tree links
5.1.3. Global bus
5.2. Analysis of Inter-cluster Communication
5.3. Approach to Performance Evaluation
5.4. Performance of Parallel Algorithms on Multiple Clusters
5.4.1. Two-dimensional Fast Fourier Transform (2-D FFT)
5.4.2. 2-D separable convolution
5.4.3. Hough transform
5.5. Summary
6. Load Balancing and Scheduling Techniques
6.1. Need for Efficient Load Balancing Techniques
6.2. Load Balancing and Scheduling Techniques for Parallel Implementation
6.2.1. Uniform partitioning
6.2.2. Static scheduling (first-order scheduling)
6.2.3. Weighted static scheduling (second-order scheduling)
6.2.4. Dynamic
6.3. Parallel Implementation and Performance Evaluation
6.3.1. Feature extraction
6.3.2. Matching features
6.3.3. Time match
6.3.4. Second stereo match
6.3.5. Summary
7. Concluding Remarks
7.1. Summary and Discussion
7.2. Extensions
References
