Tensor methods for high-dimensional analysis and visualization


Ballester-Ripoll, Rafael. Tensor methods for high-dimensional analysis and visualization. 2017, University of Zurich, Faculty of Economics.

Abstract

Most visual computing domains are witnessing a steady growth in sheer data set size, complexity, and dimensionality. Flexible and scalable mathematical models that can efficiently compress, process, store, manipulate, retrieve and visualize such data sets are therefore of paramount importance, especially for higher dimensions. In this context, tensor decompositions constitute a powerful mathematical framework for compactly representing and operating on both dense and sparse data. Initially proposed as an extension of the concept of matrix decomposition to three and more dimensions, they have found various applications in data-intensive machine learning and high-dimensional signal processing.

This thesis aims to help bridge these aspects and tackle modern visual computing challenges under the paradigm of a common representation format, namely tensors. Many kinds of data admit a natural representation as higher-order tensors and/or can be parametrized, learned, or interpolated in the form of compact tensor models. Numerous tools that are native and unique to said decompositions exist for analysis and visualization, and such tools can be exploited as soon as the known ground truth is abstracted into this kind of reduced representation.

To this end we develop a volume compression algorithm tailored to high reduction rates in visualization applications; we explore compressed-domain processing possibilities including multiresolution convolution, differentiation, integration and summed area tables; we produce visualization diagrams directly from compressed tensors via interactive reconstruction; and we propose sensitivity analysis algorithms for model interpretation and knowledge discovery. Emphasis is placed on compactness and interactivity, which we address via careful tensor format selection and model building, as well as a range of auxiliary technical tools including out-of-core memory management, adaptive quantization, parallelized multilinear algebra operations, and others. We conclude that the chosen models result in a viable and fruitful toolbox for data of diverse origin, size, dimensionality, resolution, and sparsity.
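The abstract describes the contributions at a high level. As a concrete illustration of the underlying idea of representing a volume as a compact tensor model, the sketch below (assuming NumPy) computes a truncated Tucker/higher-order SVD decomposition of a small synthetic 3D volume and reconstructs it. It illustrates tensor decomposition in general, not the specific compression or analysis algorithms developed in the thesis; the function names, the test volume, and the rank choice are illustrative assumptions.

```python
# Minimal sketch of a compact tensor model: truncated Tucker/HOSVD of a 3D volume.
# Illustrative only; not the thesis's algorithms.
import numpy as np

def truncated_hosvd(T, ranks):
    """Approximate a dense tensor T by a Tucker model: a small core plus one factor matrix per mode."""
    factors = []
    for mode, r in enumerate(ranks):
        # Mode-m unfolding: move the mode to the front and flatten the rest into columns.
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        # Keep the r leading left singular vectors as the factor basis for this mode.
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    # Project the tensor onto the factor bases to obtain the small core.
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def reconstruct(core, factors):
    """Expand a Tucker model back into a full tensor."""
    T = core
    for mode, U in enumerate(factors):
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)
    return T

# Example: compress a smooth 64^3 volume into an 8x8x8 core plus three 64x8 factors.
x = np.linspace(-1.0, 1.0, 64)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
volume = np.exp(-(X**2 + Y**2 + Z**2))
core, factors = truncated_hosvd(volume, ranks=(8, 8, 8))
approx = reconstruct(core, factors)
print("relative error:", np.linalg.norm(volume - approx) / np.linalg.norm(volume))
```

For smooth data such as this separable Gaussian volume, a small core plus thin factor matrices reproduces the full 64^3 grid almost exactly while storing far fewer coefficients; this kind of reduction rate is what makes compressed-domain processing and interactive reconstruction from the model attractive.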

Additional indexing

Item Type: Dissertation
Referees: Pajarola Renato, Lindstrom Peter
Communities & Collections: 03 Faculty of Economics > Department of Informatics
Dewey Decimal Classification: 000 Computer science, knowledge & systems
Language: English
Place of Publication: Zürich
Date: October 2017
Deposited On: 20 Feb 2018 15:06
Last Modified: 14 Aug 2018 11:54
Number of Pages: 157
OA Status: Closed
Other Identification Number: merlin-id:16040