Lossy volume compression using Tucker truncation and thresholding


Ballester Ripoll, Rafael; Pajarola, Renato (2016). Lossy volume compression using Tucker truncation and thresholding. Visual Computer, 32(11):1433-1446.

Abstract

Tensor decompositions, in particular the Tucker model, are a powerful family of techniques for dimensionality reduction and are being increasingly used for compactly encoding large multidimensional arrays, images and other visual data sets. In interactive applications, volume data often needs to be decompressed and manipulated dynamically; when designing data reduction and reconstruction methods, several parameters must be taken into account, such as the achievable compression ratio, approximation error and reconstruction speed. Weighing these variables in an effective way is challenging, and here we present two main contributions to solve this issue for Tucker tensor decompositions. First, we provide algorithms to efficiently compute, store and retrieve good choices of tensor rank selection and decompression parameters in order to optimize memory usage, approximation quality and computational costs. Second, we propose a Tucker compression alternative based on coefficient thresholding and zigzag traversal, followed by logarithmic quantization on both the transformed tensor core and its factor matrices. In terms of approximation accuracy, this approach is theoretically and empirically better than the commonly used tensor rank truncation method.
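The contrast the abstract draws between rank truncation and coefficient thresholding can be illustrated with a minimal HOSVD-based Tucker sketch in NumPy. This is a hypothetical reimplementation for illustration only, not the authors' code: it omits the zigzag traversal and logarithmic quantization steps, and simply compares the two ways of discarding core coefficients under an equal coefficient budget.

```python
import numpy as np

def unfold(tensor, mode):
    """Unfold a tensor along the given mode into a matrix."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def mode_product(tensor, matrix, mode):
    """n-mode product: multiply `matrix` into `tensor` along `mode`."""
    moved = np.moveaxis(tensor, mode, 0)
    return np.moveaxis(np.tensordot(matrix, moved, axes=1), 0, mode)

def hosvd(tensor):
    """Full Tucker decomposition via HOSVD: core tensor + factor matrices."""
    factors = [np.linalg.svd(unfold(tensor, m), full_matrices=False)[0]
               for m in range(tensor.ndim)]
    core = tensor
    for m, U in enumerate(factors):
        core = mode_product(core, U.T, m)  # project onto mode-m basis
    return core, factors

def reconstruct(core, factors):
    out = core
    for m, U in enumerate(factors):
        out = mode_product(out, U, m)
    return out

rng = np.random.default_rng(0)
X = rng.standard_normal((16, 16, 16))
core, factors = hosvd(X)

# Rank truncation: keep only the (8, 8, 8) corner of the core.
trunc_core = np.zeros_like(core)
trunc_core[:8, :8, :8] = core[:8, :8, :8]

# Thresholding: keep the same number (8**3) of coefficients, but choose
# the largest-magnitude ones anywhere in the core.
k = 8 ** 3
cutoff = np.sort(np.abs(core).ravel())[-k]
thr_core = np.where(np.abs(core) >= cutoff, core, 0.0)

err_trunc = np.linalg.norm(X - reconstruct(trunc_core, factors))
err_thr = np.linalg.norm(X - reconstruct(thr_core, factors))
```

Because the factor matrices are orthogonal, the reconstruction error equals the Frobenius norm of the discarded core coefficients, so keeping the largest-magnitude coefficients can never do worse than keeping a fixed corner of the core at the same coefficient budget — the theoretical point the abstract makes in favor of thresholding over rank truncation.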


Additional indexing

Item Type: Journal Article, refereed, original work
Communities & Collections: 03 Faculty of Economics > Department of Informatics
Dewey Decimal Classification: 000 Computer science, knowledge & systems
Uncontrolled Keywords: visualization, tensor approximation, data compression
Language: English
Date: 2016
Deposited On: 12 Aug 2016 06:29
Last Modified: 02 Feb 2018 10:15
Publisher: Springer
ISSN: 0178-2789
Additional Information: The final publication is available at Springer via http://dx.doi.org/10.1007/s00371-015-1130-y
OA Status: Green
Publisher DOI: https://doi.org/10.1007/s00371-015-1130-y
Other Identification Number: merlin-id:12914

Download

Download PDF: 'Lossy volume compression using Tucker truncation and thresholding'
Content: Accepted Version
Filetype: PDF
Size: 4 MB