UVic logo

Browsing by Supervisor "Adams, Michael D."

Now showing 1 - 17 of 17
    Design and application of quincunx filter banks
    (2007-01-30T18:59:44Z) Chen, Yi; Adams, Michael D.; Lu, Wu-Sheng
Quincunx filter banks are two-dimensional, two-channel, nonseparable filter banks. They are widely used in many signal processing applications. In this thesis, we study the design and applications of quincunx filter banks in the processing of two-dimensional digital signals. Symmetric extension algorithms for quincunx filter banks are proposed. In the one-dimensional case, symmetric extension is a commonly used technique to build nonexpansive transforms of finite-length sequences. We show how this technique can be extended to the nonseparable quincunx case. We consider three types of quadrantally-symmetric linear-phase quincunx filter banks, and for each of these types we show how nonexpansive transforms of two-dimensional sequences defined on arbitrary rectangular regions can be constructed. New optimization-based techniques are proposed for the design of high-performance quincunx filter banks for the application of image coding. The new methods yield linear-phase perfect-reconstruction systems with high coding gain, good analysis/synthesis filter frequency responses, and certain prescribed vanishing-moment properties. We present examples of filter banks designed with these techniques and demonstrate their efficiency for image coding relative to existing filter banks. The best filter banks in our design examples outperform other previously proposed quincunx filter banks in approximately 80% of cases and sometimes even outperform the well-known 9/7 filter bank from the JPEG-2000 standard.
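The symmetric-extension technique mentioned in this abstract is easiest to see in one dimension, where reflecting a finite-length signal about its boundary samples yields a nonexpansive transform. A minimal sketch (illustrative only; the function name is ours, and the thesis develops the harder nonseparable quincunx case):

```python
def symmetric_extend(x, n):
    # Whole-sample symmetric extension of a finite-length 1-D signal:
    # reflect about the first and last samples without repeating them,
    # e.g. [a, b, c, d] -> c, b, [a, b, c, d], c, b for n = 2.
    left = x[1:n + 1][::-1]
    right = x[-n - 1:-1][::-1]
    return left + x + right

print(symmetric_extend([1, 2, 3, 4], 2))  # [3, 2, 1, 2, 3, 4, 3, 2]
```

Because the extended signal is symmetric, a linear-phase filter bank applied to it produces symmetric subband signals, so only as many subband samples as input samples need to be kept.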
    Edgebreaker Based Triangle Mesh-Coding Method
    (2016-07-12) Tang, Yue; Adams, Michael D.
The Edgebreaker triangle mesh-coding method is presented along with a software implementation of the method developed by the author. The software consists of two programs. The first program performs the mesh compression, and the second program performs the mesh decompression. Various aspects of the method’s performance are studied through experiments, such as coding efficiency and the time and memory complexity. In terms of coding efficiency, our Edgebreaker method outperforms the gzip text-based compression technique by a factor of 4.19 on average.
    Effective techniques for generating Delaunay mesh models of single- and multi-component images
    (2018-12-19) Luo, Jun; Adams, Michael D.
In this thesis, we propose a general computational framework for generating mesh models of single-component (e.g., grayscale) and multi-component (e.g., RGB color) images. This framework builds on ideas from the previously-proposed GPRFSED method for single-component images to produce a framework that can handle images with an arbitrary number of components. The key ideas embodied in our framework are Floyd-Steinberg error diffusion and greedy-point removal. Our framework has several free parameters and the effect of the choices of these parameters is studied. Based on experimentation, we recommend two specific sets of parameter choices, yielding two highly effective single/multi-component mesh-generation methods, known as MED and MGPRFS. These two methods make different trade-offs between mesh quality and computational cost. The MGPRFS method is able to produce high quality meshes at a reasonable computational cost, while the MED method trades off some mesh quality for a reduction in computational cost relative to the MGPRFS method. To evaluate the performance of our proposed methods, we compared them to three highly-effective previously-proposed single-component mesh generators for both grayscale and color images. In particular, our evaluation considered the following previously-proposed methods: the error diffusion (ED) method of Yang et al., the greedy-point-removal from-subset (GPRFSED) method of Adams, and the greedy-point removal (GPR) method of Demaret and Iske. Since these methods cannot directly handle color images, color images were handled through conversion to grayscale as a preprocessing step, and then as a postprocessing step after mesh generation, the grayscale sample values in the generated mesh were replaced by their corresponding color values. These color-capable versions of ED, GPRFSED, and GPR are henceforth referred to as CED, CGPRFSED, and CGPR, respectively.
Experimental results show that our MGPRFS method yields meshes of higher quality than the CGPRFSED and GPRFSED methods by up to 7.05 dB and 2.88 dB respectively, with nearly the same computational cost. Moreover, the MGPRFS method outperforms the CGPR and GPR methods in mesh quality by up to 7.08 dB and 0.42 dB respectively, with about 5 to 40 times less computational cost. Lastly, our MED method yields meshes of higher quality than the CED and ED methods by up to 7.08 and 4.72 dB respectively, where all three of these methods have a similar computational cost.
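Floyd-Steinberg error diffusion, one of the two key ideas named above, can be sketched for sample-point selection as follows (a toy illustration; the density map, threshold, and function name are our assumptions, not the exact formulation used in the thesis):

```python
def ed_select_points(density, threshold=1.0):
    # Floyd-Steinberg error diffusion over a 2-D "density" map (e.g. a
    # gradient-magnitude image): a sample point is selected wherever the
    # diffused value quantizes high, and the quantization error is pushed
    # to unvisited neighbours with the classic 7/16, 3/16, 5/16, 1/16 weights.
    h, w = len(density), len(density[0])
    d = [row[:] for row in density]
    selected = []
    for y in range(h):
        for x in range(w):
            out = threshold if d[y][x] >= threshold / 2 else 0.0
            if out:
                selected.append((x, y))
            err = d[y][x] - out
            for dx, dy, wgt in ((1, 0, 7/16), (-1, 1, 3/16), (0, 1, 5/16), (1, 1, 1/16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    d[ny][nx] += err * wgt
    return selected
```

The effect is that regions of high density (much image detail) receive proportionally more mesh vertices, while the diffused error keeps the overall point budget close to the integral of the density map.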
    A Flexible C++ Library for Wavelet Transforms of 3-D Polygon Meshes
    (2020-02-13) Wei, Shengyang; Adams, Michael D.
The lifted wavelet transforms of 3-D polygon meshes are introduced, and the details of the Loop and Butterfly wavelet transforms are studied. Then, a library that implements a framework for computing lifted wavelet transforms of polygon meshes is presented. To compute Loop and Butterfly wavelet transforms, users can employ the built-in functionality of the library. In addition, users can define custom wavelet transforms via a secondary application programming interface provided by the library. Some application programs implemented with this library are also provided for demonstration purposes, including applications that perform wavelet-based polygon mesh simplification and denoising. Finally, the run-time performance of the library is measured. Our library is shown to perform lifted wavelet transforms in linear time with respect to the number of vertices, except for the subdivision-detection step. This step is the main bottleneck, since it involves sorting, which has a time complexity greater than linear.
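The lifting construction underlying these mesh wavelet transforms can be illustrated in one dimension with an integer predict/update pair (a hedged sketch; the library's Loop and Butterfly transforms apply analogous steps to mesh vertices rather than a 1-D sequence):

```python
def lifting_forward(x):
    # One lifting step: split into even/odd samples, predict each odd
    # sample from its even neighbour, then update the evens (S-transform).
    even, odd = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(even, odd)]          # predict
    approx = [e + d // 2 for e, d in zip(even, detail)]  # update
    return approx, detail

def lifting_inverse(approx, detail):
    # Lifting is trivially invertible: undo the steps in reverse order.
    even = [a - d // 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    return [s for pair in zip(even, odd) for s in pair]

x = [5, 7, 3, 1]
a, d = lifting_forward(x)
print(a, d)                        # [6, 2] [2, -2]
assert lifting_inverse(a, d) == x  # perfect reconstruction
```

The same split/predict/update structure is what makes lifted transforms invertible by construction, which is why the library can expose a generic lifting framework to user-defined transforms.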
    A Flexible mesh-generation strategy for image representation based on data-dependent triangulation
    (2012-05-15) Li, Ping; Adams, Michael D.
Data-dependent triangulation (DDT) based mesh-generation schemes for image representation are studied. A flexible mesh-generation framework and a highly effective mesh-generation method that employs this framework are proposed. The proposed framework is derived from frameworks proposed by Rippa and by Garland and Heckbert by making a number of key modifications to facilitate the development of much more effective mesh-generation methods. As the proposed framework has several free parameters, the effects of different choices of these parameters on mesh quality (both in terms of squared error and subjectively) are studied, leading to the recommendation of a particular set of choices for these parameters. A new mesh-generation method is then introduced that employs the proposed framework with these best parameter choices. Experimental results show that our proposed mesh-generation method outperforms several competing approaches, namely, the DDT-based incremental scheme proposed by Garland and Heckbert, the COMPRESS scheme proposed by Rippa, and the adaptive thinning scheme proposed by Demaret and Iske. More specifically, in terms of PSNR, our proposed method was found to outperform these three schemes by median margins of 4.1 dB, 10.76 dB, and 0.83 dB, respectively. The subjective qualities of the reconstructed images were also found to be correspondingly better. In terms of computational cost, our proposed method was found to be comparable to the schemes proposed by Garland and Heckbert and by Rippa. Moreover, our proposed method requires only about 5 to 10% of the time of the scheme proposed by Demaret and Iske. In terms of memory cost, our proposed method was shown to require essentially the same amount of memory as the schemes proposed by Garland and Heckbert and by Rippa, and orders of magnitude (33 to 800 times) less memory than the scheme proposed by Demaret and Iske.
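PSNR, the quality measure used throughout these comparisons, follows a standard definition; a minimal sketch for 8-bit images stored as flat sample lists:

```python
import math

def psnr(orig, recon, peak=255.0):
    # Peak signal-to-noise ratio between two equal-size images, given as
    # flat lists of samples; peak is the maximum sample value (255 for 8-bit).
    mse = sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)
    return float('inf') if mse == 0 else 10 * math.log10(peak * peak / mse)

print(psnr([0, 128, 255], [0, 128, 255]))  # inf -- identical images
```

Higher PSNR means lower mean-squared error between the original image and the reconstruction from the mesh.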
    Image Morphing with the Beier-Neely Method
    (2015-11-04) Zhu, Feng; Adams, Michael D.
    The Beier-Neely feature-based image-morphing method is studied. Then, software implementing the Beier-Neely image-morphing method, designed and developed by the author, is presented. The software consists of three programs. The first program is a graphical user interface (GUI) used to manually select feature line segments. The second program is a morphing program that generates a morphing image sequence, where each intermediate frame in the sequence represents a stage in the morphing process. The third program converts the image sequence produced to a video that displays the image morphing effect.
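The core of the Beier-Neely method maps each pixel through (u, v) coordinates defined by a feature line pair. A single-pair sketch (a real morph blends the contributions of many line pairs with distance-based weights; the function name is ours):

```python
import math

def warp_point(x, p, q, p2, q2):
    # Beier-Neely mapping for a single feature-line pair: express X in
    # (u, v) coordinates relative to source line PQ (u along the line,
    # v the signed perpendicular distance), then rebuild the point
    # relative to destination line P'Q'.
    sub = lambda a, b: (a[0] - b[0], a[1] - b[1])
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1]
    perp = lambda a: (-a[1], a[0])
    d, d2 = sub(q, p), sub(q2, p2)
    u = dot(sub(x, p), d) / dot(d, d)
    v = dot(sub(x, p), perp(d)) / math.sqrt(dot(d, d))
    n2, len2 = perp(d2), math.sqrt(dot(d2, d2))
    return (p2[0] + u * d2[0] + v * n2[0] / len2,
            p2[1] + u * d2[1] + v * n2[1] / len2)

# Translating the feature line translates the point with it.
print(warp_point((3, 4), (0, 0), (10, 0), (1, 1), (11, 1)))  # (4.0, 5.0)
```

Each intermediate morph frame interpolates the feature lines between the two images, warps both images toward the interpolated lines, and cross-dissolves the results.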
    Image representation with explicit discontinuities using triangle meshes
    (2012-09-11) Tu, Xi; Adams, Michael D.
Triangle meshes can provide an effective geometric representation of images. Although many mesh generation methods have been proposed to date, many of them do not explicitly take image discontinuities into consideration. In this thesis, a new mesh model for images, which explicitly represents discontinuities (i.e., image edges), is proposed along with two corresponding mesh-generation methods that determine the mesh-model parameters for a given input image. The mesh model is based on constrained Delaunay triangulations (DTs), where the constrained edges correspond to image edges. One of the proposed methods is named explicitly-represented discontinuities with error diffusion (ERDED), and is fast and easy to implement. In the ERDED method, the error diffusion (ED) scheme is employed to select a subset of sample points that are not on the constrained edges. The other proposed method is called ERDGPI. In the ERDGPI method, a constrained DT is first constructed with a set of prespecified constrained edges. Then, the greedy point insertion (GPI) scheme is employed to insert one point into the constrained DT in each iteration until a certain number of points is reached. The ERDED and ERDGPI methods involve several parameters which must be provided as input. These parameters can affect the quality of the resulting image approximations, and are discussed in detail. We also evaluate the performance of our proposed ERDED and ERDGPI methods by comparing them with the highly effective ED and GPI schemes. Our proposed methods are demonstrated to be capable of producing image approximations of higher quality, in terms of both PSNR and subjective quality, than those generated by other schemes. For example, the reconstructed images produced by the proposed ERDED method are often about 3.77 dB higher in PSNR than those produced by the ED scheme, and our proposed ERDGPI scheme produces image approximations of about 1.08 dB higher PSNR than those generated by the GPI approach.
    An Improved Error-Diffusion Approach for Generating Mesh Models of Images
    (2014-11-25) Ma, Xiao; Adams, Michael D.
Triangle mesh models of images are studied. Through exploration, a computational framework for mesh generation based on data-dependent triangulations (DDTs) and two specific mesh-generation methods derived from this framework are proposed. In earlier work, Yang et al. proposed a highly-effective technique for generating triangle-mesh models of images, known as the error diffusion (ED) method. Unfortunately, the ED method, which chooses triangulation connectivity via a Delaunay triangulation, typically yields triangulations in which many (triangulation) edges crosscut image edges (i.e., discontinuities in the image), leading to increased approximation error. In this thesis, we propose a computational framework for mesh generation that modifies the ED method to use DDTs in conjunction with the Lawson local optimization procedure (LOP) and has several free parameters. Based on experimentation, we recommend two particular choices for these parameters, yielding two specific mesh-generation methods, known as MED1 and MED2, which make different trade-offs between approximation quality and computational cost. Through the use of DDTs and the LOP, triangulation connectivity can be chosen optimally so as to minimize approximation error. As part of our work, two novel optimality criteria for the LOP are proposed, both of which are shown to outperform other well-known criteria from the literature. Through experimental results, our MED1 and MED2 methods are shown to yield image approximations of substantially higher quality than those obtained with the ED method, at a relatively modest computational cost. For example, in terms of peak signal-to-noise ratio, our MED1 and MED2 methods outperform the ED method, on average, by 3.26 and 3.81 dB, respectively.
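The Lawson LOP mentioned above repeatedly flips the shared edge of a triangle pair whenever doing so lowers the triangulation cost. A minimal single-step sketch, with triangles as vertex tuples and an illustrative cost function (both our own simplifications, not the thesis's representation):

```python
import math

def lop_flip(tri1, tri2, cost):
    # One local-optimization step on two triangles sharing an edge:
    # replace the shared diagonal by the opposite one if that lowers the
    # total cost; the full LOP sweeps all edges until no flip helps.
    shared = set(tri1) & set(tri2)
    if len(shared) != 2:
        return tri1, tri2  # the triangles do not share an edge
    a = (set(tri1) - shared).pop()
    b = (set(tri2) - shared).pop()
    c, d = tuple(shared)
    alt1, alt2 = (a, b, c), (a, b, d)
    if cost(alt1) + cost(alt2) < cost(tri1) + cost(tri2):
        return alt1, alt2  # flipped configuration
    return tri1, tri2

def max_edge(t):
    # Illustrative cost: length of the triangle's longest edge.
    return max(math.dist(t[i], t[(i + 1) % 3]) for i in range(3))

A, B, C, D = (0, 0), (2, 1), (4, 0), (2, -0.2)
t1, t2 = lop_flip((A, B, C), (A, C, D), max_edge)
print(set(t1) & set(t2))  # the long diagonal A-C has been flipped to B-D
```

In a data-dependent triangulation, the cost function depends on the image data (e.g. approximation error) rather than on triangle shape alone, which is what lets the connectivity adapt to image edges.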
    An improved incremental/decremental delaunay mesh-generation strategy for image representation
    (2016-12-16) EL Marzouki, Badr Eddine; Adams, Michael D.
Two highly effective content-adaptive methods for generating Delaunay mesh models of images, known as IID1 and IID2, are proposed. The methods repeatedly alternate between mesh simplification and refinement, based on the incremental/decremental mesh-generation framework of Adams, which has several free parameters. The effect of different choices of the framework's free parameters is studied, and the results are used to derive two mesh-generation methods that differ in computational complexity. The higher complexity IID2 method generates mesh models of superior reconstruction quality, while the lower complexity IID1 method trades mesh quality in return for a decrease in computational cost. Some of the contributions of our work include the recommendation of a better choice for the growth-schedule parameter of the framework, as well as the use of Floyd-Steinberg error diffusion for the initial-mesh selection. As part of our work, we evaluated the performance of the proposed methods using a data set of 50 images varying in type (e.g., photographic, computer generated, and medical), size, and bit depth, with multiple target mesh densities ranging from 0.125% to 4%. The experimental results show that our proposed methods perform extremely well, yielding high-quality image approximations in terms of peak signal-to-noise ratio (PSNR) and subjective visual quality, at an equivalent or lower computational cost compared to other well-known approaches such as the ID1, ID2, and IDDT methods of Adams, and the greedy point removal (GPR) scheme of Demaret and Iske. More specifically, the IID2 method outperforms the GPR scheme in terms of mesh quality by 0.2-1.0 dB with a 62-93% decrease in computational cost. Furthermore, the IID2 method yields meshes of similar quality to the ID2 method at a computational cost that is lower by 9-41%.
The IID1 method provides improvements in mesh quality in 93% of the test cases by margins of 0.04-1.31 dB compared to the IDDT scheme, while having a similar complexity. Moreover, reductions in execution time of 4-59% are achieved compared to the ID1 method in 86% of the test cases.
    An improved Lawson local-optimization procedure and its application
    (2018-04-30) Fang, Yue; Adams, Michael D.
The problem of selecting the connectivity of a triangulation in order to minimize a given cost function is studied. This problem is of great importance for applications, such as generating triangle mesh models of images and other bivariate functions. In early work, a well-known method named the local optimization procedure (LOP) was proposed by Lawson for solving the triangulation optimization problem. More recently, Yu et al. proposed a variant of the LOP called the LOP with lookahead (LLOP), which has proven to be more effective than the LOP. Unfortunately, the LOP and LLOP can each only guarantee triangulations that satisfy a weak optimality condition for most cost functions. That is, the triangulation optimized by the LOP or LLOP is only guaranteed to be such that no single edge flip can reduce the triangulation cost. In this thesis, a new optimality criterion named n-flip optimality is proposed, which has proven to be a useful tool for analyzing optimality properties. We propose a more general framework called the modified LOP (MLOP), with several free parameters, that can be used to solve the triangulation-cost optimization problem. By carefully selecting the free parameters, two MLOP-based methods, called MLOPB(L,M) and MLOPC(L), are derived from this framework. Using the optimality criterion introduced in the thesis, we have proven that our proposed methods satisfy a stronger optimality condition than the LOP and LLOP. That is, the triangulations produced by our MLOP-based methods cannot have their cost reduced by any single edge flip or any two edge flips. Because they satisfy this stronger optimality condition, our proposed methods tend to yield triangulations of significantly lower cost than the LOP and LLOP methods. In order to evaluate the performance of our MLOP-based methods, they are compared with two other competing approaches, namely the LOP and LLOP.
Experimental results show that the MLOPB and MLOPC methods consistently yield triangulations of much lower cost than the LOP and LLOP. More specifically, our MLOPB and MLOPC methods yield triangulations with an overall median cost reduction of 16.36% and 16.62%, respectively, relative to the LOP, while the LLOP can only yield triangulations with an overall median cost reduction of 11.49% relative to the LOP. Moreover, our proposed methods MLOPB(2,i) and MLOPC(i) are shown to produce even better results if the parameter i is increased, at the expense of increased computation time.
    Improved subband-based and normal-mesh-based image coding
    (2007-12-19T22:16:05Z) Xu, Di; Adams, Michael D.
Image coding is studied, with the work consisting of two distinct parts. Each part focuses on a different coding paradigm. The first part of the research examines subband coding of images. An optimization-based method for the design of high-performance separable filter banks for image coding is proposed. This method yields linear-phase perfect-reconstruction systems with high coding gain, good frequency selectivity, and certain prescribed vanishing-moment properties. Several filter banks designed with the proposed method are presented and shown to work extremely well for image coding, outperforming the well-known 9/7 filter bank (from the JPEG-2000 standard) in most cases. Several families of perfect-reconstruction filter banks exist, where the filter banks in each family have some common structural properties. New filter banks in each family are designed with the proposed method. Experimental results show that these new filter banks outperform previously known filter banks from the same family. The second part of the research explores normal meshes as a tool for image coding, with a particular interest in the normal-mesh-based image coder of Jansen, Baraniuk, and Lavu. Three modifications to this coder are proposed, namely, the use of a data-dependent base mesh, an alternative representation for normal/vertical offsets, and a different scan-conversion scheme based on bicubic interpolation. Experimental results show that our proposed changes lead to improved coding performance in terms of both objective and subjective image quality measures.
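Coding gain, the main design objective in the first part, has a standard closed form for an orthonormal filter bank with equal-size subbands (a textbook sketch; practical designs such as those in the thesis use more general weightings for filter norms and sampling densities):

```python
import math

def coding_gain_db(subband_variances):
    # Classic subband coding gain: ratio of the arithmetic mean of the
    # subband variances to their geometric mean, expressed in dB
    # (orthonormal filters and equal-size subbands assumed).
    n = len(subband_variances)
    arith = sum(subband_variances) / n
    geom = math.prod(subband_variances) ** (1 / n)
    return 10 * math.log10(arith / geom)

print(coding_gain_db([1.0, 1.0]))      # 0.0 -- no gain when energy is spread evenly
print(coding_gain_db([3.0, 1.0]) > 0)  # True -- energy compaction pays off
```

Intuitively, the more the filter bank compacts signal energy into few subbands, the larger the gap between arithmetic and geometric mean, and the higher the coding gain.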
    Library usage analysis in the C++ codebase of Fedora Linux 37
    (2024) Deng, Jiachao; Adams, Michael D.
    C++ source code analysis is conducted at scale. A framework is proposed for analyzing the C++ codebase of operating systems that employ the dnf package manager, such as Fedora Linux and Red Hat Enterprise Linux. The framework can run an arbitrary static analysis tool over software packages that contain C++ code from compatible operating systems. In order to evaluate the effectiveness of the framework and to better understand how the C++ language is used in practice, a C++ analysis tool is developed to study library usage with a fine level of granularity, considering instances of uses of types, type aliases, member/non-member functions, variables, and enumerators. Our framework, combined with the C++ library usage analysis tool, is used to analyze 2 379 software packages from the codebase of Fedora Linux 37. The number of packages analyzed is two to three orders of magnitude larger than that of previous C++ research. We applied our library usage analysis tool to nearly 400 million lines of C++ code across these packages. Leveraging the Clang compiler front-end libraries, our tool extracts information from correctly parsed C++ code, which is an improved approach compared to many existing studies. As a result, the tool provides an accurate collection of library usage instances from C++ software. Numerous observations are made regarding various aspects of library usage that can facilitate improved teaching of C++, aid in the refinement of C++ libraries, and help guide the future evolution of the C++ standard. For example, our analysis reveals that C++ programmers rarely use some C++ standard library algorithms designed for specialized purposes or combined operations. These algorithms often appear in less than 1% of all C++ software packages investigated. We suggest that the standard library exercise caution when adopting infrequently needed algorithms to maintain a streamlined interface. 
Such observations summarize current trends in C++ library usage and provide recommendations for improving the C++ language and its libraries.
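As a rough illustration of the kind of usage counting described, names referenced through the std namespace can be tallied with a regex (a naive proxy only; the thesis's tool parses code properly with the Clang front-end libraries, which a regex cannot replicate):

```python
import re
from collections import Counter

def count_std_usage(cpp_source):
    # Naive regex proxy: tally each name referenced via the std:: prefix.
    # (A regex cannot see through typedefs, "using" declarations, or
    # macros -- hence the thesis's use of a real Clang-based parser.)
    return Counter(re.findall(r'\bstd::([A-Za-z_]\w*)', cpp_source))

src = '''
#include <vector>
#include <algorithm>
int main() {
    std::vector<int> v{3, 1, 2};
    std::sort(v.begin(), v.end());
    return std::count(v.begin(), v.end(), 2);
}
'''
print(count_std_usage(src))  # Counter({'vector': 1, 'sort': 1, 'count': 1})
```

Aggregating such per-file counts over thousands of packages is what yields repository-scale statistics like the "less than 1% of packages" figures quoted above.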
    Mesh models of images, their generation, and their application in image scaling
    (2019-01-22) Mostafavian, Ali; Adams, Michael D.
Triangle-mesh modeling, as one of the approaches for representing images based on nonuniform sampling, has become quite popular and beneficial in many applications. In this thesis, image representation using triangle-mesh models and its application in image scaling are studied. Consequently, two new methods, namely the SEMMG and MIS methods, are proposed, each solving a different problem. In particular, the SEMMG method is proposed to address the problem of image representation, producing effective mesh models for representing grayscale images by minimizing squared error. The MIS method is proposed to address the image-scaling problem for grayscale images that are approximately piecewise-smooth, using triangle-mesh models. The SEMMG method, which is proposed for addressing the mesh-generation problem, is developed based on an earlier work, which uses a greedy-point-insertion (GPI) approach to generate a mesh model with explicit representation of discontinuities (ERD). After in-depth analyses of two existing methods for generating the ERD models, several weaknesses are identified and specifically addressed to improve the quality of the generated models, leading to the proposal of the SEMMG method. The performance of the SEMMG method is then evaluated by comparing the quality of the meshes it produces with those obtained by eight other competing methods, namely, the error-diffusion (ED) method of Yang, the modified Garland-Heckbert (MGH) method, the ERDED and ERDGPI methods of Tu and Adams, the Garcia-Vintimilla-Sappa (GVS) method, the hybrid wavelet triangulation (HWT) method of Phichet, the binary space partition (BSP) method of Sarkis, and the adaptive triangular meshes (ATM) method of Liu. For this evaluation, the error between the original and reconstructed images, obtained from each method under comparison, is measured in terms of the PSNR.
Moreover, in the case of the competing methods whose implementations are available, the subjective quality is compared in addition to the PSNR. Evaluation results show that the reconstructed images obtained from the SEMMG method are better than those obtained by the competing methods in terms of both PSNR and subjective quality. More specifically, in the case of the methods with implementations, the results collected from 350 test cases show that the SEMMG method outperforms the ED, MGH, ERDED, and ERDGPI schemes in approximately 100%, 89%, 99%, and 85% of cases, respectively. Moreover, in the case of the methods without implementations, we show that the PSNR of the reconstructed images produced by the SEMMG method is on average 3.85, 0.75, 2, and 1.10 dB higher than that obtained by the GVS, HWT, BSP, and ATM methods, respectively. Furthermore, for a given PSNR, the SEMMG method is shown to produce much smaller meshes than those obtained by the GVS and BSP methods, with approximately 65% to 80% fewer vertices and 10% to 60% fewer triangles, respectively. Therefore, the SEMMG method is shown to be capable of producing triangular meshes of higher quality and smaller size (i.e., number of vertices or triangles) which can be effectively used for image representation. Besides the superior image approximations achieved with the SEMMG method, this work also makes contributions by addressing the problem of image scaling. For this purpose, the application of triangle-mesh models in image scaling is studied. Some of the mesh-based image-scaling approaches proposed to date employ mesh models that are associated with an approximating function that is continuous everywhere, which inevitably yields edge blurring in the process of image scaling.
Moreover, other mesh-based image-scaling approaches that employ approximating functions with discontinuities are often based on mesh simplification where the method starts with an extremely large initial mesh, leading to a very slow mesh generation with high memory cost. In this thesis, however, we propose a new mesh-based image-scaling (MIS) method which firstly employs an approximating function with selected discontinuities to better maintain the sharpness at the edges. Secondly, unlike most of the other discontinuity-preserving mesh-based methods, the proposed MIS method is not based on mesh simplification. Instead, our MIS method employs a mesh-refinement scheme, where it starts from a very simple mesh and iteratively refines the mesh to reach a desirable size. For developing the MIS method, the performance of our SEMMG method, which is proposed for image representation, is examined in the application of image scaling. Although the SEMMG method is not designed for solving the problem of image scaling, examining its performance in this application helps to better understand potential shortcomings of using a mesh generator in image scaling. Through this examination, several shortcomings are found and different techniques are devised to address them. By applying these techniques, a new effective mesh-generation method called MISMG is developed that can be used for image scaling. The MISMG method is then combined with a scaling transformation and a subdivision-based model-rasterization algorithm, yielding the proposed MIS method for scaling grayscale images that are approximately piecewise-smooth. 
The performance of our MIS method is then evaluated by comparing the quality of the scaled images it produces with those obtained from five well-known raster-based methods, namely, bilinear interpolation, the bicubic interpolation of Keys, the directional cubic convolution interpolation (DCCI) method of Zhou et al., the new edge-directed image interpolation (NEDI) method of Li and Orchard, and the recent method of super-resolution using convolutional neural networks (SRCNN) by Dong et al. Since our main goal is to produce scaled images of higher subjective quality with the least amount of edge blurring, the quality of the scaled images is first compared through a subjective evaluation, followed by some objective evaluations. The results of the subjective evaluation show that the proposed MIS method was ranked best overall in almost 67% of the cases, with the best average rank of 2 out of 6, among 380 collected rankings with 20 images and 19 participants. Moreover, visual inspections of the scaled images obtained with different methods show that the proposed MIS method produces scaled images of better quality with more accurate and sharper edges. Furthermore, in the case of the mesh-based image-scaling methods, where no implementation is available, the MIS method is conceptually compared, using theoretical analysis, to two mesh-based methods, namely, the subdivision-based image-representation (SBIR) method of Liao et al. and the curvilinear feature driven image-representation (CFDIR) method of Zhou et al.
    A new progressive lossy-to-lossless coding method for 2.5-D triangle meshes with arbitrary connectivity
    (2016-11-03) Han, Dan; Adams, Michael D.
A new progressive lossy-to-lossless coding framework for 2.5-dimensional (2.5-D) triangle meshes with arbitrary connectivity is proposed by combining ideas from the previously proposed average-difference image-tree (ADIT) method and the Peng-Kuo (PK) method with several modifications. The proposed method represents the 2.5-D triangle mesh with a binary tree data structure, and codes the tree by a top-down traversal. The proposed framework contains several parameters. Many variations are tried in order to find a good choice for each parameter, considering both the lossless and progressive coding performance. Based on extensive experimentation, we recommend a particular set of best choices to be used for these parameters, leading to the mesh-coding method proposed herein.
    A novel fully progressive lossy-to-lossless coder for arbitrarily-connected triangle-mesh models of images and other bivariate functions
    (2018-08-16) Guo, Jiacheng; Adams, Michael D.; Agathoklis, Panajotis
A new progressive lossy-to-lossless coding method for arbitrarily-connected triangle mesh models of bivariate functions is proposed. The algorithm employs a novel representation of a mesh dataset called a bivariate-function description (BFD) tree, and codes the tree in an efficient manner. The proposed coder yields a particularly compact description of the mesh connectivity by only coding the constrained edges that are not locally preferred Delaunay (locally PD). Experimental results show our method to be vastly superior to previously-proposed coding frameworks in both lossless and progressive coding performance. In terms of lossless coding performance, the proposed method produces coded bitstreams that are 27.3% and 68.1% smaller than those generated by the Edgebreaker and Wavemesh methods, respectively. The progressive coding performance is measured in terms of the PSNR of function reconstructions generated from the meshes decoded at intermediate stages. The experimental results show that the function approximations obtained with the proposed approach are vastly superior to those yielded by the image tree (IT) method, the scattered data coding (SDC) method, the average-difference image tree (ADIT) method, and the Wavemesh method, with average improvements of 4.70 dB, 10.06 dB, 2.92 dB, and 10.19 dB in PSNR, respectively. The proposed coding approach can also be combined with a mesh generator to form a highly effective mesh-based image coding system, which is evaluated by comparison with the popular JPEG2000 codec for images that are nearly piecewise smooth. The images are compressed with the mesh-based image coder and the JPEG2000 codec at fixed compression rates, and the quality of the resulting reconstructions is measured in terms of PSNR. The images obtained with our method are shown to have better quality than those produced by the JPEG2000 codec, with an average improvement of 3.46 dB.
  • Item
    A Novel Progressive Lossy-to-Lossless Coding Method for Mesh Models of Images
    (2015-07-29) Feng, Xiao; Adams, Michael D.
    A novel progressive lossy-to-lossless coding method is proposed for mesh models of images whose underlying triangulations have arbitrary connectivity. For a triangulation T of a set P of points, our proposed method represents the connectivity of T as a sequence of edge flips that maps a uniquely-determined Delaunay triangulation (i.e., the preferred-directions Delaunay triangulation) of P to T. The coding efficiency of our method is highest when the underlying triangulation connectivity is close to Delaunay, and degrades slowly as the connectivity moves away from being Delaunay. Through experimental results, we show that our proposed coding method significantly outperforms a simple baseline coding scheme. Furthermore, our proposed method can outperform traditional connectivity coding methods for meshes that do not deviate too far from Delaunay connectivity. This result is of practical significance since, in many applications, mesh connectivity is often close to Delaunay, due to the good approximation properties of Delaunay triangulations.
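    The edge flip that the sequence above is built from replaces the diagonal of the quadrilateral formed by the two triangles incident on an interior edge. A minimal sketch, using a hypothetical representation of a triangulation as a set of frozensets of vertex ids (not the paper's data structure, and without the convexity check a real flip requires):

    ```python
    def flip_edge(triangles, a, b):
        """Flip interior edge (a, b): replace it with the opposite
        diagonal (c, d) of the quadrilateral a-c-b-d."""
        incident = [t for t in triangles if a in t and b in t]
        if len(incident) != 2:
            raise ValueError("edge (a, b) is not an interior edge")
        # The vertices of the two triangles opposite edge (a, b).
        (c,) = incident[0] - {a, b}
        (d,) = incident[1] - {a, b}
        return (triangles - set(incident)) | {frozenset({a, c, d}),
                                              frozenset({b, c, d})}
    ```

    Since a flip is its own inverse (flipping the new diagonal restores the old one), a decoder that knows the canonical Delaunay triangulation of P can replay the coded flip sequence to recover T exactly.
    
    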
  • Item
    A Software Package for Generating Code Coverage Reports With Gcov
    (2021-12-14) Hu, Zhenmai Jr; Adams, Michael D.
    Code coverage is an essential tool often used in software testing. Therefore, a tool that generates well-organized, easy-to-read, customized reports containing code coverage information is highly beneficial. In this report, we present the Gcov Report Generator (GRG) software, which consists of a library for generating code coverage reports in PDF format with Gcov and a supporting application program named coverage that exposes the library through the command line. The GRG software works with version 10 and later of the GCC C++ compiler. Documentation of the application programming interface for the GRG library and the command-line interface for coverage, along with examples of the generated PDF reports, is presented. The GRG software can be used as a front end to the Gcov program to generate code coverage reports in PDF format with function-coverage, statement-coverage, and branch-coverage information. In addition, program options can be used to filter file and function patterns, select coverage criteria, specify coverage thresholds, and aggregate function information for templates, constructors, and destructors.
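    Independently of GRG, the raw data such a front end consumes is the annotated-source format that gcov itself emits: each line of a .gcov file has the form "count: lineno: source", where the count field is "-" for non-executable lines and "#####" (or "=====" for exception-only paths) for executable lines that never ran. A minimal sketch (not GRG code) that tallies statement coverage from that format:

    ```python
    import re

    def line_coverage(gcov_text):
        # Tally executed vs. executable lines from gcov's annotated
        # "count: lineno: source" output.
        executed = executable = 0
        for line in gcov_text.splitlines():
            m = re.match(r'\s*([^:]+):\s*(\d+):', line)
            if not m or m.group(2) == '0':   # lineno 0 carries file metadata
                continue
            count = m.group(1).strip()
            if count == '-':                 # non-executable line
                continue
            executable += 1
            if count not in ('#####', '====='):
                executed += 1
        return executed, executable
    ```

    On a file where two of three executable lines ran, this returns (2, 3), i.e., 66.7% statement coverage; a report generator's job is then largely formatting and aggregating such tallies per file and per function.
    
    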