ETD (Electronic Theses and Dissertations)


For information on how to submit your thesis to this collection, please visit the ETD page on the UVic Libraries website.

Access to the full text of some theses may be restricted at the request of the author.

All theses from 2011 to the present are in this collection, as well as some from 2010 and earlier years.


Recent Submissions

Now showing 1 - 20 of 7958
  • Item
    QPLEX: Towards the Integration of Platform Agnostic Quantum Computation into Combinatorial Optimization Software
    (2024) Giraldo Botello, Juan Fernando; Müller, Hausi A.; Villegas Machado, Norha Milena
Quantum computing has the potential to surpass the capabilities of current classical computers when solving complex problems. Combinatorial optimization has emerged as a pivotal target area for quantum computers, as problems in this field are renowned for their complexity and resource-intensive nature. Moreover, these challenges play a critical role in various industrial sectors, including logistics, manufacturing, and finance. This thesis explores the integration of quantum computation into classical software tools as a means to potentially address combinatorial optimization problems more efficiently and effectively. This work introduces QPLEX, a Python software library that enables practitioners and researchers to implement the general mathematical formulation of a given combinatorial optimization problem once and execute it seamlessly on multiple quantum devices using various quantum algorithms. This software solution automatically adapts a general optimization model to the specific instructions utilized by the target quantum device’s SDK. It offers a versatile execution workflow capable of running gate-based hybrid quantum-classical algorithms for combinatorial optimization in a platform-agnostic manner. This approach reduces the programming overhead required for modeling and experimenting with combinatorial optimization solutions. Within this manuscript, we introduce the various aspects associated with the development of QPLEX in a clear and comprehensive manner. These aspects encompass the quantum algorithms and quantum hardware available in the library, along with QPLEX’s system design and implementation. Additionally, we provide a guide on how to use the library and conduct a thorough evaluation of the software solution within a specific use case as part of this thesis.
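The model-once, run-anywhere workflow the abstract describes can be sketched in miniature. The classes below are purely illustrative stand-ins and are not QPLEX's actual API: a QUBO (quadratic unconstrained binary optimization) model is defined once, and a brute-force enumerator plays the role of a device-specific backend adapter.

```python
# Hypothetical sketch of a platform-agnostic combinatorial optimization workflow.
# Class and method names are illustrative only -- they are NOT QPLEX's real API.

class QUBOModel:
    """A combinatorial problem in QUBO form: minimize sum of Q[i,j]*x_i*x_j, x in {0,1}^n."""
    def __init__(self, num_vars):
        self.num_vars = num_vars
        self.Q = {}  # (i, j) -> coefficient

    def add_term(self, i, j, coeff):
        self.Q[(i, j)] = self.Q.get((i, j), 0.0) + coeff

    def energy(self, bits):
        return sum(c * bits[i] * bits[j] for (i, j), c in self.Q.items())


class BruteForceBackend:
    """Stand-in for a device-specific SDK adapter: same model, different executor."""
    def solve(self, model):
        from itertools import product
        best = min(product([0, 1], repeat=model.num_vars), key=model.energy)
        return list(best), model.energy(best)


# Define the model once (here: MaxCut on a triangle, a standard QUBO example),
# then hand it to any backend that implements solve().
model = QUBOModel(3)
for i, j in [(0, 1), (0, 2), (1, 2)]:
    model.add_term(i, i, -1)  # cutting edge (i, j) contributes -(x_i + x_j - 2*x_i*x_j)
    model.add_term(j, j, -1)
    model.add_term(i, j, 2)

solution, energy = BruteForceBackend().solve(model)
print(solution, energy)
```

A quantum backend would translate the same `QUBOModel` into its SDK's circuit instructions instead of enumerating assignments; the point is that the problem formulation is written only once.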
  • Item
    Mechanisms of cerebral artery compliance at sea-level and following acclimatization to high altitude.
    (2024) Underwood, Destiny; Smith, Kurt
Brain health is dependent on adequate cerebral blood flow (CBF) delivered through healthy compliant vessels that buffer pulsatile hemodynamic stress. Pharmacological interventions at sea-level (SL) and high altitude (HA, 5050 m) that increase and lower CBF provide a useful experimental design to assess the mechanisms involved in buffering cerebrovascular hemodynamic stress. We characterized pulsatile hemodynamic damping factors (DFi) as an index of cerebral hemodynamic stress. DFi was calculated from pulsatility (PI) in the internal carotid (ICA) and middle cerebral arteries (MCA) at SL and HA following pharmacological attempts to increase (SL = Dobutamine, DOB; HA = DOB + Acetazolamide, DOB+ACZ) and decrease (Indomethacin; INDO) CBF in healthy lowlander adults (n = 12, 4 females). Cerebrovascular hemodynamics in the ICA (flow [QICA], PIICA) and MCA (velocity [MCAv], PIMCA) were measured using ultrasound; DFi = PIICA:PIMCA. Administration of DOB (2-5 μg/kg/min) at SL, DOB+ACZ (5 μg/kg/min + 10 mg/kg) at HA, and INDO (1.45 mg/kg) at SL and HA was performed on separate days in randomized order. No QICA response was observed following DOB, while QICA increased following DOB+ACZ (change +41 ± 24 ml.min-1, p = 0.01) and decreased following INDO at SL (change -53 ± 56 ml.min-1, p = 0.04) and HA (change -41 ± 18 ml.min-1, p = 0.004). DOB and DOB+ACZ administration differentially altered HR (change -3 bpm; change +5 bpm, p = 0.02), ICAv (change -6 ± 10 cm.s-1; change +10 ± 11 cm.s-1; p = 0.04), MCAv (change +0 ± 10 cm.s-1; change +17 ± 5 cm.s-1), and PIICA (change +0.4 ± 0.2 a.u.; change +0.2 ± 0.09 a.u.; p = 0.03). DOB reduced DFi (change -0.1 ± 0.05, p = 0.02) at SL. Meanwhile, DFi following INDO was significantly lower at HA (change -0.54 ± 0.3 a.u., p = 0.02) but not at SL (change -0.26 ± 0.3 a.u., p = 0.18). The results from these two field experiments highlight that reducing CBF via cyclooxygenase inhibition detrimentally alters the buffering of cerebrovascular hemodynamic forces.
In contrast, at HA, when CBF was increased following DOB+ACZ, cerebrovascular hemodynamic regulation was preserved.
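The damping factor above is a simple ratio of pulsatility indices. A minimal sketch of the calculation, assuming the standard Gosling pulsatility index PI = (systolic − diastolic) / mean; the velocities below are invented for illustration, not data from the study:

```python
def pulsatility_index(systolic, diastolic, mean):
    """Gosling pulsatility index from peak, trough, and mean velocity (cm/s)."""
    return (systolic - diastolic) / mean

def damping_factor(pi_ica, pi_mca):
    """DFi = PI_ICA : PI_MCA, the extracranial-to-intracranial pulsatility ratio."""
    return pi_ica / pi_mca

# Hypothetical velocities, chosen only to make the arithmetic visible:
pi_ica = pulsatility_index(systolic=80.0, diastolic=30.0, mean=50.0)   # 1.0
pi_mca = pulsatility_index(systolic=95.0, diastolic=45.0, mean=62.5)   # 0.8
dfi = damping_factor(pi_ica, pi_mca)
print(round(dfi, 2))
```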
  • Item
    An Intersectionality-Informed Analysis of Loneliness and Discrimination Experienced by 2S/GBTQ+ People Living With Disabilities Before and During the COVID-19 Pandemic
    (2024) Amato, Anthony Theodore; Lachowsky, Nathan; Card, Kiffer
Introduction: Social inequities such as loneliness and discrimination due to sexual orientation (herein, discrimination) are prevalent across disabled people and Two-Spirit, Gay, Bisexual and Trans men, Queer and Non-Binary (2S/GBTQ+) communities. However, little is known about how loneliness and discrimination were experienced in Canada at the intersection of disability and 2S/GBTQ+ communities, especially before and during the COVID-19 pandemic. Method: To address this knowledge gap, four cycles (2019, 2020, 2021, 2022) of cross-sectional, bilingual, community-based Sex Now survey data were used, which included 2S/GBTQ+ people aged 15 years or older and living in Canada. A total of 12,355 2S/GBTQ+ participants responded to loneliness outcomes, and 11,575 to discrimination outcomes. A multi-stage data analysis was conducted. First, cross-tabulations and chi-square tests were used to describe and test for differences in outcomes across the four survey cycles. Second, pooled data were analyzed to describe and test for differences in outcomes based on social determinants of health. Third, stratified analyses were repeated for participants living with and without a disability. Finally, only among 2S/GBTQ+ participants living with disabilities, multivariable logistic regression models of each outcome identified 1) temporal trends by survey year, and 2) social determinants of health correlates. Results: There were statistically significant differences in outcomes across survey cycles, which were greater among 2S/GBTQ+ participants living with a disability. Compared with 2019 (before COVID-19), the odds of reporting loneliness were greater for 2S/GBTQ+ participants living with disabilities in 2020 and 2021 (but not 2022). 2S/GBTQ+ participants living with a disability who reported a racialized identity, financial strain, or a gender-expansive identity had greater odds of reporting loneliness.
Compared with 2019 (before COVID-19), decreased odds of reporting discrimination were found in 2021 and 2022 (but not 2020). Generally, older 2S/GBTQ+ participants living with a disability were less likely to experience discrimination. 2S/GBTQ+ participants living with disabilities who were racialized, queer versus bisexual identified, and gender-expansive reported greater odds of discrimination. Conclusions: These findings suggest that 2S/GBTQ+ people living with disabilities experienced greater loneliness and less discrimination during COVID-19. However, social inequities were also present among 2S/GBTQ+ people living with disabilities. Equitable policy planning is needed to ensure that underserved yet deserving communities are not disproportionately affected by future pandemics and associated public health responses.
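The "odds of reporting" comparisons above come from multivariable logistic regression, where a fitted logit coefficient b converts to an odds ratio exp(b). A generic illustration of that conversion; the coefficient and standard error below are invented, not values from this study:

```python
import math

def odds_ratio(coef, se, z=1.96):
    """Odds ratio and approximate 95% CI from a logit coefficient and its SE."""
    return math.exp(coef), (math.exp(coef - z * se), math.exp(coef + z * se))

# Hypothetical coefficient: a positive b means greater odds than the reference group.
or_, (lo, hi) = odds_ratio(coef=0.47, se=0.12)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```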
  • Item
    Paleoenvironmental interpretations of the Late Triassic marine realm across the Canadian Cordillera: Slow burn of the end-Triassic mass extinction
    (2024) Lei, Jerry; Husson, Jon
    Despite representing some of the most pivotal intervals in evolutionary history, the timing and tempo of mass extinction events have remained contentious. Many studies have contributed evidence suggesting that ecosystem disturbance associated with the end-Triassic mass extinction (ETME) began prior to the Triassic/Jurassic boundary (TJB), but the extent and duration of this leadup phase is not well established. This uncertainty is exacerbated by a comparative lack of studies investigating the ETME within the context of long-term Late Triassic trends, as well as by the dominance of Tethyan datasets in paleoenvironmental interpretations of the epoch. The research presented in this dissertation consists of a multi-faceted investigation of Panthalassan paleoenvironmental conditions spanning from the Norian/Rhaetian boundary (NRB) to across the TJB, as recorded in western Canadian marine strata. An instance of coral reef collapse on Mount Sinwa, British Columbia, is associated with the paleoenvironmental disturbance around the NRB via conodont and Re–Os isochron age constraints. Ratios of 87Sr/86Sr are observed to gradually increase across the late Norian, as opposed to the sudden drop previously observed in Tethyan datasets, indicating the NRB disturbance was not triggered by mantle-derived volcanism on a global scale. A 3 – 4‰ negative excursion in δ13C values is captured in the latest Norian on Mount Sinwa, consistent with the global carbon cycling disruption proposed to occur around the NRB by prior studies. The conodont species Mockina carinata and Mockina englandi are especially abundant in the Norian and Rhaetian strata of Panthalassa. Morphometric analyses on these two conodont species demonstrate a gradual reduction of platform width across the NRB. 
These intraspecific trends are likely a more conservative parallel to concurrent intergeneric morphology shifts observed in Tethyan conodonts, together potentially implying a global shift in conodont diet away from mineralized food sources during this time. This may suggest that the biomineralization pressure typically associated with the ETME began at a lesser severity around the NRB, and that conodont biodiversity underwent only limited recovery between the substantive turnover at the NRB and complete extinction of the class around the ETME. Specimens of both these species that have a mid-platform length to breadth ratio greater than 3:1 are observed exclusively in the Rhaetian, a clear sign of morphotype origination or subspeciation, with implications for improved biostratigraphic utility. The compilation of δ13C values across stratigraphic sections from Williston Lake, Holberg Inlet, and Kyuquot Sound in the Canadian Cordillera develops a comprehensive Panthalassan record spanning from the Norian through into the Hettangian, with representation from a variety of depositional settings across a wide paleogeographic area. Three distinct negative excursions are observed, with one proximal to the NRB, one within the Rhaetian, and another across the TJB. The somewhat variable positions of these excursions suggest that the earliest “precursor” excursion associated with the Rhaetian leadup to the ETME may be indistinguishable from an excursion associated with the NRB. Some of the observed excursions are too large in magnitude to reflect shifts in global ocean water chemistry, necessitating a local-scale amplification mechanism, such as disturbance-triggered organic carbon respiration in a water column with restricted circulation. Nevertheless, this evidence for repeated carbon cycling instability indicates the ecological distress that initiated around the NRB persisted across the Rhaetian, escalating into the TJB. 
Drawing from a combination of lithological, paleontological, and geochemical evidence from across the Canadian Cordillera, this dissertation supports the hypothesis of a protracted ETME that initiated as early as the NRB. With implications of elevated extinction pressure persisting for millions of years before the climax at the TJB, this research challenges preconceptions of the timescale in which mass extinction events ought to be envisioned.
  • Item
    Ancient abundance, distribution, and size of Olympia Oysters (Ostrea lurida) in the Salish Sea: a perspective from the Lekwungen village of Kosapsom (DcRu-4), southern Vancouver Island, British Columbia
    (2024) Vollman, Taylor; McKechnie, Iain
Olympia oysters (Ostrea lurida) are the only oyster species native to the Northwest Coast of North America and are currently a focus of restoration and management following a collapse over the past 150 years. This thesis examines 42 archaeological assemblages containing Olympia oysters in the Salish Sea to better understand Indigenous uses and changes in abundance and distribution between ancient and modern times, and develops a method to estimate ancient size-at-harvest from partial valves. I observe that Olympia oysters are not a particularly abundant species in archaeological sites when measured by weight and MNI (<15% relative frequency), except in a few sites with high abundance in specific nearshore habitats and locations. Additionally, I examine the size and abundance of Olympia oysters from the Kosapsom Village site (DcRu-4), a site with exceptionally high Olympia oyster frequency (~68% MNI) located on southern Vancouver Island in British Columbia in the traditional territories of the Esquimalt and Songhees Nations. I compare oyster size ranges from Kosapsom to modern restoration sites and observe that sizes are larger than modern oysters in the same waterway but are similar to a 20+ year restoration site in Fidalgo Bay, Washington. Both abundance and size at Kosapsom increased over 1800 years. I interpret these increased sizes (~14% increase) as reflective of harvesting restrictions and population enhancement strategies, which are consistent with maintaining long-term harvest stability. This research contributes to the growing recognition that archaeological records of traditional Indigenous shellfish use and management hold great potential to expand historical baselines and inform modern coastal restoration and conservation strategies.
  • Item
    A Laboratory Study on the Influence of Guided Drop Tower Carriage Mass and Kinematic Differences to Full-Surrogate Free Falls Toward Enhanced Helmet Certification Methods
    (2024) Brice, Aaron; Dennison, Christopher
Falling from height presents a significant risk for military personnel due to the frequency at which they perform high-exposure maneuvers, such as walking along unstable structures, rappelling from buildings or aircraft, and low-altitude egressing. Traumatic brain injury (TBI) resulting from falls from height (FFH) accounts for approximately 20% of TBIs with a reported cause in the military, despite the presence of protective headgear. This is likely because current certification testing performed on military helmets emphasizes protection against ballistic threats over blunt impacts, such as falls. Military personnel have identified the need for the next generation of helmets to provide better protection against blunt impacts. To develop such helmets, a method for helmet evaluation in scenarios that are representative of real-life falls must be established as the new standard for helmet impact testing. Guided vertical drop towers are a test device commonly used to evaluate the impact-attenuating properties of protective headgear in headfirst falls during certification testing. These devices provide a simple, low-cost, repeatable means of conducting certification tests compared with using full-body surrogates to replicate a person experiencing a headfirst fall. However, guided drop towers have limitations that may compromise their ability to properly replicate a fall from height. The most notable are that guided drop towers are constrained to a single degree of freedom, and that the impact mass of a drop tower assembly typically includes only the mass of a human head and neck rather than that of a full body. At present there is little work on how these limitations may yield a differing kinematic response between a guided drop tower and an actual fall. The objective of this thesis was to determine whether kinematic differences exist between a guided drop tower and a free-falling person in unhelmeted and helmeted scenarios.
The outcomes of this thesis will contribute toward the development of enhanced test standards that evaluate protective headgear in scenarios that are more representative of real-life falls. A custom guided drop tower equipped with a Hybrid III head/neck and an adjustable-weight drop carriage, along with a full-body Hybrid III 50th percentile male surrogate to represent a falling person, were subjected to two experimental series: 1) unhelmeted impacts at four angles between 30° and 75° and four impact velocities between 1.50 m/s and 3.00 m/s, and 2) helmeted impacts at 30° and 75° with impact velocities of 3.00 m/s and 4.50 m/s. Impacts in both series were conducted onto a rigid impact surface, and head center of gravity linear acceleration, angular acceleration, and angular velocity were measured. Results of the unhelmeted impact series identified that the drop tower can provide an acceptable approximation of the linear acceleration, but not the angular velocity, that is likely to be experienced by a person in a headfirst frontal impact. This is because the angular velocity differed in either the magnitude of the peak or in the direction and timing of peak measures. Changes to the mass of the drop carriage, to bring it closer to that of a full dummy, did not bring angular velocity closer to that measured for the full dummy. The helmeted impact study identified that a drop tower is likely to underestimate peak kinematics in shallow-angle impacts and overestimate them in steep-angle impacts. This suggests that the drop tower, in its current form, provides a varying estimate of the resultant peak kinematics in helmeted impacts that depends on impact angle. These differences in response are primarily attributable to variances in helmet liner engagement between the drop tower and a falling person.
The results of this research found that, in their current form, guided drop towers do not provide a true representation of the kinematic response that is likely to result from a headfirst fall, either unhelmeted or helmeted. Further, the addition of mass to the drop carriage in either scenario did not alter the drop tower's response to the point where it matched the measured response of the falling surrogate. These differences between the drop tower and what is likely to be experienced by a falling person, particularly the underestimated responses in shallow-angle helmeted falls, emphasize the need to further develop testing methods so that future helmets are evaluated in a way that effectively tests their impact-attenuating abilities in an actual fall.
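The impact velocities quoted above map directly to equivalent free-fall drop heights through v = sqrt(2gh). A quick sketch of that relationship, ignoring rail friction and air resistance (g = 9.81 m/s² is assumed):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def drop_height(velocity):
    """Free-fall height (m) needed to reach a given impact velocity (m/s): h = v^2 / (2g)."""
    return velocity ** 2 / (2 * G)

# The test velocities from the two experimental series:
for v in (1.50, 3.00, 4.50):
    print(f"{v:.2f} m/s -> drop height {drop_height(v):.3f} m")
```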
  • Item
    Role of the PEST Domains in Proteasomal Degradation of Rett Protein: MeCP2
    (2024) Kalani, Ladan; Ausio, Juan
Located on the X-chromosome is the gene encoding the nuclear protein Methyl CpG binding protein 2 (MeCP2). The instability of this protein causes pleiotropic neurological abnormalities, including the debilitating neurodevelopmental disease Rett syndrome (RTT). MeCP2, an epigenetic regulator abundant in neurons, is involved in pleiotropic molecular interactions. Many deleterious mutations of MeCP2 impact its mRNA or protein levels. Neuron maturation and dendritic arborization are compromised when MeCP2 levels fall outside the homeostatic range. The mechanisms the cell uses to maintain MeCP2 levels within a tight range have yet to be fully understood. Several hypotheses have addressed the homeostatic mechanisms of MeCP2, involving miRNAs, N-terminal degradation signals or N-degrons, and the PEST domains that act as degradation switches upon post-translational modifications (PTMs). Our lab hypothesized the involvement of MeCP2 PEST-mediated degradation as a mechanism of its homeostatic regulation; however, this hypothesis had yet to be experimentally tested. I experimentally tested the PEST-mediated degradation of MeCP2 with Rett-causing mutations by integrating MeCP2 constructs that have an altered or deleted PEST domain, and used microscopy, FRAP analysis, and western blotting to characterize in vitro how these constructs behave relative to WT and mutated MeCP2. For Rett-causing mutations that lower protein levels, such as T158M, the PEST motif expedites degradation, as deleting it results in higher protein levels. Moreover, mutations that result in higher levels of MeCP2, such as R294X, show stronger DNA binding relative to WT, as assessed by NaCl fractionation. For the first time, we report that the Ct-PEST domain of MeCP2 plays a role in its degradation.
  • Item
    Pseudoku: A Sudoku Adjacency Algebra and Fractional Completion Threshold
    (2024) Nimegeers, Kate; Dukes, Peter
The standard Sudoku puzzle is a 9 × 9 grid partitioned into 3 × 3 square boxes and partially filled with symbols from the set {1, 2, ..., 9}, with the goal of the puzzle being to complete the grid so that each symbol appears once and only once in each row, column, and box. We study generalized Sudoku puzzles, set on an n × n grid with cells partitioned into n boxes (sometimes called cages) of height h and width w such that hw = n. Throughout this work, these generalized Sudoku are referred to as (h, w)-Sudoku when h and w are significant, but simply as Sudoku otherwise. The goal of solving a partially filled (h, w)-Sudoku puzzle remains the same: complete the Sudoku by assigning placements in the grid to each symbol from {1, 2, ..., hw} so that each symbol appears once and only once in each row, column, and box. This thesis is specifically concerned with establishing conditions which guarantee a fractional Sudoku completion. A fractional Sudoku completion is an assignment of a set of weights to each symbol-cell incidence, representing the proportion of the symbol for that specific cell. The total weight of symbols for each cell must sum to one, and the sum of the weights for each symbol must be exactly one across the cells of each row, column, and box. These conditions still require a balanced distribution of symbols throughout the grid, but with considerably more flexibility than the typical Sudoku conditions. In order to apply graph-theoretic techniques to the problem, we develop a 4-partite graph representation, GP, for a partial Sudoku, P. The 4 parts correspond to the rows, columns, symbols, and boxes of P, and the edges of GP indicate the conditions for a completed Sudoku that remain unsatisfied in P. We then introduce the concept of a tile: a 4-vertex subgraph of GP, which represents a valid symbol placement in P. Completing P is equivalent to decomposing the edges of GP into these tiles.
We then use an edge-tile inclusion matrix to relate the existence of such a decomposition to the existence of a solution vector with {0, 1} entries for a specific linear system. It is here that we move to the fractional setting through a relaxation of what constitutes an acceptable solution to the linear system; specifically, we are satisfied with solution vectors for which all entries are non-negative. To find conditions that guarantee such a solution exists, we study the Gram matrix of the edge-tile inclusion matrix for the empty (h, w)-Sudoku, denoted M. We show that M is symmetric and that each element of M corresponds to a pair of edges in the graph representation Ghw of the empty (h, w)-Sudoku grid. We then leverage the inherent symmetry of equivalence relations between these edges to establish a Sudoku adjacency algebra which contains M. This allows us to explicitly construct a generalized inverse for M. This generalized inverse, along with some applied perturbation theory, is used to show that, given large enough h and w, the linear system for any sufficiently sparse partial (h, w)-Sudoku is a minor perturbation of the linear system for the empty (h, w)-Sudoku, and therefore allows a fractional completion. After presenting this main result, we take a brief detour to consider the unique case of Sudoku puzzles with thin boxes, examining how fixing the box width w while allowing the height h to grow asymptotically influences the density conditions necessary for fractional completion. We also give an overview of our exploratory use of the Schur complement for matrix decomposition. Although this method did not directly feed into our primary results, it was instrumental in the discovery of the equivalence relations we used to construct our Sudoku adjacency algebra. Finally, we explore the potential applicability of our methodologies to certain Sudoku variants and acknowledge the limitations inherent in our approach.
In the appendices, we provide additional resources that complement the main body of our work. In Appendix A, we give a factorization of the Sudoku matrix M and its eigenvectors as Kronecker products for readers who wish to more directly compare our methodology to algebraic graph theory work done on Sudoku by other researchers. Appendix B presents a series of interactive and educational activities designed to introduce students to the basic principles of Latin squares in a fun spy-themed setting.
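The weight conditions defining a fractional completion can be checked mechanically. A small sketch (not the thesis's machinery) that verifies the simplest case: for the empty (h, w)-Sudoku, assigning weight 1/n to every symbol-cell incidence satisfies every cell, row, column, and box condition. Here (h, w) = (2, 3), so n = 6:

```python
def uniform_weights(h, w):
    """Weight 1/n on every (row, column, symbol) incidence of the empty grid."""
    n = h * w
    return {(r, c, s): 1.0 / n
            for r in range(n) for c in range(n) for s in range(n)}

def is_fractional_completion(weights, h, w):
    """Check the cell/row/column/box weight-sum conditions described above."""
    n, eps = h * w, 1e-9
    # Each cell's weights over all symbols must sum to 1.
    for r in range(n):
        for c in range(n):
            if abs(sum(weights[r, c, s] for s in range(n)) - 1.0) > eps:
                return False
    # Each symbol's weight must sum to 1 across every row, column, and box.
    for s in range(n):
        for r in range(n):
            if abs(sum(weights[r, c, s] for c in range(n)) - 1.0) > eps:
                return False
        for c in range(n):
            if abs(sum(weights[r, c, s] for r in range(n)) - 1.0) > eps:
                return False
        for bi in range(w):          # w box-rows, each of height h
            for bj in range(h):      # h box-columns, each of width w
                box = sum(weights[bi * h + dr, bj * w + dc, s]
                          for dr in range(h) for dc in range(w))
                if abs(box - 1.0) > eps:
                    return False
    return True

print(is_fractional_completion(uniform_weights(2, 3), 2, 3))
```

The thesis's result is about partial Sudoku: it shows the linear system of a sufficiently sparse partial puzzle is a small perturbation of this empty-grid case, so a non-negative solution survives.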
  • Item
    Evaluating the impacts of anthropogenic development on large mammals across protected and industrialized landscapes in Western Canada
    (2024) Smith, Rebecca M; Fisher, Jason T; Shackelford, Nancy
Anthropogenic landscape development leads to substantial habitat loss and fragmentation, with large mammals among the most strongly impacted. In this thesis, I used wildlife camera traps across landscapes in Western Canada to investigate two landscape-level management actions for development. First, protected areas (PAs) control development within their boundaries, so they provide refuge to wildlife from many anthropogenic disturbances. Despite their prevalence, many PAs fall short of protecting species and habitats. Since PAs are intrinsically linked to their surrounding lands, pressures outside of PAs can be sources of mortality for mammals using habitat that spans boundaries. To improve our understanding, Chapter Two of this thesis examined the relative impacts of landscape development inside and outside of PAs on large mammals. Species occurrences were best predicted by models that comprised both inside- and outside-PA development, demonstrating that PAs do not offer the full protection they are mandated to provide. Most of the land on Earth, however, remains unprotected, so conservation relies on species persistence in unprotected regions with active development. The composition and configuration of habitat resulting from development has been found to influence species distributions, but configuration is often disregarded as influential in landscapes with less than 70% total habitat loss. Chapter Three examines the relative influences of landscape composition and configuration on large mammal species distributions across a petroleum extraction region. Both configuration and composition were revealed as important, and the specific measures of configuration that explained species occurrence showed that the landscape configuration resulting from development restructures the ecological mechanics of ecosystems. Together these results can be used to inform landscape management practices across North America to conserve large mammal species.
  • Item
    Drivers of Coastal Morphodynamics on a Deltaic Barrier in the Colombian Caribbean
    (2024) Gómez, Juan Felipe; Kwoll, Eva; Walker, Ian J.
This research presents results from a series of analyses focused on the overarching goal of identifying and understanding drivers of coastal morphodynamics on a deltaic barrier on the Colombian Caribbean coast, east of the Magdalena River mouth. Forcing mechanisms operating at different temporal scales were considered, including the influence of vertical land motion (VLM), storms, river discharge, trade winds, and wave conditions. These forcing mechanisms were related to geomorphic changes determined from satellite imagery taken before and after specific events. Satellite imagery and synthetic aperture radar acquisitions were used to assess decadal-scale coastline changes and VLM for the period 2007–2021. The findings revealed that VLM rates are highly variable alongshore and that subsidence occurs mainly landward of highly erosive stretches of coastline associated with former mangrove forest. Drivers of coastal morphodynamics operating on time spans from days to seasons were assessed by focusing on four lagoons located along the back-barrier to better understand the interplay between extreme events and the breaching and healing of inlets that are temporarily formed between the lagoons and the ocean. Satellite data in conjunction with hourly readings from weather stations spanning the past 50 years helped to determine the conditions that enabled the breaching and healing processes to transpire in the lagoons. Aligned with the predominantly erosive regime along the study area, the findings indicated that the cumulative effect of the breaching and healing of the lagoons resulted in a deltaic barrier that has rolled over the lagoons, modifying their size over time. The occurrence of meteotsunamis and their role in coastal morphodynamics was investigated using a wavelet analysis applied to water-level readings from three tide gauges for the period 2013–2022.
After the discovery of one event with meteotsunami-like characteristics, the atmospheric conditions and total water levels associated with this event were analyzed. The results indicated that total water levels related to the meteotsunami are similar to those produced by moderate storms and both phenomena can induce breaching of lagoons. To date, the barrier has responded to external forcers through a landward displacement of the coastline driven by cycles of lagoon breaching and healing as well as overwashes on lagoons, wetlands, and beaches. Seasonal storms have been critical in forcing these processes and have substantially influenced the barrier evolution during the last 50 years. Taken as a whole, this body of work provides knowledge about the response of deltaic barriers to geomorphologic forcers based on a study area in an understudied region of the Caribbean. At a regional level, the findings are relevant for science-based coastal planning and managing policies. Moreover, this research used a variety of methodological approaches to track causality on coastal landscapes in a manner that can be replicated in other areas with limited pre-existing information and without ongoing monitoring programs.
  • Item
    Infrared-Visible Image Fusion in the Gradient Domain
    (2024) Premaratne, Sanduni; Agathoklis, Panajotis; Bruton, Leonard T.
Due to the complementary properties of infrared cameras compared to conventional visible-light cameras, it has become increasingly popular to fuse infrared and visible images of the same scene for better visual understanding. One major application is surveillance, which involves video and requires fast processing. Therefore, there is a need to investigate novel low-complexity fusion algorithms that can be implemented in real-time applications. In this study, we address this critical research problem by two-scale fusion in the gradient domain with saliency detection and image enhancement. In the proposed method, the source images are first decomposed into base and detail layers. Next, the base parts are fused in the gradient domain by choosing the maximum absolute gradient, whereas the gradients of the detail parts are fused using a weighted average in which the weights are calculated using saliency maps. Prior to fusion, the detail parts are enhanced using a guided filter-based enhancement approach. Finally, the fused gradients of the base and detail components are added together to obtain the gradients of the fused image, from which the fused image is reconstructed using a wavelet-based reconstruction technique. Experimental results demonstrate that the proposed method achieves very competitive performance in subjective and objective fusion assessments, while also outperforming most methods in terms of computational complexity.
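As a toy illustration of the base-layer fusion rule described above (not the paper's implementation): base layers are extracted by moving-average smoothing on 1-D "images", and their gradients are fused by keeping the maximum absolute value at each position. The saliency-weighted detail fusion, guided-filter enhancement, and wavelet reconstruction steps are omitted, and the two input signals are invented.

```python
def base_layer(signal, radius=1):
    """Moving-average smoothing; the residual (signal - base) is the detail layer."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def gradient(signal):
    """Forward-difference gradient."""
    return [b - a for a, b in zip(signal, signal[1:])]

def fuse_max_abs(grad_a, grad_b):
    """Keep, at each position, whichever source gradient is larger in magnitude."""
    return [a if abs(a) >= abs(b) else b for a, b in zip(grad_a, grad_b)]

infrared = [0.0, 0.0, 1.0, 1.0, 0.0]   # hypothetical strong edge from a hot target
visible  = [0.2, 0.3, 0.3, 0.2, 0.2]   # hypothetical low-contrast visible scene

fused_base_grad = fuse_max_abs(gradient(base_layer(infrared)),
                               gradient(base_layer(visible)))
print([round(g, 3) for g in fused_base_grad])
```

The strong infrared edge dominates the fused base gradient wherever it is larger in magnitude, which is exactly how thermally salient structure survives into the fused image.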
  • Item
    Feedback on English as an Additional Language Students’ Writing: Trends in Corrective Feedback Strategies
    (2024) De Paula, Isabel; Anderson, Tim
Written Corrective Feedback (WCF) has received sustained attention in recent years. This growing interest is attributable both to the conceptual controversies surrounding feedback and to the variety of available WCF strategies. While the diversity of options (and opinions) allows teachers to differentiate instruction and feedback, it also creates confusion and insecurity: to make informed decisions about the most suitable strategy, teachers need to fully understand the characteristics of each strategy and the factors that might influence them, in addition to their students' individual needs and abilities. To address this complexity in feedback choices, this study takes a content-analytic approach to synthesize and compare 48 empirical studies of WCF on English as an Additional Language (EAL) students' writing published between 2011 and 2019. The main aim of this content analysis is to investigate WCF trends over the years and to identify factors that could affect the effectiveness of these WCF strategies. Results indicate that WCF can foster improved language accuracy and help EAL students enhance their second-language writing skills. However, feedback's efficacy is mediated by variables that include learners' proficiency levels, age, the learning environment, previous content and metalinguistic knowledge, and students' and teachers' perceptions of the corrective feedback. Furthermore, the duration of exposure to both the target language and the WCF strategy also plays an important role in the effectiveness of the feedback.
  • Item
    The Iceberg Theory on the Shared Understanding of Non-Functional Requirements in Continuous Software Engineering
    (2024) Werner, Colin; Damian, Daniela
While software is largely associated with technology, it is ultimately software developers who design, discuss, architect, write, test, re-write, and maintain the code that is compiled into the respective software. These humans are, after all, not perfect, and for the most part do not work in isolation from one another. Thus, building a shared understanding amongst a group of software developers, including of requirements, is key to ensuring that downstream software activities are efficient and effective. Non-functional requirements (NFRs), which include performance, availability, and maintainability, are vitally important to overall software quality and to ensuring that the software fulfills its intended purpose. Research has shown that NFRs are, in practice, poorly defined and difficult to verify, especially in agile environments. A lack of attention to NFRs can derail a software project. Organizations frequently incur technical debt by trading off the timely delivery of promised software features against rigorous system design that pays sufficient attention to vital NFRs. The software industry has always sought to shorten the delivery time of systems and features, notably through the adoption of iterative and incremental methods, in particular agile methods, which have become the norm. Practices such as Continuous Integration, which relies on automatically testing newly integrated code, inspired the automation of other activities within software development, allowing the whole development process to become more continuous. This has led to a trend called continuous software engineering (CSE). CSE relies on automated and fast releases of new versions, delivering new features quickly to users. However, feature development is usually driven by functional requirements (FRs), and such fast delivery frequently means that non-functional requirements receive less attention. 
Previous work has pointed out that NFRs are frequently neglected in agile development, and little work exists that explores NFRs in the context of CSE. A major complication of an NFR is that it relates to an entire system's architecture, which is problematic for two reasons. First, evaluating the impact on NFRs of the frequent updates that come with a continuous software engineering process is very challenging. Second, it can be difficult for all developers in a project to have a shared, common understanding of a system's architecture, particularly for very large systems. In this dissertation, I describe a multi-year, multi-case study that empirically investigates how four organizations, for which NFRs are paramount to business survival, build, manage, and maintain a shared understanding of NFRs in their continuous practices. My research goal is to develop a deep and rich understanding of the relationship between an organization and its shared understanding of NFRs in CSE. Through the results and insights from this in-depth research, I developed the Iceberg Theory on the complex and intricate relationship between a shared understanding of NFRs and CSE. The theory includes a classification of shared understanding and of its absence, nine practices an organization may use to build a shared understanding, and the associated challenges and triggers that prompted organizations to build that shared understanding.
  • Item
    How High School Teachers in Victoria, BC Are Implementing British Columbia’s New Assessment Framework in Their Classrooms
    (2024) Muirhead, Ariel U'Chong; Sanford, Kathy
As a result of the change to a concept-based, competency-driven curriculum begun in 2010, British Columbia's Ministry of Education and Childcare (BCMECC) has been rolling out a new provincial assessment framework since 2016. Draft provincial documents described the new assessment framework as aligning with standards-based grading, introduced a new provincial proficiency scale, and announced the elimination of percentages and letter grades in the assessment of kindergarten-to-grade-9 classes. Implementation of this new assessment framework was mandated across the province in September 2023. My study asks how some high school teachers in Victoria, BC had been implementing this framework in their classrooms prior to the mandate, with the idea that their experiences, successes, and challenges would be of value to teachers required to make similar changes, as well as to school districts, educational partners, and the BCMECC, who could use the data to support teachers through this change. By interviewing four high school teachers, I collected narrative data using a multiple-case-study model, which I analyzed with thematic analysis. The study finds that the new classroom assessment framework is most authentically implemented when teacher assessment philosophy aligns with the provincial framework, embraces learning as a process, involves students in the assessment process, and keeps students at the centre of decision-making. While implementing new classroom assessment models demands considerable mental and practical work and time from teachers, this work is both important and necessary for students' emotional and academic success.
  • Item
    Through the Looking Glass - Strategies in Achieving Stakeholder Performance
    (2024) Salmon, Emily; Murphy, Matthew
This dissertation explores stakeholder value capture through three interconnected studies, collectively advancing our understanding of how community stakeholders systematically capture diverse elements of value over time. By challenging fundamental economic assumptions within value-based strategy theory and incorporating a behavioural-theory lens, I develop a theoretical model that offers conceptual clarity on the concept of value capture, disentangling potential from realized value capture. The subsequent empirical studies test and build upon these theoretical advancements, with a specific focus on Indigenous communities affected by nearby mining projects. In this context, I investigate the impact of contractual stakeholder governance, specifically the negotiation and implementation of Community Benefit Agreements (CBAs), on community stakeholder value capture outcomes. Contrary to conventional wisdom, the findings indicate that contractual forms of stakeholder governance, particularly CBAs, do not consistently lead to higher value capture outcomes. Furthermore, the research reveals that stakeholders concurrently experience both value capture and value destruction across various dimensions, challenging existing theoretical explanations. Expanding on these insights, the research then uncovers the value capture strategies associated with achieving higher levels of value capture, finding that communities can capture value across varying levels of bargaining power, while the ease of capturing value varies with the type of value. This holistic exploration enhances our understanding of the determinants of stakeholder value capture, supplementing established explanations centered on bargaining power with theoretical developments related to complementary resources and institutional contexts. 
Collectively, these studies offer a nuanced and comprehensive perspective on stakeholder value capture processes, contributing to the evolving landscape of value capture theory and practice.
  • Item
    Mobile Guards’ Strategies for Graph Surveillance and Protection
    (2024) Virgile, Virgélot; MacGillivray, Gary; Mynhardt, C. M.
    In this dissertation, we study the “one guard moves” model of both the eternal domination game and the eviction game. We investigate the computational complexity of deciding whether k guards can respond to any sequence of attacks on an n-vertex graph G in both games. We show that this decision problem is EXPTIME-complete when neither G nor k is fixed, and when the initial configuration of the guards is given in both cases. We further show that in the case of the eternal domination game, if the guards can choose their initial configuration and the graph is directed, the decision problem remains EXPTIME-complete. We present an algorithm that decides the problem in time O(kn^(k+2)) for both games, marking a significant improvement over the previously fastest known algorithm which has time complexity O(n^(2k+2)). Our algorithm further determines the maximum number of attacks (potentially infinite) the guards can defend from each configuration. We study the relationship between the eternal domination number of a graph and its clique covering number using both large-scale computation and analytic methods. In doing so, we answer two open questions of Klostermeyer and Mynhardt, and disprove a conjecture of Klostermeyer and MacGillivray (The Fundamental Conjecture [Eternal Domination: Criticality and Reachability, Discuss. Math. Graph Theory 37 (2017), no. 1, 63–77]). We prove that the smallest graph having its eternal domination number less than its clique covering number has ten vertices. We also demonstrate that for any integer k>=2, there exist infinitely many graphs having domination number and eternal domination number equal to k containing dominating sets which are not eternal dominating sets. In addition, we show that there exists a function f such that for any integer k>=1, any graph with independence number k has eviction number at most f(k). We further show that the eviction number of cographs can be computed in polynomial time. 
Finally, we study the length of both games when played on an n-vertex graph occupied by k guards; that is, the maximum number of turns required before a winner can be determined.
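For contrast with the game-theoretic machinery above: the static domination number of a graph, which is a lower bound on its eternal domination number, can be computed by exhaustive search on small instances. The sketch below is purely illustrative (the adjacency-list encoding is an assumption, and it is far simpler than the thesis' O(kn^(k+2)) game algorithm, which must also track guard movements between attacks).

```python
from itertools import combinations

def domination_number(adj):
    """Brute-force domination number of a graph given as an adjacency list
    (dict: vertex -> set of neighbours). Exponential time; tiny graphs only."""
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for cand in combinations(vertices, k):
            # A set dominates the graph if every vertex is in it or adjacent to it.
            covered = set(cand).union(*(adj[v] for v in cand))
            if covered == set(vertices):
                return k
    return 0

# Example: the 5-cycle C5, whose domination number is 2 (e.g. vertices {0, 2}).
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
```

Because every eternal dominating set must be a dominating set at every step of the game, a value computed this way bounds the eternal domination number from below.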
  • Item
    Secure and Privacy-preserving Data Aggregation in Internet of Vehicles
    (2024) Liu, Rui; Pan, Jianping
In the Internet of Vehicles (IoV), crucial data is aggregated to support applications in automated driving, intelligent transportation, and smart cities. Certain challenges in this process, particularly regarding security and privacy, must be addressed carefully. In this dissertation, we first target a representative IoV data-aggregation scenario: fine-grained air-quality monitoring. The major challenges we focus on are: a) the sensory data provided by vehicles usually vary in quality; b) there is a significant difference in the traffic volumes of streets or blocks, which leads to a data-sparsity problem; and c) the original sensory data, vehicle identities, and trajectories face risks of exposure. To address these issues, we propose a truth-discovery algorithm incorporating multiple correlations and extend it to a privacy-preserving framework, EAirQ. EAirQ relies on a traditional end-to-end data-aggregation architecture; designing a new architecture specifically for vehicular networks may hold significant value. Thus, we introduce a privacy-preserving two-layered architecture with vehicle clusters. Instead of focusing on a specific application, we show how this architecture can be adopted in a general distributed machine-learning scenario. We name this part of the work CRS. CRS not only protects the local data, identities, and trajectories of vehicles, but also ensures the accuracy of aggregated learning models by handling packet loss at the application layer. We further work on eliminating the limitations of the proposed two-layered architecture in three aspects: a) providing fast and easy verification of messages within a cluster; b) preserving vehicle privacy without adopting the pseudonym technique; and c) considering the adversarial behaviors of vehicles to enhance security. Our solution introduces a novel concept, data approval, based on the Schnorr signature scheme. 
This part of the work, named SADA, meets more security requirements and remains lightweight for vehicles. In addition to exploring new solutions for preserving the privacy of vehicle identities and trajectories, we also pay attention to the latest industry standards. This part of the work tackles the challenge of certificate provisioning in the latest standard solution for satisfying the anonymous-communication requirement in IoV. We propose a non-interactive approach, named NOINS, that empowers vehicles to generate short-term key pairs and anonymous implicit certificates on their own. This new paradigm opens up possibilities for many extensions and applications.
  • Item
    Flow Analysis of Non-Spherical Granular Materials in a Two-Dimensional Hopper
    (2024) Mortezapour, Abdolreza; Nadler, Ben
Non-spherical granular materials have been of interest to various research communities and industries due to their widespread presence in natural and engineered systems. These materials, which include substances such as soil, powders, dry sludges, and grains, exhibit complex behaviors influenced by factors such as grain interactions and boundary conditions. Under suitable conditions, these materials can flow, and they rank second only to water as the most handled materials in diverse industries. Therefore, understanding how these materials flow is important in domains ranging from wastewater treatment and mining to the food and pharmaceutical industries. Granular flow within hoppers, driven by gravity, provides cost-effective transportation and is widely used in material handling and storage systems. This research investigates the flow behavior of non-spherical grains in a hopper by implementing a previously developed model for non-spherical granular flow in a Finite Element Analysis (FEA) suite. A simulation reproducing an available experiment is conducted with the developed model for both spherical and non-spherical grains. The simulation results consistently align with those of the experiment, demonstrating the validity and accuracy of the simulation. More complex conditions in a practical application are then examined to showcase the capability of the model and the implementation approach. The simulation results reveal the effect of boundary conditions and model parameters on grain orientation and flow within the hopper. The main motivation behind this research lies in establishing a foundation for using the capabilities of an FEA suite to facilitate further investigations spanning a broad range of geometries and conditions, addressing challenges in the numerical modeling of complex non-spherical granular flows. 
The outcome of this research, the successful integration of the developed model into the suite and the simulation of granular flow under different conditions and geometries, can be employed in further studies of practical significance to industries dealing with granular materials. It lays the groundwork for a versatile FEA suite that simulates the complex behaviors of granular materials, a foundation that supports further studies addressing potential issues related to grain flow in hoppers, optimizing industrial processes, and improving material handling and storage techniques.
  • Item
    Single-Class Instance Segmentation for Vectorization of Line Drawings
    (2024) Vohra, Rhythm; Branzan Albu, Alexandra
Images can be represented and stored in either raster or vector formats. Raster images are the most ubiquitous and are defined as matrices of pixel intensities/colours, while vector images consist of a finite set of geometric primitives, such as lines, curves, and polygons. Since geometric shapes are expressed via mathematical equations and defined by a limited number of control points, they can be manipulated much more easily than by working directly with pixels; hence, the vector format is much preferred to raster for image editing and understanding purposes. The conversion of a raster image into its vector correspondent is a non-trivial process called image vectorization. Creating vector images from a given raster image can be time-consuming and requires the expertise of a skilled graphic artist. This thesis explores the effectiveness of a Deep Learning-based framework for vectorizing raster images of line drawings with minimal user intervention. To improve the visual representation of the image, each stroke in the line drawing is represented with a different label and vectorized. In this document, we present an in-depth study of image vectorization, covering the objective of our research, its challenges, and potential solutions, and we compare the outcomes of our approach on six datasets consisting of different types of hand drawings. More specifically, this thesis begins by comparing raster images with vector images, discussing the importance of image vectorization, and stating our objective of converting raster images to vector-based representations by accurately separating each stroke in the line drawings. In later chapters, a Deep Learning-based segmentation methodology is introduced to perform Single-Class Instance Segmentation of hand drawings, processing the input raster image by labeling each pixel as belonging to a particular stroke instance. This segmentation approach is able to leverage the spatial relationships between stroke instances. 
A novel loss function is designed specifically for our highly imbalanced datasets, scaling the margins and adding a regularization term that improves feature selection. The proposed margin-regularized loss function is combined in a weighted sum with the Dice loss to reduce spatial overlap and improve predictions on infrequent labels. Finally, the effectiveness of our segmentation technique for line-drawing vectorization is compared experimentally with the state of the art and with our reference method. Our method successfully handles a wide variety of human drawing styles. The results are comparable in terms of accuracy and substantially ahead in terms of speed and complexity relative to other methods.
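The weighted loss combination described above can be sketched as follows. The Dice term is the standard soft Dice loss; the margin term, however, is a generic large-margin variant of binary cross-entropy used purely as a stand-in, since the thesis' exact margin-regularized formulation and its regularization term are specific to that work.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dice_loss(probs, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P·T| / (|P| + |T|); robust to class imbalance
    because it is driven by overlap rather than per-pixel counts."""
    inter = (probs * target).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)

def margin_bce_loss(logits, target, margin=1.0):
    """Illustrative margin-regularized BCE (hypothetical stand-in): shift
    logits toward the decision boundary for the true class, so only
    confidently correct pixels achieve low loss."""
    shifted = logits - margin * (2.0 * target - 1.0)
    probs = sigmoid(shifted)
    eps = 1e-12
    return -(target * np.log(probs + eps)
             + (1 - target) * np.log(1 - probs + eps)).mean()

def combined_loss(logits, target, w=0.5):
    """Weighted combination of the margin term and the Dice loss."""
    return (w * margin_bce_loss(logits, target)
            + (1 - w) * dice_loss(sigmoid(logits), target))
```

With this shift, a foreground pixel only reaches probability 0.5 once its logit exceeds the margin, which penalizes under-confident predictions on rare labels more heavily than plain cross-entropy.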
  • Item
    Exploring spatiotemporal variability in secondary production off the west coast of Vancouver Island using biochemical approaches
    (2024) Hubbert, Liam; Dower, John F.; Sastri, Akash Rene
    Zooplankton production in marine ecosystems refers to the rate at which zooplankton biomass increases through a combination of somatic and reproductive growth. Despite its importance in understanding the flow of energy to higher trophic levels, in situ measurements of zooplankton production rates in marine ecosystems remain rare. In recent decades, biochemical methods of estimating zooplankton production have become increasingly popular, though there still exist critical knowledge gaps as to how effective these methods are at estimating in situ growth and production rates. Addressing these knowledge gaps is necessary to lay the foundation for the future integration of routine secondary production rate measurements as part of synoptic oceanographic surveys. Chapter 1 of this thesis introduces the global importance of zooplankton and reviews the methods currently used to assess zooplankton production rates. Specifically, two contemporary biochemical methods are discussed, as well as their advantages and limitations as compared to more traditional incubation methods. The first is the aminoacyl-tRNA synthetases (AARS) method, where the activity of in vivo AARS enzymes is utilized to derive a proxy measure of growth rate. The second is the chitobiase method, in which the rate of decay of dissolved chitobiase activity in water is used to estimate the growth and production rates of crustacean zooplankton assemblages. The chapter concludes with a description of the regional oceanographic setting in which these studies took place and outlines the primary objectives of this thesis. Chapter 2 focuses on the AARS method of measuring secondary production rates. Here, the efficacy of this method for mixed zooplankton assemblages was assessed by comparing growth and production rate estimates to those predicted from two widely used empirical models. 
Samples collected from eight stations off the West Coast of Vancouver Island (WCVI) in September 2021 were used to measure total AARS and protein-specific AARS (spAARS) activities. Total AARS showed strong positive correlations with production rates predicted by both models, whereas correlations with spAARS were weaker. Spatial variation in AARS activity showed that higher production rates were observed in the inshore regions of the WCVI, and lower rates were observed offshore. These results indicated that in situ AARS-based production rates are temperature-dependent and show significant variation with total zooplankton biomass. In Chapter 3, the chitobiase method of estimating secondary production rate was used to assess production rates in the waters off the WCVI in September 2022 and May 2023. Water samples were collected from the four distinct bioregions off the WCVI during each sampling period, along with zooplankton net samples for biomass and taxonomy analyses. The data gathered from these samples were used to better understand how production rates vary between regions with distinct oceanographic characteristics. Chitobiase biomass-production rate (BPR) and growth rate estimates (daily production to biomass ratio) varied with both season and region, though the trends in these rates did not align with trends in mixed-layer temperature and biomass. Higher chitobiase-based growth rates were observed in inshore regions during September 2022 and May 2023. Chitobiase BPR in September 2022 also followed this trend. Conversely, production rates in May 2023 were higher in the south, indicating a change in the regional drivers of production rate between seasons. The chitobiase-based growth and production rate estimates obtained during this study were also added to the growing time series of previous chitobiase measurements in this region and indicate that production rates have recovered since the low values measured following the 2014-2016 marine heatwave. 
Chapter 4 of this thesis presents general conclusions on how the AARS method can be used in future studies, as well as the ecological and methodological challenges faced during this study. This thesis concludes with suggestions of how these methods can be utilized in the future to gain a greater understanding of in situ zooplankton community production rates.
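The curve-fitting step at the heart of the chitobiase method, estimating a first-order decay constant from activity measurements over time, can be illustrated with a simple log-linear least-squares fit. The data below are synthetic, and the fit is only the first step; the thesis' actual conversion from decay rates to growth and production rates involves additional calibration not shown here.

```python
import numpy as np

def decay_rate(times, activity):
    """Estimate the decay constant k and initial activity A0 in
    A(t) = A0 * exp(-k t) via ordinary least squares on log(A).
    Illustrative of the rate-estimation idea, not the thesis' protocol."""
    slope, intercept = np.polyfit(np.asarray(times, dtype=float),
                                  np.log(np.asarray(activity, dtype=float)), 1)
    return -slope, np.exp(intercept)   # (k, A0)

# Synthetic, noise-free example: A0 = 10 activity units, k = 0.3 per hour.
t = np.linspace(0.0, 10.0, 20)
k, a0 = decay_rate(t, 10.0 * np.exp(-0.3 * t))
```

Taking logarithms turns the exponential decay into a straight line, so the slope of the fit recovers -k directly; with noisy field measurements, a weighted or nonlinear fit would typically be preferred.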