Browsing by Department "Department of Computer Science"
Now showing 1 - 20 of 1060
Item: 2 to 1 embeddings of grids into hypercubes (1993)
Manke, Dennis L.
We consider two-to-one embeddings of grids into the next smaller hypercube and derive novel two-to-one embedding techniques that achieve optimal dilation 1 for many grids, in some cases where no previous solutions were known. In particular, dilation-1, two-to-one embeddings into the next smaller hypercube can be found for grids that are: square; close to square; or of height within one of a power of 2.

Item: 3D structured spreadsheet (1994)
Wu, Qian
The 3D Structured Spreadsheet (3DSS) is a new kind of spreadsheet that uses program organization concepts similar to those of block-structured languages. A program developed in a block-structured language such as PASCAL contains a main unit and optionally many functions and procedures. Major units such as blocks, procedures, and functions can be recursively defined, and references to variable names follow scoping rules. In the same way, a 3DSS program can contain many sheets: one main sheet and optionally many subsheets. A cell in a sheet can contain different kinds of values: text, a number, a formula, a subsheet, or a group of timesheets that vary with "time". Every cell can be given a name, and the reference relationships among the cells of parent sheets and subsheets follow the block-structured scoping mechanism. Sheets in a 3D Structured Spreadsheet program are therefore organized into a tree structure. Any sheet can also vary in "time" (3D), using time formulas to construct a group of time frames called timesheets. The 3D Structured Spreadsheet combines subsheets and timesheets, and can organize information in a complex, dynamic, hierarchical way. In addition, one can define functions and procedures using subsheets and timesheets through programming-by-example in the same spreadsheet programming paradigm. The new spreadsheet design is based on ideas used by conventional spreadsheets, but it has significant advantages over the conventional ones.
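The block-structured scoping among sheets described above can be illustrated with a minimal sketch (not code from the thesis; the `Sheet`, `define`, and `lookup` names are hypothetical): a name referenced in a subsheet is resolved locally first, then in enclosing parent sheets, as in PASCAL.

```python
class Sheet:
    """A spreadsheet sheet whose named cells follow block-structured scoping."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent          # enclosing (parent) sheet; None for the main sheet
        self.cells = {}               # named cell values local to this sheet

    def define(self, cell_name, value):
        self.cells[cell_name] = value

    def lookup(self, cell_name):
        """Resolve a cell name here, then in enclosing sheets (PASCAL-like scoping)."""
        sheet = self
        while sheet is not None:
            if cell_name in sheet.cells:
                return sheet.cells[cell_name]
            sheet = sheet.parent
        raise KeyError(cell_name)

main = Sheet("main")
main.define("rate", 0.05)
sub = Sheet("budget", parent=main)
sub.define("amount", 200)
print(sub.lookup("amount") * sub.lookup("rate"))  # "rate" is resolved in the parent sheet
```

A formula in the `budget` subsheet can thus refer to `rate` without redefining it, exactly as an inner PASCAL procedure sees the variables of its enclosing block.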
The hierarchical organization mechanism allows top-down design. The 3D feature provides a more powerful and convenient method for organizing information that varies in "time", and more powerful calculation ability for iterative calculations. User-defined functions and procedures, created through programming-by-example in the same spreadsheet programming paradigm, extend the spreadsheet programming environment and provide an abstract way to organize blocks of information having similar structure.

Item: A characterization of early career researchers' activities with academic sources for writing (2024)
Islam, Mohammad Shakirul; Nacenta, Miguel
Research processes involve gathering and understanding previous work, including finding existing literature, reading, annotating, collecting and, finally, synthesizing summaries as literature surveys and related work sections. Although much work has been devoted to understanding how people find and retrieve previous work and how people use existing reference managers, in this paper we consider the processes involving research of previous literature in a broader context, looking at how the found sources and references are incorporated into the process of writing new papers and reports. We propose an activity model (RaMSeS) for the general process of research with academic sources, from paper search to the writing of text that incorporates citations of previous work. We used this model to design a survey that investigates current practices by early career researchers. Through the survey, we were able to classify early career researchers into three coarse groups: casual collection managers (who use reference management systems less and tend not to revisit their collections), traditional document managers (who tend to take comprehensive notes but use multiple systems to manage information), and digitally savvy collection managers (who are more interested in organizing and categorizing their document collections).
We also learned about the ways in which participants use their source collections for writing, refreshing their knowledge, and recognizing patterns in the literature. We also conducted an evaluation study with experts in the field of library sciences to understand the applicability of our proposed model (RaMSeS) in teaching and explaining the entire process of collecting and curating bibliographies for early career researchers. We provide a thorough thematic analysis of the interviews we conducted with the experts, who overall found the RaMSeS model useful and provided insightful feedback on how it can be applied to the entire research process. Our findings can support the development of tools that further support the later parts of the research process, when existing literature is re-read and analyzed to become part of new research documents.

Item: A differential privacy-preserving data publishing algorithm for bus trajectory analysis: A case study on BC Transit (2025)
Bahari Neematabad, Mahboubeh; Lu, Yun
The increasing use of trajectory data in location-based services and public transit planning highlights the high analytical value of such data. However, legal, technical, and especially privacy-related concerns have significantly limited public access to these datasets. This thesis investigates privacy protection in trajectory databases, specifically passenger movement data from public bus systems, under strong Differential Privacy (DP) guarantees. We collaborate with BC Transit to produce the first publicly available, privacy-preserving analysis of BC Transit's bus tap dataset from Victoria, British Columbia. This work reviews existing DP mechanisms and selects two practical and applicable algorithms for public transit data. These mechanisms are then adapted and optimized to suit the unique characteristics of such data.
The goal is to evaluate their practical effectiveness in the privacy-preserving publication of transit data while maintaining the utility required for meaningful analysis. The BC Transit bus tap dataset (containing bus tap-ins) already enables useful analyses such as count or sum queries (e.g., the number of visits to a bus stop), used as the benchmark in several related works. However, we aim to demonstrate the power of state-of-the-art privacy-preserving trajectory analyses, and with approval from our collaborators at BC Transit, we construct a plausible synthetic trajectory dataset that corresponds to the original tap dataset, based on known weekly role-specific travel patterns. Two privacy-preserving algorithms are then applied:
• Noisy Prefix Tree (Rui Chen et al., 2011): a prefix-tree-based DP algorithm for sequential data.
• PPDP (Yang Li et al., 2020): an improved prefix tree algorithm tailored for transit smart card data.
We also compare count queries on the original data using the Laplace mechanism with those on the synthetic trajectories, to evaluate how well basic utility is preserved. For sequential transit data, we introduce the following technical improvements to enhance the effectiveness of prefix-tree-based methods:
• a spatio-temporal dimensionality reduction technique to sample noisy nodes with better efficiency;
• an improved post-processing method for achieving consistency in the noisy prefix tree after noise injection.
In addition, a hybrid privacy budget allocation approach is employed, which balances tree depth with the actual distribution of nodes at each level in a more intuitive and effective manner. Experimental results, conducted on synthetic trajectories generated from real-world tap card data from the BC Transit system, demonstrate that this framework can enforce strong privacy guarantees while answering complex transit-related analytical queries.
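The Laplace-mechanism baseline for count queries mentioned above can be sketched as follows (a minimal illustration, not the thesis's implementation; the function names and the example epsilon are hypothetical). Laplace noise with scale sensitivity/epsilon is added to a true count before release.

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Draw Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5                       # uniform on (-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_count(true_count, epsilon, sensitivity=1.0):
    """Release a count with epsilon-DP by adding Laplace(sensitivity/epsilon) noise.

    Sensitivity is 1 when each rider contributes at most one tap to the query.
    """
    return true_count + laplace_noise(sensitivity / epsilon)

# e.g. a noisy number of tap-ins at one stop, with privacy budget epsilon = 0.5
taps_at_stop = 1342
print(round(laplace_count(taps_at_stop, epsilon=0.5)))
```

Smaller epsilon means larger noise (scale 1/epsilon), which is the utility-vs-privacy tradeoff the comparison against the synthetic trajectories evaluates.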
This work serves as one of the first steps toward data sharing among researchers, municipal agencies, and smart service developers, especially in BC, contributing to the design of more efficient, innovative, and human-centered public transportation systems.

Item: A distributed model to expand the reach of drug checking (Drugs, Habits and Social Policy, 2022)
Wallace, Bruce; Gozdzialski, Lea; Qbaich, Abdelhakim; Azam, Md. Shafiul; Burek, Piotr; Hutchison, Abby; Teal, Taylor; Louw, Rebecca; Kielty, Collin; Robinson, Derek; Moa, Belaid; Storey, Margaret-Anne; Gill, Chris; Hore, Dennis K.
Purpose – While there is increasing interest in implementing drug checking within overdose prevention, we must also consider how to scale up these responses so that they have significant reach and impact for people navigating the unpredictable and increasingly complex drug supplies linked to overdose. The purpose of this paper is to present a distributed model of community drug checking that addresses multiple barriers to increasing the reach of drug checking as a response to the illicit drug overdose crisis.
Design/methodology/approach – A detailed description of the key components of a distributed model of community drug checking is provided. This includes an integrated software platform that links a multi-instrument, multi-site service design with online service options, a foundational database that provides storage and reporting functions, and a community of practice to facilitate engagement and capacity building.
Findings – The distributed model diminishes the need for technicians at multiple sites while still providing point-of-care results with local harm reduction engagement and access to confirmatory testing online and in localized reporting. It also reduces the need for training in the technical components of drug checking (e.g. interpreting spectra) for harm reduction workers. Moreover, its real-time reporting capability keeps communities informed about the crisis.
Sites are additionally supported by a community of practice.
Originality/value – This paper presents innovations in drug checking technologies and service design that attempt to overcome current financial and technical barriers to scaling up services to a more equitable and impactful level, effectively linking multiple urban and rural communities to report concentration levels for the substances most linked to overdose.

Item: A framework for autonomic digital twin orchestration and management systems (2025)
Rivera, Luis F.; Müller, Hausi A.; Villegas Machado, Norha Milena
The advancement and implementation of the Digital Twin (DT) concept are poised to disrupt multiple application domains across industry and society. DTs enable the augmentation of machine, system, and human capabilities by enriching data-driven decision-making and forecasting through the continuous aggregation, interpretation, and exploitation of relevant phenomena from mirrored counterparts, i.e., Real Twins (RTs). The accelerated adoption of DT technologies, from smart urban infrastructures to complex IT environments, has been propelled by the synergistic convergence of innovations in the Internet of Things (IoT), discriminative (traditional) Artificial Intelligence (AI), Generative AI (GenAI), simulation technologies, and Cloud Computing. Central to the DT vision is the notion of Digital Twin Operation & Management Systems (DTOMSs): software-intensive infrastructure responsible for realizing the potential of DTs and preserving sustained fidelity between RTs and their virtual representations. However, the inherent dynamism and unpredictability of RT environments pose significant challenges to the relatively static nature of contemporary DTOMS architectures. These systems often fall short in reflecting evolving RT contexts, anticipating behavioural drift, and adapting to runtime uncertainties, capabilities essential to unlocking the full potential of DT-based systems.
This dissertation addresses the limitations of conventional DTOMSs by advocating a transition toward autonomic DTOMSs (i.e., ADTOMSs). Grounded in the principles of self-adaptive systems, autonomic computing, and continuous software engineering, we investigate how ADTOMSs can dynamically represent, reason about, and evolve in response to contextual and continuous variations in mirrored RTs and their operational environments. We propose foundational architectural constructs, runtime DT modelling mechanisms, GenAI-based knowledge exploitation techniques, and model evolution strategies. Collectively, these contributions constitute a framework that advances the engineering of DTOMSs toward autonomic systems with improved capabilities for managing complexity and uncertainty. The contributions of this research are fourfold. First, we propose a reference model and accompanying reference architecture that delineate the core design elements of ADTOMSs, incorporating self-management capabilities aimed at mitigating system complexity and reducing human intervention and cognitive load. Second, we introduce an adaptive model evolution mechanism that enables ADTOMSs to incrementally refine internal representations in response to the evolving dynamics of their corresponding RTs. Third, we develop a dynamic DT context modelling and knowledge representation framework to support continuous monitoring and adaptive reasoning over conditions captured or simulated from mirrored RTs. Fourth, we design an automated reasoning framework, leveraging Continuous Experimentation (CExp) and GenAI, to extract actionable insights from heterogeneous data sources, which facilitates early anomaly detection and behavioural forecasting. Methodologically, this research adopts an exploratory sequential mixed-methods approach, integrating conceptual modelling, systematic literature analysis, and empirical validation through case-driven experimentation. 
The proposed contributions are evaluated within the domains of smart urban transit and IT environments, demonstrating their feasibility, adaptability, and practical relevance across heterogeneous operational contexts. Building upon the contributions of this dissertation, several promising research directions emerge that can inspire further academic exploration and practical innovation. First, our modelling infrastructure lays the foundations for context management in GenAI settings. This paradigm leverages our modelling approach to treat prompts, retrieval corpora, model snapshots, inference runs, and their artifacts as first-class elements, enabling reproducible, auditable, and drift-aware reasoning within DT-based systems. Second, our reference model and reference architecture create opportunities to realize a systematic, policy-governed mapping from control objectives and observed symptoms to well-scoped reasoning tasks, making tacit operational cues explicit and ensuring that queries to the reasoning layer remain aligned with goals and context. Third, the CExp practices incorporated in our conceptualization of ADTOMSs provide a sound basis for a hybrid quantum orchestration twin that operationalizes the use of quantum and classical resources as a managed control objective, using controlled experiments and provenance-aware scheduling to decide when and how to employ each under fidelity, latency, and cost constraints. Together, these directions extend our contributions toward trustworthy, explainable, and efficient autonomy in advanced DT-based systems. In summary, this dissertation advances the conceptual and technological foundation of DTOMSs by introducing autonomic principles into their operational lifecycle. The resulting ADTOMS paradigm establishes a robust basis for the continuous, autonomic evolution of DTs, positioning them as resilient, adaptive, and long-lived software-intensive systems capable of operating effectively under uncertainty. 
This work contributes to the broader vision of self-managing data-intensive systems and offers novel engineering strategies for advancing DT practices across dynamic application domains.

Item: A methodological approach to extracting patterns of service utilization from a cross-continuum high dimensional healthcare dataset to support care delivery optimization for patients with complex problems (BioMedInformatics, 2024)
Bambi, Jonas; Santoso, Yudi; Sadri, Hanieh; Moselle, Ken; Rudnick, Abraham; Robertson, Stan; Chang, Ernie; Kuo, Alex; Howie, Joseph; Dong, Gracia Yunruo; Olobatuyi, Kehinde; Hajiabadi, Mahdi; Richardson, Ashlin
Background: Optimizing care for patients with complex problems entails the integration of clinically appropriate, problem-specific clinical protocols and the optimization of service-system-encompassing clinical pathways. However, alignment of service system operations with Clinical Practice Guidelines (CPGs) is far more challenging than the time-bounded alignment of procedures with protocols. This is due to the challenge of identifying longitudinal patterns of service utilization in the cross-continuum data to assess adherence to the CPGs.
Method: This paper proposes a new methodology for identifying patients' patterns of service utilization (PSUs) within sparse, high-dimensional, cross-continuum health datasets using graph community detection.
Result: The results show that by using iterative graph community detection and graph metrics, combined with input from clinical and operational subject matter experts, it is possible to extract meaningful, functionally integrated PSUs.
Conclusions: This introduces the possibility of influencing the reorganization of some services to provide better care for patients with complex problems.
Additionally, this introduces a novel analytical framework that relies on patients' service pathways as a foundation to generate the basic entities required to evaluate the conformance of interventions to cohort-specific clinical practice guidelines, which will be further explored in our future research.

Item: A power-aware IoT-fog-cloud architecture for telehealth applications (2025)
Guo, Yunyong; Ganti, Sudhakar
This dissertation presents an energy-efficient model for integrating Internet of Things (IoT) devices with fog and cloud computing platforms, specifically designed for telehealth applications. As the deployment of telehealth IoT devices continues to grow, the demand for efficient, real-time data processing and energy conservation becomes increasingly critical. This research addresses these challenges by proposing a hybrid architecture that combines the low-latency benefits of fog computing with the scalable resources of cloud computing. The model reduces energy consumption by processing data locally through fog nodes, minimizing the need for constant communication with cloud servers. This not only decreases latency but also optimizes the use of computational resources, making the system more adaptable to the dynamic demands of telehealth services. The model is further enhanced by an adaptive resource scaling algorithm, which dynamically adjusts processing capacity based on workload, ensuring both efficiency and reliability in critical healthcare applications. Simulation studies demonstrate the effectiveness of the model in reducing energy consumption and improving system performance for real-time telehealth monitoring. The results show significant improvements in data processing speed, energy efficiency, and resource utilization compared to traditional cloud-only architectures.
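A scaling rule of the kind described above can be sketched with a simple threshold policy (an illustrative stand-in, not the dissertation's actual algorithm; the function name, thresholds, and rates are all hypothetical): capacity grows when utilization is high and shrinks when it is low.

```python
def scale_fog_capacity(active_units, queue_len, per_unit_rate,
                       high=0.8, low=0.3, min_units=1, max_units=16):
    """Adjust the number of fog processing units from current load.

    Utilization is the queue length relative to what the active units can
    serve per interval; scale up above `high`, down below `low` (thresholds
    are illustrative). Returns the new unit count.
    """
    utilization = queue_len / (active_units * per_unit_rate)
    if utilization > high and active_units < max_units:
        return active_units + 1
    if utilization < low and active_units > min_units:
        return active_units - 1
    return active_units

# e.g. 4 fog units each handling 50 sensor readings/s, 190 readings queued:
# utilization 0.95 exceeds the high threshold, so one unit is added.
print(scale_fog_capacity(4, 190, 50))
```

Keeping idle units powered down between the low and high thresholds is what lets such a policy trade a little latency headroom for energy savings.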
This work contributes to the ongoing development of sustainable telehealth solutions by providing a robust framework for IoT-fog-cloud integration that meets the stringent demands of modern healthcare systems.

Item: A prototype architecture for interactive 3D maps on the web (2024)
Liu, Ting; Coady, Yvonne
Virtual 3D city models offer detailed 3D representations of urban space and serve various fields, such as urban planning, architecture, navigation, and environmental simulation. With the advancement of technologies such as photogrammetry and laser scanning, the scale of 3D city models has increased significantly, making it a challenge to transmit and visualize such large datasets for sharing purposes. The development of advanced web technologies and the emergence of WebGL have made it possible to render and share large-scale 3D city models on the Internet. In addition, the introduction of game engines has further enhanced the simulation and interactive functions of 3D GIS applications. In this project, the exploration focused on using and integrating WebGL-based rendering tools to visualize large 3D city models, providing a portal where users can navigate and interact with urban scenarios from different perspectives. The architecture utilized 3DCityDB for tiling and format conversion of 3D models, the 3D Web Client/Cesium.js virtual globe for loading large-scale tiled data, and Babylon.js to provide interactive functions and environmental simulation. A GridMap mechanism was proposed to solve the problem of loading a large number of models with geographic coordinates in the Babylon scene. Test results show that this mechanism maintains effective loading efficiency: even when the dataset grows significantly, loading time and memory consumption do not increase, and the frame rate remains high enough to ensure smooth interaction.
This study expands the feasibility of applying 3D GIS data in web-based game engines through enhanced interactivity and simulation.

Item: ABLOC: Accountable Blockchain Logging for Offline Care (2023-09-22)
Krysl, Joseph; Weber, Jens; Price, Morgan
Retroactive security is important to cyber security; it is used to hold people accountable for their actions [1]. In the medical world, it is difficult to assign proper privileges, as they can be too wide and vulnerable to misuse, or too narrow [1, 2, 3, 4], restricting access to patient data [2, 4]. Clinicians are often given wide privileges to ensure they can access the data required to care for patients [2]. Logging is relied upon to find breaches of policies [2, 3, 4, 5], but without reliable logs, changes can be made to the data in the EMR without anyone knowing [6]. Blockchain-based logging has been proposed but requires a stable internet connection [7]. This thesis presents Accountable Blockchain Logging for Offline Care (ABLOC), a Directed Acyclic Graph (DAG) based blockchain combined with a gossip protocol to improve the forensic reliability and accountability of logs. ABLOC can tolerate participating realms (the internet space that houses one or multiple pieces of medical software) going offline, recovering, and resynchronizing with the rest of the network. The ABLOC system receives log hashes, summarizes them, and shares the summary with different realms on the ABLOC network. This work presents the necessary background information, discusses the design of the ABLOC system, and evaluates the proposed system theoretically and with a prototype.
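The receive-hashes-then-summarize step described above can be illustrated with a minimal sketch (a hypothetical stand-in, not ABLOC's actual scheme; the function name and log format are invented for illustration): each realm folds its log-entry hashes into a single digest, so any tampering with a logged entry changes the summary shared with other realms.

```python
import hashlib

def summarize_log_hashes(log_hashes):
    """Fold a batch of log-entry hashes into one tamper-evident summary digest.

    Sorting first makes the summary independent of arrival order within the
    batch; any change to a single entry hash changes the whole summary.
    """
    digest = hashlib.sha256()
    for h in sorted(log_hashes):
        digest.update(bytes.fromhex(h))
    return digest.hexdigest()

# Each realm hashes its EMR audit-log entries, then gossips only the summary.
entries = [b"user=dr_a op=read chart=123", b"user=dr_b op=write chart=456"]
hashes = [hashlib.sha256(e).hexdigest() for e in entries]
print(summarize_log_hashes(hashes))
```

Sharing fixed-size summaries rather than raw logs is what keeps such gossip cheap enough to resynchronize a realm after it has been offline.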
The proposed system shows promising results in the scalability tests performed.

Item: Abstract and Metaphoric visualization of emotionally sensitive data (2022-04-28)
Malik, Mona
Standard visualizations such as bar charts and scatterplots, especially those representing qualitative, emotionally sensitive issues, fail to build a connection between the data that the visualization represents and the viewer of the visualization. To address this challenge, the information visualization community has become increasingly interested in exploring creative visualization techniques that could potentially help viewers relate to the suffering and pain in emotionally sensitive data. We contribute to this open question by investigating whether visualizations that rely on metaphors (i.e., that involve existing mental images such as a tree or a person) with some emotional connection can foster viewers' empathy and engagement with the data. Specifically, we conducted an empirical study in which we compared the effect of visualization type (metaphoric and abstract) on people's engagement and empathy when exposed to emotionally sensitive data (data about sexual harassment in academia). We designed a metaphoric visualization that relies on the metaphor of a flower, symbolizing life, beauty, and fragility, which might help viewers relate to the victim and build some emotional connection, and an abstract visualization that relies on purely geometric forms with which people should not have any existing emotional connection. In our study, we found no clear difference in engagement and empathy between the metaphoric and abstract visualizations. Our findings indicate that female participants were slightly more engaged and empathic with both visualizations compared to other participants. Additionally, we learned that measuring empathy in a data visualization is a complex task.
Informed by these findings on how people engage and empathize with metaphoric and abstract visualizations, new and improved visualizations and experiences can be developed for similar emotionally charged, fear-provoking topics.

Item: Accelerated, Collaborative & Extended BlobTree Modelling (2015-04-23)
Grasberger, Herbert; Wyvill, Brian
BlobTree modelling has been used in several solid modelling packages to rapidly prototype models by making use of boolean and sketch-based modelling. Using these two techniques, a user can quickly create complex models as combinations of simple primitives and sketched objects. Because the BlobTree is based on continuous field values, it offers many possibilities to create and control smooth transitions between surfaces, something more complicated in other modelling approaches. In addition, the data required to describe a BlobTree is very compact. Despite these advantages, the BlobTree has not yet been integrated into state-of-the-art industrial modelling workflows. This thesis identifies some shortcomings of the BlobTree, presents potential solutions to those problems, and demonstrates an application that makes use of the BlobTree's compact representation. A main criticism is that the evaluation of a large BlobTree can be quite expensive; therefore, many applications are limited in the complexity of models that can be created interactively. This work presents an alternative way of traversing a BlobTree that lowers the time to calculate field values by at least an order of magnitude. As a result, the limit of model complexity is raised for interactive modelling applications. In some domains, certain models need more than one designer or engineer to be created. Often, several iterations of a model are shared among multiple participants until it is finalized. Because the description of a BlobTree is very compact, it can be synchronized efficiently in a collaborative modelling environment.
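The field-value evaluation at the heart of the BlobTree can be illustrated with a minimal sketch (not code from the thesis; the falloff polynomial and function names are illustrative). Primitives contribute smooth scalar fields; summing fields blends surfaces smoothly, while max gives a CSG-style union, and the model surface is an iso-contour of the combined field.

```python
def point_primitive(cx, cy, cz, radius):
    """A soft-object point primitive: field 1 at the centre, 0 beyond `radius`."""
    def field(x, y, z):
        r2 = ((x - cx)**2 + (y - cy)**2 + (z - cz)**2) / radius**2
        if r2 >= 1.0:
            return 0.0
        return (1.0 - r2) ** 2        # a simple smooth falloff (illustrative)
    return field

def blend(a, b):
    """Blobby blend: summing fields yields smooth transitions between surfaces."""
    return lambda x, y, z: a(x, y, z) + b(x, y, z)

def union(a, b):
    """Boolean union via max, as in BlobTree boolean nodes."""
    return lambda x, y, z: max(a(x, y, z), b(x, y, z))

# The surface is where the field equals a threshold, e.g. 0.5; the midpoint
# between two nearby blobs lies inside their smoothly blended surface.
two_blobs = blend(point_primitive(0, 0, 0, 2), point_primitive(1.5, 0, 0, 2))
print(two_blobs(0.75, 0, 0) > 0.5)
```

Evaluating such a field at many query points per frame is exactly why the traversal speed-up described above matters for interactive modelling.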
This work presents CollabBlob, an approach to collaborative modelling based on the BlobTree. CollabBlob is lock-free and provides interactive feedback for all participants, which supports fast iteration in the modelling process. To extend the range of models that can be created within CollabBlob, two areas of BlobTree modelling are improved in the context of this thesis. CAD modelling often makes use of a feature called filleting to add surface features such as those caused by a manufacturing process. Filleting in general creates smooth transitions between surfaces, something the BlobTree can do, in the case of fillets between primitives, with less mathematical complexity than the approaches needed in Constructive Solid Geometry (CSG). However, little research has been done on the construction of fillets between surfaces of a single BlobTree primitive. This work outlines Angle-Based Filleting and the Surface Fillet Curve, two solutions to improve the specification of fillets in the BlobTree. Sketch-based implicit modelling generates 3D shapes from 2D sketches by sampling the drawn shape and using the samples to create the implicit field via variational interpolation. Additional samples inside and outside the sketched shape are needed to generate a field compatible with BlobTree modelling, and state-of-the-art approaches use offset curves of the sketch to generate these samples.
The approach presented in this work reduces the number of sample points, thus accelerating the interpolation time and improving the resulting implicit field.

Item: Accessibility in a virtual classroom: a case study for the visually impaired using WebCT (2008-04-10)
Hadian, Shohreh; Storey, Margaret-Anne

Item: Active learning under the Bernstein condition for general losses (2020-08-31)
Shayestehmanesh, Hamid; Mehta, Nishant
We study online active learning under the Bernstein condition for bounded general losses and offer a solution for online variance estimation. Our suggested algorithm is based on IWAL (Importance Weighted Active Learning) and utilizes the online variance estimation technique to shrink the hypothesis set. For our algorithm, we provide a fallback guarantee and prove that when R(f*) is small, it converges faster than passive learning, where R(f*) is the risk of the best hypothesis in the hypothesis class. Finally, in the special case of zero-one loss, an exponential improvement over passive learning is achieved in label complexity.

Item: Activities of daily living as a functional assessment predictor in older adults: a systematic review with focus on architecture in connected health (2019-12-03)
Alani, Adeshina; Weber, Jens; Price, Morgan
Background: Functional Assessment (FA) in older adults is an important measure of their health status. FA using Activities of Daily Living (ADL) is a strong predictor of health outcomes, especially as we age. With the development of increasingly connected health, we have a new opportunity for more robust and improved FA.
Objective: The objective of this thesis is to collate and discuss published evidence on FA predictors and how these predictors can be collected using the paradigm of Connected Health (CH) architectures, through an industrial case study presented in Chapter 5.
Methods: Two Systematic Literature Reviews (SLRs) were conducted.
The two SLRs were undertaken with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement and Parsifal, an online tool for SLRs. This thesis catalogs various FA predictors and state-of-the-art Software Engineering Architectural Tactics and Styles (SEATS) used within Connected Health (CH) with a focus on ADL. The cataloged information was used in the industrial case study, where some of the FA predictors were automated. Articles obtained from the data sources during the SLRs were filtered based on titles, abstracts, availability of full text, English-language literature, and participant age, which had to be sixty-five years or above. A second reviewer was also included in this study, and all the inclusion and exclusion criteria detailed in this thesis were applied. Information about FA via ADL was extracted from the articles, with further extraction of the SEATS used for computer-supported FA during the industrial case study.
Data Sources: The databases searched during the SLRs included PubMed, EBSCOhost, Engineering Village, IEEE Xplore Digital Library, and ScienceDirect. The search contained both controlled terms, called Medical Subject Headings (MeSH), such as activities of daily living, and search strings such as functional assessment, older adults, geriatrics, seniors, elderly care, and aging.
Results: From four hundred and ninety-five initial abstracts and titles, nineteen full-text journal articles were included in the final review for the SLR on FA predictors. Six full-text journal articles were obtained from the SLR on CH architectures after screening its 449 titles and abstracts. In the SLR on FA predictors, predictor metrics for FA via ADL were extracted from each of the articles. Gait speed, sleep quality, and movement activities were assessed as ADL predictor metrics for FA in older adults.
Other published FA predictors involved self-reported measurement using the Barthel-20 scale and performance-based measurement through the Timed Up and Go test. This thesis reviewed each metric for sleep quality and movement activities. In the SLR on CH architectures, quick response to ADL events and resource efficiency (e.g., of sensors) were among the major tactics related to performance in Software Engineering (SE) quality in CH, while the confidentiality and integrity of FA measures, related to security in SE quality in CH, were another major concern.
Conclusion: Across the two SLRs, a wide range of measures was used for FA in older adults, including consideration of the SEATS used for computer-supported FA. Overall, these FA measures and SEATS provide inexpensive and easy-to-implement FA. The diversity of the FA measures and SEATS contributes toward the development of computer-supported FA. However, future work should develop the results of this study into an open-source computer-supported FA tool, which should also be evaluated and verified through direct examination with older adults.

Item: Adapting a system-theoretic hazard analysis method for interoperability of information systems in health care (2022-04-25)
Costa Rocha, Oscar Aleixo; Weber, Jens; Price, Morgan
The adoption of Health Information Systems (HIS) by primary care clinics and practitioners has become a standard in the healthcare industry. This increase in HIS utilization enables the informatization and automation of many paper-based clinical workflows, such as clinical referrals, through systems interoperability. The healthcare industry defines several interoperability standards and mechanisms to support the exchange of data among HIS. For example, the health authorities Interior Health and Northern Health created the CDX system to provide interoperability for HIS across British Columbia, using the SOAP Web Services and HL7 Clinical Document Architecture (CDA) interoperability standards.
The CDX interoperability allows HIS such as Electronic Medical Record (EMR) systems to exchange information with other HIS, including patients' clinical records, clinical notes, and laboratory test results. In addition, to ensure that EMR systems adhere to the CDX specification, these health authorities conduct conformance testing with the EMR vendors to certify the EMR systems. However, conformance testing can only cover a subset of the systems' specifications and a few use cases. Therefore, system properties that cannot be attributed to individual components (i.e. emergent properties) are hard, or even impractical, to assure using conformance testing alone. System safety is one such property and is particularly significant for EMR systems because it concerns patient safety. A well-known approach for improving system safety is hazard analysis. For scenarios where the human factor is an essential part of the system, as in EMR systems, System-Theoretic Process Analysis (STPA) is more appropriate than traditional hazard analysis techniques. In this work, we perform a hazard analysis using STPA on the CDX conformance profile in order to evaluate and improve the safety of the CDX system interoperability. In addition, we utilize and customize a tool named FASTEN to support and facilitate the analysis. To conclude, our analysis identified a number of new safety-related constraints and improved several previously specified ones.
Item Adapting personal music based on game play(2010-03-09T18:04:15Z) Rossoff, Samuel Max; Gooch, BruceMusic can positively affect game play and help players to understand underlying patterns in the game, or the effects of their actions on the characters. Conversely, inappropriate music can have a negative effect on players.
I designed and evaluated an algorithm for automatically adapting any music track from a personal library so that it plays at the same rate as the user plays the game. I accomplish this without access to the video game's source code, allowing deployment with any game and requiring no modifications to the system.
Item Adaptive algorithms for online learning in non-stationary environments(2025) Nguyen, Quan M.; Mehta, NishantTraditional online learning literature often assumes static environments, where fundamental properties like data distribution or action spaces do not change over time, and the learner competes against a single best action. This framework, however, fails to capture the complexity of many practical scenarios, such as automated diagnostic systems or inventory management, where the optimal course of action is non-stationary and changes sequentially. In such settings, adaptivity is crucial, as algorithms must maintain and leverage past information to respond effectively to unforeseen changes. This thesis advances the theory of online learning in non-stationary environments by developing adaptive algorithms with provably strong theoretical guarantees. Two key non-stationary learning problems are online multi-task reinforcement learning (OMTRL) and multi-armed bandits with sleeping arms. In OMTRL, a learner interacts with a sequence of Markov Decision Processes (MDPs). Each MDP is chosen adversarially from a small collection of MDPs, requiring the learner to efficiently transfer knowledge between tasks. In multi-armed bandits with sleeping arms, the set of available arms varies adversarially across rounds, presenting the learner with a unique exploration-exploitation tradeoff. Key contributions of this thesis include novel lower bounds and algorithms with near-optimal worst-case regret upper bounds for these two problems.
In addition, this thesis applies the techniques behind these new algorithms to derive improved sample complexity bounds for group distributionally robust optimization (GDRO) and novel data-dependent best-of-both-worlds regret upper bounds for multi-armed bandits. In summary, this thesis provides mathematically grounded adaptive algorithms that achieve state-of-the-art performance guarantees when learning from non-stationary and adversarially changing environments in reinforcement learning and multi-armed bandits, and shows new, fundamental connections between multi-armed bandits with sleeping arms and robust optimization.
Item Adaptive Gaussian-credit probing sequence for packet classification in computer communication networks(2008-04-10T06:02:03Z) Jayeh, Mohamed H.; Wu, Kui
Item Adaptive lifelong learning(2018-12-20) Parul; Mehta, NishantLifelong learning is an emerging field in machine learning that still requires substantial research. In lifelong learning, tasks are presented sequentially; the system learns from each task, and the goal is to retain the learned knowledge and utilize it when learning a new task. Exponentially Weighted Aggregation for Lifelong Learning (EWA-LL) is a meta-algorithm for the lifelong learning setting that transfers information from previous tasks to the next: a prior distribution is maintained over the set of representations and is updated after each new task using the exponentially weighted aggregation (EWA) procedure. This project relaxes the problem and explores an easier scenario in which more information about the data is available. It implements adaptive learning in the lifelong learning setting, utilizing the adaptive algorithm Follow The Leader with Dropout Perturbations (FTL-DP) from online prediction with expert advice. FTL-DP sets each expert's loss to 0 or 1 at each task, according to the dropout probability, before selecting the leader.
This project transports FTL-DP to the lifelong learning setting. The goal is to show that this adaptive algorithm is a better approach than EWA-LL: it achieves smaller regret on certain easy problems while maintaining regret bounds similar to those of EWA-LL on harder problems.
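The dropout-perturbation idea that FTL-DP relies on can be sketched in a few lines. The following is a minimal, illustrative Python sketch of Follow The Leader with dropout perturbations in the prediction-with-expert-advice setting, assuming 0/1 losses; the function name and parameters are chosen for the example and do not reflect the thesis's implementation.

```python
import random


def ftl_dropout_perturbations(loss_history, dropout_prob, rng):
    """Pick the leader after dropout perturbations.

    Each past per-round loss is independently dropped (set to 0)
    with probability `dropout_prob`; the leader is the expert with
    the smallest perturbed cumulative loss.
    """
    n_experts = len(loss_history[0])
    perturbed = [0.0] * n_experts
    for round_losses in loss_history:
        for i, loss in enumerate(round_losses):
            if rng.random() >= dropout_prob:  # keep this loss
                perturbed[i] += loss
    # Follow the leader on the perturbed cumulative losses
    return min(range(n_experts), key=perturbed.__getitem__)


# Toy run: expert 1 has the smaller cumulative loss, so with no
# dropout the procedure reduces to plain Follow The Leader.
history = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
leader = ftl_dropout_perturbations(history, 0.0, random.Random(0))
```

With `dropout_prob = 0` every loss is kept and the choice is plain Follow The Leader; larger dropout probabilities inject the randomness that gives the algorithm its adaptivity to easy loss sequences.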