Theses (Electrical and Computer Engineering)

Recent Submissions

  • Item
    Effect of point mutations on the conformation changes of PR65 using double nanohole aperture tweezer signals
    (2024) Mathew, Samuel; Gordon, Reuven
    The purpose of this research was to investigate the effect of point mutations on the conformation changes of PR65 using double nanohole (DNH) optical tweezers. It explored the following questions: how does a dielectric nanoparticle interact with a nanoaperture? Could the way a nanoparticle interacts with a nanoaperture be exploited for probing protein conformation changes using aperture tweezers such as the DNH optical tweezer? What can be said of the behavior of PR65 when trapped in a DNH optical tweezer? Could the behavior of PR65 when trapped in a DNH optical tweezer be explained in terms of how PR65 interacts with the DNH aperture? Does the behavior of PR65 when trapped in a DNH change with point mutation? How might this be exploited for probing the impact of point mutations on the conformational dynamics of PR65 using DNH optical tweezers? The study has implications for tracking mutations in proteins as well as for drug discovery and testing. Methods employed included both theoretical modelling and experimental measurements using the DNH optical tweezer. We modelled the interaction between a nanoaperture and a dielectric nanoparticle as a simple dipole-dipole interaction based on Rayleigh scattering and Bethe's aperture theory. Our model showed that the interaction enhanced both the trapping potential and the transmission through the aperture in accordance with the self-induced back-action (SIBA) effect, in which a nanoparticle interacting with a focused laser beam aids in its own trapping. The model agreed quite well with numerical simulations performed in Lumerical and revealed that the motion of a particle trapped in an optical tweezer can be used to probe changes in the shape and size of the particle. This is because changes in the shape and size of a particle in an optical tweezer alter the polarizability of the particle, and therefore the restoring force it feels in the tweezer.
This will manifest as differences in the root-mean-squared displacement (RMSD) and corner frequency characteristic of the motion of the particle in the optical tweezer between one conformation and another. We thus formulated the hypothesis that if conformation changes induced by point mutations alter the material polarizability of PR65, then the DNH optical tweezer signals acquired by trapping each mutant PR65 will have different RMSDs and corner frequencies from those of wild-type PR65. To test this hypothesis, DNH optical tweezers fabricated by colloidal lithography were used to trap wild-type PR65 and six of its mutants at a laser power of ~22 mW. The resulting optical signals were captured using an avalanche photodiode (APD) connected to a digital USB-4771A data acquisition module and analyzed using MATLAB. Parameters extracted from the acquired signals included the median transition time between the characteristic jump states shown by the signals, as well as their RMSD and corner frequency. These parameters were higher for some of the mutants of PR65 and lower for others in comparison with wild-type PR65. Correlation of the RMSDs with in silico mean contour lengths of wild-type PR65 and the six mutants studied was also consistent with this conclusion, in agreement with our hypothesis. These results imply that PR65 undergoes conformation changes that are impacted by substitution mutations, with some mutations causing PR65 to assume an elongated conformation and others a more compact conformation.
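As an illustrative aside (not code from the thesis), the RMSD and corner frequency of a trapped-particle signal can be estimated directly from its time trace. The sketch below assumes an overdamped harmonic (Ornstein-Uhlenbeck) model, under which the sampled position is an AR(1) process and the lag-1 autocorrelation yields the corner frequency:

```python
import numpy as np

def trap_statistics(signal, fs):
    """Estimate RMSD and corner frequency from a trapped-particle signal.

    For an overdamped particle in a harmonic trap, the sampled position
    is an AR(1) / Ornstein-Uhlenbeck process, so the lag-1 autocorrelation
    r1 = exp(-2*pi*f_c / fs) gives the corner frequency f_c directly.
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    rmsd = np.sqrt(np.mean(x**2))          # root-mean-squared displacement
    r1 = np.dot(x[:-1], x[1:]) / np.dot(x, x)  # lag-1 autocorrelation
    fc = -np.log(r1) * fs / (2.0 * np.pi)      # corner frequency in Hz
    return rmsd, fc
```

A conformation change that alters the restoring force shifts `fc`, which is what the comparison between wild-type and mutant signals exploits.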
  • Item
    Label-Free Studies of Single Biological Nanoparticles Using Optical Nanotweezers
    (2024) Peters, Matthew; Gordon, Reuven
    This thesis has two parts. In the first, we demonstrated tracking and imaging of single proteins without a fluorescent label or tether, well below the previously achieved smallest protein size. This result made use of interference effects similar to interferometric scattering microscopy, with additional enhancement from the enhanced electromagnetic field. We used the tracking to obtain a single protein's velocity and to size the protein. In the second part, we explored the use of optical scattering from an unlabelled, single extracellular vesicle for cancer diagnostics. We trained a 1D-convolutional neural network using the transmission signal of an extracellular vesicle trapped in a double nanohole. We achieved greater than 90% accuracy in classifying an extracellular vesicle by its parent cell. Three different parent cell lines were used: MCF10A (non-malignant), MCF7 (non-invasive, cancerous), and MDA-MB-231 (invasive, cancerous).
  • Item
    Edge computing for effective and efficient traffic characterization
    (2024) Khan, Asif; Gulliver, T. Aaron; Khan, Zawar
    Traffic flow analysis is essential to develop smart urban mobility solutions. Many advanced traffic flow monitoring solutions have been proposed, but they employ only a small number of parameters. To overcome this limitation, an edge computing solution is proposed based on nine traffic parameters, namely vehicle count, direction, speed, type, flow, peak hour factor, density, time headway, and distance headway. This solution is low cost, low power, low data bandwidth, and easy to install, deploy, and maintain. It is a sensor node comprising an RPi 4, Pi Camera, Intel Movidius NCS2, Xiaomi MI Power Bank, and Zong 4G Bolt+. Pre-trained models from the OpenVINO Toolkit are employed for vehicle detection and classification, and a Centroid Tracking Algorithm (CTA) is used to estimate vehicle speed. The measured traffic parameters are transmitted to the ThingSpeak cloud platform via 4G. The proposed solution was field-tested for one week (7 h/day), with approximately 10,000 vehicles per day. The count, classification, and speed accuracies obtained were 79.8%, 93.2%, and 82.9%, respectively. The sensor node can operate for approximately 8 h on a 10,000 mAh power bank, and the required data bandwidth is 1.5 MB/h. The proposed edge computing solution overcomes the limitations of existing traffic monitoring systems and can work in complex and heterogeneous environments.
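The speed-estimation step can be illustrated with a minimal sketch (hypothetical, not the deployed code): given a tracked vehicle's centroid positions in successive frames and a camera calibration factor, the average frame-to-frame displacement converts directly to speed:

```python
import numpy as np

def estimate_speed_kmh(centroids, fps, meters_per_pixel):
    """Estimate vehicle speed from a track of centroid positions.

    centroids: sequence of (x, y) pixel coordinates, one per frame.
    The average frame-to-frame displacement (pixels) is converted to
    metres per second via the camera calibration, then to km/h.
    """
    pts = np.asarray(centroids, dtype=float)
    steps = np.linalg.norm(np.diff(pts, axis=0), axis=1)  # pixels per frame
    mps = steps.mean() * meters_per_pixel * fps           # metres per second
    return mps * 3.6                                      # km/h
```

In practice the calibration factor would come from known road geometry in the camera's field of view.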
  • Item
    Advances in Erbium-Doped Nanostructures: From Nanothermometry Applications to Single Photon Emission and Microresonator Innovations
    (2024) Hosseini Toodeshki, Elham; Gordon, Reuven
    This dissertation examines erbium-doped micro- and nanostructures in depth, with a focus on optical properties and fabrication techniques. It investigates nanothermometry, emphasizing the stoichiometric control possible in these materials, and demonstrates ratiometric temperature measurement at the nanoscale. The research achieves significant results in trapping erbium-containing nanocrystals using optical tweezers and nanoaperture trapping techniques, establishing their capabilities as single-photon sources. The study goes on to describe the fabrication of erbium-doped silica microcavities and thin films using the sol-gel method, revealing their potential in creating microdisks that support low-threshold lasing.
  • Item
    A Sim-to-Real Deformation Classification Pipeline using Data Augmentation and Domain Adaptation
    (2024) Sol, Joel; Najjaran, Homayoun
    Geometrical quality assurance is critical for improving manufacturing time and cost. This is especially challenging when visual or haptic assessment by human operators is necessary. Modern machine learning (ML) methods can solve this problem but require large datasets with diverse deformations. However, preparing those deformations using physical objects can be difficult and costly. This thesis uses Blender, an open-source simulation tool, to imitate object deformities and automate the preparation of synthetic datasets. The utility of these datasets is improved using two methods: data augmentation, such as background randomization, and domain adaptation networks. The background randomization approach provides a way to generalize the image distribution to various environments, whereas the domain-adapted approach provides a better-targeted distribution. This thesis showcases that synthetic data created in Blender can be effective for training deformation classification networks. The discrepancies between real and simulated environments can be mitigated to create models for sim-to-real deformation detection.
  • Item
    A Novel Approach to PUF-based Hardware Security: Noise-aware Authentication and Key Exchange Protocol
    (2024) Al Far, Hamza; Gebali, Fayez; El Miligi, Haytham
    This thesis proposes a novel approach to PUF-based IoT security. PUFs are used to provide a unique identity to the IoT devices that use them. However, the authentication process can be undermined by the noisy responses of PUFs. Traditional error correction codes can address this issue but increase system complexity, overhead, and security risks. To overcome these limitations, the study proposes an innovative approach that leverages a statistical analysis technique to extract the relevant information from the PUF response bits, resulting in a strengthened authentication key. The proposed method achieves an innovative authentication and key exchange protocol with enhanced security without traditional error correction codes. Evaluating the approach with synthetic PUF data demonstrates a significant improvement in PUF-based security. This study represents a significant step towards enhancing the security of IoT devices, demonstrating the potential of a unique combination of PUFs and statistical analysis in addressing the challenges of hardware-based security.
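One common ECC-free strategy in this vein (a toy sketch, not the thesis protocol) is to keep only those response bits whose values are statistically stable across repeated reads of the same PUF and use their majority values as the strengthened key:

```python
import numpy as np

def stable_key_bits(responses, reliability=1.0):
    """Select reliable PUF response bits by repeated-measurement statistics.

    responses: 2-D array-like (n_reads, n_bits) of noisy 0/1 responses
    from the same PUF. A bit position is kept only if the fraction of
    reads agreeing with its majority value meets the reliability
    threshold; the majority values of the kept positions form the key.
    """
    r = np.asarray(responses)
    ones = r.mean(axis=0)                  # per-bit fraction of 1s
    agree = np.maximum(ones, 1.0 - ones)   # agreement with the majority value
    mask = agree >= reliability            # which bit positions to keep
    key = (ones[mask] > 0.5).astype(int)   # majority values of kept bits
    return mask, key
```

The mask (which positions were kept) can be shared publicly as helper data, since it reveals nothing about the bit values themselves.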
  • Item
    Numerical and experimental investigation of enhancing transdermal model drug delivery: a study on bio-inspired microneedles and iontophoresis integration
    (2024) Madadi Masouleh, Masha; Hoorfar, Mina
    This study investigates the potential enhancement in model drug (acid/dye) delivery by integrating microneedle (MN) technology with iontophoresis (ITP), focusing on transitioning from cone-shaped MNs to bio-inspired variants. It aims to assess the influence of altering MN geometry, particularly incorporating barbs on bio-inspired MNs, on the electric field and surface area to understand their impact on acid/dye delivery. Anticipated outcomes suggest increased penetration depth of model drugs over time using bio-inspired MNs with ITP, indicating superior model drug delivery across the gel. Detailed findings and comparative analyses elucidate differences in penetration depths between bio-inspired and cone MN configurations, providing insight into drug delivery efficiency. The study merges bio-inspired MNs with ITP for enhanced transdermal model drug delivery (TDD). Using COMSOL Multiphysics 6.1, parameters like voltage distribution, electric field strength, and drug concentration within the skin are simulated. Bio-inspired MNs show superior electric field strengths, particularly at their edges, augmenting electrophoretic and diffusive flux, thereby improving drug concentrations within the skin. The maximum electric field strength measured is 50 V/m for cone MNs and significantly higher at 900 V/m for bio-inspired MNs, concentrated particularly at the edges of the bio-inspired MNs in contrast to the overall surface of cone MNs. The length of the channels created by the cone MNs is 1600 μm, and by the bio-inspired MNs, 2400 μm. Moreover, the combined effect of cone MNs and ITP exhibits the deepest penetration among cone configurations, reaching ~2000 μm after 10 min. The implementation of ITP as a driving force further amplifies the model drug's permeation through the punctured gel. Ultimately, the bio-inspired microneedle array (MA) and ITP achieve a remarkable and synergistic enhancement in dye and acid delivery.
The confluence of the bio-inspired MA and ITP displays the deepest penetration depth, reaching ~2600 μm after 10 min. The diffusion of the model drug through microholes created by the cone MA significantly enhances permeation, reaching a depth of approximately 1000 μm even without the application of ITP. Similarly, the microholes created by the bio-inspired MA allow for model drug diffusion to deeper layers, enhancing permeation up to ~1400 μm without ITP after 10 min. Higher fluorescence intensity, observed specifically in microholes created by the bio-inspired MA, signifies a more extensive diffusion of the model drug solution into deeper gel layers facilitated by these microholes. The investigation covers design, fabrication, experimental investigations, and discussions on outcomes and synergies between MNs and ITP. Examining the impact of varied MN geometries on model drug permeation rates promises advancements in drug delivery methods.
  • Item
    Analyzing Ocean Boundary Phenomena in Echograms: A Deep Learning Approach
    (2024) Senjaliya, Femina Bharatkumar; Albu, Alexandra Branzan
    This research emphasizes marine monitoring as a fundamental instrument for studying how the oceans influence global climate, biodiversity, and ecological systems in the Arctic region. Utilizing underwater active acoustic surveys conducted with moored multi-frequency echosounders as our data source allows us to capture the complexity of ocean settings. We propose a deep-learning approach to automate the identification of sea surface boundaries and near-surface phenomena in echograms, assisting oceanographers who currently rely heavily on time-consuming manual analyses. The identification of boundaries at the surface and the occurrence of bubble phenomena are vital to those who investigate marine environments, as these factors greatly affect the complex interactions between organisms. We propose a two-step, end-to-end deep learning approach: the first step uses an image classification framework to categorize echograms based on surface conditions, and the second step employs semantic segmentation frameworks to delineate the sea surface and near-surface bubbles within the water column. The segmentation in the second step is equipped with type-specific models, which are shown to outperform a single global segmentation model. Furthermore, our methodology incorporates innovative learning strategies, including a tailored boundary loss function, to enhance model performance. Through comprehensive testing with a range of image classification and semantic segmentation architectures, we identify the most effective models for Arctic echogram analysis. Our proposed deep learning pipeline showcases noteworthy capabilities in accurately characterizing and analyzing marine acoustic data.
  • Item
    Addressing Data Scarcity with Computer Vision Methods
    (2024) Dash, Amanda; Branzan Albu, Alexandra
    Data scarcity characterizes situations where the demand for abundant, quality data is greater than their availability. Lack of quality data is a significant issue when designing and implementing computer vision-based algorithms; more specifically, deep learning-based approaches require “large” amounts of curated data for training and validation. There are many scenarios, such as environmental monitoring, where gathering more data is not viable. This dissertation explores different methodologies and strategies for overcoming data scarcity in computer vision algorithms. While addressing all methods for handling data scarcity would be an over-ambitious endeavour, this dissertation focuses on three primary strategies for working with small datasets: traditional computer vision, deep learning regularization functions, and synthetic datasets. Detailed objectives, solutions and insights from each are presented for diverse problem domains and case studies within the computer vision field. The first strategy consists of developing traditional computer vision methods. We discuss this strategy for two case studies: estimating bird population and domain-independent video summarization. The first case study results in a method that integrates motion analysis and segmentation methods to cluster and count birds in large moving flocks, filmed using hand-held video devices by citizen scientists. The second case study addresses the high demand for automatic video summarization systems due to the dramatic increase in media streaming content and consumer-level video creation; our proposed method uses a bottom-up approach for the automatic generation of dynamic video summaries by integrating motion and saliency analysis with temporal slicing. The second strategy focuses on using regularization functions while training deep learning systems. 
We propose a novel custom loss function, Dense Loss, which was designed to use local region homogeneity regularization to promote contiguous and smooth segmentation predictions while also using an L1-norm loss to reconstruct dense-labelled annotation ground truth for a synthetic handwritten annotation mixed-media dataset. Regularization also helps when foreground and background classes are not well-represented; we thus propose a texture-based domain-specific data augmentation technique applicable when training on small datasets for deep learning image classification tasks. The third strategy consists of generating synthetic datasets and evaluating the performance of state-of-the-art deep learning architectures when trained on them. We propose a mosaic texture dataset and an image-to-text table summarization dataset. Both address a lack of data in their corresponding application domains. Our research shows that each application domain affected by data scarcity needs to be thoroughly studied before proposing solutions to mitigate this problem. Each of the projects developed in this dissertation supports the hypothesis that small datasets are viable sources for research and applications when their particularities are addressed during development and implementation. This dissertation concludes with a set of best practices for developing Computer Vision systems with small data as a contribution to the community.
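A minimal sketch of the idea behind such a loss (hypothetical, not the exact Dense Loss formulation) combines an L1 reconstruction term with a local-homogeneity regularizer that penalizes differences between adjacent predictions, encouraging contiguous, smooth segmentation maps:

```python
import numpy as np

def dense_loss(pred, target, lam=0.1):
    """Toy loss: L1 reconstruction plus local-homogeneity regularization.

    pred, target: 2-D arrays of per-pixel predictions / dense labels.
    The regularizer sums absolute differences between horizontally and
    vertically adjacent predictions (a total-variation-style penalty),
    promoting contiguous and smooth segmentation predictions.
    """
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    l1 = np.abs(pred - target).mean()                  # reconstruction term
    tv = (np.abs(np.diff(pred, axis=0)).mean()         # vertical neighbours
          + np.abs(np.diff(pred, axis=1)).mean())      # horizontal neighbours
    return l1 + lam * tv
```

In a deep learning framework the same two terms would be written with differentiable tensor ops so the regularizer shapes training gradients.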
  • Item
    Ego-motion Aware Multi-object Tracking: An application for a ROS-based Framework
    (2024) Mahdian, Navid; Najjaran, Homayoun
    Multi-object tracking (MOT) is a critical step for safe and reliable operation of robotics and autonomous systems in the dynamic and cluttered environments inherent to real-world applications. This thesis introduces a novel MOT framework designed for the Robot Operating System (ROS), serving as a versatile foundation for the implementation, testing, and evaluation of various MOT algorithms within the realm of robotics and autonomous systems. A key hallmark of this framework is its integration with both simulated environments and real-world robotic platforms, facilitating exhaustive testing and refinement of MOT algorithms under a broad spectrum of conditions. Moreover, this comprehensive framework is distinguished by its capability for automatic ground truth generation, which enables detailed and systematic evaluation across numerous operational scenarios. Within this framework, the Ego-motion Aware Target Prediction (EMAP) module is developed, which significantly enhances the performance of detection-based multi-object tracking algorithms. By integrating camera motion and depth information, EMAP effectively decouples camera movement from object trajectories, thereby minimizing tracking disturbances caused by the ego-motion of the observer. EMAP's effectiveness is rigorously demonstrated through evaluations using the KITTI dataset and a custom-generated dataset in the CARLA autonomous driving simulator, showing substantial improvements in tracking performance, especially in scenarios marked by significant camera motion or the absence of detections. Additionally, this thesis presents a self-supervised multi-object tracking algorithm that incorporates an adaptive track-matching mechanism. This mechanism leverages unlabeled data to refine tracking precision and efficiency, reducing the dependency on extensive manual annotations and thereby enabling more scalable and generalizable tracking applications.
Together, these contributions significantly advance the field of autonomous systems and robotics by paving the way for more robust and reliable multi-object tracking technologies.
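The ego-motion decoupling idea can be illustrated with a small sketch (hypothetical, not the EMAP implementation): a tracked 3-D point known in the previous camera frame is re-expressed in the current camera frame using the ego-motion transform, so that only the object's own displacement remains for the tracker's motion model:

```python
import numpy as np

def compensate_ego_motion(point_cam_prev, T_prev_to_curr):
    """Re-express a tracked 3-D point in the current camera frame.

    point_cam_prev: (x, y, z) in the previous camera frame.
    T_prev_to_curr: 4x4 homogeneous transform mapping previous-frame
    coordinates into current-frame coordinates (the camera ego-motion).
    Applying it removes the observer's motion from the track prediction.
    """
    p = np.append(np.asarray(point_cam_prev, dtype=float), 1.0)
    return (T_prev_to_curr @ p)[:3]
```

In a full tracker, the ego-motion transform would come from odometry or visual-inertial estimation, and the compensated point would seed the data-association step.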
  • Item
    An Evidential Deep Learning Classifier with an Integrated Capability for Uncertainty Quantification
    (2024) Hammad, Noofa; Najjaran, Homayoun
    While deep neural networks (DNNs) have demonstrated great proficiency in diverse tasks spanning various domains, the reliability of their predictions remains a subject of ongoing research. In the context of classification problems, there is a common misconception regarding probabilities generated by DNNs, falsely equating them with the confidence of the models in their assigned classes. Incorporating the softmax layer at the end of the network compels models to convert activations to probabilistic values between 0 and 1, irrespective of the underlying activation values. When activations are insufficient for accurate decision-making, raising uncertainty about the correct classification, a model should quantify its uncertainty about the true classification of the input data rather than making uncertain decisions. In this light, this study proposes a distance-based evidential deep learning (d-EDL) classifier with an integrated capability for uncertainty quantification (UQ). The d-EDL classifier comprises two key components: the first utilizes convolutional neural network (CNN) layers for feature extraction, while the second incorporates designed layers for decision-making. In the second component, the first layer calculates basic probability assignments (BPAs) from the extracted feature vectors using a distance metric, measuring proximity between an input pattern and selected data representatives. A clustering algorithm is employed to form representatives for each data label; the closeness to a label representative reflects the potential belonging of the input to that label. The second and third layers employ combination rules to merge BPAs, leveraging probability theory and Dempster-Shafer (D-S) theory. The output of the d-EDL network is a probability distribution extended to include uncertainty as a class. An end-to-end training method is provided to train the proposed classifier, enabling joint learning and updating of all network parameters.
Five variants of the d-EDL classifier, each with a different number of data representatives, are trained on an image dataset, and their uncertainty quantification ability is assessed. The assessment involves evaluating the models in three scenarios, each involving a common factor leading to misclassification: noise, image rotation, and out-of-distribution (OOD) data. The results demonstrate the excellent capability of d-EDLs, especially those with 20 and 40 data representatives, to effectively quantify uncertainty rather than misclassify when faced with unfamiliar data.
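The BPA-merging step rests on Dempster's rule of combination; a minimal stand-alone sketch (not the thesis implementation) is:

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments (BPAs) with Dempster's rule.

    m1, m2: dicts mapping frozenset hypotheses to masses summing to 1.
    Masses of intersecting hypotheses multiply; mass assigned to
    conflicting (disjoint) pairs is discarded and the rest renormalized.
    """
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}
```

Sets with more than one element carry mass that is committed to neither single class, which is how the "uncertainty as a class" output arises.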
  • Item
    Design of Frequency-Selective Surfaces for Advanced Applications
    (2024) Formiga Mamedes, Deisy; Bornemann, J.
    The advancement of mobile technology is driven by the requirements for wider bandwidth, higher data rates, a large number of users, and reliable connectivity. The fifth generation (5G) mobile network is currently in its early stages of commercialization with two frequency bands allocated for its technology. Frequency-selective surfaces (FSS) form a promising technology to help meet these requirements. Extensive research has been conducted on the use of FSSs as spatial filters in the sub-6GHz and millimeter-wave (mm-wave) spectra, as they are able to impart screening properties in the spatial domain. Therefore, this dissertation presents works focused on FSS technology to demonstrate and verify its advantages. First, a new polarization converter system using only a single-layer FSS is proposed. Design equations are introduced for the four-arms star geometry which is used as the polarizing element. Polarization converters have become popular in different communication systems due to their characteristics of mitigating the effects of polarization mismatching, thus improving signal strength. Second, an ultra-wide band-stop FSS operating at K- and Ka-bands for mm-wave applications is presented. This structure comprises a double-layer FSS with simple modeling, where a series of basic equations are implemented and described. When the proposed resonators are cascaded, they offer wide bandwidth, eliminating the need for extra layers. Third, a new beam-tilting and gain enhancement system operating at 28 GHz is proposed. The system is composed of a bio-inspired bow-tie antenna as the excitation source and a single-layer FSS positioned at the bottom of the antenna. The effect of FSS panel size is investigated to achieve the best antenna performance. Fourth, a system of closely coupled complementary and passive FSSs that achieves dual- and triple-band operations is presented.
Four configurations of the elements are investigated, which can present two and three transmission bands in one or both polarizations by inducing an electromagnetically induced transparency effect. Fifth, a novel complementary-inspired FSS with a reconfigurable frequency response is described. The proposed structure consists of two resonators with an immersed biasing network and only a single PIN diode per unit cell as the active device. Single- and dual-passband performance is achieved by switching the diode’s state from off to on. When the threshold voltage is applied, no passband appears. Sixth, two high-gain beam-switching antenna systems are presented. Both systems comprise a dipole antenna as the excitation source and single-layer PIN-diode-switched FSS panels as the mechanism for reconfiguring the radiation pattern. The first system is configured as a reconfigurable corner antenna with a large beam-switching range. The second system can steer the beam in the azimuth and elevation planes. Therefore, the works developed in this dissertation demonstrate the reliability of FSS technology in the sub-6GHz and mm-wave frequency ranges for different and advanced applications.
  • Item
    Optimal Embedding of the Phase Unwrapping Problem onto the Quantum Annealers
    (2024) Kashfi Haghighi, Mohammad; Dimopoulos, Nikitas
    Quantum computers and algorithms are undergoing rapid development, offering promising solutions to complex computational problems. This study focuses on harnessing the potential of quantum annealing to address the challenging phase unwrapping problem. Specifically, we employed D-Wave’s quantum annealers, currently among the most powerful in existence. To effectively utilize these systems, it is crucial to embed the problem onto their underlying structure, the Pegasus graph in the case of the D-Wave Advantage system. A shorter chain-length in the embedding process generally correlates with improved results. In the course of this thesis, we devised an algorithm for efficiently embedding the phase unwrapping problem onto the D-Wave Advantage system. Our approach yielded promising results when compared to D-Wave’s automatic embeddings. Notably, our introduced embedding boasts the minimum chain-length and utilizes the native structure of the target graph. Additionally, we leveraged D-Wave’s hybrid workflow, combining classical and quantum computing capabilities, to tackle larger image problems. Refinements to the hybrid method were implemented, resulting in enhanced performance. Experimental evaluations were conducted on actual quantum annealers, demonstrating that our refined algorithms outperform those provided by D-Wave.
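For intuition, the objective a quantum annealer minimizes is a QUBO, x^T Q x over binary vectors x. A tiny brute-force solver (illustrative only, but useful for sanity-checking small embedded problems against the annealer's output) is:

```python
import itertools
import numpy as np

def solve_qubo_bruteforce(Q):
    """Exhaustively minimize x^T Q x over binary vectors x.

    Q: (n, n) upper-triangular or general QUBO matrix. This is the same
    objective an annealer minimizes, so small instances can be checked
    exactly. Exponential in n; intended for n up to ~20.
    """
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best_e:
            best_e, best_x = e, x
    return best_x, best_e
```

On hardware, each logical variable of such a QUBO is mapped to a chain of physical qubits on the Pegasus graph, which is where minimizing chain length pays off.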
  • Item
    Infrared-Visible Image Fusion in the Gradient Domain
    (2024) Premaratne, Sanduni; Agathoklis, Panajotis; Bruton, Leonard T.
    Due to the complementary properties of infrared cameras compared to conventional visible-light cameras, it has become increasingly popular to fuse infrared and visible images of the same scene for better visual understanding. One major application is surveillance, which involves video and requires fast processing. Therefore, there is a need to investigate novel low-complexity fusion algorithms that can be implemented in real-time applications. In this study, we address this research problem with two-scale fusion in the gradient domain using saliency detection and image enhancement. In the proposed method, the source images are first decomposed into base and detail layers. Next, the base parts are fused in the gradient domain by choosing the maximum absolute gradient, whereas the gradients of the detail parts are fused using a weighted average, with weights calculated from saliency maps. Prior to fusion, the detail parts are enhanced using a guided filter-based enhancement approach. Finally, the fused gradients of the base and detail components are added together to obtain the gradients of the fused image, from which the fused image is reconstructed using a wavelet-based reconstruction technique. Experimental results demonstrate that the proposed method achieves very competitive performance in subjective and objective fusion assessments, while also outperforming most methods in terms of computational complexity.
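The base-layer fusion rule can be sketched in a few lines (an illustration, not the authors' code): at each pixel, keep whichever source's gradient has the larger magnitude:

```python
import numpy as np

def fuse_gradients_max(gx_a, gy_a, gx_b, gy_b):
    """Fuse two gradient fields by the maximum-absolute-gradient rule.

    (gx_a, gy_a) and (gx_b, gy_b): x- and y-gradient images of the two
    sources. At each pixel the gradient vector with the larger magnitude
    is kept, preserving the stronger edge from either source image.
    """
    mag_a = np.hypot(gx_a, gy_a)
    mag_b = np.hypot(gx_b, gy_b)
    pick_a = mag_a >= mag_b            # True where source A has the stronger edge
    gx = np.where(pick_a, gx_a, gx_b)
    gy = np.where(pick_a, gy_a, gy_b)
    return gx, gy
```

The fused gradient field is then integrated back to an intensity image, which is the role of the wavelet-based reconstruction step described above.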
  • Item
    Single-Class Instance Segmentation for Vectorization of Line Drawings
    (2024) Vohra, Rhythm; Branzan Albu, Alexandra
    Images can be represented and stored either in raster or in vector formats. Raster images are the most ubiquitous and are defined as matrices of pixel intensities/colours, while vector images consist of a finite set of geometric primitives, such as lines, curves, and polygons. Since geometric shapes are expressed via mathematical equations and defined by a limited number of control points, they can be manipulated in a much easier way than by directly working with pixels; hence, the vector format is much preferred to raster for image editing and understanding purposes. The conversion of a raster image into its vector correspondent is a non-trivial process, called image vectorization. Creating vector images from a given raster image can be time-consuming and requires the expertise of a skilled graphic designer. This thesis explores the effectiveness of a Deep Learning based framework to vectorize raster images comprising line drawings with minimal user intervention. To improve the visual representation of the image, each stroke in the line drawing is represented with a different label and vectorized. In this document, we present an in-depth study of image vectorization, including the objective of our research, its challenges, and potential resolutions, and we compare the outcomes of our approach on six datasets consisting of different types of hand drawings. More specifically, this thesis begins by comparing raster images with vector images, discussing the importance of image vectorization, and stating our objective to convert raster images to vector-based representations by accurately separating each stroke from the line drawings. In further chapters of this thesis, a Deep Learning based segmentation methodology is introduced to perform Single-Class Instance Segmentation of hand drawings, processing the input raster image by labeling each pixel as belonging to a particular stroke instance. This segmentation approach is able to leverage the spatial relationships between stroke instances.
A novel loss function is specifically designed for our highly imbalanced datasets by scaling the margins and adding a regularization term that improves feature selection. Our proposed margin-regularized loss function is combined, in a weighted sum, with the Dice loss to reduce spatial overlap errors and improve predictions on infrequent labels. Finally, the effectiveness of our segmentation technique for line drawing vectorization is compared experimentally with the state-of-the-art and our reference method. Our method can successfully handle a wide variety of human drawing styles. The results are comparable in terms of accuracy and far ahead in terms of speed and complexity compared with other methods.
  • Item
    Hardware Architecture for Accelerating Frequency-Domain Ultrasound Image Reconstruction
    (2024-02-09) Navaeilavasani, Pooriya; Rakhmatov, Daler N.
    Ultrasound is a widely employed biomedical imaging modality enabling non-invasive, low-cost, and real-time diagnostics. In a typical ultrasound system, a multi-channel transducer emits sound waves into the medium and then records returning echo signals that are subsequently converted into an image of the subsurface structure. Coherent plane-wave compounding (CPWC) is one of the latest ultrasound imaging techniques that involves emitting multiple plane-wave pulses at various angles and then combining angle-specific reconstructed image data into a final frame. This approach offers high data acquisition rates (e.g., hundreds or even thousands of raw data frames per second) that are crucial for capturing fast-changing phenomena in the imaged medium. High data acquisition rates should be matched with fast data processing to increase the frame rate of reconstructed, or beamformed, image frames. One example of highly efficient plane-wave beamforming methods is the Temme-Mueller algorithm that operates in the Fourier domain. This thesis describes a novel pipelined hardware architecture for accelerating the execution of this algorithm. The proposed design has been coded in VHDL and implemented on a modern Xilinx® field-programmable gate array (FPGA), taking advantage of Xilinx® intellectual property (IP) core reuse to reduce development time. Our architecture is capable of producing over 1,300 beamformed frames per second, where each frame contains 256K complex-valued data points using the 32-bit floating-point representation for both real and imaginary parts. The correctness of our FPGA-based beamformer has been verified by comparing its output to the reference software-based implementation of the Temme-Mueller algorithm. This verification was done on an experimental ultrasound dataset available as part of the public-domain PICMUS evaluation framework. 
Our evaluation results demonstrate that the proposed design provides a promising alternative to the conventional GPU-based approach to high-frame-rate ultrasound image reconstruction, paving the way for future algorithmic and architectural enhancements.
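The coherent compounding step at the heart of CPWC can be sketched very simply: angle-specific complex image estimates are averaged pixel by pixel, so phase-coherent signal adds while incoherent noise averages down. This toy version over flattened pixel lists is illustrative only and is independent of the Fourier-domain reconstruction and FPGA details described above.

```python
def compound(angle_images):
    """Coherent plane-wave compounding: average the complex pixel estimates
    from each transmit angle into one compounded image."""
    n = len(angle_images)
    return [sum(pixels) / n for pixels in zip(*angle_images)]
```

A real implementation would operate on 2-D arrays of 32-bit floating-point complex samples, but the arithmetic per pixel is exactly this average.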
  • Item
    Learning-based Ultra-Wideband Indoor Ranging and NLOS Identification
    (2024-02-09) Li, Xin; Dong, Xiaodai
    The need for precise indoor positioning has become increasingly important with the rise of Internet of Things (IoT) technology, robotics, and autonomous vehicles. Indoor positioning has a wide range of applications, including asset tracking, indoor navigation, and location-based services. To achieve high positioning precision for these applications, accurate and reliable indoor ranging is a key factor when using techniques like time of arrival (ToA), as it enables the calculation of distances between different objects in the indoor environment. In this thesis, we focus on machine learning-based approaches for indoor ranging and non-line-of-sight (NLOS) identification. The first part of the thesis concentrates on reducing ranging errors through machine learning by improving the resolution of channel impulse response (CIR) data. We collect a dataset of 412,172 traces of CIR data across 12 indoor line-of-sight (LOS) scenarios. This dataset is used to train and test three machine learning models, namely long short-term memory (LSTM), gated recurrent units (GRU), and multi-layer perceptron (MLP), to predict the range between the anchor and tag directly from the CIR data. The results demonstrate that the LSTM and GRU models outperform traditional methods and the device's built-in algorithm in terms of mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE), thereby showing the effectiveness of machine learning techniques for indoor ranging applications. On the other hand, indoor ranging accuracy can be significantly affected by NLOS conditions, where the direct path between the transmitter and receiver is obstructed and the signal has to travel through multiple reflections and diffractions before reaching the receiver. In this thesis, we propose a quantitative approach to differentiate between Soft and Hard NLOS based on the ranging error percentage.
We develop machine learning models to identify and classify NLOS conditions. Our study shows that classifying NLOS into Soft NLOS and Hard NLOS yields better identification accuracy than binary LOS/NLOS classification. Compared to traditional methods such as leading-edge detection or search-back window for ranging and positioning, our method exhibits superior performance in noisy, multipath, and NLOS environments.
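The error metrics used above to compare the range predictors (MSE, RMSE, MAE) can be computed as follows; this is a generic sketch with hypothetical names, not code from the thesis.

```python
import math

def ranging_metrics(true_m, pred_m):
    """Compute MSE, RMSE, and MAE for predicted vs. true ranges in metres,
    the metrics used to compare the LSTM, GRU, and MLP predictors."""
    errs = [p - t for p, t in zip(pred_m, true_m)]
    mse = sum(e * e for e in errs) / len(errs)
    mae = sum(abs(e) for e in errs) / len(errs)
    return {"MSE": mse, "RMSE": math.sqrt(mse), "MAE": mae}
```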
  • Item
    Log Message Anomaly Detection using Positive and Unlabeled Learning
    (2024-01-29) Seifishahpar, Fatemeh; Gulliver, T. Aaron
    Log messages are widely used in cloud servers and software systems. Anomaly detection of log messages is important as millions of logs are generated each day. However, besides having a complex and unstructured form, log messages form large unlabeled datasets, which makes classification very difficult. In this thesis, a log message anomaly detection technique is proposed which employs Positive and Unlabeled Learning (PU Learning) to detect anomalies. Aggregated reliable negative logs are selected using the Isolation Forest, PU Learning, and Random Forest algorithms. Then, anomaly detection is conducted using a deep Long Short-Term Memory (LSTM) network. The proposed model is evaluated using the commonly employed OpenStack, BGL, and Thunderbird datasets, and the results obtained indicate that the proposed model performs better than several well-known approaches in the literature.
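The reliable-negative selection step can be illustrated with a toy vote-aggregation sketch. The detectors below are arbitrary callables standing in for the Isolation Forest, PU Learning, and Random Forest decisions named above, and the `min_votes` parameter is hypothetical, not the thesis's actual aggregation rule.

```python
def reliable_negatives(unlabeled, detectors, min_votes=2):
    """Each detector flags a log as 'likely normal' (True); a log becomes a
    reliable negative when at least min_votes detectors agree. The reliable
    negatives then serve as the negative class for the final classifier."""
    selected = []
    for log in unlabeled:
        votes = sum(1 for detect in detectors if detect(log))
        if votes >= min_votes:
            selected.append(log)
    return selected
```

This two-step pattern (select reliable negatives, then train a supervised model on positives plus reliable negatives) is the standard PU Learning recipe.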
  • Item
    StretchVADER – A Rule-based Technique to Improve Sentiment Intensity Detection using Stretched Words and Fine-Grained Sentiment Analysis
    (2024-01-22) Jokhio, Muhammad Naveed; Gulliver, Thomas Aaron
    Shouting “HEEEELLLPPPPPPPPP” while watching a horror movie, or replying to a joke with a huge “HAHAHAHAHAHAHAHAHAHAHA”, are examples of word stretching. Word stretching is not only an integral part of spoken language but is also found in many texts. Though it is very rare in formal writing, it is frequently used on social media. Word stretching emphasizes the meaning of the underlying word, changes the context, and impacts the sentiment intensity of the sentence. In this work, a rule-based fine-grained approach to sentiment analysis named StretchVADER is introduced that extends the capabilities of the rule-based approach called VADER. StretchVADER detects sentiment intensity using textual features such as stretched words and smileys by calculating a StretchVADER Score (SVS). This score is also used to label the dataset. It has been observed that many tweets contain stretched words and smileys, e.g. 28.5% of tweets in a randomly extracted Twitter dataset. A dataset is also generated and annotated using SVS which contains detailed features related to stretched words and smileys. Finally, Machine Learning (ML) models are evaluated using two different data encoding techniques, namely TF-IDF and Word2Vec. The results obtained show that the XGBoost algorithm with 1500 gradient-boosted trees and TF-IDF data encoding achieved a higher accuracy, precision, recall and F1-score than the other ML models, i.e. 91.24%, 91.11%, 91.24% and 91.08%, respectively.
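Stretched-word handling of the kind StretchVADER relies on can be sketched with a simple regular expression. The run-length threshold of three and the function names are illustrative, not the thesis's actual rules, and alternating patterns like “HAHAHA” would need a separate bigram rule.

```python
import re

# A run of three or more identical word characters marks a stretched letter.
STRETCH = re.compile(r"(\w)\1{2,}")

def stretch_degree(word):
    """Count excess repetitions across all stretched runs, a hypothetical
    proxy for the intensity boost a stretched word carries."""
    return sum(len(m.group(0)) - 1 for m in STRETCH.finditer(word))

def normalize(word):
    """Collapse each stretched run to a single letter, recovering a base form
    ('HEEEELLLPPPPPPPPP' -> 'HELP') suitable for a VADER lexicon lookup."""
    return STRETCH.sub(r"\1", word)
```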
  • Item
    Physical Layer Authentication for Wireless Applications
    (2023-12-13) Hammouda, Mohammed; Gulliver, T. Aaron
    Internet of Things (IoT) devices have become ubiquitous and go far beyond smartphones and similar devices. The IoT allows for numerous applications such as smart homes, intelligent healthcare, and intelligent transportation. However, high deployment costs limit cellular network coverage in remote and rural areas, and the reliability of cellular infrastructure during natural disasters is a concern. Thus, space and ground network integration has been proposed to provide global connectivity and support a wide range of IoT applications. Unfortunately, spoofing attacks are problematic due to network complexity and heterogeneity. Authentication for access control is an efficient way to ensure user legitimacy. However, upper layer authentication (ULA) is challenging due to limited computational power, high complexity, and communication overhead. Thus, physical layer authentication (PLA) has been proposed to aid ULA in solving these problems. PLA exploits the fact that legitimate parties and attackers have distinct physical characteristics that are unique between every pair of connected peers based on their spatial locations. In this dissertation, PLA schemes are presented using wireless attributes. First, an adaptive PLA scheme for IoT applications in urban environments is proposed using machine learning (ML) with antenna diversity to increase the number of features. A one-class classifier support vector machine (OCC-SVM) is employed using the magnitude and real and imaginary parts of the received signal at each receive antenna as features. The sounding reference signal (SRS) in the 5G uplink radio frame is employed for this purpose. Results are presented which show that this scheme provides a high authentication rate (AR) with sufficient antenna diversity. Furthermore, an adaptive PLA scheme is presented for collaboration between distributed IoT devices in multiple-input-multiple-output (MIMO) systems.
The performance is evaluated considering two majority voting schemes for practical IoT applications. These schemes may be preferable for IoT devices with limited computing capabilities. An adaptive PLA scheme for low earth orbit (LEO) satellites is proposed that employs ML with Doppler frequency shift (DS) and received power (RP) features. This scheme is evaluated for fixed and mobile satellite services at different altitudes. Results are presented which show that the proposed scheme provides better authentication performance using DS and RP features together compared to using them separately. Moreover, PLA using a hypothesis test with threshold or ML for satellite authentication is presented. The results show that the AR with DS is higher than with RP at low elevation angles for both schemes, but is higher with RP at high elevation angles. Further, the ML authentication scheme provides a higher AR than the threshold scheme for a small percentage of the training data considered as outliers, but at larger percentages the OR threshold scheme is better. Finally, game-theoretic satellite authentication using physical characteristics for spoofing detection is presented. Results are given to demonstrate the effectiveness of the proposed approach.
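The combination of per-device binary authentication decisions by voting can be sketched as follows. The 'majority' and 'or' rules here are illustrative stand-ins; the two voting schemes actually evaluated in the dissertation are not reproduced.

```python
def combine_decisions(decisions, rule="majority"):
    """Combine per-antenna or per-device authentication decisions
    (True = accept as legitimate). 'majority' accepts when more than half
    the voters accept; 'or' accepts if any single voter accepts."""
    votes = sum(decisions)
    if rule == "or":
        return votes >= 1
    return votes > len(decisions) / 2
```

Such rules trade security for robustness: 'or' tolerates individual misdetections but is easier for a spoofer to satisfy, while majority voting requires agreement across most collaborating devices.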