Master's Projects
Browsing Master's Projects by department: "Department of Electrical and Computer Engineering"
Now showing 1 - 20 of 204
Item A comparison of Long Short-Term Memory, Convolutional Neural Network, Transformer, and Mamba models for sentiment analysis (2024) Ruan, Hang; Gulliver, Thomas Aaron
Sentiment analysis is a critical task in Natural Language Processing (NLP) that helps decode the emotions and opinions embedded in text. With applications spanning from market research and social media monitoring to political analysis and customer feedback evaluation, sentiment analysis provides invaluable insights into public opinion and consumer behavior. This project studies the evolution of sentiment analysis models, focusing on the advancements made by deep learning techniques such as Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNNs), and transformer-based models like Bidirectional Encoder Representations from Transformers (BERT) and Generative Pre-trained Transformer (GPT). These models have set new benchmarks for accuracy, efficiency, and versatility. Additionally, this project explores Mamba, a recent State Space Model (SSM) designed to overcome the computational challenges of transformers in handling long sequences, which demonstrates state-of-the-art performance on language modeling tasks comparable to transformers twice its size. This study examines the strengths and limitations of these models, comparing their performance on sentiment analysis datasets to provide a comprehensive understanding of their applicability and efficacy in various contexts.

Item A machine learning approach to network security anomaly detection (2025) Verma, Prateek; Yang, Hong-Chuan
Supervised machine learning has emerged as a highly effective technique for classification in anomaly-based cyber-threat detection systems due to its predictability and high accuracy. This work utilizes the CICIDS2017 dataset, which is widely recognized as a benchmark for anomaly detection research. The work begins with the idea of implementing a two-layered ML-based detection model.
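A two-layered detection model of this kind, a binary benign/malicious stage feeding a multi-class attack-type stage, can be sketched as follows. This is a hypothetical illustration on synthetic stand-in data, not the project's CICIDS2017 pipeline; all names and labels here are invented for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))                # stand-in flow features
is_attack = (X[:, 0] > 0).astype(int)         # layer-1 labels (synthetic)
attack_type = np.where(X[:, 1] > 0, 1, 2)     # layer-2 labels (synthetic)

# Layer 1: benign vs. malicious; Layer 2: attack type, trained only on
# the malicious subset, mirroring the two-stage design described above.
binary = RandomForestClassifier(random_state=0).fit(X, is_attack)
multi = RandomForestClassifier(random_state=0).fit(
    X[is_attack == 1], attack_type[is_attack == 1])

def classify(x):
    x = x.reshape(1, -1)
    if binary.predict(x)[0] == 0:
        return "benign"
    return f"attack type {multi.predict(x)[0]}"
```

The second stage never sees benign traffic, which keeps its decision focused on distinguishing attack families rather than re-deciding maliciousness.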
The proposed system's first layer performs binary classification to differentiate benign from malicious traffic, while a secondary, multi-class classification system identifies specific attack types to implement targeted countermeasures. Incremental Principal Component Analysis (PCA) and the Synthetic Minority Oversampling Technique (SMOTE) are applied to balance the dataset, which is critical for both binary and multi-class classification tasks. Among all evaluated machine learning models, LightGBM achieved superior performance with 99% accuracy, a 98.1% F1-score, and minimal resource usage, outperforming traditional methods such as SVM, KNN, Random Forest, and Decision Trees. Further feature reduction, guided by feature importance scores, led to an even more lightweight model while performance metrics such as accuracy, recall, and F1-score remained consistent or improved slightly within a margin of ±0.5%, highlighting the stability and efficiency of the proposed approach. The proposed system demonstrates that advanced, resource-efficient supervised ML models such as LightGBM can significantly improve real-time threat detection while offering a scalable and cost-effective solution for future cybersecurity deployments.

Item A machine learning framework for malware triage (2024) Danaeifard, Soroush; Traore, Issa; Woungang, Isaac
Every day, thousands of new malicious software samples emerge globally, posing threats to consumer devices, stealing private data, or inducing financial losses. The increasing number and sophistication of malware threats underscore the need for effective and efficient malware detection and triage schemes. Malware triage is a process used by cybersecurity professionals to quickly assess, prioritize, and respond to malware incidents.
Effective malware triage requires a combination of automated tools, skilled personnel, and well-defined procedures to quickly and accurately respond to malware incidents, minimizing damage and recovery time.

Item A simulation platform for connected autonomous vehicles incorporating physical and communication simulators (2024) Chen, Yuhao; Cai, Lin
This project report provides a holistic record of the development of a connected autonomous vehicle simulation framework incorporating a physics simulator and a communication simulator. The tool aims to help researchers working on vehicle communication protocols evaluate the simulated performance of their solutions in the physical world. Using this tool, communication researchers can observe the impact of their communication protocols on the actual connected autonomous vehicle operation process without needing to delve into the underlying logic of vehicle kinematic simulation. They only need to configure simple parameters, deploy their own protocols on the communication simulator, and observe the effect. This report begins by introducing the components and operating principles of the entire system, and then demonstrates its usage through a simple simulation example.

Item Achieving Quality of Service in Medium Scale Network Design Using Differentiated Services (2016-09-21) Khan, Usama; Gebali, Fayez
Quality of service (QoS) means packets are classified and sent to the destination based on the priority of the packet. Before the advent of this standard, data packets were sent using a "best effort" approach, on a first-come, first-served basis with no guarantees on reliability, throughput, or latency. This often results in congestion at the router as queues fill, and packets are dropped. The rise of multimedia applications creates a need for a new standard that guarantees bandwidth with low delay and jitter.
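In the Differentiated Services model, such prioritization is expressed by marking each packet's DSCP field so that routers along the path can queue it appropriately. A minimal, generic illustration follows; DSCP 46 (Expedited Forwarding) is the standard per-hop behavior for voice traffic, but the usage here is a sketch, not the project's simulated network configuration.

```python
import socket

# Mark a UDP socket's traffic as Expedited Forwarding (DSCP 46),
# the per-hop behavior typically used for delay-sensitive voice flows.
DSCP_EF = 46
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The IP TOS byte carries the 6-bit DSCP in its upper bits.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
```

Routers configured for DiffServ then map this code point to a priority queue, which is precisely the classification behavior the project evaluates.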
Multimedia applications such as VoIP and video conferencing are delay sensitive and cannot survive on "best effort" service; therefore, differentiators are required that can detect these traffic types and appropriately prioritize and queue them for effective transmission. This is achieved with the standard known as Quality of Service (QoS). Quality of Service can be achieved through several mechanisms, namely RSVP, RSVP-TE, MPLS, and Differentiated Services. The main objective of this project is to explain how a medium scale network can be redesigned to implement quality of service within the network. Real-time simulations are obtained for multiple performance factors and applied to a sample network to achieve the desired results.

Item Addressing Class Imbalance in Facial Emotion Recognition (2021-12-08) Ghafourian Bolori Mashhad, Sarvenaz; Baniasadi, Amirali
Computer vision has attracted wide usage across many domains in recent years. One area of computer vision that has been studied is facial emotion recognition, which plays a crucial role in interpersonal communication. This work demonstrates the advances that can be made in this field. It tackles the problem of intraclass variances in the face images of emotion recognition datasets. We test the system on an augmented dataset including CK+, EMOTIC, and KDEF dataset samples, which increases the intraclass variances in the face images of our dataset. The proposed method is based on SMOTETomek.

Item Advanced Encryption Standard Implementation on Field Programmable Gate Arrays (2017-12-05) Behrouzinekoo, Maryam; Gulliver, T. Aaron
Cryptography provides users with secure communications and data transmission privacy and authenticity (Coron, 2006). Today the most widely used algorithm for private key encryption is the Advanced Encryption Standard (AES). It operates on 128-bit blocks of data in the form of a 4×4 matrix of bytes called the state matrix.
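The state matrix is filled column by column from the 16-byte input block, per the FIPS-197 convention. A brief sketch, with hypothetical names chosen for illustration:

```python
# Arrange a 16-byte AES input block into the 4x4 state matrix.
# Bytes fill the state column by column: state[row][col] = block[row + 4*col].
def to_state(block: bytes):
    assert len(block) == 16
    return [[block[row + 4 * col] for col in range(4)] for row in range(4)]

state = to_state(bytes(range(16)))
# First column holds bytes 0..3; first row holds bytes 0, 4, 8, 12.
```

All AES round operations (shift rows, mix columns, sub bytes) are then defined on this 4×4 arrangement.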
The encryption/decryption process is performed on this matrix using key sizes of 128, 192, or 256 bits. The AES round operations include shift rows, mix columns, and sub bytes using finite field arithmetic. Numerous studies have been done on the AES cryptosystem focusing on design optimization in terms of the memory used in hardware implementation (Van Dyken & Delgado-Frias, 2010). The sub bytes operation dominates the hardware complexity of AES due to its nonlinearity. In this report, the AES hardware feasibility is improved by implementing the sub bytes operation using inversion in GF(256). This inversion is decomposed into a network of logic gates, which reduces the required read-only memory (ROM) by 89% compared to using lookup tables.

Item Agentless Host Intrusion Detection Using Machine Learning Techniques (2023-04-12) Liu, Jianfeng; Traore, Issa
With the rise in the frequency and sophistication of cyberattacks, host intrusion detection systems (HIDSs) have become an essential component in monitoring and protecting endpoints in the network security perimeter. Current HIDSs rely on a local software agent deployed on the monitored host that collects and processes or pre-processes required data. However, this architecture has adverse effects such as an increased attack surface and high maintenance cost and overhead. Recently, a generic agentless endpoint framework that transparently collects raw data from the monitored host was proposed by Ghaleb et al. [1], along with a basic threshold-based statistical model for intrusion detection as an initial proof of concept. This report extends the generic agentless framework by collecting a new dataset with more attack vectors and developing and comparing six machine learning models: k-nearest neighbors, logistic regression, naïve Bayes, decision tree, random forest, and support vector machine.
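The six classifier families named above are all available off the shelf in scikit-learn; a hedged sketch of such a comparison on synthetic stand-in data (not the agentless dataset collected for this report):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Synthetic placeholder data standing in for host telemetry features.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "kNN": KNeighborsClassifier(),
    "LogReg": LogisticRegression(max_iter=1000),
    "NaiveBayes": GaussianNB(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
}
# Fit each model and record held-out accuracy for side-by-side comparison.
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
```

In practice one would add cross-validation and per-class metrics, but the uniform fit/score interface is what makes this kind of six-way comparison straightforward.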
The experimental evaluation using the collected dataset confirmed the feasibility of agentless host intrusion detection, with increased detection efficiency and effectiveness.

Item Alternating Direction Method of Multipliers (ADMM) Techniques for Embedded Mixed-Integer Quadratic Programming and Applications (2020-05-13) Liu, Jiaqi; Lu, Tao
In this project, we delve into an important class of constrained nonconvex problems known as mixed-integer quadratic programming (MIQP). The popularity of MIQP is primarily due to the fact that many real-world problems can be described via MIQP models. The development of efficient MIQP algorithms has been an active and rapidly evolving field of research. In fact, previously well-known techniques for MIQP have been found unsuitable for large-scale or online MIQP problems where an algorithm's computational efficiency is a crucial factor. In this regard, the alternating direction method of multipliers (ADMM) heuristic has been shown to offer satisfactory suboptimal solutions with much improved computational complexity relative to global solvers based on, for example, branch-and-bound. This project provides the details required to understand ADMM-based algorithms as applied to MIQP problems. Three illustrative examples are included to demonstrate the effectiveness of the ADMM algorithm through numerical simulations and performance comparisons.

Item Analysing Twitter Feeds to Predict Stock Movements (2016-09-21) Venkataramana, Anoop; Gulliver, Aaron T.
On average, approximately 6,000 tweets are posted on Twitter every second, which amounts to approximately 500 million tweets a day and 200 billion tweets per year. In 2010, tweets per day were around 50 million, so in just five years the volume of data increased tenfold. This exponential increase in data creation and user activity makes Twitter an ideal tool for analysing financial trends.
Sentiment analysis is the process of identifying and categorizing opinions expressed in text and determining writer attitudes towards a particular topic. Few existing systems analyse tweets to predict sentiment, and their results may not be accurate due to the random and short nature of tweets. Existing information retrieval techniques rely heavily on linguistic features such as part of speech or trigger words and perform poorly because they cannot understand sentiment. In this project, a segmentation algorithm is used to improve the accuracy and hence provide better sentiment prediction. In the proposed model, a tweet is split into meaningful segments (a word or group of words), while context is preserved and extracted from the segments.

Item Analysis of Two Representative Algorithms of Depth Estimation from Light Field Images (2017-08-28) Chen, Yutao; Agathoklis, Panajotis; Li, Kin
Light field (LF) cameras offer many more advanced features than conventional cameras. One type of LF camera, the lenslet LF camera, is portable and has become available to consumers in recent years. Images from LF cameras can be used to generate depth maps, an essential tool in several areas of image processing that can be used in the generation of various visual effects. LF images generated by lenslet LF cameras have different properties than images generated from an array of conventional cameras and thus require different depth estimation approaches. To study and compare these differences, this project describes two existing algorithms for depth estimation. The first algorithm, from the Korea Advanced Institute of Science and Technology, estimates depth labels based on stereo matching theory, where each label corresponds to a specific depth.
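The idea of depth labels can be illustrated with a toy winner-take-all disparity search: each label is a candidate disparity (inverse depth), and the label with the lowest matching cost wins per pixel. This sketch is purely illustrative and is not either of the algorithms compared in this project.

```python
import numpy as np

def depth_labels(left, right, n_labels=4):
    # Cost volume: matching cost of each candidate disparity label d.
    cost = np.stack([
        (left - np.roll(right, d, axis=1)) ** 2
        for d in range(n_labels)
    ])
    return cost.argmin(axis=0)          # per-pixel winner-take-all label

rng = np.random.default_rng(0)
left = rng.random((8, 8))
right = np.roll(left, -2, axis=1)       # simulate a disparity of 2 pixels
labels = depth_labels(left, right)      # recovers label 2 everywhere
```

Real stereo-matching methods replace the per-pixel squared difference with windowed costs and add regularization (e.g., an MRF) so that neighboring pixels prefer consistent labels.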
The second algorithm, developed by the University of California and Adobe Systems, takes full advantage of the LF camera structure to estimate depths from the so-called refocus and correspondence cues, and combines the depth maps from both cues in a Markov Random Field (MRF) to obtain a quality depth map. Since these two methods apply different concepts and contain some widely used techniques for depth estimation, it is worthwhile to analyze and compare their advantages and disadvantages. In this report, the two methods were implemented using public domain software, the first being called the DEL method and the second the DER method. Comparisons with respect to computational speed and visual quality of the depth information show that the DEL method tends to be more stable and gives better results than the DER method for the experiments carried out in this report.

Item Anomaly detection in drone activities: Data collection and unsupervised machine learning modeling (2025) Chen, Zhuo; Traoré, Issa; Mamun, Mohammad
As Internet of Things (IoT) devices, drones are among the most popular unmanned aerial vehicles (UAVs), equipped with multiple sensors, cameras, and communication systems. These features expose them to potential vulnerabilities exploitable by hackers, making it crucial to explore these vulnerabilities and implement effective anomaly detection while operating UAVs. This study investigates a DJI Edu Tello drone to comprehensively assess its vulnerabilities and develop anomaly detection mechanisms using different unsupervised machine learning techniques. Two types of data were collected: benign data from legitimate actions and attack data comprising nine types of attacks. Feature extraction and engineering were performed based on scripts from the Canadian Institute for Cybersecurity (CIC), which were modified to suit the specific needs of this project.
The modifications aimed to improve the robustness of the detector by removing and modifying existing features and introducing new measurements to represent the captured packets. The anomaly detector was formulated after comparing three unsupervised machine learning algorithms, Isolation Forest, Local Outlier Factor (LOF), and Elliptic Envelope, through extensive performance evaluations and analyses. The study demonstrated the effectiveness of these algorithms in detecting anomalies and enhancing the security of drones. The findings also highlight the critical role of robust feature engineering and careful algorithm selection in developing a reliable anomaly detection system for UAVs.

Item Anomaly Detection Systems for Distributed Denial of Service Attacks (2017-02-27) Raza, Assad; Gulliver, T. Aaron
Distributed Denial of Service (DDOS) attacks persist and are growing stronger. According to the latest data, 2016 saw DDOS attacks that were large in both frequency and size [arbor]. DDOS attacks have been investigated extensively and various countermeasures have been proposed to protect networks from these attacks. However, DDOS is still considered a major threat to current networks and there is a need for Anomaly Detection Systems (ADSs) that accurately detect DDOS attacks. Furthermore, network traffic now contains significant Peer to Peer (P2P) traffic. P2P traffic in Europe accounts for more than a quarter of all bandwidth and 40 percent of all packets sent. Previous work has shown that P2P traffic can have a negative impact on the accuracy of ADSs. A P2P traffic preprocessor was proposed in [sardarali] to compensate for the adverse impact of P2P traffic on ADSs. In this project, two well-known anomaly detectors, the Network Traffic Anomaly Detector (NETAD) and the Maximum Entropy Anomaly Detector (MaxEnt), are evaluated with and without this P2P traffic preprocessor for the detection of DDOS attacks.
The performance of these ADSs has also been evaluated for the detection of TCP and UDP flood Denial of Service (DOS) attacks. Results are presented which show that using this P2P traffic preprocessor improves the ability of these ADSs to detect attacks.

Item Assessing IP Weight Metrics for Cloud Intrusion Detection using Machine Learning Techniques (2018-02-22) Hu, Ruiqi; Traoré, Issa
Despite the growing popularity of cloud computing, security is still an important concern of cloud customers and potential adopters. Cloud computing is prone to the same attack vectors as traditional networks, in addition to new attack vectors that are specific to cloud platforms. Intrusion Detection Systems (IDS) deployed in the cloud must take into account the specificity of the underlying threat landscape as well as the architectural and operational constraints of cloud platforms. In this project, an IDS that utilizes IP weight metrics for feature selection is implemented. Additionally, this system is tested with different supervised classification models and evaluated on a cloud intrusion dataset. In comparison with results in a conventional network environment, we conclude that the performance of the IDS against cloud intrusions is promising; however, other developments, such as unsupervised intrusion detection techniques and additional data preprocessing stages, should be investigated to make the best use of the system.

Item Assessing the Effectiveness of Malicious Domain Prediction Using Machine Learning (2023-04-28) Bu, Jinlin; Traore, Issa
Malicious domains are a serious threat to network security as they deceive users into accessing them, leading to information disclosure, identity theft, and economic losses. Despite efforts to tackle this problem, cybercriminals continue to buy and use brand-new domains to evade detection, bypassing network defenses and endangering users' security. Predicting future malicious domains in advance can greatly reduce their harm.
The Domain Prediction System (DPS) developed by one of the industry partners of the Information Security and Object Technology (ISOT) Lab aims to predict potentially malicious domains in advance, but the effectiveness of the system needs to be tested, as it is uncertain whether the predicted domains will be used for malicious purposes. This report introduces the problem's background and describes the dataset used in the experiments, then evaluates the effectiveness of the DPS by comparing two sets of models: baseline and predictive. The baseline models were obtained by training and testing different machine learning (ML) classifiers using existing (known) benign and malicious domains. The predictive models were obtained by training the ML classifiers using domains generated by the DPS that may be used for malicious purposes, and testing using the same benign domains as before. The evaluation of the predictive models on the same test set as the baseline models yielded comparable performance measures, providing a strong indication of the utility and credibility of the predicted domains.

Item Assessing the Effectiveness of Snort in Detecting Malicious URLs (2023-08-29) Zuva, Simbarashe; Traore, Issa; Woungang, Isaac
Web attacks have been on the rise in recent years, and organisations are constantly searching for new and better ways to detect and block the corresponding attack vectors. Prominent attributes of web attack vectors include the malicious domains used to trigger or sustain these attacks, for instance through launching phishing attacks or hosting command and control (C&C) infrastructures. Accurately detecting and blocking malicious domains has become increasingly difficult due to the evasive techniques used by attackers to mask their activities, emulating legitimate network traffic to a high degree and using tactics such as domain generation algorithms (DGA) and fast flux DNS.
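Detection approaches for such evasive domains often start from simple lexical features of the domain name itself; a hypothetical sketch follows (the features and thresholds are illustrative, not drawn from these projects):

```python
import math
from collections import Counter

# Two common lexical cues for DGA-like domains: long labels and high
# character-distribution entropy. Thresholds here are illustrative.
def shannon_entropy(s: str) -> float:
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def looks_generated(domain: str, entropy_thresh=3.5, len_thresh=20) -> bool:
    name = domain.split(".")[0]
    return len(name) >= len_thresh or shannon_entropy(name) >= entropy_thresh

looks_generated("example.com")                  # short, low entropy
looks_generated("xj3k9qv0pl2mzh8wq4rt5ab.com")  # long, random-looking
```

Real detectors combine many such features with learned classifiers, since simple thresholds are easy for attackers to evade.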
Snort, an open-source intrusion detection system, has traditionally been utilized to detect network intrusions through network traffic signature analysis. However, while Snort has subsequently been upgraded to enable the detection of web attacks, its effectiveness in detecting malicious domains is questionable because of the coarse-grained nature of web attack signatures. At the same time, it is reasonable to assume that there is an implicit relation between granular attacks and the usage or occurrence of malicious domains. In this project, a platform is developed to explore and assess experimentally the ability of Snort to detect malicious domains. The proposed approach extracts useful indicators of compromise (IoC) from the granular Snort alerts triggered by web visits and leverages this information to establish whether the corresponding URLs are benign or malicious. The platform was built around a headless Chrome browser and the pfSense open-source firewall, which has a built-in Snort engine. The experimental evaluation, conducted using a public dataset of benign and malicious domains, yielded important insights into the strengths and limitations of Snort in detecting malicious domains, and helped identify directions for future improvements.

Item Attack Fingerprints based on the Activity and Event Network (AEN) Model (2020-08-12) Nie, Chenyang
The Activity and Event Network (AEN) graph is a new framework that enables capturing ongoing security-relevant activity and events occurring at a given organization using a large random time-varying graph model. The graph is generated by processing various network security logs, such as network packets, system logs, and intrusion detection alerts. In this report, we show how known attack methods can be captured generically using attack fingerprints based on the AEN graph.
The fingerprints are constructed by identifying attack idiosyncrasies in the form of subgraphs that represent indicators of compromise (IOCs), which are then encoded using PGQL queries. Among the many attack types, three main categories are implemented in our model: probing, Denial of Service (DoS), and authentication breaches; each category contains its common variations. The experimental evaluation of the fingerprints was carried out using a combination of intrusion detection datasets and yielded very encouraging results.

Item Audio analysis of customer calls for predicting purchase intentions: A novel approach to e-commerce insights (2024) Yu, Miao; Li, Kin Fun
Client audio recordings represent a valuable resource for many types of businesses. Utilizing these recordings to identify potential customers can help enhance purchase rates and reduce marketing costs, particularly with machine learning methods that automatically label different groups, including positive, neutral, and negative buyers, instead of relying on manual analysis. Though previous research has predominantly focused on text content analysis for this purpose, audio features, which effectively capture voice nuances such as tone, pitch, rhythm, and the interaction patterns between interviewers and interviewees, may impact model performance. This project explores an innovative method. It first investigates the effectiveness of emotion detection through audio features, leveraging two datasets: the Toronto Emotional Speech Set (TESS) and the Surrey Audio-Visual Expressed Emotion (SAVEE) dataset. Furthermore, hierarchical clustering techniques are applied to explore the relationship between emotion-related audio features and customer categories using audio data provided by VINN Auto, an e-commerce firm.
Next, Exploratory Data Analysis (EDA) is conducted to find the correlation between interaction-related audio features and customer categories, including positive, neutral, and negative buyers, within the same dataset after labeling it. Using supervised learning, the results indicate that integrating audio features, including emotion-related and interaction pattern features, can affect the performance of models such as Support Vector Machines (SVM), Decision Trees, and Extreme Gradient Boosting (XGBoost), particularly when combined with traditional audio content-related features such as Term Frequency-Inverse Document Frequency (TF-IDF) scores while applying an adjusted weight configuration for the positive class. After this exploration, an ensemble method using a soft voting mechanism across these three models is developed to assess whether it can enhance the identification of potential purchasers. Combining emotion-related audio features, interaction pattern features, and content-based features such as TF-IDF scores with tailored weight configurations highlights the value of incorporating audio features in customer identification tasks compared with using content-based features alone, and could be a robust strategy for improving classification outcomes in future analyses.

Item Augmenting Wireless Quality of Service Metrics with Crowdsourced Wireless Quality of Experience Data (2015-12-17) Macdonald, Hunter; Darcie, Thomas; Neville, Stephen
Due to advances in mobile devices, service providers must support a roughly year-over-year doubling of data traffic on their wireless networks. Large capital expenditures are required on an ongoing basis to upgrade networks and keep up with this increasing demand. However, revenue growth is not keeping pace with these capital costs. This is placing significant capital strain on wireless service providers as they seek to increase the extent and capacity of their infrastructure deployments.
Service providers are highly motivated to increase revenues in order to improve their bottom line. Unfortunately, it is difficult to increase average revenue per user (ARPU), which has remained stagnant for most service providers in North America for the last three years. Instead, 70% of service providers cite improved customer attraction and retention, often achieved through providing a better wireless user experience, as their core strategy for improving revenues and affording the required network upgrades. The understood dynamic is that customers do not have strong loyalties to their wireless service providers; they will willingly change providers to receive a better wireless experience. However, the approach of attracting customers with superior quality of experience can only work for some service providers. When it comes to improving customer retention and attraction rates there must be winners and losers, since not every service provider can deliver the best quality of service. This has produced a highly competitive customer acquisition and retention landscape, with each service provider striving to attract a growing share of customers so that more revenue is available to support reinvestment into their networks. In many cases service providers will go so far as to buy customers out of their existing competitor contracts in order to gain their patronage. Given that the majority of wireless service providers are competing on wireless quality to attract and retain customers, the industry has reached a critical point at which being profitable relies on near perfect deployment of a limited amount of capital to improve customer experience. A wireless service provider who poorly deploys its capital will fall behind the competition in terms of wireless quality and lose portions of its critically important customer base. This means that revenues drop further, reducing the capital available to reinvest into the network.
This accelerates the process, so that once a wireless service provider falls behind in quality it becomes increasingly difficult to catch up. Hence it is critical for service providers to understand what is influencing wireless user experience on their networks so that effective strategies can be put in place for cost-efficient continuous improvement. Service providers are actively seeking solutions to help them be more intelligent with how they spend their network improvement dollars. Basic economics suggests that companies may compete on both price and quality. This is of course also true for communications networks, and some smaller players have emerged which compete by offering a lightweight, low-cost feature set. However, the same core dynamics exist among this group of service providers. If one of the low-cost players poorly deploys their capital, and provides a lower quality of service at a given price point, then they will lose customers to those offering better service at the same (or lower) price point. Losing those customers reduces revenue for that wireless service provider, making it difficult for them to remain competitive. Hence, even among low-cost providers, it is critical that they deploy solutions that help them spend their network deployment and improvement capital as intelligently and cost-effectively as possible. Quality of Service (QoS) has traditionally been measured using network probes deployed within the service provider's core network. Hence QoS describes the network's perspective of user experience. At this time, the majority of network investment decisions are made to achieve the greatest gains in QoS. This approach does achieve some level of customer-perceived success. However, making decisions based on maximizing QoS does not necessarily mean that consumers will see improvement. QoS often fails to reflect the consumer's perception of wireless quality, which can, at times, be substantially different.
Using QoS as the key performance indicator for a wireless network creates problems because it only incorporates information collected with core network probes or via deep packet inspection. As an example of the shortcomings of network-side monitoring, consider what happens when a mobile device fails to connect to a communications network. In this case the dropped call doesn't reach the network, and network monitoring solutions in the core are blind to the error. However, if monitoring were also performed directly on the mobile device, that event would be recorded. It is increasingly important to make decisions that maximize the customer's perception of received quality. Wireless service providers can therefore improve the effectiveness of their network investments by augmenting their existing QoS information with user experience information collected directly on mobile handsets. These device-side readings reflect the consumer's perspective of quality. Additionally, by monitoring directly on mobile handsets, types of failure events can be captured that are not measurable by core network probes and deep packet inspection. Quality of Experience (QoE) is a term coined to characterize these key performance indicators of a wireless user's experience, built by incorporating direct mobile handset data collection. QoE requires not just on-device measurement but also an understanding of the levels of network service that on-device applications require. Customers experience networks through the applications they use on their wireless devices. Hence, for a service provider to truly understand a customer's QoE, they must monitor experience from the application layer of their customers' mobile handsets and build an understanding of how specific network conditions affect the performance of mobile applications.
For example, service providers must become well-versed in the customer's perspective of video and voice over IP (VoIP) calling sessions occurring over their networks, where these are distinct from, for example, the network demands of text messaging. This report explores the differences between QoE and QoS in order to highlight the benefits of deploying solutions focused on monitoring QoE to enhance network planning and operations practices. In particular, customer QoE as it relates to video and VoIP services is examined, as those are the services that tend to be the most network sensitive while having a strong potential to impact a service provider's customer churn rate.

Item Authentication Algorithms modelling and Simulations of an Arbiter PUF (2023-05-08) Khan, Vaseem; Gebali, Fayez
Physical attacks represent a threat to intellectual property, confidential data, and service security because they typically involve reading and modifying data. Attackers frequently have access to tools and resources that can be utilised, either invasively or non-invasively, to read or corrupt memory. Secret keys for cryptographic techniques are often kept in memory. Physical Unclonable Functions (PUFs), which dynamically construct keys only when necessary and do not need to be retained on a powered-off chip, appear to be a potential remedy for such issues. PUFs are circuit primitives that use the inherent differences of microchips introduced during the manufacturing process to produce a distinctive "fingerprint" output sequence (response) to a particular input (challenge). The PUF is an excellent choice for creating cryptographic keys since these variations are stochastic, device-specific, hard to duplicate even by the same manufacturer using identical procedures, tools, and settings, and are intended to be static. The delay-based arbiter PUF is the subject of our study. It exploits the differences in propagation delays between two symmetrical paths.
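The arbiter PUF is commonly described by a linear additive delay model; a toy simulation under that model follows. The stage delays here are random stand-ins for manufacturing variation, not measured values from any device.

```python
import numpy as np

# Linear additive arbiter PUF model: each stage contributes a delay
# difference whose sign depends on whether the challenge bits of that
# stage and all later stages swap the two racing paths; the arbiter
# outputs the sign of the accumulated difference.
rng = np.random.default_rng(1)
N_STAGES = 64
stage_delay = rng.normal(0.0, 1.0, N_STAGES)  # per-stage delay mismatch

def response(challenge):
    c = np.asarray(challenge)
    # phi[i] = product over j >= i of (1 - 2*c[j]): the parity of path
    # swaps after stage i, i.e. +1 or -1.
    phi = np.cumprod((1 - 2 * c)[::-1])[::-1]
    return int(phi @ stage_delay > 0)

challenge = rng.integers(0, 2, N_STAGES)
bit = response(challenge)  # same device + same challenge -> same bit
```

Because the response is a deterministic function of the fixed (but device-unique) stage delays, repeated queries with the same challenge reproduce the same bit, which is the property authentication protocols build on.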
Without the need for helper data or secure sketch techniques, we developed some of the most modern algorithms that can be used to enable robust authentication and secret key generation. Finally, we present data that demonstrates how these devices behave and how their functionality is influenced by the chosen authentication mechanism and key system variables.