Theses (Physics and Astronomy)

Recent Submissions

  • Item
    Study of Dalitz decays from π0 in e+e− → τ+τ− events recorded by the Belle II detector as a control sample for dark sector search validation
    (2025) Sutariya, Dhwani Raju; Roney, J. Michael
    This thesis presents a validation study of Monte Carlo (MC) simulations in the Belle II experiment by comparing simulated data with real experimental data, along with a measurement of the Dalitz decay branching fraction (BF). The analysis focuses on the Dalitz decay of the neutral pion, π0 → γe+e−, which serves as a control sample for searches for dark photons in e+e− → γ[A′ → e+e−] events. The neutral pions come from tau-pair events in which τ+ → ν̄τ π+[π0 → γe+e−]. By evaluating efficiency and purity, we ensure the robustness of the event selection, background suppression, and modeling of key kinematic distributions. After applying the selection criteria, the computed MC purity is 99.27% with an efficiency of 0.0122, and the final data sample contains 572 events. The Dalitz decay BF is measured to be 0.01076 ± 0.00045 (stat.), consistent within 1.7σ with the Particle Data Group (PDG) value of 0.01174 ± 0.00035. This study is limited to the evaluation of statistical uncertainties on the BF; systematic studies are discussed as follow-up work. These findings provide essential input for improving data-MC agreement in future dark photon searches and for improving the measurement of the Dalitz decay BF.
  • Item
    Development of in-situ plasma processing on 1.3 GHz superconducting radiofrequency cavities at TRIUMF
    (2025) Hedji, Daniel; Laxdal, Robert; Junginger, Tobias
    Superconducting Radio Frequency (SRF) technology is a key component of many particle accelerators operating in continuous wave, or high duty cycle, mode. The on-line performance of SRF cavities, as defined by the accelerating gradient and the unloaded quality factor, Q0, is degraded by the gradual accumulation of particulate contamination, which promotes field emission. Conventional cleaning procedures are both time- and resource-intensive, as they are done ex-situ. Plasma processing is an emerging in-situ cleaning method which chemically removes hydrocarbon-based field emitters through the ignition of a plasma in the cavity volume. An R&D program is underway at TRIUMF with the goal of developing fundamental power coupler (FPC) driven plasma processing of the installed 1.3 GHz nine-cell cavities in the ARIEL 30 MeV SRF eLINAC. Processing recipes have been systematically studied off-line in one single-cell and two multi-cell cavities. Cavities were first artificially contaminated using a helium-methane plasma. In most of the tests, the removal of hydrocarbons was verified through the byproduct responses on a Residual Gas Analyzer (RGA). A plasma recipe with a cavity pressure of 80 mTorr and a gas ratio of 95% helium to 5% oxygen was found to remove the largest abundances of hydrocarbon byproducts from each of the tested cavities. Cavity performance changes were tested cryogenically before and after conditioning with this particular recipe. These experiments were unable to recover the cavity performance but did provide insight toward the plasma processing procedure and apparatus needed for assembled cavities. Multi-cell testing was also conducted to identify plasma locations for the various modes in the fundamental TM010 passband. Here, a predictive model was developed to compare frequency shift data resulting from a plasma ignition with field behavior obtained from beadpull distributions through a least-squares minimization. The results show the estimated plasma locations and their movement as the power is increased.
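The mode-by-mode comparison of plasma-induced frequency shifts with beadpull field profiles lends itself to a simple least-squares formulation. The sketch below is a minimal illustration of that idea on synthetic data, not the thesis code: it assumes a Slater-like perturbation in which each passband mode's frequency shift scales with the squared field amplitude at the plasma location, and the profiles, shifts, and variable names are all invented for illustration.

```python
# Minimal sketch: locate a plasma along the cavity axis by least-squares matching
# of per-mode frequency shifts against beadpull field profiles (synthetic data).
import numpy as np
from scipy.optimize import least_squares

z_grid = np.linspace(0.0, 1.0, 500)                     # normalized axial position
# |E_m(z)| for the 9 passband modes, standing in for measured beadpull profiles
E_profiles = np.array([np.abs(np.sin((m + 1) * np.pi * z_grid)) for m in range(9)])

rng = np.random.default_rng(0)
true_z, true_k = 0.37, 5.0                              # "true" plasma location and coupling
df_measured = true_k * np.array([np.interp(true_z, z_grid, E) for E in E_profiles]) ** 2
df_measured += rng.normal(0.0, 0.05, size=df_measured.size)   # measurement noise

def residuals(params):
    """Residuals of the Slater-like model: shift_m = k * |E_m(z0)|^2."""
    z0, k = params
    E_at_z0 = np.array([np.interp(z0, z_grid, E) for E in E_profiles])
    return k * E_at_z0 ** 2 - df_measured

fit = least_squares(residuals, x0=[0.5, 1.0], bounds=([0.0, 0.0], [1.0, np.inf]))
z_plasma, coupling = fit.x
print(f"estimated plasma location: z = {z_plasma:.3f} (true {true_z})")
```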
  • Item
    Simulating the chemical enrichment of the intra-group medium
    (2025) Padawer-Blatt, Aviv; Babul, Arif
    The channels by which heavy elements are produced through nucleosynthesis in stars, ejected from stars into the surrounding interstellar medium (ISM) of host galaxies, and dispersed (both spatially and in thermodynamic phase, i.e. density, temperature, and velocity) into their gaseous atmospheres are fundamental to the formation and evolution of galaxies, as well as of gravitationally bound collections of galaxies: groups and clusters. These massive systems host hot diffuse gas throughout their volume, known as the intragroup medium (IGrM), which can become substantially enriched with metals, informing us about stellar populations, chemical production and enrichment mechanisms, large-scale gas flows, and gas- and metal-mixing. This thesis investigates the chemical enrichment of the IGrM using cosmological simulations. Specifically, I compare results from the Simba and Simba-C simulations, focusing on the distribution of metal abundances in galaxy groups. Simba-C incorporates an updated and more realistic chemical enrichment and stellar feedback model (Chem5), leading to notable differences in IGrM abundances compared to Simba. I examine projected emission-weighted abundance profiles, finding that Simba-C generally produces lower-amplitude abundance profiles with flatter cores, aligning better with observational data across a range of X-ray-relevant metals. However, the agreement between simulations and observations for both Simba-C and Simba worsens with decreasing group mass through an increase in the amplitudes of the simulated abundance profiles relative to those of the observed profiles; this agreement is also somewhat sensitive to the specific element under consideration. Moreover, I investigate the 3D mass-weighted abundance profiles to deepen my understanding of the physical mechanisms driving the changes found between Simba and Simba-C and between low- and high-mass groups. The results indicate that Simba-C enriches the IGrM to a lesser degree than Simba across all studied metals and mass scales, and produces less total metal mass in the hot diffuse phase. I ascribe these features to reduced metal yields in Chem5 compared to Simba and the replacement of Simba's instantaneous enrichment model with Chem5 in Simba-C. On the other hand, Simba-C actually contains more total hot gas mass in low-mass groups than does Simba, which may be due to slight changes in the stellar and AGN feedback models. My study reveals that accurate sub-grid models for chemical enrichment, as well as metal dispersal and mixing processes, are required to realistically reproduce observed group environments in cosmological simulations.
  • Item
    Optical, mechanical, and detector developments for the Prime-Cam 850 GHz module
    (2024) Huber, Anthony I.; Chapman, Scott
    Prime-Cam is a first-generation instrument designed for the Fred Young Submillimeter Telescope (FYST) in the Cerro Chajnantor Atacama Telescope (CCAT) Facility, which will be sited on Cerro Chajnantor in the Chilean Atacama Desert at an elevation of 5600 m. Among the instrument modules being developed for the Prime-Cam receiver, the 850 GHz module will probe the highest frequency and presents unique challenges in optical design, coupling, detection, and readout. The 850 GHz module is of particular importance to the astronomical community due to the absence of near-future proposals for instruments at similar wavelengths and at equivalent sites. This work describes the parameter space of the 850 GHz optical system spanned by the Fλ spacing, beam size, pixel sensitivity, and detector count. The optimization of an optical design for the 850 GHz instrument module for CCAT-prime is also presented. Success of the 850 GHz module hinges on the development of state-of-the-art, photon-noise-limited kinetic inductance detector (KID) arrays which will facilitate quality single-frequency, dual-polarization measurements in the given atmospheric windows. The 850 GHz module will consist of approximately 45,000 titanium-nitride, polarization-sensitive, lumped-element kinetic inductance detectors, meaning the module will field more microwave kinetic inductance detectors than any other millimeter-wave receiver to date. We present the critical aspects of the detector design and discuss solutions to the challenges of efficient optical coupling and a multi-octave readout band. The detectors are being designed to be read out using a multi-octave readout architecture, allowing for approximately double the multiplexing of other Prime-Cam modules. The parameter space in the development of these detectors is explored, including testing a means of shorting inductors to modify the resonance with minimal changes to the absorber architecture and testing different volumes of the inductor. Results and optical characterization of the prototype pixels for the 850 GHz instrument module are presented. The results of this work will directly inform the design of microwave KIDs for the multi-octave readout architecture as part of the development of densely packed arrays for the Prime-Cam instrument. The 850 GHz module is expected to be observing in 2026. Also included is a report on a blind, millimeter-wave redshift survey of the brightest, unlensed submillimetre galaxies from the SCUBA-2 Cosmology Legacy Survey. The 14 brightest submillimetre galaxies (S850 > 11 mJy) identified as single sources by the SMA were selected from the Lockman Hole, AEGIS, and CDF-N fields. Twelve of these 14 sources were observed using the IRAM NOEMA interferometer, and at least one strong emission line was detected in each galaxy. Redshifts are assigned to each of the observed galaxies, unambiguously in the five cases with two or more detected lines, and guided by photometric redshifts in the seven single-line cases. The luminosities and widths of the CO lines, as well as the flux densities, are used to probe the properties of these hyper-luminous infrared galaxies. The extreme nature of these galaxies is then contrasted with the results of previous surveys.
  • Item
    An odyssey in exploring nuclei: High-precision mass measurements of exotic tin isotopes and progress toward implementing a phase-based measurement technique
    (2024) Czihaly, Annabelle Isabella; Kwiatkowski, A. A.; Lefebvre, Michel
    Precision mass measurements are integral to advancing our understanding of nuclear physics, since mass is a fundamental property of nuclei. TRIUMF's Ion Trap for Atomic and Nuclear science (TITAN) facility houses two high-precision mass spectrometers designed for mass measurements of radioactive isotopes: a Multiple-Reflection Time-of-Flight Mass Spectrometer (MR-TOF MS) and a Measurement Penning Trap (MPET). This thesis presents the results of an MR-TOF MS campaign focused on the measurement of doubly magic 100Sn, during which we successfully measured tin isotopes with mass numbers 104 through 107 to a precision of δm/m ≈ 10⁻⁷. Additionally, the most precise TITAN trap, MPET, is undergoing a major upgrade aimed at achieving precisions below δm/m ≈ 10⁻¹⁰. As part of this upgrade, a new phase-based measurement technique called Phase-Imaging Ion-Cyclotron-Resonance (PI-ICR) is being implemented. To support this implementation, the Phase-Imaging Analysis Tool (PhIAT) was upgraded to assist in system tuning and to perform mass measurements with PI-ICR.
  • Item
    Calcium excess in novae: Beyond nuclear physics uncertainties
    (2024) Loria, Mallory; Herwig, Falk; Ruiz, Chris
    We examine Ca abundances in classical novae from spectroscopic observations spanning 65 years and investigate whether they are systematically high compared to those predicted by nova models. For the first time, we perform Monte Carlo simulations assessing the impact of nuclear reaction rate uncertainties on abundances predicted by multi-zone nova models. We compare these results with similar simulations using one-zone nova models. While the Ca abundances in the models are sensitive to variations of rates of the reactions 37Ar(p, γ)38K and 38K(p, γ)39Ca, the nuclear physics uncertainties of these reactions cannot account for the discrepancy between the observed and predicted Ca abundances in novae. We also investigate the impact of the 19F(p, γ)/19F(p, α) branching ratio that controls hot CNO cycle breakout on Ca production by increasing this ratio by factors of 10 and 100, finding no increase in the Ca abundance. To explain the peculiar abundances observed in novae with high Ca abundances, alternative mixing scenarios with different pre-mixed material are explored. The dust fractionation hypothesis, which suggests that the Ca overabundance could be explained by Ca being trapped in dust, is ruled out due to the simultaneous overabundance of Ar, which would not be expected to be trapped in dust. Furthermore, the overabundance of Ca has important implications for measuring 7Be in nova ejecta, as Ca lines are used to estimate 7Be abundances. If the Ca abundance is incorrectly determined, it could lead to inaccurate 7Be abundance estimates. Possible alternative explanations for the observed Ca overabundance are discussed.
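The Monte Carlo rate-variation procedure described above follows a generic pattern: sample each uncertain reaction rate from a lognormal distribution characterized by its uncertainty factor, rerun the nucleosynthesis model, and collect the distribution of predicted abundances. The sketch below illustrates that pattern only; the toy_ca_abundance function, the uncertainty factors, and the sensitivities are placeholders, not the nova models or evaluated rates used in the thesis.

```python
# Generic sketch of Monte Carlo reaction-rate variation (toy model, illustrative numbers).
import numpy as np

rng = np.random.default_rng(42)

# "Factor uncertainties" fu for the varied rates: each rate is multiplied by a
# lognormal factor with median 1 whose central 90% interval is [1/fu, fu].
rate_fu = {"37Ar(p,g)38K": 3.0, "38K(p,g)39Ca": 2.0}

def toy_ca_abundance(rate_factors):
    """Placeholder for a full (multi-zone) nova model: returns a Ca mass fraction
    with a mild, made-up dependence on the two proton-capture rates."""
    f1 = rate_factors["37Ar(p,g)38K"]
    f2 = rate_factors["38K(p,g)39Ca"]
    return 3e-4 * f1**0.3 * f2**0.15

n_samples = 1000
ca = np.empty(n_samples)
for i in range(n_samples):
    factors = {name: np.exp(rng.normal(0.0, np.log(fu) / 1.645))   # 90% interval -> sigma
               for name, fu in rate_fu.items()}
    ca[i] = toy_ca_abundance(factors)

lo, med, hi = np.percentile(ca, [5, 50, 95])
print(f"Ca mass fraction: {med:.2e} (+{hi - med:.1e} / -{med - lo:.1e}, 90% interval)")
```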
  • Item
    Searching for long-lived supersymmetric particles using displaced vertices and missing transverse energy with the ATLAS detector
    (2024) Carlson, Evan Michael; Trigger, Isabel; Kowalewski, Robert V.
    The Standard Model of particle physics has been extremely successful in its predictive power and has withstood a wide array of precision tests designed to expose any flaws in its description of fundamental particles. However, the Standard Model is unable to explain several phenomena observed in the universe, such as the nature of the dark matter which makes up more than 80% of the gravitationally interacting matter in the universe. Theories that extend the Standard Model with new fundamental particles have been postulated to address the questions left unanswered by the Standard Model. Many supersymmetric theories provide viable dark matter candidates. In order to more precisely test the Standard Model and its possible extensions, the ATLAS experiment at the Large Hadron Collider has been constructed to measure high energy proton-proton collisions. Long-lived particles (LLPs) are commonly predicted by extensions to the Standard Model. The decay of an LLP to charged particles within the ATLAS Inner Detector would produce tracks that are displaced from the interaction point, which could be reconstructed as a displaced vertex. This dissertation presents a search for displaced vertices with high invariant mass and high track multiplicity in events with significant missing transverse energy in the 2016-2018 data set collected by the ATLAS experiment. The observed number of events is consistent with the number expected from background processes. The results are interpreted in the context of a split-supersymmetry model with long-lived gluinos decaying to neutralinos and Standard Model quarks, and exclusion limits are set at 95% confidence level.
  • Item
    The feasibility study for measuring the branching fraction of B meson decays to D, eta, lepton, and its neutrino
    (2024) Gholipourverki, Sahar; Kowalewski, Robert V.
    Semileptonic B meson decays provide a direct method to measure the parameters of the Cabibbo-Kobayashi-Maskawa (CKM) matrix, which is essential for understanding quark mixing and CP violation in the Standard Model. This thesis presents an exploratory study of the branching fraction of the B meson decay mode B → D(∗)ηℓνℓ, which has never been measured. Here, ℓ represents either an electron or a muon, νℓ denotes the corresponding lepton neutrino, D(∗) refers to either a D meson or its excited state D∗, and η symbolizes the eta meson. In this analysis, to reduce the significantly higher occurrence of background events compared to signal candidates, both B mesons are reconstructed semileptonically; the tag-side B meson predominantly decays via B → D(∗)ℓνℓ. This investigation aims to contribute to a more comprehensive understanding of B meson semileptonic decays by attempting to bridge the gap between the inclusive semileptonic rate and the sum of all measured exclusive semileptonic decays, and to improve the understanding of backgrounds in decays such as B → D(∗)τντ that are used to test lepton flavour universality. This work shows that, with 470.78 fb−1 of data and an assumed MC branching fraction BF(B → Dηℓνℓ) + BF(B → D∗ηℓνℓ) = 0.008, where ℓ can be either an electron or a muon, Belle II may be able to measure the sum of the branching fractions for the B → Dηℓνℓ and B → D∗ηℓνℓ modes with more than 5σ significance.
  • Item
    The search for young planets with JWST/NIRCam
    (2024) Mullin, Camryn; Dong, Ruobing
    As part of the James Webb Space Telescope (JWST) Guaranteed Time Observation (GTO) program “Direct Imaging of YSOs” (program ID 1179), I used JWST NIRCam’s direct imaging mode with filters F187N, F200W, F405N, and F410M to perform high contrast observations of the circumstellar structures surrounding the protostar HL Tau. The data reveal the known stellar envelope, outflow cavity, and streamers, but do not detect any companion candidates. I detect scattered light from an in-flowing spiral streamer previously detected in HCO+ by the Atacama Large Millimeter/submillimeter Array, and part of the structure connected to the c-shaped outflow cavity. For detection limits in planet mass, I use BEX evolutionary tracks when Mp < 2MJ and AMES-COND evolutionary tracks otherwise, assuming a planet age of 1 Myr (youngest available age). Inside the disk region, due to extended envelope emission, the point-source sensitivities are ∼5 mJy (37 MJ) at 40 au in F187N, and ∼0.37 mJy (5.2 MJ) at 140 au in F405N. Outside the disk region, the deepest limits I can reach are ∼0.01 mJy (0.75 MJ) at a projected separation of ∼525 au.
  • Item
    Theory of charge and flux noise in superconducting wires
    (2024) Nava Aquino, José Alberto; de Sousa, Rogério
    Superconducting qubits are at the forefront of efforts to develop scalable quantum computers due to their potential to perform complex computations beyond the capabilities of classical systems. However, maintaining the quantum coherence of these qubits remains a significant challenge, primarily due to various noise sources such as flux noise, dielectric loss, and quasiparticle poisoning. This dissertation presents a detailed theoretical investigation into two noise mechanisms affecting superconducting qubits: flux noise from spin impurities and charge/flux noise from non-equilibrium superconducting quasiparticle distributions. The first part of the research focuses on developing a general theoretical framework to calculate flux noise arising from spin impurities. This framework accounts for spin diffusion and spin-lattice relaxation, incorporating a discrete diffusion model to handle confinement effects and inhomogeneities. Analytical and numerical results show that the spin relaxation model aligns with experimental observations in aluminum devices, while the spin diffusion model better matches experiments in niobium devices. The second part of the thesis proposes a theory addressing charge and flux noise due to non-equilibrium superconducting quasiparticle distributions within superconducting wires. This theory highlights the significant impact of ohmic loss generated by these quasiparticles, revealing their contribution to charge noise at intermediate frequencies and a nearly white flux noise background. Comparative analysis with experimental data provides some validation for the theoretical models and gives insights into the temperature-dependent behavior of flux noise and the distinctive noise characteristics in aluminum and niobium devices. The findings highlight the necessity of addressing wire-resident quasiparticles.
  • Item
    Development and demonstration of an on-detector technique to limit the impact of atmospheric emission lines on near-infrared spectra
    (2024) Grosson, Theodore; McConnachie, Alan; Venn, Kimberley Ann
    Observations in the near-infrared using large ground-based telescopes are limited by bright atmospheric emission lines, particularly the OH Meinel bands. These lines can saturate a spectrograph on the order of minutes, resulting in the loss of information at wavelengths containing the lines. OH lines also vary on the scale of minutes, so observations longer than this timescale cannot capture this variability. Both of these properties necessitate the use of short exposure times in order to perform accurate sky subtraction. To observe faint science targets, several short exposures must be coadded instead of taking a single long exposure. Because each exposure includes its own independent read noise, this results in an increase in the total noise of the coadded image. In this thesis I present a new method to achieve longer exposure times in near-infrared spectra without the saturation of these lines, while still preserving information about their variability so that sky subtraction can still be applied. This is accomplished by periodically resetting the pixels on an H2RG detector that contain bright lines while the rest of the detector continues integrating. This method is demonstrated on the McKellar Spectrograph, where we reset the emission lines from an arc lamp while still recording their flux. I show that, when comparing the resulting spectrum and its signal-to-noise to a more conventional observing mode, the only measurable systematic difference is a result of our imperfect setup and can be removed with a standard nonlinearity correction. This method does not have the drawbacks of other measures to mitigate the effects of OH lines, such as short exposure times or completely removing the information at the relevant wavelengths, and as such shows promise for potential future use at observatories. We advocate demonstrating this method on sky spectra at existing high-quality facilities in order to test its feasibility for use in sky subtraction schemes for premier modern spectrographs.
  • Item
    Deep learning-enabled studies of galaxy mergers and supermassive black hole evolution
    (2024) Bickley, Robert W.; Ellison, Sara L.
    When the smooth evolution of an isolated galaxy is punctuated by a merger event with a companion of similar mass, theory and observations indicate that a metamorphosis will begin. Dramatic changes in the morphologies and kinematics of merging galaxies are thought to funnel gas towards their centres, leading to elevated star formation rates and supermassive black hole (SMBH) accretion rates. The transformation brought about by mergers appears to be the missing link between the two main types of galaxies – blue star-forming spiral galaxies, and red quiescent elliptical galaxies – observed in the Universe. Simulations predict that galaxies experience the most rapid changes immediately after coalescence (when the merging companions are no longer distinct objects), but observational samples of post-merger galaxies predating this work are generally incomplete (small, and possibly not representative of the post-merger class) or contaminated. In this work, I present the methodological details of an updated post-merger identification effort using a simulation-trained convolutional neural network (CNN, a type of automated machine vision tool) to flag galaxies that are very likely to be post-mergers. I present a proof-of-concept feasibility study using mock observations of simulated galaxies (Chapter 2) before applying the CNN to classify real images of galaxies in the low-redshift Universe (Chapter 3). The CNN classification effort is followed by a manual quality control exercise, which finally leads to the identification of large (with some hundreds of galaxies each), pure, and defensible post-merger samples from two different imaging surveys: the Canada France Imaging Survey (CFIS) and the Dark Energy Camera Legacy Survey (DECaLS). With the post-merger samples in hand, I also present the demographics and evolutionary characteristics of post-merger galaxies using multiple astronomical surveys for multi-wavelength characterization. I find that star-forming post-mergers are elevated by a factor of ∼ 2 in their star formation rates relative to star-forming non-merger galaxies (Chapter 4). I also find that active galactic nuclei (AGN; the observable phenomena associated with SMBH accretion) are more common by a factor of 2–4 in post-mergers compared to non-mergers, and that those AGN appear to be about twice as luminous as AGN in non-mergers (Chapter 5). Finally, I use new X-ray observations from the extended ROentgen Survey with an Imaging Telescope Array (eROSITA) space mission to verify that AGN are unusually common in post-mergers, and to characterize the strength of the connection between mergers, SMBHs, and AGN obscuration (Chapter 6). In each result, I also compare the characteristics of the new post-merger samples to statistically identified groups of galaxy pairs that are presumed pre-mergers. Close galaxy pairs are somewhat more likely to experience elevated star formation, SMBH accretion, and obscuration than their isolated peers, but the results for galaxy pairs are generally weaker than for post-mergers. Together, the results of my studies indicate that the amplitude of transformation seen in post-mergers is unique in the low-redshift Universe.
Looking forward, I project the viability of future astronomical surveys for post-merger identification, and find something rather unexpected: while next-generation observatories will offer an opportunity for marginal improvement in identifying the remnants of major galaxy mergers, imaging that is already available (CFIS, DECaLS) is well suited to the task (Chapter 7). I therefore posit that the present generation of astronomers studying galaxy mergers can use forthcoming surveys like Euclid and the Legacy Survey of Space and Time (LSST) to answer more difficult and granular questions about the impact of mergers on galaxy evolution.
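For readers unfamiliar with the classification step, the sketch below shows the general shape of a simulation-trained binary CNN classifier for galaxy cutouts, written in PyTorch. The architecture, image size, and training loop are illustrative assumptions only and are not the network or pipeline used in the thesis.

```python
# Toy binary CNN classifier for post-merger vs. non-merger galaxy cutouts (illustrative only).
import torch
import torch.nn as nn

class PostMergerCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 4 * 4, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.classifier(self.features(x))   # raw logit

# One training step on a batch of (mock) survey-realistic cutouts; labels: 1 = post-merger.
model = PostMergerCNN()
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(8, 1, 64, 64)            # stand-in for simulated, noise-added images
labels = torch.randint(0, 2, (8, 1)).float()
opt.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
opt.step()
```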
  • Item
    Differential cross-section measurements of WbWb production in the lepton+jets channel at √s = 13 TeV with the ATLAS detector
    (2024) Chen, Bohan; Kowalewski, Robert V.
    The top quark provides a unique opportunity to test the Standard Model of particle physics. Its heavy mass causes it to decay before it can hadronize, allowing the properties of a bare quark to be studied. This thesis presents differential cross-section measurements of WbWb production, which receives contributions from several top quark processes. The measurement is conducted in the previously unmeasured lepton+jets (semileptonic) channel, where one W boson decays leptonically and the other decays hadronically. The analysis is motivated by the need to improve the modeling of the interference effects between singly-resonant and doubly-resonant top quark processes, which have been the source of significant modeling uncertainties in other top-sector analyses. The data and simulation used in this analysis correspond to the full ATLAS Run 2 dataset, collected from 2015 to 2018, with an integrated luminosity of 140 fb−1 at a center-of-mass energy of √s = 13 TeV. The measured cross-sections are compared to predictions using different combinations of MC generators. Together with a complementary analysis in the dileptonic channel, this thesis provides a means to constrain modeling uncertainties towards the development of an all-inclusive bb4l generator. This generator could model all top quark processes while accurately accounting for all interference effects.
  • Item
    Dosimetry and radiobiology of ultrahigh dose-rate radiotherapy delivered with low-energy x-rays and very high-energy electrons
    (2024) Hart, Alexander; Bazalova-Carter, Magdalena
    Radiotherapy is a powerful tool in oncology, from curative treatments to pain relief in palliative care. However, the efficacy of radiotherapy is limited by side effects caused by damage to healthy tissues. Ultrahigh dose-rate radiotherapy (UHDR-RT) has emerged as a possible method of reducing damage to normal tissues while maintaining the ability to control the progression of cancer. UHDR treatments are delivered three orders of magnitude faster than conventional dose-rate radiotherapy (CDR-RT). To reach the dose rates associated with UHDR-RT, novel radiation sources have been developed, spanning a wide range of radiation types, energies, and time structures of delivery. These include kilovoltage x-rays produced by a shutter-controlled x-ray tube, and very high energy electrons (VHEE) accelerated to 200 MeV at high energy physics laboratories. Testing the capability of these sources requires specialized dosimeters and radiobiological models which are not commonly used in traditional radiotherapy. In this work, plastic scintillation detectors (PSDs) of various compositions were used to measure dose from both 120 kVp x-rays and 200 MeV electrons. Experiments with the shutter-controlled x-ray tube demonstrated that lead-doped polystyrene PSDs can be used as accurate dosimeters for dose rates of up to 40.1 Gy/s and for pulse widths of 1-100 ms. At the CERN linear electron accelerator for research (CLEAR), the ability of PSDs to respond linearly with dose and independently of dose rate with 200 MeV electrons was assessed, as well as the radiation hardness of the probes. Polystyrene-based PSDs maintained linear light output with dose up to 125.2 Gy per pulse. After receiving tens of kGy within one day, PSDs showed reduced light output. However, they exhibited dose-dependent recovery and maintained linearity of output with dose per pulse. To explore the radiobiological effects of the same radiation sources, Drosophila melanogaster were irradiated as larvae and monitored for effects on their development. It was shown that UHDR 120 kVp x-rays are capable of reducing normal tissue damage in flies compared to CDR treatments. At 22 Gy, the UHDR-irradiated flies had a longer median lifespan, while at 24 Gy they survived to adulthood at higher rates than the corresponding CDR groups. Irradiations of D. melanogaster with 200 MeV and 9-20 MeV electrons over a range of doses from 10 to 45 Gy at both UHDR and CDR were also performed. The dose response curves allowed for an in vivo determination of the relative biological effectiveness (RBE) of VHEE beams, calculated to be between 0.97 and 1.01. This work establishes that PSDs and D. melanogaster are useful platforms for characterizing the physical and radiobiological properties of novel UHDR-RT sources.
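The RBE quoted above comes from comparing dose-response curves at a matched endpoint. As a hedged illustration of that calculation (not the analysis code), the sketch below fits logistic survival-versus-dose curves to synthetic data for a reference beam and a VHEE beam and takes the ratio of the doses producing the same effect; all numbers and names are invented.

```python
# Toy RBE calculation: fit dose-response curves for two beams and compare iso-effect doses.
import numpy as np
from scipy.optimize import curve_fit

def logistic_survival(dose, d50, slope):
    """Fraction surviving to adulthood as a function of dose (toy model)."""
    return 1.0 / (1.0 + np.exp(slope * (dose - d50)))

doses = np.array([10, 15, 20, 25, 30, 35, 40, 45], dtype=float)          # Gy
surv_ref = np.array([0.95, 0.90, 0.75, 0.50, 0.30, 0.15, 0.05, 0.02])    # reference beam (synthetic)
surv_vhee = np.array([0.96, 0.91, 0.77, 0.52, 0.31, 0.16, 0.06, 0.02])   # VHEE beam (synthetic)

p_ref, _ = curve_fit(logistic_survival, doses, surv_ref, p0=[25.0, 0.3])
p_vhee, _ = curve_fit(logistic_survival, doses, surv_vhee, p0=[25.0, 0.3])

# RBE = dose of reference beam / dose of test beam producing the same endpoint (here, D50)
rbe = p_ref[0] / p_vhee[0]
print(f"RBE(VHEE vs reference) ≈ {rbe:.2f}")
```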
  • Item
    Accelerating fluid dynamics problems in planet formation with machine learning
    (2024) Mao, Shunyuan; Dong, Ruobing
    I develop two machine learning tools for solving forward and inverse problems in protoplanetary disks. The first tool, Protoplanetary Disk Operator Network (PPDONet), predicts the solution of disk-planet interactions in real time. PPDONet is based on Deep Operator Networks (DeepONets), a class of neural networks capable of learning non-linear operators to represent deterministic and stochastic differential equations. It maps three scalar parameters in a disk-planet system (the Shakura & Sunyaev viscosity α, the disk aspect ratio h0, and the planet-star mass ratio q) to steady-state solutions of the disk surface density, radial velocity, and azimuthal velocity. Comprehensive testing demonstrates the accuracy of PPDONet, with predictions for one system made in less than a second on a laptop. A public implementation of PPDONet is available at https://github.com/smao-astro/PPDONet. The second tool, Disk2Planet, infers key parameters in disk-planet systems from observed disk structures. It processes two-dimensional density and velocity maps to output the Shakura-Sunyaev viscosity, disk aspect ratio, planet-star mass ratio, and the planet's location. Disk2Planet integrates the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an evolutionary algorithm for complex optimization problems, with PPDONet. Fully automated, Disk2Planet retrieves parameters within three minutes on an Nvidia A100 GPU, achieving accuracies ranging from the level of a few thousandths to a few percent. It effectively handles data with missing parts and unknown levels of noise. Together, these tools advance the field of planet formation by providing rapid, accurate solutions and parameter inferences for disk-planet systems, enhancing our understanding of the underlying physics of protoplanetary disks.
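The inverse step (Disk2Planet) couples an evolutionary optimizer to a fast forward surrogate. Below is a minimal sketch of that pattern using the cma package: forward_model here is a made-up analytic stand-in for a trained surrogate, and the parameter names, ranges, and misfit are illustrative assumptions rather than the released code.

```python
# Sketch of surrogate-based parameter inference with CMA-ES (illustrative only).
import numpy as np
import cma   # pip install cma

def forward_model(params):
    """Made-up stand-in for a fast surrogate (e.g. a trained neural operator) mapping
    (log10 alpha, aspect ratio h0, log10 q, gap radius) to a 2D surface-density map."""
    log_alpha, h0, log_q, r_gap = params
    x = np.linspace(-2.0, 2.0, 64)
    r = np.hypot(*np.meshgrid(x, x))
    depth = 0.8 * np.exp(-((r - r_gap) / (5.0 * abs(h0) + 1e-3)) ** 2) * 10 ** (log_q + 3)
    return np.clip(1.0 - depth, 0.0, None) * r ** -0.5 * 10 ** (0.1 * log_alpha)

observed = forward_model([-3.0, 0.05, -3.3, 1.0])        # synthetic "observation"

def misfit(params):
    """Pixel-wise mean squared difference between the surrogate prediction and the data."""
    return float(np.mean((forward_model(params) - observed) ** 2))

es = cma.CMAEvolutionStrategy([-2.5, 0.07, -3.0, 0.8], 0.3)   # initial guess, initial step size
es.optimize(misfit, iterations=50)
print("best-fit parameters:", es.result.xbest)
```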
  • Item
    An examination of the use of the chemotherapy drug doxorubicin, gold nanoparticles, and radiation in combined cancer therapy
    (2024) Eaton, Sarah; Chithrani, Devika
    The chemotherapy drug doxorubicin (DOX) is a widespread and effective treatment for many different types of cancer. However, it is known for causing significant and dangerous side effects due to high cardiotoxicity. Gold nanoparticles (GNPs) are a promising field of nanomedicine due to their stability, customizability, and radiosensitization properties as demonstrated using in vitro and mice models. They accumulate preferentially in tumours due to the enhanced permeability and retention effect. The combination of GNP-mediated radiotherapy and DOX has the potential to deliver highly targeted and effective therapeutics while sparing surrounding healthy tissue. This work used GNPs conjugated with PEG and RGD, DOX, and radiotherapy in combination to investigate possible synergistic cancer therapeutics. MDA-MB-231 cells were dosed for 48 hours with GNPs at a clinically relevant concentration of 7.5 μg/mL. DOX was dosed at the measured IC50 concentration of 144.4 nM for a 48 hour exposure. Radiation doses of 2 Gy and 5 Gy were used, as 2 Gy is commonly used for fractionated radiotherapy and recent clinical trials have also shown 5 Gy to be an effective fractionated radiation dose. A cytotoxicity assay was conducted to determine the IC50 of DOX, which was used as the dosing concentration for all other assays. Live cell images were taken to demonstrate the internalization of DOX and GNPs in the cells. To quantify whether DOX affected the uptake of GNPs into the cells, a cellular uptake study was conducted. As previous research has indicated that DOX causes cell cycle arrest, a cell cycle assay was conducted. To assess the cytotoxicity and radiosensitization properties of GNPs and DOX, a cellular proliferation study and a clonogenic assay were conducted. Additionally, a DNA double strand break assay was conducted to assess the amount of DNA damage caused. The cellular uptake study revealed that DOX caused an increase in GNP uptake, with (1.27±0.16)×10^6 GNPs per cell when treated with DOX, and (0.76±0.05)×10^6 GNPs per cell when untreated. DOX showed evidence of radiosensitization in the proliferation assay, with the combination of DOX and radiation causing a (54±2)% reduction in cell growth when 2 Gy was administered, and a (69±8)% reduction in cell growth when 5 Gy was administered. However, this effect was not synergistic. In the other assays conducted, DOX caused cell cycle arrest, extensive DNA damage, and no clonogenic growth. It was concluded that DOX was inducing senescence at the given dose. GNPs showed some radiosensitization in the proliferation assay at 2 Gy, with a (24±2)% reduction in growth after 3 days in the 2 Gy GNP sample compared to a (15±2)% reduction in growth in the 2 Gy control sample. No other significant differences in growth due to GNPs were seen in the proliferation assay. The clonogenic assay showed that 2 Gy radiation caused a (67±5)% decrease and 5 Gy caused a (97.9±0.6)% decrease in clonogenic survival of cells treated with radiation only when compared to the unirradiated control. The GNP-incubated sample demonstrated some radiosensitivity in the clonogenic assay, as it had a (78±3)% lower surviving fraction when irradiated with 2 Gy than the unirradiated control. The GNPs also showed toxicity in the unirradiated sample, with a (30±11)% lower surviving fraction than the control in the clonogenic assay.
There was no significant difference between the GNP and control cells in the clonogenic assay when irradiated with 5 Gy. The DNA double strand break assay showed that 2 Gy radiation caused an increase in DNA damage foci from 2.0±0.2 to 5.1±0.5 foci per cell. No significant difference in foci was seen between the control and the GNP incubated cells. While the results from this work did not demonstrate a conclusive benefit from the combined therapy of doxorubicin, GNPs, and radiation, the system is still of interest. Future experiments could be performed using a reduced doxorubicin concentration such as the IC20, to reduce the toxicity while still causing an effect. If a synergistic effect can be observed, it could be exploited to significantly reduce normal tissue toxicity in cancer patients while still delivering a lethal dose of chemotherapy and radiotherapy to the tumour.
  • Item
    Expedition unknown: Characterizing and modelling perturbed debris disks in search for elusive planets
    (2024) Crotts, Katie; Matthews, Brenda C.; Dong, Ruobing
    Debris disks, which are defined as optically thin, dusty disks around main sequence stars, are intimately connected with the planets in their systems. Not only does the mere existence of a debris disk suggest the presence of planets, which efficiently stir the orbits of planetesimals and drive collisional evolution, but planets can also readily shape the morphology of their disk. To better understand planet-disk interactions, one crucial step is to characterize the variety of morphologies present in currently resolved disks. Further studies can then be done to understand how these disk morphologies are related to known or unknown planets. In my thesis, I conducted a uniform, empirical analysis of 23 debris disks imaged with the Gemini Planet Imager (GPI) in polarized intensity. For this study, I characterized each disk through multi-wavelength, near-IR data to identify any asymmetries present. I find that the majority of disks (19/23) present a significant asymmetry in either geometry, surface brightness, disk color, or a combination of the three. These findings suggest that perturbations in our sample, as seen in scattered light, are common. Some of these perturbations are consistent with planet-disk interactions, including surface brightness asymmetries, eccentric disks, and warps. Additionally, I identified several possible trends between disk properties and stellar properties that may give further insight into debris disk evolution. This includes a trend between disk color and stellar temperature, and trends between the disk vertical aspect ratio and stellar temperature in tandem with the disk radius. Within the GPI disk sample, I identify one of the most asymmetric disks, HD 111520. In another empirical analysis, I take a closer look at the HD 111520 debris disk to better understand its complex morphology. Using both polarized and total intensity multi-wavelength GPI observations, alongside observations taken with the Hubble Space Telescope (HST), I confirm that the disk hosts a variety of asymmetrical features and structures. This includes the strong 2 to 1 brightness asymmetry observed in previous studies, as well as a significant disk color asymmetry, a distinct 4 degree warp from the disk midplane past ∼180 au, and a bifurcation or “fork”-like structure on the NW side. While the color asymmetry and extreme brightness asymmetry suggest that the disk may have undergone a recent giant collision, the warp and fork structures strongly suggest the presence of an unseen planet. Once these complex disk structures/features are identified, the disk morphology can effectively be used to probe unseen planets. In the final part of my thesis, I used the N-body code REBOUND to simulate the features of the highly asymmetrical disk around HD 111520 via planet-disk interactions. I find that a planet with a mass of ~1 Mjup, on an eccentric and inclined orbit outside of the warp location, can create a similar radial asymmetry, warp, and “fork”-like structure in the disk as seen in observations. This work demonstrates how disk morphologies can be used to constrain the mass and orbit of a hidden planet in a perturbed debris disk system.
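For context, the sketch below shows the general shape of a REBOUND experiment of the kind described above: an eccentric, inclined, roughly Jupiter-mass perturber acting on massless planetesimals. The stellar mass, orbital elements, particle numbers, and integration time are illustrative assumptions, not the values fitted in the thesis.

```python
# Sketch: planet-debris-disk interaction with REBOUND (illustrative parameters only).
import numpy as np
import rebound

sim = rebound.Simulation()
sim.units = ("yr", "AU", "Msun")
sim.add(m=1.3)                                              # central star (assumed mass)
sim.add(m=1e-3, a=100.0, e=0.45, inc=np.radians(10.0))      # ~1 MJup planet, eccentric and inclined

# Massless planetesimals seeded in an initially narrow, flat, low-eccentricity ring
rng = np.random.default_rng(1)
for a in rng.uniform(60.0, 90.0, 500):
    sim.add(a=a, e=rng.uniform(0.0, 0.05), inc=rng.uniform(0.0, 0.02),
            Omega=rng.uniform(0.0, 2.0 * np.pi), f=rng.uniform(0.0, 2.0 * np.pi))

sim.move_to_com()
sim.integrator = "whfast"
sim.dt = 1.0                      # yr; a small fraction of the innermost orbital period
sim.integrate(1e5)                # let secular forcing start sculpting the ring

# Extract the perturbed vertical structure, e.g. to look for a warp or "fork"
planetesimals = [sim.particles[i] for i in range(2, sim.N)]
radius = np.array([np.hypot(p.x, p.y) for p in planetesimals])
height = np.array([p.z for p in planetesimals])
```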
  • Item
    Evaluation of an acoustic Doppler profiler with application to stratified flow in a fjord
    (1985) Zedel, Leonard James
    In this thesis, the incoherent Doppler profiling technique for remote current measurement in the ocean is evaluated. The fundamentals of Doppler profiling are analyzed in detail and the practical application of the technique is discussed. The single beam Institute of Ocean Sciences (IOS) prototype Doppler profiler is investigated, both with theoretical models of its signal processing circuit and with laboratory and field tests of its operational characteristics. Some time series analysis techniques useful in evaluating the Doppler signal are discussed. The performance of three mean frequency estimators is compared: it is found that the complex covariance method and the scalar phase change method produce accurate estimates, but the vector phase change method yields standard deviations 1.4 times higher than the other methods. The standard deviation of the complex covariance method is shown to depend on the choice of time lag. In agreement with a previous theoretical study (Miller and Rochwarger 1972), it is found that the use of small time lags does not provide the smallest standard deviations. Several data averaging schemes are compared, and, based on the results of this comparison, an acceptable scheme for use in coastal waters is selected. As an example of the application of the Doppler profiler, tidal flows occurring over the Observatory Inlet sill are investigated. The observations demonstrate the detail with which such a flow can be studied using acoustic remote sensing techniques. The observations are compared to a time-dependent, layered hydraulic model of flow over a sill. The agreement between flow simulated by the model and the Doppler observations indicates that the hydraulic analysis of such a flow accounts for many of the observed characteristics. This comparison serves to illustrate the value of the Doppler measurement approach in highly variable flows.
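The complex covariance (pulse-pair) estimator compared above can be written in a few lines; the sketch below is a generic illustration on a synthetic complex Doppler signal, not the IOS profiler's processing chain. It estimates the mean Doppler frequency from the phase of the lag autocovariance, and the larger-lag call echoes the finding that the smallest time lag does not give the smallest variance.

```python
# Pulse-pair (complex covariance) mean-frequency estimate on a synthetic Doppler signal.
import numpy as np

def pulse_pair_frequency(z, dt, lag=1):
    """Mean frequency from the phase of the complex autocovariance at the given lag."""
    acov = np.mean(z[lag:] * np.conj(z[:-lag]))
    return np.angle(acov) / (2.0 * np.pi * lag * dt)

rng = np.random.default_rng(0)
dt, f_true, n = 5e-4, 150.0, 4096          # 2 kHz sampling, 150 Hz mean Doppler shift
t = np.arange(n) * dt
z = np.exp(2j * np.pi * f_true * t) + 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

print(f"lag-1 estimate: {pulse_pair_frequency(z, dt):.1f} Hz")
print(f"lag-4 estimate: {pulse_pair_frequency(z, dt, lag=4):.1f} Hz")   # larger lag: lower variance,
                                                                        # until phase wrapping limits it
```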
  • Item
    A study of some new solutions of Einstein and Einstein-Maxwell equations
    (1979) Zannias, Thomas Themistokleous
    In this thesis we deal with static axially symmetric gravitational fields in vacuum and with static axially symmetric electrovacuum. The formalism of Weyl-Levi-Civita has been employed for obtaining solutions of Einstein's and Einstein-Maxwell equations. A solution representing the exterior field of a Curzon particle in combination with a general line mass is obtained. Through a suitable formalism we generate the charged metric representing a charged Curzon particle and a charged general line mass. We also examine some properties of the Bach and Weyl metric. Further we derive solutions of Einstein field equations representing point sources exhibiting multipole structure. Special cases of balance between multipole point sources in the general theory of relativity are also examined.
  • Item
    Structure beneath Queen Charlotte Sound from seismic refraction and gravity interpretations
    (1990) Yuan, Tianson
    The Queen Charlotte Islands region is located on the Canadian western margin near the triple junction between the Juan de Fuca ridge system, the Cascadia subduction zone, and the Queen Charlotte transform fault. The evolution and interactions of the continental and oceanic plates have played an important role in the structural development of the region. A combined multichannel seismic reflection and refraction survey was carried out in July 1988 to study the Tertiary sedimentary basin architecture and formation, and to define the crustal structure and associated plate interactions in the region. Simultaneously with the collection of the multichannel reflection data, refractions and wide-angle reflections from airgun array shots were recorded on single channel seismographs distributed on land around Hecate Strait and Queen Charlotte Sound. For this thesis a subset of the resulting data set was chosen to study the crustal structure in Queen Charlotte Sound and the adjacent subduction zone. Two-dimensional raytracing and synthetic seismogram modelling produced a well constrained velocity structure model across Queen Charlotte Sound. Moho depth is modelled at 27 km off southern Moresby Island but only 23 km north of Vancouver Island. Excluding the approximately 3 km of the Tertiary sediments, the crust in the latter area is less than 20 km thick, indicating substantial crustal thinning in Queen Charlotte Sound. Such thinning of the crust suggests an extensional mechanism for the origin of the sedimentary basin. On a margin-parallel line, in the southern portion of Queen Charlotte Sound a mid-crustal event with apparent velocity of more than 7.2 km/s was modelled as a high velocity sliver at a depth of about 17 km. On an unreversed refraction line normal to the continental margin, an upper crust layer with velocity more than 7 km/s also was interpreted at depths above about 13 km. The interpretation of these high velocity layers is uncertain, but they could represent high velocity material imbedded in the crust from earlier subduction episodes or mafic underplating associated with the Masset volcanics. Refraction velocities of both sediment and upper crust layers are lower in the southern part of Queen Charlotte Sound than in the region near Moresby Island. Well velocity logs and multichannel reflection velocity analyses indicate a similar velocity contrast. Gravity models along the reflection line require lower sediment and upper crust densities, consistent with the crustal thinning implied by refraction data. The low velocity/low density sediments correspond to high porosity marine sediments found in wells in the southern part of the region, and contrast with lower porosity non-marine sediments in wells further north. The contrast in upper crust velocity and density from north to south can be explained if the Mesozoic or Tertiary volcanics that appear to floor the basin are underlain by thick and dense volcanic sequences in the north, and by a predominantly sedimentary sequence in the south.