"For the students by the students"
Welcome to the 1st annual student symposium in Physics at the Niels Bohr Institute. This is a new initiative to celebrate and showcase the great work done by the master's students at NBI across its many fields.
NB: REGISTRATION DEADLINE: Sunday the 20th of March. There will be free sandwiches, coffee and snacks during the program, and cheap beer after 4 pm for registered participants.
There are sandwiches and drinks for all participants.
When a patient undergoes radiotherapy, a treatment plan is made. This plan specifies the energies used in the treatment, the angles from which the radiation is delivered, and the number of times the patient has to receive treatment, i.e. the number of fractions. In conventional radiation oncology the plan stays unchanged throughout all fractions, without regard for minor long- or short-term changes in the patient's physiology.
In my thesis I have explored a new option for treatment planning for vulva cancer patients, namely adaptive planning.
In adaptive planning we acquire a cone-beam CT (CBCT) in a combined CT-linac every time the patient receives a fraction. From that CBCT it is possible to see changes in the physiology around the target, the tumor, and then make minor changes to the treatment plan. These changes take 15 to 30 minutes to make. This is done primarily by a medical doctor specializing in oncology, but the plan needs approval from a medical physicist. The point of these small changes to the original plan is to optimize the coverage of the target while ensuring that the organs-at-risk (OARs), all non-tumor organs in the irradiated area, receive as low a dose as possible. Changes in the physiology range from deformation of organs, changes in tumor size, and shifts in tumor or OAR location, all the way to urine in the bladder and air in the intestines. But since the adaptation takes 15 to 30 minutes, there might be further minor changes to the physiology during the adaptation process. To check whether the adaptation is still valid when the radiation is delivered, a second CBCT is taken just before delivery, after the adaptation process.
This project uses the second CBCT to simulate an entirely new adaptation, made with the image taken just before delivery of the treatment. This simulated plan is compared with the clinically used plan to see whether the adaptation is still valid.
So far, results show that the adaptations used on 10 patients are valid and still usable after 15 to 30 minutes. The next step is dose accumulation across all fractions, checking not only each fraction individually but all fractions together, to make sure the patient receives a sufficient dose in the tumor while the OARs receive as low a dose as possible.
The scope of the project is to have a direct impact on the treatment methods used at Rigshospitalet, in order to minimize side effects and maximize treatment success for vulva cancer patients.
Greenland has not always been glaciated, and the ice sheet was likely at a minimum – if it existed at all – about one million years ago (Yau et al., 2016). Here, the aim is to simulate the last million years of the Greenland ice sheet under the assumption that it incepted on bare bedrock. This is accomplished by melting the ice sheet off the present-day topography, which allows us to isostatically rebound the bedrock and thereby approximate the conditions during glacial inception (Solgaard et al., 2013). Then, we model the build-up of a potential ice sheet, varying different parameters, using the Parallel Ice Sheet Model (PISM). Finally, we compare the model results for the inception and dynamics of the ice sheet over the last million years with estimates of extent and timing from several geologic surveys.
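To illustrate the rebound step, here is a minimal sketch under a local (Airy-type) isostasy assumption; the function and array names are hypothetical and this is not the actual PISM workflow:

```python
import numpy as np

# Densities (kg/m^3); standard values for ice and upper mantle
RHO_ICE, RHO_MANTLE = 910.0, 3300.0

def rebound_bedrock(bed_present, ice_thickness):
    """Approximate ice-free bedrock elevation via local (Airy) isostasy:
    removing an ice load H lets the bedrock rise by (rho_ice/rho_mantle)*H."""
    return bed_present + (RHO_ICE / RHO_MANTLE) * ice_thickness

# Toy example: 3000 m of ice over bedrock at sea level rebounds ~827 m
bed = np.zeros((2, 2))
thk = np.full((2, 2), 3000.0)
print(rebound_bedrock(bed, thk))
```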
An important problem that molecular astrophysics faces is to understand where, when and how complex organic and prebiotic molecules emerge. To form the basic building blocks of life -- for example amino acids or sugars -- molecules have to go through many complex chemical reactions between many different species. The origin of these biomolecular precursors is an ongoing research field; however, observations show that they already arise in star-forming regions. Such molecules, including methanol, methyl formate, formamide and other complex organic molecules, form through gas- and solid-state chemistry. For instance, methanol is formed from carbon monoxide (CO) on grain surfaces through hydrogenation steps. Keeping CO in the solid state to form more complex species, however, requires a cold environment (less than ${\sim}20$ K). Therefore, temperature and other physical parameters might have an impact on the chemistry in star-forming regions. Over the last years a picture has emerged where young stars accrete gas and dust in a highly episodic manner, with, e.g., strong bursts of accretion related to the formation of disks and binaries. This may strongly affect the chemistry, as the luminosity of the protostar, and thus the temperature in the envelope and disk surrounding the young star, will vary significantly compared to what one would expect from simple classical infall models. In this project we characterise the chemical effects of a changing environment around a protostar. We make simple simulations where we couple our chemical network to an underlying physical model of the environment. We explore the significance of, for example, the length, frequency and magnitude of bursts and the density of the envelope for the formation of chemically interesting molecules. We find that many species are affected more than the uncertainty of the network and thus could leave a mark on observations. Comparing our results to observations of various species across many sources, one can draw inferences about the history of different protostars.
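As an illustration of the coupling, here is a toy sketch (not our actual chemical network) in which a single CO-to-methanol hydrogenation step is switched off whenever a luminosity burst heats the grains above ${\sim}20$ K; all rates and burst parameters are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

def luminosity_burst(t, period=1e4, width=100.0):
    """Toy accretion-burst profile: 1 during a burst, 0 otherwise (t in years)."""
    return 1.0 if (t % period) < width else 0.0

def rhs(t, y):
    """Toy network: CO ice hydrogenates to CH3OH only while the envelope is
    cold; a burst heats the grains above ~20 K and halts the step."""
    co_ice, ch3oh = y
    T_dust = 15.0 + 25.0 * luminosity_burst(t)   # K; exceeds 20 K during bursts
    k_hydro = 1e-4 if T_dust < 20.0 else 0.0     # yr^-1, illustrative rate
    return [-k_hydro * co_ice, k_hydro * co_ice]

sol = solve_ivp(rhs, (0.0, 1e5), [1.0, 0.0], max_step=10.0)
print("final CH3OH fraction:", sol.y[1, -1])
```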
Coffee and snacks
During the last interglacial (130,000-110,000 years ago), the climate in Greenland was warmer than today. Modelling has shown a large span of ice sheet mass losses in Greenland, and it is unclear how far back the ice sheet retreated in the North. At present, Eemian ice is widespread along the margin in Northern Greenland, suggesting that the Greenland mass loss was smaller in the North than inferred from models. Using the ice flow modelling tool PISM, I aim to map the present extent of Eemian ice and compare it with the findings of Eemian ice within the present-day ice sheet, as well as to determine how large an impact ice in Canada has had on the ice extent. This will help to constrain the Eemian mass loss in Northern Greenland, and thereby to predict its future evolution and contribution to sea level rise in a warmer climate.
Although we do not yet have an accepted theory of quantum gravity, we can predict some of its features. One such prediction is that space-time fluctuates at tiny distances, perhaps even producing microscopic, short-lived "virtual" black holes. These effects are difficult to probe experimentally, because they are only expected to be large at energies and distances approaching the Planck scale. However, a promising area where sensitivity to quantum gravity signals might be achieved is neutrino oscillations. This is due to the long travel distances of neutrinos, over which tiny perturbations to their propagation might accumulate to a measurable signal by the time they reach a detector. Specifically, fluctuations of space-time and interactions with microscopic black holes lead to loss of coherence and damping of neutrino oscillations. We aim to search for such a signal in atmospheric neutrino data from the IceCube Neutrino Observatory. This will be the most sensitive experimental test to date of neutrino decoherence resulting from Planck-scale physics.
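In a simple two-flavor picture, the damping enters the muon-neutrino survival probability as

$$ P_{\nu_\mu \to \nu_\mu} = 1 - \frac{1}{2}\sin^2(2\theta)\left[1 - e^{-\Gamma L}\cos\!\left(\frac{\Delta m^2 L}{2E}\right)\right], $$

where the decoherence parameter $\Gamma$ sets the damping strength, and its energy dependence (often parametrised as $\Gamma \propto E^n$) distinguishes between Planck-scale models.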
When the neutral hydrogen of the intergalactic medium (IGM) started to ionise, the Universe entered its latest major transition: the Epoch of Reionisation (EoR). Gamma-ray bursts (GRBs), the violent cosmic explosions signaling the death of stars with more than 30 times the mass of our Sun, can via their immense brightness make it possible to perform detailed studies of the IGM during the EoR, in particular the degree of ionisation at this epoch. Further, the absorption features imprinted on GRB afterglows by specific elements in the interstellar medium (ISM) provide unique insights into these properties, which are otherwise impossible to study, yet are important ingredients in the overall recipe for galaxy evolution. In my thesis, I am examining a high-S/N optical/near-infrared spectrum, taken with VLT/X-shooter, of the recently detected afterglow of the Swift GRB 210905A at redshift z~6.3, when the Universe was less than 10% of its current age. Using Dynamic Nested Sampling methods, I find evidence for a neutral hydrogen fraction of 20% at this epoch, which allows me to put a strong constraint on the reionisation history. I compare this and my results for the host galaxy metallicity to recent models and observations of the ISM in galaxies and the overall IGM at a similar epoch, and find that my results agree with their predictions.
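As an illustration of the sampling step, here is a minimal sketch using the dynesty package with a toy Gaussian likelihood standing in for the real damping-wing model; the numbers are placeholders, not my results:

```python
import numpy as np
from dynesty import DynamicNestedSampler

# Toy stand-in for the real damping-wing likelihood: pretend the data
# prefer a neutral fraction x_HI ~ 0.2 with Gaussian uncertainty 0.05.
def loglike(theta):
    x_hi = theta[0]
    return -0.5 * ((x_hi - 0.2) / 0.05) ** 2

def prior_transform(u):
    # Map the unit cube to a flat prior on x_HI in [0, 1]
    return u

sampler = DynamicNestedSampler(loglike, prior_transform, ndim=1)
sampler.run_nested(print_progress=False)
res = sampler.results
weights = np.exp(res.logwt - res.logz[-1])  # posterior weights of the samples
print("posterior mean x_HI:", np.average(res.samples[:, 0], weights=weights))
```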
Topological defects are increasingly being identified in various biological systems, where their characteristic flow fields and stress patterns are associated with continuous active stress generation by biological entities. Here, using numerical simulations of continuum fluctuating nematohydrodynamics, we show that even in the absence of any activity, both noise in orientational alignment and hydrodynamic fluctuations can independently result in flow patterns around topological defects that resemble the ones observed in active systems. Remarkably, hydrodynamic or orientational fluctuations alone can reproduce the experimentally measured stress patterns around topological defects in epithelia. We further highlight subtle differences between noise in orientation and hydrodynamic fluctuations based on defect trajectories and persistence time. Our simulations show the possibility of both extensile- and contractile-like defect motion due to fluctuations and reveal the defining role of passive elastic stresses in establishing fluctuation-induced defect flows and stresses.
Influenza A is a common viral infection spreading among humans and remains a major societal challenge, resulting in annual global outbreaks with numerous casualties. A critical step in virus dissemination is budding from the plasma membrane of infected cells. Despite extensive efforts, the mechanism underlying this viral budding event remains largely unresolved; however, the viral proteins NA, HA, M1 and M2 are known to be implicated in facilitating viral budding. In recent years, stochastic lateral pressure amongst membrane proteins, known as protein crowding, has emerged as a putative and effective driver of membrane bending and tubulation. The bulky ectodomains of the spike proteins NA and HA render crowding a promising mechanism in the progression of viral budding. To gather evidence for this mechanism, optical tweezers were used to probe membrane bending in living cells expressing viral proteins. Specifically, membrane tethers were pulled from HEK cells transiently expressing NA, employing optical tweezers to allow for sensitive assessment of the membrane budding potential. Parallel imaging with confocal fluorescence microscopy allowed for correlation of the protein density with the force needed to extract membrane tethers. The results show a decrease in the tether equilibrium force as membrane coverage of NA increases, indicating an effect of crowding. Unexpectedly, however, it was found that the average force (17.47 pN) measured for NA membrane tethers is significantly larger than for control cells (10.95 pN). This difference between transfected and wildtype cells is postulated to arise from a variability in membrane mechanics between the two groups, possibly due to side effects of the transfection protocol or due to an increase in the membrane bending rigidity (κ) upon expression of the NA transmembrane domain in the membrane. In conclusion, the presented assay is appropriate for the study of membrane crowding, and offers a novel method for quantitatively assessing the budding effect of viral envelope proteins.
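The reasoning about κ rests on the standard relation between the equilibrium tether force, the bending rigidity and the membrane tension,

$$ f_0 = 2\pi\sqrt{2\kappa\sigma}, $$

so at fixed tension $\sigma$, a higher bending rigidity $\kappa$ directly raises the measured tether force.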
Our aim is to implement an automated quality control method for meteorological data. The approach is based on a set of different tests that probe the spatial, temporal and physical consistency of our data in a statistically driven way. In order to merge the different outputs, we propose a statistical framework based on accuracy estimation and optimisation of our tests, and on Bayesian merging of multiple pieces of evidence. Finally, we set out guidelines for an imputation method for the detected outliers, based on optimisation of the likelihood yielded by our tests.
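As an illustration of the merging step, here is a minimal naive-Bayes sketch assuming independent tests; the likelihood-ratio values are hypothetical:

```python
import numpy as np

def merge_evidence(prior_outlier, likelihood_ratios):
    """Naive-Bayes merging of independent QC tests: each test contributes a
    likelihood ratio P(score | outlier) / P(score | valid); the posterior
    odds are the prior odds times the product of the ratios."""
    prior_odds = prior_outlier / (1.0 - prior_outlier)
    posterior_odds = prior_odds * np.prod(likelihood_ratios)
    return posterior_odds / (1.0 + posterior_odds)

# Hypothetical outputs of a spatial, a temporal and a physical-consistency test
print(merge_evidence(0.01, [12.0, 5.0, 0.8]))  # ~0.33 posterior outlier probability
```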
My research on cavity opto-mechanics includes measurements of the transmission and scattering loss of a photonic crystal membrane. Our method was to build a cavity using a concave mirror as the input window. By measuring the finesse, transmission and reflection at resonance, we can extract optical information about our membrane from the cavity response. During my presentation, I will compare different models to explain what I measured in the experiments.
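In the simplest two-mirror picture, treating the membrane as an additional intracavity loss $\mathcal{L}$, the measured quantities are related by

$$ \mathcal{F} = \frac{2\pi}{T_1 + T_2 + \mathcal{L}}, \qquad T_{\mathrm{res}} = \frac{4\,T_1 T_2}{(T_1 + T_2 + \mathcal{L})^2}, $$

where $T_1$ and $T_2$ are the mirror transmissions; measuring the finesse $\mathcal{F}$ and the on-resonance transmission $T_{\mathrm{res}}$ thus constrains the membrane's scattering and absorption loss.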
The daily X-shooter spectra of AT2017gfo provide the most detailed information of any kilonova to date. Indeed, the recently discovered spectral signatures of strontium (Sr) yield constraints on the ejecta density, expansion velocity and potential asymmetry. In this talk, we present the spectral information hidden within the Sr lines, applying the methodology of expanding photospheres. Ultimately, this framework provides tight and self-consistent statistical constraints on the expansion rate of the Universe, while probing the asymmetry of the kilonova explosion.
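Schematically, and ignoring relativistic and dilution corrections, the expanding photosphere method combines

$$ \theta = \sqrt{\frac{f_\lambda}{\pi B_\lambda(T)}}, \qquad R_{\mathrm{ph}} \simeq v_{\mathrm{ph}}\,\Delta t, \qquad D \simeq \frac{R_{\mathrm{ph}}}{\theta}, \qquad H_0 \simeq \frac{cz}{D}, $$

where $f_\lambda$ is the observed flux, $B_\lambda(T)$ the blackbody intensity at the fitted temperature, $v_{\mathrm{ph}}$ the photospheric velocity from the blueshifted Sr feature, and $\Delta t$ the time since merger.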
Lensing by galaxy clusters is a sensitive and promising probe of cosmology, complementing classical probes such as the CMB, SNe and BAO. It presents an independent way of probing the background geometry of the universe, and has been shown to be sensitive to the total matter density and the dark energy equation-of-state parameters. This probe is, however, currently limited by the accuracy of lensing models, which make use of rigid assumptions. BayesLens is a state-of-the-art hierarchical modelling code created to alleviate these rigid assumptions, and has shown increased performance on both mocks and real clusters.
In this project, we aim to extend the BayesLens code to include cosmological parameters as hyperparameters. This would create a code that models the lensing properties and the underlying cosmology at the same time, with which we aim to increase the precision and accuracy of cluster lensing cosmography. We first aim to demonstrate performance on mock clusters with predefined parameters. After this, the aim of the project is to recreate current cluster lensing cosmography studies using the extended code, to determine improvements and impact when considering real clusters.
Neutrinos are elementary particles with unique properties that make them ideal probes of new physics beyond the Standard Model. The highest-energy neutrinos known, produced in astrophysical phenomena, offer us an opportunity to look for new fundamental particles at energies not reached by particle accelerators on Earth. One possible new particle is the axion, originally postulated as a solution to the strong CP problem of the Standard Model and also a dark matter candidate. We investigate the potential existence of the axion indirectly: as high-energy astrophysical neutrinos travel billions of light-years on their journey towards the Earth, they might encounter a cosmic background of axions. If a neutrino-axion interaction happens, it will leave characteristic features, like bumps and dips, in the shape of the neutrino energy spectrum. We search for these features in astrophysical high-energy neutrino events detected by the IceCube neutrino telescope. The method allows us to probe tiny coupling strengths and axion masses down to $10^{-11}$ eV. Our preliminary findings are promising, as current IceCube observations seem able to constrain the existence of neutrino-axion couplings in a sizable part of the model parameter space.
The albedo of snow and ice drives a strong positive feedback loop that controls the amount of solar radiation absorbed by the surface. Snow melt induced by surface warming leads to a decreased albedo, which allows more solar radiation to reach the surface and further increases the melting and surface warming. In current climate models, the albedo over perennial snow is kept constant and thus cannot represent this feedback. Recently, a more realistic snow albedo parametrization has been incorporated into the global climate model EC-Earth3 over the Greenland ice sheet. To analyze and understand the impact of this new parametrization, several simulations with (and without) the modified snow albedo scheme were performed using the atmospheric general circulation model (A-EC-Earth3). When evaluating the simulated model climate, wave-train-like patterns are visible in the global two-meter temperature. These patterns trace back to shifts in elements of the general circulation of the atmosphere. To assess the significance of these patterns in the long-term means, Welch's t-test and the Kolmogorov-Smirnov test were used. In a likewise manner, other atmospheric variables such as the geopotential height and mean sea level pressure were evaluated to link these patterns to a specific part of the atmospheric circulation. The latter is part of ongoing work.
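For the significance testing, both tests are available in SciPy; a minimal sketch with placeholder data (not model output):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical long-term 2 m temperature means at one grid point, with and
# without the new snow-albedo scheme (placeholder values only)
t2m_ctrl = rng.normal(270.0, 1.0, 100)
t2m_new = rng.normal(270.4, 1.2, 100)

# Welch's t-test: difference in means without assuming equal variances
t_stat, p_t = stats.ttest_ind(t2m_new, t2m_ctrl, equal_var=False)
# Kolmogorov-Smirnov test: difference between the full distributions
ks_stat, p_ks = stats.ks_2samp(t2m_new, t2m_ctrl)
print(f"Welch p={p_t:.3f}, KS p={p_ks:.3f}")
```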
Active matter comprises systems of living or synthetic constituents that are constantly driven far from thermodynamic equilibrium. Here we explore the collective self-organization of active flexible filaments using a numerical model of self-propelled bead chains. We explore phase transitions to and from active turbulent states. In particular, we investigate the perhaps counterintuitive result that extremely flexible polar filaments require a larger active force to transition than their more rigid counterparts.
The IceCube detector is a neutrino telescope designed to study the cosmos from deep within the ice sheet at the South Pole. A cubic kilometre of ice is used to detect the Cherenkov radiation from the charged secondaries of neutrino interactions. With thousands of triggered events every second, sophisticated methods are needed to process the data. Our use of Graph Neural Networks (GNNs) provides an alternative reconstruction method to the reliable, but slow, current approach.
We present new analyses of stopped muons contained inside the detector, estimating the inelasticity of neutrino/antineutrino interactions, as well as using GNNs for noise cleaning in the forthcoming IceCube Upgrade detector.
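As an illustration, here is a minimal event-level graph network in PyTorch Geometric; the feature set, architecture and target are hypothetical, not the production pipeline:

```python
import torch
from torch_geometric.nn import GCNConv, global_mean_pool

class PulseGNN(torch.nn.Module):
    """Toy event-level regressor: nodes are detected pulses with
    (x, y, z, time, charge) features; output could be a direction estimate."""
    def __init__(self, n_features=5, hidden=64, n_out=3):
        super().__init__()
        self.conv1 = GCNConv(n_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, n_out)

    def forward(self, data):
        x = self.conv1(data.x, data.edge_index).relu()
        x = self.conv2(x, data.edge_index).relu()
        x = global_mean_pool(x, data.batch)  # pool pulses into one event vector
        return self.head(x)
```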
The IceCube Neutrino Observatory is a cubic-kilometre Cherenkov detector located at the South Pole. The IceCube Upgrade will improve the detection, reconstruction, and particle identification of atmospheric neutrinos in the GeV energy range using seven additional strings of new multi-PMT photosensors, increasing the instrumentation density in the bottom centre of the detector. Neutrino telescopes currently fall short at separating neutrino from antineutrino events. Even a small partial separation between those two signals would be a major improvement; for example, it could be used to enhance the detector's sensitivity to the neutrino mass ordering. The difference in the weak interactions of neutrinos and antineutrinos, caused by their opposite chiralities, results in different inelasticity distributions. A muon neutrino or antineutrino interacting through the charged current creates a muon, which can be separated from the hadronic cascade. Modelling the theoretical inelasticity distribution of the interactions in the GeV energy range, coupled with the higher rate of detected events, improves the potential for statistically separating the neutrino from the antineutrino signal. This is studied using simulations of the upgraded detector.
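At leading order, for charged-current scattering off quarks, the inelasticity and the helicity-driven difference read

$$ y \equiv 1 - \frac{E_\mu}{E_\nu}, \qquad \frac{d\sigma^{\nu q}}{dy} \propto \mathrm{const}, \qquad \frac{d\sigma^{\bar\nu q}}{dy} \propto (1-y)^2, $$

which is what makes the inelasticity distribution a statistical discriminator between the two signals.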
The theory of General Relativity (GR) stands on two of Einstein's most celebrated ideas: the Equivalence Principle and the statement that "gravity is geometry". The former implements Special Relativity (SR) and forces the geometry of spacetime to be Lorentzian, thus turning the previous statement into "gravity is Lorentzian geometry". It may not be obvious that these two ideas, although closely related, are completely independent. Indeed, if one is willing to give up SR, one could in principle require that spacetime exhibit a local symmetry other than Lorentzian, and still work out a geometric theory of gravity. For instance, imposing local Galilean symmetry leads to Newton-Cartan geometry, pioneered by É. Cartan in 1923. The result is a non-relativistic geometric theory of gravity.
In this work we study the recently discovered covariant formulation of non-relativistic gravity (NRG), obtained from an appropriate large-speed-of-light expansion of GR, with the ultimate goal of obtaining its linearised spectrum. This involves, as a preliminary step, rewriting GR in terms of a timelike vielbein, a spatial metric and a torsionful connection, which in turn allows one to write the Einstein-Hilbert Lagrangian in a so-called pre-non-relativistic form. This form proves to be the most suitable for performing the aforementioned expansion. In addition, we also review Newton-Cartan geometry, as the natural geometrical framework of NRG, as well as related state-of-the-art techniques such as the derivation of the relevant geometric fields via gauging procedures.
Building on this knowledge, and in analogy with the derivation of gravitational waves as solutions of the linearised Einstein field equations in GR, we aim to obtain the linearised spectrum of NRG from its action and the corresponding equations of motion, considering small perturbations of the geometric fields around a flat Newton-Cartan background.
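Schematically, and in one common convention, the pre-non-relativistic variables and their $1/c^2$ expansion take the form

$$ g_{\mu\nu} = -c^2\,T_\mu T_\nu + \Pi_{\mu\nu}, \qquad T_\mu = \tau_\mu + c^{-2} m_\mu + \mathcal{O}(c^{-4}), \qquad \Pi_{\mu\nu} = h_{\mu\nu} + c^{-2}\Phi_{\mu\nu} + \mathcal{O}(c^{-4}), $$

with $\tau_\mu$ the Newton-Cartan clock form and $h_{\mu\nu}$ the spatial metric of the background.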
The AdS/CFT correspondence is one of the major developments in theoretical physics, but it lacks a rigorous proof. It has been shown that the planar sector of AdS/CFT can be solved completely by the remarkable tool called integrability. We mainly focus on AdS/dCFT, a defect version of AdS/CFT. On the field theory side the full symmetry of $\mathcal{N}=4$ super Yang-Mills theory is broken, and on the gravitational side a potential solution is a probe-brane system. Such a system can be solved completely by the conformal bootstrap, given knowledge of the defect conformal data. The one-point functions in dCFT can be obtained from the overlap between matrix product states (MPS) and Bethe eigenstates. Our thesis studies the $(SU(3),SO(3))$ and $(SO(6),SO(3)\times SO(3))$ sectors in the $\mbox{D3-D5}$ probe-brane system and the $(SO(6),SO(5))$ sector in the $\mbox{D3-D7}$ probe-brane system using the twisted Yangian algebra.
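Schematically, the one-point function of an operator corresponding to a Bethe eigenstate $|\mathbf{u}\rangle$ is controlled by the normalised overlap

$$ \langle \mathcal{O}_{\mathbf{u}} \rangle \;\propto\; \frac{\langle \mathrm{MPS} | \mathbf{u} \rangle}{\sqrt{\langle \mathbf{u} | \mathbf{u} \rangle}}, $$

which is the quantity computed in the various symmetry sectors.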
A climate model is a canonical example of multi-physics, multi-scale modeling that requires a significant amount of computation time to extract signals of climate change, especially in finer-resolution configurations. Finer resolution is pertinent because it enhances the fidelity of resolved features and allows for the representation of missing processes and feedbacks, such as mesoscale eddies in ocean modeling. This thesis aims to identify and optimize algorithms in the global climate model EC-Earth3, specifically in the high-resolution configuration (EC-Earth3-HR) at 0.25° in the ocean subcomponent, the Nucleus for European Modelling of the Ocean (NEMO). Using diagnostics of the average CPU and elapsed time per subroutine, the calculation of the horizontal transport in sea-ice advection/diffusion is identified as the most time-consuming. Another subroutine of interest is the calculation of diagnostics of the poleward heat and salt transports, whose runtime (both elapsed and CPU) doubles when resources increase by a factor of 1.25. Further analysis must be carried out to determine which of these can lead to faster performance once optimized.
Quasars are manifestations of accreting supermassive black holes in the centers of galaxies. It is currently uncertain how many quasars there are, as quasar selection is known to be biased and incomplete. In this work we use a novel, less biased quasar selection technique based on machine learning to reach a more reliable and complete census of quasars in a large section of the sky around the north galactic pole.
Bacteria typically grow in communities, and this provides them with substantial advantages compared to solitary cells. These communities often comprise multiple bacterial species, leading to the emergence of complex spatial patterns, which can have a profound effect on bacterial function and survival within the communities. Most experimental studies investigating the mechanisms behind this pattern formation have focused on two-dimensional systems. Here we propose a novel approach to study three-dimensional multi-species bacterial colonies. The three-dimensional setting better replicates environmental bacterial habitats such as soil and intestines. Our results indicate that in three-dimensional multi-species colonies only the cells in the outer part of the colony are able to grow, while the center of the colony remains static. We anticipate our protocol to be a starting point for further studies. For example, it could be used to shed light on biological and physical questions which are still unanswered, such as the mechanism behind horizontal gene transfer and the social interactions arising within multi-species three-dimensional bacterial colonies. Furthermore, understanding how bacteria thrive in competitive habitats and their cooperative strategies for surviving extreme stress can be instructive, for instance, in inspiring a more rational approach to battling antibiotic-resistant pathogenic bacteria.
A wide range of living and artificial active matter exists in close contact with substrates and under strong confinement, where in addition to dipolar active stresses, quadrupolar active stresses can become important. Here, we numerically investigate the impact of quadrupolar non-equilibrium stresses on the emergent patterns of self-organisation in non-momentum conserving active nematics. Our results reveal that beyond having stabilising effects, the quadrupolar active forces can induce various modes of topological defect motion in active nematics. In particular, we find the emergence of both polar and nematic ordering of the defects, as well as new phases of self-organisation that comprise topological defect chains and topological defect asters. The results contribute to further understanding of emergent patterns of collective motion and non-equilibrium self-organisation in active matter.
Magnetic induction tomography (MIT) is a method for detecting and imaging conductive objects, based on the detection of induced radio-frequency electromagnetic fields. It is a promising non-invasive method, which can be used for biomedical applications such as the diagnosis of heart diseases. The induced electromagnetic fields are measured via the polarization of light sent through a cell filled with cesium gas, as the macroscopic atomic spin of the gas responds to the electromagnetic fields. However, the sensitivity of MIT is limited by quantum noise. We apply methods such as stroboscopic spin squeezing and back-action evasion to reduce the quantum noise of the cesium gas below the projection-noise limit. By varying the duration of the stroboscopic spin preparation, with a duty cycle of 15%, we achieve more than -3 dB of squeezing at a Larmor frequency of 722 kHz, progressing towards quantum-enhanced MIT measurements of a salt water sample.
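The squeezing quoted in dB compares the measured conditional spin-quadrature variance to the projection-noise variance,

$$ S_{\mathrm{dB}} = 10\,\log_{10}\!\left(\frac{\mathrm{Var}(\hat X_{\mathrm{meas}})}{\mathrm{Var}(\hat X_{\mathrm{PN}})}\right), $$

so $-3$ dB corresponds to a factor-of-two reduction in noise variance below the projection-noise limit.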
What is dark matter? Why is there an asymmetry between matter and antimatter? Why do neutrinos have mass? These three critical questions cannot be answered by the Standard Model of particle physics, despite its overwhelming success in describing the results of high-energy experiments at subatomic scales. They play a pivotal role in forming our understanding of the universe from its earliest epochs to its ultimate fate, at all levels from quarks to quasars. We thus require an extension of the current model to describe the phenomena that appear beyond the Standard Model, such as those that could blossom from the aforementioned questions.
One of the simplest such extensions, possibly addressing all of these, is the introduction of heavy/sterile neutrinos: close cousins of the known neutrinos, but far heavier. These hypothesized particles do not interact directly via any known force of the Standard Model, so it is impossible to detect them directly. However, a heavy neutrino can decay to particles that can be detected directly, and thus, if one knew exactly what this decay would look like, one could find evidence of its existence.
The thesis is thus concerned with finding possible decay processes (search strategies) that one could look for at the Large Hadron Collider at CERN to verify the existence of this heavy neutrino, which, if found, could shed light on many problems both in and beyond the Standard Model.
In this work, we use differential scanning calorimetry (DSC) to detect the denaturation of pea protein isolate (PPI) at different relative humidities (20% RH and 80% RH) and different equilibration times (4 months and 7 days). Using this approach, we obtained the temperatures of PPI denaturation and decomposition, and also concluded that time has an important effect on the redistribution of water. We also use DSC and thermogravimetric analysis (TGA) to measure pea protein samples treated in four ways (including heating and adding dithiothreitol (DTT) solution). We conclude that the treated samples are not stable and will relax back to a stable state. In addition, our samples are partly crystalline and partly amorphous, except for the old pea protein, which is amorphous.
We use empirical orthogonal function (EOF) and singular value decomposition (SVD) techniques to determine the relative importance of different atmospheric modes, using data on geopotential height, temperature, and other variables. The NCEP and ERA5 reanalyses are used for this, and their results will be compared.
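As an illustration of the method, here is a minimal sketch computing EOFs via an SVD of the anomaly matrix; the data are random placeholders:

```python
import numpy as np

def eof_analysis(field):
    """EOF analysis via SVD of the anomaly matrix.
    field: (n_time, n_space) array, e.g. monthly geopotential height maps."""
    anomalies = field - field.mean(axis=0)           # remove the time mean
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    eofs = vt                                        # spatial patterns (modes)
    pcs = u * s                                      # principal-component time series
    explained = s**2 / np.sum(s**2)                  # fraction of variance per mode
    return eofs, pcs, explained

# Toy data: 120 time steps on a 10x20 grid, flattened to (time, space)
z500 = np.random.default_rng(1).normal(size=(120, 200))
eofs, pcs, frac = eof_analysis(z500)
print("variance explained by leading mode:", frac[0])
```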
An important part of quantum condensed matter physics deals with the calculation of phase diagrams of interacting many-body systems. A novel approach for studying these systems is to utilize representation learning. This technique enables the study of quantum states in a human-independent manner, as the representation of the quantum phenomena is decided by the AI. Therefore, this approach promises a new perspective on the study of phase diagrams and new insight into the description of quantum states.
In general, representation learning is a technique that allows a system to automatically discover, from data, the representations needed to perform a task. This is achieved by compressing the input into a smaller set of latent variables. In my thesis, variational autoencoders (VAEs) are used. An autoencoder is a type of neural network that learns an efficient encoding of data through refinement, by attempting to regenerate the input from the encoding. The compression is achieved by passing the input through a latent space of smaller dimension.
The variational autoencoder adds the idea of constraining the latent variables to be normally distributed. This allows new output to be generated by sampling. Other types of constraints can also be applied to the latent space, all with the goal of "disentangling" the latent variables and encouraging them to be independent and learn unique features of the input. This disentanglement of latent variables is a key feature of representation learning.
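As an illustration, here is a minimal sketch of a beta-VAE, one common way of imposing such a disentangling constraint; the layer sizes and beta value are illustrative:

```python
import torch
from torch import nn

class BetaVAE(nn.Module):
    """Minimal beta-VAE: beta > 1 strengthens the prior-matching (KL) term,
    encouraging disentangled (independent) latent variables."""
    def __init__(self, n_in=64, n_latent=4, beta=4.0):
        super().__init__()
        self.beta = beta
        self.encoder = nn.Sequential(nn.Linear(n_in, 128), nn.ReLU())
        self.mu = nn.Linear(128, n_latent)
        self.logvar = nn.Linear(128, n_latent)
        self.decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                     nn.Linear(128, n_in))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.decoder(z), mu, logvar

    def loss(self, x):
        recon, mu, logvar = self(x)
        rec = nn.functional.mse_loss(recon, x, reduction="sum")
        kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + self.beta * kld  # beta weights the disentangling pressure
```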
One of the research goals of this thesis is the targeted representation learning of entangled states. For this problem, a dataset of ground states of a spin-1 Hamiltonian with various degrees of entanglement entropy is used. By training a VAE on this dataset and disentangling its latent space, I investigate how the entanglement entropy is represented by the latent variables.
An honorary prize will be awarded to the two presentations (one oral and one poster) judged to be the "best" overall, based on the content, the visual display and the ability to present the subject. A panel of faculty/supervisors (Heloisa Bordallo (chair), Kim Lefmann, Oleg Ruchayskiy, Jørgen Peder Steffensen) will judge the presentations and announce the winners.