HAMLET-PHYSICS 2025 Conference/Workshop
Lundbeck Auditorium
We are looking forward to welcoming you to the second annual HAMLET-PHYSICS Conference/Workshop, to be held in sunny Copenhagen, August 20 - 22, 2025.
The workshop has three main goals:
1. To bring together Danish and international physicists using ML to meet, share ideas, and build community across locations and physics specialties
2. To bring domain scientists into close contact with ML experts, to build community across the theory-application bridge
3. To provide a friendly environment for researchers to share best practices, for students to interact with experts, and for other sciences and industry to understand the state of ML in physics
Scientific Program
- Keynotes, plenaries and parallels
- Discussions and (AI-assisted) research speed-dating
- Beer talks and train-ride chats
- Hackathons and demonstrations from experts in high performance computing and machine learning
Confirmed speakers include Sascha Caron (NIKHEF), Sune Lehman (DTU) and Mario Krenn (Max Planck Institute & University of Tübingen).
Abstracts are open for contributions at the intersection of machine learning and
- Particle physics
- Astrophysics and cosmology
- Quantum physics
- Biophysics
- Climate science
- Geophysics
- Molecular physics
- Condensed matter
This is not an exhaustive list. We warmly welcome talk submissions, suggestions, and ideas, and will strive to accommodate all contributions.
Important Dates
- Registration & abstract submission opens: May 29, 2025
- Abstract deadline: July 17, 2025
- Notification of talks: July 20, 2025
- Program online: August 1, 2025
- Registration deadline: August 7, 2025
- Scientific program begins: August 20, 09:00
- Scientific program ends: August 22, 17:00
Abstracts submitted after the deadline will be considered on a case-by-case basis.
Social Program
Wednesday August 20th will feature a poster session and reception event at the University of Copenhagen Biocenter.
On Thursday evening, August 21st, the workshop will take to the rails: a heritage 1950s Norwegian State Railways diesel locomotive will take workshop attendees from Copenhagen (Østerport Station) to Kronborg Castle in Helsingør, the setting of Shakespeare's Hamlet.
While on board, refreshments will be served, and breakout sessions will be organized according to attendees' research areas of interest. A visit to Kronborg will be included, along with an open-air conference dinner in Helsingør.
Organization
Local Organizing Committee
- Daniel Murnane (NBI)
- Troels Petersen (NBI)
- Inar Timiryasov (NBI)
- Jean-Loup Tastet (DIKU)
- Troels Haugbølle (NBI)
- Oswin Krause (DIKU)
Sponsors
Timetable

- 08:30 → 09:20 Registration & Breakfast
- 09:20 → 09:40 Plenary: Intro
- 09:40 → 10:40 Keynote: Mario Krenn
  - 09:40 Towards an artificial muse for new ideas in Science (1h)
Artificial intelligence (AI) is a potentially disruptive tool for physics and science in general. One crucial question is how this technology can contribute at a conceptual level to help acquire new scientific understanding or inspire new surprising ideas. I will talk about how AI can be used as an artificial muse in physics, suggesting surprising and unconventional ideas and techniques that the human scientist can interpret, understand, and generalize to their fullest potential.
[1] Krenn, Pollice, Guo, Aldeghi, Cervera-Lierta, Friederich, Gomes, Häse, Jinich, Nigam, Yao, Aspuru-Guzik, On scientific understanding with artificial intelligence. Nature Reviews Physics 4, 761–769 (2022).
[2] Gu, Krenn, Interesting Scientific Idea Generation Using Knowledge Graphs and LLMs: Evaluations with 100 Research Group Leaders. arXiv:2405.17044 (2024)
[3] Rodríguez, Arlt, Möckl, Krenn, Automated discovery of experimental designs in super-resolution microscopy with XLuminA. Nature Communications 15, 10658 (2024).
[4] Krenn, Drori, Adhikari, Digital Discovery of Interferometric Gravitational Wave Detectors. Physical Review X 15, 021012 (2025).
Speaker: Mario Krenn
- 10:40 → 11:05 Coffee (25m)
- 11:05 → 11:55 Plenary: Physics Theory
  - 11:05 Machine Learning and Quantum Field Theory: a two-way dialogue (25m)
Quantum Field Theory (QFT) and modern Machine Learning (ML) share deep structural analogies, from path integrals and renormalization to latent spaces and marginalization. With its solid theoretical foundation, QFT offers a powerful lens to interpret global behaviors in ML that remain poorly understood. This talk explores the interplay between QFT and ML in both directions.
In QFT, the only systematically improvable, first-principles approach is Lattice QFT, based on evaluating the path integral with MCMC algorithms, which remain the state of the art. Recently, architectures based on Normalizing Flows (NF) have been proposed to enhance or even replace these methods. We will show how ML can help address the long-standing signal-to-noise degradation problem in Lattice QFT by combining NF with stochastic automatic differentiation.
Conversely, we discuss how QFT provides a controlled environment to explore core ML concepts. In particular, we present a generative stochastic autoencoder trained to perform a Super Resolution task on field configurations, where notions like depth, latent structure, and resolution enhancement emerge in a physically meaningful context.
Speaker: Pietro Butti
  - 11:30 Machine learning for analytic calculations in theoretical physics (25m)
In this talk, we will present recent progress on applying machine-learning techniques to improve calculations in theoretical physics, in which we desire exact and analytic results. One example is given by so-called integration-by-parts reductions of Feynman integrals, which pose a frequent bottleneck in state-of-the-art calculations in theoretical particle and gravitational-wave physics. These reductions rely on heuristic approaches for selecting a finite set of linear equations to solve, and the quality of the heuristics heavily influences the performance. Here, we investigate the use of machine-learning techniques to find improved heuristics. We use FunSearch, a genetic-programming variant based on code generation by a Large Language Model, to explore possible approaches, then use strongly typed genetic programming to zero in on useful solutions. Both approaches manage to rediscover the state-of-the-art heuristics recently incorporated into integration-by-parts solvers, and in one example find a small advance on this state of the art.
Speaker: Matthias Wilhelm (SDU)
- 12:00 → 13:30 Lunch
- 13:30 → 13:55 Plenary: Foundation Model
  - 13:30 PolarBERT, a foundation model for the IceCube Neutrino Observatory (25m)
We report on our progress in training PolarBERT, a foundation model for the IceCube Neutrino Observatory, and studying its generalization properties under domain shift induced by simulation imperfections.
The IceCube Neutrino Observatory at the South Pole consists of a cubic kilometer of Antarctic ice, instrumented with 5,160 digital optical modules. These modules collect light induced by neutrino interactions in the ice. This data is then used to identify the neutrino directions, energies, and types, which are essential inputs for both particle physics and astrophysics. Deep learning methods, such as graph neural networks, have been successfully applied to the steady stream of data that IceCube receives. In this work, we train a transformer-based foundation model on simulated IceCube data using a self-supervised learning objective, i.e. without relying on true labels that would otherwise need to be obtained from simulation. This is a first step towards pre-training the model on real, unlabeled physics data. The pre-trained model can then be fine-tuned on various downstream tasks, such as directional reconstruction of neutrino events, in a sample-efficient manner.
In terms of performance, PolarBERT compares favorably to state-of-the-art supervised models while offering greater flexibility.
Speaker: Inar Timiryasov (NBI)
- 13:55 → 14:55 Keynote: Sascha Caron
  - 13:55 AI for fundamental physics – next steps (1h)
We examine current advances in large physics models and foundation model approaches, identifying key challenges for benchmarking and promising avenues for further development of AI-assisted physics research.
Speaker: Sascha Caron
- 14:55 → 15:25 Coffee (30m)
- 15:25 → 16:45 Plenary: Particle Physics Applications
  - 15:25 When MAGIC met IceCube: Doing Gamma-Ray Astronomy with Neutrino Event Reconstruction (25m)
Neutrino and gamma-ray observatories might have more in common than you think. The MAGIC Telescope system, comprising a pair of 17 m Imaging Atmospheric Cherenkov Telescopes (IACTs), is located at Roque de Los Muchachos Observatory in La Palma, Spain. MAGIC is designed to detect gamma rays from around 50 GeV to over 50 TeV via atmospheric air showers. Arrays of IACTs rely on a complex pipeline in which each air shower imaged by the detectors generates a temporal stereo signal. The signal must then be calibrated, flattened into an image, cleaned, parameterized, and ultimately reconstructed by an ensemble of Random Forest algorithms. While Convolutional Neural Networks have shown promise for full event reconstruction in recent years, we demonstrate that neutrino event reconstruction techniques from IceCube can significantly reduce the path from raw telescope data to scientific output.
In contrast to standard analysis methods, this study directly leverages calibrated waveform data for the first time. These data consist of 30 ns temporal signals from each camera pixel. Due to the unconventional geometry of the MAGIC cameras and asynchronous clocks between pixels, we employ Graph Neural Networks (GNNs) as a classification algorithm for the first time in an IACT, using the neutrino GraphNeT framework and DynEdge model. In addition, we find that DeepIce, a transformer model for arrival direction reconstruction in IceCube, provides robust reconstruction when applied to MAGIC data. These findings show that GNNs and transformers excel at reconstructing raw MAGIC data, demonstrating the benefits of letting techniques hop between domains.
Speaker: Jarred Green (Max Planck Institute for Physics)
  - 15:50 Imaging Electron Beams with Virtual Diagnostics (25m)
Modern machine learning is becoming more widely applied to the field of particle accelerators. One such application is a virtual diagnostic (VD), where the output of time-consuming or destructive diagnostics is reconstructed using machine learning methods. In this contribution we present a general structure of artificial neural networks (ANNs) and training procedures that has been used to construct VDs for multiple different facilities. We have focused on an application of VDs that allows for online extraction of the beam's longitudinal phase space (LPS), otherwise measured destructively with transverse deflecting structures. We present how specific ANN architectures were chosen to produce accurate LPS predictions, along with their advantages and disadvantages. We report results from implementing these methods at the particle accelerators MAX IV in Sweden, the FERMI FEL in Italy, and SwissFEL at PSI in Switzerland. We show how these systems can be used to reach reliable predictions of the LPS for all three facilities. For future work, we show how virtual diagnostics could be further developed to suit the specific needs of operations at each facility.
Speaker: Johan Lundquist (Lund University)
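The virtual-diagnostic idea above can be sketched with a small surrogate model. This is a minimal illustration only, not the authors' setup: a scikit-learn MLP (standing in for the facility-specific ANN architectures) maps invented machine settings to a synthetic longitudinal phase space profile.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# Invented stand-ins for non-destructive machine readings (6 scalars per
# shot) and for the destructively measured LPS (a 40-bin current profile).
n_shots = 300
settings = rng.uniform(-1.0, 1.0, size=(n_shots, 6))
t = np.linspace(-1.0, 1.0, 40)

# Synthetic "ground truth": a Gaussian bunch whose centre and width
# depend on the first two settings; the remaining settings are inert.
centre = 0.3 * settings[:, :1]
width = 0.2 + 0.05 * np.abs(settings[:, 1:2])
profiles = np.exp(-(((t[None, :] - centre) / width) ** 2))

# Train the surrogate on most shots, hold out the rest for validation.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(settings[:250], profiles[:250])

pred = model.predict(settings[250:])
rmse = float(np.sqrt(np.mean((pred - profiles[250:]) ** 2)))
```

In practice the inputs would be real accelerator readings and the targets transverse-deflecting-structure measurements; choosing the architecture per facility is the subject of the talk.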
  - 16:15 Lightning talks (30m)
- 16:45 → 18:00 Reception: Poster session
  - 16:45 AI/ML Impact Areas in Physics and Materials Discovery of Halide Perovskite Nanomaterials (1h 15m)
Artificial intelligence and machine learning (AI/ML) are transforming experimental condensed matter physics and material science, enhancing the discovery rate. Our work focuses on colloidal metal-halide perovskite quantum dots, which are versatile nanomaterials with strong potential as LEDs, quantum light sources, and other optoelectronic devices. Owing to their broad compositional tunability and simple fabrication, these materials have attracted significant academic attention, with over 1,500 publications per year in 2024. In this scientific landscape, AI/ML promises to make a significant impact in three key areas: managing the growing volume of primary experimental data relative to derivative results, enhancing reproducibility across research groups, and ultimately, optimizing complex synthesis parameters to accelerate material discovery. Recent advances have shown that large language models (LLMs) may be used for encoding, translation, and optimization of experimental procedures, as well as prediction of experimental outcomes. In this contribution, we propose an LLM-driven agentic framework that facilitates an ontology-supported encoding of fabrication procedures and material properties, learns continuously from the literature, streamlines research workflows for perovskites, and enhances reproducibility to accelerate discovery.
Speakers: Ms Lidiia Varhanik (Lund University), Dmitry Baranov (Lund University)
  - 16:45 Assessing Insect Biodiversity and Activity Patterns in Tropical Forests Using Entomological Lidar and Hierarchical Clustering Analysis (1h 15m)
Conventional methods for monitoring insect populations and diversity often encounter limitations in spatial and temporal resolution, are labor-intensive, and carry inherent biases from light or bait traps. To overcome these challenges, our research group employs entomological Lidar. This technique, unlike traditional time-of-flight Lidar, uses a specialized Scheimpflug configuration that sharply focuses multiple targets, both near and far, onto the camera sensor simultaneously within a single exposure. We used this Lidar system for the non-invasive assessment of insect diversity and daily activity patterns within the Taï virgin forest in Côte d'Ivoire, West Africa.
The deployed Lidar system scanned various elevation angles along a forest edge. It was complemented by conventional trapping methods at different canopy heights. By scanning the airspace alongside diverse vegetation and tree types, we could assess insect behavior and populations across multiple microhabitats and canopy layers. This comprehensive approach enabled us to investigate insect composition and their spatial-temporal distribution throughout the forest canopy. Our findings reveal stratified patterns of insect activity at distinct heights. Variations in Lidar signals reflect distinct species compositions at different heights and times of day. This demonstrates a direct link between vegetation heights/canopy layers and insect biodiversity, with different species occupying specific levels.
A pivotal aspect of our work involves applying Hierarchical Clustering Analysis (HCA) to the Lidar-derived modulation power spectra. This technique effectively manages the inherent variability in entomological Lidar data by grouping modulation spectra based on their similarities. HCA enabled us to identify distinct insect clusters, which correlate with observed insect diversity even when direct species identification is not feasible. Furthermore, we analyzed the optical properties of captured insects, including wing specularity and polarimetric response. Correlating these properties with Lidar signals helped us elucidate distinct insect clusters and activity patterns across different canopy layers.
This study demonstrates how Lidar technology helps overcome many conventional monitoring challenges. It provides a comprehensive, high-resolution overview of insect diversity and population and its variations within microhabitats and over time. Our approach, which applies HCA clustering to Lidar data, reveals patterns in insect biodiversity that are valuable for ecological understanding and conservation efforts.
Speaker: Meng Li (Lund University, Combustion Physics)
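The hierarchical clustering step described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: synthetic stand-ins for modulation power spectra (all shapes and parameters invented) are grouped with SciPy's agglomerative clustering.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Synthetic stand-ins for Lidar modulation power spectra: two insect
# "types" with wing-beat peaks at different modulation frequencies.
freqs = np.linspace(0, 1000, 200)  # modulation frequency axis (Hz)

def spectrum(peak_hz):
    base = np.exp(-((freqs - peak_hz) / 40.0) ** 2)
    return base + 0.05 * rng.standard_normal(freqs.size)

spectra = np.array([spectrum(120) for _ in range(20)] +
                   [spectrum(450) for _ in range(20)])

# Agglomerative clustering on the spectra (Ward linkage), then cut the
# dendrogram into two flat clusters.
Z = linkage(spectra, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
# spectra sharing a wing-beat peak should land in the same cluster
```

On real data the number of clusters would be chosen from the dendrogram structure rather than fixed in advance.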
  - 16:45 Characterizing ultrashort laser pulses with DenseNet (1h 15m)
Knowledge of the shape and duration of ultrashort laser pulses plays an important role, e.g., in the optimization of high-harmonic generation (HHG), pump-probe spectroscopy, and the generation of Terahertz radiation [1], and can serve as an ideal diagnostic tool for laser systems, be they conventional or novel. Among current laser pulse characterization methods, the dispersion-scan (d-scan) technique emerges as a robust, inline technique with single-shot data acquisition variants. The acquired data comprise a 2D plot (the d-scan trace) from which numerical routines can be used to extract, or retrieve, the laser pulse (composed of spectrum and spectral phase). Fast pulse retrieval is desirable; hence, in this work we implement and extend DenseNet [2] neural networks to retrieve pulses from d-scan traces (including spectrum and spectral phase) and compare their inference speed with the execution time of optimized conventional retrieval algorithms.
[1] M Kling et al., J. Opt. 27, 013002 (2025)
[2] S Kleinert et al., Opt. Lett. 44, 979-982 (2019)
Speaker: Miguel Canhota (Lund University)
  - 16:45 Efficient Quantum Vision Transformers (1h 15m)
Vision transformers (ViTs) have emerged as powerful tools in computer vision. However, ViTs can be resource-intensive because of their reliance on the self-attention mechanism, which involves $\mathcal{O}(n^2)$ complexity, where $n$ is the sequence length. These challenges become even more pronounced in quantum computing, where handling large-scale models is constrained by limited qubit resources, inefficient data encoding, and the lack of native support for operations like softmax, necessitating alternative approaches better suited to quantum architectures. Recent advances have introduced ViT architectures in which the quadratic self-attention mechanism is replaced with FFT-based spectral filtering for improved efficiency. Building on this foundation, we propose QFT-ViT, a quantum-compatible extension of the FFT-based ViT that aims to address the computational constraints of quantum hardware. By leveraging the structural similarity between the FFT and the quantum Fourier transform (QFT), QFT-ViT enables efficient global token mixing in linearithmic time $\mathcal{O}(n \log n)$, avoiding operations such as softmax and dot products that are not natively supported in quantum circuits. The model adaptively filters frequency components in the spectral domain to capture global context with reduced resource overhead. Experimental results on benchmark datasets demonstrate that QFT-ViT achieves competitive accuracy and offers a scalable solution for applying transformer models in quantum machine learning.
Speaker: Vinay Chakravarthi Gogineni (University of Southern Denmark)
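The classical ingredient of this idea, FFT-based spectral filtering as a drop-in replacement for self-attention, can be sketched in a few lines. This is an illustrative classical sketch with an invented fixed low-pass filter; the proposed QFT-ViT would instead learn the filter adaptively and map the transform onto the quantum Fourier transform.

```python
import numpy as np

def fft_token_mixing(x, keep_fraction=0.5):
    """Mix tokens globally in O(n log n) by filtering in the frequency
    domain instead of computing O(n^2) self-attention.

    x: (n_tokens, d_model) real-valued token embeddings.
    keep_fraction: fraction of low-frequency components retained
                   (a crude stand-in for a learned spectral filter).
    """
    n = x.shape[0]
    X = np.fft.rfft(x, axis=0)            # FFT over the token axis
    cutoff = max(1, int(keep_fraction * X.shape[0]))
    X[cutoff:] = 0.0                      # adaptive filtering would learn this mask
    return np.fft.irfft(X, n=n, axis=0)   # back to token space, same shape

tokens = np.random.default_rng(1).standard_normal((16, 8))
mixed = fft_token_mixing(tokens)
```

Every output token now depends on every input token, which is the global mixing that self-attention otherwise provides at quadratic cost.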
  - 16:45 Investigating the insulator to metal transition in $\textrm{Ca}_2\textrm{RuO}_4$ with $k$-means (1h 15m)
Many complex oxides undergo a temperature-driven insulator-to-metal transition (IMT), usually around room temperature. In some materials, an IMT can also be induced by applying a current, opening the possibility to use these materials as controllable switches in electronic devices. $\textrm{Ca}_2\textrm{RuO}_4$ is such a case: The resistivity can be changed by orders of magnitude when a current is applied, but the underlying physics is poorly understood. In this study, angle-resolved photoemission spectroscopy (ARPES) measurements of the current-induced IMT in $\textrm{Ca}_2\textrm{RuO}_4$ are clustered with $k$-means. The qualitative signs of the IMT are correctly identified by the clustering algorithm, both when clustering the full angle and energy-resolved spectra and when clustering the angle-integrated energy distributions. In addition, more detailed information on the changes in the electronic band structure under the IMT is revealed.
Speaker: Anders Sandermann Mortensen (Department of Physics and Astronomy - Aarhus University)
  - 16:45 Machine Learning for Predicting Catalyst Properties in Binary Alloys (1h 15m)
I present a machine learning framework to investigate the catalytic activity of monolayer binary alloys toward the oxygen reduction reaction (ORR). Leveraging a dataset comprising thousands of density functional theory (DFT) calculations of OH adsorption energies on AgPt/Pt(111), AuCu/Cu(111), AuPt/Pt(111), and AuPd/Pd(111) monolayer alloy surfaces, I engineered 25 structural, energetic, and compositional features to capture complex physicochemical interactions. Tree-based models were developed to predict adsorption energies of ORR intermediates and to perform two classification tasks: (i) identifying the adsorption site (Au, Cu, Pd, or Pt), and (ii) classifying adsorption states as compressed or expanded. Both classification models achieved high accuracy, demonstrating robust performance. The adsorption energy of OH on monolayer surfaces was predicted through supervised regression using LightGBM and XGBoost models. Through cross-validation and hyperparameter tuning, model interpretability was enhanced using feature importance and SHAP analyses. Notably, despite comprehensive feature engineering, the lattice parameter of the guest emerged as an important predictive descriptor. This finding aligns with the established critical role of the lattice parameter in oxygen reduction reaction (ORR) activity [1,2]. Overall, these findings illustrate how targeted feature engineering, coupled with interpretable machine learning, can identify key physicochemical descriptors and expedite the data-driven discovery of efficient ORR catalysts.
References
[1] Ozório, MS, et al., Journal of Catalysis, 443, 2025, 115988
[2] Ozório, MS, et al., Journal of Catalysis, 433, 2024, 115484
Speaker: Dr Mailde S. Ozório (University of Copenhagen)
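The regression-plus-interpretability workflow above can be sketched with scikit-learn's GradientBoostingRegressor standing in for LightGBM/XGBoost, on invented synthetic features; impurity-based feature importances stand in for the SHAP analysis.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)

# Synthetic stand-in for the DFT dataset: 25 engineered features per
# surface site, with feature 0 playing the role of the guest lattice
# parameter that dominates the OH adsorption energy (values invented).
n_samples, n_features = 400, 25
X = rng.standard_normal((n_samples, n_features))
y = 2.0 * X[:, 0] + 0.3 * X[:, 1] + 0.05 * rng.standard_normal(n_samples)

model = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
model.fit(X, y)

# The "lattice parameter" feature should rank first; SHAP values would
# give a complementary, per-sample attribution on top of this.
top = int(np.argmax(model.feature_importances_))
```

The point of the exercise is the same as in the abstract: the dominant physical descriptor is recoverable directly from the trained model.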
  - 16:45 Perovskite Molecular Dynamics Enhanced by LATTE: Machine Learning Potentials Using Local Atomic Tensors (1h 15m)
Perovskite materials are central to various technologies, particularly in photovoltaic applications. However, computational studies of perovskites using ab initio methods are limited by computational cost, especially when simulating large systems or long time scales. Molecular dynamics (MD) simulations offer a viable alternative, yet classical interatomic potentials often fall short in accurately modeling the complex interactions in these materials under different thermodynamic conditions [1].
In this work, we use the recently introduced Local Atomic Tensor Trainable Expansion (LATTE) descriptor [2] to construct a machine-learning interatomic potential for CsPbI₃ perovskite. The LATTE-based model achieves a lower loss compared to the Atomic Cluster Expansion (ACE) on the same dataset, highlighting its superior accuracy and generalization capabilities. These results suggest that LATTE provides a powerful approach for simulating perovskites, potentially enabling more predictive modeling of their behavior in realistic conditions.
[1] H. Zhang, H.C. Thong, L. Bastogne, C. Gui, X. He, P. Ghosez, Phys. Rev. B 110, 054109 (2024)
[2] F. Pellegrini, S. de Gironcoli, E. Küçükbenli, arXiv:2405.08137 (2024).
Speaker: Atefe Ebrahimi (SISSA-ICTP)
  - 16:45 Shape and Softness Alter Particle Sorting in Deterministic Lateral Displacement: An Experimental and Simulation Study (1h 15m)
This work presents a systematic investigation into how particle shape and softness—specifically in bacterial samples—affect sorting in deterministic lateral displacement (DLD) microfluidic devices. While previous studies have qualitatively observed the impact of non-spherical shapes, we provide a quantitative, two-level experimental analysis using shape-defined particles and high-speed imaging. To complement the experiments, we used finite-element simulations to model solid particles with defined density and velocity, examining their interaction with the flow regime and resulting shear stress.
A key finding is that non-spherical particles do not sort as expected, even when their longest axis matches that of a spherical particle, due to rotational dynamics altering their effective size. We also show that soft, non-spherical bacterial chains and clusters deform dynamically during sorting, affecting their trajectory between DLD posts and their sorting result.
Speaker: Elham Akbari (Lund University)
  - 16:45 Understanding angle-resolved photoemission data in space and time (1h 15m)
Angle-resolved photoemission spectroscopy (ARPES) is a key experimental technique to determine the electronic structure of quantum materials. Recently, two new variations of ARPES have been introduced. MicroARPES adds spatial resolution, so that inhomogeneous samples or operating devices can be studied. Time-resolved ARPES adds time resolution, opening a window on dynamic processes, such as coherent phonons or the melting of charge density waves.
Both approaches greatly increase the number of parameters explored and the complexity of the data. In order to discover trends in a multi-dimensional parameter space, we employ clustering techniques such as k-means [1]. Despite the simplicity of the approach, valuable insights are gained on the dynamics of electrons in the three-dimensional Brillouin zone of a Weyl semimetal [2], on the melting of a charge density wave and on the general photoemission line shape in time-resolved ARPES [3].
References
[1] K. O. Mortensen et al., Procs. VLDB Endowment 16, 1740 (2023).
[2] Paulina Majchrzak et al., Physical Review Research 7, 013025 (2025).
[3] T. C. Meyer et al., arXiv:2506.02137
Speaker: Philip Hofmann
  - 16:45 Unlocking the MeV Spectrum: Advancing Detector Technology with AI (1h 15m)
MeV gamma-ray astronomy holds the key to studying some of the Universe’s most energetic and dynamic phenomena, including kilonovae, supernovae, and gamma-ray bursts. Yet, this region of the spectrum remains underexplored (the so-called “MeV Gap”) due to low photon interaction probability, high background levels, and complex signal responses in detectors.
At DTU Space, we address this challenge using AI-driven signal processing for advanced 3D Cadmium Zinc Telluride (CZT) drift strip detectors. By combining physics-based simulations with machine learning, we generate synthetic pulse shapes used to train neural networks, including convolutional neural networks (CNNs) and multilayer perceptrons (MLPs), to accurately reconstruct key photon interaction properties, such as position and energy.
We demonstrate that these neural networks outperform conventional algorithms, particularly near detector boundaries, enabling more precise event-by-event reconstruction. This work bridges physics-based modeling and modern deep learning to deliver compact, high-resolution detector systems suited for future space missions, astrophysical observations, medical imaging and nuclear safety applications.
This research is part of the i-RASE project (Intelligent Radiation Sensor Readout Systems), funded by Horizon EU, aiming to develop next-generation radiation detection technologies through the integration of AI, advanced sensor design, and application-driven innovation.
Speaker: Michał Kossakowski (DTU Space)
  - 16:45 Unraveling Breast Cancer Heterogeneity: Microfluidic Sorting and Bioassay-Based Functional Analysis (1h 15m)
This paper reports a novel microfluidic approach for characterizing and sorting breast cancer cell subpopulations based on size and mechanical properties, using deterministic lateral displacement (DLD). The metastatic potential of cancer cells is closely linked to their mechanical characteristics, which evolve with tumor progression and reflect cellular heterogeneity. Small and large tumor cells serve distinct roles: smaller cells often exhibit higher proliferative capacity and initiate new tumors, while larger cells may be more differentiated or adapted to specific microenvironments [1]. Characterizing these differences is critical to understanding cancer progression and metastasis [2]. While previous research has shown that cell mechanics can be probed through deformation-based methods, including our earlier work with blood and skeletal stem cells using DLD [3, 4], the application of such techniques to aggressive cancer cell types remains underexplored.
Here, we present a DLD-based microfluidic device designed to sort MDA-MB-231 breast cancer cells into size-based subpopulations. The device features three inlets and outlets to fractionate cells into small, medium, and large groups. The medium outlet serves as a transitional fraction containing a mix of cell sizes. Microscopic imaging confirmed that the small outlet contains uniform small cells, whereas the large outlet includes both individual large cells and clusters spatially separated within the outlet. Sorting efficiency was validated using inverted microscopy, and size distributions were quantified via a custom Python script.
To explore functional differences among the sorted populations, we assessed both proliferation and migration behavior. All subpopulations maintained proliferative capacity over a seven-day period. Migration assays were performed by seeding cells on wells coated with different extracellular matrix proteins (Basement Membrane Extract (BME), fibronectin, and collagen) and capturing time-lapse images over 20 hours. Small cells consistently displayed significantly greater motility across all matrix conditions compared to large cells and unsorted inlet controls, supporting the hypothesis that smaller cells possess enhanced metastatic potential. To evaluate invasiveness in a 3D environment, spheroids were formed from small, large, and inlet populations. After seven days, the spheroids were embedded in Matrigel to generate 3D invasion models. Small-cell spheroids exhibited invasive outgrowth originating from the core, while large-cell spheroids remained compact with well-defined boundaries. Quantification of the spheroid spread area further confirmed significant differences in invasiveness between subpopulations, with small-cell spheroids covering a larger area over time.
This study demonstrates that DLD-based microfluidics is a robust, label-free, high-throughput method for sorting and studying cancer cell heterogeneity. It enables the identification of aggressive subpopulations based on physical properties and behavior. Ongoing work focuses on extending this approach to include deformability-based separation using Multi-Dc devices, with the ultimate goal of gaining deeper insight into tumor progression, drug resistance, and invasion mechanisms. These findings contribute to the development of precision strategies for treating aggressive breast cancer.
References
[1] Celià-Terrassa T, Kang Y. Distinctive properties of metastasis initiating cells. Genes Dev. 2016;30(8).
[2] Wullkopf L, et al. Cancer cell mechanical adaptation to ECM stiffness correlates with invasiveness. Mol Biol Cell. 2018;29(20).
[3] Beech JP, et al. Sorting cells by size, shape, and deformability. Lab Chip. 2012;12(6).
[4] Xavier M, et al. Label-free enrichment of skeletal progenitor cells via DLD. Lab Chip. 2019;19(3).
Speaker: Esra Yilmaz (Lund University, Sweden)
- 08:30 → 09:20
- 09:00 → 09:10 Organizer Updates (10m)
- 09:10 → 09:50 Plenary: AI for Society
  - 09:10 The Quantum Technology ecosystem: A data-driven analysis of policy trajectories and the labor market (25m)
As quantum technologies advance from foundational science to commercial deployment, a robust understanding of their surrounding ecosystem—spanning policy, workforce, and industrial dynamics—is critical. In this work, we demonstrate how AI and machine learning methods can illuminate the structure and evolution of the quantum technology (QT) ecosystem. Drawing on two large-scale datasets—(1) 62 national quantum strategy documents across 20 countries, a corpus of 12,786 paragraphs, and (2) over 3,600 global job postings related to QT—we apply natural language processing and GPT-classification techniques to extract thematic, temporal, and geographical insights.
Using BERTopic and K-means clustering on dataset (1) of strategy documents, we identify 13 key policy themes, track their evolution over two decades, and reveal a strategic shift from foundational quantum science toward commercialization, workforce development, and national program coordination. In parallel, through LLM-enhanced classification of job posts (dataset 2), we analyze regional hiring trends, degree and skill requirements, and the demand for specific quantum roles across academia and industry. Taken together, we are able to build a data-driven picture of the QT landscape, identifying policy foci and the corresponding state of the labor market. Our approach supports strategic decision-making at the interface of science, industry, and governance—crucial for guiding the global transition into the second quantum revolution.
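The clustering stage described above can be sketched in a few lines. Purely as an illustration, the snippet below substitutes TF-IDF features for BERTopic's transformer embeddings and uses a six-paragraph toy corpus in place of the 12,786 strategy-document paragraphs; all texts and parameter choices are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy paragraphs standing in for national quantum strategy documents.
paragraphs = [
    "Invest in quantum computing hardware and error correction research",
    "Build a national quantum computing centre for hardware development",
    "Train a skilled quantum workforce through new university programmes",
    "Expand workforce development and quantum education initiatives",
    "Secure communication via quantum key distribution networks",
    "Deploy quantum communication and cryptography infrastructure",
]

# Embed paragraphs; BERTopic would use transformer embeddings instead.
X = TfidfVectorizer(stop_words="english").fit_transform(paragraphs)

# Cluster embeddings into candidate policy themes.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = km.labels_  # one theme label per paragraph
```

In the study itself, themes found this way are then tracked over time and geography to reveal the policy shifts described above.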
Speaker: Simon Richard Goorney (Niels Bohr Institute, Copenhagen University and Department of Management, Aarhus University)
-
09:35
Machine Unlearning 15m
While artificial intelligence (AI) has brought transformative benefits across numerous domains, it has also raised serious concerns about privacy and personal data management. A prominent example is the use of publicly shared images, which can be repurposed for facial recognition and potentially lead to unwanted surveillance. In response, regulations such as the General Data Protection Regulation (GDPR) have introduced provisions like the "right to be forgotten" to give individuals greater control over their data. However, AI models' capacity to memorize training data poses significant challenges in complying with such regulations, as models can inadvertently reveal details about the original data. Addressing this issue requires not only deleting the designated data but also removing any learned representations derived from it. This need has led to the emergence of machine unlearning, a field focused on removing the influence of specific data samples from trained AI models. In this context, we present our latest results advancing machine unlearning techniques to better support data privacy and regulatory compliance.
Speaker: Vinay Chakravarthi Gogineni (University of Southern Denmark)
-
09:10
- 09:50 → 10:50
- 10:50 → 11:10
Coffee 20m
- 11:10 → 12:00
Plenary: Nuclear & Quantum
-
11:10
Development of innovative methods for fission trigger construction 25m
The development of innovative methods for fission trigger construction addresses the challenge of recognising fission signatures in very complex detector response functions.
The fission-recognition approaches available today have intrinsic limitations.
To draw a clearer picture: the existing dedicated detectors for fission triggering impose constraints on experimental setup geometry and are not compatible with every fissioning mechanism.
The numerical alternatives for fission-signature recognition, such as calorimetry or n-fold gamma coincidence, can cause a major loss in statistics where they are applicable at all.
Moreover, in fissioning systems that require the use of a primary beam, fission can be a minor reaction channel compared to other processes, even with a fission-tag approach.
Additionally, with the increasing size of nuclear physics experimental setups and the need to recognise ever rarer reaction mechanisms, one of the main challenges in nuclear physics is to develop increasingly selective data analysis methods for increasingly complex datasets.
With this scenario in mind, we envisaged an AI-based fission trigger model that recognises fission solely from the detector response function, improving event-recognition statistics without the need for an ancillary detector. This AI model can also be used to evaluate the impact of each observable in identifying fission. The AI-based fission trigger (a classification model) is being developed for the $\nu$-Ball2 gamma spectrometer (PARIS configuration) to study correlations measured with a spontaneous ${}^{252}$Cf fission source. A dedicated training dataset was acquired using an ionisation chamber to provide a clean fission classification label. Another neural network model was developed to predict the fission event timing with enhanced time resolution (a regression model). This was achieved thanks to a custom loss function that improves on the ionisation chamber's time resolution, from 5-6 ns to ~3 ns, at reduced computational cost. Further details will be presented in the talk.
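The abstract does not specify the custom timing loss, so as a hedged illustration of the general idea (penalising large timing errors sub-quadratically so a few badly mis-timed events do not dominate training), a Huber-style loss might look like the NumPy sketch below. The function name, the delta value, and all numbers are invented for illustration; in practice such a loss would live inside a deep-learning framework.

```python
import numpy as np

def timing_loss(pred_ns, true_ns, delta=2.0):
    """Huber-style loss for predicted vs. true fission times (ns).

    Quadratic for errors below `delta`, linear above it, so outlier
    events contribute linearly rather than quadratically.
    """
    err = np.asarray(pred_ns) - np.asarray(true_ns)
    quad = 0.5 * err**2
    lin = delta * (np.abs(err) - 0.5 * delta)
    return np.where(np.abs(err) <= delta, quad, lin).mean()

# Errors of 0.5, 3, and 10 ns: only the first falls in the quadratic regime.
loss = timing_loss([0.5, 3.0, 10.0], [0.0, 0.0, 0.0])
```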
Speaker: Brigitte Pertille Ritter (Université Paris-Saclay / IJCLab)
-
11:35
Neural networks as trace-preserving quantum channels 25m
This work explores the potential of neural networks to find the quasi-inverse of qubit channels for any values of the channel parameters while keeping the quasi-inverse a physically realizable quantum operation. We introduce a physics-inspired loss function based on the mean of the square of the modified trace distance (MSMTD). The scaled trace distance is used so that the neural network does not increase the length of the Bloch vector of the quantum states, which ensures that the network behaves as a completely positive and trace-preserving (CPTP) quantum channel.
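For qubits, the trace distance that underlies such a loss has a simple Bloch-picture form: it equals half the Euclidean distance between the two Bloch vectors, which is what makes monitoring Bloch-vector length a natural CPTP check. A minimal NumPy sketch (illustrative only, not the authors' code):

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def rho_from_bloch(r):
    """Qubit density matrix from a Bloch vector r with |r| <= 1."""
    return 0.5 * (I2 + r[0] * SX + r[1] * SY + r[2] * SZ)

def trace_distance(rho1, rho2):
    """T = (1/2) Tr|rho1 - rho2|, via eigenvalues of the Hermitian difference."""
    eigs = np.linalg.eigvalsh(rho1 - rho2)
    return 0.5 * np.abs(eigs).sum()

r1 = np.array([0.0, 0.0, 1.0])   # |0><0|
r2 = np.array([0.0, 0.0, -1.0])  # |1><1|
T = trace_distance(rho_from_bloch(r1), rho_from_bloch(r2))
# For qubits, T equals half the Euclidean distance between Bloch vectors,
# so a channel that never stretches Bloch vectors never increases T.
```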
Speaker: Dr Muhammad Faryad (Lahore University of Management Sciences)
-
11:10
- 12:00 → 13:30
Lunch
- 13:30 → 14:20
Plenary: Imaging and Environment
-
13:30
Global glacier ice thickness inversion with supervised learning 25m
Accurate knowledge of ice volumes is essential for predicting future sea level rise, managing freshwater resources, and assessing impacts on societies from regional to global scales. Efforts to better constrain ice volumes face challenges due to sparse thickness measurements, uncertainties in model input variables, and limitations in traditional ice-flow model parameterizations. Glaciers currently account for approximately 20-30% of global sea level rise. Modeled glacier volume estimates vary widely, especially in arid regions such as the Andes and the Himalayan–Karakoram ranges, where billions rely on glacier-fed freshwater. On ice sheets, despite Synthetic Aperture Radar enabling unprecedented mapping of surface ice velocity from space, thickness inversion methods still yield significant errors, especially along coastal regions where complex bathymetry and fjord systems hinder mass conservation approaches, thus often requiring spatial interpolation techniques. Improved ice thickness estimates and bedrock mapping near grounding lines at the glacier termini of Greenland and Antarctica are crucial to improve ice flow models and reduce the uncertainties in future sea level rise projections. Over decades of surveys, millions of sparse thickness measurements have been collected across Earth's glaciers and ice sheets: approximately 4 million for glaciers worldwide, 20 million in Greenland, and 80 million in Antarctica, largely due to NASA's Operation IceBridge. Despite the wealth of data, machine learning has seen limited use in harnessing its full potential. To help bridge this gap, we've developed a global machine learning system capable of estimating the thickness of every glacier on Earth. The model combines two gradient-boosted decision tree schemes trained on numerical features. It integrates both traditional physics-based variables, such as ice velocity and mass balance, and geometrical features commonly used in area-volume scaling approaches.
We find that the system outperforms existing models almost everywhere (by up to 30-40% at high latitudes, where most ice is stored) and generalizes well in the ice sheet peripheries. I will present the rationale, benefits, and limitations of our machine learning approach, along with an envisioned strategy towards a machine learning-based map of the Greenland and Antarctic ice sheets.
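As a toy illustration of the gradient-boosted decision tree approach, the sketch below trains a single scikit-learn regressor on synthetic per-glacier features; the feature set, the invented thickness relation, and all parameter values are placeholders, not the paper's data or model (which combines two boosted-tree schemes).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-ins for physics-based and geometrical features:
velocity = rng.uniform(1, 200, n)    # surface ice velocity (m/yr)
slope = rng.uniform(0.01, 0.5, n)    # surface slope (rad)
area = rng.uniform(0.1, 100, n)      # glacier area (km^2)

# Invented target relation (thicker for fast, large, flat glaciers) + noise;
# purely illustrative, not a real glaciological scaling law.
thickness = 50 * velocity**0.25 * area**0.2 / (1 + slope) + rng.normal(0, 5, n)

X = np.column_stack([velocity, slope, area])

# Train on 400 glaciers, evaluate R^2 on the held-out 100.
model = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
model.fit(X[:400], thickness[:400])
r2 = model.score(X[400:], thickness[400:])
```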
Speaker: Dr Niccolo Maffezzoli (Ca' Foscari University of Venice)
-
13:55
AI-enhanced High Resolution Functional Imaging Reveals Trap States and Charge Carrier Recombination Pathways in Perovskite 25m
Perovskite thin films are promising for optoelectronic applications such as solar cells and LEDs, but defect formation remains a major challenge. In our study, we combine high-resolution functional intensity-modulation two-photon microscopy [1] with AI-enhanced data analysis to gain a deeper understanding of defect-related trap states in perovskite microcrystals [2,3].
Based on methylammonium lead bromide (MAPbBr₃) perovskite microcrystalline films, we developed a comprehensive carrier recombination model that captures the equilibrium dynamics of both excitonic and electron–hole pair photoluminescence (PL) emission. By systematically varying model parameters, we generated a large dataset of temperature-dependent, intensity-modulated PL spectra to train and optimize a machine learning regression framework for intensity-modulation two-photon microscopy (ML-IM2PM). To improve model performance and generalizability, a balanced classification sampling strategy was implemented during the training phase.
The resulting regression-chain model accurately predicts key physical parameters—exciton generation rate (G), initial trap concentration (N_TR), and trap activation energy (E_a)—across a spatially resolved 576-pixel map. These outputs were then used to solve a system of coupled ordinary differential equations (ODEs), enabling pixel-by-pixel simulations of carrier dynamics and recombination behavior under steady-state photoexcitation.
The simulations reveal pronounced spatial heterogeneity in exciton, electron, hole, and trap populations, as well as in both radiative and nonradiative recombination rates. Correlation analysis delineates three distinct recombination regimes: (i) a trap-filling regime dominated by nonradiative recombination, (ii) a transitional crossover regime, and (iii) a band-filling regime characterized by enhanced radiative efficiency. A critical trap density threshold of approximately 10¹⁷ cm⁻³ marks the boundary between these regimes.
Together, these findings establish ML-IM2PM as a robust platform for high-resolution, quantitative diagnosis of charge carrier dynamics in perovskite materials, offering valuable insights for targeted defect passivation and device optimization strategies.
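The coupled-ODE stage can be illustrated with a minimal trap-state rate-equation system solved with SciPy; every coefficient below is an invented placeholder, not a fitted ML-IM2PM parameter, and the model tracks only free electrons, holes, and trapped electrons.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rate-equation parameters (placeholders, cgs-flavoured units):
G = 1e21        # carrier generation rate (cm^-3 s^-1)
B = 1e-10       # radiative band-to-band coefficient (cm^3 s^-1)
k_trap = 1e-9   # electron trapping coefficient (cm^3 s^-1)
N_TR = 1e17     # trap concentration (cm^-3)
k_tp = 1e-10    # trapped-electron / hole recombination (cm^3 s^-1)

def rates(t, y):
    """Coupled ODEs for free electrons n, holes p, trapped electrons n_t."""
    n, p, n_t = y
    trap = k_trap * n * (N_TR - n_t)   # capture into empty traps
    rad = B * n * p                    # radiative recombination
    detrap = k_tp * n_t * p            # nonradiative trap-hole recombination
    return [G - rad - trap, G - rad - detrap, trap - detrap]

# Integrate from dark equilibrium towards steady state under illumination.
sol = solve_ivp(rates, (0, 1e-3), [0.0, 0.0, 0.0], method="LSODA", rtol=1e-8)
n, p, n_t = sol.y[:, -1]
```

Charge neutrality (p - n = n_t) is a linear invariant of these equations, which gives a convenient sanity check on the integration.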
Acknowledgements
We acknowledge financial support from the LU profile area Light and Materials (Wasp-Wise Project), the Knut and Alice Wallenberg Foundation (KAW), the Light-Material Profile, the Swedish Energy Agency, and the Swedish Research Council. Collaboration within NanoLund is acknowledged. The data handling was enabled by resources provided by LUNARC.
References
[1] Q. Shi, P. Kumar, and T. Pullerits, ACS Phys. Chem. Au, 2023, 3, 467–476.
[2] Q. Shi and T. Pullerits, ACS Photonics, 2024, 11, 1093–1102.
[3] Q. Shi and T. Pullerits, Energy & Environmental Materials, p. e70062.
Speaker: Qi Shi (Lund University)
-
13:30
- 14:20 → 20:00
Social excursion and dinner
-
14:25
Walk to train 30m
-
14:55
Train 1h
-
15:55
Kronborg 1h 5m
-
17:00
Reception 1h
-
18:00
Conference Dinner 2h
-
14:25
- 09:00 → 09:10
-
- 09:00 → 09:05
Organizer Updates 5m
- 09:05 → 09:35
Plenary: Physics AI in Practice
-
09:05
Past Experiences with Machine Learning 30m
Speaker: Troels Petersen (Niels Bohr Institute)
-
09:05
- 09:35 → 10:25
Plenary: Condensed Matter and Materials
-
09:35
Learning from Noisy Spectra: AI-Assisted Characterization of Quantum Materials via Raman Data 25m
Spectroscopic characterization of quantum and low-dimensional materials remains a fundamental challenge in condensed matter physics, especially when data are noisy or scarce. In this work, we explore a general deep learning framework for the automated classification and structural identification of two-dimensional (2D) materials from Raman spectra. Our approach requires no manual feature engineering and remains robust under severe signal degradation, enabling accurate twist-angle identification in bilayer graphene and similar systems. We emphasize the method’s physics relevance: it can extract latent structural information typically accessible only through time-consuming manual preprocessing or high-resolution techniques. The framework also lends itself to generalization across domains—offering a blueprint for integrating generative modeling and representation learning in other spectroscopy-based fields. This work represents a concrete step toward physics-aware, noise-resilient machine learning and may open new avenues for real-time characterization in experimental condensed matter research.
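A drastically simplified sketch of the classification task: synthetic two-peak spectra under heavy noise stand in for real Raman data, and a small scikit-learn MLP stands in for the paper's deep learning framework. The peak positions are loosely inspired by graphene's G (~1580 cm⁻¹) and 2D (~2700 cm⁻¹) bands, but the two-class setup is far simpler than twist-angle identification.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
wn = np.linspace(1200, 2900, 256)  # wavenumber axis (cm^-1)

def lorentz(center, width=25.0):
    """Lorentzian line shape on the wavenumber grid."""
    return 1.0 / (1.0 + ((wn - center) / width) ** 2)

def spectrum(cls, noise=0.3):
    """Synthetic spectrum: classes differ only in the ~2700 cm^-1 peak height."""
    base = lorentz(1580) + (1.5 if cls == 0 else 0.7) * lorentz(2700)
    return base + noise * rng.standard_normal(wn.size)

X = np.array([spectrum(c) for c in range(2) for _ in range(200)])
y = np.array([c for c in range(2) for _ in range(200)])

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(Xtr, ytr)
acc = clf.score(Xte, yte)  # accuracy despite severe per-channel noise
```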
Speaker: Yaping Qi (Tohoku University)
-
10:00
Achieving sub-temporal resolution in the analysis of two-state single-molecule trajectories 25m
While spatial resolution in fluorescence microscopy and related fields has reached the nanometer scale during the last two decades, the time resolution has remained essentially unchanged and is set by the camera system's imaging time. Yet adequate time resolution is crucial for acquiring accurate information about, for instance, dynamical processes in cells.
In a reaction-diffusion process in a cell, a given molecule will undergo an alternating process: unbound (free molecule) to bound (molecule bound into a complex) and back. The two states (bound and unbound) are in general characterized by different diffusion constants, and the transitions between the two states are characterized by two rates (bound-to-unbound and unbound-to-bound).
The analysis of experimentally acquired trajectories for such two-state processes is often done using a discrete-time hidden Markov model, thus implicitly assuming that the observations generated by the hidden states are near-perfectly resolved, which is seldom the case in practice. The matter is brought to a head for rapid kinetics, where sub-time events that happen during the imaging time are commonplace. To deal with this type of rapid switching dynamics, we introduce a Bayesian parameter estimation procedure combined with a novel algorithm that efficiently calculates the exact probability of observed trajectories, including the many "unseen" switching events during imaging. Our method is based on an analytic derivation of generalised transition probabilities - transition-accretion probabilities - that probabilistically capture unseen switching behaviour during data acquisition. We perform in-silico parameter inference, comparing this sub-time hidden Markov model to the standard variant (applicable to slow kinetics) as well as to a recently proposed approximative method (applicable to rapid kinetics). We find that our method works well irrespective of the temporal resolution of the setup.
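A key building block in such an approach, the exact probability of the state observed at the next frame summed over all unseen switches within one frame, has a closed form for a two-state Markov chain via the matrix exponential. The sketch below uses made-up rates and does not reproduce the authors' transition-accretion probabilities, which additionally account for what happens during the exposure itself.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical switching rates (1/s) and camera frame time (s); switching
# is fast relative to the frame, so many transitions go unseen per frame.
k_bu, k_ub = 50.0, 30.0   # bound->unbound and unbound->bound rates
dt = 0.02                 # imaging (frame) time

# Generator of the continuous-time Markov chain, states = [bound, unbound].
Q = np.array([[-k_bu,  k_bu],
              [ k_ub, -k_ub]])

# Exact frame-to-frame transition matrix: expm sums over every possible
# sequence of unseen switches within one frame.
P = expm(Q * dt)

# Two-state closed form for comparison:
# P[0,0] = pi_b + (1 - pi_b) * exp(-(k_bu + k_ub) * dt)
pi_b = k_ub / (k_bu + k_ub)  # stationary probability of the bound state
```

A discrete-time HMM that allows at most one switch per frame would misestimate these probabilities badly at this switching speed, which is the failure mode the abstract targets.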
Speaker: Tobias Ambjörnsson (Lund University)
-
09:35
- 10:25 → 10:50
Coffee 25m
- 10:50 → 11:50
Keynote: Panel Discussion: Big Ideas and New Directions in AI for Physics
-
10:50
Panel Discussion 1h
-
10:50
- 11:50 → 12:00
- 12:00 → 13:00
Lunch
- 13:00 → 14:00
Reception: (Optional) Farewell coffee / beer