Past Seminars & Tutorials

Exploring the Cosmos with future CMB polarisation data

The study of the cosmos has advanced significantly over the past three decades through measurements of the Cosmic Microwave Background (CMB) temperature anisotropies with unprecedented precision, as well as polarisation anisotropies with moderate sensitivity. In the coming decades, CMB polarisation will be measured with substantially higher sensitivity by experiments such as LiteBIRD, the Simons Observatory, and other forthcoming missions. The primary scientific objective of these experiments is the detection of primordial B-mode polarisation in the CMB. The same data will also enable the exploration of new physics, including cosmic birefringence (parity-violating effects in CMB photons) and extensions to the standard cosmological model, particularly those aimed at alleviating the Hubble tension. In this talk, I will present the prospects for detecting primordial B-mode signals with LiteBIRD and other proposed experiments in the presence of various foreground complexities. I will also discuss two novel science cases—cosmic birefringence and Rayleigh scattering—which are being targeted as potential extensions of the standard model of cosmology.

Conformal invariance imposes strong constraints on the form of correlation functions of gauge-invariant operators, and these correlators diverge when their conformal dimensions satisfy certain relations. These divergences and their renormalization have been understood up to three-point functions in general spacetime dimension, and for holographic 4-point functions in specific spacetime dimensions. Going from odd to even dimensions increases the complexity of the analysis, and in our work we discuss how to renormalize holographic 4-point functions in d = 4. The analysis shows that new features arise when these correlators are renormalized, which may impose constraints on the spectrum of the CFT. The natural language for dealing with these divergences is momentum space, so in this talk I will first discuss momentum-space CFT and then focus on the divergences of these correlators and their renormalization.

In this talk, I will discuss how the scalar degree of freedom emerging in R²-gravity, namely the scalaron, can naturally serve as a viable dark matter (DM) candidate. The scalaron interacts only gravitationally with Standard Model fields through Planck-suppressed couplings, making it consistent with the current absence of DM detection beyond gravity. I will show how a non-minimal coupling of the Higgs field to gravity modifies the induced trilinear scalaron-Higgs interaction that governs the early-universe evolution of the scalaron. The interplay between the R²-gravity contribution and that from the non-minimal coupling determines both the initial conditions and evolution of the scalaron, leading to cold DM behavior at later epochs. Remarkably, depending on the strength of the non-minimal coupling, the scalaron mass can either lie in the meV range or within the keV-MeV window to yield the observed relic abundance, subject to additional constraints from collider and astrophysical observations.
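
For orientation, the R²-gravity action is commonly written in the Starobinsky form (quoted here as standard background, with M denoting the scalaron mass scale; the speaker's conventions may differ):

\[
  S \;=\; \frac{M_{\mathrm{Pl}}^{2}}{2}\int d^{4}x\,\sqrt{-g}\,\left(R+\frac{R^{2}}{6M^{2}}\right),
\]

and the extra scalar degree of freedom (the scalaron) acquires the mass M after transforming to the Einstein frame.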

Fluid–structure interaction (FSI) systems offer significant potential for sustainable and advanced energy harvesting, as appropriate tuning of structural and flow parameters can enable efficient extraction of energy from the surrounding fluid. Motivated by this prospect, immersed boundary method (IBM)-based direct numerical simulations are performed to investigate the complex flow physics and propulsive characteristics of FSI systems. As a representative three-dimensional configuration, a pitching plate oscillating about its leading edge is examined at a Reynolds number of Re = 1000 over a wide range of pitching frequencies, corresponding to Strouhal numbers 0.2 ≤ St ≤ 2. At moderate pitching frequencies, the wake exhibits a reverse von Kármán vortex street, which bifurcates into two distinct branches connected by vortex strands at higher Strouhal numbers. For panels with larger aspect ratios, an avalanche of large, highly entangled three-dimensional vortex structures emerges for St ≥ 1.5. Strong spanwise wake compression is observed in these cases, significantly influencing the development of secondary instabilities at increased pitching frequencies. However, no direct correlation between the growth of secondary instabilities and the spanwise pressure gradient is established. With increasing Strouhal number, a stable central wake region develops, accompanied by a higher concentration of small-scale vortical structures near the jet plane. Continuous wavelet transform analysis reveals complete synchronization of the shed vortices with the pitching Strouhal number for the low-aspect-ratio panel. In contrast, the spectral response of the high-aspect-ratio panel at St = 1 is dominated by a narrow band of low-frequency cells, and a period-doubling phenomenon is observed in the spanwise cells during space–time reconstruction of the velocity signals. Across the investigated range of Strouhal numbers, both thrust and lift signals exhibit constant-amplitude oscillations, with the lift amplitude being approximately twice that of the thrust. Furthermore, the root-mean-squared thrust and lift coefficients increase monotonically with both Strouhal number and panel aspect ratio.

Quantum teleportation is a very useful scheme for transferring quantum information. The optimal teleportation fidelity of a shared bipartite state of a system of distinguishable quantum particles is known to be (F_max d + 1)/(d + 1), with F_max being the 'maximal singlet fraction' of the shared state. However, the Parity Superselection Rule (PSSR) in Fermionic Quantum Theory (FQT) puts constraints on the allowed set of physical states and operations, and thereby leads to different notions of quantum entanglement: 'locally accessible' and 'locally inaccessible'. In the present work, we derive an expression for the optimal teleportation fidelity for locally accessible entanglement, given that the quantum information to be teleported is encoded in fermionic modes of dimension 2^N × 2^N, using a 2^N × 2^N-dimensional shared fermionic resource state between the sender and the receiver. To obtain the optimal teleportation fidelity in FQT, we introduce PSSR-restricted twirling operations and establish a fermionic state-channel isomorphism. Remarkably, we notice that the structure of the canonical form of the twirl-invariant fermionic shared state differs from that of the isotropic state, the corresponding canonical invariant form for teleportation in Standard Quantum Theory (SQT). In this context, we also introduce restricted Clifford twirling operations that constitute a unitary 2-design in FQT, for experimentally validating such optimal average fidelity. Finally, we discuss the preservation of locally inaccessible entanglement for a class of fermionic teleportation channels.
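
For reference, the standard-quantum-theory benchmark quoted above is the well-known relation between the optimal teleportation fidelity and the maximal singlet fraction,

\[
  f_{\mathrm{opt}} \;=\; \frac{F_{\max}\, d + 1}{d + 1},
\]

so a maximally entangled resource (F_max = 1) gives unit fidelity; the talk examines how this benchmark is modified once the parity superselection rule of fermionic quantum theory is imposed.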

[Based on the work reported in arXiv:2312.04240 (quant-ph), together with ongoing work.]

Gravitational waves (GWs) from compact binary coalescences, such as binary black holes (BBHs) or binary neutron stars (BNSs), offer a novel probe to study the expansion rate of the Universe. GWs provide a direct estimate of the luminosity distances of binary mergers but not their redshifts unless their electromagnetic (EM) counterparts are observed. In this presentation, I will discuss two Bayesian formalisms to estimate the Hubble constant from dark sirens, BBHs, and BNSs, which are not accompanied by EM counterparts. First, I will describe an approach to infer the Hubble constant from the cross-correlation between galaxies with known redshifts and individual BBH events, utilizing large-scale information that has so far not been used when statistically identifying the host of the GW event. Second, I will present a method that uses tidal deformabilities in BNS signals, combined with the knowledge of the neutron star equation of state (EoS), to break the redshift-mass degeneracy. This enables joint inference of the Hubble constant, EoS, and BNS population and remains effective for current as well as next-generation detectors.

A limitation in the analysis of channel capacities is perhaps an overemphasis on mathematical elegance over consideration of realistic scenarios. We attempt to remedy some of these shortcomings by developing a framework for analyzing classical capacities in which the set of states available for encoding information is restricted on the basis of various physical properties. As examples, we consider constrained classical capacities of noiseless and noisy energy-preserving qubit channels. Next, we elucidate the effects of energetic restrictions on information transmission and on the quantum advantage for entanglement-assisted capacities. Here, we introduce an energy-constrained dense coding (DC) scheme, which we show to be optimal in d = 2. Interestingly, the restricted framework may potentially alter many fundamental results in the standard framework of communication. In particular, we establish that Classical-Quantum (CQ) channels can exhibit enhanced capacities under entanglement assistance in the energy-constrained setting, an effect shown to be impossible in the unrestricted scenario. I will conclude with some of our results on quantum teleportation, focusing on fidelity deviation and the role of prior information.

Primordial black holes (PBHs) serve as key probes of the early Universe and cosmic evolution. In this study, we explore the formation of PBHs near the QCD phase transition, driven by a broadly peaked inflationary scalar power spectrum. This mechanism naturally results in an extended PBH mass distribution and generates two distinct stochastic gravitational-wave backgrounds (SGWBs): a scalar-induced SGWB from second-order tensor perturbations at the time of PBH formation, and a merger-driven SGWB arising from the evolution of the PBH binary population. We analyze both SGWB components using Bayesian methods, incorporating data from the NANOGrav 15-year dataset and the first three observing runs of LVK. We also project the continuous-wave signals expected from mini extreme–mass-ratio inspirals (mini-EMRIs), enabling direct comparison with existing constraints from NANOGrav and LVK.

Our parameter-space analysis reveals regions where the combined SGWB signal may be detectable by future ground- and space-based gravitational-wave observatories. Notably, the extended PBH mass spectrum naturally leads to the formation of mini-EMRIs, which are promising targets for next-generation ground-based detectors such as upgraded versions of LVK, ET, and CE. In much of the parameter space, the astrophysical SGWB masks the primordial contribution in the frequency range accessible to ground-based detectors. As a result, in scenarios with extended PBH mass functions, the detection of mini-EMRIs provides a more reliable probe of the PBH landscape than SGWB measurements alone.

Living systems often exhibit dynamic behaviours driven by interfacial chemical reactions that generate Marangoni stresses, leading to droplet motion and self-propulsion through surfactant activity. Building on this paradigm, we investigate the migration of a surfactant-laden droplet in a thermally driven Poiseuille flow, where a first-order interfacial reaction continuously modifies the surfactant concentration and thereby the local surface tension. The droplet hydrodynamics are governed by the Stokes equations. Under the small Péclet number limit, we solve the hydrodynamics using the solenoidal decomposition method and regular perturbation. This framework allows us to capture the coupled effects of interfacial chemistry, viscous flow, and imposed temperature gradients on droplet self-propulsion and cross-stream migration. Our results demonstrate how reaction kinetics and thermal forcing can be tuned to achieve controlled droplet transport, with potential applications in targeted biomedical delivery and microfluidic manipulation.

The universe is pervaded by magnetic fields across a vast range of scales, from planets and stars to galaxies and galaxy clusters. Observationally, magnetic field strengths span from several μG in galaxies and clusters, up to a few Gauss (G) for planets, and as high as 10^12 G for neutron stars. Gamma-ray studies and Faraday rotation experiments further constrain the intergalactic medium (IGM) magnetic field between 10^-10 and 10^-22 G. While classical magnetohydrodynamics can amplify minuscule seed fields to explain present-day galactic and cluster magnetism, the large-scale primordial electromagnetic fields potentially stretching over Mpc distances require generation mechanisms operating in the early universe. Inflationary magnetogenesis, realized through breaking the conformal invariance of the Maxwell term via non-minimal coupling to scalar fields during inflation, provides a compelling scenario for producing such fields. Inflation, which underpins the observed large-scale structure of the universe, is fundamentally a quantum process. Crucially, it not only seeds primordial magnetic fields but also generates gravitational waves, both of which serve as potential observables probing quantum signatures from the early universe. This discussion presents an integrated approach to analyzing primordial electromagnetic and gravitational fields, introducing experimentally viable observables to quantify their non-classical, quantum origin.
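
A commonly studied realization of this conformal-invariance breaking (shown here only as representative background; the coupling function f(φ) and the conventions used in the talk may differ) is a kinetic coupling of a scalar field φ to the Maxwell term,

\[
  S_{\mathrm{EM}} \;=\; -\frac{1}{4}\int d^{4}x\,\sqrt{-g}\; f^{2}(\phi)\, F_{\mu\nu}F^{\mu\nu},
\]

with f → 1 after inflation so that standard electrodynamics is recovered, while the time-dependent coupling during inflation amplifies electromagnetic vacuum fluctuations into large-scale fields.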

Tidal Love numbers provide a handle to test the nature of compact objects, as well as theories of gravity. Several recent clarifications have improved our understanding of these Love numbers, but further investigations have also led to more confusion. I plan to discuss these recent developments and the conflicting claims in the recent literature on these issues. I will show that the tidal Love numbers of a non-rotating black hole vanish identically in the static case, which may not be true in the dynamical scenario, while for rotating black holes the situation is more involved. Besides, I will also discuss fermionic perturbations leading to non-zero static Love numbers for black holes, and highlight some remarkable novel features for ultra-compact objects as well as for quantum black holes.

A central program in quantum foundations is to characterize quantum theory through the correlations it permits. Communication tasks have proven effective in this pursuit, typically involving one or more senders who transmit limited information to a receiver attempting to infer a target variable. Existing studies, however, primarily focus on inference success rates and how pre-shared correlations influence them. Here we propose a shift in perspective by introducing the notion of prior certainty: an inference is prior-certain if the receiver, before receiving any message, is already certain of its accuracy. We examine this notion within a distributed variant of the random access code (RAC) task, and find that quantum nonlocal correlations cannot support both prior certainty and optimal inference. Specifically, we prove that if the optimal inference success rate is attained by sharing quantum nonlocal resources, then prior-certain inference becomes impossible. Interestingly, this restriction is violated by certain post-quantum correlations, including those that obey known principles such as Information Causality (IC) and Macroscopic Locality. This reveals that, even within RAC-type scenarios, the violation of IC is not the sole signature of post-quantum behavior. More broadly, our findings indicate that prior certainty, when applied to other communication settings, may further sharpen the distinction between quantum and post-quantum correlations.

Communication and computation are the twin pillars of the modern internet, serving humanity efficiently since its inception. With the rise of quantum information theory, there has been growing interest in the development of a quantum internet architecture, which has already demonstrated clear advantages over its classical counterpart. But what if, in the future, a more fundamental theory of nature emerges, one that includes both classical and quantum theory at their respective scales? Could such a theory enable an even more powerful internet infrastructure? In this talk, we explore this possibility and demonstrate that no internet structure can surpass the quantum one, which offers simultaneous advantages for both communication and computation tasks.

In this talk, I will focus on some of our recent work on various aspects of the gravitational wave (GW) memory effect. First, I will talk about the GW memory effect in the context of a certain class of ECOs, or 'BH mimickers'. The motivation behind this work is to see whether the memory effect can be used as a future pointer towards exploring non-BH compact objects. We choose a class of wormhole solutions in this regard. The presence of an extra dimension and the wormhole nature of the spacetime geometry get imprinted in the memory effect. Since future gravitational wave detectors will be able to probe the memory effect, the present work provides another avenue to search for compact objects other than black holes. In the next part of my talk, I will discuss GW memory in the context of supernova neutrinos. Here I will discuss the impact of neutrino self-interactions on the GW memory signal. Our results reveal that memory signals for self-interacting neutrinos are weaker than those for free-streaming neutrinos in the high-frequency range. I will also talk about the implications for detecting and differentiating between such signals with planned space-borne detectors like DECIGO, BBO and LISA.

Time: 02.30 pm to 03.30 pm

Title of the talk: From Balls to Paradoxes

Speaker: Dr. Arijit Ghosh, ACMU, ISI, Kolkata

Abstract: In this talk, we discuss the foundational result known as the Banach-Tarski Paradox, a counterintuitive theorem in set theory stating that a solid ball can be decomposed into finitely many pieces and reassembled into two identical copies of the original. We will explore the key ideas underlying this paradox, including the role of the Axiom of Choice, the construction of non-measurable sets such as Vitali sets, and the surprising consequences for our understanding of volume and measure.

 

 

Time: 04.00 pm to 05.00 pm

Title of the talk: Exclusivity principle and the Quantum Nature of the Physical World

Speaker: Dr. Manik Banik, S. N. Bose National Centre for Basic Sciences, Kolkata

Abstract: Physical theories are constructed to describe the world as observed through human experience. When new phenomena resist explanation within an existing framework, we are compelled to revise or replace the theory. A paradigmatic example is the emergence of quantum mechanics, which arose from the failure of classical physics to account for microscopic phenomena. Yet, quantum theory is notably abstract and deeply mathematical. What, then, makes it uniquely suited—among countless conceivable mathematical models—to describe our universe? In this talk, we explore this question through the lens of the Exclusivity (E) principle, which asserts that any set of pairwise exclusive events must also be jointly exclusive. As originally emphasized by Ernst Specker, this principle is not entailed by Kolmogorov’s axioms of classical probability theory. Recent results demonstrate that the E principle and its variants play a key role in explaining the constrained nature of contextuality and nonlocality exhibited by quantum theory in Hilbert space.

Life is full of complex, evolving systems – from financial markets to environmental systems. Using tools from physics, mathematics, statistics, and AI, scientists can unravel patterns hidden within large datasets. This talk offers a glimpse into how we decode real-world complexity using data science.

The introduction of higher-dimensional operators to the Standard Model Lagrangian can violate the unitarity of 2 → 2 scattering processes, depending on the values of the Wilson coefficients of these operators. Bounds on the coefficients may be obtained by demanding that there be no such unitarity violation below the scale of the effective theory. For scalar extensions of the SM, a detailed study of the scalar potential of models with extra scalars is important to rule out certain portions of the parameter space and hence to pin down the search strategies in experiments. The potential of the SM extended by a real singlet scalar can be Z2-asymmetric or Z2-symmetric, the latter leading to a dark matter candidate. A charged scalar particle is also one of the appealing exotic candidates in particle phenomenology and has long been searched for. The Georgi-Machacek (GM) model consists of two scalar triplets, and a large triplet vev induces a large mixing between the GM and SM sectors, leading to interesting search processes for the non-standard scalars. In particular, the charged scalar of the GM model couples to the SM fermions with a strength directly proportional to the triplet vev (v_t), and a significantly large v_t enhances the prospects of detecting such a charged scalar at colliders. We probed the parameter space for the singly charged Higgs decaying into tb, W±h, τν, and cs, and for the heavy neutral Higgs decaying into τ+τ−.

Target search processes arise ubiquitously across all disciplines of science: the search for an extremum in a potential landscape, a protein's search for its DNA binding site, and animals searching for food or home, to name a few. A natural question that arises in this context is: what would be an ideal search strategy so that the process is completed in a shorter time? Lately, the strategy of 'stochastic resetting' has garnered considerable attention as a useful way to expedite a stochastic search process. Here, one intermittently terminates the search and restarts it from where it originally started. Despite being counterintuitive at first glance, stochastic resetting indeed aids in facilitating a search process. Notably, most studies to date assume resetting to be an instantaneous event, such that the searcher returns to the starting position in zero time. Although such assumptions are easier to work with in theory, they become a major hindrance for experimental validation, since no physical event can be instantaneous. In our work, we consider resetting to be a non-instantaneous event and, furthermore, allow the searcher to find the target during the return process. After developing a unified renewal formalism to treat a general stochastic search process with non-instantaneous returns, we show that this is not only significant for practical implementations but also serves as a superior strategy for expediting a search process compared to the classical instantaneous return.
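
As a toy illustration only (not the speaker's renewal formalism; all names and parameter values below are arbitrary choices), the following minimal Monte Carlo sketch compares a free one-dimensional diffusive search with one that is stochastically reset and returned to the origin at a finite speed, with the target still detectable during the return:

import numpy as np

rng = np.random.default_rng(1)

def search_time(D=1.0, L=3.0, r=0.0, v=2.0, dt=1e-3, t_max=500.0):
    """First-passage time to a target at x = L for a searcher starting at x = 0.
    r is the resetting rate; after a reset the searcher returns ballistically
    to the origin at speed v and can still hit the target on the way back."""
    x, t, returning = 0.0, 0.0, False
    while t < t_max:
        if returning:
            x -= np.sign(x) * v * dt                 # ballistic return toward the origin
            if abs(x) <= v * dt:
                x, returning = 0.0, False            # return completed, resume diffusion
        else:
            x += np.sqrt(2.0 * D * dt) * rng.standard_normal()   # free diffusion step
            if r > 0.0 and rng.random() < r * dt:                # Poissonian resetting event
                returning = True
        t += dt
        if x >= L:                                   # target found (also possible mid-return)
            return t
    return np.nan                                    # search truncated without success

for rate in (0.0, 0.5):   # no resetting vs. resetting at rate r = 0.5 (arbitrary)
    times = [search_time(r=rate) for _ in range(100)]
    print(f"resetting rate {rate}: mean completion time ~ {np.nanmean(times):.1f}")

Resetting should yield a markedly shorter mean completion time here, illustrating why restarting the search, even with a finite-time return, can expedite it.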

The curvaton scenario provides an interesting mechanism for producing the scalar perturbations observed in the Cosmic Microwave Background (CMB) radiation. But the presence of this additional field can also explain other cosmological phenomena. In this talk I will present a couple of scenarios in which the curvaton can produce primordial black holes (PBHs) and can be responsible for baryogenesis. Along with this, I will present a novel mechanism that provides an efficient computational scheme for curvaton scenarios.

We demonstrate how to incorporate a catalyst to enhance the performance of a heat engine. Specifically, we analyze the efficiency of one of the simplest engine models, which operates in only two strokes and comprises a pair of two-level systems, potentially assisted by a d-dimensional catalyst. When no catalyst is present, the efficiency of the machine is given by the Otto efficiency. Introducing the catalyst allows for constructing a protocol which overcomes this bound, while the new efficiency can be expressed in a simple form as a generalization of Otto's formula: 1 − ω_c/(dω_h). The catalyst also provides a bigger operational range of parameters in which the machine works as an engine. Although an increase in engine efficiency is mostly accompanied by a decrease in work production (approaching zero as the system approaches Carnot efficiency), it can lead to a more favorable trade-off between work and efficiency. The provided example introduces new possibilities for enhancing the performance of thermal machines through finite-dimensional ancillary systems.
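
Written out explicitly, and assuming (as is standard for such two-stroke models, though the talk's conventions may differ) that ω_c and ω_h denote the level splittings of the cold and hot two-level systems, the two efficiencies quoted above read

\[
  \eta_{\mathrm{Otto}} \;=\; 1-\frac{\omega_c}{\omega_h},
  \qquad
  \eta_{\mathrm{cat}} \;=\; 1-\frac{\omega_c}{d\,\omega_h},
\]

so for fixed frequencies the catalytic expression exceeds the Otto value for any d > 1, consistent with the trade-off against work output noted in the abstract.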

This talk introduces the nano-fabrication of bio-chemical-ligand-modified, 2D-material-based, wafer-level and inkjet-printed bio-electronic solid-state detectors with spatiotemporal control, enabling innovations in machine-intelligence-controlled cancer biopsy spectrometers, chiral/helical quantum technology, environmental toxin monitoring, and brain-machine-interface-based neurological device research [1-3]. Inspired by nature's intricate designs, our hierarchical stacked geometrical configuration (HSGC) facilitates real-time volatile organic compound (VOC) cancer biomarker spectrograms and chiral molecule recognition. It enables machine-learning-enabled liquid cancer biopsy for predicting breast cancer tumors and cancer organoid mutation status using a breakthrough time-space resolved Cancer Spectrometer (TITAN) combined with multi-omics fusion and advanced generative AI, eliminating complex biochemical procedures. Furthermore, spin-sensitive detectors constructed from chiral and DNA-like helical nano-hybrids of 2D materials offer exciting possibilities for identifying chiral molecules. This advancement could pave the way for a new era of organic chiral and helical quantum devices. Our technology enhances environmental hazard surveillance using ultrafast field-effect transistors (FETs) with graphene/black phosphorus 2D FET channels, detecting heavy metals (lead, mercury, arsenic), toxic ions (phosphates), and microorganisms (E. coli, Ebola virus) in aquatic samples. These devices also demonstrate applications in flexible feedback devices for soft-matter robotics and in brain-machine interfaces (BMIs) for neurological conditions such as Parkinson's disease, paralysis, and loss of speech, restoring speech and neural ability. Our technology ensures sustainability through transient biodegradable electronics, minimizing device variation, promoting scalability, and advancing technology readiness levels (TRL). This presentation explores the transformative potential of 2D-material-based nano-electronic devices for advancing our world through various applications, such as nano-electronics, novel room-temperature quantum technology, organic spin devices, and medical and environmental monitoring devices.

References: [1] Maity, A., … and Haick, H. Ultra-Fast Portable and Wearable Sensing Design for Continuous and Wide-Spectrum Molecular Analysis and Diagnostics. Advanced Science 2022; 9(34), e2203693. [2] Maity, A., … and Haick, H. Spin-Controlled Helical Quantum Sieve Chiral Spectrometer. Advanced Materials 2023, e2209125. doi: 10.1002/adma.202209125. [3] Maity, A., … and Haick, H. Gate-Controlled Chiral Recognition and Spin Assessment with All-Electric Hybrid Quantum Wire-Based Transistors. Small, 2022 Dec 9; e2205038. [4] Maity, A., … and Chen, J. Scalable graphene sensor array for real-time toxins monitoring in flowing water. Nature Communications 14, 4184 (2023). doi: 10.1038/s41467-023-39701-0.

 

BIO: Dr. Arnab Maity received his Ph.D. (2015) in materials science from IIT Kharagpur, India, and carried out postdoctoral work at the University of Wisconsin-Milwaukee, USA (2015-19) and at the Technion - Israel Institute of Technology, Israel (2019-current). He holds an M.Tech. in Advanced Materials Science & Technology (2010), an M.Sc. in physics (2008), and a B.Sc. in physics (2006). His research relates to sustainable materials, advanced 1D-2D materials, chiral/helical (DNA-like) materials, ceramics, and sensor design for gases/VOCs, heavy metals, cancer cells, organoids, bacteria, and protein identification.

His current research interests are bio-compatible and sustainable nano-electronic sensor device fabrication to detect heavy metals, chemicals, pesticides, microplastics, radioactive hazards, bacteria and viruses, toxic gases, and human and animal proteins in water and air; advanced cancer research and other chronic diseases (biomedical devices); chiral quantum devices and DNA-based biological quantum devices; magnetic ceramic oxides for sensing and heavy-metal separation from toxic water; flexible biodegradable devices for brain-machine interfaces to treat various neurological diseases; electronic hardware and software design for device-level instrumentation with product design; AI and Large Language Model (LLM), robotics, and cybernetics materials applications; and smart dust and swarm computation.

Ergodicity, or chaos, is ubiquitous in nature, with very few interesting exceptions in which a system exhibits periodic motion. One of the major goals of physics is to find such harmonies in the middle of utter chaos. In this seminar, I will talk about quantum many-body scars (QMBS), which recently caused periodic revivals and an absence of thermalization in a groundbreaking experiment. Since then, it has been one of the most active fields of research in non-equilibrium quantum matter. Ironically, the exact mechanism of scarring in the original Rydberg quantum simulator experiment has remained unclear. Is a kinetic constraint (implemented via the Rydberg blockade mechanism) sufficient to generate QMBS, as initially thought? In this talk, we will find an answer to this question. I will also discuss a richer class of non-ergodic phenomena induced by Floquet QMBS.

In spite of the reasonable agreement of the ΛCDM model with various data sets, pressing historical issues like the cosmological constant problem and more recent issues like the H0 tension and the σ8 tension make it relevant to investigate the following question: can a modified-gravity-based late-time model exactly mimic a ΛCDM-like cosmological evolution? Indeed, this question is frequently addressed in the context of various modified gravity models, mostly via the reconstruction method. In my talk, I will revisit ΛCDM-mimicking f(R)-gravity models. An important question in this context is: even if the ΛCDM model and ΛCDM-mimicking f(R) models are cosmographically identical at the background level, can we obtain distinctive signatures at the perturbation level? The usual consensus would be that to answer this question we need to first reconstruct the form of the ΛCDM-mimicking f(R). This has been done previously, and the answer, unfortunately, is too complicated (hypergeometric functions) for any practical purpose. The main point of my talk is that such an explicit reconstruction of the ΛCDM-mimicking f(R) is not necessary; there is a way around it. Consequently, we have been able to find analytically what kind of distinctive signatures such models can give rise to at the level of density perturbations. The talk is based on arXiv:2103.02274 and arXiv:2408.03998.

Active matter systems are made of entities that self-propel by drawing energy from the environment. Fascinating spatio-temporal patterns are commonly exhibited by such systems, often associated with phase transitions of one type or another. Typical examples include the interesting clustering dynamics within a flock of birds in the sky or a school of fish in the sea. In this lecture I will discuss how simple mathematical models can reproduce, as well as help us understand, some interesting and complex pattern dynamics in these and other active matter systems. While I will primarily discuss this general picture, towards the end, time permitting, I will also present a set of new results that we have obtained for a model system.

We present a new neutrino transport code for binary neutron star merger simulations within the numerical relativity code AthenaK. We use finite-element and spectral approaches to handle the angular dependence, while the energy discretization is handled using a finite volume scheme. We employ an asymptotic-preserving discontinuous Galerkin (DG) method for the spatial discretization to ensure correct behavior in the diffusion-dominated regime. A semi-implicit time-stepping scheme is used to handle the stiff and non-stiff sources correctly. In the first part of the talk we describe the two approaches to angular discretization, the finite-element method in angle (FEMN) and filtered spherical harmonics (FPN), with an emphasis on positivity preservation for multi-energy schemes. We also describe a strategy to obtain the two-moment method (M1) from the formulated equations. We then compare the efficacy of the three approaches using various toy problems in the presence of a moving medium and general relativity.

Rotating black holes are well known for amplifying perturbing bosonic fields within specific parameter ranges, a phenomenon commonly referred to as superradiance. Besides spacetime rotation, charge also plays a significant role in the amplification. In this presentation, we will explore the superradiant scattering states of a massive scalar field in the spacetime of a magnetically charged rotating black hole, arising from the interaction between a nonlinear electromagnetic field configuration and gravity. We will specifically examine how the magnetic charge of this spacetime influences the magnitude of amplification and the allowed frequency ranges for superradiance. Furthermore, the effective potential for a massive scalar field in the rotating spacetime also leads to superradiant bound states. Therefore, our next focus will be on the superradiant instability regime for the magnetically charged black hole due to the massive potential barrier. Along the way, we will compare the results for this magnetically charged black hole with the electrically charged Kerr-Newman black hole. Finally, we will end with conclusions and a future outlook.

Gravitational wave (GW) memory predicts permanent distortions in the constituents of a system exposed to the passage of a suitable GW. Although known for quite some time, this effect has still not been verified in current GW observations, mainly due to ambient noise and restrictions on the motion of the experimental apparatus. Here, we consider entanglement between two-level quantum probes in a GW burst background to understand whether GW memory can influence their dynamics. In this regard, first, we investigate the typical entanglement harvesting condition and find that the measure of entanglement has an infrared divergence in terms of the detector energy gap when the GW burst carries memory. We point out the resemblance of this finding to the leading term in Weinberg's soft-graviton theorem. Second, we investigate the radiative process of entangled probes in a GW burst background. We observe that for GW bursts with and without memory, the collective transition profiles of the entangled probes are characteristically different. We discuss the implications of our findings.

Recent advancements have unveiled an intriguing connection between asymptotic symmetries, soft theorems, and memory effects in gauge and gravity theories, culminating in a promising variant of flat space holography known as celestial holography. This presentation will cover two significant areas of research within this framework. First, we explore perturbative corrections to flat-spacetime soft factors in the presence of a small negative cosmological constant. At the classical level, soft factors are derived from radiative profiles associated with gravitational and electromagnetic bremsstrahlung in the low-frequency limit. Our study investigates the scattering of a probe particle by a four-dimensional AdS black hole with a small negative cosmological constant, examining a double "soft limit" of the radiation to extract the "soft factor." Since the leading soft factor exhibits universality beyond tree level, this enables us to derive a correction to the Ward identity, in alignment with the equivalence between large gauge Ward identities and soft photon theorems in asymptotically flat spacetimes. Additionally, we recover this corrected large gauge Ward identity from the CFT Ward identity at the boundary in the large-AdS-radius limit. In the second part of my presentation, I will talk about the construction of a celestial CFT four-point correlator for a specific eikonal scattering on the horizon of an eternal Schwarzschild black hole, shedding light on the celestial holography perspective in black hole spacetimes.

The cosmic microwave background (CMB) radiation, popularly known as the weak afterglow of the big bang, provides precise measurements of cosmological parameters and helps us understand the physics of the early universe. With the completion of Planck marking the end of three generations of satellite missions, proposals for more sensitive future-generation CMB experiments are being undertaken in India and globally. In this talk, I will discuss new analysis techniques for CMB signals, using both traditional (non-machine-learning) and machine learning approaches, that have been developed by my group. These methods can be employed in the analysis of observations from upcoming CMB missions for robust and accurate extraction of cosmological information.

I will discuss various aspects of dark matter searches at collider experiments. The tell-tale signature of WIMP-like DM is large missing transverse momentum (MET) in the final state. I will briefly discuss the prospects of having such signals in two-Higgs-doublet models. In this regard, we have employed machine learning techniques, such as artificial neural networks (ANNs), to enhance the signal over the background. I will further discuss the prospects of detecting multi-component dark matter at future lepton colliders. Finally, I will also discuss a scenario, namely co-scattering dark matter, which deviates from the standard WIMP-like scenario, and its prospects for discovery in long-lived particle searches.

The Monopole and Exotics Detector at the LHC (MoEDAL) is an experiment dedicated to the search for magnetic monopoles, highly electrically charged objects (HECOs) and other Highly Ionizing Particle (HIP) messengers of new physics at the LHC. The baseline MoEDAL detector consists of stacks of Nuclear Track Detectors (NTDs) made up of CR39, Makrofol, and Lexan foils, as well as trapping detectors comprised of aluminum elements placed near the interaction point (IP8) of the LHCb experiment. The MoEDAL experiment collected 2.2 fb^-1 of p-p collision data at a center-of-mass energy of 8 TeV during LHC Run-1 and 6.46 fb^-1 of collision data at a center-of-mass energy of 13 TeV during LHC Run-2. No HIP candidates have been found to date. For HIP pair production via the Drell-Yan mechanism and photon fusion, and for monopoles with spin 0, 1/2 and 1, this search places constraints on the direct production of magnetic monopoles with up to ten Dirac magnetic charges for masses ranging from 1450 GeV/c^2 to 3.9 TeV/c^2. In addition, constraints were placed on the production of HECOs for charges ranging from 5e to 350e (e being the charge of the electron) for masses in the range of 80 GeV/c^2 to 3.4 TeV/c^2, depending on the spin of the particle. These are the best limits obtained on the production of HIPs at any accelerator facility to date.
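
For context (standard background, not part of the MoEDAL analysis itself), the Dirac magnetic charge referred to above follows from the Dirac quantization condition,

\[
  e\,g \;=\; \frac{n\,\hbar c}{2}, \qquad n \in \mathbb{Z},
\]

so the minimal magnetic charge is g_D = ħc/(2e) = e/(2α) ≈ 68.5 e, and the quoted search covers monopoles carrying up to 10 g_D.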

Non-Abelian anyons, a promising platform for fault-tolerant topological quantum computation, adhere to the charge super-selection rule (cSSR), which imposes restrictions on physically allowed states and operations. However, the ramifications of cSSR and fusion rules in anyonic quantum information theory remain largely unexplored. In this study, we unveil that the information-theoretic characteristics of anyons diverge fundamentally from those of non-anyonic systems such as qudits, bosons, and fermions and display intricate structures. In bipartite anyonic systems, pure states may have different marginal spectra, and mixed states may contain pure marginal states. More striking is that in a pure entangled state, parties may lack equal access to entanglement. This entanglement asymmetry is manifested in quantum teleportation employing an entangled anyonic state shared between Alice and Bob, where Alice can perfectly teleport unknown quantum information to Bob, but Bob lacks this capability. These traits challenge conventional understanding, necessitating new approaches to characterize quantum information and correlations in anyons. We expect that these distinctive features will also be present in non-Abelian lattice gauge field theories. Our findings significantly advance the understanding of the information-theoretic aspects of anyons and may lead to realizations of quantum communication and cryptographic protocols where one party holds sway over the other.

The fundamental physical properties of Dark Matter (DM), e.g. particle mass, spin, and couplings, still remain a mystery. If DM particles are spinless and ultra-light (m ~ 10^-22 eV), what are the observational constraints on properties like the mass and, in particular, the self-coupling? In this talk we attempt to answer this question by considering the following scenarios: (a) using observational upper limits on the amount of mass contained within some region around the galactic centre, one can impose constraints in the λ-m plane, where allowed self-couplings can be as small as λ ~ ±10^-96; (b) requiring that the observed galactic rotation curves of dwarf galaxies as well as an empirical soliton-halo relation be simultaneously satisfied allows one to probe self-couplings as small as λ ~ 𝒪(10^-90); and (c) the survival of dwarf satellite galaxies orbiting in the potential of larger halos on cosmological timescales can be used to probe both attractive and repulsive self-couplings as small as λ ~ ±10^-92. Towards the end of the talk, we shall also discuss how machine learning models like neural networks could be used to learn DM parameters (in particular the ULDM mass m) from galactic rotation curves.

In this seminar, I will present an overview of the research activities of my group, the CU Aerospace Nanoscale Transport Modeling (CUANTAM) Laboratory at the University of Colorado Boulder. In CUANTAM Laboratory, we combine concepts from solid state physics, materials chemistry and nano- to microscale device physics, and develop physics-aware AI approaches to model thermal and electronic transport properties of materials in technological applications. I will highlight our research on two topics.

(1) Thermal models of nanoelectronic devices: State-of-the-art nanometer (nm)-scale microelectronic transistors are heterogeneous structures that include nm-scale semiconductors, dielectric materials and metals within confining nanoscale dimensions. The confined geometry results in self-heating that accelerates defect generation, leading to the degradation of the transistors. Heat transport in nm-scale systems can be drastically different from that in their bulk counterparts. Atomistic modeling techniques have shown remarkable accuracy in predicting thermal conductivities of isolated nanoscale systems; however, the computational costs of first-principles approaches prohibit us from analyzing such complex systems. Heat transport in nanoscale systems that include multiple confining surfaces, interfaces, and materials with different degrees of crystallinity or disorder is far from understood. In this talk, I will discuss how atomistic modeling combined with machine learning methods allows us to overcome this challenge and develop thermal models of nanoscale field effect transistors. Our model reveals the mechanisms responsible for self-heating and informs transistor design with the desired thermal budget and power consumption.

(2) Image to Properties—Extracting Atomic Structure Information from Spectroscopy Images: AI-assisted approaches have remarkably accelerated materials design and discovery; however, they are largely based on the forward process: one assumes the atomic arrangement of a new material and uses various techniques to model the physical properties of the material. To design a material with target properties, an expensive trial-and-error loop is often followed until the target is achieved or novel discoveries are made. The rapid advancement of AI techniques and materials science research presents us with unique opportunities to disrupt this process and establish a new reverse paradigm for materials discovery. Large numbers of high-quality characterization images are now easily generated using spectroscopy techniques such as angle-resolved photoemission spectroscopy (ARPES). In parallel, AI models have made significant breakthroughs in image generation and processing. We develop a new AI-assisted modeling framework that reads spectroscopy images and predicts the properties of the underlying atomic structure of the materials. Our framework establishes an approach to expedite the design and discovery of complex materials with desired electronic band structures, going beyond combinatorial approaches.

 

Speaker Bio: 

Sanghamitra Neogi is an Assistant Professor at the Ann and H.J. Smead Department of Aerospace Engineering Sciences at the University of Colorado Boulder. Additionally, she is a Program Faculty at the Materials Science and Engineering Program at the University of Colorado Boulder. Prior to joining CU, she received her B.Sc. and M.Sc. in Physics from Jadavpur University, Kolkata, and Indian Institute of Technology, Kanpur, India, respectively. She received her Ph.D. in theoretical condensed matter physics from the Pennsylvania State University and was a postdoctoral research associate at the Max Planck Institute for Polymer Research, Mainz, Germany. Her research received mention in the Journal of Physics D: Applied Physics article “The 2022 applied physics by pioneering women: a roadmap.” She is an Associate Editor for the European Physical Journal B: Condensed Matter and Complex Systems.

I will start my talk with a brief overview of the standard reheating scenario. Then, I will discuss reheating through the evaporation of primordial black holes (PBHs), assuming PBHs are formed during the phase of reheating. Depending on their initial mass, their abundance, and the inflaton's coupling to radiation, I will discuss two physically distinct possibilities for reheating the universe. In one, the thermal bath is obtained solely from the decay of PBHs, while the inflaton remains the dominant energy component throughout the process. In the other, PBHs dominate the total energy budget of the universe during their evolution, and their subsequent evaporation leads to a radiation-dominated universe. Furthermore, I will discuss the impact of both monochromatic and extended PBH mass functions and estimate the detailed parameter ranges for which those distinct reheating histories are realized. The evaporation of PBHs is also responsible for the production of DM. I will show the allowed DM parameter space against the background of reheating driven by the two chief components of the early universe: the inflaton and the PBHs. Then, I will move on to stable PBHs and discuss the effects of the parameters describing the epoch of reheating on the abundance of PBHs and the fraction of cold dark matter that can be composed of PBHs. If PBHs are produced due to the enhancement of the primordial scalar power spectrum on small scales, such primordial spectra also inevitably lead to strong amplification of the scalar-induced secondary gravitational waves (GWs) at higher frequencies. I will show how the recent detection of the stochastic gravitational wave background (SGWB) by the pulsar timing arrays (PTAs) has opened up the possibility of directly probing the very early universe through scalar-induced secondary gravitational waves. Finally, I will conclude my talk by elaborating on the effect of quantum corrections to Hawking radiation for ultra-light PBHs and their observational signatures through dark matter and gravitational waves.

We show that in the usual type-I seesaw framework, augmented solely by a neutrino portal interaction, the dark matter relic density can be created through freeze-in, in a manner fully determined by the seesaw interactions and the DM particle mass. This simple freeze-in scenario, where dark matter is not in a seesaw state, proceeds through slow, seesaw-induced decays of the Higgs, W and Z bosons. We identify two scenarios, one of which predicts the existence of an observable neutrino line.

Neutrinos are massless in the Standard Model, but neutrino oscillation experiments have confirmed that at least two out of the three active neutrino species have mass. Cosmological data can be an important probe of neutrino properties like mass, energy density, and non-standard interactions. In this talk, I shall first discuss the bounds on the neutrino mass sum and the mass hierarchy from cosmological data, and how cosmological data cannot differentiate well between the normal and inverted hierarchies. Next, using Bayesian evidence and KL-divergence calculations, we shall see that there is no conclusive evidence for the normal neutrino mass hierarchy from the combined power of the latest neutrino oscillation, neutrinoless double beta decay, and cosmological data when we consider mass-hierarchy-agnostic priors. Finally, we shall look at constraints from cosmological data on possible neutrino non-standard self-interactions mediated by a heavy scalar, its role as a potential solution to the Hubble tension, and how this self-interaction model can help reconcile two inflationary models, Natural Inflation and Coleman-Weinberg Inflation, with cosmological data, even though these inflationary models are ruled out at more than 2σ in the ΛCDM model.

In recent years, it has been demonstrated that the propagation of acoustic-like perturbations within a flowing fluid can be described by an effectively curved spacetime, which in many senses resembles a black hole spacetime. Such a correspondence helps us study analogues of black hole horizon-related phenomena in terrestrial laboratory setups. We study the Bose-Einstein condensate as a quantum analogue system and demonstrate that the shock-wave-induced acoustic naked singularity, which would develop in a frictionless fluid for a non-dispersive shock wave, is prohibited from forming in such a system. The reason is the microscopic structure of the underlying ether and the resulting effective trans-Planckian dispersion. Approaching the instant of shock, rapid spatial oscillations of density and velocity develop around the shock location, emerging already slightly before the instant of shock due to the quantum pressure in the condensate. These oscillations render the acoustic spacetime structure completely regular and therefore lead to the removal (censoring) of the spacetime singularity. Thus, distinct from the cosmic censorship hypothesis that Penrose formulated within Einsteinian gravity, the quantum pressure in Bose-Einstein condensates censors (prohibits) the formation of a naked shock-wave singularity, instead of hiding it behind a horizon.
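
For orientation, and as standard background rather than anything specific to the speaker's analysis, the quantum pressure invoked above appears as the Bohm quantum-potential term in the hydrodynamic (Madelung) form of the Gross-Pitaevskii equation,

\[
  V_{\mathrm{Q}} \;=\; -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2}\sqrt{\rho}}{\sqrt{\rho}},
\]

which is negligible for a slowly varying condensate density ρ but dominates on short length scales, supplying the trans-Planckian dispersion that regularizes the would-be acoustic singularity.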

Dark Matter and Dark Energy are the two main ingredients of the universe, occupying nearly 96% of its total energy budget. Usually these two fluids are assumed to be conserved separately (in other words, they do not interact with each other), and the resulting picture is well described by the Λ-Cold Dark Matter (ΛCDM) model. However, recent observational evidence suggests that a revision of ΛCDM cosmology is needed, and, as a result, various cosmological models have been proposed. In this talk, I shall discuss a special class of cosmological theories which allow a non-gravitational interaction between dark matter and dark energy, known as 'interacting dark matter-dark energy' or 'interacting dark energy'. Interacting dark energy models have many appealing consequences. In particular, I shall discuss how they play a crucial role in alleviating the Hubble constant (H0) tension.

We show a relation, based on parallel repetition of the Magic Square game, that can be solved, with probability exponentially close to 1 (on worst-case inputs), by 1D (uniform) depth-2, geometrically local, noisy (noise below a threshold), fan-in-4 quantum circuits. We show that the same relation cannot be solved with better than exponentially small success probability (averaged over inputs drawn uniformly) by 1D (non-uniform) geometrically local, sub-linear-depth, sub-quadratic-size classical circuits consisting of fan-in-2 NAND gates. Quantum and classical circuits are allowed to use input-independent (geometrically non-local) resource states, that is, entanglement and randomness respectively. To the best of our knowledge, the previous best (analogous) depth separation for a task between quantum and classical circuits was constant vs. sub-logarithmic, although for general (geometrically non-local) circuits. Our hardness result for classical circuits is based on a direct product theorem about classical communication protocols from Jain and Kundu [JK22]. As an application, we propose a protocol that can potentially demonstrate verifiable quantum advantage in the NISQ era. We also provide generalizations of our result for higher-dimensional circuits as well as a wider class of Bell games.

Non-Lorentzian physics has recently emerged as an active field of research. We will consider two symmetries in this category: Galilean and Carrollian. Apart from the usual Galilean symmetries, Carrollian symmetries will be discussed. While Galilean symmetries are understood as a particular limit of Lorentz transformations, Carroll symmetries are understood as a particular case of Sengupta's transformations. Some applications will be considered.

This will be a pedagogical talk aimed primarily at doctoral/post-doctoral students.

The first direct detection of gravitational waves was made by the LIGO-Virgo collaborations in 2015. Such spacetime ripples, with wavelengths of the order of kilometres, are generated during the final few milliseconds of stellar-mass black hole or neutron star mergers, beyond which their strength falls below our present instrumental sensitivity. However, continuous gravitational wave emission had been predicted from supermassive black hole binaries (SMBHBs) in colliding galaxies, which revolve around each other for years before the ultimate merger. The superposition of such emissions from a large number of SMBHBs is expected to create a persistent stochastic gravitational wave background (SGWB) with wavelengths of the order of light years (in nano-Hz frequencies). Detection of such a background requires detectors with light-year arm lengths, which cannot be achieved by ground-based or even the advanced upcoming space-based gravitational wave detectors. Thankfully, nature has gifted us ultra-precise galactic clocks placed light years apart, named 'millisecond pulsars', as potential tools to detect these nanohertz gravitational waves. Recent results announced by the Indian Pulsar Timing Array (InPTA), the European Pulsar Timing Array (EPTA), the North American Nanohertz Observatory for Gravitational Waves (NANOGrav), the Parkes Pulsar Timing Array (PPTA) from Australia, and the Chinese Pulsar Timing Array (CPTA) reveal the first strong direct hints of such a cosmic gravitational wave background.

We investigate the propagation of spherically symmetric shocks in relativistic homologously expanding media with density distributions following a power-law profile in their Lorentz factor. We find that the shock behavior can be characterized by its proper velocity. While in general we do not expect the shock evolution to be self-similar, we find a critical value for which a self-similar solution with constant proper velocity exists. We then use numerical simulations to investigate the behavior of general shocks. We identify the regime of monotonically growing shocks and that of decaying shocks which eventually die out. Finally, we present an analytic approximation, based on our numerical results, for the evolution of general shocks in the ultra-relativistic regime.

Abstract: Inverse problems are encountered when required information about a physically unreachable domain needs to be obtained from data collected in an accessible domain. Due to their frequent occurrence in heat transfer and many other fields, a family of methods has been developed over decades to tackle such problems. Recently, physics-constrained neural networks have shown great promise in providing fast, elegant solutions for inverse problems. In this work, a physics-informed neural network was developed to solve several unsteady inverse heat transfer problems. Using a physics-constrained deep neural network model, we were able to predict the temperature profiles across the whole domain and estimate the unknown thermophysical parameters, such as the material's thermal diffusivity, and the boundary conditions at the inaccessible side (i.e., heat flux) with high accuracy. Furthermore, the method was extended to estimate the time-dependent heat flux on the inaccessible side. The predicted temperatures and estimated parameters obtained from our inverse technique are in good agreement with their corresponding exact or true values. However, the physics-informed neural network was not able to predict the heat flux accurately if the boundary temperatures were not used for training the artificial neural network. To overcome this challenge, we have developed a hybrid method coupling the artificial neural network technique with a finite volume method. This hybrid method is capable of predicting the unknown heat flux to within 1% of its true value. We also used the hybrid method for a thermal ablation problem with a moving boundary and obtained highly accurate heat flux at the inaccessible side.
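
As a minimal sketch only (not the speaker's implementation; the network, synthetic data, and parameter names below are illustrative), the core idea of a physics-informed inverse solver is to fit a neural network to sparse temperature data while penalizing the residual of the heat equation, with the unknown thermal diffusivity treated as an extra trainable parameter:

import torch
import torch.nn as nn

torch.manual_seed(0)

class TemperatureNet(nn.Module):
    """Small fully connected network approximating the temperature field u(x, t)."""
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=1))

model = TemperatureNet()
log_alpha = nn.Parameter(torch.tensor(0.0))   # unknown diffusivity, learned jointly (log-parameterised)
optimizer = torch.optim.Adam(list(model.parameters()) + [log_alpha], lr=1e-3)

# Synthetic "measurements" from the exact solution u = exp(-pi^2 * alpha * t) * sin(pi * x)
alpha_true = 0.1
x_data, t_data = torch.rand(200, 1), torch.rand(200, 1)
u_data = torch.exp(-torch.pi**2 * alpha_true * t_data) * torch.sin(torch.pi * x_data)

for step in range(3000):
    optimizer.zero_grad()
    # (i) data misfit at the measurement points
    loss_data = ((model(x_data, t_data) - u_data) ** 2).mean()
    # (ii) residual of u_t = alpha * u_xx at random collocation points (the "physics" constraint)
    x_c = torch.rand(200, 1, requires_grad=True)
    t_c = torch.rand(200, 1, requires_grad=True)
    u = model(x_c, t_c)
    u_t = torch.autograd.grad(u, t_c, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x_c, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x_c, torch.ones_like(u_x), create_graph=True)[0]
    loss_pde = ((u_t - torch.exp(log_alpha) * u_xx) ** 2).mean()
    (loss_data + loss_pde).backward()
    optimizer.step()

print("recovered thermal diffusivity:", torch.exp(log_alpha).item())  # expected to drift toward ~0.1

In the same spirit, an unknown boundary heat flux can be exposed as another trainable quantity (for example, a second small network of time t) and recovered from interior temperature data.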


Brief Bio: Prof. Prashanta Dutta is a Professor of Mechanical Engineering and the Director of the NSF NRT-LEAD program at Washington State University (WSU). He received his MS (1997) and Ph.D. (2001) degrees from the University of South Carolina and Texas A&M University, respectively. He joined the School of Mechanical and Materials Engineering of WSU in 2001. During his sabbatical years, he worked as a Visiting Professor at Konkuk University, Seoul, South Korea and the Technical University of Darmstadt, Germany. His primary research area is Micro, Nano, and Biofluidics with a specific focus on the development of new algorithms for multiscale and multiphysics problems. Lately, he developed a suite of physics-based machine learning models for heat and mass transfer problems.  He has published more than 200 peer-reviewed journal and conference articles. Prof. Dutta organized and chaired numerous sessions, fora, symposia, and tracks for several ASME (American Society of Mechanical Engineers) and APS (American Physical Society) conferences and served as the Chair of the ASME Micro/Nano Fluid Dynamics Technical Committee. Moreover, he served as an Associate Editor for the ASME Journal of Fluids Engineering; currently, he is an Editor for Electrophoresis. Prof. Dutta is an elected Fellow of ASME and a recipient of the prestigious Fulbright Professorship sponsored by the US Department of State.


Strongly gravitationally lensed systems with variable sources, such as supernovae (SNe) and quasars (QSOs), can be the next frontier in cosmic probes. By measuring the time delays between the images, one can obtain crucial constraints on cosmological parameters, such as the value of the Hubble constant and the evolution of dark energy, independently of other probes. Lensed SNe and QSOs each have their own advantages: for example, lensed SNe are extremely rare compared to lensed QSOs, but the former have much better understood light curves with time scales of only a few months. The upcoming time-domain surveys such as LSST and Roman will observe many lensed systems with both kinds of sources. However, many will have spatially unresolved images due to the limited angular resolution of the wide-field surveys. In such cases, the observed light curves would be a superposition of the time-delayed image fluxes. We investigate whether the unresolved sources can be recognized as lensed given only the light-curve information and whether time delays can be extracted robustly.
In this talk, I will discuss a few such techniques that can identify unresolved lensed systems of both source kinds (SNe and QSOs). Most importantly, these techniques are generic and hence do not assume any particular property of the source sub-classes, such as the type of SNe or the flux variability of QSOs. These techniques can be very useful in detecting lensed systems in wide-field surveys and in measuring the time delays simultaneously, improving our understanding of the cosmos.
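For concreteness, with notation assumed here rather than taken from the talk, the observed blended light curve of an unresolved lensed source with N images can be written as

\[
F_{\rm obs}(t) \;=\; \sum_{i=1}^{N} \mu_i \, S\!\left(t - \Delta t_i\right),
\]

where S(t) is the intrinsic source light curve and \(\mu_i\), \(\Delta t_i\) are the magnification and time delay of the i-th image; identifying an unresolved lensed system and measuring its time delays amounts to inferring the \(\Delta t_i\) from \(F_{\rm obs}(t)\) alone.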

In this (ongoing) work, we try to identify the main feature of a many-body quantum system, when used as a battery, that governs faster charging. Contrary to the common belief that the higher the many-body entanglement in the time-evolved state of the battery, the faster the charging process, we argue here, substantiated with examples from 1D quantum spin chains, that it is the `circuit complexity’ of the many-body charging Hamiltonian that is the fundamental ingredient for faster charging.

Time: 10:30 – 11:15

A journey in the zoo of Turing patterns

Timoteo Carletti
University of Namur, Belgium

Time: 11:45 – 12:15

Consensus Formation Among Mobile Agents in Networks of Heterogeneous Interaction Venues

Sayantan Nag Chowdhury
University of California, USA

Time: 12:15 – 13:00

Transient chaos causes high vulnerability of networked systems

Ulrike Feudel
Carl von Ossietzky University Oldenburg, Germany

Recent advances in mechanical sensing technologies have led to the suggestion that heavy dark matter candidates around the Planck mass range could be detected through their gravitational interaction alone. With this ultimate goal on the horizon, the Windchime collaboration is developing the necessary techniques, systems, and experimental apparatus using arrays of optomechanical sensors. These can also be used to investigate non-gravitational signals from other dark matter candidates in the near term. However, to achieve Planck-scale detection, measurements of these devices will need to go beyond the standard quantum limit. Hence we need to employ quantum-enhanced readout techniques for detecting the extremely weak impulses due to the gravitational interaction of dark matter. Here we discuss different techniques for achieving such quantum-enhanced measurements in the optical and microwave domains, which would help reduce the measurement-added noise floor in experimentally relevant parameter regimes in order to reach our desired sensitivity.

Despite the stringent constraints on the primordial power spectra from the Cosmic Microwave Background (CMB) on large cosmological scales, the dynamics at small scales during the primordial epoch of inflation remain largely unconstrained. During inflation, scalar and tensor fluctuations can grow at small scales via various interesting mechanisms (e.g. ultra slow-roll), leading to peak profiles (with features) in the respective power spectra. In particular, large scalar fluctuations can result in the formation of abundant primordial black holes (PBHs) after inflation and can induce large tensor fluctuations, which are realised as a stochastic background of induced gravitational waves (GWs). With the tremendous sensitivities proposed for upcoming GW surveys and several bounds on the abundance of PBHs over a huge range of masses, it is exciting to analyse relevant inflation models for their predictions for PBHs and induced GWs. In light of recent advances in this field of research, in this talk I will discuss a specific model of interest for PBHs and induced GWs and possible one-loop corrections to the scalar power spectrum. Moreover, I will also discuss the formation and evolution of PBHs and GWs in non-standard post-inflationary epochs, which is particularly interesting given the prospect of highly sensitive upcoming GW observations.

We show that a slowly varying Newton’s constant, consistent with all observations, may partly or wholly mitigate the need for dark matter. It can also explain why MOND seems to work only at galactic scales. When extrapolated to short distances, the model predicts a new form of `asymptotic freedom’ in quantum gravity. We discuss the possible origin and implications of this variation.

Centres of active galaxies (active galactic nuclei or AGNs) and X-ray binaries (microquasars) are very bright. The radiative output of AGNs is ten to fourteen orders of magnitude higher than that of the Sun, while that of microquasars is four to six orders of magnitude higher. The only viable explanation of this phenomenon is the conversion of a fraction of the released gravitational energy into electromagnetic radiation. Accretion is the process of accumulation of matter onto a certain point or region; in astrophysics, it means matter falling onto the surface of a gravitating centre (stars, neutron stars, white dwarfs) or into a black hole. While the radiation requires accretion of matter, such AGNs and microquasars also show collimated, relativistic outflows (jets) as well as uncollimated outflows or winds.

In this overview we list the open problems in the theoretical understanding of the observed phenomena, even in the era of GRMHD simulations.
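As a rough quantitative anchor (a standard textbook estimate, not a result specific to this talk), the radiative output of an accreting compact object is

\[
L_{\rm acc} \simeq \eta\, \dot{M} c^{2}, \qquad \eta \sim \frac{GM}{R c^{2}},
\]

where \(\dot{M}\) is the mass accretion rate and \(\eta\) is the radiative efficiency, of order \(10^{-4}\) for white dwarfs and of order 0.1 for neutron stars and black holes; this greatly exceeds the roughly 0.7% efficiency of hydrogen fusion, which is why accretion is the only viable power source for such luminosities.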


Time: 11:00 am

Tipping in an ecological system under external forcing

Syamal K. Dana
Jadavpur University, India

Tipping, or critical transitions, in climate, ocean circulation, the Greenland ice cap, the Indian monsoon, and ecological systems (where it is known as regime shift) occurs due to fast changes in system parameters under the influence of external conditions. In recent times, studies of tipping using model systems that represent various natural phenomena have attracted major attention from researchers across disciplines. These studies mainly focus on the neighbourhood of saddle-node bifurcation points, which capture most of the dynamical features of the above systems. We use an ecological model to explore tipping against a time-varying carrying capacity of the system. If the carrying capacity is varied at a linear rate, the system does not show the sharp transition expected immediately at the bifurcation point but tips to the alternate state after an elapse of time. Additionally, we consider the impact of environmental shocks, modeled by a triangular impulse. Delayed tipping also occurs under such an external shock, but it shows a dependence on the falling and rising rates of the impulse. The active time window of the external impulse on the carrying capacity, called the exceedance time, plays a decisive role in the occurrence of tipping. Furthermore, we apply a second impulse in case the first impulse fails to induce any tipping. The roles of the rate parameters, the strength of the impulses and, most importantly, the time interval between the impulses are considered in detail to delineate the tipping zones in parameter space.
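As a minimal numerical illustration of delayed tipping under a ramped parameter, the sketch below uses the classic May grazing model with a linearly varied carrying capacity as a stand-in for the specific ecological model discussed in the talk; the choice of model and all parameter values are illustrative assumptions only.

```python
# Delayed tipping sketch: May's grazing model dx/dt = x(1 - x/K(t)) - beta*x^2/(1 + x^2)
# with the carrying capacity K ramped linearly through the fold bifurcation.
import numpy as np

beta = 2.0                                # grazing strength (assumed)
K0, K1, ramp_time = 10.0, 4.0, 200.0      # K ramped from K0 down to K1 over ramp_time

def K(t):
    return K0 + (K1 - K0) * min(t / ramp_time, 1.0)

def f(x, t):
    return x * (1.0 - x / K(t)) - beta * x**2 / (1.0 + x**2)

dt, T = 0.01, 400.0
x, trajectory = 7.0, []                   # start on the upper (vegetated) branch
for t in np.arange(0.0, T, dt):
    x += dt * f(x, t)                     # simple Euler step, adequate for a sketch
    trajectory.append(x)

# The trajectory keeps tracking the upper branch for a while after the fold in K(t)
# has been crossed and only later collapses to the low-density state.
tip_index = next(i for i, v in enumerate(trajectory) if v < 2.0)
print(f"tipping occurs at t = {tip_index * dt:.1f}, final state: {trajectory[-1]:.2f}")
```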

Time: 11:30 am

Mathematical modelling of spiking neural networks accompanied by astrocytes

Susanna Yu. Gordleeva
Lobachevsky State University of Nizhny Novgorod, Russia

Spiking neural networks (SNNs), being replicas of biological ones, are expected to have a higher computational potential than traditional artificial neural networks (ANNs). The critical problem lies in the design of robust learning algorithms aimed at building a “living computer” based on SNNs. We show how an SNN implements associative learning by exploiting the spatial properties of spike-timing-dependent plasticity (STDP). Data accumulated over the past decade show that long-neglected glial cells, especially astrocytes, are intricately involved in the activity of neural networks. It has been shown that astrocytes exhibit a high degree of heterogeneity in gene expression profiles, morphology, responsiveness to synaptic inputs, and their subsequent Ca2+ activity responses. This huge heterogeneity is observed on different levels, e.g. in different brain regions, cortical layers, and different neural circuits. In addition, it has become clear that diffuse extracellular signaling is also very critical for brain functions. Such signals directly affect information transfer and storage in neuronal networks. Astroglial cells have been highlighted as critical players in the activity modulation of neural networks and the generation of physiological signals through local fluctuations and the nonlinear diffusion of intracellular Ca2+ waves.
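For readers unfamiliar with STDP, the sketch below implements the standard pair-based rule with exponential potentiation and depression windows; the amplitudes and time constants are generic textbook values and are not taken from the speaker's model.

```python
# Pair-based STDP sketch: potentiation when the presynaptic spike precedes the
# postsynaptic one, depression otherwise, with exponential windows (generic values).
import numpy as np

A_plus, A_minus = 0.010, 0.012     # learning amplitudes (assumed)
tau_plus, tau_minus = 20.0, 20.0   # STDP time constants in ms (assumed)

def stdp_dw(delta_t):
    """Weight change for one spike pair; delta_t = t_post - t_pre in ms."""
    if delta_t >= 0:                               # pre before post -> potentiation
        return A_plus * np.exp(-delta_t / tau_plus)
    return -A_minus * np.exp(delta_t / tau_minus)  # post before pre -> depression

# Example: evolution of one synaptic weight over a sequence of spike-pair timings.
w = 0.5
for delta_t in [5.0, 12.0, -8.0, 3.0, -25.0]:
    w = float(np.clip(w + stdp_dw(delta_t), 0.0, 1.0))
print(f"final synaptic weight: {w:.4f}")
```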

Time: 12:30 pm

Synchronization in higher-order networks

Md Sayeed Anwar
Indian Statistical Institute, India

The stability analysis of synchronization in time-varying higher-order networked structures (simplicial complexes) is a challenging problem due to the presence of time-varying group interactions. In this context, most previous studies have been done either on temporal pairwise networks or on static simplicial complexes. Here, we discuss a general framework to study the synchronization phenomenon in temporal simplicial complexes. We show that the synchronous state exists as an invariant solution and obtain the necessary condition for it to emerge as a stable state in the fast-switching regime. We prove that the time-averaged simplicial complex plays the role of a synchronization indicator whenever the switching among simplicial topologies is adequately fast. We attempt to transform the stability problem into a master stability function form. Unfortunately, in general circumstances, the dimension reduction of the master stability equation is cumbersome due to the presence of group interactions. However, we overcome this difficulty in two interesting situations based on either the functional forms of the coupling schemes or the connectivity structure of the simplicial complex, and we demonstrate that the necessary condition mimics the form of a master stability function in these cases. We verify our analytical findings by applying them to synthetic and real-world networked systems. We find that the presence of temporality along with the multiway interactions improves synchronization compared to static higher-order or temporal pairwise systems. In addition, our results also reveal that with sufficient higher-order coupling and adequately fast rewiring, the temporal simplicial complex achieves synchrony even in a very low connectivity regime.
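A minimal numerical illustration of the kind of higher-order system studied here is the Kuramoto model with triadic (2-simplex) interactions on an all-to-all, static simplicial complex; the coupling strengths and frequency distribution below are arbitrary assumptions, and the time-varying topology of the actual work is not modelled.

```python
# Kuramoto oscillators with pairwise (K1) and triadic (K2) couplings on an
# all-to-all simplicial complex; R is the usual synchronization order parameter.
import numpy as np

rng = np.random.default_rng(0)
N, K1, K2 = 100, 1.0, 2.0                 # system size and couplings (assumed)
omega = rng.normal(0.0, 0.5, N)           # natural frequencies
theta = rng.uniform(0.0, 2.0 * np.pi, N)

def dtheta(theta):
    z1 = np.exp(1j * theta).mean()        # first order parameter
    z2 = np.exp(2j * theta).mean()        # second harmonic, enters the triadic term
    pair = K1 * np.imag(z1 * np.exp(-1j * theta))
    # mean-field form of (1/N^2) * sum_{j,k} sin(2*theta_j - theta_k - theta_i)
    triad = K2 * np.imag(z2 * np.conj(z1) * np.exp(-1j * theta))
    return omega + pair + triad

dt = 0.01
for _ in range(20000):
    theta = theta + dt * dtheta(theta)    # Euler integration, adequate for a sketch

R = abs(np.exp(1j * theta).mean())
print(f"order parameter R = {R:.3f}")
```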

Time: 3:00 pm

Diagnostics and study of mental disorders based on the analysis of fMRI-derived functional brain networks

Semen A. Kurkin
Immanuel Kant Baltic Federal University, Russia

In this talk, I will present the results of our studies of the fMRI-derived functional brain networks of patients with major depressive disorder (MDD), which aimed to identify characteristic abnormalities in their functional networks and to develop effective classifiers for diagnosing MDD. I will focus on the following points: network-level statistical analysis, analysis of the standard network measures and topology features, development of simple ML classifiers and their interpretability, application of graph neural networks, and the investigation of high-order interactions in the functional brain networks.

Time: 3:30 pm

Reservoir computing approach for analysis and prediction of complex network dynamics
Andrey V. Andreev
Immanuel Kant Baltic Federal University, Russia

Prediction of a system’s behavior is an essential task in complex network theory. Machine learning offers supervised algorithms, e.g., recurrent neural networks and reservoir computers, that predict the behavior of model systems whose states consist of multidimensional time series. In real life, we often have limited information about the behavior of complex networks; the brightest example is the brain neural network described by the electroencephalogram. Prediction of the behavior of such systems is a more challenging task but offers potential for real-life application. In the current work, we train a reservoir computer to predict the macroscopic signal produced by an adaptive network of phase oscillators. Lyapunov analysis revealed the chaotic nature of the signal, and the reservoir computer initially failed to forecast it. Augmenting the feature space using Takens’ theorem improved the quality of forecasting. The reservoir computer (RC) achieved the best prediction score when the number of signals coincided with the embedding dimension estimated via the false nearest neighbors method. Another application of RC is to restore the missing signals of the network’s nodes from the neighbors’ dynamics. We show that the neural network solves this task better than approximation methods, and even one neighbor’s dynamics is enough to restore the missing signal with high accuracy.
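To make the reservoir-computing pipeline concrete, the sketch below implements a bare-bones echo state network trained by ridge regression for one-step-ahead prediction of a scalar toy signal; the reservoir size, spectral radius, regularization and the signal itself are illustrative assumptions and do not correspond to the network or data used in this work.

```python
# Bare-bones echo state network (reservoir computer) for one-step-ahead prediction.
import numpy as np

rng = np.random.default_rng(1)
N_res, rho, ridge = 300, 0.9, 1e-6      # reservoir size, spectral radius, regularization (assumed)

# Toy scalar signal to predict (stand-in for the macroscopic network signal).
t = np.arange(0, 200, 0.02)
u = np.sin(t) + 0.5 * np.sin(2.7 * t)

# Random input and reservoir weights, rescaled to the desired spectral radius.
W_in = rng.uniform(-0.5, 0.5, (N_res, 1))
W = rng.uniform(-0.5, 0.5, (N_res, N_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))

# Drive the reservoir and collect its states.
x = np.zeros(N_res)
states = []
for value in u[:-1]:
    x = np.tanh(W @ x + W_in[:, 0] * value)
    states.append(x.copy())
X = np.array(states)          # shape (T-1, N_res)
y = u[1:]                     # one-step-ahead targets

# Ridge-regression readout, discarding a washout period of 200 steps.
washout = 200
A = X[washout:]
W_out = np.linalg.solve(A.T @ A + ridge * np.eye(N_res), A.T @ y[washout:])
pred = X @ W_out
nrmse = np.sqrt(np.mean((pred[washout:] - y[washout:]) ** 2)) / np.std(y)
print(f"training NRMSE: {nrmse:.4f}")
```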

Time: 4:30 pm

Dynamics of swarmalators with higher-order interactions
Gourab Kumar Sar
Indian Statistical Institute, India

Higher-order interactions shape collective dynamics, but how they affect transitions between different states in swarmalator systems is yet to be determined. To that effect, we here study an analytically tractable swarmalator model that incorporates both pairwise and higher-order interactions, resulting in four distinct collective states: async, phase wave, mixed, and sync states. We show that even a minute fraction of higher-order interactions induces abrupt transitions from the async state to the phase wave and the sync state. We also show that higher-order interactions facilitate an abrupt transition from the phase wave to the sync state by bypassing the intermediate mixed state. Moreover, elevated levels of higher-order interactions can sustain the presence of the phase wave and sync state, even when pairwise interactions lean towards repulsion. The insights gained from these findings unveil self-organizing processes that hold the potential to explain sudden transitions between various collective states in numerous real-world systems.

The experimental works for which Nobel Prizes have been awarded this year involve a profound discovery by John S. Bell in 1964. This is famously known as Bell’s theorem and has been considered the ‘most profound discovery of Science’ by some physicists. Bell’s theorem shows that the quantum world is not compatible with the twin concepts of local realism propounded by Einstein in a work famously identified as the EPR paradox. Various experiments establish that nature is indeed incompatible with local realism. In this talk we shall discuss the issues related to the discovery of Bell’s theorem in the simplest possible way. In particular, no expertise in quantum mechanics will be needed to understand the essence of Bell’s theorem.

The quantum speed limit provides a fundamental bound on how fast a quantum system can evolve between the initial and final states under any physical operation. The celebrated Mandelstam-Tamm (MT) bound has been widely studied for various quantum systems undergoing unitary time evolution. Motivated not only by its fundamental importance but also by its immense potential in quantum metrology and practical quantum technology, we derive newer quantum speed limit bounds from time-energy uncertainty relations. Specifically, we first derive a tighter uncertainty relation for general mixed quantum states and then obtain from it a new quantum speed limit for general quantum states, which reduces to the pure-state bound derived from tighter uncertainty relations. We show that the MT bound is a special case of the tighter quantum speed limit derived here. We also show that this bound can be improved when optimized over many different sets of basis vectors. We illustrate the tighter speed limit for pure states with examples using random Hamiltonians and show that the new quantum speed limit outperforms the MT bound. Thereafter, we derive a quantum speed limit for mixed quantum states using the stronger uncertainty relation for mixed quantum states and unitary evolution. We also show that this bound can be optimized over different choices of operators to obtain a better bound. We illustrate this bound with some examples and show its better performance with respect to some important earlier and recent bounds. Our work will thus be useful in various areas of quantum metrology and quantum control.
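For reference (this is the standard result, not one of the new bounds derived in the talk), the Mandelstam-Tamm bound for unitary evolution under a Hamiltonian H reads

\[
\tau \;\geq\; \frac{\hbar\, \arccos\!\big(|\langle \psi_0 | \psi_\tau \rangle|\big)}{\Delta H},
\qquad \Delta H = \sqrt{\langle H^2 \rangle - \langle H \rangle^2},
\]

which reduces to \(\tau \geq \pi\hbar/(2\Delta H)\) for evolution to an orthogonal state; the bounds discussed in the talk tighten this by starting from stronger uncertainty relations.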

The most elementary empirical truth associated with any experiment involving light (electromagnetic radiation) propagation is the distinction between the source (region of cause) and the detector (region of effect), i.e. the “cause/effect” distinction, based on which one can speak of “distance between source and detector”, “propagation from source to detector” and, therefore, “action at a distance” and “velocity of propagation”. According to EPR’s completeness condition (ECC), the “cause/effect” distinction should be taken into account in a theory that is supposed to provide explanations for such an experiment, the simplest one being the Hertz experiment. Then, in principle, one can decide whether “cause before effect” or “cause after effect”, i.e. the logic of causality remains decidable. I show that, working with Maxwell’s equations and the “cause/effect” distinction to explain the Hertz experiment, Poynting’s theorem is unprovable. It is provable if and only if the “cause/effect” distinction is erased by choice through an act of free will, but then the logic of causality becomes undecidable. The current theoretical foundation behind the hypothesis of ‘light propagation’ thus comes into question, as theoretical optics is founded upon Maxwell’s equations and Poynting’s theorem. A revisit to the foundations of electrodynamics, with an emphasis on the interplay among logic, language and operation, seems necessary and motivated.

Despite having infinitely many pure preparations, quantum communication cannot overpower its classical counterpart when it comes to input-independent decoding at the receiver’s end. This result was first established by A. S. Holevo, considering mutual information as a quantifier of communication utility. Here, exploring a general quantification of the (input-independent) communication scenario, we will introduce a task which can be accomplished perfectly by communicating a two-level quantum system, whereas a perfect classical bit, even with infinite-dimensional shared randomness, is unable to do so. Further, considering the presence of noise in the communication line, we establish that a quantum channel cannot always be replaced by a classical channel of identical communication capacity.
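For context, Holevo's result referred to above bounds the information accessible from an ensemble \(\{p_x, \rho_x\}\) encoded in a quantum system:

\[
I(X{:}Y) \;\leq\; \chi \;=\; S\!\Big(\sum_x p_x \rho_x\Big) \;-\; \sum_x p_x\, S(\rho_x),
\]

so that a d-level quantum system can carry at most \(\log_2 d\) bits of classical information under input-independent decoding, no more than a classical d-level system.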

Devising reliable communication with a high rate in a network consisting of multiple transmitters and receivers is a problem of importance in communication theory. Interestingly, resources like nonlocal quantum correlations have been shown to be useful in enhancing the performance of some communication networks. In this talk, we present our results on entanglement-assisted communication over classical network channels. We consider multiple access channels, an essential building block for many complex networks, and develop a framework for n-senders and 1-receiver multiple access channels based on nonlocal games. We obtain generic results for computing correlation assisted sum-capacities of these channels. The considered channels introduce less noise on winning and more noise on losing the game, and the correlation assistance is classified as local (L), quantum (Q), or no-signaling (NS). Furthermore, we consider a broad class of multiple access channels such as depolarizing ones that admix a uniform noise with some probability and prove general results on their sum-capacities. Finally, we apply our analysis to three specific depolarizing multiple access channels based on Clauser-Horne-Shimony-Holt, magic square, and Mermin-GHZ nonlocal games. In all three cases we find enhancements in sum-capacities on using nonlocal correlations. We obtain either exact expressions for sum-capacities or suitable upper and lower bounds on them.

Reference: Jiyoung Yun, Ashutosh Rai, Joonwoo Bae, Non-Local and Quantum Advantages in Network Coding for Multiple Access Channels, arXiv:2304.10792 [quant-ph] (2023).
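As a small, self-contained illustration of the nonlocal-game ingredient of the abstract above (not of the channel construction itself), the script below enumerates all deterministic classical strategies of the CHSH game to recover the classical winning probability 3/4 and compares it with the quantum value attainable with entanglement.

```python
# Enumerate deterministic classical strategies for the CHSH game:
# the referee sends bits (x, y); players answer (a, b) and win iff a XOR b = x AND y.
import itertools
import numpy as np

best = 0.0
for a_strategy in itertools.product([0, 1], repeat=2):      # Alice's answer for x = 0, 1
    for b_strategy in itertools.product([0, 1], repeat=2):  # Bob's answer for y = 0, 1
        wins = sum((a_strategy[x] ^ b_strategy[y]) == (x & y)
                   for x in (0, 1) for y in (0, 1))
        best = max(best, wins / 4.0)

print(f"best classical winning probability: {best}")               # 0.75
print(f"quantum (Tsirelson) value: {np.cos(np.pi / 8) ** 2:.4f}")   # ~0.8536
```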

The LIGO-Virgo-KAGRA (LVK) Collaboration has detected two confident binary neutron star mergers up to the third observing run. One of them (GW170817) had an electromagnetic counterpart, which enabled a direct estimation of its redshift. However, most of the expected detections will not have an electromagnetic counterpart, and hence other methods need to be developed to estimate cosmological parameters. In this talk, we will discuss how to estimate redshift from the population distribution of the source-frame masses of binary neutron stars. In the first half of the talk, we will deal with a pedagogical set-up to infer both cosmological and population-level parameters simultaneously. We will also present a realistic forecast for current and future observations in LVK. Finally, we will end with a remark on the number of events needed to obtain a sub-percent measurement of the Hubble constant, in order to comment on the existence of the Hubble tension.
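The key relation underlying this method (a standard fact, stated here for orientation) is that gravitational-wave observations yield detector-frame masses and the luminosity distance, while the population model lives in the source frame:

\[
m^{\rm det} = (1+z)\, m^{\rm src}, \qquad d_L = d_L(z;\, H_0, \Omega_m, \dots),
\]

so a known feature in the source-frame mass distribution of neutron stars allows the redshift z, and hence the cosmological parameters, to be inferred statistically without an electromagnetic counterpart.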

Experimental implementations of linear-optics-based quantum information processing tasks are of paramount importance due to their ease of implementation, although they often suffer from limited reliability. We therefore need to understand the ultimate efficiency of any linear-optics-based quantum information processing task, and thereby look for minimally resourceful (as well as implementation-friendly) non-linear gadgets, or some other minimally resourceful gadgets (e.g., entanglement in other degrees of freedom), to achieve 100% (or nearly 100%) efficiency in the respective implementations. Here, in the present talk, we focus on the corresponding scenario in the context of the well-known issue of LOCC (local quantum operations and classical communication) discrimination of bipartite quantum states shared between distant labs. When we are not concerned with any specific physical implementation of the LOCC-based state discrimination task (i.e., when we look at the problem purely mathematically within the purview of quantum theory), we already have several important examples of sets of LOCC-indistinguishable states as well as sets of LOCC-distinguishable states, using one or more copies of the individual states. We will look at some such examples of sets of LOCC-distinguishable states from the perspective of their implementation via linear optics, and thereby try to figure out the limitations, if any. We will then briefly discuss how to overcome such limitations using extra resources.

The motivation of this talk will be to explain three well-accepted standard model (SM) problems, namely dark matter (DM), neutrino mass and the baryon asymmetry of the Universe (BAU), which require physics beyond the SM. In this context, I will discuss triplet fermions as a suitable dark matter candidate and the possible origin of neutrino mass and the BAU. First, I will discuss the present bounds on the triplet fermions coming from direct detection, indirect detection and collider searches of DM. The bounds show that the neutral part of the triplet fermion cannot account for the full DM abundance. As a remedy, I elaborate on possible ways to make it a viable DM candidate. The first approach adds a minimal set of particles, which allows either a WIMP- or FIMP-type DM candidate depending on the mass of the neutral component. An interesting finding of this study is the possibility of probing the FIMP-type DM at the proposed MATHUSLA detector. The second approach discusses another possibility of making it a viable DM candidate with the full DM abundance by introducing a non-standard cosmology, where we assume the presence of an extra species before BBN whose energy density dominates over radiation at early times. Within this non-standard cosmology, we will discuss the possible origin of the BAU and the change of the leptogenesis scale with respect to the usual standard case studied before. As usual, the triplet fermions which take part in leptogenesis generate neutrino mass via the Type-III seesaw mechanism.

The Big Bang theory of cosmology has been the most accurate explanation of cosmic history to date, but it is plagued with some subtle issues which await resolution. I am interested in unravelling the subtleties of the universe at both large and small scales by studying astrophysical objects, phenomena, proven theories and observational data of the universe. I am interested in studying all aspects of the physics of gravity (general relativity, quantum gravity, modified theories of gravity), cosmology (cosmic microwave background, dark matter, dark energy, inflation, bounce) and the interrelations between these subjects (for example, explaining dark matter and dark energy by modified theories of gravity, or searching for cosmological signatures of quantum gravity theories). I am also keen on examining high-energy phenomena like gamma-ray bursts and supernovae, in both their astrophysical and cosmological aspects. In this talk I shall discuss the results of some of my research works that deal with 1) the application of modified quantum theory to inflation in order to explain the problem of the quantum-to-classical transition of inflationary perturbations, 2) primordial black holes in the context of bouncing cosmology, 3) constraints on dark matter condensates, and 4) gamma-ray bursts. I shall briefly discuss the motivation and the interesting outcomes which I have obtained in these research works.

In this talk, I will be discussing two classes of Bell-diagonal indecomposable entanglement witnesses in C^4 ⊗ C^4. The first class is a generalization of the well-known Choi witness from C^3 ⊗ C^3, while the second one contains the reduction map. I will show that, contrary to the C^3 ⊗ C^3 case, the generalized Choi witnesses are no longer optimal. Thereafter, I will talk about an optimization procedure for finding spanning vectors that eventually give rise to optimal witnesses. Operators from the second class turn out to be optimal, however without the spanning property. I will also discuss the concept of mirrored entanglement witnesses. Our analysis sheds new light on the intricate structure of optimal entanglement witnesses.

We have shown that the process of non-instantaneous reheating during the post-inflationary period can have a sizable impact on the charged-lepton equilibration temperature in the early Universe. This calls for a re-examination of the flavor effects of leptogenesis when the production and decay of right-handed neutrinos take place within this extended era of reheating. We observe that the decay of the lightest RHN in this set-up not only provides a platform to study flavor leptogenesis during reheating, but also gives rise to a new paradigm of quasi-thermal leptogenesis.

Neutral hydrogen (HI) has persisted for much of cosmic history, making it the best tracer to probe the Universe. Observation of the redshifted 21-cm signal due to the hyperfine transition of HI is a promising method to study its three-dimensional distribution in the Universe. The first billion years of cosmic history mark the formation of the first stars and galaxies. After years of theoretical predictions, a considerable international effort is now producing the first tentative results, e.g. from EDGES, SARAS, GMRT, LOFAR, MWA, HERA, etc. Upcoming theory and observations will reveal the mysterious era of cosmic dawn and reionization, including the properties of the first stars and galaxies, and possibly more exotic discoveries. There are also plans to probe the pre-stellar “dark ages” with telescopes on the Moon. I will give a summary and update on current studies of 21-cm cosmology and the Epoch of Reionization (EoR).

We will investigate hydrodynamic flows sourced by motors in fluid membranes. The membrane is modelled as a monolayer of viscous fluid, surrounded by external solvents of different viscosities. The in-plane 2D fluid flows sourced by these motors are modelled as point defects, such as vortices and force dipoles. We will explore the effects of membrane curvature and topology on the flows sourced by these motors. We will study the hydrodynamic interactions between them and uncover interesting regimes of coordinated activity, vortex lattice formation, global rotation and aggregate formation at the fluid membrane interface. We will also present relevant simulations in which several mathematical theorems, such as the Poincaré index theorem, the Liouville-Arnold theorem and Kimura’s conjecture, play a role.

Colloidal nano/micro-(bio)particles carry an electrostatic charge in aqueous media, and this charge is critical in defining their stability, (bio)adhesion properties, or toxicity toward humans and biota. Determination of interfacial electrostatics of these particles is often performed from zeta potential estimation using the electrophoresis theory by Smoluchowski. The latter, however, strictly applies to the ideal case of hard particles defined by a surface charge distribution under the strict conditions of particle impermeability to electrolyte ions and to flow. Herein, we review sound theoretical alternatives for capturing electrokinetic and therewith electrostatic features of soft colloids of practical interest defined by a 3D distribution of their structural charges and by a finite permeability to ions and/or flow (e.g., bacteria, viruses, nanoplastics, (bio)functionalized particles or engineered nanoparticles). Reasons for the inadequacy of commonly adopted hard particle electrophoresis models when applied to soft particulate materials are motivated, and analytical expressions that properly capture their electrophoretic response are comprehensively reviewed.
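For reference, the Smoluchowski relation mentioned above, valid for a hard, impermeable particle with a thin electric double layer, connects the measured electrophoretic mobility \(\mu\) to the zeta potential \(\zeta\) via

\[
\mu \;=\; \frac{\varepsilon_r \varepsilon_0\, \zeta}{\eta},
\]

with \(\varepsilon_r \varepsilon_0\) the permittivity and \(\eta\) the dynamic viscosity of the medium; it is precisely these hard-particle assumptions that break down for the ion- and flow-permeable soft colloids reviewed here.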

Quantum networks distribute high-fidelity, high-rate entanglement between network nodes as a resource for information processing applications. Current proposals for entanglement distribution in quantum networks utilize the local operations and classical communication (LOCC) framework to obtain high-fidelity states. However, due to the resource-intensive nature of LOCC protocols, the entanglement distribution rates over even modest distances (~100 km) are extremely low and limit the advantage of all quantum protocols. In this talk, I will describe our recent work that utilizes a previously unused class of entanglement manipulation protocols, the entanglement-assisted local operations and classical communication (ELOCC) protocols, for high-fidelity entanglement distribution in quantum networks. Specifically, I will describe the application of catalytic entanglement transformations over network edges that can significantly enhance the rate of high-fidelity entanglement distribution in quantum networks. I will close with interesting further research directions.

Many applications of emerging quantum technologies, such as quantum teleportation and quantum key distribution, require singlets, maximally entangled states of two quantum bits. It is thus of utmost importance to develop optimal procedures for establishing singlets between remote parties. In general, this is not always possible with certainty. However, in some cases, conversion can still be achieved by using a catalyst that remains unchanged in the process. Therefore, it is very important to study the role of catalysis in entangled state transformations. We investigate different aspects of entanglement catalysis, both for quantum states and quantum channels. We prove that entanglement entropy completely characterizes state transformations in the presence of entangled catalysts. Furthermore, for transformations between bipartite pure states, we prove the existence of a universal catalyst, which can enable all possible transformations in this setup. We demonstrate the advantage of catalysis in asymptotic settings, going beyond the typical assumption of independent and identically distributed systems. We further develop methods to estimate the number of singlets that can be established via a noisy quantum channel when assisted by entangled catalysts. For various types of quantum channels, our results lead to optimal protocols, allowing us to establish the maximal number of singlets with a single use of the channel. We also demonstrate the usefulness of catalysis for entanglement distribution via a specific noisy channel.
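As a minimal numerical illustration of entanglement catalysis (the well-known Jonathan-Plenio example, used here as an assumed textbook reference point rather than a result of this work), one can check Nielsen's majorization criterion for pure-state LOCC transformations with and without the catalyst:

```python
# Nielsen's criterion: |psi1> -> |psi2> by LOCC iff the Schmidt vector of psi1
# is majorized by that of psi2. The Jonathan-Plenio example shows a catalyst
# |phi> enabling an otherwise impossible transformation.
import numpy as np

def majorized(p, q):
    """True if p is majorized by q (both are probability vectors)."""
    p, q = np.sort(p)[::-1], np.sort(q)[::-1]
    return bool(np.all(np.cumsum(p) <= np.cumsum(q) + 1e-12))

psi1 = np.array([0.4, 0.4, 0.1, 0.1])    # squared Schmidt coefficients of the initial state
psi2 = np.array([0.5, 0.25, 0.25, 0.0])  # ... of the target state
phi  = np.array([0.6, 0.4])              # catalyst

print(majorized(psi1, psi2))                       # False: direct LOCC conversion impossible
print(majorized(np.outer(psi1, phi).ravel(),
                np.outer(psi2, phi).ravel()))      # True: possible with the (unchanged) catalyst
```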

The observation of the Cosmic Microwave Background (CMB), also known as the first light in the Universe, is a powerful probe to unravel many mysteries of the late-time Universe. During the first half of the talk, I will summarize the recent findings on the different processes in the Universe inferred from CMB measurements. In the second part of my talk, I will discuss the detailed physics behind the formation of first-generation stars, the evolution of galaxies, and the missing baryon problem by considering the CMB as a probe of the evolution of baryons and electrons. Furthermore, I will focus on some unsolved problems of the late-time Universe that we aim to solve within a decade. I will also talk about “line intensity mapping”, a novel technique that will provide us with new information ranging from star formation in galaxies to the expansion of our Universe. I will conclude by describing how these new probes can be useful in resolving the current tensions in cosmology.

We propose a Dirac neutrino portal dark matter scenario by minimally extending the particle content of the Standard Model with three right-handed neutrinos, a Dirac fermion dark matter candidate (ψ) and a complex scalar (φ), all of which are singlets under the SM gauge group. An additional symmetry has been introduced to guarantee the stability of the dark matter candidate ψ and, at the same time, ensure the Dirac nature of the light neutrinos. In this scenario, we can have thermal or non-thermal dark matter depending upon the couplings involving these particles. Most importantly, one can easily correlate the cosmological evolution of dark matter with the dynamics of the right-handed neutrinos. This leads to a strong constraint on the dark matter parameter space from the measurement of the effective number of relativistic degrees of freedom by Planck. Next-generation experiments like CMB-S4, SPT-3G etc. will have the required sensitivity to probe a major portion of the model parameter space, offering a promising way of probing such light dark matter, for which traditional direct detection experiments are still not sensitive enough.

The Raychaudhuri equation predicts the convergence of geodesics and gives rise to the singularity theorems. The quantum Raychaudhuri equation (QRE), on the other hand, shows that quantal trajectories, the quantum equivalent of geodesics, do not converge and are not associated with any singularity theorems. Furthermore, the QRE gives rise to the quantum corrected Friedmann equation. The quantum correction is dependent on the wavefunction of the perfect fluid whose pressure and density enter the Friedmann equation. We show that for a suitable choice of the wavefunction this term can give rise to a small positive cosmological constant, just as observed in nature. We discuss implications.
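For orientation, the classical equation that the QRE modifies is the Raychaudhuri equation for a timelike geodesic congruence with expansion \(\theta\), shear \(\sigma_{\mu\nu}\) and vorticity \(\omega_{\mu\nu}\):

\[
\frac{d\theta}{d\tau} \;=\; -\frac{\theta^2}{3} \;-\; \sigma_{\mu\nu}\sigma^{\mu\nu} \;+\; \omega_{\mu\nu}\omega^{\mu\nu} \;-\; R_{\mu\nu} u^{\mu} u^{\nu},
\]

where the last term, combined with an energy condition, drives the focusing (\(\theta \to -\infty\)) underlying the singularity theorems; the quantum version replaces geodesics by quantal trajectories and adds wavefunction-dependent terms that can prevent this focusing.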

In the talk, I will discuss continuously self-similar gravitational dynamics in 1+1 spacetime dimensions. I will show how the assumption of self-similarity fixes the form of the two-dimensional theory, two classes of which are well known in the literature. I will discuss some exotic static solutions and how the inclusion of matter fields leads to non-trivial dynamics. I will argue for the occurrence of singularities based on a simple feature of differential equations. Time permitting, I will also discuss numerical work relating to the dynamics of matter field collapse.

Communication complexity of functions plays a pivotal role in many computation and communication tasks. In this work we consider the communication complexity of relations (CCR), where the receiver outputs one of many correct answers. We show that there exists a class of relations for which there is no advantage when the nature of the communication is quantum. However, a stronger version of the task, where the receiver is required to output all correct answers in different runs, entails a quantum advantage. We call this task strong communication complexity of relations (S-CCR). Interestingly, one of the examples of such a task implies a quantum advantage in communication without contextuality. A randomized version of the task unveils a curious ordering of communication and shared resources. Besides its foundational importance, this work explores a number of applications of S-CCR, such as dimension witnesses, detecting nonclassical resources under information constraints, and detection of mutually unbiased bases (MUBs).

In this tutorial, we will provide a general overview of the various tools in the SEAVEA toolkit, with detailed hands-on experience in variance-based sensitivity analysis. The participants will be guided to install the tools on their laptops and conduct sensitivity analysis for models which are currently being developed in the SEAVEA project. We can also provide guidance on how to integrate any program that the participants have already developed into the SEAVEA toolkit so that sensitivity analysis can be performed on it. With this tutorial we hope to highlight the utility of sensitivity analysis in dynamical models and make it more accessible to a wider audience. Given its applicability, importance and user-friendliness, we also hope that more research groups will find the tools in the SEAVEA toolkit useful in improving their research capabilities.
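The SEAVEA toolkit workflow itself is not reproduced here; as a generic, self-contained taste of variance-based (Sobol) sensitivity analysis of the kind covered in the hands-on session, the snippet below uses the open-source SALib package on the standard Ishigami test function. The choice of package, model and sample size are illustrative assumptions only.

```python
# Variance-based (Sobol) sensitivity analysis of the Ishigami test function using
# SALib -- a generic illustration, not the SEAVEA toolkit workflow itself.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["x1", "x2", "x3"],
    "bounds": [[-np.pi, np.pi]] * 3,
}

def ishigami(X, a=7.0, b=0.1):
    return np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2 + b * X[:, 2] ** 4 * np.sin(X[:, 0])

param_values = saltelli.sample(problem, 1024)   # N*(2D+2) input samples
Y = ishigami(param_values)
Si = sobol.analyze(problem, Y)

print("first-order indices S1:", Si["S1"])
print("total-order indices ST:", Si["ST"])
```

The first-order index S1 of each input measures the fraction of output variance explained by that input alone, while ST includes its interactions with the other inputs.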

The experimental works for which Nobel Prizes have been awarded this year involve a very important result discovered by John S. Bell in 1964. This is famously known as Bell’s theorem and has been considered the ‘most profound discovery of Science’ by some physicists. Bell’s theorem shows that the quantum world is not compatible with the twin concepts of local realism propounded by Einstein. Experiments establish that nature is indeed incompatible with local realism. In this talk we shall discuss the work of John Bell in the simplest possible way. In particular, no expertise in quantum mechanics is needed to understand the essence of Bell’s theorem.

A Robertson-Heisenberg-like uncertainty relation is derived for two incompatible observables in a pre- and post-selected (PPS) quantum system, which expresses the impossibility of jointly sharp preparation of the pre- and post-selected quantum states for measuring those observables. Motivated by the fact that when the post-selected state is the same as the pre-selected state the PPS system reduces to a standard system, we take this as the basis of our derivation of the PPS-system-based standard deviation (uncertainty). We provide physical interpretations of the newly defined standard deviation and of the uncertainty relation in the PPS system. It is shown that joint sharp preparation of a quantum state for non-commuting observables is possible when the standard system is transformed into a PPS system under certain conditions, a task that is impossible in the standard system. Some applications of the uncertainty and the uncertainty relation in the PPS system are provided: (i) detection of mixedness of the given pre-selection using two different definitions of the PPS-system-based standard deviation, (ii) a stronger uncertainty relation in the standard system using the uncertainty relation in the PPS system, (iii) a genuinely quantum mechanical uncertainty relation obtained using the first definition of uncertainty when the pre-selection is a mixed state, (iv) a state-dependent tight uncertainty relation in the standard system, and (v) a tight upper bound for the out-of-time-order correlation function.
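For reference, the standard Robertson-Heisenberg relation that the PPS version generalizes is

\[
\Delta A \, \Delta B \;\geq\; \frac{1}{2}\,\big|\langle \psi |\,[A,B]\,| \psi \rangle\big|,
\]

with \(\Delta A\) and \(\Delta B\) the standard deviations of the observables in the (pre-selected) state \(|\psi\rangle\); the relation discussed in the talk replaces these with standard deviations defined with respect to both the pre- and post-selection.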

Collisional models are a category of microscopic frameworks designed to study open quantum systems. The framework involves a system sequentially interacting with a bath comprised of identically prepared units. In this context, quantum homogenization is a process in which the system state approaches the identically prepared state of the bath units in the asymptotic limit. Here, we study the homogenization process for qubits in a non-Markovian collisional model framework generated via an additional bath-bath interaction. With the partial-swap operation as both the system-bath and bath-bath unitary, we show that homogenization is achieved irrespective of the initial states of the system or the bath units. This is reminiscent of the Markovian scenario, where the partial swap is the unique operation for a universal quantum homogenizer. On the other hand, we observe that the rate of homogenization is slower than its Markovian counterpart. Interestingly, a different choice of bath-bath unitary speeds up the homogenization process but loses universality, becoming dependent on the initial states of the bath units. In our process, we also find a regime of transition from non-Markovian to Markovian dynamics (and vice versa).

Following recent work [Zhang, Duval, Gibbons and Horvathy (PRD, 2017)], there has been growing interest in understanding memory effects through the study of geodesic motion. One can, in principle, arrive at a class of memory effects (displacement and velocity memory) by solving the geodesic equation or the equation of geodesic deviation. Another route to memory (also termed B-memory) involves the study of geodesic congruences by utilising the Raychaudhuri equation. In this talk, we will provide an overview of our recent work on such diverse aspects of memory in the context of exact, radiative solutions in General Relativity and modified theories of gravity.

In this talk, I am going to discuss the assembly of colloids into linear chains and the study of their structure and dynamics. To assemble the colloids into permanent colloidal linear chains, we investigated a novel ice-templating method as well as the use of electric and magnetic fields. A novel aspect of this work is to render the colloidal chains active, viz. out of equilibrium, by adsorbing catalytic platinum nanoparticles on their surface and by conducting reactions catalyzed by the nanoparticles. I have demonstrated that the diffusion of passive Brownian chains does not depend on chain flexibility, whereas the diffusion of “active” colloidal chains is a function of their flexibility. Another novel aspect is to render the colloidal chains thermo-responsive by adhering poly(N-isopropylacrylamide) microgel particles to the colloidal surface. Rigid chains show a modest decrease in size but exhibit no qualitative change in their shape. Relatively flexible chains form compact structures as they collapse, resulting in a large increase in the local monomer number density within the chain. Chains with intermediate flexibility show the formation of helical structures on heating. Finally, I am going to talk about micromotors. Here, we systematically study the translational and rotational dynamics of clusters of active (Janus) particles. By extracting various parameters, such as the net force, the torques, and the translational and rotational velocities, we aim to find a generic relation between the cluster shape, the particle distribution and the resultant dynamical trajectories. We expect our work to provide strategies for designing active entities with tailored dynamical trajectories.

Molecular clouds are the cradles of star formation. Stars are formed as a result of the gravitational collapse of compact gas-dust prestellar cloud cores. Magnetic fields are one of the important components in molecular clouds for regulating star formation. Protostellar disks are formed very quickly after the collapse of a molecular cloud core, and the formation and evolution of these disks play a crucial role in the formation of planetesimals. My work focuses on the fundamental properties of core collapse and the evolution of the protostar and disk via analytic and numerical models, comparing them with observations in order to extract new insights. We investigate the fragmentation scales of gravitational instability of a rotationally supported self-gravitating protostellar disk, as well as of the molecular cloud, using linear perturbation analysis in the presence of nonideal magnetohydrodynamic (MHD) effects. Nonideal MHD effects result in the diffusion of magnetic flux. We show that the influence of the magnetic field and nonideal MHD on the preferred fragmentation mass for collapse leads to a modified threshold, as opposed to a Jeans mass, that might lead to giant planet formation in the early embedded phase. Our results also indicate that the trends found in the observed lifetimes of prestellar cores and fragmentation masses cannot be explained in a purely hydrodynamic scenario. Furthermore, I will also talk about episodic mass accretion (and therefore episodic luminosity) from the disk onto the star, which is considered to be one of the most important processes in the mass growth of a protostar. Our analytic work provides insight into global MHD simulations of protostellar disks that we carry out using the FEOSAD simulation code. Our results using FEOSAD demonstrate the long-term evolution of disks, and especially the episodic nature of accretion, which might explain the origin of the observed knots in molecular jet outflows. All of our studies, from various perspectives, might fill in many gaps in our knowledge of how pre-main-sequence stars form over time and consolidate the broad picture of star formation.

John Bell’s seminal theorem revealed that quantum correlations between space-like separated events contradict any local hidden variable explanation for many such correlations. This work led to the resolution of the Einstein-Podolsky-Rosen paradox, which emerged from the belief that at the most fundamental level nature respects local realism. Subsequent experiments confirmed that quantum correlations indeed violate local realism. Such Bell-nonlocal correlations are considered a powerful resource, leading to many possible applications in quantum information processing. They also form a basis for asking further foundational questions, such as what the limits of quantum nonlocal correlations are and how these limits can be understood better. In this talk, we discuss some foundational as well as application aspects of nonlocal correlations. We consider the simplest setting for a Bell experiment, which consists of two space-like separated parties, each performing one of two possible measurements with binary outcomes. We then discuss: (i) the geometry (boundary) of the set of quantum correlations, (ii) applications of these correlations in self-testing quantum devices, and (iii) distillation of weak quantum correlations into strong ones.

After the experimental discovery of two-dimensional (2D) graphene, a new horizon has opened up in the fields of condensed matter and materials science. The exotic and unconventional properties of graphene have led the scientific community to explore its various intriguing aspects. Graphene and other graphene-like 2D materials possess tremendous potential to completely alter the modern silicon-based electronics industry. In this talk, I would like to explore the thermal and thermoelectric properties of different 2D materials from a computational perspective. I will try to demonstrate how different 2D materials can be used for converting waste heat into electricity or for thermal management applications.

This talk focuses on some investigations into a recently developed non-linear, three-dimensional equatorial model for ocean dynamics. The analysis is based on a singular perturbation approach and is facilitated by the introduction of a pseudo-stream function. The development of the model was motivated by observations, and the model is able to capture some essential properties of the flow in the equatorial region. Analysis of the velocity field and flow paths indicates that several known and unknown features (which are essentially non-linear and three-dimensional, such as upwelling/downwelling, cellular flow structures, divergence of flow from the equator and extra-equatorial flows, a subsurface ocean ‘bridge’ in the equatorial direction, and sharp changes in the gradient of the flow path) exist and can be simulated by the model. A subsequent detailed global bifurcation analysis of a 2D model incorporating wave-current interaction for stratified rotational flows, together with numerical results from continuation methods, reveals the presence of far more complex particle paths, which may affect primary production and pelagic species in addition to the mass, carbon and energy transport.
We follow the historical development of deep learning from Markovian models to attention-based transformer models and discuss the arrival of huge unsupervised models that will form the starting block of future models.

Einstein’s theory of general relativity, of which Newton’s theory of gravity is a part, is fraught with the problem of singularity, which has been established as a theorem by Hawking and Penrose, the latter being awarded the Nobel Prize in recent years. The crucial hypothesis that forms the basis of both Einstein’s and Newton’s theories of gravity is that bodies with unequal magnitudes of mass fall with the same acceleration under the gravity of a source object. Since the validity of Einstein’s equations is one of the assumptions on which Hawking and Penrose proved the theorem, the above hypothesis is implicitly one of the founding pillars of the singularity theorem. In this work, I demonstrate how one can possibly write a non-singular theory of gravity which manifests that the above-mentioned hypothesis is only valid in an approximate sense in the “large distance” scenario. To mention a specific instance, under the gravity of the Earth, a 5 kg mass and a 500 kg mass fall with accelerations which differ by approximately 113.148 × 10⁻³² m/s², the more massive object falling with less acceleration. Further, I demonstrate why the concept of a gravitational field is not definable in the “small distance” regime, which automatically justifies why Einstein’s and Newton’s theories fail to provide any “small distance” analysis. In the course of writing down this theory, I demonstrate why the continuum hypothesis, as spelled out by Goedel, is undecidable. The theory has several aspects which provide the following realizations: (i) Descartes’ self-skepticism concerning exact representation of numbers by drawing lines, (ii) Born’s wish of taking into account “natural uncertainty in all observations” while describing “a physical situation” by means of “real numbers”, (iii) Klein’s vision of having “a fusion of arithmetic and geometry” where “a point is replaced by a small spot”, and (iv) Goedel’s assertion about “non-standard analysis, in some version” being “the analysis of the future”. To further justify Goedel’s assertion, I provide a glimpse of what I may call ‘non-standard physics’.

In this talk we will discuss the problem of the intersection of subspaces in the context of the measurement of a quantum state. We will see that this problem is very closely related to Jordan’s decomposition theorem for a pair of projectors. As an application, we will discuss the converse of the quantum Stein’s lemma, which gives the best exponent for the decay of the probability of missed detection.

Dr. Naqueeb Ahmad Warsi, Electronics and Communication Sciences Unit, Indian Statistical Institute, Kolkata

Maxwell’s verbal statement of Coulomb’s experimental verification of his hypothesis, concerning the force between two electrified bodies, is suggestive of a modification of the respective computable expression on logical grounds. This modification is in tandem with the completeness condition for a physical theory that was stated by Einstein, Podolsky and Rosen in their seminal work. Working with such a modification, I show that the first Maxwell equation, symbolically identifiable as Gauss’s law (∇·E = ρ/ε₀) in the standard literature, is unprovable. This renders Poynting’s theorem unprovable as well. Therefore, the explanation of ‘light’ as ‘propagation of electromagnetic energy’ comes into question on theoretical grounds.

Freudenthal duality (F-duality), an anti-involution of charge vectors, keeps the entropy and attractor solutions invariant for an extremal supersymmetric black hole. This duality holds for both ungauged and gauged extremal black holes in four dimensions. In this talk, I will discuss the effect of F-duality on the entropy of a near-extremal black hole. Specifically, I will consider double-extremal STU black holes in four-dimensional, N=2, ungauged supergravity. It is well known that two-dimensional Jackiw-Teitelboim (JT) gravity governs the dynamics of the near-horizon regions of higher-dimensional, near-extremal black holes. Thus, by dimensionally reducing the four-dimensional supergravity theory, one can construct a JT-gravity-like model and compute the near-extremal entropy. I will then analyze the effect of F-duality on this near-extremal entropy. I will show that F-duality breaks down for near-extremal solutions if one considers the duality operation generated through the near-extremal entropy rather than the extremal one.

Classical Bayes’ rule lays the foundation for the classical causal relation between cause (input) and effect (output). This rule is believed to be universally true for all physical processes. On the contrary, we show that it is inadequate to establish the correct correspondence between cause and effect in quantum mechanics. In fact, there are instances where the use of the classical Bayes’ theorem leads to inconsistencies in quantum measurement inferences, such as the Frauchiger-Renner paradox. As a remedy, we introduce a deterministic causal relation based on a quantum Bayes’ rule. It applies to general quantum processes, even when a cause (or effect) is in a coherent superposition with other causes (or effects), as allowed by quantum mechanics, and in cases where causes belonging to one system induce effects in some other system, as happens in quantum measurement processes. This enables us to resolve the Frauchiger-Renner paradox and reaffirm that quantum mechanics can consistently describe its own use. We also revisit Hardy’s paradox and bipartite non-locality without a Bell inequality, and propose a possible resolution of the inconsistencies using the quantum Bayes’ rule. We discuss the consequences of our results.

Magnetic fields of detectable strength are observed at all scales of the Universe. There are various possible origins of this observed magnetic field, and it is quite possible that it was generated during the primordial stage of the Universe. There are various models for the primordial magnetogenesis scenario; if an inflationary background is considered, one needs to break conformal symmetry to generate a sufficient amount of magnetic field. To break conformal symmetry, one can introduce different couplings between the electromagnetic field and the inflaton field, or add higher-derivative terms to the theory. One can also invoke different primordial scenarios, such as a matter bounce, to produce sufficient magnetic fields. One interesting way to study these different mechanisms of primordial magnetogenesis is to apply the generic approach of Effective Field Theory (EFT), in which the system is described by EFT parameters and different choices of the parameters correspond to different models; this approach has been successfully applied to study inflationary perturbations. In this talk, we will try to describe a consistent EFT framework for the primordial magnetogenesis scenario.
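As one concrete, commonly used illustration of such conformal-symmetry breaking (the Ratra-type kinetic coupling; given here only as a representative benchmark, not as the EFT constructed in the talk), the electromagnetic action is modified to

S_{\rm EM} = -\frac{1}{4}\int d^{4}x\, \sqrt{-g}\, f^{2}(\phi)\, F_{\mu\nu}F^{\mu\nu},

so that a time-dependent coupling f(φ) during inflation breaks the conformal invariance of the Maxwell term and allows electromagnetic vacuum fluctuations to be amplified into large-scale magnetic fields.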

Random samples of quantum states are an important resource for various tasks in quantum information science, and samples in accordance with a problem-specific distribution can be indispensable ingredients. Some algorithms generate random samples by a lottery that follows certain rules and yield samples from the set of distributions that the lottery can access. Other algorithms, which use random walks in the state space, can be tailored to any distribution, at the price of autocorrelations in the sample and with restrictions to low-dimensional systems in practical implementations. In this work, we present a two-step algorithm for sampling from the quantum state space that overcomes some of these limitations. We first produce a CPU-cheap large proposal sample, of uncorrelated entries, by drawing from the family of complex Wishart distributions, and then reject or accept the entries in the proposal sample such that the accepted sample is strictly in accordance with the target distribution. We establish the explicit form of the induced Wishart distribution for quantum states. This enables us to generate a proposal sample that mimics the target distribution and, therefore, the efficiency of the algorithm, measured by the acceptance rate, can be many orders of magnitude larger than that for a uniform sample as the proposal. We demonstrate that this sampling algorithm is very efficient for one-qubit and two-qubit states, and reasonably efficient for three-qubit states, while it suffers from the “curse of dimensionality” when sampling from structured distributions of four-qubit states.
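A minimal sketch of the two-step structure described above (Python; the target weight, the bound and the toy usage are placeholders, and only the Ginibre/Wishart-type proposal is shown, not the explicit induced-density formula established in the work):

import numpy as np

def induced_random_state(dim, k, rng):
    # Proposal draw: rho = G G^dagger / tr(G G^dagger) with G a dim x k complex
    # Ginibre matrix, i.e. a sample from the Wishart-induced measure on states
    # (k = dim reproduces the Hilbert-Schmidt distribution).
    G = rng.normal(size=(dim, k)) + 1j * rng.normal(size=(dim, k))
    W = G @ G.conj().T
    return W / np.trace(W).real

def rejection_sample(target_weight, dim, k, n_samples, bound, rng):
    # Generic accept/reject step: target_weight(rho) must be proportional to the
    # ratio of target density to proposal density, and 'bound' must dominate it;
    # accepted states are then distributed strictly according to the target.
    accepted = []
    while len(accepted) < n_samples:
        rho = induced_random_state(dim, k, rng)
        if rng.uniform() * bound < target_weight(rho):
            accepted.append(rho)
    return accepted

# Toy usage (purely illustrative): bias the Hilbert-Schmidt sample towards
# high-purity two-qubit states, using the purity (bounded by 1) as the weight.
rng = np.random.default_rng(1)
weight = lambda rho: np.trace(rho @ rho).real
sample = rejection_sample(weight, dim=4, k=4, n_samples=100, bound=1.0, rng=rng)
print(len(sample), "states accepted")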

Given “ab = 0” and the arithmetic truth “0·0 = 0”, we conclude that one possibility is “both a = 0 and b = 0”. Consequently, the roots of a quadratic equation appear to be mutually inclusive. However, the situation can be viewed as a ‘decision problem’ (Hilbert-Ackermann). Working, by choice, with the mutual inclusivity of the two roots, the concerned variable can acquire multiple identities in the same process of reasoning, or at the same time. The law of identity gets violated, which we call the problem of identity. In current practice such a step of reasoning is ignored by choice, resulting in the subsequent denial of “0·0 = 0”. Here, we deal with the problem of identity without making such a choice of ignorance. We demonstrate that the concept of the “identity of a variable” is meaningful only in a given context and has no significance in isolation beyond the symbol that symbolizes the variable itself. We demonstrate visually how we actually realize multiple identities of a variable at the same time, in practice, in the context of a given quadratic equation. In this work we lay the foundations on which we intend to bring forth some hitherto unattended facets of reasoning concerning the classical harmonic oscillator and the principle of superposition.

In this talk, we will discuss a class of weighted planar stochastic lattice (WPSL1) created by the random sequential nucleation of seeds, from each of which a crack is grown parallel to one of the sides of the chosen block and ceases to grow upon hitting another crack. This results in the partitioning of the square into contiguous and non-overlapping blocks. Interestingly, we find that the dynamics of WPSL1 is governed by infinitely many conservation laws, and each of the conserved quantities, except the trivial conservation of total mass or area, is a multifractal measure. On the other hand, the dual of the lattice is a scale-free network, as its degree distribution exhibits a power law. The network is also a small-world network, since we find that (i) the total clustering coefficient C is high and independent of the network size and (ii) the mean geodesic path length grows logarithmically with the network size N. Besides, the clustering coefficient of the nodes with degree k decreases exactly as 2/(k − 1), revealing that it is also a nested hierarchical network.
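A minimal simulation sketch of the partitioning process as described above (Python; here the block to be divided is selected implicitly by dropping a uniformly random seed on the square, and the crack through the seed, parallel to a randomly chosen side, splits that block into two):

import random

def wpsl1(n_steps, seed=0):
    # Each block is stored as (x0, y0, x1, y1); start from the unit square.
    rng = random.Random(seed)
    blocks = [(0.0, 0.0, 1.0, 1.0)]
    for _ in range(n_steps):
        # Random sequential nucleation: a seed dropped uniformly on the square
        # picks a block with probability proportional to its area.
        sx, sy = rng.random(), rng.random()
        for i, (x0, y0, x1, y1) in enumerate(blocks):
            if x0 <= sx < x1 and y0 <= sy < y1:
                break
        # The crack through the seed, parallel to one of the sides, stops at the
        # pre-existing cracks bounding the block, splitting it into two
        # contiguous, non-overlapping blocks.
        if rng.random() < 0.5:   # vertical crack at x = sx
            new = [(x0, y0, sx, y1), (sx, y0, x1, y1)]
        else:                    # horizontal crack at y = sy
            new = [(x0, y0, x1, sy), (x0, sy, x1, y1)]
        blocks[i:i + 1] = new
    return blocks

blocks = wpsl1(1000)
print(len(blocks), "blocks; total area =",
      sum((x1 - x0) * (y1 - y0) for x0, y0, x1, y1 in blocks))

The dual network mentioned above is then obtained by treating each block as a node and connecting blocks that share a common border segment.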

Detection of entanglement in quantum states is one of the most important problems in quantum information processing. However, finding a universal scheme that detects entanglement for all states of a specific class, and that is moreover optimal, as experimentalists would prefer, is one of the most challenging tasks. Although the topic is well studied, at least in the case of lower-dimensional compound systems (e.g., two-qubit systems), it remains an open problem for continuous-variable systems. Even in the case of two-mode Gaussian states, the problem is not fully resolved. Here, we try to address this issue. At first, a limited number of Hermitian operators is given to test the necessary and sufficient criterion on the covariance matrix of separable two-mode Gaussian states. Thereafter, we present an interferometric scheme to test the same separability criterion, in which the measurements are done via Stokes-like operators. In this case, we consider only single-copy measurements on a two-mode Gaussian state at a time, and the scheme amounts to full state tomography. Although this latter approach is based on linear optics, it is not an economical scheme. Resource-wise, a scheme more economical than full state tomography can be obtained if we consider measurements on two copies of the two-mode Gaussian state at a time. However, the optimality of this scheme is not yet known.
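A small numerical sketch of the covariance-matrix separability test referred to above (Python; this implements the standard Simon/PPT check through symplectic eigenvalues rather than the interferometric Stokes-operator scheme of the talk, and uses the convention in which the vacuum covariance matrix is I/2):

import numpy as np

# Symplectic form for two modes, quadrature ordering (x1, p1, x2, p2).
Omega = np.array([[0, 1, 0, 0],
                  [-1, 0, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, -1, 0]], dtype=float)

def symplectic_eigenvalues(sigma):
    # Moduli of the eigenvalues of i*Omega*sigma, which come in equal pairs.
    ev = np.linalg.eigvals(1j * Omega @ sigma)
    return np.sort(np.abs(ev))[::2]

def is_separable(sigma, tol=1e-9):
    # Simon's PPT criterion (necessary and sufficient for 1x1-mode Gaussian
    # states): partial transposition flips p2, and the state is separable iff
    # the smallest symplectic eigenvalue of the transposed CM is >= 1/2.
    L = np.diag([1.0, 1.0, 1.0, -1.0])
    nu_min = symplectic_eigenvalues(L @ sigma @ L).min()
    return nu_min >= 0.5 - tol

# Example: two-mode squeezed vacuum with squeezing r, entangled for any r > 0.
r = 0.5
c, s = np.cosh(2 * r), np.sinh(2 * r)
Z = np.diag([1.0, -1.0])
sigma = 0.5 * np.block([[c * np.eye(2), s * Z], [s * Z, c * np.eye(2)]])
print(is_separable(sigma))   # expected: False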

In this talk I will describe various primordial sources of gravitational waves: primary and secondary inflationary tensor perturbations, (p)reheating, phase transitions, cosmic strings, and domain walls. I will show how the spectrum of a stochastic background of gravitational waves (SBGW) from such sources of cosmic origin acts as a complementary test of particle-physics models, alongside laboratory searches and astrophysical observations.

The BFSS matrix model is a proposed non-perturbative definition of M-theory in which space is emergent. In this talk, I shall present a new paradigm of early-universe cosmology in the context of the BFSS theory. Specifically, I will show that matrix theory leads to an emergent non-singular cosmology which, at late times, can be described by an expanding phase of standard Big Bang cosmology. Crucially, the thermal fluctuations in the emergent phase source an approximately scale-invariant spectrum of cosmological perturbations. Hence, this model leads to a successful scenario for the origin of perturbations responsible for the currently observed structure in the universe, while providing a consistent UV-complete description, and naturally overcomes many of the obstacles of the current paradigm of inflation as an effective field theory.

We investigate the generation of magnetic fields from inflation, which occurs via a breakdown of the conformal invariance of the electromagnetic field when it is coupled to the Ricci scalar and the Gauss-Bonnet invariant. For the case of instantaneous reheating, the resulting strength of the magnetic field at present is too small to satisfy the observational constraints. However, the problem is solved provided there is a reheating phase with a non-zero e-fold number. During reheating, the energy density of the magnetic field is seen to evolve as a^-6 H^-2 and, after that, as a^-4 up to the present epoch (here ‘a’ is the scale factor and ‘H’ the Hubble parameter). It is found that this reheating phase, characterized by a certain e-fold number, a constant value of the equation-of-state parameter, and a given reheating temperature, renders the magnetogenesis model compatible with the observational constraints. The model provides, in turn, a viable way of constraining the reheating equation-of-state parameter from data analysis of the cosmic microwave background radiation. The Schwinger backreaction has also been studied in this regard.
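A back-of-the-envelope sketch of the dilution quoted above (Python; a toy illustration assuming only the scalings stated in the abstract together with the standard background relation H^2 ∝ a^-3(1+w) for a constant equation-of-state parameter w during reheating; it does not reproduce the actual constraint analysis of the work):

import numpy as np

def magnetic_dilution(N_re, N_after, w):
    # During reheating: rho_B ∝ a^-6 H^-2 and H^2 ∝ a^-3(1+w), so rho_B ∝ a^(3w-3);
    # after reheating: rho_B ∝ a^-4 (radiation-like dilution up to the present epoch).
    a_re = np.exp(N_re)        # growth of the scale factor during reheating
    a_late = np.exp(N_after)   # subsequent growth (toy number of e-folds)
    return a_re ** (3 * w - 3) * a_late ** (-4)

# Illustrative numbers only: 10 e-folds of reheating, for two equation-of-state values.
for w in (0.0, 1.0 / 3.0):
    print("w =", w, " dilution factor =", magnetic_dilution(N_re=10.0, N_after=50.0, w=w))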

In the present article, we have studied extensively the electrophoretic transport of spherical soft particles. Electrophoresis is one of the important electrokinetic techniques, often used to characterize and separate colloids. It is commonly employed as a separation technique, for example for DNA and protein molecules, and for serum to identify paraproteins. The electrophoretic transport phenomenon is also used to understand the electric properties of several bio-particles, including viruses, bacteria, humic substances and macromolecules, and may be used to understand the transport of cargo vesicles in the treatment of various diseases, e.g., cancer, inflammation, multiple myeloma, renal pathological disorders and macroglobulinemia. Thus, a proper understanding of the electrophoretic transport of soft particles is important for understanding the characteristic features of various bio-colloids and macromolecules, which can be viewed as soft particles. In this article, we have first elaborated on some of the existing simplified models for the electrophoretic transport of soft particles. In addition, we have further extended them to more realistic situations, considering the effect of pH-dependent charge densities of the inner core and the peripheral soft polymeric layer, the effect of the hydrodynamic slip length of a hydrophobic core surface, etc. In our present study, we have restricted ourselves to the low-charge and weak-electric-field assumptions. We have adopted a linear perturbation analysis to linearize the governing equations for the flow field, the electrostatic potential, the spatial distribution of ionic species and the electrochemical potential. The reduced forms of the governing equations are further integrated to derive closed-form analytic expressions for the electrophoretic mobility of such a particle. We have further highlighted the effect of pertinent parameters on the neutralization of the particle charge due to the penetration of counterions across the peripheral soft layer.

Despite the obvious discrepancies between the quantum mechanical and classical regimes of computation, there has been an inexorable push to establish quantum computation as the dominant methodology for the future. The drive towards miniaturization, coupled with the possibility of hitherto unfeasible parallelism, overpowers the ugly circuit-implementation aspects inherent at quantum scales. Though no more powerful than a classical Turing machine in terms of what is computable, quantum computers do not function according to the same operating principles as their classical counterparts. Indeed, analogies to classical computation are limited by the fact that very few algorithms truly show ‘quantum supremacy’. Some of the most difficult aspects of quantum computation are the understanding and reinterpretation of classical terms such as computational power, memory, and intelligence in the quantum domain. One of the more elegant implementations of quantum computing and information relies heavily on optical approaches, at the forefront of which, owing to their scalability and near-room-temperature operation, our techniques excel. These approaches primarily rely on light-matter interactions, typically at ultrashort timescales. Though not directly realizable, ultrashort times can also be connected to ultra-small sizes. The spatiotemporal control aspects of pulsed-laser experiments rely on the ability to modulate the shape of the generated pulses in an efficient manner. Drawing from current state-of-the-art theoretical aspects of computational simulations to reduce the sim-to-real bottlenecks, we devised a novel scheme for the generation of on-the-fly calibrated pulse trains with more accountability than existing techniques within the domain of optimal control theory. The techniques presented today further diminish the divide between experiment and theory.
