The hypothesis that our universe is only one element among many within a larger reality, the multiverse, is currently debated. Moreover, the scientific status of the hypothesis is questioned, and some compare it to a religious belief. This lecture aims to distinguish and evaluate the different types of motivation for the hypothesis. I will take as a starting point a discussion of string theory and the "landscape problem". Depending on the geometrical assumptions adopted, string theory admits a vast range of solutions. Some have been tempted to see this as a sign of the existence of a multiverse (a landscape of universes). I will argue that this argument is unconvincing because it exploits principles that are not strictly scientific. In a second part, I will show that some physical theories and cosmological models could nevertheless, in principle, associate the existence of a multiverse with an empirically detectable signature. Based on collaborative work with James Read (University of Oxford).
Philosopher of science Ian Hacking (1989) provocatively claimed that astrophysics is not a natural science at all. The experimental method is the heart of the scientific method, he reasoned, and no one can manipulate or intervene on astrophysical systems. In contrast, I have argued that physically intervening on a research target is not necessary to generate empirical evidence. Instead, what matters is that the evidence derives from a causal chain with one end anchored in that target. While my view applies to much observational astrophysics, the story is a bit more subtle when it comes to terrestrial laboratory astrophysical experiments. In this talk, I explain how applying my view to a case study of National Ignition Facility research on the effect of high-energy-flux conditions on the structure of the Rayleigh-Taylor instability in young supernova remnants led me to uncover a new challenge: how can experimenters reason from the laboratory to the stars, when the very conditions of their experiment undercut the arguments for physical similarity they would have liked to invoke?
The lack of new physics discoveries at the LHC has had a dramatic effect on the efforts of the particle physics community. As the predictions of model-based searches have failed, physicists have increasingly turned to model-independent approaches. This marks a strong shift in scientific methodology that may be here to stay. In this talk, I will review some of the major views on methodology from the history and philosophy of science, from Newton to Popper. I then characterise model independence, and the Standard Model Effective Field Theory in particular, as a move away from classic hypothesis testing. I finish by reviewing some of the potential worries about the future of particle physics at this new crossroads.
For two centuries, collaborative research has continued to develop, a trend that has been explained in various ways. We offer a novel functional explanation of this development, grounded in a sequential model of scientific research in which the priority rule applies. We derive robust patterns concerning the differential success of collaborative groups over their competitors, and argue that these patterns feed the development of collaboration. This global mechanism may trigger an arms race and is compatible with some decrease in the productivity of collaborative groups and with some over-collaboration. The proposed explanation can integrate various factors usually associated with the rise of collaboration.
Since the discovery of dark matter in the 1980s, multiple experiments have been set up to detect dark matter particles through some interaction other than gravity, despite the fact that the only available evidence for dark matter's existence comes from its gravitational effects. I show that the justifications for why these experiments should be able to detect dark matter have a different structure from what is often the case in experimental practice. By illuminating this 'method-driven logic', I shed new light on questions surrounding measurement robustness and methodological pluralism in the context of dark matter research.
Since the Santa Barbara conference in July 2019, where the most recent measurements of the Hubble constant were announced, there has been a sense of crisis with respect to the standard model of cosmology. Indeed, the tension that had already been observed between the low value of the early-universe-based measurement of the Hubble constant and the high value found for late-universe-based measurements of the same constant has only been worsened by the development of new techniques for measuring H0, which seem to agree with the high value and challenge the standard model. On one side of the debate, the robustness of the high value is taken to confirm that systematic errors possibly explaining the discrepancy have been excluded and that something is wrong with the standard model of cosmology. Hundreds of papers have already been published suggesting solutions to the Hubble crisis by amending or replacing the standard model. On the other side, some astrophysicists insist that the chequered history of Hubble constant measurements shows that extra prudence is needed in estimating the amount of random and systematic error and its potential impact on the H0 value. How, then, should we evaluate and react to this discrepancy? Does this tension really call for a crisis in astrophysics and cosmology? And how can we explain why different teams react differently to this tension, some calling it a crisis, others a 'surd'?
In this talk, I examine the methodology and epistemology of LIGO, with a focus on the role of models and simulations in the experimental process. This includes post-Newtonian approximations, models generated through the effective one-body formalism, and numerical relativity simulations, as well as hybrid models that incorporate aspects of all three approaches. I then present an apparent puzzle concerning the validation of these models: how can we successfully validate these models and simulations through our observations of black holes, given that our observations rely on our having valid models of the systems being observed? I argue that there is a problematic circularity here in how we make inferences about the properties of compact binaries. The problem is particularly acute when we consider these experiments as empirical tests of general relativity. I then consider strategies for responding to this challenge.