Okay, good.
Yeah, good.
I'll present the dark matter test science project and then start some discussion.
So the presentation is going to be rather high level, because it's a place where more software will go.
So I didn't concentrate on one piece of software in particular.
And the high energy physics software has already been presented by Lukas Heinrich in the talks from HSF.
OK, so starting from the second slide, there's many different hypotheses for dark matter.
And you see on the left hand side a plot that shows the interaction cross section and the mass of the possible dark matter candidates.
So you can imagine that, with such a vast parameter space to find an answer for one of the big problems of the universe, there are many ways to detect it, depending on where you are in that plot.
There's many different experiments and therefore there's many different data and workflow needs and many different data and results sharing policies.
So it's a very diverse community.
What we thought about for this test science project is to somehow factorize out the problem of what dark matter is and concentrate on one hypothesis.
This hypothesis is as good as any.
Some people might have a theoretical prejudice, but in general not many do, because the likelihood of any one particular model is small.
We decided to start with the most common dark matter model, weakly interacting massive particles (WIMPs), mostly because it's well studied and one that can be extended.
It's also not completely excluded. Everyone in the news will say, oh, we have not found dark matter, supersymmetry is dead, and all of that. But what is not realized, at least in the mainstream media, is that it could just be rarer than expected.
We haven't seen it yet.
The other good thing about this test case is that complementary experiments could find it.
If you believe your model enough, then you can make all the dark matter in the universe with it.
But to me, the most attractive of all the characteristics that the WIMP has is that the searches can be extended and connected to many other models.
So you start with something that is fairly generic and then you modify it so that it is a lot more reusable. And so it's reusable analysis rather than reusable software only.
So moving to slide three, this is an explanation of why complementarity is necessary in the dark matter world.
It's because observations, experiments, and theories are all needed for discovery of dark matter. Maybe one experiment on its own can exclude a given model, but if you have a discovery, you have to have everything.
So direct and indirect detection are the only experiments that can discover dark matter of cosmological origin.
You essentially wait for dark matter to be detected, and that dark matter comes from places where you know dark matter is.
It's with colliders and accelerator experiments that one can produce dark matter in controlled conditions and probe the dark interactions.
So you need to have this in order to understand what's in these blocks of the interaction.
We also cannot forget that the observations that motivate dark matter only come from astrophysics and gravitational interactions so far.
And then, in order to put everything in context, you need a theoretical framework. So in our case we're thinking about the WIMP, but it can be extended to more.
So in light of this complementarity, there are two synergistic initiatives that follow the European strategy update. And you see this sort of picture on the left that has the three strategies for astroparticle, particle, and nuclear physics.
This is where the question of what dark matter is gets discussed. They lay out foundations that are common to all: instrumentation and data acquisition, and software and computing.
So there is one part that is needed so that one can define and compare common dark matter implementations.
So this is not the only thing that this initiative will do. It's an initiative called iDMEu that has been created following a workshop that the three communities held together.
And if you go and see the talk on Thursday, you'll learn more about that.
But in order to have the interpretation and the discovery of dark matter, you need the tools that do that.
You need to create the curves that you are going to compare, and this is where the dark matter test science project comes in.
What we want is a comparison of end-to-end analysis workflows, implemented in a software catalogue, that gives input to the design of the European Open Science Cloud.
This will allow us to create the experimental curves with the analysis pipelines that we're building.
So on slide five, this is an end-to-end WIMP analysis workflow.
And this is just a simplified abstraction that would fit in the slide.
But it goes from taking the experimental data with different experiments, to processing this data, to analyzing it, to interpreting it, and not only interpreting it, but also reinterpreting it.
And the reinterpretation can happen at different levels. So you can redo the data analysis, but you can also just take the curves and the information that is public, not necessarily the entirety of the data,
and then interpret this information in a different model.
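To make that abstraction a bit more concrete, here is a minimal sketch in Python of a staged pipeline with multiple entry points. All stage names and interfaces are hypothetical illustrations, not the actual ESCAPE dark matter test science project implementation.

```python
# A minimal sketch of the end-to-end workflow abstraction described above.
# All stage names and interfaces are hypothetical, not the actual
# ESCAPE dark matter test science project implementation.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    name: str
    run: Callable[[object], object]  # consumes the previous stage's output

def take_data(_):       return "raw experimental data"
def process(raw):       return f"processed({raw})"
def analyze(proc):      return f"analysis results from {proc}"
def interpret(results): return f"exclusion curves from {results}"

# Data taking -> processing -> analysis -> interpretation.
PIPELINE: List[Stage] = [
    Stage("data-taking", take_data),
    Stage("processing", process),
    Stage("analysis", analyze),
    Stage("interpretation", interpret),
]

def run_from(entry: str, payload: object = None) -> object:
    """Run the pipeline starting at a given stage.

    Reinterpretation can enter late, e.g. from public curves only,
    without touching the (possibly non-public) raw data at all.
    """
    names = [s.name for s in PIPELINE]
    for stage in PIPELINE[names.index(entry):]:
        payload = stage.run(payload)
    return payload

print(run_from("data-taking"))                                   # full end-to-end run
print(run_from("interpretation", payload="public curves only"))  # reinterpretation
```

The point of the sketch is only that reinterpretation is a different entry point into the same chain, not a separate pipeline.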
So what I have understood, and what I think you've seen in the previous talks, is that not all the workflows included in the dark matter science case may be end-to-end.
They might not go from experimental data to interpretation of results. It could be that they start later, from the interpretation or the reinterpretation.
So we need to make sure that we have defined ways to tell collaborations: if you want to do this, then you will need to implement it this way.
And this also connects to how many resources you need. If you want to reinterpret or refit your data, you probably need different resources than if you need to reprocess the data.
And there's also the other point that for some collaborations, the data processing is behind a wall, in a way.
So it's not completely open to reprocess the data, and there are reasons for doing that. So not all the boxes might come at the same time.
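One way such defined ways could look, purely as an illustration (none of these field names come from an actual ESCAPE specification), is a declaration of which stages a collaboration exposes and roughly what resources each needs:

```python
# Hypothetical sketch of how a collaboration could declare which workflow
# stages it exposes and what resources each needs. None of these field
# names come from an actual ESCAPE specification.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class WorkflowDeclaration:
    collaboration: str
    open_stages: List[str] = field(default_factory=list)    # stages open to outside users
    resources: Dict[str, str] = field(default_factory=dict) # rough needs per stage

decl = WorkflowDeclaration(
    collaboration="ExperimentX",  # hypothetical name
    open_stages=["interpretation", "reinterpretation"],
    resources={
        "reinterpretation": "single node, public likelihoods only",
        "reprocessing": "batch farm plus internal data access (not open)",
    },
)

def can_run(decl: WorkflowDeclaration, stage: str) -> bool:
    """Check a request against what the collaboration has opened up."""
    return stage in decl.open_stages

print(can_run(decl, "reinterpretation"))  # True
print(can_run(decl, "reprocessing"))      # False: behind the wall
```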
So I'll just conclude with slide six, which is the input to the discussion. This comes from a conversation with people in ESCAPE: making data FAIR is relatively straightforward, since you just release a dataset.
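As a deliberately minimal illustration of what releasing a dataset involves on the metadata side, here is a sketch of a FAIR-style record; the field names follow common repository conventions rather than any specific ESCAPE schema.

```python
# Minimal illustration of the metadata that makes a released dataset
# Findable, Accessible, Interoperable and Reusable (FAIR). Field names
# follow common repository conventions, not a specific ESCAPE schema.
import json

record = {
    "title": "Example dark matter exclusion curves",  # hypothetical dataset
    "creators": ["Example Collaboration"],
    "identifier": "doi:10.xxxx/example",  # persistent identifier -> Findable
    "access": "open",                     # access conditions -> Accessible
    "format": "HDF5",                     # community format -> Interoperable
    "license": "CC-BY-4.0",               # clear reuse terms -> Reusable
}

print(json.dumps(record, indent=2))
```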
Accessible via
Open Access
Duration
00:28:38 min
Recording date
2020-07-27
Uploaded on
2020-07-28 01:16:21
Language
en-US
Speaker
Caterina Doglioni, University of Geneva, and discussion with the participants of the workshop
Content
Introduction to the test science project on Dark Matter between various partners of the ESCAPE project, followed by a discussion on the role of community software development in joint projects.
The Workshop
The Workshop on Open-Source Software Lifecycles (WOSSL) was held in the context of the European Science Cluster of Astronomy & Particle Physics ESFRI infrastructures (ESCAPE), bringing together people, data and services to contribute to the European Open Science Cloud. The workshop was held online from 23rd-28th July 2020, organized at FAU.
Copyright: CC-BY 4.0