Thanks for the introduction and for having me here. So the title of my talk is Proof-Carrying
Services, and this is also the name of a sub-project in the Collaborative Research Centre on
On-The-Fly Computing that we are currently running in Paderborn. So I would like to talk about three
things. First, to give a very brief, and therefore also rather shallow, overview of
the CRC, what the mission is and what kind of research is being done there. Then I will
focus more on what is the title of my talk, Proof-Carrying Services. I will basically explain
what we understand by Proof-Carrying Services, and within that a special topic, Proof-Carrying
Hardware, which is more or less my area of research in this sub-project. And then, in the
third part, I will zoom in even more and talk about one specific topic in this Proof-Carrying
Hardware area, which is actually a combination of Proof-Carrying Hardware and Proof-Carrying
Code, so verification of software and hardware altogether. And finally we will sum up. So starting
with the vision of our CRC. So the vision is that, in the future, software will not come,
as it does today, mostly in huge monolithic packages. Instead, it will be configured and
executed on the fly out of very small services. That means that software should become a
highly personalized and individualized service, and the small services that are available
would be traded on worldwide markets. This is the vision that drove the setup of the CRC.
Such markets do not exist today, so another issue is that such markets need to be organized:
the players on the market need to be identified, as well as the incentives, the business
models, and so on. So putting
this together, there's of course a number of known research areas that would contribute
to such a vision. For example, the configuration of software services. There is today something
like software-oriented architectures, which could be seen as a starting point for these
kind of service markets. The execution, that could be cloud computing, but we envision
not a computing infrastructure as cloud computing today, which is mostly homogeneous machines.
We envision much more higher dynamics and heterogeneity. Distributed computing has a
lot of things to offer for the technical markets infrastructure, which is networks and distributed
computing. And mechanism design is a main underlying area for organization of markets.
I would like to start with an example of how we envision this could work. So there is a
customer. The customer could be, in this example, say the community of Paderborn that wants
to optimize or lay out the water supply network, the pipes and all kinds of towers and so on.
That would be the customer. This customer then goes to a specific person, which is in our
terminology a service provider for optimization. So the water pipes need to be optimized,
and that is a person or an entity that can provide these optimization services
domain-specifically. So the domain is optimization. This person knows, after talking to the
customer, that in order to do this kind of optimization job, the following things are needed:
a data management system, model generation, then solvers, depending on what kind of models
these are, continuous, discrete, et cetera. Visualization is a typical part here, as is
decision support. And eventually
the complete application that is then configured has to be executed somewhere. So in order
to come up with an offer for the customer, the service provider would then contact, on the
worldwide markets, providers for single software components and also for compute services.
There might be many offers for solvers around. And the service provider would make sure
that the things that are selected work together properly and fulfill the request of the
customer. So this was actually our application example in the first phase of the CRC,
because we had expertise in that area. But of course, this is not really the on-the-fly
market that we are wishing for in the future, because in this optimization business there
are still quite big software packages: optimization solvers, big visualization tools, and
so on. And also, this looks like a one-dimensional chain; in general, it could be a graph,
as sketched below.
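To make the composition idea concrete, here is a minimal sketch of a service composition modeled as a graph, assuming a simple DAG representation; the names (ServiceOffer, execution_order) and the Python encoding are purely illustrative and not part of the CRC 901 tooling.

```python
# Hypothetical sketch: an on-the-fly composition of traded services as a DAG.
from dataclasses import dataclass, field

@dataclass
class ServiceOffer:
    name: str                                    # e.g. a solver offered on a market
    provider: str                                # the market participant offering it
    inputs: list = field(default_factory=list)   # upstream services it depends on

def execution_order(sinks):
    """Topologically sort the composition graph, so every service runs
    only after all of the services it depends on have run."""
    order, seen = [], set()
    def visit(s):
        if s.name in seen:
            return
        seen.add(s.name)
        for dep in s.inputs:
            visit(dep)
        order.append(s)
    for s in sinks:
        visit(s)
    return order

# The water-supply optimization example as a (here linear) composition;
# in general the inputs may branch, turning the chain into a graph.
data  = ServiceOffer("data management", "provider A")
model = ServiceOffer("model generation", "provider B", [data])
solve = ServiceOffer("solver", "provider C", [model])
vis   = ServiceOffer("visualization", "provider D", [solve])

for step in execution_order([vis]):
    print(step.provider, "->", step.name)
```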
Presenter
Prof. Dr. Marco Platzner
Accessible via
Open access
Duration
01:09:31 min
Recording date
2015-07-24
Uploaded on
2015-10-06 12:18:33
Language
de-DE
The vision of the Paderborn-based Collaborative Research Centre (CRC) 901, On-The-Fly Computing, is the nearly automatic configuration and execution of individualized IT services, which are constructed out of base services traded on worldwide markets. In this talk, we first give a brief overview of the goal and the research challenges of the CRC 901 and outline its structure.
Then we turn to proof-carrying services, our approach to guaranteeing service properties in the On-The-Fly Computing scenario. The approach builds on the original concept of proof-carrying code, where the producer of a service creates a proof showing that the service adheres to a desired property. The service, together with the proof, is then delivered to a consumer, who will only execute the service if the proof is both correct and actually about the desired property. On-The-Fly Computing requires us to establish trust in services at runtime and benefits from a main feature of the proof-carrying code concept: shifting the burden of creating a proof to the service producer at design time, while at runtime the consumer performs the comparatively easy task of checking an existing proof.
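The design-time/run-time asymmetry at the heart of proof-carrying code can be illustrated with a deliberately tiny model, assuming the desired property is witnessed by an easy-to-check certificate; here the "proof" is a satisfying assignment for a small CNF formula, so producing it requires search while checking it is a single linear pass. All names below are hypothetical and not taken from an actual PCC system.

```python
# Toy model of the proof-carrying code workflow: the producer searches for
# a certificate at design time, the consumer checks it cheaply at run time.
from itertools import product

def produce_proof(cnf, n_vars):
    """Producer, design time: exhaustive search for a witness (expensive)."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in cnf):
            return bits
    return None  # no proof exists: the property does not hold

def check_proof(cnf, proof):
    """Consumer, run time: one linear pass over the certificate (cheap)."""
    return proof is not None and all(
        any(proof[abs(lit) - 1] == (lit > 0) for lit in clause)
        for clause in cnf)

# Property encoded as CNF over variables 1..3 (a negative literal negates).
property_cnf = [[1, -2], [2, 3], [-1, -3]]

proof = produce_proof(property_cnf, 3)   # hard direction, done by the producer
if check_proof(property_cnf, proof):     # easy direction, done by the consumer
    print("proof checks out -> execute the service")
else:
    print("reject the service")
```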
We extend the concept of proof-carrying code to reconfigurable hardware and show a tool flow to prove functional equivalence of a reconfigurable module with its specification. As a concrete example we detail our work on combined verification of software and hardware for processors with dynamic custom instruction set extensions. We present three variants for combining software and hardware analyses and report on experiments to compare their runtimes.
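To illustrate the equivalence-checking step, the following sketch compares a toy "module" against its specification by enumerating all inputs of a small bit-width; a real proof-carrying hardware flow would instead build a miter circuit and discharge it with a SAT solver. The function names and the 4-bit width are assumptions made for this example.

```python
# Minimal combinational equivalence check: specification vs. implementation.
def spec_add(a, b, width=4):
    """Specification: addition modulo 2**width."""
    return (a + b) % (1 << width)

def impl_add(a, b, width=4):
    """'Reconfigurable module': a ripple-carry adder built bit by bit."""
    result, carry = 0, 0
    for i in range(width):
        x, y = (a >> i) & 1, (b >> i) & 1
        result |= (x ^ y ^ carry) << i            # sum bit uses the old carry
        carry = (x & y) | (carry & (x ^ y))       # then the carry is updated
    return result

def equivalent(spec, impl, width=4):
    """Miter-style check: any input where the outputs differ is a
    counterexample; if none exists, the module matches its spec."""
    for a in range(1 << width):
        for b in range(1 << width):
            if spec(a, b, width) != impl(a, b, width):
                return False, (a, b)
    return True, None

ok, cex = equivalent(spec_add, impl_add)
print("equivalent" if ok else f"counterexample: {cex}")
```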