Welcome to the lecture on Secure Multi-Party Computation.
We are now in lecture number five.
My name is Dominik Sødan.
So as always, we will start this lecture with a short review of where we currently are within
this course, and then I'll give an outline of what we have planned for this lecture.
So in the previous lecture, we discussed the notion of security.
And defining security for two-party computation turns out to be non-trivial for several reasons.
First of all, you have to find a security notion that is in some sense valid for all possible
functions.
And it is clear that, depending on the function you wish to compute, a completely different
security model might make sense.
So intuitively, the security notion we started with was the following: we have two parties,
for example P1 holding its private input x and P2 holding its private input y, and they
interact with each other.
And they would like to compute some function, some functionality, on these inputs.
And now the question, of course, is what does it mean to compute this function securely?
The challenge here was that, on an intuitive level, we would say
that both parties should only learn the output of the function.
And even in this case, we have observed that the function might have different outputs
for the individual parties.
So in particular, this function might consist of some output f1(x, y) and some output f2(x, y),
where f1 is the output that P1 receives and f2 is the output that P2 receives.
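To make this setup a bit more concrete, here is a small illustrative sketch in Python; the code is not part of the lecture, and the comparison functionality and all names are placeholder assumptions. It only shows the shape of a two-party functionality with separate outputs f1(x, y) for P1 and f2(x, y) for P2.

```python
# Illustrative sketch (not from the lecture): a two-party functionality
# f(x, y) = (f1(x, y), f2(x, y)), where P1 receives f1 and P2 receives f2.
# The concrete comparison function below is only a placeholder example.
from typing import Callable, Tuple

TwoPartyFunctionality = Callable[[int, int], Tuple[int, int]]

def greater_than(x: int, y: int) -> Tuple[int, int]:
    """Both parties learn the single bit [x > y] (a millionaires'-style example)."""
    bit = int(x > y)
    return bit, bit  # f1(x, y) = f2(x, y) = [x > y]
```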
And we essentially started with the intuition that the adversary, or essentially
the participants, should only learn the output of the function.
But there the challenge was that sometimes the output of the function itself already leaks more.
Typical examples that we discussed, and that are easy to see, are the following.
For example, one could have a functionality that simply outputs the first party's input, or
alternatively a functionality that outputs the second party's input.
And then it is clear that such a function, by its very definition, reveals some information about
the private inputs.
So simply saying that the function only reveals the result is not sufficient.
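As a toy illustration of such leaking functionalities (again only a sketch with made-up names, not taken from the lecture), one can simply write them down as functions whose output is one of the private inputs:

```python
from typing import Tuple

# Functionalities whose output, by definition, reveals a private input.
def reveal_first_input(x: int, y: int) -> Tuple[int, int]:
    # P2's output is exactly P1's private input x; P1 learns nothing new.
    return 0, x

def reveal_second_input(x: int, y: int) -> Tuple[int, int]:
    # P1's output is exactly P2's private input y; P2 learns nothing new.
    return y, 0
```

Any protocol that correctly computes one of these functionalities must hand one party the other party's input, so "nothing about the inputs is revealed" cannot be the right requirement.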
And the security notion that we came up with essentially followed the simulation paradigm.
And this paradigm essentially says the following.
Yes, it might be the case that you learn more than would ideally be the case, more than
only the output of the functionality.
But the point is that this additional information is something you can essentially deduce
immediately from the output of the computation.
And this means that you gain no additional information by participating in this interaction.
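Written out in the usual notation (this is the standard textbook formulation of the simulation-based definition, not a verbatim quote from the slides), the requirement is roughly that for every adversary A attacking the real protocol π there exists a simulator S in the ideal world such that the two output distributions are computationally indistinguishable:

```latex
% Standard simulation-based security (sketch of the usual formulation):
% for every (efficient) adversary A against the real protocol \pi there is
% a simulator S in the ideal world computing f such that
\[
\bigl\{ \mathrm{IDEAL}_{f,\mathcal{S}}(x, y) \bigr\}_{x, y}
  \;\stackrel{c}{\approx}\;
\bigl\{ \mathrm{REAL}_{\pi,\mathcal{A}}(x, y) \bigr\}_{x, y}.
\]
```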
And the way this is then formalized is the real/ideal paradigm, comparing a real world
with an ideal world.
Here the observation was that you have an ideal world where essentially a trusted
party is computing everything for you.
This means P1 sends its private input x, P2 sends its private input y, and then they
both receive their respective outputs.
And it is clear that this is the ideal world: given that the channels between the trusted party
and the individual parties are secure channels, we cannot achieve anything better.
This is the best version we can hope for.
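As a tiny sketch of this ideal world (purely illustrative; the function names and the Python modelling are my own assumptions, not the lecture's notation), the trusted party can be pictured as follows:

```python
from typing import Callable, Tuple

def ideal_world(x: int, y: int,
                f: Callable[[int, int], Tuple[int, int]]) -> Tuple[int, int]:
    # The trusted party receives x from P1 and y from P2 over secure channels,
    # evaluates f locally, and hands back only the respective outputs.
    out_p1, out_p2 = f(x, y)
    return out_p1, out_p2  # out_p1 is delivered to P1, out_p2 to P2
```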
And now the security notion in some sense said: well, whatever you can learn in the real world,
by actually running the protocol, you could also learn in this ideal world.