25 - Recap Clip 5.5: Multi-Attribute Utility (Part 1) [ID:30427]

Mainly, we talked about multi-attribute utilities. Before, we had only one value, one random variable we assessed per utility, and now we are trying to scale that to multiple attributes. That is the typical situation when you have a decision to make that depends on many factors.

I have a very small network of influences here.

This is, again, one of these places where we have to do something fishy; the maths only helps us to a certain degree.

What we would like to have is dominance relations, where we are sure that B is always better than A in utility. Then we would have whole regions that dominate a certain option, so that we can actually search fast: we can simply discard whole regions of the space of all possible configurations. This is what we would like to have.
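As a minimal sketch of this idea (names and the toy options are my own, not from the lecture), here is how strict dominance lets you discard parts of the configuration space in the deterministic case, assuming higher attribute values are better:

```python
# Sketch: strict (Pareto-style) dominance between deterministic
# attribute vectors, used to prune the space of options.

def dominates(b, a):
    """True if b is at least as good as a on every attribute
    and strictly better on at least one (higher = better)."""
    return all(x >= y for x, y in zip(b, a)) and \
           any(x > y for x, y in zip(b, a))

def prune_dominated(options):
    """Keep only options not strictly dominated by some other option."""
    return [o for o in options
            if not any(dominates(p, o) for p in options if p is not o)]

# Toy options as (attribute_1, attribute_2) pairs:
options = [(3, 5), (4, 6), (2, 9), (1, 1)]
print(prune_dominated(options))  # [(4, 6), (2, 9)]
```

Here `(3, 5)` and `(1, 1)` are dominated by `(4, 6)` and can be dropped without ever computing their utility, which is exactly the kind of fast search the dominance relation would buy us.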

There are a couple of factors that are a fly in the ointment. Typically, with more and more dimensions, these situations appear less and less. And since we have uncertainty in our attributes, we have whole regions to look at, and there we are going to get many more overlaps. So we have to do something different.

One of the things you can do is look at center-of-gravity-like arguments, which translate mathematically into integrating over these areas of the probability distributions. If you have two probability distributions whose centers of gravity are close by, you can look at these integrals, and then you see whether one curve dominates the other. Then we can actually take a decision.
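A small sketch of the standard test behind this (the discretized grid and probabilities are my own toy data): first-order stochastic dominance compares the cumulative distributions, i.e. B dominates A if B's CDF lies at or below A's everywhere, meaning B shifts probability mass toward higher-utility outcomes.

```python
from itertools import accumulate

def first_order_dominates(p_b, p_a):
    """p_b, p_a: probabilities over the same ordered outcome grid
    (worst outcome first). B dominates A iff CDF_B <= CDF_A
    everywhere, with strict inequality somewhere."""
    cdf_b = list(accumulate(p_b))
    cdf_a = list(accumulate(p_a))
    eps = 1e-12  # tolerance for floating-point comparison
    return all(b <= a + eps for b, a in zip(cdf_b, cdf_a)) and \
           any(b < a - eps for b, a in zip(cdf_b, cdf_a))

# B puts more mass on higher outcomes than A does:
p_a = [0.4, 0.4, 0.2]
p_b = [0.1, 0.3, 0.6]
print(first_order_dominates(p_b, p_a))  # True
```

The useful property is that if B dominates A in this sense, then B has at least as high an expected utility as A for every increasing utility function, so the decision can be taken without knowing the utility function in detail.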

The concept that comes out of this is stochastic dominance. What we would really need to do is look at stochastic dominance to compute multi-attribute utilities. You can imagine that we have to know quite a lot for that: a lot about the utility function, a lot about the probability distributions, and then we have to integrate. That typically gets prohibitive.

What you really do in most cases is look at qualitative influences first, to cut down on the space of all solutions. Then, if things get close, you can try methods like this on a much smaller space.
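The two-phase idea can be sketched as follows (the helper names and toy data are hypothetical, not from the lecture): a cheap qualitative filter prunes the option space, and the expensive quantitative comparison, which stands in for the integration-based computation, runs only on the survivors.

```python
# Sketch: qualitative pruning first, expensive scoring second.

def choose(options, cheap_filter, expensive_score):
    """Two-phase selection: a cheap qualitative check cuts the space,
    the expensive quantitative score decides among the survivors."""
    survivors = [o for o in options if cheap_filter(o)]
    if not survivors:        # qualitative check was inconclusive
        survivors = options
    return max(survivors, key=expensive_score)

# Toy options as (quality, price); the filter enforces a hard budget,
# the score is a placeholder for the costly utility computation.
options = [(8, 120), (6, 40), (9, 60)]
best = choose(options,
              cheap_filter=lambda o: o[1] <= 100,
              expensive_score=lambda o: o[0] - 0.05 * o[1])
print(best)  # (9, 60)
```

The point is only the control flow: `expensive_score` is evaluated on two options instead of three here, and the saving grows with the fraction of the space the qualitative check can discard.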

Qualitative influences are, as I have just shown you in the example, where you mark in your Bayesian network positive influences that also qualitatively coincide with dominance. Then you can annotate positive and negative influences as a second relation. I have shown this here by making the influences we had anyway red and green, but essentially that is a different relation.
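As a sketch of what such an annotation might look like in code (the variable names and the toy network are my own, not the lecture's example): each edge of the influence network carries a sign as a second relation alongside the probabilistic influence, and signs can be combined along a path.

```python
# Sketch: edges of the influence network annotated with a sign,
# as a second relation next to the probabilistic influence itself.
influences = {
    ("promotion", "price"): "+",
    ("price", "utility"): "-",
    ("quality", "utility"): "+",
}

def sign_along_path(path, influences):
    """Combine edge signs along a path: an even number of negative
    edges yields a net positive qualitative influence."""
    sign = 1
    for a, b in zip(path, path[1:]):
        sign *= -1 if influences[(a, b)] == "-" else 1
    return "+" if sign > 0 else "-"

print(sign_along_path(["promotion", "price", "utility"], influences))  # "-"
```

Reading off such signs is what makes the qualitative pruning step cheap: a chain of annotated edges tells you the direction of an influence without any integration.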

Part of chapter:
Recaps

Accessible via: Open access
Duration: 00:06:03 min
Recording date: 2021-03-30
Uploaded on: 2021-03-31 10:56:50
Language: en-US

Recap: Multi-Attribute Utility (Part 1)

Main video on this topic in chapter 5, clip 5.
