So the leading question for today's meeting is as follows.
What is explainability in the context of AI systems in the legal field?
Why is this principle relevant and how can we ensure that explainability is sustained?
So I think the question of explainability is very different for different legal technologies.
And the question kind of implies that explainability is necessary.
I don't think it always is.
For instance, take legal search. It's a legal tech system, and it will rank certain answers in a certain way.
Maybe you would want to know why it ranked some results on top and others lower, and why it found some documents and not others.
And as a researcher, I would love to have all this information.
But as a user of the system, it's quite unworkable if you need to work through an explanation for everything.
So this is an example of something that is used widely, and we have more or less agreed that we don't need an explanation.
Systems like that exist, and they are quite common.
At the same time, there are systems on the basis of which you might want to make a decision.
For instance, predicting court decisions: let's say you're a lawyer, you predict a decision, and you want to explain to your client why a certain outcome is more likely based on that prediction.
It could be that you don't need an explanation.
As a lawyer, maybe it's your job to provide that explanation.
Maybe what you need is an explanation of how the system came to the decision, but not necessarily a legal explanation of why a certain law would be applied in a certain way to get to that decision.
Maybe as a lawyer, you can do that.
So explanations are important, but there can also be different levels of explanation.
So what we see today is that the explanations built into systems that predict court decisions, or attempt to predict court decisions, are not really grounded in law.
They are never evaluated for legal soundness.
They are mostly evaluated for being similar to how legal reasoning was produced in other cases.
And they are often evaluated using machine translation techniques: the evaluation methods basically compare how many words from the reasoning in the training data are the same as what the system produced.
So this is not necessarily the way you would want to evaluate legal soundness when you actually use these systems to make certain decisions, hopefully not court decisions, but maybe decisions on how to advise your client or something like that.
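To make that word-overlap idea concrete, here is a minimal sketch, not the speaker's actual evaluation pipeline, of the kind of machine-translation-style metric described above: it scores a generated explanation purely by how many words it shares with a reference reasoning, which says nothing about legal soundness. The function name and the example sentences are purely illustrative.

```python
# Minimal sketch of a word-overlap metric in the spirit of BLEU/ROUGE-style
# evaluation: it measures surface similarity to a reference reasoning only,
# not whether the legal argument is actually sound.
from collections import Counter

def word_overlap_f1(generated: str, reference: str) -> float:
    """F1 score over unigram counts shared by the generated and reference texts."""
    gen_counts = Counter(generated.lower().split())
    ref_counts = Counter(reference.lower().split())
    overlap = sum((gen_counts & ref_counts).values())  # shared word occurrences
    if overlap == 0:
        return 0.0
    precision = overlap / sum(gen_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: a high score only means the wording resembles the
# reference reasoning, not that the system's explanation is legally correct.
print(word_overlap_f1(
    "the appeal is dismissed because the claim was filed out of time",
    "the appeal is dismissed since the limitation period had expired",
))
```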
That said, there are cases where these systems simply cannot function without explainability. For instance, if you were to use one in a court, and my personal opinion is not to do that,
but if somebody were to use it in a court, you cannot just tell the parties that this is the decision and leave it at that.
The decision would have to be explained. Right.
So you really can't use a system that doesn't do that.
But then the system would have to be evaluated in a way that establishes its soundness the way lawyers would evaluate it, and to be honest, I don't know of any widespread methods for doing that today.
So I think we're still quite far from that.
And this is to answer the question of why this principle is relevant.
Sometimes you really can't do without it.
But as of today, I think those systems should not be used, and as researchers we need to continue looking into how these systems should be developed so that they could actually be used.
Thank you very much, Masha.