5 - User-generated Evidence in the Age of Generative AI - Human Rights Talks 2023/2024: Artificial Intelligence as a Human Rights Chance or Challenge? [ID:55657]

Hello and welcome to this video of the Human Rights Talks 2023/2024.

Today we will be explaining to you the state of user-generated evidence in the age of generative

AI.

Firstly, we would like to give you a brief overview of what user-generated evidence is,

then we will delve into its problems regarding admissibility.

Then we're going to look at one compounding problem, which is that of deepfakes in the

modern era and how that leads to a set of challenges and solutions that user-generated

evidence has to face, including one novel route, open source intelligence or OSINT and

open source verification.

First and foremost, user-generated evidence refers to any form of content created and

shared by individuals rather than professionals or official entities.

Its primary significance lies in its ability to facilitate the real-time documentation

of incidents as they occur.

This content includes audio recordings, digital images and video recordings, and it can originate

from various sources.

In the context of human rights, the focus is predominantly on direct sources, such as

individuals using smartphones or dash cams to record incidents.

Additionally, user-generated evidence can encompass content uploaded to social media platforms.

By encouraging public participation in the justice system and empowering individuals,

user-generated evidence plays a crucial role in enhancing access to justice.

But in order to be valuable to the overall case, the evidence needs to be admissible

and authenticated.

However, for user-generated content, problems arise in this matter.

In general, to be admissible, evidence must be relevant to a fact in issue and needs to

serve a purpose.

So in all its evidentiary forms, audiovisual content must demonstrate that it is what it

purports to be.

The usual way that evidence can be proven to be authentic is by interviewing an eyewitness,

which often is not possible or would be inappropriate for ethical reasons in the context of human

rights cases.

For the authentication of audiovisual evidence, courts will have to look at other verification

methods, such as metadata analysis or the chain of custody.

Metadata means data about data and is defined as the data providing information about one

or more aspects of the original piece of data.

It is used to summarize basic information about data that can make tracking and working

with specific data easier.

To ensure authenticity in the metadata analysis, data such as timestamps, device information

and file history are examined.

So the metadata of the photo, for example, would include the specific time and date of

the original creation down to seconds.
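The timestamp check just described can be sketched in a few lines of Python. This is a minimal illustration using file-system metadata only; embedded photo metadata such as EXIF would need a dedicated parser, and the file here is a throwaway stand-in, not real evidence:

```python
import os
import tempfile
from datetime import datetime, timezone

# Create a stand-in file so the sketch is self-contained
# (in practice this would be the photo under examination).
with tempfile.NamedTemporaryFile(suffix=".jpg", delete=False) as f:
    f.write(b"\xff\xd8\xff\xe0")  # placeholder bytes, not a real image
    path = f.name

# os.stat exposes one layer of "data about data":
# file size and timestamps, resolved down to the second.
info = os.stat(path)
modified = datetime.fromtimestamp(info.st_mtime, tz=timezone.utc)

print(f"size: {info.st_size} bytes")
print(f"modified: {modified.isoformat(timespec='seconds')}")

os.remove(path)  # clean up the stand-in file
```

An investigator would compare such timestamps against the claimed time of the incident; a mismatch does not prove fabrication on its own, but it flags the file for closer scrutiny.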

It's proven to be problematic that many pieces of user-generated evidence, which courts may

want to consider, often lack detailed metadata or may not contain any metadata at all.

Additionally, most social media platforms strip out metadata from images and compress

them when they are shared.

It then becomes very hard for open source investigators to determine who is pictured

in the photo, what they're doing and when and where it was taken.

Under these circumstances, authentication proves to be more difficult.

For further authentication, one can also look at the chain of custody in which we focus

on the documentation and the integrity of the evidence.

For the documentation, the aim is to keep a detailed record of who has handled the evidence.
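The two chain-of-custody concerns just mentioned, documentation and integrity, can be sketched together. The handlers, actions and hashing scheme below are illustrative assumptions, not any court's actual procedure; the idea is simply that pairing each handling record with a cryptographic hash makes later tampering detectable:

```python
import hashlib
from datetime import datetime, timezone

# Stand-in bytes for the piece of evidence (e.g. a video file).
evidence = b"stand-in bytes for a video file"

# Fingerprint taken at collection time; any later change to the
# bytes would produce a different SHA-256 digest.
fingerprint = hashlib.sha256(evidence).hexdigest()

custody_log = []

def record_transfer(handler: str, action: str) -> None:
    # Each entry documents who handled the evidence, when,
    # and the hash of the evidence at that moment.
    custody_log.append({
        "handler": handler,
        "action": action,
        "time": datetime.now(timezone.utc).isoformat(timespec="seconds"),
        "sha256": hashlib.sha256(evidence).hexdigest(),
    })

record_transfer("field investigator", "collected")
record_transfer("forensic analyst", "verified")

# Integrity holds as long as every logged hash matches the
# fingerprint taken at collection.
intact = all(entry["sha256"] == fingerprint for entry in custody_log)
print("chain intact:", intact)
```

If any handler's entry showed a different hash, the log itself would reveal at which step the evidence was altered.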

Accessible via: Open Access

Duration: 00:13:14 min

Recording date: 2024-11-29

Uploaded on: 2024-11-29 11:41:28

Language: en-US

Tags: artificial intelligence, Human Rights, Human Rights Talks