Hello everyone, welcome back to the lecture series on computer vision.
This is lecture 2 part 3.
We have been talking about colors.
In this lecture we are going to talk about how colors are stored in digital format and
what different color spaces are available for doing that.
Let's go ahead.
So what is color?
And do we even really care about color or not?
Or do we really care about how human vision responds to colors?
What is the connection?
Computer vision is largely about mimicking human vision, and cameras are designed to mimic
the human eye, right?
So let's look at some of these things.
But do we really care about human vision?
Not necessarily, but biological vision shows that it is possible to make important judgments
about images.
Using colors, we can interpret interesting things, interesting aspects of objects like
we saw in the previous lecture by studying the reflectance properties, the spectral profiles
of objects.
We can actually recognize and detect those objects.
There is a very interesting field in that direction called hyperspectral imaging.
And it's a human world, right?
So we have designed tools like cameras to imitate the frequency response of the human
eye.
And when you look at how the camera is designed, you will realize that it closely follows
the working of human vision: capturing the light, storing it on a sensor array, and processing
it digitally.
How does color sensing happen in a camera? That should be the next question, right?
So in cameras, basically, there are different types of cameras out there.
There are single-chip versus three-chip cameras.
Both of these types of cameras have their pros and cons.
A one-chip camera basically has one sensor array on a single chip.
It makes it easy to capture the intensity of light.
But the result is not as refined: the dynamic range and the resolution are lower.
In exchange, it gives you an advantage of cost over quality.
The difference between three-chip and one-chip cameras is just how the sensor areas are
organized in the form of grids.
That organization is also what makes them expensive or cheap, depending on how sophisticated
the arrangement is.
And why are there more green sensors on the sensor array?
And why are there only three colors, right?
When we look at the working, or the design, of the camera, we see that there is one signal
processor and a sensor area that captures three different colors in this form.
This grid is called a Bayer filter; it is the basic camera sensor array that sits at the
back of the camera.
It captures the light incident on it, converts it into a digital format, and stores it.
And how does this work?
So the Bayer grid basically has three colors.
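To make the Bayer grid concrete, here is a minimal NumPy sketch (not from the lecture, just an illustration) that simulates what a single-chip sensor records: each pixel keeps only one of the three color samples, following the common RGGB tiling. Note that every 2x2 tile contains two green sites but only one red and one blue, which reflects the human eye's higher sensitivity to green.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate a single-chip Bayer sensor readout.

    Keeps exactly one color sample per pixel, following the RGGB
    pattern (each 2x2 tile is: R G / G B). Input is an H x W x 3
    RGB image with even H and W; output is a single H x W plane.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red   at even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green at even rows, odd  cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green at odd  rows, even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue  at odd  rows, odd  cols
    return mosaic

# Small random test image standing in for the incident light.
rgb = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
m = bayer_mosaic(rgb)

# Half of all sensor sites sample green; a quarter each sample red and blue.
green_fraction = 0.5
```

A real camera pipeline would then demosaic this single plane, interpolating the two missing colors at each pixel to reconstruct a full RGB image.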
Duration: 00:16:02
Recording date: 2021-04-19
Uploaded: 2021-04-19 13:06:57
Language: en-US