Thanks for having me here. My background is a little different, so my talk is going to be a little different as well. I decided to introduce some of the interesting challenges we face when working with neuromorphic computing; what exactly that means I will explain in a few moments. Because of that, I'm not going to talk about one specific deep learning technology or about scientific work I did myself, but rather try to raise some interesting questions. And since the term neuromorphic computing, or neuromorphic hardware, is probably new to many of you, I will give some introduction at the beginning. The talk is structured so that there are a couple of different topics or examples at the end; if we run out of time because we spend longer on one topic that interests you, that's completely fine. So feel free to ask questions whenever you're interested. With that, let me dive in.
I want to start with a bit of motivation: why do we even care about hardware? It's not just a dry topic; there are good reasons to be interested in hardware structures as well, not just in models on the theoretical side. For me, the motivation comes from looking at nature. Take the little animal you see on the right: an insect, a fairly normal-looking flying insect, except that this picture is not a photograph but an electron microscopy image, because the insect is really tiny. It is about 300 micrometers in length; for reference, a human hair can be on the order of 100 micrometers thick, so this tiny wasp is not much larger than three human hairs. Despite its small size, it is a complete animal: a parasitic insect with all the senses you would expect an insect to have. It has vision, touch, and smell, and it does all of this with relatively few neurons, on the order of 7,400, of which about 4,600 are in the brain and the rest are distributed throughout the body. That is enough to let this tiny animal carry out all sorts of behaviors: it can fly, find food, find mates, and find hosts in which it then lays its eggs. Of course, the engineer in each of us wonders: shouldn't it be possible to build brains like this artificially? That is a vision people have had for a very long time, at the very least since the 1950s, when it became a realistic option to process information in brain-like structures in electronic hardware. And that motivation, I think, is guiding a lot of AI hardware development.

Before going further, I briefly want to say who "we" is when I say we are interested in this topic. I'm working at Fraunhofer, as Leon already said.
If you're not familiar with Fraunhofer, it has a very deeply nested hierarchy, so things can get a bit confusing: I'm working in an embedded AI group, but that group sits in a department for broadband and broadcast. That was a curious thing for me, because my background is actually in theoretical neuroscience, yet when I started there my job description was "modem designer", which doesn't have much to do with what I actually do. This is a picture of me from pre-corona times, but since we have video you don't really need it. Now, AI is a very broad topic, and people work on it at all sorts of levels of complexity; at Fraunhofer, too, we work on different topics with different technologies. But I only want to look at one of them, and that is neuromorphic ASICs; the reason will become clear in a moment. The application area for this sort of technology is rather low-level: close to the sensor, where every joule of energy consumption counts. Typically we are talking about somewhat simpler AI tasks, not the super-complicated ones like language models or large recommender systems, but rather processing close to a sensor. This is just to give you some background so that you understand where I'm coming from.

Maybe now I should define what I'm actually talking about.
What is neuromorphic hardware? The term itself is composed of three different parts. The "neuro" in neuromorphic, as you can imagine, means that we take inspiration from neuroscience to some degree, and we take a lot of liberty with that. So when people talk about neuromorphic hardware, what they really mean is hardware that typically implements …
Access: open access · Duration: 01:17:23 · Recorded: 2021-01-13 · Uploaded: 2021-01-18 15:48:59 · Language: en-US
Johannes Leugering on "Neuromorphic Computing and its Mathematical Challenges"
The stellar rise of Deep Learning in recent years has produced models numbering millions, if not billions, of parameters and consuming vast amounts of compute resources. Consequently, a lot of theoretical work and implementation effort has been invested into optimizing these models for efficient execution on CPUs and GPUs – in terms of operations performed, memory used, or the number of parameters. But instead of thus adapting our models to the available hardware, we could also develop novel hardware on which to execute our models efficiently – that is the premise of Neuromorphic Computing.
In this talk, I’d like to present the basic idea behind Neuromorphic Computing and highlight some of the mathematical challenges for Deep Learning that arise in this context.