The following content has been provided by the University of Erlangen-Nürnberg.
Thank you very much for the kind introduction, and also many thanks for accepting to host my visit here at FAU.
So first of all, I'm basically a programming language guy, not a high-performance computing guy.
So today I would like to talk about our recent research on implementation techniques for domain-specific languages.
I think most of you already know DSLs very well, but let me first give a very brief background on DSL technology, because some of you are developing DSLs here for various systems, from HPC to embedded systems.
So everybody knows the benefits of using DSLs today.
This is a very brief summary. Why do we use DSLs? Because they provide high-level abstraction.
That is good for programmers: if they use a DSL, programmers can concentrate on their problem domain.
So if the programmers are experts in computer simulation, they can concentrate on how to precisely simulate the phenomena they are interested in without worrying about, let's say, performance.
The problem is that the domain itself is very complicated. For average human beings, thinking about both things at the same time, I mean the simulation and also the performance,
and sometimes productivity, or extensibility toward future hardware (well, you are the hardware experts), is hard.
So what I'm saying is that when we write a program, thinking about many different concerns at the same time is very difficult for human beings.
So ideally, programmers should be able to focus on one concern at a time.
A programmer should be able to concentrate on the simulation concern first, then switch to the performance concern, then switch to concerns about future hardware designs, and so on.
And so the aim of using DSLs is to make this kind of development possible.
With respect to the performance concern, the compiler should usually take over the optimization role.
So programmers concentrate only on the simulation, and exploiting modern hardware should be done automatically by the compiler.
But if you use a DSL, there is another approach, which is development by multiple experts.
So suppose you have a team where some of the members are experts in simulation,
some of them are experts in, let's say, parallel computing, some of them in supercomputing, and so on.
And of course, if you use a general-purpose language like C, C++, Fortran, and so on, working together is really hard.
Even if we have GitHub, we have to talk to people directly about how to write the program, because we have to consider everything at the same time.
But if the design of the DSL is good enough, the code should be very well modularized.
One module describes only the simulation, that is, how to simulate,
and the other module describes only how to use CUDA and so on.
And the DSL compiler takes all the different source code, mixes it together, and generates the best binary.
So that's the ideal goal of this technology.
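To make the idea concrete, here is a minimal, self-contained Java sketch of what such modularization could look like. The interfaces and names are my own illustration, not any particular DSL framework, and a real DSL compiler would generate optimized native or GPU code rather than run inside the JVM.

// A minimal sketch (hypothetical names, not any specific DSL framework) of how an embedded
// DSL in Java could keep "what to simulate" and "how to execute" in separate modules.

import java.util.stream.IntStream;

// Written by the simulation expert: only the numerical update rule for one grid cell.
interface Stencil {
    double update(double[] grid, int i);
}

// Written by the performance expert: only the execution strategy for one sweep.
interface Backend {
    void sweep(double[] src, double[] dst, Stencil stencil);
}

// Simulation module: simple 1D heat diffusion, with no performance code at all.
class Heat implements Stencil {
    public double update(double[] g, int i) {
        return g[i] + 0.25 * (g[i - 1] - 2 * g[i] + g[i + 1]);
    }
}

// Execution module: a parallel sweep over the interior cells (a real DSL could emit CUDA here).
class ParallelBackend implements Backend {
    public void sweep(double[] src, double[] dst, Stencil s) {
        IntStream.range(1, src.length - 1).parallel()
                 .forEach(i -> dst[i] = s.update(src, i));
    }
}

// The "framework" part: combines a simulation module with an execution module.
public class DslDemo {
    public static void main(String[] args) {
        double[] a = new double[64], b = new double[64];
        a[32] = 1.0;                                // initial heat spike in the middle
        Stencil kernel = new Heat();
        Backend backend = new ParallelBackend();
        for (int step = 0; step < 100; step++) {    // time-stepping loop
            backend.sweep(a, b, kernel);
            double[] tmp = a; a = b; b = tmp;       // swap buffers
        }
        System.out.println("centre value after 100 steps: " + a[32]);
    }
}

In this sketch the two modules never mention each other's concerns; swapping ParallelBackend for a sequential or GPU backend would not touch the simulation code.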
And there are two kinds of DSLs. One kind is external DSLs;
the other kind is embedded DSLs. Let me summarize these two kinds of DSLs.
An external DSL is basically a standalone language.
The benefit of an external DSL is that it is fast.
If you spend a lot of development cost, you get a very good compiler,
a better compiler than a typical C++ compiler and so on,
because you can use domain-specific knowledge to optimize the code.
But this benefit is also a drawback, because you have to spend a lot of development cost on the DSL.
At the very least you have to develop a compiler, editors, debuggers, IDE support, Eclipse plugins, IntelliJ plugins, everything.
So it's a huge development cost.
So this approach is feasible only in a domain where many users are expected.
For instance, stencil computing is a big area,
so it makes sense to develop an external DSL for stencil computing.
But if your interest is a very small, niche domain,
then developing an external DSL for that small application domain would not make sense from an economic viewpoint.
Because software development is expensive.
So we always have to consider how much cost we can spend on developing a piece of software.
Well, honestly speaking, if your team has developed new experimental research hardware
and wants to develop an application to demonstrate the performance of that hardware,
then developing a DSL for that dedicated research hardware would not pay off.
Of course, it is a different story if the research product is at its very final stage and ready for industry.
Presenters
Prof. Shigeru Chiba
Accessible via
Open access
Duration
00:50:40 min
Recording date
2016-03-04
Uploaded on
2016-03-09 18:34:29
Language
de-DE
As complex hardware architectures are widely adopted in high-performance computing (HPC), average HPC programmers face serious difficulties when programming in a general-purpose language. Thus domain-specific languages (DSLs) are actively studied as a solution for HPC. DSLs are categorised into external DSLs and embedded DSLs. The latter are easy to develop, but their expressiveness and execution performance are drawbacks. This talk presents two techniques we are developing. The first is protean operators, which give DSLs more flexible syntax, and the second is deep reification, which is a language mechanism that helps DSL developers implement a more efficient DSL. Bytespresso is our prototype system for examining the idea of deep reification in Java. It is a platform for embedded DSLs in which DSL code is offloaded to external hardware for execution after domain-specific translation.
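For readers unfamiliar with the term, "reification" here means that the embedded DSL code is captured as a data structure the DSL implementation can inspect and translate, instead of being evaluated directly inside the JVM. The following is a generic, self-contained Java sketch of that idea; the class names are illustrative only and are not Bytespresso's actual API, and the deep reification technique in the talk goes further than this simple expression-level capture.

// A generic sketch of code reification in an embedded DSL (illustrative names only, NOT
// Bytespresso's API): ordinary-looking Java builds an expression tree, and the DSL
// implementation translates that tree, here to a C function, instead of evaluating it.

abstract class Expr {
    abstract String toC();                          // domain-specific translation to C source
    Expr add(Expr e) { return new BinOp("+", this, e); }
    Expr mul(Expr e) { return new BinOp("*", this, e); }
}

class Var extends Expr {
    final String name;
    Var(String name) { this.name = name; }
    String toC() { return name; }
}

class BinOp extends Expr {
    final String op; final Expr left, right;
    BinOp(String op, Expr left, Expr right) { this.op = op; this.left = left; this.right = right; }
    String toC() { return "(" + left.toC() + " " + op + " " + right.toC() + ")"; }
}

public class ReifyDemo {
    public static void main(String[] args) {
        // This looks like a computation, but it only builds a tree describing the computation.
        Expr x = new Var("x"), y = new Var("y");
        Expr body = x.mul(x).add(y.mul(y));
        // The reified tree can now be translated and offloaded, e.g. handed to a C/CUDA compiler.
        System.out.println("double f(double x, double y) { return " + body.toC() + "; }");
    }
}

Because the whole computation exists as data before it runs, the DSL implementation is free to apply domain-specific optimizations and to choose where the generated code is executed.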