Report on the WG1.8 Workshop at CONCUR 2007---Part 1
Preface
As you may know, Jos Baeten, Wan Fokkink, Anna Ingolfsdottir, Uwe Nestmann and I organized a Workshop on Applying Concurrency Research in Industry, co-located with CONCUR 2007 in Lisbon, on behalf of WG1.8. The workshop was held on the afternoon of Friday, 7 September, and ran concurrently with the last two sessions of the main conference. Despite the competition with CONCUR, we had about 25 participants at the workshop, including several members of the working group. I think that this was a rather decent level of attendance for a "strategic event".
In this post, for the benefit of the WG members and of the community as a whole, I'll try to summarize the main discussion points raised during the presentations at the workshop and the ensuing panel discussion. I will also try to recall some of the questions that the audience asked the panel members. I hope that some of the blog's readers will contribute their opinions on these points and offer their own answers to those questions.
The organizers of the workshop will use all of the contributions that they'll receive in putting together an article for the concurrency column of the BEATCS.
Report on the Invited Talks
Note: I encourage the speakers to correct anything they said that I may have misinterpreted or misrepresented. I take full responsibility for any error I might have made in relaying the gist of the invited talks, and I'll be happy to post corrections and further comments. This is meant to be a live repository.
The workshop began with four invited presentations delivered by Hubert Garavel, Vincent Danos, Kim G. Larsen and Jan Friso Groote.
Hubert gave an overview of his twenty years of work on the CADP tool set, the oldest concurrency-theoretic tool set still in active use. A thread underlying his work in what he called "applied concurrency theory" is that one must push the main results of our research area to industry, and that this is only possible with the development of strong tool support for our methods. Hubert said that one of his design principles has been to restrict the tool's functionality for efficiency reasons, and that elegant semantics and efficient execution are two separate (at times conflicting) issues.
I have a note to the effect that, during Hubert's presentation, somebody (possibly Hubert himself) said that a lot of code doing bisimulation minimization has been lost over the years. We simply have not been consistent enough in preserving our earlier tooling efforts for present generations of developers. Jan Friso said that current bisimulation minimization algorithms do not scale to the size of the systems that are currently being analyzed, and asked whether it would be appropriate to rekindle research and implementation efforts on efficient and scalable bisimulation algorithms.
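For readers who want to see what is at stake, below is a minimal sketch of the naive fixpoint computation of strong bisimulation classes. It is my own illustration, not CADP's or mCRL2's code, and it is precisely the kind of algorithm that does not scale; the point of Jan Friso's question is whether the community should reinvest in Paige-Tarjan-style (and more scalable) implementations.

```python
def bisimulation_classes(states, actions, trans):
    """Return the coarsest partition of `states` whose blocks are
    classes of strong bisimilarity.  `trans` maps (state, action)
    to a set of successor states."""
    partition = [set(states)]          # start: all states equivalent
    changed = True
    while changed:
        changed = False
        block_of = {s: i for i, blk in enumerate(partition) for s in blk}

        def signature(s):
            # Which blocks can s reach, per action?
            return frozenset(
                (a, frozenset(block_of[t] for t in trans.get((s, a), ())))
                for a in actions
            )

        # Split every block by signature until a fixpoint is reached.
        new_partition = []
        for blk in partition:
            groups = {}
            for s in blk:
                groups.setdefault(signature(s), set()).add(s)
            new_partition.extend(groups.values())
            if len(groups) > 1:
                changed = True
        partition = new_partition
    return partition

# Tiny example: states 1 and 2 are bisimilar; 3 is not.
trans = {(1, "a"): {3}, (2, "a"): {3}, (3, "b"): {3}}
print(bisimulation_classes({1, 2, 3}, {"a", "b"}, trans))
# two classes: {1, 2} and {3}
```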
Earlier that day, Vincent had delivered an inspiring tutorial on his work on rule-based analysis of biological signalling. Listening to his tutorial, I was left with the impression that he is really having an impact in the life sciences, and that experimental biologists might very well use his tools based on concurrency-theoretic ideas. At the workshop, Vincent presented another way in which concurrency-theoretic ideas can help experimental biologists in their work. Experimental biology has a huge knowledge representation problem. (Vincent mentioned that two papers are published in that area each minute!) In his opinion, experimental biologists can and should
- use concurrency-inspired languages to express biological understanding (a toy sketch of this idea follows the list) and
- display this information in a wiki-type system.
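To make the first point concrete, here is a toy rendering of the rule-based idea, entirely my own and much cruder than Vincent's Kappa calculus: biological knowledge is written down as local rewrite rules over a population of species, and a simulator applies them. The species names and rates are invented for illustration.

```python
import random
from collections import Counter

# Each rule: (name, consumed species, produced species, rate constant).
RULES = [
    ("bind",   Counter({"A": 1, "B": 1}), Counter({"AB": 1}),        1.0),
    ("unbind", Counter({"AB": 1}),        Counter({"A": 1, "B": 1}), 0.1),
]

def applicable(state, consumed):
    # A rule is enabled when the state holds enough of each consumed species.
    return all(state[s] >= n for s, n in consumed.items())

def step(state):
    """Fire one rate-weighted applicable rule, if any."""
    enabled = [r for r in RULES if applicable(state, r[1])]
    if not enabled:
        return state
    _, consumed, produced, _ = random.choices(
        enabled, weights=[r[3] for r in enabled])[0]
    return state - consumed + produced   # multiset rewrite via Counter

state = Counter({"A": 50, "B": 50})
for _ in range(200):
    state = step(state)
print(state)
```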
Kim's talk was based on his experience with the ten-year development of the Uppaal tool, and reported on the knowledge-transfer activities that are part of his involvement in CISS. Apart from surveying the development of Uppaal and its recent offspring, Kim's talk sent the following general messages to the audience.
- Tools are a necessary condition for the successful transfer of concurrency-theoretic ideas to industry. Tool development is labour-intensive, and one needs the sustained effort of many people over many years to produce good software tools.
- Academic courses offered to students and to industry practitioners play a key role.
- Concurrency researchers should try to target different communities of potential users. One never knows where successful applications will come from.
- A good beginning is useful! Being able to start with a success story may lead to further successes and long-term collaborations. However, Kim warned against assuming that a good beginning guarantees a good ending, and recounted the story of the Aalborg collaboration with Bang and Olufsen, who disappeared from sight after Klaus Havelund, Arne Skou and Kim found and fixed a bug in one of their protocols. See here.
- The success of CISS shows that several companies are very interested in applying concurrency-theoretic ideas and tools because this reduces time to market and increases the quality of their products.
- The impact of one's work on the application of concurrency-theoretic research in industry is not always directly proportional to the amount of effort one puts into the work itself. Kim gave the example of the synthesis of control software that regulates the temperature and humidity in an actual pig stable in Denmark. This software was synthesized using Uppaal Cora in a short time and is actually running to the satisfaction of its customers :-)
- Finally, Kim called for an expansion of the scope of concurrency theory. We should link our work to testing, optimization and so on, and embed it into standard software-engineering methodologies that are familiar to practitioners.
Some Questions to the Speakers
Here are some questions that were addressed to the speakers during the panel discussion, in no particular order. I encourage readers of this report to post their own answers and further questions as comments to the post. Later on, I will post the answers from the panel members as I recall them.
- What is the role/importance of real-time in modelling? Does industry want dense-time or discrete-time models?
- How does one involve small- and medium-size companies in collaborations with concurrency theoreticians/practitioners? Does "company size" matter?
- Is there any need for stochastic and probabilistic modelling in applications? More pointedly, have you met an example that you could not model because your tool does not support stochastic or probabilistic phenomena?
- How can we, as a community, foster the building of industrial-strength tools based on sound theories?
- What has concurrency theory offered industry so far? What are the next steps that the concurrency community should take in order to increase the impact of its research in an industrial setting? And what are future promising application areas for concurrency research?
Addendum 14/9/2007: After I wrote this post, it occurred to me that the workshop discussion may have given the impression that industrial impact can be achieved solely by means of tools and joint case studies. Moshe Vardi's work on specification languages like ForSpec, on the other hand, indicates that the development of theoretically sound and clean specification languages that are actually used by industry is another area in which the community can (and, I believe, should) have an impact.
I hope I can quote Moshe as saying:
"In fact, I believe that much of my industrial impact has been achieved through the development of clean and useful theory."
Comments:
What an interesting workshop; I deeply regret not being able to attend.
At Luca's gentle suggestion ;-) I thought I would have a go at the questions he posed at the end of his post.
What is the role/importance of real-time in modelling? Does industry want dense-time or discrete-time models?
In my experience with Reactis, my company's testing and verification tool, Reactis customers absolutely need real-time support. This is because of their applications: they work in the automotive and aerospace sectors, developing embedded control software. The most widely used commercial modeling languages (Simulink, Stateflow, SCADE) also include real time as an intrinsic part of their semantics.
Ironically, given the sound and fury in the academic community, the industrial people I have interacted with for the most part do not care whether time is discrete or continuous. Sometimes, I have encountered customers who want to do hybrid-systems style modeling, and for these people continuity is important.
How does one involve small- and medium-size companies in collaborations with concurrency theoreticians/practitioners? Does "company size" matter?
Regarding SMEs (small- and medium-size enterprises, a common acronym among US policymakers): I think the best way to involve them is via projects funded by third parties (governments, or a large partner). SMEs generally don't have the resources to support "blue-sky" research, and their investment-return horizons are necessarily of shorter duration. At both Reactive Systems and Fraunhofer, our concurrency-oriented SME collaborations have either involved collaborations on government research grants or project work on behalf of a larger customer. In the latter cases, it was important that we work in commercial notations (e.g. Simulink) rather than research-oriented ones.
Large companies do have resources to put into more basic research, but there is another phenomenon to be aware of: researchers in these companies often view outside researchers as competitors for their internal research funds. So collaborations with these organizations are highly dependent on the personal connections between company and non-company researchers. So-called "business unit" personnel are often the easiest to deal with, but in this case there needs to be a clear, typically short-term, pay-off to them for the collaboration.
Is there any need for stochastic and probabilistic modelling in applications? More pointedly, have you met an example that you could not model because your tool does not support stochastic or probabilistic phenomena?
We support simple probabilistic modeling in Reactis in the form of probability distributions over system inputs that we sample when creating tests. This feature, however, is almost never used by our customers. The reasons for this mostly boil down to the lack of training these engineers receive in stochastic modeling and control, which in turn is tied to the lack of a good (or at least standard) theory of stochastic differential equations.
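For concreteness, here is a minimal sketch of that feature as I have described it, not Reactis's actual implementation; the input names and distributions are invented. Each system input carries a distribution, and a test is produced by sampling every input at each step.

```python
import random

# Hypothetical input profile: each system input maps to a
# zero-argument sampler drawn from its distribution.
INPUT_PROFILE = {
    "throttle": lambda: random.uniform(0.0, 100.0),  # percent
    "brake":    lambda: random.choice([0, 1]),       # released / pressed
    "ambient":  lambda: random.gauss(20.0, 5.0),     # degrees Celsius
}

def generate_test(profile, steps):
    """Build a test: one input valuation per sampled time step."""
    return [{name: sample() for name, sample in profile.items()}
            for _ in range(steps)]

test = generate_test(INPUT_PROFILE, steps=10)
for valuation in test[:3]:
    print(valuation)
```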
To be more precise about the training gap: the engineers in automotive and aero that I've dealt with are usually mechanical or electrical engineers with backgrounds in control theory. The feedback control they use relies on plant models (i.e. "environments") given as differential equations, which are deterministic. The plant models they devise for testing their control-system designs often have parameters that they tweak in order to test how their ideas work under different conditions.
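A toy version of that workflow, with an invented first-order thermal plant standing in for a real model:

```python
def simulate_plant(k, ambient, t_end, dt=0.01, temp0=90.0):
    """Forward-Euler integration of the plant dT/dt = -k * (T - ambient)."""
    temp, t, trace = temp0, 0.0, []
    while t < t_end:
        temp += dt * (-k * (temp - ambient))
        t += dt
        trace.append(temp)
    return trace

# Tweak the cooling coefficient k across runs: different (still fully
# deterministic) plant conditions for exercising a controller design.
for k in (0.1, 0.5, 1.0):
    print(k, round(simulate_plant(k, ambient=20.0, t_end=5.0)[-1], 2))
```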
These engineers talk in the abstract about how useful it would be to develop analytical frameworks for probabilistic plants, but tractable theories of probability spaces of differential equations are unknown, as far as I can tell.
How can we, as a community, foster the building of industrial-strength tools based on sound theories?
To have an industrial following, tools have to work with languages that industry uses. For most research tools this is a problem, because their input languages are typically invented by the tool developers.
I see two possibilities. One is to work on commercial languages such as Simulink. These languages are often a catastrophe from a mathematical perspective, but they also usually contain subsets that can be nicely formalized for the purposes of giving tool support. If tools have a nice "intermediate notation" into which these cores can be translated, then this offers a pathway for potential industrial customers to experiment with the tools.
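A hedged sketch of what such a translation might look like; the block kinds and the IR here are invented, and real Simulink subsets and real tool IRs are far richer:

```python
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    kind: str            # e.g. "gain", "sum", "unit_delay"
    params: dict
    inputs: list         # names of upstream blocks

# The formalized core: block kinds the research tool knows how to handle.
SUPPORTED = {"gain", "sum", "unit_delay"}

def to_ir(blocks):
    """Translate a block diagram into flat (name, op, params, inputs)
    tuples, rejecting anything outside the formalized subset."""
    ir = []
    for b in blocks:
        if b.kind not in SUPPORTED:
            raise ValueError(
                f"block '{b.name}': kind '{b.kind}' is outside the subset")
        ir.append((b.name, b.kind, dict(b.params), tuple(b.inputs)))
    return ir

diagram = [
    Block("u", "gain", {"k": 2.0}, []),
    Block("d", "unit_delay", {"init": 0.0}, ["u"]),
    Block("y", "sum", {"signs": "+-"}, ["u", "d"]),
]
print(to_ir(diagram))
```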
The second approach is to become involved in standardization efforts for modeling languages. UML 2.0 has benefited to some extent from concurrency theory, but there are many aspects of that language that remain informal and imprecise.
What has concurrency theory offered industry so far? What are the next steps that the concurrency community should take in order to increase the impact of its research in an industrial setting? And what are future promising application areas for concurrency research?
I think the best way to answer the first question is to "trace backward" from commercial tools and modeling languages that have some basis in concurrency. Such tools would include those based on Statecharts (Stateflow, STATEMATE, BetterState); others based on Message Sequence Charts (Rational Rose, other UML tools); the French synchronous-language tools (SCADE, Esterel); tools that include model checkers (the EDA, or "electronic design automation", industry); and tools that use model-checking-based ideas for other analyses (Reactis, DesignVerifier).
Unfortunately, the process-algebra community has had relatively little impact on commercial tool development. This is not due to shortcomings in the theory, in my opinion, but to the inattention that compositionality continues to receive in the (non-research) modeling community. In my experience, event-based modeling is also relatively uncommon, at least in auto and aero: sampling of "state variables" is the preferred modeling paradigm.
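For readers outside the process-algebra community, here is a small sketch of what compositionality buys; it is my own toy encoding, not any particular calculus or tool. Components are described separately and composed by an operator, rather than flattened by hand into one big state machine.

```python
def parallel(lts1, lts2, sync):
    """CSP-style parallel composition: actions in `sync` must be taken
    jointly; all other actions interleave freely."""
    def steps(state):
        s1, s2 = state
        for a, t1 in lts1.get(s1, []):      # left component moves alone
            if a not in sync:
                yield a, (t1, s2)
        for a, t2 in lts2.get(s2, []):      # right component moves alone
            if a not in sync:
                yield a, (s1, t2)
        for a, t1 in lts1.get(s1, []):      # joint synchronized moves
            if a in sync:
                for b, t2 in lts2.get(s2, []):
                    if b == a:
                        yield a, (t1, t2)
    return steps

# Sender and receiver synchronize on 'msg'; 'work' interleaves freely.
sender   = {"s0": [("work", "s0"), ("msg", "s1")]}
receiver = {"r0": [("msg", "r1")]}
system = parallel(sender, receiver, sync={"msg"})
print(list(system(("s0", "r0"))))
```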
I personally would like to see work on semantically robust combinations of specification formalisms (e.g. MSCs + state machines, or temporal logic + process algebra) and related tools; theoretically well-founded approaches to verifying systems that use floating-point numbers; and compositional, graphical modeling languages (lots of work has been done already, but there is still no commercial interest).