- Project Dates: From May 2013 to April 2017
- Project Leader: Prof. Cliff Jones
- Sponsors: EPSRC
Computer programs are notoriously difficult to perfect (everyone has experienced some form of inconvenience from "bugs"), and even where this situation is beginning to come under control, it is at enormous cost. One reason for the high development cost of software is that errors made early in the design of a system can lie undetected until that system is tested, or worse, used by customers. Correcting errors at such late stages is extremely expensive because so much work has to be repeated. So-called "formal methods" were first deployed on safety-critical software but are becoming increasingly cost-effective because their use throughout design can drastically reduce the "scrap and rework" that comes from late detection of design mistakes. Formal methods make this possible because they use formal notations to specify what should be built, and thus offer a notion of correctness for each design step. Verifying design decisions then becomes a proof process that can be assisted by appropriate theorem-proving software.

Unfortunately, just as real progress is being made (both with general engineering practices and with the application of formal methods), separate commercial developments are increasing the challenges enormously. The new difficulties centre on "concurrency": concurrent programs have to run in contexts that interfere with their progress. Concurrency can arise from a desire for better performance, from the use of embedded programs in conjunction with physical devices such as cars and planes, or from the latest hardware designs that are built from many processors. Traditional engineering approaches, and even many modern automatic tools for detecting errors in software, will not suffice for a world of massive concurrency, because the number of execution paths becomes astronomically large when processes can interfere with one another.
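To give a rough sense of that scale, the short Python sketch below (an illustration for this page, not project code) counts the distinct interleavings of several threads whose per-thread order of atomic steps is fixed: for k threads of n steps each there are (kn)!/(n!)^k schedules.

```python
from math import factorial

def schedules(threads: int, steps: int) -> int:
    """Number of distinct interleavings of `threads` threads, each
    executing `steps` atomic actions in a fixed per-thread order."""
    return factorial(threads * steps) // factorial(steps) ** threads

# Two threads of ten steps each already admit 184,756 schedules;
# four such threads admit on the order of 10**21.
two = schedules(2, 10)
four = schedules(4, 10)
```

Exhaustively checking every schedule is clearly hopeless even for tiny programs, which is why reasoning methods that avoid enumeration are needed.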
Fortunately, there is active research (with UK researchers at the forefront) on reasoning both about where interference is absent and about how it can be constrained. These research avenues, however, need to be brought together, and tool support must be implemented, before they are usable by engineers. These are the expected outputs of "Taming Concurrency".
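The kind of interference such reasoning must address can be made concrete with a deliberately tiny example (a sketch for this page, not project code): two unsynchronised threads each read a shared counter and write back their copy plus one. Enumerating all six schedules shows that most of them lose an update.

```python
from itertools import combinations

def interleavings(a, b):
    """Yield every merge of step-lists a and b that keeps each thread's order."""
    n = len(a) + len(b)
    for pos in combinations(range(n), len(a)):
        chosen, ai, bi = set(pos), iter(a), iter(b)
        yield [next(ai) if i in chosen else next(bi) for i in range(n)]

def run(schedule):
    """Execute one schedule of threads that each read the shared counter,
    then write back their local copy plus one (no locking)."""
    counter, local = 0, {}
    for tid, op in schedule:
        if op == "read":
            local[tid] = counter      # copy shared value into thread-local state
        else:
            counter = local[tid] + 1  # write back, possibly clobbering the other thread
    return counter

t1 = [(1, "read"), (1, "write")]
t2 = [(2, "read"), (2, "write")]
results = [run(s) for s in interleavings(t1, t2)]
# Only the two serial schedules end with counter == 2; the four
# interleaved schedules lose an update and end with counter == 1.
```

Reasoning methods of the kind the project draws on aim to establish such properties by constraining or excluding interference, rather than by enumerating schedules one by one.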