School of Computing Science

Staff Profiles

Dr Nigel Thomas

Reader in Performance Engineering



I joined the School of Computing Science in January 2004 from the University of Durham, where I had been a lecturer since 1998. I was promoted to reader in 2009. My research interests lie in scalable performance analysis for networks, distributed algorithms, Cloud and IoT, using queueing theory, stochastic process algebra and simulation. See my Google Scholar Citations page.

I was awarded a PhD in 1997 and an MSc in 1991, both from the University of Newcastle upon Tyne. I have previously held a number of positions of responsibility within the School, including Deputy Head of School, Director of Teaching and Director of Postgraduate Studies. I am currently Chair of PGR Examination Committees (for the CDT MRes programmes) and PGR Admissions Selector. I am also an Academic Appeals Adjudicator for the University, I sit on the Taught Programmes Regulations Sub-committee, and I am one of the facilitators for the University training course for Chairs and Secretaries of Boards of Examiners.

I currently supervise nine research students and am seeking to recruit further self-financed international students for the Integrated PhD in Computer Science. Research topics of interest are listed below; please contact me for further information.

I am interested in the application of performance modelling to a variety of different problem areas. My preferred approach is to use stochastic process algebra, but I have also used queueing theory, stochastic Petri nets, discrete event simulation and trace-driven simulation. Different applications require different techniques for model specification and analysis. The challenge is forming the right models for a given problem and applying and developing appropriate analysis techniques. Some areas of current interest are listed here (in no particular order), but this is by no means exhaustive. Each area will have a number of possible projects.

1. Performance modelling of secure systems.
Security measures impose a performance overhead, as more work needs to be done in order to perform operations such as authentication, encryption and recording transactions. Understanding this overhead is vital in order to make intelligent choices in the design of secure systems. Of particular novel interest is the performance effect of attacks against the system. Choosing the right management policy (e.g. encryption key management) and a good protocol enables the system to maintain acceptable performance even when under attack.
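The shape of this overhead can be illustrated with a toy queueing calculation. The sketch below (Python; the arrival rate, service rate and 15% encryption overhead are purely illustrative assumptions, not figures from any real study) folds a security cost into an M/M/1 model: inflating each service time pushes utilisation up, so the mean response time grows non-linearly rather than by a simple 15%.

```python
# Toy M/M/1 model of a security overhead. All numbers are illustrative.

def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time W = 1 / (mu - lambda) for a stable M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

arrival = 8.0            # requests per second (assumed)
base_service = 10.0      # requests per second without encryption (assumed)
overhead = 0.15          # encryption adds 15% to each service time (assumed)

secure_service = base_service / (1.0 + overhead)  # slower effective server

plain = mm1_response_time(arrival, base_service)      # 0.5 s
secure = mm1_response_time(arrival, secure_service)   # 1.4375 s
# A 15% per-job overhead nearly triples the mean response time here,
# because utilisation rises from 0.8 to 0.92.
print(round(plain, 4), round(secure, 4))
```

The point of the exercise is that the cost of a security mechanism cannot be read off its per-operation overhead alone; it depends on how close the system is to saturation.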

2. Control of shared infrastructure.
Polling models have been widely applied to a variety of control problems in computing, communications and transport systems, which require sequential access to some critical resource shared amongst a variety of sources. Polling models may be applied to problems in the Internet of Things, where streams of data may be arriving from many sources, exceeding the capacity of available resources for real-time analysis. In a polling model a resource needs to have a policy (a set of rules) by which it decides which data stream to process at which time and when to switch to another stream. If a resource spends too much time processing a particular stream then others become starved, potentially leading to data loss or performance degradation. However, if a resource switches too often, then time is wasted in switching, which potentially reduces throughput.
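This trade-off shows up even in a back-of-the-envelope calculation. The sketch below (Python; the stream count, service time and switchover time are illustrative assumptions) works out one round-robin cycle of a limited-service polling policy: a larger per-visit quota raises the server's useful-work fraction, but lengthens the gap before any other stream is visited again.

```python
# One round-robin cycle of a limited-service polling policy.
# All parameters are illustrative.

def cycle_stats(num_streams: int, items_per_visit: int,
                service_time: float, switch_time: float):
    """Visit every stream once, serving a fixed quota of items per visit.
    Returns (efficiency, gap): the fraction of the cycle spent doing useful
    work, and the time between the end of one stream's visit and the start
    of its next visit."""
    serving = num_streams * items_per_visit * service_time
    switching = num_streams * switch_time
    cycle = serving + switching
    efficiency = serving / cycle
    gap = cycle - items_per_visit * service_time
    return efficiency, gap

# 4 streams, unit service time, switchover costing half a service (assumed)
for quota in (1, 5, 20):
    eff, gap = cycle_stats(4, quota, 1.0, 0.5)
    print(quota, round(eff, 3), round(gap, 1))
```

With these numbers, raising the quota from 1 to 20 lifts efficiency from about 0.67 to about 0.98, but the gap a starved stream must wait grows from 5 to 62 time units: exactly the starvation-versus-throughput tension described above.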

3. Energy management in large scale systems.
Energy is a significant cost in the provision of large scale computing platforms, such as cloud computing. This cost is both environmental (CO2 emissions) and financial (£, €, $). Managing the level of resource provision to handle the service demand can help to limit the amount of energy used and hence decrease costs. However, it is not a simple matter to predict future demand. One approach is to form adaptive policies which react to changing demand. Different policies will work in different situations and are highly sensitive to the tuning of particular parameters.
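The flavour of such a reactive policy, and its sensitivity to parameter tuning, can be sketched in a few lines. In the toy Python example below (the thresholds, bounds and demand trace are all illustrative assumptions), a server is added when the backlog per server climbs above an upper threshold and removed when it falls below a lower one; the gap between the two thresholds (hysteresis) is what stops the policy oscillating.

```python
# Toy reactive scaling policy with hysteresis. All numbers are illustrative.

def scale_step(servers: int, backlog: int, up: float = 8.0, down: float = 2.0,
               min_servers: int = 1, max_servers: int = 10) -> int:
    """One decision step: return the new number of active servers."""
    load = backlog / servers
    if load > up and servers < max_servers:
        return servers + 1      # demand outstrips capacity: power a node on
    if load < down and servers > min_servers:
        return servers - 1      # spare capacity: power a node off, save energy
    return servers

servers = 2
history = []
for backlog in [4, 10, 30, 40, 20, 6, 3, 1]:   # hypothetical demand trace
    servers = scale_step(servers, backlog)
    history.append(servers)
print(history)   # scales up through the demand spike, back down afterwards
```

Narrowing the gap between `up` and `down` makes the policy more responsive but prone to flapping; widening it saves switching cost at the price of lag, which is precisely the kind of tuning sensitivity a model can quantify.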

4. Decision making with limited information.
There are many types of system where tasks are allocated to resources based on knowledge of the states of those resources, for example when routing packets in a network or allocating work flows in a cloud environment. Measuring, communicating and maintaining accurate and up to date knowledge about the status of resources is potentially costly, if it is possible at all. Therefore such systems generally employ some form of approximation to the current state of a resource in order to limit this problem. However, this approximation will generally make the allocation sub-optimal. It is therefore of practical interest to understand the impact of approximating the state of a given set of resources and, more significantly, how little information is necessary in order to make good decisions in a specific context.
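The cost of stale information can be demonstrated with a small simulation. The Python sketch below (entirely synthetic: the workload, service probability and refresh interval are assumptions, seeded for repeatability) dispatches tasks to the queue that *looked* shortest when the state was last sampled; the sampling interval controls how out of date that view is.

```python
# Join-the-shortest-queue dispatch under stale state information.
# Synthetic workload; all parameters are illustrative.

import random

def simulate(refresh_every: int, num_queues: int = 4, tasks: int = 2000,
             seed: int = 42) -> float:
    """Mean queue length seen by arriving tasks when the dispatcher's
    snapshot of the queues is refreshed every `refresh_every` arrivals
    (1 = perfect, up-to-date information)."""
    rng = random.Random(seed)
    queues = [0] * num_queues
    snapshot = list(queues)
    seen = 0
    for t in range(tasks):
        if t % refresh_every == 0:
            snapshot = list(queues)             # sample the true state
        target = snapshot.index(min(snapshot))  # shortest *by the snapshot*
        seen += queues[target]                  # true length actually met
        queues[target] += 1
        # each busy queue completes a task with probability 0.3 per arrival
        for q in range(num_queues):
            if queues[q] > 0 and rng.random() < 0.3:
                queues[q] -= 1
    return seen / tasks

fresh = simulate(refresh_every=1)
stale = simulate(refresh_every=50)
print(round(fresh, 2), round(stale, 2))   # stale snapshots see longer queues
```

With a stale snapshot, every arrival in a refresh window piles onto the same queue, so the mean length seen grows sharply; the interesting research question is how slowly the information can be refreshed, or how coarsely it can be approximated, before decisions degrade unacceptably.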

5. Wireless network performance analysis.
Wireless networks are used for a wide range of applications in a wide range of operating scenarios. Clearly it is infeasible to expect that a given network protocol will perform well in all situations, as protocols are typically designed to perform well with respect to a limited set of performance metrics under a limited set of operating conditions. It is therefore necessary to understand the limitations of such protocols in terms of factors such as fairness and energy consumption, when the network topology is sub-optimal or where nodes are misbehaving, for example when compromised due to infection by a virus or a fault, or subject to a denial of service attack.

6. User experience in smart environments.
Physical environments, such as airports, hospitals and universities, can now be provisioned with sensors which support the automated control of user-oriented services within the environment. Such control can be leveraged to improve the user experience of the environment, for example by improving flow, by providing information on progress or by making waiting more productive. Understanding the experience of such environments is a significant challenge in modelling which will potentially lead to new classes of models, metrics and analysis techniques.