Dr Nigel Thomas
Reader in Performance Engineering

Introduction

I joined the School of Computing Science in January 2004 from the University of Durham, where I had been a lecturer since 1998. I was promoted to Reader in 2009. My research interests lie in scalable performance analysis (see my personal homepage), in particular Markov modelling through queueing theory and stochastic process algebra. See also my Google Scholar Citations page.

I was awarded a PhD in 1997 and an MSc in 1991, both from the University of Newcastle upon Tyne.

I am organising the 8th International Workshop on Practical Applications of Stochastic Modelling (PASM'16) in Münster, Germany on 5th April 2016.

I am currently seeking to recruit self-financed international students for the Integrated PhD in Computer Science. Research topics of interest are listed below; please contact me for further information.

I am interested in the application of performance modelling to a variety of different problem areas. My preferred approach is to use stochastic process algebra, but I have also used queueing theory, stochastic Petri nets, discrete event simulation and trace-driven simulation. Different applications require different techniques for model specification and analysis. The challenge is forming the right models for a given problem and applying and developing appropriate analysis techniques. Some areas of current interest are listed here (in no particular order), but this is by no means exhaustive. Each area will have a number of possible projects.
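As a flavour of the machinery that underlies all of these areas, the following is a minimal, illustrative sketch in Python (the function name and all rates are arbitrary assumptions, not figures from any particular study) of a finite-buffer M/M/1 queue, the simplest birth-death Markov model, solved via its balance equations.

    def mm1k_distribution(lam, mu, K):
        """Steady-state probabilities pi[0..K] for an M/M/1 queue with buffer size K."""
        rho = lam / mu
        weights = [rho ** n for n in range(K + 1)]   # local balance: pi_n proportional to rho^n
        norm = sum(weights)
        return [w / norm for w in weights]

    pi = mm1k_distribution(lam=8.0, mu=10.0, K=5)
    print(pi)                                     # distribution over queue lengths 0..5
    print(pi[-1])                                 # blocking probability (buffer full)
    print(sum(n * p for n, p in enumerate(pi)))   # mean queue length

From such a distribution one can read off the measures that typically matter in the areas below: utilisation, blocking probability, mean queue length and, via Little's law, mean response time.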

1. Performance modelling of secure systems.
Security measures impose a performance overhead, as more work needs to be done in order to perform operations such as authentication, encryption and recording transactions. Understanding this overhead is vital in order to make intelligent choices in the design of secure systems. Of particular novel interest is the performance effect of attacks on the system. Choosing the right management policy (e.g. encryption key management) and a good protocol enables the system to maintain acceptable performance even when under attack.
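As a back-of-the-envelope illustration of this trade-off (all of the figures below are made-up assumptions, not measurements of any particular system), security processing can be treated as extra per-request service demand, and an attack as additional bogus load, on a simple M/M/1 server:

    def mean_response(lam, service_time):
        """M/M/1 mean response time for arrival rate lam and mean service time."""
        mu = 1.0 / service_time
        return float("inf") if lam >= mu else 1.0 / (mu - lam)

    base_demand = 0.05        # seconds of work per request without security processing (assumed)
    crypto_overhead = 0.02    # assumed extra seconds for authentication/encryption
    legit_rate = 10.0         # legitimate requests per second (assumed)
    attack_rate = 3.0         # assumed extra bogus requests per second during an attack

    print(mean_response(legit_rate, base_demand))                                  # no security
    print(mean_response(legit_rate, base_demand + crypto_overhead))                # with security
    print(mean_response(legit_rate + attack_rate, base_demand + crypto_overhead))  # with security, under attack

Even this crude sketch shows the pattern of interest: the overhead is modest when the system is lightly loaded, but the combination of extra work per request and extra attack traffic pushes the server towards saturation, where response times grow rapidly.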

2. Control of shared infrastructure.
Polling models have been widely applied to a variety of control problems in computing, communications and transport systems in which several sources require sequential access to some shared critical resource. Polling models may be applied to problems in the Internet of Things, where streams of data may arrive from many sources and exceed the capacity of the resources available for real-time analysis. In a polling model a resource needs a policy (a set of rules) by which it decides which data stream to process at which time and when to switch to another stream. If a resource spends too much time processing a particular stream then others become starved (potentially leading to data loss or performance degradation); however, if a resource switches too often then time is wasted on switching, which potentially reduces throughput.
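This trade-off can be seen even in a very small simulation. The sketch below (illustrative only; the arrival rates, service time, switch-over time and per-visit limit are all arbitrary assumptions) cycles a single server over several queues under a limited-service policy and reports the mean waiting time seen by each stream:

    import random

    def simulate(arrival_rates, service=1.0, switch=0.5, limit=3, horizon=10_000.0, seed=1):
        random.seed(seed)
        # Pre-generate Poisson arrival times for each data stream up to the horizon.
        queues = []
        for rate in arrival_rates:
            t, arrivals = 0.0, []
            while True:
                t += random.expovariate(rate)
                if t > horizon:
                    break
                arrivals.append(t)
            queues.append(arrivals)

        clock = 0.0
        served = [0] * len(queues)
        waiting = [0.0] * len(queues)
        heads = [0] * len(queues)      # index of the next unserved arrival in each stream
        q = 0
        while clock < horizon:
            done = 0
            # Serve jobs that have already arrived, up to the per-visit limit.
            while done < limit and heads[q] < len(queues[q]) and queues[q][heads[q]] <= clock:
                waiting[q] += clock - queues[q][heads[q]]   # time the job waited before service
                heads[q] += 1
                served[q] += 1
                clock += service
                done += 1
            clock += switch            # pay the switch-over time and move to the next stream
            q = (q + 1) % len(queues)
        return [w / s if s else 0.0 for w, s in zip(waiting, served)]

    print(simulate([0.2, 0.2, 0.05]))  # mean waiting time per stream

Varying the per-visit limit and the switch-over time in such a sketch exposes the starvation/throughput tension described above; the polling models of interest aim to capture the same trade-off analytically rather than by brute-force simulation.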

3. Energy management in large scale systems.
Energy is a significant cost in the provision of large scale computing platforms, such as cloud computing. This cost is both environmental (CO2 emissions) and financial (£, €, $). Managing the level of resource provision to handle the service demand can help to limit the amount of energy used and hence decrease costs. However, it is not a simple matter to predict what demand will be in the future. One approach is to form adaptive policies which react to changing demand. Different policies work well in different situations and are highly sensitive to the tuning of particular parameters.
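A minimal sketch of such an adaptive policy is given below (the thresholds, burst pattern and server counts are arbitrary assumptions, chosen only to show the shape of the trade-off): servers are switched on when the backlog crosses an upper threshold and switched off again below a lower one, trading energy use (time spent with servers active) against queueing delay (backlog):

    import random

    def run(max_servers=4, up=8, down=2, slots=100_000, seed=1):
        random.seed(seed)
        backlog, active = 0, 1
        server_time = total_backlog = 0
        for _ in range(slots):
            backlog += random.choice((0, 1, 1, 1, 3))   # bursty arrivals, mean 1.2 jobs per slot
            backlog -= min(backlog, active)             # each active server completes one job per slot
            if backlog > up and active < max_servers:
                active += 1                             # scale up when the backlog grows
            elif backlog < down and active > 1:
                active -= 1                             # scale down when demand falls
            server_time += active
            total_backlog += backlog
        return {"mean_active_servers": server_time / slots,
                "mean_backlog": total_backlog / slots}

    print(run())   # energy proxy (active servers) vs. delay proxy (backlog)

Raising the thresholds saves energy at the cost of longer backlogs, and vice versa, which is exactly the kind of sensitivity to parameter tuning mentioned above.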

4. Decision making with limited information.
There are many types of system where tasks are allocated to resources based on knowledge of the states of those resources, for example when routing packets in a network or allocating workflows in a cloud environment. Measuring, communicating and maintaining accurate and up-to-date knowledge about the status of resources is potentially costly, if it is possible at all. Therefore such systems generally employ some form of approximation of the current state of a resource in order to limit this cost. However, this approximation will generally make the allocation sub-optimal. It is therefore of practical interest to understand the impact of approximating the state of a given set of resources and, more significantly, how little information is necessary in order to make good decisions in a specific context.
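The effect of out-of-date information can be illustrated with a very small sketch (all parameters are arbitrary assumptions): jobs join the queue that appeared shortest when the state was last sampled, and the sampling interval controls how stale that view is:

    import random

    def simulate(num_queues=4, p_arrival=0.9, refresh=1, slots=100_000, seed=1):
        random.seed(seed)
        queues = [0] * num_queues
        snapshot = list(queues)                      # last observed (possibly stale) state
        total = 0
        for t in range(slots):
            if t % refresh == 0:
                snapshot = list(queues)              # refresh the state information
            if random.random() < p_arrival:
                target = snapshot.index(min(snapshot))   # join the apparently shortest queue
                queues[target] += 1
            for i in range(num_queues):              # each queue completes a job w.p. 0.3 per slot
                if queues[i] and random.random() < 0.3:
                    queues[i] -= 1
            total += sum(queues)
        return total / slots                         # mean number of jobs in the system

    for refresh in (1, 10, 100):
        print(refresh, simulate(refresh=refresh))

As the refresh interval grows, arrivals herd onto a queue that may no longer be the shortest and the mean population rises; the research question is how coarse or infrequent the state information can be before this degradation becomes unacceptable.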

5. Wireless network performance analysis.
Wireless networks are used for a wide range of applications in a wide range of operating scenarios. Clearly it is infeasible to expect that a given network protocol will perform well in all situations, as protocols are typically designed to perform well with respect to a limited set of performance metrics under a limited set of operating conditions. It is therefore necessary to understand the limitations of such protocols in terms of factors such as fairness and energy consumption, when the network topology is sub-optimal or where nodes are misbehaving, for example when compromised by a virus or a fault, or subject to a denial of service attack.
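One measure such an analysis might report is Jain's fairness index over per-node throughputs, sketched below (the throughput figures are made-up illustrative values):

    def jain_index(throughputs):
        """Jain's fairness index: 1.0 when all nodes receive equal throughput,
        approaching 1/n when a single node monopolises the channel."""
        n = len(throughputs)
        return sum(throughputs) ** 2 / (n * sum(x * x for x in throughputs))

    print(jain_index([1.0, 1.0, 1.0, 1.0]))   # 1.0: perfectly fair
    print(jain_index([4.0, 0.1, 0.1, 0.1]))   # about 0.29: one node dominates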

6. User experience in smart environments.
Physical environments, such as airports, hospitals and universities, can now be provisioned with sensors which support the automated control of user-oriented services within the environment. Such control can be leveraged to improve the user experience of the environment, for example by improving flow, by providing information on progress or by making waiting more productive. Understanding the experience of such environments is a significant modelling challenge which will potentially lead to new classes of models, metrics and analysis techniques.