2020/21 Guest Seminars


Wednesday 12 May 2021 - Prof. Ajitha Rajan, School of Informatics, University of Edinburgh

Event Title: Challenges in Automated Software Testing: GPUs and Smart Contracts

Event Overview: In the first part of the session, Ajitha will present testing challenges for Graphics Processing Units (GPUs). GPUs are massively parallel processors offering performance acceleration and energy efficiency unmatched by the CPUs in current computers. These advantages, along with recent advances in the programmability of GPUs, have made them attractive for general-purpose computation. Despite these advances in programmability, GPU kernels remain hard to write and analyse owing to complex memory-sharing patterns, striding patterns for memory accesses, implicit synchronisation, and the combinatorial explosion of thread interleavings. Ajitha's team proposes a testing technique for OpenCL kernels that combines mutation-based fuzzing with selective SMT solving, with the goal of being fast, effective and scalable.
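To illustrate the mutation-based fuzzing component mentioned above, here is a minimal sketch (not Ajitha's actual tool): inputs are repeatedly mutated by single bit flips and fed to a kernel launch, with inputs that trigger failures recorded. The function `kernel_under_test` is a hypothetical stand-in for running a real OpenCL kernel.

```python
import random

def mutate(data: bytes, rng: random.Random) -> bytes:
    """Flip one random bit in one random byte -- the simplest mutation operator."""
    if not data:
        return data
    buf = bytearray(data)
    i = rng.randrange(len(buf))
    buf[i] ^= 1 << rng.randrange(8)
    return bytes(buf)

def kernel_under_test(data: bytes) -> int:
    """Hypothetical stand-in for an OpenCL kernel launch.

    Simulates a defect: inputs whose first two bytes are equal
    trigger a (pretend) out-of-bounds access.
    """
    if len(data) >= 2 and data[0] == data[1]:
        raise ValueError("simulated out-of-bounds access")
    return sum(data)

def fuzz(seed_input: bytes, iterations: int = 1000, seed: int = 0) -> list:
    """Mutation-based fuzzing loop: mutate the seed, run, record failing inputs."""
    rng = random.Random(seed)
    failures = []
    for _ in range(iterations):
        candidate = mutate(seed_input, rng)
        try:
            kernel_under_test(candidate)
        except ValueError:
            failures.append(candidate)
    return failures

failing_inputs = fuzz(b"\x00\x01\x02\x03")
```

In the approach described in the talk, such fuzzing would be complemented by selective SMT solving to reach branches random mutation alone struggles to cover; that part is omitted here.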

In the second part of the session, Ajitha will present automated test input generation techniques for smart contracts on the Ethereum blockchain. A blockchain is a distributed ledger that stores a growing list of unmodifiable records called blocks. Executing, verifying and enforcing credible transactions on blockchains is done using smart contracts. Ajitha will introduce a variety of test input generation techniques, including fuzzing-based and genetic-algorithm approaches, and present their effectiveness in revealing vulnerabilities in smart contracts.
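A genetic-algorithm approach of the kind mentioned above can be sketched as follows (a toy illustration, not the techniques from the talk): candidate inputs are evolved under a branch-distance fitness that measures how close an input is to reaching a target branch. Here `contract_withdraw` is a hypothetical stand-in for a contract function with a vulnerability guarded by the condition `amount == 42`.

```python
import random

def contract_withdraw(amount: int, balance: int = 100) -> int:
    """Hypothetical stand-in for a smart-contract function.

    Simulates a vulnerability reachable only when amount == 42.
    """
    if amount == 42:
        raise AssertionError("simulated vulnerability reached")
    return balance - amount

def fitness(amount: int) -> int:
    """Branch distance: how far the input is from taking the target branch."""
    return abs(amount - 42)

def evolve(pop_size: int = 20, generations: int = 50, seed: int = 0) -> int:
    """Evolve integer inputs toward the target branch; return the best found."""
    rng = random.Random(seed)
    population = [rng.randrange(0, 1000) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        if fitness(population[0]) == 0:
            return population[0]          # target branch reached
        parents = population[: pop_size // 2]  # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) // 2          # crossover: midpoint of two parents
            if rng.random() < 0.3:
                child += rng.randrange(-5, 6)  # mutation: small perturbation
            children.append(child)
        population = parents + children
    population.sort(key=fitness)
    return population[0]

best_input = evolve()
```

Real smart-contract testing tools instrument EVM bytecode and handle transaction sequences and contract state; this sketch only shows the fitness-guided search idea.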


Wednesday 28 April 2021 - Dr Bran Knowles (Senior Lecturer in Data Science, Lancaster University)

Event Title: Promoting public trust in AI

Event Overview: In this talk Dr Bran Knowles argued that existing models of trust in AI do not scale to the context of pervasive AI, and that the public instead trusts or distrusts AI-as-an-institution – i.e. a system which either demonstrably constrains AI in ways that prevent harm to the public, or a system which fails to do so.

This talk offered a theoretical scaffolding for understanding the importance of regulation, and AI documentation that enables this regulation, as the only path to promoting public trust in AI. It also attempted to unravel the faulty and dangerous narrative that trust in AI is important only for instrumental reasons, when in fact trust and distrust are key signals as to whether AI lives up to our moral standards and contributes to a society we want to live in.