Machine Learning Launch Event Abstracts

Slot 1: 10:50am – 12:30pm

Name: Ruth Walker

University: York

Title: Bayesian methods for extrapolating adult clinical trial data to improve the estimation of the effect of medical treatments in children.

Abstract:

A lower disease incidence in children means that fewer patients are eligible to take part in clinical trials, and research groups and pharmaceutical companies are wary of the increased effort required to conduct research with this population. As a result, in many disease areas we are less certain about which medical treatments are safest and most effective for children than we are for adults.

When sufficient clinical trial evidence is available, network meta-analysis (NMA) can inform decisions about which medicines to prescribe by facilitating the comparison of multiple treatment options for a disease or condition. However, in the paediatric population, clinical trial evidence comparing certain treatments may be lacking, and an NMA may therefore be limited in its ability to inform healthcare decisions for children.

Bayesian information-sharing methods (ISMs) can help to overcome the scarcity of evidence in the paediatric population by extending a traditional NMA to include a separate but related population, e.g., an adult population. Bayesian ISMs facilitate the ‘extrapolation’ or ‘borrowing of strength’ from the adult population, so that information and conclusions are extended to make inferences about the effect of medicines in children.

Provided the disease manifestation, disease progression and safe dosages of treatment options are established in children, and similarities and/or differences with the adult population are understood, Bayesian ISMs can be used safely to improve the certainty of treatment effect estimates in children. In doing so, Bayesian ISMs may reduce the need for additional clinical trials in children.

If these analyses show current treatment options to work similarly in adults and children (or identify consistent differences in treatment effect), Bayesian ISMs can also facilitate the prediction of the effect of new treatments in children. This could ultimately lead to smaller trials being required to confirm estimates, rather than full clinical trial programmes being conducted in children.
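To make the 'borrowing of strength' idea concrete, below is a minimal, hypothetical sketch of a two-population random-effects model in PyMC: the adult and paediatric treatment effects share a common mean, so the precisely estimated adult effect shrinks the uncertain paediatric estimate. The effect sizes, standard errors and priors are illustrative only and do not reproduce the NMA-based ISMs described in the talk.

```python
import pymc as pm

# Hypothetical treatment-effect estimates (log odds ratios) and standard errors
adult_effect, adult_se = -0.60, 0.10   # precise estimate from adult trials
child_effect, child_se = -0.45, 0.40   # sparse paediatric evidence

with pm.Model():
    mu = pm.Normal("mu", 0.0, 2.0)                         # shared mean treatment effect
    tau = pm.HalfNormal("tau", 0.5)                        # between-population heterogeneity
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=2)  # [adult, child] effects
    pm.Normal("y_adult", mu=theta[0], sigma=adult_se, observed=adult_effect)
    pm.Normal("y_child", mu=theta[1], sigma=child_se, observed=child_effect)
    trace = pm.sample(2000, tune=1000, chains=4, random_seed=1)

# Posterior for the paediatric effect is shrunk towards the well-estimated adult effect
print(trace.posterior["theta"].sel(theta_dim_0=1).mean().item())
```

In a full ISM, the same sharing structure is embedded within a network meta-analysis across multiple treatments rather than a single pairwise comparison.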

Name: Alberto Cabezas Gonzalez

University: Lancaster

Title: Parallel, efficient MCMC with Transport Elliptical Slice Sampling

Abstract:

We introduce a new framework for efficient sampling from complex probability distributions, using a combination of normalizing flows and elliptical slice sampling (Murray et al., 2010). The core idea is to learn a diffeomorphism, via normalizing flows, that maps the non-Gaussian structure of our target distribution to an approximately Gaussian distribution. We can then sample from our transformed distribution using the elliptical slice sampler, which is an efficient and tuning-free Markov chain Monte Carlo (MCMC) algorithm. The samples are then pulled back through the inverse of the normalizing flow to yield samples which approximate the stationary target distribution of interest. Our transport elliptical slice sampler (TESS) is designed for modern computer architectures: its adaptation mechanism utilizes parallel cores to rapidly run multiple Markov chains for only a few iterations. Numerical demonstrations show that TESS produces Monte Carlo samples from the target distribution with lower autocorrelation than non-transformed samplers. Additionally, assuming a sufficiently flexible diffeomorphism, TESS demonstrates significant improvements in efficiency when compared to gradient-based proposals designed to run on parallel computer architectures.
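For readers unfamiliar with the building block, here is a minimal NumPy sketch of a single elliptical slice sampling update (Murray et al., 2010), written for a target factored as an N(0, I) prior times a residual term; in TESS the target is first pushed through the learned flow so that this residual is nearly constant, and accepted states are pulled back through the inverse flow. The toy residual and settings are illustrative, and none of the parallel, adaptive machinery of TESS is reproduced here.

```python
import numpy as np

def elliptical_slice_step(x, log_residual, rng):
    """One elliptical slice sampling update for a target written as an
    N(0, I) prior times a residual 'likelihood' term. In TESS the target is
    first mapped through a learned normalizing flow, so it is close to N(0, I)
    and log_residual is nearly constant, making this update very efficient."""
    nu = rng.standard_normal(x.shape)                 # auxiliary draw from the N(0, I) prior
    log_y = log_residual(x) + np.log(rng.uniform())   # slice threshold
    theta = rng.uniform(0.0, 2.0 * np.pi)             # initial proposal angle
    lo, hi = theta - 2.0 * np.pi, theta
    while True:
        x_prop = x * np.cos(theta) + nu * np.sin(theta)   # point on the ellipse
        if log_residual(x_prop) > log_y:
            return x_prop   # accepted; pull back through the inverse flow for a target sample
        if theta < 0.0:     # shrink the bracket towards zero and retry
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

# Toy use on a 2D target with a mild non-Gaussian residual (flow treated as identity here)
rng = np.random.default_rng(0)
log_residual = lambda x: -0.1 * (x[0] ** 2 - 1.0) ** 2
samples, x = [], np.zeros(2)
for _ in range(1000):
    x = elliptical_slice_step(x, log_residual, rng)
    samples.append(x)
```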

Name: Rebecca Stone

University: Leeds

Title: Visual bias mitigation driven by Bayesian epistemic uncertainties

Abstract:

Most state-of-the-art intelligent vision systems today rely on large quantities of data for training. This data most often contains biases, leading to undesirable "shortcuts" which neural networks are extremely susceptible to learning. Unlike deterministic methods, the Bayesian framework provides a principled method of inferring model or epistemic uncertainties, which arise due to lack of knowledge and will decrease given a more complete set of data. Using Bayesian neural networks, we explore the relationship between these uncertainties and bias-aligned and bias-conflicted samples, identifying correlations and leveraging them for visual bias mitigation both during and post training. We show on both synthetic and real-world datasets that our methods have potential for bias mitigation in settings where there is no prior knowledge of bias in the data and consider the strengths and limitations of such approaches.
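As a rough illustration of how epistemic uncertainty can drive reweighting, the sketch below uses Monte Carlo dropout as a cheap stand-in for a Bayesian neural network posterior, scores each sample's predictive variance, and upweights high-uncertainty samples on the assumption that they are more likely to be bias-conflicting. The toy model, weighting scheme and the parameter alpha are hypothetical and are not the method presented in the talk.

```python
import torch
import torch.nn.functional as F

def mc_dropout_uncertainty(model, x, n_samples=20):
    """Per-sample epistemic uncertainty from MC dropout: keep dropout active at
    inference and measure the spread of predicted class probabilities."""
    model.train()  # keep dropout layers stochastic
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    return probs.var(dim=0).sum(dim=-1)     # variance across MC samples, summed over classes

def reweighted_loss(model, x, y, uncertainty, alpha=5.0):
    """Upweight samples with high epistemic uncertainty (assumed bias-conflicting)."""
    weights = 1.0 + alpha * uncertainty / (uncertainty.max() + 1e-8)
    per_sample = F.cross_entropy(model(x), y, reduction="none")
    return (weights.detach() * per_sample).mean()

# Toy classifier with dropout and a hypothetical image batch
model = torch.nn.Sequential(
    torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128), torch.nn.ReLU(),
    torch.nn.Dropout(0.3), torch.nn.Linear(128, 10))
x = torch.randn(16, 3, 32, 32)
y = torch.randint(0, 10, (16,))
u = mc_dropout_uncertainty(model, x)
loss = reweighted_loss(model, x, y, u)
```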

Name: Thomas Mcdonald

University: Manchester

Title: Bayesian Deep Learning with Physics-informed Gaussian Processes

Abstract:

Dynamical systems are ubiquitous across the natural sciences, with many physical and biological processes being driven on a fundamental level by differential equations. Inferring ordinary differential equation (ODE) parameters using observational data from such systems is an active area of research; however, in particularly complex systems it is often infeasible to characterise all of the individual processes present and the interactions between them. Rather than attempt to fully describe a complex system, latent force models (LFMs) specify a simplified mechanistic model of the system which captures salient features of the dynamics present. This leads to a model which is able to readily extrapolate beyond the training input space, thereby retaining one of the key advantages of mechanistic modeling over purely data-driven techniques. However, modeling nonlinear dynamical systems presents an additional challenge, as shallow models such as LFMs are generally less capable of modeling the non-stationarities often present in nonlinear systems than deep probabilistic models such as deep Gaussian processes (DGPs), which possess greater representational power owing to their hierarchical structure.

In this talk, we will outline a novel approach to incorporating physical structure into a deep probabilistic model, whilst providing a sound quantification of uncertainty. This is achieved through derivation of physics-informed random Fourier features via the convolution of an exponentiated quadratic GP prior with the Green's function associated with a first-order ODE. These features are then used to form the kernel at each layer of a DGP. To ensure the scalability of this model to large datasets, stochastic variational inference is employed as a method for approximate Bayesian inference. The proposed framework is capable of capturing highly nonlinear dynamics effectively in both toy examples and real-world data, whilst also being applicable to more general tabular regression problems.
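A simplified sketch of the feature construction is given below: frequencies are drawn from the spectral density of an exponentiated quadratic kernel, and each sinusoidal feature is attenuated and phase-shifted by the transfer function of a first-order ODE dx/dt + gamma*x = u(t), which is the frequency-domain counterpart of convolving with its Green's function. The lengthscale, gamma and feature count are placeholders, and the sketch omits the deep (layered) construction and the variational inference described in the talk.

```python
import numpy as np

def physics_informed_rff(t, lengthscale=1.0, gamma=2.0, n_features=200, seed=0):
    """Random Fourier features for an exponentiated quadratic GP prior passed
    through the first-order ODE dx/dt + gamma * x = u(t). Each sampled frequency
    is attenuated and phase-shifted by the ODE's transfer function (a simplified
    stand-in for the Green's-function convolution described in the talk)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, 1.0 / lengthscale, n_features)   # RBF spectral frequencies
    b = rng.uniform(0.0, 2 * np.pi, n_features)          # random phases
    gain = 1.0 / np.sqrt(gamma**2 + w**2)                 # |1 / (gamma + i*w)|
    phase = np.arctan2(w, gamma)                          # ODE phase lag
    return np.sqrt(2.0 / n_features) * gain * np.cos(np.outer(t, w) + b - phase)

t = np.linspace(0.0, 10.0, 100)
Phi = physics_informed_rff(t)           # (100, 200) feature matrix
K_approx = Phi @ Phi.T                  # approximate physics-informed kernel
```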

Slot 2: 2:00pm – 2:40pm

Name: Saleh Rezaeiravesh

University: Manchester

Title: A Bayesian Hierarchical Multifidelity Model for High-fidelity Predictions of Turbulent Flows

Abstract:

Conducting high-fidelity experiments and scale-resolving numerical simulations of turbulent flows can be prohibitively expensive, particularly at the high Reynolds numbers relevant to engineering applications. On the other hand, it is necessary to develop accurate yet cost-effective models for data-driven outer-loop problems involving turbulent flows, which include uncertainty quantification (UQ), data fusion, prediction, and robust optimization. In these problems, exploration of the space of inputs and design parameters demands a relatively large number of flow realizations. A solution is to use multifidelity models (MFMs), which aim at accurately predicting quantities of interest (QoIs) and their stochastic moments by combining data obtained from different fidelities. When constructing MFMs, a given finite computational budget is used optimally by running only a few expensive (but accurate) simulations and many more inexpensive (but potentially less accurate) simulations. The present study reports our recent progress on the further development and application of a class of Bayesian hierarchical multifidelity models with automatic calibration (HC-MFM) which rely on Gaussian processes. At each fidelity level, which can be associated with any of the turbulence simulation approaches, both model inadequacy and aleatoric uncertainties in the data fusion process are considered. As a main advantage of the present multifidelity modeling approach, the calibration parameters as well as the hyperparameters appearing in the Gaussian processes are estimated simultaneously within a Bayesian framework using a limited number of flow realizations. The Bayesian inference of the posterior distribution of the various parameters is performed using a Markov chain Monte Carlo (MCMC) approach. As a major strength of the HC-MFM, the predictions are accompanied by estimates of the associated confidence intervals. Given the generality of the HC-MFM, it can be applied to various fields of science and engineering.
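The flavour of multifidelity modelling can be illustrated with a two-level, Kennedy-O'Hagan-style surrogate in scikit-learn: a GP is fitted to many cheap low-fidelity runs, and a second GP models the discrepancy of a few expensive high-fidelity runs after scaling. The data below are synthetic, the scaling factor is a simple point estimate rather than a Bayesian calibration, and the MCMC-based joint estimation of calibration parameters and hyperparameters in the HC-MFM is not reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical data: many cheap low-fidelity runs, a few expensive high-fidelity runs
rng = np.random.default_rng(0)
x_lo = rng.uniform(0, 1, (40, 1)); y_lo = np.sin(8 * x_lo[:, 0]) + 0.1 * rng.standard_normal(40)
x_hi = rng.uniform(0, 1, (6, 1));  y_hi = 1.2 * np.sin(8 * x_hi[:, 0]) + 0.3 * x_hi[:, 0]

kernel = RBF(0.2) + WhiteKernel(1e-3)
gp_lo = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x_lo, y_lo)

# High-fidelity QoI modelled as rho * f_lo(x) + discrepancy(x)  (Kennedy-O'Hagan form)
f_lo_at_hi = gp_lo.predict(x_hi)
rho = np.dot(f_lo_at_hi, y_hi) / np.dot(f_lo_at_hi, f_lo_at_hi)   # point estimate of scaling
gp_delta = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x_hi, y_hi - rho * f_lo_at_hi)

x_new = np.linspace(0, 1, 200).reshape(-1, 1)
mean_lo, std_lo = gp_lo.predict(x_new, return_std=True)
mean_d, std_d = gp_delta.predict(x_new, return_std=True)
mean_hi = rho * mean_lo + mean_d                       # multifidelity prediction
std_hi = np.sqrt((rho * std_lo) ** 2 + std_d ** 2)     # rough uncertainty (ignores correlation)
```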

Name: Adam D. Clayton

University: Leeds

Title: Autonomous Optimisation for Multistep Chemical Synthesis

Abstract:

Chemical reactions are an example of expensive-to-evaluate optimisation problems, owing to the substantial cost of materials and the time taken to conduct physical experiments. Self-optimisation platforms, which combine reactors, process analytics and machine learning algorithms in a feedback loop, have been shown to accelerate the development of single step reactions. However, active pharmaceutical ingredients require multiple transformations to synthesise, involving iterative reaction-workup-purification-isolation loops, which suffer from long production times and potential supply chain disruptions. Reaction telescoping, where multiple reactions are performed without the purification of intermediates, has the potential to significantly increase the efficiency and sustainability of pharmaceutical manufacturing. However, the task of optimising telescoped reactions remains highly challenging, as concatenating steps not only increases the number of variables, but also introduces complex interactions between the steps which must be considered holistically.

In this work, we develop an automated continuous flow platform for the simultaneous optimisation of telescoped reactions. Our approach is applied to a Heck-cyclisation-deprotection reaction sequence, used in the synthesis of a precursor for the treatment of neurological diseases. A simple method for multipoint sampling with a single online HPLC instrument was designed, enabling accurate quantification of each reaction and an in-depth understanding of the reaction pathways. Notably, integration of a Bayesian optimisation algorithm with an adaptive expected improvement acquisition function identified an 81% overall yield in just 14 hours, and revealed the favourable competing pathway for formation of the desired product.
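For orientation, a generic Bayesian optimisation step with a standard expected improvement acquisition is sketched below; the platform in the talk uses an adaptive EI variant and optimises a telescoped sequence, neither of which is reproduced here. The reaction variables, bounds and yields are hypothetical.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(x_cand, gp, y_best, xi=0.01):
    """Standard expected improvement for maximising yield; the platform described
    in the talk uses an adaptive variant, so treat this as a generic baseline."""
    mu, sigma = gp.predict(x_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - y_best - xi) / sigma
    return (mu - y_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# Hypothetical loop: conditions = [residence time (min), temperature (C), equivalents]
X = np.array([[2.0, 80.0, 1.1], [5.0, 100.0, 1.5], [8.0, 120.0, 2.0]])  # completed experiments
y = np.array([0.35, 0.58, 0.62])                                        # measured overall yields

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
candidates = np.random.default_rng(1).uniform([1, 60, 1.0], [10, 140, 3.0], (500, 3))
next_experiment = candidates[np.argmax(expected_improvement(candidates, gp, y.max()))]
```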

Name: Bao Nguyen

University: Leeds

Title: Toward Predicting Process Outcomes in Different Solvents: Solubility and Reactivity

Abstract:

Solvent selection is still one of the key challenges in sustainable chemical process development. Not only does the choice need to take into account the performance and selectivity of the reactions, it also has to lead to efficient workup/purification of the product and equipment cleaning. Most purification techniques, e.g. liquid-liquid extraction and crystallisation, depend on the solubility of the desired products and impurities in organic solvents and the aqueous phase. Thus, reliably predicting the solubility and reactivity of organic compounds in different solvents is a cornerstone of holistic chemical process design.

We report here a successful approach to solubility prediction in organic solvents and water using a combination of machine learning algorithms and computational chemistry [1]. Rational translation of the dissolution process into a numerical problem led to a small set of descriptors and to subsequent predictions which are relatively independent of the machine learning algorithm used. These models gave significantly more accurate predictions compared with open-access and commercial tools, i.e. the aqueous solubility prediction tools employed by the FDA and the ab initio tool COSMOTherm for solubility in organic solvents, achieving accuracy close to the expected level of noise in the training data (log S +/- 0.7). Importantly, they reproduced established physicochemical relationships between solubility and molecular properties in different solvents. This led to: (i) rational approaches to improve the accuracy of each model; (ii) accurate aqueous solubility prediction for novel compounds without any experimental data; and (iii) a much higher level of confidence in applying the models to novel and unknown compounds.
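The overall pipeline shape, descriptors in and a regression model out, can be sketched as below with a handful of cheap RDKit descriptors and a random forest; the published models use a rationally chosen descriptor set that also includes quantum-chemical quantities, and the SMILES strings and log S values here are illustrative only.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestRegressor

def featurise(smiles):
    """A handful of cheap 2D descriptors; the published models use a curated set
    that also includes quantum-chemical (computational chemistry) descriptors."""
    mol = Chem.MolFromSmiles(smiles)
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
            Descriptors.TPSA(mol), Descriptors.NumHDonors(mol),
            Descriptors.NumHAcceptors(mol), Descriptors.NumRotatableBonds(mol)]

# Hypothetical training data: SMILES strings with illustrative aqueous log S values
smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]
logS = [0.0, -1.6, -1.7]

X = np.array([featurise(s) for s in smiles])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, logS)
print(model.predict([featurise("CCN")]))   # predicted log S for a new compound
```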

The same approach was applied to predicting nucleophilicity in different organic solvents [2]. Inclusion of the solvent descriptors from the ACS Green Chemistry Solvent Selection Tool led to highly accurate prediction of solvent-dependent nucleophilicity. This has the potential to enable prediction of reaction outcomes in modern green solvents based on data obtained in traditional, non-sustainable solvents.

[1] Nature Comm. 2020, 11, 5753; [2] J. Chem. Info. Model. 2021, 61, 4890.

Name: Theodore Papamarkou

University: Manchester

Title: The premise of approximate MCMC in Bayesian deep learning

Abstract:

One of my primary research projects focuses on the development of approximate Markov chain Monte Carlo (MCMC) methods for Bayesian deep learning. Such methods are motivated by the problem of quantifying the uncertainty of predictions made by Bayesian neural networks.

Several challenges arise from sampling the parameter posterior of a neural network via MCMC, culminating in a lack of convergence to the parameter posterior. Despite the lack of convergence, the approximate predictive posterior distribution contains valuable information (Papamarkou et al., 2022 (https://projecteuclid.org/journals/statistical-science/volume-37/issue-3/Challenges-in-Markov-Chain-Monte-Carlo-for-Bayesian-Neural-Networks/10.1214/21-STS840.short)).

One step towards scaling MCMC methods to sample neural network parameters is based on evaluating the target density of a neural network on a subset (minibatch) of the data. By analogy to sampling data batches from a big dataset, I propose to sample subgroups of parameters from the neural network parameter space (Papamarkou, 2022 (https://arxiv.org/abs/2208.11389)). While minibatch MCMC induces an approximation to the target density, parameter subgrouping can be carried out via blocked Gibbs sampling without introducing an additional approximation.
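A minimal sketch of the subgroup idea is given below: parameters are partitioned into blocks and each block is updated in turn while the others are held fixed, with the target evaluated on a minibatch. A Metropolis step is used here as a stand-in for sampling each block's conditional, so the sketch is illustrative rather than the proposed sampler.

```python
import numpy as np

def subgroup_mh_step(theta, group_idx, log_post_minibatch, rng, step=0.05):
    """Metropolis update for one parameter subgroup, holding the rest fixed.
    log_post_minibatch evaluates a (minibatch-approximated) log posterior."""
    prop = theta.copy()
    prop[group_idx] += step * rng.standard_normal(len(group_idx))
    if np.log(rng.uniform()) < log_post_minibatch(prop) - log_post_minibatch(theta):
        return prop
    return theta

def blocked_gibbs_sampler(theta0, groups, log_post_minibatch, n_iter, rng):
    """Cycle through parameter subgroups (blocked Gibbs-style scan); each subgroup
    is updated by a Metropolis step on a minibatch estimate of the target."""
    theta, samples = theta0.copy(), []
    for _ in range(n_iter):
        for g in groups:
            theta = subgroup_mh_step(theta, g, log_post_minibatch, rng)
        samples.append(theta.copy())
    return np.array(samples)

# Toy usage: a 10-dimensional Gaussian target split into two parameter subgroups
rng = np.random.default_rng(0)
log_post = lambda th: -0.5 * np.sum(th ** 2)   # stands in for a minibatch log posterior
groups = [np.arange(0, 5), np.arange(5, 10)]
chain = blocked_gibbs_sampler(np.zeros(10), groups, log_post, n_iter=500, rng=rng)
```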

I will initially provide an overview of the developments for this project. Subsequently, I will outline plans for future computational and theoretical work that emerges from the proposal to sample each parameter subgroup separately.

Name: Oliver Kershaw

University: Leeds

Title: Machine learning directed multi-objective optimization of mixed variable chemical systems

Abstract:

The consideration of discrete variables (e.g. catalyst, ligand, solvent) in experimental self-optimization approaches remains a significant challenge. Herein I wish to present the application of a new mixed variable multi-objective optimization (MVMOO) algorithm for the self-optimization of chemical reactions. Coupling of the MVMOO algorithm with an automated continuous flow platform enables the identification of trade-off curves for different performance criteria by optimizing the continuous and discrete variables concurrently. This published approach utilizes a Bayesian methodology to provide high optimization efficiency, enhances process understanding by considering key interactions between the mixed variables, and requires no prior knowledge of the reaction. A nucleophilic aromatic substitution (SNAr) reaction and a palladium-catalyzed Sonogashira reaction were investigated, in which the effects of solvent and ligand selection on regioselectivity and process efficiency, respectively, were determined whilst simultaneously identifying the optimum continuous parameters in each case. Furthermore, this mixed-variable optimisation methodology has been applied to a multi-stage synthesis consisting of a first Heck reaction step followed by a second hydrolysis step, achieved using multi-point sampling along the flow system. This optimisation aims to maximise the separate step objectives via manipulation of the selected continuous and discrete variables (ligand choice).
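One ingredient, handling discrete and continuous variables in a single surrogate, can be sketched by one-hot encoding the categorical choice (here a hypothetical ligand label) alongside the continuous conditions and fitting a Gaussian process on the joint input; the MVMOO acquisition that resolves the multi-objective trade-off is not reproduced, and all data are made up.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.preprocessing import OneHotEncoder

# Hypothetical mixed-variable experiments: [temperature (C), residence time (min)] + ligand
X_cont = np.array([[80.0, 2.0], [100.0, 5.0], [120.0, 8.0]])
ligand = np.array([["XPhos"], ["SPhos"], ["XPhos"]])
yield_obs = np.array([0.41, 0.55, 0.63])

enc = OneHotEncoder().fit(ligand)
X = np.hstack([X_cont, enc.transform(ligand).toarray()])   # joint continuous + one-hot input

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, yield_obs)
mean, std = gp.predict(X, return_std=True)                 # surrogate prediction + uncertainty
```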

Name: Sam Kay

University: Manchester

Title: Integrating autoencoder and Bayesian methods for batch process soft-sensor design

Abstract:

Viscosity represents a key product quality indicator but has been difficult to measure in-process in real time. This is particularly true if the process involves complex mixing phenomena operated at dynamic conditions. To address this challenge, in this study we developed an innovative soft-sensor by integrating advanced Bayesian methods. The soft-sensor first employs a deep learning autoencoder to extract information-rich process features by compressing high-dimensional industrial data, and then adopts either a Bayesian neural network or a Gaussian process to simultaneously predict product viscosity and the associated uncertainty. To evaluate its performance, predictions of product viscosity were made for a number of industrial batches operated over different seasons. Furthermore, a heteroscedastic noise neural network was selected to benchmark against the Bayesian methods, comparing the validity of traditional frequentist approaches against probabilistic approaches. It is found that the Gaussian process based soft-sensor has both high accuracy and high reliability, indicating its potential for process monitoring and quality control.
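A minimal sketch of the two-stage soft-sensor structure is shown below: a small PyTorch autoencoder compresses the process measurements, and a scikit-learn Gaussian process regresses viscosity on the latent features, returning both a prediction and a standard deviation. The data are random placeholders and the architecture, latent size and kernel are assumptions, not the industrial configuration used in the study.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

class AutoEncoder(nn.Module):
    def __init__(self, n_in, n_latent=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 32), nn.ReLU(), nn.Linear(32, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(), nn.Linear(32, n_in))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Hypothetical batch-process data: 50 batches x 20 process measurements, plus viscosity labels
X = torch.randn(50, 20)
viscosity = np.random.default_rng(0).normal(100.0, 10.0, 50)

ae = AutoEncoder(n_in=20)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(500):                      # train the autoencoder on reconstruction error
    recon, _ = ae(X)
    loss = nn.functional.mse_loss(recon, X)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    _, Z = ae(X)                          # compressed, information-rich features

gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-2), normalize_y=True)
gp.fit(Z.numpy(), viscosity)
mean, std = gp.predict(Z.numpy(), return_std=True)   # viscosity prediction + uncertainty
```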
