Secondary abstract:
Biological computing is heading towards a new era of processing platforms based on the bio-logical computer structures that lie at the heart of biological systems with information-processing capabilities. These bio-logical computer structures are mostly based on gene regulatory networks, mainly because their dynamics resembles the functioning of logic structures in digital computers. The use of these bio-structures is still in its early days, since they are for the time being far less efficient than their silicon counterparts. However, they can already be exploited in a wide range of pharmacological, medical and industrial applications. In order to develop such applications, precise design based on computational modelling is vital for their implementation.
Gene regulatory networks can be described as chemical reaction systems. The dynamics of such systems is defined at the molecular level by a set of interacting reactions. The stochastic simulation algorithm can be used to generate the time-evolution trajectories of each chemical species by firing reactions according to a Monte Carlo experiment. The main shortcoming of this approach is its computational complexity, which increases linearly with the total number of reactions that have to be simulated. When the number of reactions becomes too high, the stochastic simulation algorithm turns out to be impractical. This is the case for certain gene regulatory networks, which can either be found in nature or be constructed artificially. An additional problem lies in the fact that reactions in such networks often occur on different time scales, which can differ by many orders of magnitude. Such a scenario occurs when gene regulatory networks contain multiple cis-regulatory binding sites to which different transcription factors can bind non-cooperatively. Transcription factor binding occurs much faster than the average reaction in gene expression; this time-scale gap therefore needs to be accounted for in the simulation. Moreover, transcription control can be affected by specific configurations of the bound transcription factors, which can only be simulated if all the reactions that can produce such configurations are defined. The number of such reactions increases exponentially with the number of binding sites.
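For concreteness, the following minimal Python sketch implements the standard stochastic simulation algorithm (Gillespie's direct method) described above; the two-reaction gene-expression model in the usage example at the end, together with its rate constants, is purely hypothetical and only illustrates how state-change vectors and propensities enter the algorithm.

    import numpy as np

    def ssa_direct(x0, stoich, propensities, t_end, rng=None):
        # Minimal Gillespie direct-method SSA: returns one trajectory as a
        # list of (time, state) pairs.
        #   x0           -- initial copy numbers of the species
        #   stoich       -- state-change vectors, one row per reaction
        #   propensities -- function mapping the current state to the
        #                   propensity of every reaction
        rng = rng or np.random.default_rng()
        t, x = 0.0, np.array(x0, dtype=int)
        trajectory = [(t, x.copy())]
        while t < t_end:
            a = propensities(x)
            a0 = a.sum()
            if a0 == 0:                        # no reaction can fire any more
                break
            t += rng.exponential(1.0 / a0)     # waiting time to the next reaction
            j = rng.choice(len(a), p=a / a0)   # index of the reaction that fires
            x += stoich[j]
            trajectory.append((t, x.copy()))
        return trajectory

    # Hypothetical example: constitutive transcription (0 -> mRNA) and
    # first-order mRNA degradation (mRNA -> 0), with made-up rate constants.
    stoich = np.array([[+1], [-1]])
    props = lambda x: np.array([0.5, 0.1 * x[0]])
    trajectory = ssa_direct([0], stoich, props, t_end=100.0)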
In order to decrease the time complexity of the stochastic simulation algorithm for such gene regulatory networks, an alternative algorithm called the dynamic multi-scale stochastic simulation algorithm (DMSSA) is proposed, in which the reactions involved in transcription regulation can be simulated independently by performing the stochastic simulation algorithm in a nested fashion. This requires that the set of reactions describing the gene regulatory network can be divided into two subsets, i.e. a set of "fast" reactions, which occur frequently on a short time scale, and a set of "slow" reactions, which occur less frequently on longer time scales. The thesis demonstrates the equivalence between this approach and the standard stochastic simulation algorithm and shows its capabilities on two gene regulatory models that are commonly used as examples in systems and synthetic biology.
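The Python sketch below only illustrates the general idea of nesting one stochastic simulation inside another, assuming the reactions have already been partitioned into "fast" and "slow" subsets; it is not the actual DMSSA, whose coupling of the two time scales is defined in the thesis, and the fixed number of inner iterations used here is an arbitrary simplification.

    import numpy as np

    def nested_ssa(x0, slow, fast, t_end, inner_steps=200, rng=None):
        # Schematic two-scale simulation: before every slow event, the fast
        # subsystem (e.g. transcription factor binding/unbinding) is relaxed
        # with an inner SSA over a fixed number of steps.
        #   slow, fast -- (stoichiometry matrix, propensity function) pairs
        rng = rng or np.random.default_rng()
        t, x = 0.0, np.array(x0, dtype=int)
        while t < t_end:
            # inner SSA over the fast reactions only (no time advancement here)
            for _ in range(inner_steps):
                a = fast[1](x)
                if a.sum() == 0:
                    break
                x += fast[0][rng.choice(len(a), p=a / a.sum())]
            # one outer SSA step over the slow reactions
            a = slow[1](x)
            a0 = a.sum()
            if a0 == 0:
                break
            t += rng.exponential(1.0 / a0)
            x += slow[0][rng.choice(len(a), p=a / a0)]
        return t, x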
The thesis further focuses on how to identify the input parameters of multi-scale models that affect the behaviour of the system the most. This is a common practice during the design of bio-logical structures and can be achieved with sensitivity analysis. Such analysis may be difficult to carry out for complex reaction networks exhibiting different time scales. In order to cope with this issue, an alternative computation of the elementary effects in the Morris screening method is proposed, which is able to rank all the model parameters in order of importance, i.e. by the influence they carry on the response of the model, independently of their structural or time-scale definitions.
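For reference, the sketch below computes elementary effects and the mu* importance measure in the standard (textbook) Morris formulation; the alternative computation proposed in the thesis for multi-scale models differs from this, and the grid settings and the way the model output is summarised here are assumptions made only for illustration.

    import numpy as np

    def morris_mu_star(model, lower, upper, r=20, p=4, rng=None):
        # Standard Morris screening: for r random base points, perturb each
        # parameter by delta and record the absolute elementary effect
        #   |y(theta + delta * e_i) - y(theta)| / delta;
        # mu* is the mean of these effects, larger meaning more influential.
        rng = rng or np.random.default_rng()
        lower, upper = np.asarray(lower, float), np.asarray(upper, float)
        k = lower.size
        delta = p / (2.0 * (p - 1))
        ee = np.zeros((r, k))
        for rep in range(r):
            # base point on a p-level grid in normalised [0, 1] coordinates
            z = rng.integers(0, p - 1, size=k) / (p - 1)
            y0 = model(lower + z * (upper - lower))
            for i in range(k):
                z_i = z.copy()
                # step up by delta, or down if that would leave the unit cube
                z_i[i] += delta if z_i[i] + delta <= 1.0 else -delta
                y1 = model(lower + z_i * (upper - lower))
                ee[rep, i] = abs(y1 - y0) / delta
        return ee.mean(axis=0)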
To ease the use of the simulation algorithm and of the sensitivity analysis, the thesis presents ParMSSA, an OpenCL-based engine for performing parallel stochastic simulations on multi-core architectures. ParMSSA aims to accelerate the simulations performed with the proposed approach. ParMSSA is capable of running multiple instances of DMSSA concurrently, which is usually needed to reduce the noise in the results of stochastic simulations. ParMSSA also provides a framework for performing the Morris screening experiment on reaction networks, which allows users to carry out the sensitivity analysis of the observed systems. The simulation results provided by ParMSSA are easy to interpret and can be used to assess the robustness of bio-logical computer structures.
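As a loose illustration of why many concurrent replicates are run, the sketch below averages independent stochastic replicates in parallel with Python's multiprocessing module; it does not use ParMSSA's actual interface or OpenCL, and the single-replicate function is a stand-in for a real DMSSA run.

    import numpy as np
    from multiprocessing import Pool

    def one_replicate(seed):
        # Placeholder for a single stochastic simulation run: here it just
        # draws a Poisson-distributed final copy number of one species.
        rng = np.random.default_rng(seed)
        return rng.poisson(50)

    if __name__ == "__main__":
        # Run many independent replicates in parallel and average them to
        # smooth out the stochastic noise of individual trajectories.
        with Pool() as pool:
            finals = pool.map(one_replicate, range(1000))
        print(np.mean(finals), np.std(finals))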
The proposed algorithms and the simulation engine were applied to two case studies, i.e. the Epstein-Barr virus genetic switch and a synthetic repressilator with multiple transcription factor binding sites. The results of the sensitivity analysis of the repressilator revealed that larger numbers of binding sites increase the robustness of the system and thus the robustness of its oscillatory behaviour.