
Additional modeling information

Implementing more complex experimental designs

Not many real modeling projects are as simple as this computer lab. In many cases, the experimental design consists of multiple steps, and such changes in the experiment have to be accounted for in the model somehow. There are two major ways to do this: either with multiple simulations or with events. In the examples below, let's assume that the cells are first allowed to rest, then a stimulus is added, then the stimulus is doubled, and finally the stimulus is removed and the cells are allowed to return to rest.

Using multiple simulations

With multiple simulations, each phase of the experiment is simulated separately: first the resting phase, then the phase with the stimulus, then the phase with the doubled stimulus, and finally the phase after the stimulus has been removed. The stimulus value is changed between the simulations, and the end state of one simulation is used as the initial state of the next, so that the phases together form one continuous experiment.
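
As a rough sketch of this idea, the snippet below uses Python and SciPy (the lab itself may use a different language or simulation toolbox) together with the hypothetical one-state model \(d/dt(A) = -k_1 \cdot A + u\) that also appears later in this document. All durations and values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x, k1, u):
    # Hypothetical one-state model: dA/dt = -k1*A + u
    A = x[0]
    return [-k1 * A + u]

k1 = 0.5
x0 = [0.0]                       # cells start at rest
phases = [(10.0, 0.0),           # (duration, stimulus u): rest
          (10.0, 1.0),           # stimulus added
          (10.0, 2.0),           # stimulus doubled
          (10.0, 0.0)]           # stimulus removed, back towards rest

t_all, A_all, t_offset = [], [], 0.0
for duration, u in phases:
    sol = solve_ivp(rhs, [0.0, duration], x0, args=(k1, u), dense_output=True)
    t = np.linspace(0.0, duration, 50)
    t_all.append(t + t_offset)
    A_all.append(sol.sol(t)[0])
    x0 = sol.y[:, -1]            # end state becomes the next initial state
    t_offset += duration

t_all = np.concatenate(t_all)    # one continuous time axis over all phases
A_all = np.concatenate(A_all)
```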

Using events

With events, the whole experiment is instead run as a single simulation, and the changes in the experiment are described as events: at specified time points (or when specified conditions are fulfilled), the value of the stimulus is changed, first to the stimulus level, then to the doubled level, and finally back to zero. Many simulation toolboxes support such events directly.
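
For a minimal illustration in plain Python/SciPy, which has no built-in mechanism for changing parameter values at given time points, the same design can be expressed by letting the stimulus be a piecewise function of time inside a single simulation; toolboxes with proper event support let you state the switch times more directly. All names and values below are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def u_of_t(t):
    # Piecewise stimulus, mimicking events at t = 10, 20 and 30
    if t < 10:
        return 0.0   # rest
    elif t < 20:
        return 1.0   # stimulus added
    elif t < 30:
        return 2.0   # stimulus doubled
    return 0.0       # stimulus removed

def rhs(t, x, k1):
    A = x[0]
    return [-k1 * A + u_of_t(t)]

k1 = 0.5
sol = solve_ivp(rhs, [0.0, 40.0], [0.0], args=(k1,),
                t_eval=np.linspace(0.0, 40.0, 401),
                max_step=1.0)    # keep steps small so the switches are not stepped over
```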

Using the model at different time-scales

All the parameters in a model have a magnitude, and their units typically contain a time component. As an example, a parameter value might correspond to "change in concentration per minute". This value would naturally be different if expressed as "change in concentration per hour". When comparing to data, the handling of different time-scales can be done in two different ways:

Rescaling the time points

Maybe the simplest way of handling different time-scales is to rescale the time points: the time points of the data (or of the simulation) are converted into the time unit that the model is written in, while the model itself is left unchanged. For example, if the model is written in minutes and the data is given in hours, the data time points are multiplied by 60 before simulating and comparing.
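
As a small sketch of this, assuming a model written per minute and hypothetical data time points given in hours:

```python
import numpy as np

# Hypothetical data time points, given in hours
t_data_hours = np.array([0.0, 0.5, 1.0, 2.0, 4.0])

# The model is written "per minute", so convert the time points to minutes
t_data_minutes = t_data_hours * 60

# ...and then simulate the unchanged model at t_data_minutes when comparing to the data
```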

Rescaling the ODEs

To visualize the issue, imagine a time-series plot of an overshoot, and now imagine how it would look if you let the simulation run for a much longer time. In the plot with the longer time axis, the rise and fall of the overshoot appear much steeper than in the short-time version. This corresponds to the derivatives being much larger, i.e. faster ODEs.

In other words, if we imagine a time step as a fixed, arbitrary step, independent of time-scale, what happens if we change the scale? In the long time-scale case, a single step would have to have a much higher impact, and conversely, in a shorter time-scale it would have to have a lower impact.

Therefore, to scale the ODEs, we need to multiply the ODEs by a time-scaling constant.

To go from minutes to hours, without rescaling the parameters, we would need to introduce the time-scaling constant in the following way: \(\frac{d}{dt}(A) = (-k_1 \cdot A + u) \cdot t_{scale}\), where \(t_{scale} = 60\).

This scaling needs to be applied to all ODEs in the model (unless it is not applicable for a specific equation), and the value of \(t_{scale}\) depends on which time-scales are being converted between.
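
As a hedged sketch of where the constant enters, again using Python/SciPy and the same hypothetical one-state model (the toolbox used in the lab may look different):

```python
from scipy.integrate import solve_ivp

t_scale = 60.0   # going from minutes to hours

def rhs_hours(t, x, k1, u):
    A = x[0]
    # Same right-hand side as before, multiplied by the time-scaling constant
    return [(-k1 * A + u) * t_scale]

# t is now interpreted in hours, even though k1 is still expressed "per minute"
sol = solve_ivp(rhs_hours, [0.0, 2.0], [0.0], args=(0.5, 1.0))
```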

Introduction to profile-likelihood

In a profile-likelihood analysis, we plot one parameter's value on the x-axis and the cost of the whole parameter set on the y-axis. Such a curve is typically referred to as a profile-likelihood curve. To get the full range of values for the parameter, it is common practice to fix the parameter being investigated, by removing it from the list of parameters being optimized and setting it explicitly in the cost function, and then to re-optimize the remaining parameters. This is done several times, each time with the parameter fixed to a new value spanning a range, to see how the cost is affected by changes in the value of this specific parameter (Figure 4). If the resulting curve is well defined and passes a threshold for rejection in both the positive and the negative direction (Figure 4A), we say that the parameter is identifiable (that is, we can define a limited interval of possible values), and if instead the cost stays low in the positive and/or the negative direction, we say that the parameter is unidentifiable (Figure 4B-D). This whole procedure is repeated for each model parameter.

Figure 4: Examples of identifiable (A) and unidentifiable (B-D) parameters. The blue line is the threshold for rejection, the red line is the value of the cost function for different values of the parameter.
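
The sketch below illustrates the fix-and-re-optimize loop in Python, with scipy.optimize.minimize standing in for whichever optimizer the lab actually uses; the cost function is a hypothetical stand-in and would in practice simulate the model and compare to data.

```python
import numpy as np
from scipy.optimize import minimize

def cost(theta):
    # Stand-in for the real cost function; in practice this would simulate the
    # model with parameters `theta` and sum the squared, SEM-weighted residuals.
    return float(np.sum((theta - np.array([1.0, 0.3, 2.0])) ** 2))

def profile_likelihood(theta_opt, index, values):
    """Fix parameter `index` at each value in `values` and re-optimize the rest."""
    free = [i for i in range(len(theta_opt)) if i != index]
    profile = []
    for value in values:
        def cost_fixed(theta_free):
            theta = np.array(theta_opt, dtype=float)
            theta[index] = value         # the profiled parameter is fixed...
            theta[free] = theta_free     # ...and only the remaining ones are optimized
            return cost(theta)
        res = minimize(cost_fixed, np.asarray(theta_opt, dtype=float)[free],
                       method="Nelder-Mead")
        profile.append(res.fun)
    return np.array(profile)

theta_opt = np.array([1.0, 0.3, 2.0])              # best parameter set found earlier
values = np.linspace(0.1, 3.0, 25)                 # range of fixed values for parameter 0
profile = profile_likelihood(theta_opt, 0, values)
# Plot `values` on the x-axis and `profile` on the y-axis, together with the threshold
```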

In a prediction profile-likelihood analysis, predictions instead of parameter values are investigated over the whole range of possible values. Predictions cannot be fixed the way parameters can, and therefore the prediction profile-likelihood analysis is often more computationally heavy to perform than the profile-likelihood analysis. Instead of fixing the prediction, a term is added to the cost to force the optimization towards a certain value of the prediction, while at the same time minimizing the residuals. This results in an objective function similar to the following: \(v(\theta) = \sum_{\forall t} \frac{\left(y_t - \hat{y}_t(\theta)\right)^2}{SEM_t^2} + \lambda \left(p - \hat{p}\right)\). Again, the whole range of values of the prediction is iterated through. A prediction profile-likelihood analysis can be used in the search for core predictions.
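
A minimal sketch of such an objective function, assuming a hypothetical simulate helper that returns both the simulated observables \(\hat{y}\) and the prediction \(\hat{p}\) for a given \(\theta\):

```python
import numpy as np

def v(theta, p_target, lam, y_data, sem_data, simulate):
    # `simulate` is a hypothetical helper returning (y_hat, p_hat) for a given theta:
    # the simulated observables at the data time points and the prediction of interest.
    y_hat, p_hat = simulate(theta)
    residuals = np.sum((y_data - y_hat) ** 2 / sem_data ** 2)
    penalty = lam * (p_target - p_hat)   # the extra term from the equation above
    return residuals + penalty
```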

A simple (but not ideal) approach

Here, we will not fix one parameter at a time and re-optimize; instead, we will use the already collected data. This is worse than fixing the parameters, but we do it for the sake of time. Furthermore, as opposed to the uncertainty plots before, we do not have to select some parameter sets; we can instead use all of our collected sets. The easiest way to do a profile-likelihood plot is to simply plot the cost against the values of one parameter, and then repeat this for all parameters.

To make it slightly prettier, one can filter the parameters to only keep the lowest cost for each parameter value. This can be done using the function unique. It is likely a good idea to first round your parameter values to fewer decimals. This can be done with round. Note that round only rounds to integers; to get a specific decimal precision, multiply and then divide the parameter values by a suitable factor, e.g. round(p*100)/100.
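
A hedged sketch of this filtering, here using NumPy's unique and round as stand-ins for the unique and round functions mentioned above, and with placeholder arrays for the collected parameter sets and costs:

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed to already exist from the parameter collection:
#   all_params: (n_sets, n_parameters) array of collected parameter sets
#   all_costs:  (n_sets,) array with the corresponding costs
all_params = np.random.default_rng(0).lognormal(size=(1000, 3))   # placeholder
all_costs = np.random.default_rng(1).chisquare(5, size=1000)      # placeholder

i = 0                                        # which parameter to profile
p = np.round(all_params[:, i] * 100) / 100   # round to two decimals

# Keep only the lowest cost for each (rounded) parameter value
unique_values = np.unique(p)
lowest_cost = np.array([all_costs[p == value].min() for value in unique_values])

plt.plot(unique_values, lowest_cost)
plt.xlabel("parameter value")
plt.ylabel("cost")
plt.show()
```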