I am trying to build a convolutional VAE. The inputs are images (tensors) of size 320x800, and the output should have the same size as the input.
The problem is that I can't understand how the output size after ConvTranspose2d is calculated, so the final output size I obtain is different from the input size.
This is my code: feeding random tensors of size [3, 320, 800] as input, I obtain tensors of size [3, 3, 64, 160]. Note that there are 3 channels because I am using RGB images. I hope you can help me.
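For reference, with the default dilation = 1, PyTorch computes the spatial output sizes as follows (per the Conv2d and ConvTranspose2d documentation; the same formulas apply to the width):

H_out = floor((H_in + 2*padding - kernel_size) / stride) + 1                     (Conv2d)
H_out = (H_in - 1)*stride - 2*padding + kernel_size + output_padding             (ConvTranspose2d, output_padding defaults to 0)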
import torch
import torch.nn as nn
import torch.nn.functional as F

init_channels = 64    # initial number of filters
image_channels = 3    # color channels
latent_dim = 100      # number of features to consider

# Define the model
# Initial image size = 320 x 800
class ConvVAE(nn.Module):
    def __init__(self):
        super(ConvVAE, self).__init__()
        # Encoder
        # output size after this layer: 160 x 400
        self.enc1 = nn.Conv2d(
            in_channels=image_channels, out_channels=init_channels,
            kernel_size=4, stride=2, padding=1
        )
        # output size after this layer: 80 x 200
        self.enc2 = nn.Conv2d(
            in_channels=init_channels, out_channels=init_channels*2,
            kernel_size=4, stride=2, padding=1
        )
        # output size after this layer: 40 x 100
        self.enc3 = nn.Conv2d(
            in_channels=init_channels*2, out_channels=init_channels*4,
            kernel_size=4, stride=2, padding=1
        )
        # output size after this layer: 20 x 50
        self.enc4 = nn.Conv2d(
            in_channels=init_channels*4, out_channels=init_channels*8,
            kernel_size=4, stride=2, padding=1
        )
        # output size after this layer: 10 x 25
        self.enc5 = nn.Conv2d(
            in_channels=init_channels*8, out_channels=init_channels*16,
            kernel_size=4, stride=2, padding=1
        )
        # output size after this layer: 4 x 10
        self.enc6 = nn.Conv2d(
            in_channels=init_channels*16, out_channels=1024,
            kernel_size=(2, 5), stride=2, padding=0
        )
        self.fc1 = nn.Linear(1024, 2048)
        self.fc_mu = nn.Linear(2048, latent_dim)
        self.fc_log_var = nn.Linear(2048, latent_dim)
        self.fc2 = nn.Linear(latent_dim, 1024)
        # Decoder
        self.dec1 = nn.ConvTranspose2d(
            in_channels=1024, out_channels=init_channels*16,
            kernel_size=(2, 5), stride=2, padding=0
        )
        self.dec2 = nn.ConvTranspose2d(
            in_channels=init_channels*16, out_channels=init_channels*8,
            kernel_size=4, stride=2, padding=1
        )
        self.dec3 = nn.ConvTranspose2d(
            in_channels=init_channels*8, out_channels=init_channels*4,
            kernel_size=4, stride=2, padding=1
        )
        self.dec4 = nn.ConvTranspose2d(
            in_channels=init_channels*4, out_channels=init_channels*2,
            kernel_size=4, stride=2, padding=1
        )
        self.dec5 = nn.ConvTranspose2d(
            in_channels=init_channels*2, out_channels=init_channels,
            kernel_size=4, stride=2, padding=1
        )
        self.dec6 = nn.ConvTranspose2d(
            in_channels=init_channels, out_channels=image_channels,
            kernel_size=4, stride=2, padding=1
        )

    def reparameterize(self, mu, log_var):
        """
        :param mu: mean from the encoder's latent space
        :param log_var: log variance from the encoder's latent space
        """
        std = torch.exp(0.5 * log_var)  # standard deviation
        eps = torch.randn_like(std)     # `randn_like` as we need the same size
        sample = mu + (eps * std)       # sampling
        return sample

    def forward(self, x):
        # encoding
        x = F.relu(self.enc1(x))
        x = F.relu(self.enc2(x))
        x = F.relu(self.enc3(x))
        x = F.relu(self.enc4(x))
        x = F.relu(self.enc5(x))
        x = F.relu(self.enc6(x))
        batch, _, _, _ = x.shape
        x = F.adaptive_avg_pool2d(x, 1).reshape(batch, -1)
        hidden = self.fc1(x)
        # get `mu` and `log_var`
        mu = self.fc_mu(hidden)
        log_var = self.fc_log_var(hidden)
        # get the latent vector through reparameterization
        z = self.reparameterize(mu, log_var)
        z = self.fc2(z)
        z = z.view(-1, 1024, 1, 1)
        # decoding
        x = F.relu(self.dec1(z))
        x = F.relu(self.dec2(x))
        x = F.relu(self.dec3(x))
        x = F.relu(self.dec4(x))
        x = F.relu(self.dec5(x))
        reconstruction = torch.sigmoid(self.dec6(x))
        return reconstruction, mu, log_var
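For what it's worth, plugging the decoder layers into the ConvTranspose2d formula above explains the 64 x 160 spatial size of the reconstruction (a size check only): after adaptive_avg_pool2d(x, 1) and z.view(-1, 1024, 1, 1) the decoder always starts from a 1 x 1 feature map, so

dec1 (kernel (2,5), stride 2, padding 0):  1 x 1  ->  2 x 5
dec2 to dec6 (kernel 4, stride 2, padding 1): each doubles height and width: 2 x 5 -> 4 x 10 -> 8 x 20 -> 16 x 40 -> 32 x 80 -> 64 x 160

so the output is fixed at 64 x 160 regardless of the input size seen by the encoder.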
Briefly, I am facing a strange performance difference between equivalent implementations of Deep Embedded Clustering (DEC) in R and in Python; I have included links to the implementations below.
My question:
According to the following results and figures (from the R implementation), the R code converges at the stopping condition by assigning all the observations (i.e. data samples) to a single cluster (see Figure 2).
Figure 1. Labeling after the initial k-means step
Figure 2. Convergence to a single cluster at the stopping condition (final labeling)
Here is the complete code in R:
library(keras)
K <- keras::backend()
library(MLmetrics)
library(aricode)
#' Autoencoder model for Deep Embedded Clustering -----------------------------------------------------------
createAutoencoderModel <- function( numberOfUnitsPerLayer,
activation = 'relu',
initializer = 'glorot_uniform' )
{
numberOfEncodingLayers <- length( numberOfUnitsPerLayer ) - 1
# input of AE
inputs <- layer_input( shape = numberOfUnitsPerLayer[1],name = 'input' )
encoder <- inputs
# internal layers in encoder
for( i in seq_len( numberOfEncodingLayers - 1 ) )
{
encoder <- encoder %>%
layer_dense( units = numberOfUnitsPerLayer[i+1],
activation = activation, kernel_initializer = initializer )
}
# hidden layer
encoder <- encoder %>%
layer_dense( units = tail( numberOfUnitsPerLayer, 1 ) )
autoencoder <- encoder
# internal layers in decoder
for( i in seq( from = numberOfEncodingLayers, to = 2, by = -1 ) )
{
autoencoder <- autoencoder %>%
layer_dense( units = numberOfUnitsPerLayer[i],
activation = activation, kernel_initializer = initializer )
}
# output
autoencoder <- autoencoder %>%
layer_dense( numberOfUnitsPerLayer[1], kernel_initializer = initializer, name = 'decoder' )
return( list(
autoencoderModel = keras_model( inputs = inputs, outputs = autoencoder ),
encoderModel = keras_model( inputs = inputs, outputs = encoder ) ) )
}
# Definition of Clustering layer ---------------------------------------------------------------------------
ClusteringLayer <- R6::R6Class( "ClusteringLayer",
inherit = KerasLayer,
lock_objects = FALSE,
public = list(
numberOfClusters = 10,
initialClusterWeights = NULL,
alpha = 1.0,
name = '',
initialize = function( numberOfClusters,
initialClusterWeights = NULL, alpha = 1.0, name = '' )
{
self$numberOfClusters <- numberOfClusters
self$initialClusterWeights <- initialClusterWeights
self$alpha <- alpha
self$name <- name
},
build = function( input_shape )
{
if( length( input_shape ) != 2 )
{
stop( paste0( "input_shape is not of length 2." ) )
}
self$clusters <- self$add_weight(
shape = list( self$numberOfClusters, input_shape[[2]] ),
initializer = 'glorot_uniform', name = 'clusters' )
if( ! is.null( self$initialClusterWeights ) )
{
self$set_weights( self$initialClusterWeights )
self$initialClusterWeights <- NULL
}
self$built <- TRUE
},
call = function( inputs, mask = NULL )
{
# Uses Student t-distribution (same as t-SNE)
# inputs are the variable containing the data, shape = ( numberOfSamples, numberOfFeatures )
K <- keras::backend()
q <- 1.0 / ( 1.0 + ( K$sum( K$square(
K$expand_dims( inputs, axis = 1L ) - self$clusters ), axis = 2L ) / self$alpha ) )
q <- q^( ( self$alpha + 1.0 ) / 2.0 )
q <- K$transpose( K$transpose( q ) / K$sum( q, axis = 1L ) )
return( q )
},
compute_output_shape = function( input_shape )
{
return( list( input_shape[[1]], self$numberOfClusters ) )
}
)
)
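# For reference, this layer computes the standard DEC soft assignments (Xie et al. 2016),
# a Student's t kernel between the embedded point z_i and cluster centre mu_j:
#   q_ij = (1 + ||z_i - mu_j||^2 / alpha)^(-(alpha + 1)/2), normalised over the clusters j,
# and targetDistribution() further below sharpens q into the auxiliary target
#   p_ij = (q_ij^2 / f_j) / sum_j'(q_ij'^2 / f_j'),  where f_j = sum_i q_ij.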
layer_clustering <- function( object,
numberOfClusters, initialClusterWeights = NULL,
alpha = 1.0, name = '' )
{
create_layer( ClusteringLayer, object,
list( numberOfClusters = numberOfClusters,
initialClusterWeights = initialClusterWeights,
alpha = alpha, name = name )
)
}
#' Deep embedded clustering (DEC) model class --------------------------------------------------------------
DeepEmbeddedClusteringModel <- R6::R6Class( "DeepEmbeddedClusteringModel",
inherit = NULL,
lock_objects = FALSE,
public = list(
numberOfUnitsPerLayer = NULL,
numberOfClusters = 10,
alpha = 1.0,
initializer = 'glorot_uniform',
convolutional = FALSE,
inputImageSize = NULL,
initialize = function( numberOfUnitsPerLayer,
numberOfClusters, alpha = 1.0, initializer = 'glorot_uniform',
convolutional = FALSE, inputImageSize = NULL )
{
self$numberOfUnitsPerLayer <- numberOfUnitsPerLayer
self$numberOfClusters <- numberOfClusters
self$alpha <- alpha
self$initializer <- initializer
self$convolutional <- convolutional
self$inputImageSize <- inputImageSize
ae <- createAutoencoderModel( self$numberOfUnitsPerLayer,
initializer = self$initializer )
self$autoencoder <- ae$autoencoderModel
self$encoder <- ae$encoderModel
# prepare DEC model
clusteringLayer <- self$encoder$output %>%
layer_clustering( self$numberOfClusters, name = "clustering" )
self$model <- keras_model( inputs = self$encoder$input, outputs = clusteringLayer )
},
pretrain = function( x, optimizer = 'adam', epochs = 200L, batchSize = 256L )
{
self$autoencoder$compile( optimizer = optimizer, loss = 'mse' )
self$autoencoder$fit( x, x, batch_size = batchSize, epochs = epochs )
},
loadWeights = function( weights )
{
self$model$load_weights( weights )
},
extractFeatures = function( x )
{
self$encoder$predict( x, verbose = 0 )
},
predictClusterLabels = function( x )
{
clusterProbabilities <- self$model$predict( x, verbose = 0 )
return( max.col( clusterProbabilities ) )
},
targetDistribution = function( q )
{
weight <- q^2 / colSums( q )
p <- t( t( weight ) / rowSums( weight ) )
return( p )
},
compile = function( optimizer = 'sgd', loss = 'kld', lossWeights = NULL )
{
self$model$compile( optimizer = optimizer, loss = loss, loss_weights = lossWeights )
},
fit = function( x, maxNumberOfIterations = 2e4, batchSize = 256L, tolerance = 1e-3, updateInterval = 10)
{
# Initialize clusters using k-means
km <- stats::kmeans( self$encoder$predict( x, verbose = 0 ),
centers = self$numberOfClusters, nstart = 20 )
currentPrediction <- km$cluster # fitted( km )
previousPrediction <- currentPrediction
self$model$get_layer( name = 'clustering' )$set_weights( list( km$centers ) )
# Deep clustering
loss <- 10000
index <- 0
indexArray <- 1:( dim( x )[1] )
for( i in seq_len( maxNumberOfIterations ) )
{
if( i %% updateInterval == 1 )
{
q <- self$model$predict( x, verbose = 0 )
p <- self$targetDistribution( q )
# Met stopping criterion
currentPrediction <- max.col( q )
plot(currentPrediction, col="blue")
title(main = 'Current prediction')
deltaLabel <- sum( currentPrediction != previousPrediction ) / length( currentPrediction )
cat( "Itr", i, ": ( out of", maxNumberOfIterations,
"): loss = [", unlist( loss ), "], deltaLabel =", deltaLabel,
", ACC= ", Accuracy(previousPrediction, currentPrediction),
", NMI= ", NMI(previousPrediction, currentPrediction), "\n", sep = ' ' )
previousPrediction <- currentPrediction
if( i > 1 && deltaLabel < tolerance )
{
print('Reached tolerance threshold. Stopping training......')
break
}
}
# train on batch
batchIndices <- indexArray[( index * batchSize + 1 ):min( ( index + 1 ) * batchSize, dim( x )[1] )]
loss <- self$model$train_on_batch( x = x[batchIndices,], y = p[batchIndices,] )
if( ( index + 1 ) * batchSize + 1 <= dim( x )[1] )
{
index <- index + 1
} else {
index <- 0
}
}
return( currentPrediction )
}
)
)
# loading dataset ---------------------------------------------------------------------------------------------
fmnist <- dataset_fashion_mnist()
numberOfTrainingData <- length( fmnist$train$y )
numberOfTestingData <- length( fmnist$test$y )
numberOfPixels <- prod( dim( fmnist$test$x[1,,] ) )
fmnist$train$xreshaped <- array_reshape( fmnist$train$x,
dim = c( numberOfTrainingData, numberOfPixels ), order = "C" )
fmnist$test$xreshaped <- array_reshape( fmnist$test$x,
dim = c( numberOfTestingData, numberOfPixels ), order = "C" )
x <- rbind( fmnist$test$xreshaped, fmnist$train$xreshaped )/255.0
y <- c( fmnist$test$y, fmnist$train$y )
numberOfClusters <- length( unique( fmnist$train$y ) )
initializer <- initializer_variance_scaling(
scale = 1/3, mode = 'fan_in', distribution = 'uniform' )
pretrainOptimizer <- optimizer_sgd( lr = 1.0, momentum = 0.9 )
decModel <- DeepEmbeddedClusteringModel$new(
numberOfUnitsPerLayer = c( numberOfPixels, 32, 32, 256, 10 ),
numberOfClusters = numberOfClusters, initializer = initializer )
decModel$pretrain( x = x, optimizer = optimizer_sgd( lr = 1.0, momentum = 0.9 ),
epochs = 10L, batchSize = 256L )
decModel$compile( optimizer = optimizer_sgd( lr = 1.0, momentum = 0.9 ), loss = 'kld' )
yPredicted <- decModel$fit( x, maxNumberOfIterations = 2e4, batchSize = 256,
tolerance = 1e-3, updateInterval = 10 )
Train on 70000 samples
Epoch 1/10
70000/70000 [==============================] - 4s 60us/sample - loss: 0.0795
Epoch 2/10
70000/70000 [==============================] - 3s 45us/sample - loss: 0.0651
Epoch 3/10
70000/70000 [==============================] - 3s 46us/sample - loss: 0.0470
Epoch 4/10
70000/70000 [==============================] - 3s 45us/sample - loss: 0.0430
Epoch 5/10
70000/70000 [==============================] - 3s 45us/sample - loss: 0.0402
Epoch 6/10
70000/70000 [==============================] - 3s 45us/sample - loss: 0.0359
Epoch 7/10
70000/70000 [==============================] - 3s 45us/sample - loss: 0.0345
Epoch 8/10
70000/70000 [==============================] - 3s 45us/sample - loss: 0.0337
Epoch 9/10
70000/70000 [==============================] - 3s 45us/sample - loss: 0.0326
Epoch 10/10
70000/70000 [==============================] - 3s 45us/sample - loss: 0.0311
<tensorflow.python.keras.callbacks.History>
Figure 3. Training with 10 epochs
Itr 1 : ( out of 20000 ): loss = [ 10000 ], deltaLabel = 0 , ACC= 1 , NMI= 1
Itr 11 : ( out of 20000 ): loss = [ 1.02756 ], deltaLabel = 0.8403571 , ACC= 0.1596429 , NMI= 0.2638058
Itr 21 : ( out of 20000 ): loss = [ 1.016267 ], deltaLabel = 0.3924 , ACC= 0.6076 , NMI= 0
Itr 31 : ( out of 20000 ): loss = [ 1.467916 ], deltaLabel = 0 , ACC= 1 , NMI= NaN
[1] "Reached tolerance threshold. Stopping training......"
Figure 4. Fitting the DEC model
This is in contrast to the Python implementation, which behaves as expected (Figures 5 and 6).
Figure 5. Initializing the labels with k-means
Figure 6. Final labeling after the model converges (x-axis: samples, y-axis: labels)
Could you please let me know why this is happening?
I have tried other loss functions (for example "categorical_crossentropy"), but the same phenomenon occurs.
Information about the machine:
Python 3.7 launched from Spyder 4.1
RStudio Version 1.2.5033
Dataset "mnist" for both implementations
The implementation in Python: https://www.dropbox.com/s/ii3k7rklz7z6446/DEC_original.py?dl=0
I'm using the R interface to Keras (for the 1st time) with a timeseries_generator per the reprex below. How do I get fit_generator to complete all epochs?
I'm looking at custom generators as a possible alternative, but would ideally like to use the timeseries_generator assuming it is appropriate to my intended usage.
The intent is to predict the actual values several steps ahead by using the "feature" and three lagged copies of the actuals as predictors.
I'm assuming that differencing is not required since the feature is expected to indicate trend.
library(tidyverse)
library(keras)
# Function to generate multiple lags
# See: https://gist.github.com/romainfrancois/469ed8224ba4be48befec28cb1e1ff80
many_lags <- function(var, n = 10) {
var <- enquo(var)
indices <- seq_len(n)
map(indices, ~ quo(lag(!!var, !!.x))) %>%
set_names(sprintf("lag_%s_%02d", rlang::quo_text(var), indices))
}
# --- Simulate time-series
set.seed(42)
lags <- 3
samples <- 1000
ts <- tibble(time = 1:(1000 + lags)) %>%
mutate(
actual = seq(0.5, (1000 + lags) / 2, by = 0.5) + arima.sim(model = list(
ar = c(0.2, 0.3),
ma = 0.2,
order = c(2, 1, 1)
), 999 + lags) %>% as.double(),
feature = seq(0.5, (1000 + lags) / 2, by = 0.5)
) %>%
mutate(!!!many_lags(actual, lags)) %>%
tail(samples)
ts %>% ggplot(aes(time, actual)) +
geom_line()
# --- Centre & scale predictors
test_window <- 200
temp <- ts %>%
head(-test_window)
col_mean <- map_dbl(temp, mean) %>% replace(c(1:2), 0)
col_sd <- map_dbl(temp, sd) %>% replace(c(1:2), 1)
train_data <- ts %>%
head(-test_window) %>%
scale(col_mean, col_sd)
test_data <- ts %>%
tail(test_window) %>%
scale(col_mean, col_sd)
# --- Create generator objects
batch_size = 5
length <- 5 # forecast timesteps
train_gen <- timeseries_generator(
train_data[, 3:6],
targets = train_data[, 2],
length = length,
sampling_rate = 1,
stride = 1,
start_index = 1,
end_index = 600,
shuffle = FALSE,
reverse = FALSE,
batch_size = batch_size
)
val_gen <- timeseries_generator(
train_data[, 3:6],
targets = train_data[, 2],
length = length,
sampling_rate = 1,
stride = 1,
start_index = 601,
end_index = 800,
shuffle = FALSE,
reverse = FALSE,
batch_size = batch_size
)
test_gen <- timeseries_generator(
test_data[, 3:6],
targets = test_data[, 2],
length = length,
sampling_rate = 1,
stride = 1,
start_index = 1,
end_index = 200,
shuffle = FALSE,
reverse = FALSE,
batch_size = batch_size
)
# --- Train the model
model <- keras_model_sequential() %>%
layer_gru(
units = ncol(train_data[, 3:6]),
batch_size = batch_size,
input_shape = list(NULL, ncol(train_data[, 3:6]))
) %>%
layer_dense(units = 1)
summary(model)
#> Model: "sequential"
#> ___________________________________________________________________________
#> Layer (type) Output Shape Param #
#> ===========================================================================
#> gru (GRU) (5, 4) 108
#> ___________________________________________________________________________
#> dense (Dense) (5, 1) 5
#> ===========================================================================
#> Total params: 113
#> Trainable params: 113
#> Non-trainable params: 0
#> ___________________________________________________________________________
model %>% compile(loss = "mse",
optimizer = optimizer_rmsprop(),
metrics = list("mae"))
history <- model %>%
fit_generator(
train_gen,
steps_per_epoch = floor((train_gen$end_index - train_gen$start_index + 1) / batch_size),
epochs = 100,
validation_data = val_gen,
validation_steps = floor((val_gen$end_index - val_gen$start_index + 1) / batch_size)
)
Created on 2019-09-12 by the reprex package (v0.3.0)
I get the following messages:
2019-09-12 15:42:44.613462: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Epoch 1/100
119/119 [==============================] - 1s 10ms/step - loss: 20351.4386 - mean_absolute_error: 120.7280 - val_loss: 77980.3295 - val_mean_absolute_error: 279.1719
Epoch 2/100
WARNING:tensorflow:Your dataset iterator ran out of data; interrupting training. Make sure that your iterator can generate at least `steps_per_epoch * epochs` batches (in this case, 11900 batches). You may need to use the repeat() function when building your dataset.
0/119 [..............................] - ETA: 0s - loss: 0.0000e+00 - mean_absolute_error: 120.7280 - val_loss: 0.0000e+00 - val_mean_absolute_error: 0.0000e+00
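One thing worth checking (a sketch, not a confirmed fix): steps_per_epoch and validation_steps derived from end_index - start_index can ask for more batches than the generator can actually yield, because the first `length` observations of each index range have no target. The generator knows its own batch count, which you can query from R via reticulate (py_len() calls Python's len(), and timeseries_generator objects implement __len__):

library(reticulate)
n_train_batches <- py_len(train_gen)  # batches in one full pass over train_gen
n_val_batches <- py_len(val_gen)
history <- model %>%
  fit_generator(
    train_gen,
    steps_per_epoch = n_train_batches,
    epochs = 100,
    validation_data = val_gen,
    validation_steps = n_val_batches
  )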
I have the data below:
Kd1Par<-as.matrix(c(1,2,3))
Kd2Par<-as.matrix(c(1,2,3))
and the algorithm which uses nested for loops:
for (i in 1:length(Kd1Par)){
for (j in 1:length(Kd2Par)){
Kd1 <- Kd1Par[i]
Kd2 <- Kd2Par[j]
print(c(Kd1 = Kd1Par[i], Kd2 = Kd2Par[j]))
myDose[i, j] <- 10
print(c(Dose = myDose[i,j]))
}}
in order to give me this output:
Kd1 Kd2
1 1
Dose
10
Kd1 Kd2
1 2
Dose
10
Kd1 Kd2
1 3
Dose
10
Kd1 Kd2
2 1
Dose
10
Kd1 Kd2
2 2
Dose
10
Kd1 Kd2
2 3
Dose
10
Kd1 Kd2
3 1
Dose
10
Kd1 Kd2
3 2
Dose
10
Kd1 Kd2
3 3
Dose
10
The problem is that my real dataset is much bigger and the nested for loops, while straightforward, are very slow, so I would like to replace them with a method that gives me exactly the same result as above. Note that myDose[i, j] <- 10 is not always 10 in my real project; it comes from another calculation that gives a different result each time, but I set it to 10 here to simplify the problem.
# my app, in case it helps to understand the issue
library(deSolve)
library(caTools)
library(shiny)
library(ggplot2)
library(ggpubr)
library(minpack.lm)
library(reshape2)
library(pracma)
ui <- fluidPage(
# fluidRow(title='Schematic of Two Memb Bound Target ',
# img(src='twoMemBound.png',width='100%')),
plotOutput('PKPlot'),
actionButton(inputId = "click",
label = "Run"),
fluidRow(
column(4,
h6("Dosing regimen Parameters",style = "color:red",align="center"),
sliderInput("nIter", label = h6("Contour Smoothness"),
min = 2, max = 15, value = 3),
sliderInput("reqMinInh", label = h6("Minimum Inhibition"),
min = 10, max = 100, value = 90),
sliderInput("nd", label = h6("Number of Doses"),
min = 3, max = 100, value = 4),
# sliderInput("endTime", label = h6("Simulation time in Days"),
# min = 0, max = 500, value = 77),
sliderInput("tau", label = h6("Dosing interval in Days"),
min = 0.1, max = 50, value = 7),
sliderInput("BW", label = h6("Bodyweight in Kg"),
min = 60, max = 100, value = 70)
),
column(4,
h6("Drug Parameters",style = "color:red",align="center"),
sliderInput("CL", label = h6("Drug Clearance (L/day)"),
min = 0.1, max = 0.3, value = 0.24),
sliderInput("Vp", label = h6("Volume of Plasma Comp (L)"),
min = 0.1, max = 3, value = 3),
sliderInput("Kon1", label = h6("Drug Affinity for Target 1 (1/(nmol/L)/day)"),
min = 0.1, max = 2, value = 1.3824),
sliderInput("Kon2", label = h6("Drug Affinity for Target 2 (1/(nmol/L)/day)"),
min = 0.1, max = 2, value = 1.3824),
sliderInput("MW", label = h6("Molecular Weight in da"),
min = 50e3, max = 200e3, value = 150e3)
# sliderInput("Vph", label = h6("Volume of Peripheral Comp (L)"),
# min = 0.1, max = 5, value = 3.1),
# sliderInput("Vt", label = h6("Volume of Tissue Comp (L)"),
# min = 0.1, max = 0.2, value = 0.192),
# sliderInput("k_01", label = h6("First Order Absorption Rate (1/day)"),
# min = 0.1, max = 2, value = 1),
),
column(4,
h6("Target Parameters",style = "color:red",align="center"),
sliderInput("R01", label = h6("Baseline Conc of Target 1 (nmol/L)"),
min = 0.01, max = 10, value = 0.1),
sliderInput("R02", label = h6("Baseline Conc of Target 2 (nmol/L)"),
min = 0.01, max = 10, value = 0.1),
sliderInput("HL1", label = h6("Half-life of Target 1 (min)"),
min = 0.01, max = 100, value = 100),
sliderInput("HL2", label = h6("Half-life of Target 2 (min)"),
min = 0.01, max = 100, value = 100)
)
)
)
server <- function(input, output) {
v <- reactiveValues(doPlot = FALSE)
observeEvent(input$click, {
v$doPlot <- input$click
})
output$PKPlot <- renderPlot({
if (v$doPlot == FALSE) return()
isolate({
reqMinInh <- input$reqMinInh # (%) Min inhibition of Target
nd <- input$nd # Number of doses
tau <- input$tau
endTime <- (nd+1)*tau
BW <- input$BW
MW <- input$MW
nIter <- input$nIter
Kd1Par <- logspace(-1.98,1.698,n = nIter)
Kd2Par <- logspace(-1.98,1.698,n = nIter)
myDose <- matrix(c(0), nrow= length(Kd1Par), ncol = length(Kd2Par))
Kon_m1 <- input$Kon1 # (1/(nmol/L)/day)
Kon_m2 <- input$Kon2 # (1/(nmol/L)/day)
Base1 <- input$R01
Base2 <- input$R02
HL1 <- input$HL1
HL2 <- input$HL2
Kint_m1 <- 0.693*60*24/HL1 # (1/day)
Kint_m2 <- 0.693*60*24/HL2 # (1/day)
Kdeg_m1 <- Kint_m1 # (1/day)
Kdeg_m2 <- Kint_m2 # (1/day)
Ksyn_m1 <- Base1*Kdeg_m1 # (nmol/L/day)
Ksyn_m2 <- Base2*Kdeg_m2 # (nmol/L/day)
Vp <- input$Vp # (L) Ref: Vaishali et al. 2015
Vph <- 3.1 # (L) Ref: Tiwari et al. 2016
Vt <- 0.192 # (L) Spleen, Ref: Davis et al. 1993
k_01 <- 1 # (1/day) Ref: Leonid Gibiansky
CL <- input$CL # (L/day) Ref: Leonid Gibiansky
K_el <- CL/Vp # (1/day)
k_pph <- 0.186 # (1/day) Ref: Tiwari et al. 2016
k_php <- 0.184 # (1/day) Ref: Tiwari et al. 2016
Ktp <- 0.26 # (1/day)
Kpt <- 0.004992 # (1/day)
times <- seq(from = 0, to = endTime, by =0.1)
yInit <- c(Ap = 0.0, Dp = 0.0, Dt = 0.0,
M1 = Base1, M2 = Base2,
DtM1 = 0.0, DtM2 = 0.0, DtM1M2 = 0.0, Dph = 0.0)
derivs_pk1 <- function(t, y, parms) {
with(as.list(c(y,parms)),{
dAp_dt <- -k_01*Ap
dDp_dt <- k_01*Ap/Vp -K_el*Dp +Vt/Vp*Ktp*Dt -Kpt*Dp +Vph/Vp*k_php*Dph -k_pph*Dp
dDt_dt <- Vp/Vt*Kpt*Dp -Ktp*Dt -Kon_m1*Dt*M1 +Koff_m1*DtM1 -Kon_m2*Dt*M2 +Koff_m2*DtM2
dM1_dt <- Ksyn_m1 -Kdeg_m1*M1 -Kon_m1*Dt*M1 +Koff_m1*DtM1 -Kon_m1*DtM2*M1 +Koff_m1*DtM1M2
dM2_dt <- Ksyn_m2 -Kdeg_m2*M2 -Kon_m2*Dt*M2 +Koff_m2*DtM2 -Kon_m2*DtM1*M2 +Koff_m2*DtM1M2
dDtM1_dt <- -Kint_m1*DtM1 -Koff_m1*DtM1 +Kon_m1*Dt*M1 -Kon_m2*DtM1*M2 +Koff_m2*DtM1M2
dDtM2_dt <- -Kint_m2*DtM2 -Koff_m2*DtM2 +Kon_m2*Dt*M2 -Kon_m1*DtM2*M1 +Koff_m1*DtM1M2
dDtM1M2_dt <- Kon_m2*DtM1*M2 -Koff_m2*DtM1M2 +Kon_m1*DtM2*M1 -Koff_m1*DtM1M2 -Kint_m1*DtM1M2 -Kint_m2*DtM1M2
dDph_dt <- Vp/Vph*k_pph*Dp - k_php*Dph
list(c(dAp_dt,dDp_dt,dDt_dt,dM1_dt,dM2_dt,dDtM1_dt,dDtM2_dt,dDtM1M2_dt,dDph_dt))
})
}
ssq <- function(parmsToOptm){
Dose <- parmsToOptm[1]
injectEvents <- data.frame(var = "Ap",
time = seq(0,tau*(nd-1),tau),
value = Dose*1e6*BW/MW, # (nmol)
method = "add")
pars_pk1 <- c()
qss_pk10<-ode(times = times, y = yInit, func =derivs_pk1, parms = pars_pk1,events = list(data = injectEvents))
qss_pk1<- data.frame(qss_pk10)
temp <- qss_pk1[qss_pk1$time>tau*(nd-2)&qss_pk1$time<tau*(nd-1),]
inh1 <- (1-temp$M1/Base1)*100
inh2 <- (1-temp$M2/Base2)*100
if(min(inh1,inh2) %in% inh1) {
currMinInh <- inh1
} else {currMinInh <-inh2}
ssqres = currMinInh - reqMinInh
return(ssqres)
}
for (i in 1:length(Kd1Par)){
for (j in 1:length(Kd2Par)){
Kd1 <- Kd1Par[i]
Kd2 <- Kd2Par[j]
print(c(Kd1 = Kd1Par[i], Kd2 = Kd2Par[j]))
Koff_m1 <- Kon_m1*Kd1 # (1/day)
Koff_m2 = Kon_m2*Kd2 # (1/day)
# Initial guess
parmsToOptm <- c(10)
fitval<-nls.lm(par=parmsToOptm,fn=ssq,control = nls.lm.control(ftol = sqrt(.Machine$double.eps),
ptol = sqrt(.Machine$double.eps), gtol = 0, diag = list(), epsfcn = parmsToOptm[1]/100,
factor = 100, maxfev = integer(), maxiter = 50, nprint = 0))
myDose[i, j] <- c(coef(fitval))
print(c(Dose = myDose[i,j]))
}
}
KdMat <- expand.grid(Kd1Par,Kd2Par)
temp1 <- melt(myDose)
myDoseFormat <- data.frame(Kd1=KdMat$Var1, Kd2 = KdMat$Var2, Dose = temp1$value)
minDose <- myDoseFormat[myDoseFormat$Dose == min(myDoseFormat$Dose),]
Kd1 <- minDose$Kd1
Kd2 <- minDose$Kd2
Koff_m1 <- Kon_m1*Kd1 # (1/day)
Koff_m2 = Kon_m2*Kd2 # (1/day)
Dose <- minDose$Dose
injectEvents <- data.frame(var = "Ap",
time = seq(0,tau*(nd-1),tau),
value = Dose*1e6*BW/MW, # (nmol)
method = "add")
pars_pk1 <- c()
qss_pk10<-ode(times = times, y = yInit, func =derivs_pk1, parms = pars_pk1,events = list(data = injectEvents))
qss_pk1<- data.frame(qss_pk10)
mytheme_grey <- theme_grey(base_size=18)+theme(plot.caption=element_text(size=8, colour="grey60"))
p1 <- ggplot(myDoseFormat, aes(x = Kd1, y = Kd2, z = Dose)) +
geom_raster(aes(fill = Dose), interpolate=T) +
scale_x_log10() + scale_y_log10() +
labs(title = "Contours of dose (mg/kg)", x="Target-1 Kd (nM)",y="Target-2 Kd (nM)") +
guides(fill = guide_colorbar(title = "Dose (mg/kg)")) +
theme(legend.position=c(0.9, 0.75))
p2 <- ggplot(qss_pk1,aes(x=time/7)) +
geom_line(aes(y=Dp)) +
labs(x="Time (weeks)",y="Drug Conc (nmol/L)") +
mytheme_grey
cols <- c("Target 1" ="red", "Target 2" = "blue")
p3 <- ggplot(qss_pk1,aes(x=time/7)) +
geom_line(aes(y=M1, colour = "Target 1"), size = 1.5, linetype = 1) +
geom_line(aes(y=M2, colour = "Target 2"), size = 1.5, linetype = 2) +
labs(x="Time (weeks)",y="Target Conc (nmol/L)") +
scale_colour_manual(name = "Targets", values = cols)+
mytheme_grey
p4 <- ggplot(qss_pk1,aes(x=time/7)) +
geom_line(aes(y= (1-M1/Base1)*100, colour = "Target 1"), size = 1.5, linetype = 1) +
geom_line(aes(y= (1-M2/Base2)*100, colour = "Target 2"), size = 1.5, linetype = 2) +
labs(x="Time (weeks)",y="Target Occupancy (%)") +
scale_colour_manual(name = "Targets", values = cols)+
mytheme_grey
ggarrange(p1,p2,p3,p4,labels=c("A","B","C","D"), ncol=4,nrow=1)
})
})
}
shinyApp(ui = ui, server = server)
Do you need a loop?
# Create a data frame of all combinations
df <- expand.grid(Kd1Par = c(1,2,3), Kd2Par = c(1,2,3))
# Load libraries
library(dplyr)
library(purrr)
# If function is vectorised
df %>%
mutate(Dose = MyFunction(Kd1Par, Kd2Par))
# If function is not vectorised
df %>%
mutate(Dose = map2_dbl(Kd1Par, Kd2Par, MyFunction))
Here, I create all possible combinations of Kd1Par and Kd2Par and then run the dose function, which I called MyFunction.
For example,
# Example dose function
MyFunction <- function(x, y)x + y
would give something like
# Kd1Par Kd2Par Dose
# 1 1 1 2
# 2 2 1 3
# 3 3 1 4
# 4 1 2 3
# 5 2 2 4
# 6 3 2 5
# 7 1 3 4
# 8 2 3 5
# 9 3 3 6
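If you still need the result in the same matrix layout as myDose[i, j] (rows indexed by Kd1Par, columns by Kd2Par), one possible follow-up (a sketch, relying on expand.grid() varying its first column fastest) is:

df <- df %>% mutate(Dose = map2_dbl(Kd1Par, Kd2Par, MyFunction))
# fill column-major: first block of rows is Kd2Par = 1, then Kd2Par = 2, ...
myDose <- matrix(df$Dose,
                 nrow = length(unique(df$Kd1Par)),
                 dimnames = list(unique(df$Kd1Par), unique(df$Kd2Par)))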
Following this blog post, I'm trying to understand LSTM for time series forecasting.
The thing is, the results on the test data are too good. What am I missing?
Also, every time I re-run the fit it seems to get better. Is the net re-using the same weights?
The structure is very simple, the input_shape is [1, 1, 1].
Even with Epochs = 1, it learns the test data all too well.
Here's a reproducible example:
library(keras)
library(ggplot2)
library(dplyr)
Data creation and prep:
# create some fake time series
set.seed(123)
df_timeseries <- data.frame(
ts = 1:2500,
value = arima.sim(list(order = c(1,1,0), ar = 0.7), n = 2500)[-1] # fake data
)
#plot(df_timeseries$value, type = "l")
# first order difference
diff_serie <- diff(df_timeseries$value, differences = 1)
# Lagged data ---
lag_transform <- function(x, k= 1){
lagged = c(rep(NA, k), x[1:(length(x)-k)])
DF = as.data.frame(cbind(lagged, x))
colnames(DF) <- c( paste0('x-', k), 'x')
DF[is.na(DF)] <- 0
return(DF)
}
supervised <- lag_transform(diff_serie, 1) # "supervised" form
# head(supervised, 3)
# x-1 x
# 1 0.0000000 0.1796152
# 2 0.1796152 -0.3470608
# 3 -0.3470608 -1.3107662
# Split Train/Test ---
N = nrow(supervised)
n = round(N *0.8, digits = 0)
train = supervised[1:n, ] # train set # 1999 obs
test = supervised[(n+1):N, ] # test set: 500 obs
# Normalize Data --- !!! used min/max just from the train set
scale_data = function(train, test, feature_range = c(0, 1)) {
x = train
fr_min = feature_range[1]
fr_max = feature_range[2]
std_train = ((x - min(x) ) / (max(x) - min(x) ))
std_test = ((test - min(x) ) / (max(x) - min(x) ))
scaled_train = std_train *(fr_max -fr_min) + fr_min
scaled_test = std_test *(fr_max -fr_min) + fr_min
return( list(scaled_train = as.vector(scaled_train), scaled_test = as.vector(scaled_test) ,scaler= c(min =min(x), max = max(x))) )
}
Scaled = scale_data(train, test, c(-1, 1))
# Split ---
y_train = Scaled$scaled_train[, 2]
x_train = Scaled$scaled_train[, 1]
y_test = Scaled$scaled_test[, 2]
x_test = Scaled$scaled_test[, 1]
# reverse function for scale back to original values
# reverse
invert_scaling = function(scaled, scaler, feature_range = c(0, 1)){
min = scaler[1]
max = scaler[2]
t = length(scaled)
mins = feature_range[1]
maxs = feature_range[2]
inverted_dfs = numeric(t)
for( i in 1:t){
X = (scaled[i]- mins)/(maxs - mins)
rawValues = X *(max - min) + min
inverted_dfs[i] <- rawValues
}
return(inverted_dfs)
}
Model and Fit:
# Model ---
# Reshape
dim(x_train) <- c(length(x_train), 1, 1)
# specify required arguments
X_shape2 = dim(x_train)[2]
X_shape3 = dim(x_train)[3]
batch_size = 1 # must be a common factor of both the train and test samples
units = 30 # can adjust this in the model tuning phase
model <- keras_model_sequential()
model%>% #[1, 1, 1]
layer_lstm(units, batch_input_shape = c(batch_size, X_shape2, X_shape3), stateful= F)%>%
layer_dense(units = 10) %>%
layer_dense(units = 1)
model %>% compile(
loss = 'mean_squared_error',
optimizer = optimizer_adam( lr= 0.02, decay = 1e-6 ),
metrics = c('mean_absolute_percentage_error')
)
# Fit ---
Epochs = 1
for(i in 1:Epochs ){
model %>% fit(x_train, y_train, epochs=1, batch_size=batch_size, verbose=1, shuffle=F)
model %>% reset_states()
}
# Predictions Test data ---
L = length(x_test)
scaler = Scaled$scaler
predictions = numeric(L)
for(i in 1:L){
X = x_test[i]
dim(X) = c(1,1,1) # basically predicting point by point
yhat = model %>% predict(X, batch_size=batch_size)
# invert scaling
yhat = invert_scaling(yhat, scaler, c(-1, 1))
# invert differencing
yhat = yhat + df_timeseries$value[(n+i)] # could the problem be here?
# store
predictions[i] <- yhat
}
Plot for comparison just on the Test data:
Code for the plot and MAPE on Test data:
# Now for the comparison:
df_plot = tibble(
data = 1:nrow(test),
actual = df_timeseries$value[(n+1):N],
predict = predictions
)
df_plot %>%
gather("key", "value", -data) %>%
ggplot(aes(x = data, y = value, color = key)) +
geom_line() +
theme_minimal()
# mape
mape_function <- function(v_actual, v_pred) {
diff <- (v_actual - v_pred)/v_actual
sum(abs(diff))/length(diff)
}
mape_function(df_plot$actual, df_plot$predict)
# [1] 0.00348043 - MAPE on test data
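One way to see why the test MAPE is so small (a property of how the predictions are reconstructed, not of the model): in the prediction loop each reconstructed value is

yhat_t = previous actual value (df_timeseries$value[(n+i)]) + predicted difference

so even a model that always predicted a difference of 0 would collapse to the naive persistence forecast, whose MAPE is roughly mean(|diff_t / value_t|) and is tiny for a strongly trending series like this one. Because every prediction is anchored to a true observed value rather than to the model's own previous prediction, the error never accumulates, and the MAPE mostly measures how smooth the series is rather than how good the LSTM is.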
Update: based on nicola's comment:
By changing the prediction part where I reverse the differencing, the plot does make more sense.
But still, how can I fix this? I need to plot the actual values, not the differences. How can I measure my performance and tell whether the net is overfitting?
predict_diff = numeric(L)
for(i in 1:L){
X = x_test[i]
dim(X) = c(1,1,1) # basically predicting point by point
yhat = model %>% predict(X, batch_size=batch_size)
# invert scaling
yhat = invert_scaling(yhat, scaler, c(-1, 1))
# invert differencing
predict_diff[i] <- yhat
yhat = yhat + df_timeseries$value[(n+i)] # could the problem be here?
# store
#predictions[i] <- yhat
}
df_plot = tibble(
data = 1:nrow(test),
actual = test$x,
predict = predict_diff
)
df_plot %>%
gather("key", "value", -data) %>%
ggplot(aes(x = data, y = value, color = key)) +
geom_line() +
theme_minimal()
Can anyone help me incorporate a step change in an input parameter with respect to time? Please see the code below:
library(ReacTran)
N <- 10 # No of grids
L = 0.10 # thickness, m
l = L/2 # Half of thickness, m
k= 0.412 # thermal conductivity, W/m-K
cp = 3530 # specific heat capacity, J/kg-K
rho = 1100 # density, kg/m3
T_int = 57.2 # Initial temperature , degC
T_air = 19 # air temperature, degC
h_air = 20 # Convective heat transfer coeff of air, W/m2-K
xgrid <- setup.grid.1D(x.up = 0, x.down = l, N = N)
x <- xgrid$x.mid
alpha.coeff <- (k*3600)/(rho*cp)
Diffusion <- function (t, Y, parms){
tran <- tran.1D(C=Y, flux.down = 0, C.up = T_air, a.bl.up = h_air,
D = alpha.coeff, dx = xgrid)
list(dY = tran$dC, flux.up = tran$flux.up,
flux.down = tran$flux.down)
}
# Initial condition
Yini <- rep(T_int, N)
times <- seq(from = 0, to = 2, by = 0.2)
print(system.time(
out <- ode.1D(y = Yini, times = times, func = Diffusion,
parms = NULL, dimens = N)))
plot(times, out[,(N+1)], type = "l", lwd = 2, xlab = "time, hr", ylab = "Temperature")
I want T_air to be constant for the first hour and then change to another value for the remaining hour. This would be a step change in the parameter. How can I do it?
Any help would be appreciated.
Thanks,
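One possible approach (just a sketch; the post-step air temperature of 30 degC is a placeholder, and the switch at t = 1 hour matches the description above): since `times` is in hours, T_air can be computed from t inside the derivative function:

Diffusion <- function (t, Y, parms){
  # step change in the boundary air temperature at t = 1 hr
  T_air_t <- if (t < 1) 19 else 30   # 19 degC for the first hour, placeholder value afterwards
  tran <- tran.1D(C = Y, flux.down = 0, C.up = T_air_t, a.bl.up = h_air,
                  D = alpha.coeff, dx = xgrid)
  list(dY = tran$dC, flux.up = tran$flux.up,
       flux.down = tran$flux.down)
}

Because `times` already contains t = 1 (by = 0.2), the solver evaluates the new boundary value right at the switch; for a time-varying profile with more points, interpolating a table of air temperatures with approxfun() inside Diffusion works the same way.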