R package VLMC dies if state space size exceeds 27

I am using VLMC to fit some Markov models, and it dies as soon as the alphabet size reaches 28.
I thought this was due to the default of using a single letter per alphabet symbol, but the behavior is the same with "code1char = FALSE". This happens for me on real data as well as in the fake example below.
library(VLMC)
# works fine
ins <- sample(seq(1, 27, 1), 50000, replace = TRUE)
vlmc(ins, dump = 1, threshold.gen = 2, debug = TRUE)
# core dump
ins <- sample(seq(1, 28, 1), 50000, replace = TRUE)
vlmc(ins, dump = 1, threshold.gen = 2, debug = TRUE)
Any ideas?
The segfault looks like this, BTW. It looks to me like alphabet symbols after 'z' are being mapped to NA, which causes an out-of-bounds array access.
library(VLMC)
sc <- 10
amp <- 13
x <- round(amp * sin(seq(0, 2 * sc * pi, 0.01)))
x <- amp + x + rpois(NROW(x), 1)
length(table(x))
length(x)
vlmc(x, dump = 1, threshold.gen = 2, debug = TRUE)
vlmc: Alpha = 'abcdefghijklmnopqrstuvwxyzNANANANANA' ; |X| = 31
vlmc: ctl.dump = 4 11
vlmc: n = |data| = 6284, cutoff{prune} = 21.8865, threshold{gen} = 2
vlmc: |alphabet| = 31, alphabet = abcdefghijklmnopqrstuvwxyzNA
generating...
*** caught segfault ***
address 0x0, cause 'memory not mapped'
Traceback:
1: .C("vlmc_p", data = Data, n = n, threshold.gen = as.integer(threshold.gen), cutoff.prune = as.double(cutoff.prune), alpha.len = as.integer(alpha.len), alpha = as.character(Alpha), debug = as.integer(as.logical(debug)), dump.flags = as.integer(c(dump, ctl.dump)), size = integer(4), PACKAGE = "VLMC")
2: vlmc(x, dump = 1, threshold.gen = 2, debug = TRUE)

As the maintainer of VLMC,
I can tell you that one of the longest-standing TODO entries for VLMC has been to raise the currently built-in limit of 26 on the maximal alphabet size.
Of course it is a bug that I don't give an error message in the case of a larger alphabet, but rather pass things on to C and do not check there.
The next version of VLMC will no longer segfault because of this.
However, I'm not yet sure I'll find the time to allow a considerably larger alphabet.
Of course I'd happily accept patches ... it's free open source software.
Best regards,
Martin Maechler, ETH Zurich

Related

How to save GPU memory usage in PyTorch

In PyTorch I wrote a very simple CNN discriminator and trained it. Now I need to deploy it to make predictions, but the target machine has little GPU memory and I got an out-of-memory error. So I thought I could set requires_grad = False to prevent PyTorch from storing the gradient values, but it didn't make any difference.
There are about 5 million parameters in my model, yet predicting on a single batch of input consumes about 1.2 GB of memory. I don't think it should need that much.
The question is: how can I reduce GPU memory usage when I just want to use my model to make predictions?
Here is a demo. I use discriminator.requires_grad_ to disable/enable autograd for all parameters, but it seems to have no effect.
import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as functional
from pynvml.smi import nvidia_smi

nvsmi = nvidia_smi.getInstance()

def getMemoryUsage():
    usage = nvsmi.DeviceQuery("memory.used")["gpu"][0]["fb_memory_usage"]
    return "%d %s" % (usage["used"], usage["unit"])

print("Before GPU Memory: %s" % getMemoryUsage())
class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        # trainable layers
        # input: 2x256x256
        self.conv1 = nn.Conv2d(2, 8, 5, padding=2)     # 8x256x256
        self.pool1 = nn.MaxPool2d(2)                   # 8x128x128
        self.conv2 = nn.Conv2d(8, 32, 5, padding=2)    # 32x128x128
        self.pool2 = nn.MaxPool2d(2)                   # 32x64x64
        self.conv3 = nn.Conv2d(32, 96, 5, padding=2)   # 96x64x64
        self.pool3 = nn.MaxPool2d(4)                   # 96x16x16
        self.conv4 = nn.Conv2d(96, 256, 5, padding=2)  # 256x16x16
        self.pool4 = nn.MaxPool2d(4)                   # 256x4x4
        self.num_flat_features = 4096
        self.fc1 = nn.Linear(4096, 1024)
        self.fc2 = nn.Linear(1024, 256)
        self.fc3 = nn.Linear(256, 1)
        # loss function
        self.loss = nn.MSELoss()
        # other properties
        self.requires_grad = True

    def forward(self, x):
        y = x
        y = self.conv1(y)
        y = self.pool1(y)
        y = functional.relu(y)
        y = self.conv2(y)
        y = self.pool2(y)
        y = functional.relu(y)
        y = self.conv3(y)
        y = self.pool3(y)
        y = functional.relu(y)
        y = self.conv4(y)
        y = self.pool4(y)
        y = functional.relu(y)
        y = y.view((-1, self.num_flat_features))
        y = self.fc1(y)
        y = functional.relu(y)
        y = self.fc2(y)
        y = functional.relu(y)
        y = self.fc3(y)
        y = torch.sigmoid(y)
        return y

    def predict(self, x, score_th=0.5):
        if len(x.shape) == 3:
            singlebatch = True
            x = x.view([1] + list(x.shape))
        else:
            singlebatch = False
        y = self.forward(x)
        label = (y > float(score_th))
        if singlebatch:
            y = y.view(list(y.shape)[1:])
        return label, y

    def requires_grad_(self, requires_grad=True):
        for parameter in self.parameters():
            parameter.requires_grad_(requires_grad)
        self.requires_grad = requires_grad
x = torch.cuda.FloatTensor(np.zeros([2, 256, 256]))
discriminator = Discriminator()
discriminator.to("cuda:0")
# comment/uncomment this line to make difference
discriminator.requires_grad_(False)
discriminator.predict(x)
print("Requires grad", discriminator.requires_grad)
print("After GPU Memory: %s" % getMemoryUsage())
By commenting out the line discriminator.requires_grad_(False), I got this output:
Before GPU Memory: 6350MiB
Requires grad True
After GPU Memory: 7547MiB
While with the line uncommented, I got:
Before GPU Memory: 6350MiB
Requires grad False
After GPU Memory: 7543MiB
You can use pynvml.
This Python tool was made by Nvidia, so you can query the GPU from Python like this:
from pynvml.smi import nvidia_smi
nvsmi = nvidia_smi.getInstance()
nvsmi.DeviceQuery('memory.free, memory.total')
You can always also execute:
torch.cuda.empty_cache()
to empty the cache, and you will find even more free memory that way.
Before calling torch.cuda.empty_cache(), if you have objects you don't use anymore, you can release them with:
obj = None
and after that call:
gc.collect()
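Putting those pieces together, a minimal clean-up sketch could look like this (big_tensor is just a hypothetical placeholder for whatever objects you no longer need):
import gc
import torch

# Drop the Python reference to an object you no longer need (hypothetical example)
big_tensor = torch.zeros(1024, 1024, device="cuda:0")
big_tensor = None

# Let Python's garbage collector reclaim the now-unreferenced objects
gc.collect()

# Release cached blocks back to the GPU so other processes can use them
torch.cuda.empty_cache()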
Try to use model.eval() with torch.no_grad() on your target machine when making predictions. model.eval() will switch the model's layers to eval mode, and torch.no_grad() will deactivate the autograd engine, which reduces memory usage.
x = torch.cuda.FloatTensor(np.zeros([2, 256, 256]))
discriminator = Discriminator()
discriminator.to("cuda:0")
discriminator.eval()
with torch.no_grad():
    discriminator.predict(x)
I guess it's not relevant anymore for your specific problem, but you could take a look at TorchScript.
It's a good way to decrease the size and complexity of your model, and it also speeds up prediction. Unfortunately it can't help with the training itself. It is just generally a good idea for deploying PyTorch models on other hardware or embedding them in C++ code for efficiency.
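As a rough illustration, here is a minimal sketch of tracing the discriminator defined above and reloading it on the target machine; the file name discriminator_ts.pt and the example input are assumptions, not part of the original question:
import torch

# Trace forward() with a representative input (tracing covers forward, not the predict() helper)
discriminator.eval()
example = torch.zeros(1, 2, 256, 256, device="cuda:0")
with torch.no_grad():
    traced = torch.jit.trace(discriminator, example)

# Save a standalone TorchScript module that can be loaded without the Python class definition
traced.save("discriminator_ts.pt")

# On the target machine:
loaded = torch.jit.load("discriminator_ts.pt", map_location="cuda:0")
with torch.no_grad():
    scores = loaded(example)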
Cheers. :-)

Parallel computing on two R servers using batchtools/BatchJobs

I'm trying to use batchtools/BatchJobs for parallel computing on two Unix-based R servers. I'm completely new to this, so I followed a few articles and the package documentation. I have added some links below:
batchtools,
BatchJobs
So far I have not really understood how to use batchtools across multiple machines. With BatchJobs, on the other hand, I have made better progress.
I first made an SSH connection from the terminal and then executed the following lines:
reg = makeRegistry("TestExp")
reg$cluster.functions = makeClusterFunctionsSSH(worker = makeSSHWorker(nodename="sla19438")) #By BatchJobs
#Test Function
piApprox = function(n) {
  nums = matrix(runif(2 * n), ncol = 2)
  d = sqrt(nums[, 1]^2 + nums[, 2]^2)
  4 * mean(d <= 1)
}
set.seed(42)
piApprox(1000)
BatchJobs::batchMap(reg = reg, fun = piApprox, n = rep(1e7, 10))
getJobTable()
BatchJobs::submitJobs(reg = reg, resources = list(walltime = 3600, memory = 1024))
getStatus(reg = reg)
loadResult(reg = reg, id = 5)
mean(sapply(1:10, loadResult, reg = reg))
It works and gives me the results, but when I run "top" in the terminal I can't see any indication of the jobs being run on the other machine (sla19438).
Please help me understand what I'm doing wrong. Maybe some configuration is needed, but I don't see any material online that breaks the steps down for a newbie like me.
Thanks

SCILAB ode: How to solve 2nd order ODE

I am coming from ode45 in MATLAB and am trying to learn ode in Scilab. I ran into an error message I am not sure how to address.
function der = f(t,x)
    wn3 = 2800 * %pi/30;    //rad/s
    m = 868.1/32.174;       //slugs
    k = m*wn3^2;            //lbf/ft
    w = 4100 * %pi/30;      //rad/s
    re_me = 4.09/32.174/12; //slug-ft
    F0 = w^2*re_me;         //lbf
    der(1) = x(2);
    der(2) = -k*x(1) + F0*sin(w*t);
endfunction
x0 = [0; 0];
t = 0:0.1:5;
t0 = t(1);
x = ode(x0,t0,t,f);
plot(t,x(1,:));
I get this error message that I don't understand:
lsoda-- at t (=r1), mxstep (=i1) steps
needed before reaching tout
where i1 is : 500
where r1 is : 0.1027287737654D+01
Excessive work done on this call (perhaps wrong jacobian type).
at line 35 of executed file C:\Users\ndomenico\Documents\Scilab\high_frequency_vibrator_amplitude_3d.sce
ode: lsoda exit with state -1.
Thank you!
Your ODE is particularly stiff (k = 2319733). To me, it makes no sense to use such a large final time. The time step you took (0.1) is also very large w.r.t. the driving frequency. If you replace the line
t = 0:0.1:5
by
t = linspace(0,0.1,1001)
i.e. request approximations of your solution for t in [0,0.1] with 1000 time steps, you will get the following output:

Tensorflow: 6 layer CNN: OOM (uses 10 GB of GPU memory)

I am using the following code to run a 6-layer CNN with 2 FC layers on top (on a Tesla K80 GPU).
Somehow, it consumes the entire 10 GB of GPU memory and dies with an out-of-memory error. I know that I can reduce the batch_size and then run, but I also want to run with 15 or 20 CNN layers. What's wrong with the following code, and why does it take all the memory? How should I run the code for a 15-layer CNN?
Code:
import os
import tensorflow as tf
import model

with tf.Graph().as_default() as g_train:
    filenames = tf.train.match_filenames_once(FLAGS.train_dir + '*.tfrecords')
    filename_queue = tf.train.string_input_producer(filenames, shuffle=True, num_epochs=FLAGS.num_epochs)
    feats, labels = get_batch_input(filename_queue, batch_size=FLAGS.batch_size)
    ### feats size=(batch_size, 100, 50)
    logits = model.inference(feats, FLAGS.batch_size)
    loss = model.loss(logits, labels, feats)
    tvars = tf.trainable_variables()
    global_step = tf.Variable(0, name='global_step', trainable=False)
    # Add to the Graph operations that train the model.
    train_op = model.training(loss, tvars, global_step, FLAGS.learning_rate, FLAGS.clip_gradients)
    # Add the Op to compare the logits to the labels during evaluation.
    eval_correct = model.evaluation(logits, labels, feats)
    summary_op = tf.merge_all_summaries()
    saver = tf.train.Saver(tf.all_variables(), max_to_keep=15)
    # The op for initializing the variables.
    init_op = tf.initialize_all_variables()
    sess = tf.Session()
    sess.run(init_op)
    summary_writer = tf.train.SummaryWriter(FLAGS.model_dir, graph=sess.graph)
    # Start input enqueue threads.
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        step = 0
        while not coord.should_stop():
            _, loss_value = sess.run([train_op, loss])
            if step % 100 == 0:
                print('Step %d: loss = %.2f' % (step, loss_value))
                # Update the events file.
                summary_str = sess.run(summary_op)
                summary_writer.add_summary(summary_str, step)
            if (step == 0) or (step + 1) % 1000 == 0 or (step + 1) == FLAGS.max_steps:
                ckpt_model = os.path.join(FLAGS.model_dir, 'model.ckpt')
                saver.save(sess, ckpt_model, global_step=step)
                # saver.save(sess, FLAGS.model_dir, global_step=step)
            step += 1
    except tf.errors.OutOfRangeError:
        print('Done training for %d epochs, %d steps.' % (FLAGS.num_epochs, step))
    finally:
        coord.join(threads)
    sess.close()
###################### File model.py ####################

def conv2d(x, W, b, strides=1):
    # Conv2D wrapper, with bias and relu activation
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)

def maxpool2d(x, k=2, s=2):
    # MaxPool2D wrapper
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, s, s, 1], padding='SAME')

def inference(feats, batch_size):
    # feats size (batch_size,100,50,1), batch_size=256
    conv1_w = tf.get_variable("conv1_w", [filter_size, filter_size, 1, 256],
                              initializer=tf.uniform_unit_scaling_initializer())
    conv1_b = tf.get_variable("conv1_b", [256])
    conv1 = conv2d(feats, conv1_w, conv1_b, 2)
    conv1 = maxpool2d(conv1, k=2, s=2)
    ### This was replicated for 6 layers and the 2 FC connected layers are added
    return logits

def training(loss, train_vars, global_step, learning_rate, clip_gradients):
    # Add a scalar summary for the snapshot loss.
    tf.scalar_summary(loss.op.name, loss)
    grads, _ = tf.clip_by_global_norm(tf.gradients(loss, train_vars, aggregation_method=1), clip_gradients)
    optimizer = tf.train.AdamOptimizer(learning_rate)
    train_op = optimizer.apply_gradients(zip(grads, train_vars), global_step=global_step)
    return train_op
I am not too sure what the model Python module is. If it is something you wrote and you can change the setting in the optimizer, I would suggest the following, which I use in my own code:
train_step = tf.train.AdamOptimizer(learning_rate).minimize(cost, aggregation_method = tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N)
By default the aggregation_method is ADD_N, but if you change it to EXPERIMENTAL_ACCUMULATE_N or EXPERIMENTAL_TREE this will greatly save memory. The main memory hog in these programs is that TensorFlow must save the output values at every neuron so that it can compute the gradients. Changing the aggregation_method helps a lot in my experience.
Also, BTW, I don't think there is anything wrong with your code. I can run out of memory on small conv-nets as well.
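If you want to keep the gradient clipping from your training() helper while changing the aggregation method, a minimal sketch (assuming the same TF 0.x/1.x API as in the question) could look like this:
def training(loss, train_vars, global_step, learning_rate, clip_gradients):
    # Compute gradients with a memory-friendlier aggregation method, then clip and apply as before
    grads = tf.gradients(loss, train_vars,
                         aggregation_method=tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N)
    grads, _ = tf.clip_by_global_norm(grads, clip_gradients)
    optimizer = tf.train.AdamOptimizer(learning_rate)
    return optimizer.apply_gradients(zip(grads, train_vars), global_step=global_step)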

Torch nn: current error is always nan

I've written the following code:
require 'nn'
require 'cunn'
file = torch.DiskFile('train200.data', 'r')
size = file:readInt()
inputSize = file:readInt()
outputSize = file:readInt()
dataset = {}
function dataset:size() return size end;
for i=1,dataset:size() do
local input = torch.Tensor(inputSize)
for j=1,inputSize do
input[j] = file:readFloat()
end
local output = torch.Tensor(outputSize)
for j=1,outputSize do
output[j] = file:readFloat()
end
dataset[i] = {input:cuda(), output:cuda()}
end
net = nn.Sequential()
hiddenSize = inputSize * 2
net:add(nn.Linear(inputSize, hiddenSize))
net:add(nn.Tanh())
net:add(nn.Linear(hiddenSize, hiddenSize))
net:add(nn.Tanh())
net:add(nn.Linear(hiddenSize, outputSize))
criterion = nn.MSECriterion()
net = net:cuda()
criterion = criterion:cuda()
trainer = nn.StochasticGradient(net, criterion)
trainer.learningRate = 0.02
trainer.maxIteration = 100
trainer:train(dataset)
And it should work (at least I think so), and it does work correctly when inputSize = 20. But when inputSize = 200, the current error is always nan. At first I thought the file-reading part was incorrect. I've rechecked it several times and it works fine. I also found that a learning rate that is too small or too big can sometimes affect this. I've tried learning rates from 0.00001 up to 0.8, but the result is still the same. What am I doing wrong?
Thanks,
Igor

Resources