TRT inference using onnx - Error Code 1: Cuda Driver (invalid resource handle)

Currently I'm trying to convert a given ONNX file to a TensorRT file and run inference on the generated TensorRT engine.
To do so, I used the TensorRT Python binding API, but
"Error Code 1: Cuda Driver (invalid resource handle)" occurs, and there is no helpful description of it.
Can anyone help me get past this?
Thanks in advance; below is my code snippet.
def trt_export(self):
    fp_16_mode = True
    ## Obviously, I provided appropriate file names
    trt_file_name = "PATH_TO_TRT_FILE"
    onnx_name = "PATH_TO_ONNX_FILE"

    TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)
    EXPLICIT_BATCH = 1 << (int)(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(EXPLICIT_BATCH)
    parser = trt.OnnxParser(network, TRT_LOGGER)

    config = builder.create_builder_config()
    config.max_workspace_size = (1 << 30)
    config.set_flag(trt.BuilderFlag.FP16)
    config.default_device_type = trt.DeviceType.GPU

    profile = builder.create_optimization_profile()
    profile.set_shape('input', (1, 3, IMG_SIZE, IMG_SIZE), (12, 3, IMG_SIZE, IMG_SIZE), (32, 3, IMG_SIZE, IMG_SIZE))  # arbitrary numbers for min / opt / max batch
    config.add_optimization_profile(profile)

    with open(onnx_name, 'rb') as model:
        if not parser.parse(model.read()):
            for error in range(parser.num_errors):
                print(parser.get_error(error))

    engine = builder.build_engine(network, config)
    buf = engine.serialize()
    with open(trt_file_name, 'wb') as f:
        f.write(buf)
def validate_trt_result(self, input_path):
    TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)
    trt_file_name = "PATH_TO_TRT_FILE"
    trt_runtime = trt.Runtime(TRT_LOGGER)

    with open(trt_file_name, 'rb') as f:
        engine_data = f.read()
    engine = trt_runtime.deserialize_cuda_engine(engine_data)

    cuda.init()
    device = cuda.Device(0)
    ctx = device.make_context()

    inputs, outputs, bindings = [], [], []
    context = engine.create_execution_context()
    stream = cuda.Stream()

    index = 0
    for binding in engine:
        size = trt.volume(engine.get_binding_shape(binding)) * -1  # assuming one batch
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        host_mem = cuda.pagelocked_empty(size, dtype)
        device_mem = cuda.mem_alloc(host_mem.nbytes)
        bindings.append(int(device_mem))
        if engine.binding_is_input(binding):
            inputs.append(HostDeviceMem(host_mem, device_mem))
            context.set_binding_shape(index, [1, 3, IMG_SIZE, IMG_SIZE])
        else:
            outputs.append(HostDeviceMem(host_mem, device_mem))
        index += 1
    print(context.all_binding_shapes_specified)

    input_img = cv2.imread(input_path)
    input_r = cv2.resize(input_img, dsize=(256, 256))
    input_p = np.transpose(input_r, (2, 0, 1))
    input_e = np.expand_dims(input_p, axis=0)
    input_f = input_e.astype(np.float32)
    input_f /= 255

    numpy_array_input = [input_f]
    hosts = [input.host for input in inputs]
    trt_types = [trt.int32]
    for numpy_array, host, trt_type in zip(numpy_array_input, hosts, trt_types):
        numpy_array = np.asarray(numpy_array).astype(trt.nptype(trt_type)).ravel()
        print(numpy_array.shape)
        np.copyto(host, numpy_array)

    [cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]

    #### ERROR HAPPENS HERE ####
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    #### ERROR HAPPENS HERE ####

    [cuda.memcpy_dtoh_async(out.host, out.device, stream) for out in outputs]
    stream.synchronize()

    print("TRT model inference result:")
    output = outputs[0].host
    for one in output:
        print(one)
    ctx.pop()

It looks like a ctx.push() call is missing before the line with memcpy_htod_async.
Such an error can happen if TensorFlow / PyTorch is also using CUDA in parallel with TensorRT.
See the related question/answer: https://stackoverflow.com/a/73996477/5655977
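For reference, here is a minimal sketch of where that push/pop pair could go, reusing the ctx, inputs, outputs, bindings, stream and context objects created in validate_trt_result above. This is only an illustration of the suggestion, not a tested fix:

# Sketch only: make the PyCUDA context current on this thread before any
# device work, and pop it again afterwards. All names come from
# validate_trt_result above.
ctx.push()
try:
    [cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    [cuda.memcpy_dtoh_async(out.host, out.device, stream) for out in outputs]
    stream.synchronize()
finally:
    ctx.pop()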

Related

TypeError: Caught TypeError in DataLoader worker process 0. TypeError: 'KeyError' object is not iterable

from torchvision_starter.engine import train_one_epoch, evaluate
from torchvision_starter import utils
import multiprocessing
import time

n_cpu = multiprocessing.cpu_count()

device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
_ = model.to(device)

params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(model.parameters(), lr=0.00001)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                               step_size=3,
                                               gamma=0.2,
                                               verbose=True)

# Let's train for 10 epochs
num_epochs = 1

start = time.time()
for epoch in range(10, 10 + num_epochs):
    # train for one epoch, printing every 10 iterations
    train_one_epoch(model, optimizer, data_loaders['train'], device, epoch, print_freq=10)
    # update the learning rate
    lr_scheduler.step()
    # evaluate on the validation dataset
    evaluate(model, data_loaders['valid'], device=device)
stop = time.time()

print(f"\n\n{num_epochs} epochs in {stop - start} s ({(stop-start) / 3600:.2f} hrs)")
Before I move on to this part, everything is OK. But after I run this part, I get the TypeError shown in the title ('KeyError' object is not iterable).
I have tried adding drop_last to the DataLoader in helper.py, like this:
data_loaders["train"] = torch.utils.data.DataLoader(
train_data,
batch_size=batch_size,
sampler=train_sampler,
num_workers=num_workers,
collate_fn=utils.collate_fn,
drop_last=True
)
But it doesn't work. By the way, torch and torchvision are compatible versions and CUDA is available.
I wonder how to fix it.
The get_data_loaders function:
def get_data_loaders(
    folder, batch_size: int = 2, valid_size: float = 0.2, num_workers: int = -1, limit: int = -1, thinning: int = None
):
    """
    Create and return the train, validation and test data loaders.

    :param folder: folder containing the dataset
    :param batch_size: size of the mini-batches
    :param valid_size: fraction of the dataset to use for validation. For example 0.2
                       means that 20% of the dataset will be used for validation
    :param num_workers: number of workers to use in the data loaders. Use -1 to mean
                        "use all my cores"
    :param limit: maximum number of data points to consider
    :param thinning: take every n-th frame, instead of all frames
    :return: a dictionary with 3 keys: 'train', 'valid' and 'test', containing respectively the
             train, validation and test data loaders
    """
    if num_workers == -1:
        # Use all cores
        num_workers = multiprocessing.cpu_count()

    # We will fill this up later
    data_loaders = {"train": None, "valid": None, "test": None}

    # Create 3 sets of data transforms: one for the training dataset,
    # containing data augmentation, one for the validation dataset
    # (without data augmentation) and one for the test set (again
    # without augmentation)
    data_transforms = {
        "train": get_transform(UdacitySelfDrivingDataset.mean, UdacitySelfDrivingDataset.std, train=True),
        "valid": get_transform(UdacitySelfDrivingDataset.mean, UdacitySelfDrivingDataset.std, train=False),
        "test": get_transform(UdacitySelfDrivingDataset.mean, UdacitySelfDrivingDataset.std, train=False),
    }

    # Create train and validation datasets
    train_data = UdacitySelfDrivingDataset(
        folder,
        transform=data_transforms["train"],
        train=True,
        thinning=thinning
    )

    # The validation dataset is a split from the training dataset, so we read
    # from the same folder, but we apply the transforms for validation
    valid_data = UdacitySelfDrivingDataset(
        folder,
        transform=data_transforms["valid"],
        train=True,
        thinning=thinning
    )

    # Obtain training indices that will be used for validation
    n_tot = len(train_data)
    indices = torch.randperm(n_tot)

    # If requested, limit the number of data points to consider
    if limit > 0:
        indices = indices[:limit]
        n_tot = limit

    split = int(math.ceil(valid_size * n_tot))
    train_idx, valid_idx = indices[split:], indices[:split]

    # Define samplers for obtaining training and validation batches
    train_sampler = torch.utils.data.SubsetRandomSampler(train_idx)
    valid_sampler = torch.utils.data.SubsetRandomSampler(valid_idx)

    # Prepare data loaders
    data_loaders["train"] = torch.utils.data.DataLoader(
        train_data,
        batch_size=batch_size,
        sampler=train_sampler,
        num_workers=num_workers,
        collate_fn=utils.collate_fn,
        drop_last=True
    )
    data_loaders["valid"] = torch.utils.data.DataLoader(
        valid_data,
        batch_size=batch_size,
        sampler=valid_sampler,
        num_workers=num_workers,
        collate_fn=utils.collate_fn,
        drop_last=True
    )

    # Now create the test data loader
    test_data = UdacitySelfDrivingDataset(
        folder,
        transform=data_transforms["test"],
        train=False,
        thinning=thinning
    )

    if limit > 0:
        indices = torch.arange(limit)
        test_sampler = torch.utils.data.SubsetRandomSampler(indices)
    else:
        test_sampler = None

    data_loaders["test"] = torch.utils.data.DataLoader(
        test_data,
        batch_size=batch_size,
        shuffle=False,
        num_workers=num_workers,
        sampler=test_sampler,
        collate_fn=utils.collate_fn,
        drop_last=True
    )

    return data_loaders
class UdacitySelfDrivingDataset(torch.utils.data.Dataset):

    # Mean and std of the dataset to be used in nn.Normalize
    mean = torch.tensor([0.3680, 0.3788, 0.3892])
    std = torch.tensor([0.2902, 0.3069, 0.3242])

    def __init__(self, root, transform, train=True, thinning=None):
        super().__init__()

        self.root = os.path.abspath(os.path.expandvars(os.path.expanduser(root)))
        self.transform = transform

        # Load datasets
        if train:
            self.df = pd.read_csv(os.path.join(self.root, "labels_train.csv"))
        else:
            self.df = pd.read_csv(os.path.join(self.root, "labels_test.csv"))

        # Index by file id (i.e., a sequence of the same length as the number of images)
        codes, uniques = pd.factorize(self.df['frame'])

        if thinning:
            # Take every n-th row. This makes sense because the images are
            # frames of videos from the car, so we are essentially reducing
            # the frame rate
            thinned = uniques[::thinning]
            idx = self.df['frame'].isin(thinned)
            print(f"Keeping {thinned.shape[0]} of {uniques.shape[0]} images")
            print(f"Keeping {idx.sum()} objects out of {self.df.shape[0]}")
            self.df = self.df[idx].reset_index(drop=True)
            # Recompute codes
            codes, uniques = pd.factorize(self.df['frame'])

        self.n_images = len(uniques)
        self.df['image_id'] = codes
        self.df.set_index("image_id", inplace=True)

        self.classes = ['car', 'truck', 'pedestrian', 'bicyclist', 'light']
        self.colors = ['cyan', 'blue', 'red', 'purple', 'orange']

    @property
    def n_classes(self):
        return len(self.classes)

    def __getitem__(self, idx):
        if idx in self.df.index:
            row = self.df.loc[[idx]]
        else:
            return KeyError(f"Element {idx} not in dataframe")

        # Load image from file
        img_path = os.path.join(self.root, "images", row['frame'].iloc[0])
        img = Image.open(img_path).convert("RGB")

        # Exclude bogus boxes with 0 height or width
        h = row['ymax'] - row['ymin']
        w = row['xmax'] - row['xmin']
        filter_idx = (h > 0) & (w > 0)
        row = row[filter_idx]

        # Get bounding box coordinates for each mask
        boxes = row[['xmin', 'ymin', 'xmax', 'ymax']].values

        # Convert everything into a torch.Tensor
        boxes = torch.as_tensor(boxes, dtype=torch.float32)

        # Get the labels
        labels = torch.as_tensor(row['class_id'].values, dtype=int)

        image_id = torch.tensor([idx])
        area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])

        # Assume no crowd for everything
        iscrowd = torch.zeros((row.shape[0],), dtype=torch.int64)

        target = {}
        target["boxes"] = boxes
        target["labels"] = labels
        target["image_id"] = image_id
        target["area"] = area
        target["iscrowd"] = iscrowd

        if self.transform is not None:
            img, target = self.transform(img, target)

        return img, target

    def __len__(self):
        return self.n_images

    def plot(self, idx, renormalize=True, predictions=None, threshold=0.5, ax=None):
        image, label_js = self[idx]

        if renormalize:
            # Invert the T.Normalize transform
            unnormalize = T.Compose(
                [
                    T.Normalize(mean=[0., 0., 0.], std=1 / type(self).std),
                    T.Normalize(mean=-type(self).mean, std=[1., 1., 1.])
                ]
            )
            image, label_js = unnormalize(image, label_js)

        if ax is None:
            fig, ax = plt.subplots(figsize=(8, 8))

        _ = ax.imshow(torch.permute(image, [1, 2, 0]))

        for i, box in enumerate(label_js['boxes']):
            xy = (box[0], box[1])
            h, w = (box[2] - box[0]), (box[3] - box[1])
            r = patches.Rectangle(xy, h, w, fill=False, color=self.colors[label_js['labels'][i] - 1], lw=2, alpha=0.5)
            ax.add_patch(r)

        if predictions is not None:
            # Make sure the predictions are on the CPU
            for k in predictions:
                predictions[k] = predictions[k].detach().cpu().numpy()
            for i, box in enumerate(predictions['boxes']):
                if predictions['scores'][i] > threshold:
                    xy = (box[0], box[1])
                    h, w = (box[2] - box[0]), (box[3] - box[1])
                    r = patches.Rectangle(xy, h, w, fill=False, color=self.colors[predictions['labels'][i] - 1], lw=2, linestyle=':')
                    ax.add_patch(r)

        _ = ax.axis("off")
        return ax

Issue with TFX Trainer component not outputting model to filesystem

First of all, I am using TFX version 0.21.2 and TensorFlow version 2.1.
I have constructed a pipeline largely following the Chicago taxi example. When the Trainer component is executed, I can see the following in the logs:
INFO - Training complete. Model written to /root/airflow/tfx/pipelines/fish/Trainer/model/9/serving_model_dir
When I check the above directory, it is empty. What am I missing?
This is my DAG definition file (import statements omitted):
_pipeline_name = 'fish'
_airflow_config = AirflowPipelineConfig(airflow_dag_config={
    'schedule_interval': None,
    'start_date': datetime.datetime(2019, 1, 1),
})
_project_root = os.path.join(os.environ['HOME'], 'airflow')
_data_root = os.path.join(_project_root, 'data', 'fish_data')
_module_file = os.path.join(_project_root, 'dags', 'fishUtils.py')
_serving_model_dir = os.path.join(_project_root, 'serving_model', _pipeline_name)
_tfx_root = os.path.join(_project_root, 'tfx')
_pipeline_root = os.path.join(_tfx_root, 'pipelines', _pipeline_name)
_metadata_path = os.path.join(_tfx_root, 'metadata', _pipeline_name,
                              'metadata.db')

def _create_pipeline(pipeline_name: Text, pipeline_root: Text, data_root: Text,
                     module_file: Text, serving_model_dir: Text,
                     metadata_path: Text,
                     direct_num_workers: int) -> pipeline.Pipeline:

    examples = external_input(data_root)
    example_gen = CsvExampleGen(input=examples)

    statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])

    infer_schema = SchemaGen(
        statistics=statistics_gen.outputs['statistics'],
        infer_feature_shape=False)

    validate_stats = ExampleValidator(
        statistics=statistics_gen.outputs['statistics'],
        schema=infer_schema.outputs['schema'])

    trainer = Trainer(
        examples=example_gen.outputs['examples'], schema=infer_schema.outputs['schema'],
        module_file=_module_file, train_args=trainer_pb2.TrainArgs(num_steps=10000),
        eval_args=trainer_pb2.EvalArgs(num_steps=5000))

    model_validator = ModelValidator(
        examples=example_gen.outputs['examples'],
        model=trainer.outputs['model'])

    pusher = Pusher(
        model=trainer.outputs['model'],
        model_blessing=model_validator.outputs['blessing'],
        push_destination=pusher_pb2.PushDestination(
            filesystem=pusher_pb2.PushDestination.Filesystem(
                base_directory=_serving_model_dir)))

    return pipeline.Pipeline(
        pipeline_name=_pipeline_name,
        pipeline_root=_pipeline_root,
        components=[
            example_gen,
            statistics_gen,
            infer_schema,
            validate_stats,
            trainer,
            model_validator,
            pusher],
        enable_cache=True,
        metadata_connection_config=metadata.sqlite_metadata_connection_config(
            metadata_path),
        beam_pipeline_args=['--direct_num_workers=%d' % direct_num_workers]
    )

runner = AirflowDagRunner(config=_airflow_config)
DAG = runner.run(
    _create_pipeline(
        pipeline_name=_pipeline_name,
        pipeline_root=_pipeline_root,
        data_root=_data_root,
        module_file=_module_file,
        serving_model_dir=_serving_model_dir,
        metadata_path=_metadata_path,
        # 0 means auto-detect based on the number of CPUs available during
        # execution time.
        direct_num_workers=0))
And this is my module file:
_DENSE_FLOAT_FEATURE_KEYS = ['length']
real_valued_columns = [tf.feature_column.numeric_column('length')]

def _eval_input_receiver_fn():
    serialized_tf_example = tf.compat.v1.placeholder(
        dtype=tf.string, shape=[None], name='input_example_tensor')
    features = tf.io.parse_example(
        serialized=serialized_tf_example,
        features={
            'length': tf.io.FixedLenFeature([], tf.float32),
            'label': tf.io.FixedLenFeature([], tf.int64),
        })
    receiver_tensors = {'examples': serialized_tf_example}
    return tfma.export.EvalInputReceiver(
        features={'length': features['length']},
        receiver_tensors=receiver_tensors,
        labels=features['label'],
    )

def parser(serialized_example):
    features = tf.io.parse_single_example(
        serialized_example,
        features={
            'length': tf.io.FixedLenFeature([], tf.float32),
            'label': tf.io.FixedLenFeature([], tf.int64),
        })
    return ({'length': features['length']}, features['label'])

def _input_fn(filenames):
    # TFRecordDataset doesn't directly accept paths with wildcards
    filenames = tf.data.Dataset.list_files(filenames)
    dataset = tf.data.TFRecordDataset(filenames, 'GZIP')
    dataset = dataset.map(parser)
    dataset = dataset.shuffle(2000)
    dataset = dataset.batch(40)
    dataset = dataset.repeat(10)
    return dataset

def trainer_fn(trainer_fn_args, schema):
    estimator = tf.estimator.LinearClassifier(feature_columns=real_valued_columns)

    train_input_fn = lambda: _input_fn(trainer_fn_args.train_files)
    train_spec = tf.estimator.TrainSpec(
        train_input_fn,
        max_steps=trainer_fn_args.train_steps)

    eval_input_fn = lambda: _input_fn(trainer_fn_args.eval_files)
    eval_spec = tf.estimator.EvalSpec(
        eval_input_fn,
        steps=trainer_fn_args.eval_steps,
        name='fish-eval')

    receiver_fn = lambda: _eval_input_receiver_fn()

    return {
        'estimator': estimator,
        'train_spec': train_spec,
        'eval_spec': eval_spec,
        'eval_input_receiver_fn': receiver_fn
    }
Thank you in advance for your help!
Posting the solution for anyone who is facing the same problem that I faced.
The reason the model was not written to the filesystem was that the estimator needs a config argument to know where to write the model.
The following modification to the trainer_fn function should solve the problem.
run_config = tf.estimator.RunConfig(save_checkpoints_steps=999,
                                    keep_checkpoint_max=1)
run_config = run_config.replace(model_dir=trainer_fn_args.serving_model_dir)

estimator = tf.estimator.LinearClassifier(feature_columns=real_valued_columns,
                                          config=run_config)
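For context, here is a sketch of how the modified trainer_fn might look once that change is folded in. Apart from the run_config lines, it simply restates the module file above:

def trainer_fn(trainer_fn_args, schema):
    # Tell the estimator where to write checkpoints and the saved model
    run_config = tf.estimator.RunConfig(save_checkpoints_steps=999,
                                        keep_checkpoint_max=1)
    run_config = run_config.replace(model_dir=trainer_fn_args.serving_model_dir)

    estimator = tf.estimator.LinearClassifier(feature_columns=real_valued_columns,
                                              config=run_config)

    train_input_fn = lambda: _input_fn(trainer_fn_args.train_files)
    train_spec = tf.estimator.TrainSpec(train_input_fn,
                                        max_steps=trainer_fn_args.train_steps)

    eval_input_fn = lambda: _input_fn(trainer_fn_args.eval_files)
    eval_spec = tf.estimator.EvalSpec(eval_input_fn,
                                      steps=trainer_fn_args.eval_steps,
                                      name='fish-eval')

    receiver_fn = lambda: _eval_input_receiver_fn()

    return {
        'estimator': estimator,
        'train_spec': train_spec,
        'eval_spec': eval_spec,
        'eval_input_receiver_fn': receiver_fn
    }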

LLVMLITE Hello World Example Produces Wrong Output

The problem I am facing with llvmlite is producing a simple hello-world example.
I am unable to print the value of the string global variable from the function I created.
It appears to always print out the number one.
I have already tried to return the stringtype, which produced an error.
from llvmlite import ir

i64 = ir.IntType(64)
i8 = ir.IntType(16)

hellostr = 'hello, world!'
stringtype = ir.ArrayType(i64, len(hellostr))

module = ir.Module(name="m_hello_example")

hello = ir.GlobalVariable(module, stringtype, '.str4')

fn_int_to_int_type = ir.FunctionType(i64, [stringtype.as_pointer()])
fn_hel = ir.Function(module, fn_int_to_int_type, name="fn_hel")
fn_hel_block = fn_hel.append_basic_block(name="fn_hel_entry")

builder = ir.IRBuilder(fn_hel_block)

# zero = builder.constant(i64, 0)
# const_1 = ir.Constant(stringtype, 1)
# builder.ret(const_1)
const_1 = ir.Constant(i64, 1)
# print(const_1)
builder.ret(const_1)

print(module)
I was expecting the output to print out the string 'hello, world!'.
Any help would be much appreciated.
Thanks.
It ended up that I was able to solve my problem with the following code:
import llvmlite.ir as ir
import llvmlite.binding as llvm
from ctypes import CFUNCTYPE

def main():
    m = ir.Module()

    func_ty = ir.FunctionType(ir.VoidType(), [])               # define the printer function as type void
    func = ir.Function(m, func_ty, name="printer")             # define the function named "printer"
    builder = ir.IRBuilder(func.append_basic_block('entry'))   # define the entry point of the function printer

    # printf format string; the next global_fmt statements create the "fstr" global
    fmt = "%s\n\0"
    c_fmt = ir.Constant(ir.ArrayType(ir.IntType(8), len(fmt)),
                        bytearray(fmt.encode("utf8")))
    global_fmt = ir.GlobalVariable(m, c_fmt.type, name="fstr")
    global_fmt.linkage = 'internal'
    global_fmt.global_constant = True
    global_fmt.initializer = c_fmt

    arg = "Hello, World!\0"  # arg will be passed into the printf function
    c_str_val = ir.Constant(ir.ArrayType(ir.IntType(8), len(arg)),
                            bytearray(arg.encode("utf8")))  # create c_str_val as a constant

    # declaration of the printf function begins here; var_arg=True allows passing arguments
    printf_ty = ir.FunctionType(ir.IntType(32), [], var_arg=True)
    printf = ir.Function(m, printf_ty, name="printf")

    c_str = builder.alloca(c_str_val.type)  # allocate the %".2" variable
    builder.store(c_str_val, c_str)         # store the string constant into %".2"

    voidptr_ty = ir.IntType(8).as_pointer()
    fmt_arg = builder.bitcast(global_fmt, voidptr_ty)  # create the %".4" variable, a pointer to fstr

    builder.call(printf, [fmt_arg, c_str])  # call printf with the format string and the argument
    builder.ret_void()

    # The next lines call llvm and compile/run the generated assembly.
    llvm.initialize()
    llvm.initialize_native_target()
    llvm.initialize_native_asmprinter()

    print(str(m))                              # print out the generated IR
    llvm_module = llvm.parse_assembly(str(m))  # parse the assembly
    tm = llvm.Target.from_default_triple().create_target_machine()  # create the target machine

    with llvm.create_mcjit_compiler(llvm_module, tm) as ee:
        ee.finalize_object()  # make sure all modules owned by the execution engine are fully processed and usable for execution
        fptr = ee.get_function_address("printer")  # fptr will reference the printer function
        py_func = CFUNCTYPE(None)(fptr)
        py_func()  # run the printer function

if __name__ == "__main__":
    main()
It appears that I hadn't correctly initialized and used the global variable in the original version, which is why the string was never printed.

Setting a loop in R

I have already discussed a similar question in the following post:
How to set a for-loop in R
Each file's contents are as follows:
FILE_1.FASTA
>>TTBK2_Hsap ,(CK1/TTBK)
MSGGGEQLDILSVGILVKERWKVLRKIGGGGFGEIYDALDMLTRENVALKVESAQQPKQVLKMEVAVLKKLQGKDHVCRFIGCGRNDRFNYVVMQLQGRNLADLRRSQSRGTFT
FILE_2.FASTA
>>TTBK2_Hsap ,(CK1/TTBK)
MSGGGEQLDILSVGILVKERWKVLRKIGGGGFGEIYDALDMLTRENVALKVESAQQPKQVLKMEVAVLKKLQGKDHVCRFIGCGRNDRFNYVVMQLQGRNLADLRRSQSRGTFT
However, there is another package in R which works like this:
extractAPAAC(x, props = c("Hydrophobicity", "Hydrophilicity"), lambda = 30,
             w = 0.05, customprops = NULL)
I tried creating a function to run it over a number of FASTA files, and the program looks like this:
read_and_extract <- function(fasta) {
  seq <- readFASTA(fasta)[[1]]
  return(extractAPAAC(seq, props = c("Hydrophobicity", "Hydrophilicity"), lambda = 30,
                      w = 0.05, customprops = NULL))
}

setwd("H:\\CC")
fasta_files <- dir(pattern = "[.]fasta$")
aa_comp <- vapply(fasta_files, read_and_extract, rep(pi, 80))
write.csv(aa_comp, file = "C:\\Users\\PAAC.csv")
This program shows an error:
Error: unexpected ',' in "w = 0.05,"
But I have given w = 0.05 as the default value. Could anyone tell me where the actual problem is?

R package VLMC dies if state space size exceeds 27

I am using VLMC to fit some Markov models and it dies as soon as the alphabet size reaches 28.
I thought this was due to the default of coding each state as a single letter, but the behavior is the same with "code1char = FALSE". This is true for me on real data as well as this fake example.
library(VLMC)

# works fine
ins <- sample(seq(1, 27, 1), 50000, replace = T)
vlmc(ins, dump = 1, threshold.gen = 2, debug = TRUE)

# core dump
ins <- sample(seq(1, 28, 1), 50000, replace = T)
vlmc(ins, dump = 1, threshold.gen = 2, debug = TRUE)
Any ideas?
The segfault looks like this, by the way. It looks to me like every symbol of the alphabet after z is being mapped to NA, which causes an array-bounds issue.
library(VLMC)
sc <- 10
amp <- 13
x <- round(amp * sin(seq(0, 2 * sc * pi, 0.01)))
x <- amp + x + rpois(NROW(x), 1)
length(table(x))
length(x)
vlmc(x, dump = 1, threshold.gen = 2, debug = TRUE)
vlmc: Alpha = 'abcdefghijklmnopqrstuvwxyzNANANANANA' ; |X| = 31
vlmc: ctl.dump = 4 11
vlmc: n = |data| = 6284, cutoff{prune} = 21.8865, threshold{gen} = 2
vlmc: |alphabet| = 31, alphabet = abcdefghijklmnopqrstuvwxyzNA
generating...
*** caught segfault ***
address 0x0, cause 'memory not mapped'
Traceback:
1: .C("vlmc_p", data = Data, n = n, threshold.gen = as.integer(threshold.gen), cutoff.prune = as.double(cutoff.prune), alpha.len = as.integer(alpha.len), alpha = as.character(Alpha), debug = as.integer(as.logical(debug)), dump.flags = as.integer(c(dump, ctl.dump)), size = integer(4), PACKAGE = "VLMC")
2: vlmc(x, dump = 1, threshold.gen = 2, debug = TRUE)
As the maintainer of VLMC,
I can tell you that one of the longest-standing TODO entries for VLMC has been to raise the currently built-in limit of 26 on the maximal alphabet size.
Of course it is a bug that I don't give an error message in the case of a larger alphabet, but rather pass things to C and do not check there.
The next version of VLMC will not seg.fault for this anymore.
However, I'm not yet sure I'll find the time to allow a considerably larger alphabet....
Of course I'd happily accept patches ... it's free open source software.
Best regards,
Martin Maechler, ETH Zurich
