Why am I unable to create a sequence in R? - r

Here is my code to generate a sequence of numbers. The sequence starts at 3 and stops at the square root of a number entered by the user.
num = as.numeric(readline(prompt = "Enter a number :"))
mysqrt = as.integer(sqrt(num))
myseq = seq(from = 3,to = mysqrt, by = 2)
The error I get is ->
Error in seq.default(from = 3, to = mysqrt, by = 2) :
wrong sign in 'by' argument
If I run ->
seq(3,as.integer(sqrt(25)), by = 2)
it works fine as expected.
If I run ->
num = 25
seq(3,as.integer(sqrt(num)), by = 2)
it gives the above error.
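For reference, seq() raises wrong sign in 'by' argument whenever to ends up smaller than from while by is positive. With from = 3, that happens for any entered number below 9, because as.integer(sqrt(num)) truncates to 2 or less. A minimal sketch (hedged: the guard reflects an assumption about the intended behaviour, not part of the original post):
# "wrong sign in 'by' argument" means `to` < `from` with a positive `by`:
seq(from = 3, to = 2, by = 2)
# Error in seq.default(from = 3, to = 2, by = 2) : wrong sign in 'by' argument

# Guarded version, assuming the goal is the odd numbers up to sqrt(num):
num <- as.numeric(readline(prompt = "Enter a number :"))
mysqrt <- as.integer(sqrt(num))
if (!is.na(mysqrt) && mysqrt >= 3) {
  myseq <- seq(from = 3, to = mysqrt, by = 2)
} else {
  myseq <- integer(0)  # empty: input was not numeric, or its square root is below 3
}
Also note that readline() only prompts in an interactive session; when the script is sourced non-interactively it returns "" immediately, making num NA.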

Related

ChAMP package: Error in cmdscale(d) : 'k' must be in {1, 2, .. n - 1}

I selected two GSM samples.
When I use the champ.QC function of the ChAMP package, an error appears:
> champ.QC(beta = myLoad$beta,pheno=myLoad$pd$type)
[===========================]
[<<<<< ChAMP.QC START >>>>>>]
-----------------------------
champ.QC Results will be saved in ./CHAMP_QCimages/
[QC plots will be proceed with 411557 probes and 2 samples.]
<< Prepare Data Over. >>
Error in cmdscale(d) : 'k' must be in {1, 2, .. n - 1}
The complete code is as follows:
pd10 <- data.frame(stringsAsFactors = FALSE,
                   Sample_Name = c("GSM1669564", "GSM1669589"),
                   type = c("lung_NSCLC_adenocarcinoma"))
idat.name10 <- list.files("/home/shuangshuang/R/Rstudio/03.MethyICIBERSORT/dataset/LUAD",
                          pattern = "*.idat") |> substr(1L, 30L)
pd10$Sentrix_ID <- substr(idat.name10[seq(1, 4, 2)], 12, 21)
pd10$Sentrix_Position <- substr(idat.name10[seq(1, 4, 2)], 23, 28)
pd10$Sample_Type <- "tumor"
write.csv(pd10, file = "sample_type1.csv", row.names = F, quote = F)
myDir <- "/home/shuangshuang/R/Rstudio/03.MethyICIBERSORT/dataset/LUAD"
myLoad <- champ.load(myDir, arraytype = "450K")
class(myLoad)
#[1] "list"
champ.QC(beta = myLoad$beta, pheno = myLoad$pd$type)
So how can I solve this problem? Thanks!
I checked my code and found no problems; I don't know where the issue is.
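A likely explanation (hedged): champ.QC draws an MDS plot via cmdscale(d), and classical MDS with the default k = 2 dimensions needs at least n = 3 samples, since k must lie in {1, ..., n - 1}. With only the 2 GSM samples loaded above, that step cannot run. A minimal sketch reproducing the error outside ChAMP:
# cmdscale() with its default k = 2 fails when only 2 samples are present:
beta2 <- matrix(rnorm(20), nrow = 10, ncol = 2)  # 10 probes, 2 samples
d <- dist(t(beta2))                              # distances between the 2 samples
cmdscale(d)
# Error in cmdscale(d) : 'k' must be in {1, 2, ..  n - 1}
So the fix would be to load at least 3 samples, or to skip the MDS step, e.g. champ.QC(..., mdsPlot = FALSE); the mdsPlot argument is an assumption to check against your ChAMP version's documentation.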

TRT inference using onnx - Error Code 1: Cuda Driver (invalid resource handle)

Currently I'm trying to convert a given ONNX file to a TensorRT file and run inference on the generated TensorRT engine.
To do so, I used the TensorRT Python binding API, but
"Error Code 1: Cuda Driver (invalid resource handle)" occurs, and there is no helpful description of it.
Can anyone help me overcome this situation?
Thanks in advance; below is my code snippet.
def trt_export(self):
    fp_16_mode = True
    ## Obviously, I provided appropriate file names
    trt_file_name = "PATH_TO_TRT_FILE"
    onnx_name = "PATH_TO_ONNX_FILE"

    TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)
    EXPLICIT_BATCH = 1 << (int)(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(EXPLICIT_BATCH)
    parser = trt.OnnxParser(network, TRT_LOGGER)
    config = builder.create_builder_config()
    config.max_workspace_size = (1 << 30)
    config.set_flag(trt.BuilderFlag.FP16)
    config.default_device_type = trt.DeviceType.GPU

    profile = builder.create_optimization_profile()
    profile.set_shape('input', (1, 3, IMG_SIZE, IMG_SIZE), (12, 3, IMG_SIZE, IMG_SIZE), (32, 3, IMG_SIZE, IMG_SIZE))  # random numbers for min, opt, max batch
    config.add_optimization_profile(profile)

    with open(onnx_name, 'rb') as model:
        if not parser.parse(model.read()):
            for error in range(parser.num_errors):
                print(parser.get_error(error))

    engine = builder.build_engine(network, config)
    buf = engine.serialize()
    with open(trt_file_name, 'wb') as f:
        f.write(buf)
def validate_trt_result(self, input_path):
    TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)
    trt_file_name = "PATH_TO_TRT_FILE"
    trt_runtime = trt.Runtime(TRT_LOGGER)

    with open(trt_file_name, 'rb') as f:
        engine_data = f.read()
    engine = trt_runtime.deserialize_cuda_engine(engine_data)

    cuda.init()
    device = cuda.Device(0)
    ctx = device.make_context()

    inputs, outputs, bindings = [], [], []
    context = engine.create_execution_context()
    stream = cuda.Stream()
    index = 0
    for binding in engine:
        size = trt.volume(engine.get_binding_shape(binding)) * -1  # assuming one batch
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        host_mem = cuda.pagelocked_empty(size, dtype)
        device_mem = cuda.mem_alloc(host_mem.nbytes)
        bindings.append(int(device_mem))
        if engine.binding_is_input(binding):
            inputs.append(HostDeviceMem(host_mem, device_mem))
            context.set_binding_shape(index, [1, 3, IMG_SIZE, IMG_SIZE])
        else:
            outputs.append(HostDeviceMem(host_mem, device_mem))
        index += 1
    print(context.all_binding_shapes_specified)

    input_img = cv2.imread(input_path)
    input_r = cv2.resize(input_img, dsize=(256, 256))
    input_p = np.transpose(input_r, (2, 0, 1))
    input_e = np.expand_dims(input_p, axis=0)
    input_f = input_e.astype(np.float32)
    input_f /= 255

    numpy_array_input = [input_f]
    hosts = [input.host for input in inputs]
    trt_types = [trt.int32]
    for numpy_array, host, trt_type in zip(numpy_array_input, hosts, trt_types):
        numpy_array = np.asarray(numpy_array).astype(trt.nptype(trt_type)).ravel()
        print(numpy_array.shape)
        np.copyto(host, numpy_array)

    [cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
    #### ERROR HAPPENS HERE ####
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    #### ERROR HAPPENS HERE ####
    [cuda.memcpy_dtoh_async(out.host, out.device, stream) for out in outputs]
    stream.synchronize()

    print("TRT model inference result : ")
    output = outputs[0].host
    for one in output:
        print(one)
    ctx.pop()
It looks like a ctx.push() call is missing before the line with memcpy_htod_async.
Such an error can happen if TensorFlow / PyTorch is also using CUDA in parallel with TensorRT.
See the related question/answer: https://stackoverflow.com/a/73996477/5655977
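A minimal sketch of that fix (hedged: it assumes ctx is the PyCUDA context created with device.make_context() in validate_trt_result above), making the context current for the duration of the copies and the inference call:
ctx.push()  # make this CUDA context current on the calling thread
try:
    [cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    [cuda.memcpy_dtoh_async(out.host, out.device, stream) for out in outputs]
    stream.synchronize()
finally:
    ctx.pop()  # restore the previous context even if inference raises
Pushing before the asynchronous copies and popping afterwards keeps the handle valid even when another framework has swapped in its own context in between.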

R: Error in UseMethod(generic = "as.sparse", object = x)

I'm trying to use Seurat to analyze B cells for my tutor, but the following error occurs when I try to create the object, and I don't know how to correct it.
pbmc <- CreateSeuratObject(counts = BC.data$`B Cells`, Project = "BC.data", min.cells = 3, min.features = 200)
Error in UseMethod(generic = "as.sparse", object = x) :
no applicable method for 'as.sparse' applied to an object of class "Seurat"
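The error message says as.sparse is being dispatched on an object of class "Seurat", which suggests BC.data$`B Cells` is itself already a Seurat object rather than a count matrix. A hedged sketch of a possible fix, assuming that is the case (note also that the documented argument name is lowercase project):
# Extract the raw counts from the inner Seurat object, then build a new one:
counts <- GetAssayData(BC.data$`B Cells`, slot = "counts")
pbmc <- CreateSeuratObject(counts = counts, project = "BC.data",
                           min.cells = 3, min.features = 200)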

How to create a dataset of 1000 graphs in Python

I need to create a dataset of 1000 graphs. I used the following code:
# Assumed imports for this snippet:
from random import randint
import numpy as np
import torch
from torch_geometric.data import Data

data_list = []
ngraphs = 1000
for i in range(ngraphs):
    num_nodes = randint(10, 500)
    num_edges = randint(10, num_nodes * (num_nodes - 1))
    f1 = np.random.randint(10, size=(num_nodes))
    f2 = np.random.randint(10, 20, size=(num_nodes))
    f3 = np.random.randint(20, 30, size=(num_nodes))
    f_final = np.stack((f1, f2, f3), axis=1)
    capital = 2 * f1 + f2 - f3
    f1_t = torch.from_numpy(f1)
    f2_t = torch.from_numpy(f2)
    f3_t = torch.from_numpy(f3)
    capital_t = torch.from_numpy(capital)
    capital_t = capital_t.type(torch.LongTensor)
    x = torch.from_numpy(f_final)
    x = x.type(torch.LongTensor)
    edge_index = torch.randint(low=0, high=num_nodes, size=(num_edges, 2), dtype=torch.long)
    edge_attr = torch.randint(low=0, high=50, size=(num_edges, 1), dtype=torch.long)
    data = Data(x=x, edge_index=edge_index.t().contiguous(), y=capital_t, edge_attr=edge_attr)
    data_list.append(data)
This works. But when I run my training function as follows:
for epoch in range(1, 500):
    loss = train()
    print(f'Loss: {loss:.4f}')
I keep getting the following error:
RuntimeError                              Traceback (most recent call last)
 in ()
      1 for epoch in range(1, 500):
----> 2     loss = train()
      3     print(f'Loss: {loss:.4f}')

5 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
   1845     if has_torch_function_variadic(input, weight):
   1846         return handle_torch_function(linear, (input, weight), input, weight, bias=bias)
-> 1847     return torch._C._nn.linear(input, weight, bias)
   1848
   1849

RuntimeError: expected scalar type Float but found Long
Can someone help me troubleshoot this, or show how to make a 1000-graph dataset that doesn't throw this error?
Change your x and y tensors into FloatTensor, since the Linear layer in PyTorch only accepts FloatTensor inputs.
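A minimal sketch of that change inside the loop above (hedged: if y holds class indices for a classification loss such as CrossEntropyLoss, it should stay Long, and only x needs converting):
x = torch.from_numpy(f_final).float()          # node features as FloatTensor
capital_t = torch.from_numpy(capital).float()  # targets as FloatTensor (regression case)
data = Data(x=x, edge_index=edge_index.t().contiguous(), y=capital_t, edge_attr=edge_attr)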

RunUMAP gives segmentation fault

I tried the following
> my.exp <- RunUMAP(my.exp, dims = 1:30)
UMAP(a=None, angular_rp_forest=False, b=None, init='spectral',
learning_rate=1.0, local_connectivity=1, metric='correlation',
metric_kwds=None, min_dist=0.3, n_components=2, n_epochs=None,
n_neighbors=30, negative_sample_rate=5, random_state=None,
repulsion_strength=1.0, set_op_mix_ratio=1.0, spread=1.0,
target_metric='categorical', target_metric_kwds=None,
target_n_neighbors=-1, target_weight=0.5, transform_queue_size=4.0,
transform_seed=42, verbose=True)
Construct fuzzy simplicial set
0 / 14
1 / 14
2 / 14
*** caught segfault ***
address 0xfffffffffffffffa, cause 'memory not mapped'
Traceback:
1: py_call_impl(callable, dots$args, dots$keywords)
2: umap$fit_transform(as.matrix(x = object))
3: RunUMAP.default(object = data.use, assay = assay, n.neighbors = n.neighbors, n.components = n.components, metric = metric, n.epochs = n.epochs, learning.rate = learning.rate, min.dist = min.dist, spread = spread, set.op.mix.ratio = set.op.mix.ratio, local.connectivity = local.connectivity, repulsion.strength = repulsion.strength, negative.sample.rate = negative.sample.rate, a = a, b = b, seed.use = seed.use, metric.kwds = metric.kwds, angular.rp.forest = angular.rp.forest, reduction.key = reduction.key, verbose = verbose)
4: RunUMAP(object = data.use, assay = assay, n.neighbors = n.neighbors, n.components = n.components, metric = metric, n.epochs = n.epochs, learning.rate = learning.rate, min.dist = min.dist, spread = spread, set.op.mix.ratio = set.op.mix.ratio, local.connectivity = local.connectivity, repulsion.strength = repulsion.strength, negative.sample.rate = negative.sample.rate, a = a, b = b, seed.use = seed.use, metric.kwds = metric.kwds, angular.rp.forest = angular.rp.forest, reduction.key = reduction.key, verbose = verbose)
5: RunUMAP.Seurat(my.exp, dims = 1:30)
6: RunUMAP(my.exp, dims = 1:30)
I do not see a reason why it should be getting a segfault here. I have run this function many times over the last several months; the problem only started about a week ago.
Any help is appreciated.
UPDATE: I have now restarted the machine once, removed the entire older R installation, and started with a fresh install. I am still getting exactly the same error, including address 0xfffffffffffffffa ....
Sameet
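The traceback points at the Python side: the crash happens inside umap-learn called through reticulate (py_call_impl → umap$fit_transform), so reinstalling R alone leaves the faulty component untouched. A hedged diagnostic/workaround sketch (umap.method = "uwot" assumes a Seurat version whose RunUMAP supports the native R implementation):
library(reticulate)
py_config()  # check which Python and umap-learn reticulate is bound to

# Hedged workaround: use the R implementation of UMAP, avoiding the Python call
my.exp <- RunUMAP(my.exp, dims = 1:30, umap.method = "uwot")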
