I have already discussed a similar question in the following post:
How to set a for-loop in R
Each file's contents are as follows:
FILE_1.FASTA
>>TTBK2_Hsap ,(CK1/TTBK)
MSGGGEQLDILSVGILVKERWKVLRKIGGGGFGEIYDALDMLTRENVALKVESAQQPKQVLKMEVAVLKKLQGKDHVCRFIGCGRNDRFNYVVMQLQGRNLADLRRSQSRGTFT
FILE_2.FASTA
>>TTBK2_Hsap ,(CK1/TTBK)
MSGGGEQLDILSVGILVKERWKVLRKIGGGGFGEIYDALDMLTRENVALKVESAQQPKQVLKMEVAVLKKLQGKDHVCRFIGCGRNDRFNYVVMQLQGRNLADLRRSQSRGTFT
However, there is another package in R, protr, whose extractAPAAC function works like this:
extractAPAAC(x, props = c("Hydrophobicity", "Hydrophilicity"), lambda = 30,
w = 0.05, customprops = NULL)
I tried creating a function to run it over a number of FASTA files, and the program looks like this:
read_and_extract <- function(fasta) {
  seq <- readFASTA(fasta)[[1]]
  return(extractAPAAC(seq, props = c("Hydrophobicity", "Hydrophilicity"),
                      lambda = 30, w = 0.05, customprops = NULL))
}
setwd("H:\\CC")
fasta_files <- dir(pattern = "[.]fasta$")
aa_comp <- vapply(fasta_files, read_and_extract, rep(pi, 80))
write.csv(aa_comp, file = "C:\\Users\\PAAC.csv")
This program throws an error:
Error: unexpected ',' in "w = 0.05,"
But I have given w = 0.05 as the default value; could anyone tell me where the actual problem is?
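A note on one possible cause (this is an assumption, since the function as posted parses fine when sourced in one piece): R reports "unexpected ','" when the line before a continuation was treated as complete, e.g. when a wrapped call is pasted into the console line by line. Keeping the whole call on one line sidesteps that:

# Same function as above, with the extractAPAAC call on a single line
read_and_extract <- function(fasta) {
  seq <- readFASTA(fasta)[[1]]
  return(extractAPAAC(seq, props = c("Hydrophobicity", "Hydrophilicity"), lambda = 30, w = 0.05, customprops = NULL))
}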
Currently I'm trying to convert a given ONNX file to a TensorRT file and run inference on the generated TensorRT file.
To do so, I used the TensorRT Python binding API, but
"Error Code 1: Cuda Driver (invalid resource handle)" happens, and there is no helpful description of it.
Can anyone help me overcome this situation?
Thanks in advance; below is my code snippet.
def trt_export(self):
    fp_16_mode = True
    ## Obviously, I provided appropriate file names
    trt_file_name = "PATH_TO_TRT_FILE"
    onnx_name = "PATH_TO_ONNX_FILE"

    TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)
    EXPLICIT_BATCH = 1 << (int)(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(EXPLICIT_BATCH)
    parser = trt.OnnxParser(network, TRT_LOGGER)
    config = builder.create_builder_config()
    config.max_workspace_size = (1 << 30)
    config.set_flag(trt.BuilderFlag.FP16)
    config.default_device_type = trt.DeviceType.GPU
    profile = builder.create_optimization_profile()
    profile.set_shape('input', (1, 3, IMG_SIZE, IMG_SIZE), (12, 3, IMG_SIZE, IMG_SIZE), (32, 3, IMG_SIZE, IMG_SIZE))  # random numbers for min. / opt. / max batch
    config.add_optimization_profile(profile)

    with open(onnx_name, 'rb') as model:
        if not parser.parse(model.read()):
            for error in range(parser.num_errors):
                print(parser.get_error(error))

    engine = builder.build_engine(network, config)
    buf = engine.serialize()
    with open(trt_file_name, 'wb') as f:
        f.write(buf)
def validate_trt_result(self, input_path):
    TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)
    trt_file_name = "PATH_TO_TRT_FILE"
    trt_runtime = trt.Runtime(TRT_LOGGER)

    with open(trt_file_name, 'rb') as f:
        engine_data = f.read()
    engine = trt_runtime.deserialize_cuda_engine(engine_data)

    cuda.init()
    device = cuda.Device(0)
    ctx = device.make_context()

    inputs, outputs, bindings = [], [], []
    context = engine.create_execution_context()
    stream = cuda.Stream()
    index = 0
    for binding in engine:
        size = trt.volume(engine.get_binding_shape(binding)) * 1  # assuming one batch
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        host_mem = cuda.pagelocked_empty(size, dtype)
        device_mem = cuda.mem_alloc(host_mem.nbytes)
        bindings.append(int(device_mem))
        if engine.binding_is_input(binding):
            inputs.append(HostDeviceMem(host_mem, device_mem))
            context.set_binding_shape(index, [1, 3, IMG_SIZE, IMG_SIZE])
        else:
            outputs.append(HostDeviceMem(host_mem, device_mem))
        index += 1
    print(context.all_binding_shapes_specified)

    input_img = cv2.imread(input_path)
    input_r = cv2.resize(input_img, dsize=(256, 256))
    input_p = np.transpose(input_r, (2, 0, 1))
    input_e = np.expand_dims(input_p, axis=0)
    input_f = input_e.astype(np.float32)
    input_f /= 255

    numpy_array_input = [input_f]
    hosts = [input.host for input in inputs]
    trt_types = [trt.int32]

    for numpy_array, host, trt_type in zip(numpy_array_input, hosts, trt_types):
        numpy_array = np.asarray(numpy_array).astype(trt.nptype(trt_type)).ravel()
        print(numpy_array.shape)
        np.copyto(host, numpy_array)

    [cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
    #### ERROR HAPPENS HERE ####
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    #### ERROR HAPPENS HERE ####
    [cuda.memcpy_dtoh_async(out.host, out.device, stream) for out in outputs]
    stream.synchronize()

    print("TRT model inference result : ")
    output = outputs[0].host
    for one in output:
        print(one)
    ctx.pop()
It looks like a ctx.push() line is missing before the line with memcpy_htod_async.
Such an error can happen if TensorFlow / PyTorch is also using CUDA in parallel with TensorRT.
See the related question/answer: https://stackoverflow.com/a/73996477/5655977
I need to perform deep pagination using R and the solr package (Solr 7.2.1 server, R 3.4.3).
I can't figure out how to get the nextCursorMark from the resulting data frame. I usually do this in Python, but this one is stumping me.
res <- solr_all(base = myBase, rows = 100, verbose = TRUE,
                sort = "unique_id asc",
                fq = "*:*",
                cursorMark = "*")
I cannot get the nextCursorMark from the result. Any help would be appreciated.
I have noticed that if I add nextCursorMark to pageDoc, it will return the value when parsetype is set to json, but not when it is set to dataframe. So I guess the other part of the question is: where is that value if you return a data frame?
So I finally found a way to make this work. It is not optimal; the final solution is in the GitHub issue referenced in the comment. But this works:
dat <-"http://yadda.com"
cM = "*"
done = FALSE
rowCount = 0
a <- data.frame()
while (!done)
{
Data <- solr_search(base = dat, rows = 100, verbose=FALSE,
sort = "unique_id asc",
fq="*:*",
parsetype="json",
cursorMark=cM,
pageDoc = "nextCursorMark"
)
if (cM == Data$nextCursorMark) {
done = TRUE
} else {
cM = Data$nextCursorMark
}
a <- append(x = a, Data$response$docs)
rowCount = rowCount + length(Data$response$docs)
print(rowCount)
}
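A hedged side note: because append() returns a plain list, a ends up as a list of document records rather than a data frame, despite being initialized with data.frame(). If a single data frame is wanted at the end, one option (assuming each doc is a flat named list with consistent fields) is:

docs_df <- do.call(rbind, lapply(a, as.data.frame))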
I am using the package fda, in particular the function fRegress. This function includes another function, called eigchk, which checks whether the coefficient matrix is singular.
Here is the function as the package authors (J. O. Ramsay, Giles Hooker, and Spencer Graves) wrote it:
eigchk <- function(Cmat) {
  # check Cmat for singularity
  eigval <- eigen(Cmat)$values
  ncoef <- length(eigval)
  if (eigval[ncoef] < 0) {
    neig <- min(length(eigval), 10)
    cat("\nSmallest eigenvalues:\n")
    print(eigval[(ncoef - neig + 1):ncoef])
    cat("\nLargest eigenvalues:\n")
    print(eigval[1:neig])
    stop("Negative eigenvalue of coefficient matrix.")
  }
  if (eigval[ncoef] == 0) stop("Zero eigenvalue of coefficient matrix.")
  logcondition <- log10(eigval[1]) - log10(eigval[ncoef])
  if (logcondition > 12) {
    warning("Near singularity in coefficient matrix.")
    cat(paste("\nLog10 Eigenvalues range from\n",
              log10(eigval[ncoef]), " to ", log10(eigval[1]), "\n"))
  }
}
As you can see, the last if condition checks whether logcondition is bigger than 12 and then prints the range of the eigenvalues.
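As a small illustration (the matrix below is made up, and assumes the eigchk definition above has been sourced), a matrix whose log10 condition number exceeds 12 triggers exactly this branch:

# Hypothetical near-singular matrix: eigenvalues 1e10 and 1e-6,
# so log10(1e10) - log10(1e-6) = 16 > 12
Cmat <- diag(c(1e10, 1e-6))
eigchk(Cmat)
# Warning: Near singularity in coefficient matrix.
# plus the cat() output giving the log10 eigenvalue range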
The following code implements regularization with a roughness penalty. The code is taken from the book "Functional Data Analysis with R and MATLAB".
annualprec = log10(apply(daily$precav, 2, sum))
tempbasis = create.fourier.basis(c(0, 365), 65)
tempSmooth = smooth.basis(day.5, daily$tempav, tempbasis)
tempfd = tempSmooth$fd
templist = vector("list", 2)
templist[[1]] = rep(1, 35)
templist[[2]] = tempfd
conbasis = create.constant.basis(c(0, 365))
betalist = vector("list", 2)
betalist[[1]] = conbasis
SSE = sum((annualprec - mean(annualprec))^2)
Lcoef = c(0, (2*pi/365)^2, 0)
harmaccelLfd = vec2Lfd(Lcoef, c(0, 365))
betabasis = create.fourier.basis(c(0, 365), 35)
lambda = 10^12.5
betafdPar = fdPar(betabasis, harmaccelLfd, lambda)
betalist[[2]] = betafdPar
annPrecTemp = fRegress(annualprec, templist, betalist)
betaestlist2 = annPrecTemp$betaestlist
annualprechat2 = annPrecTemp$yhatfdobj
SSE1.2 = sum((annualprec - annualprechat2)^2)
RSQ2 = (SSE - SSE1.2)/SSE
Fratio2 = ((SSE - SSE1.2)/3.7)/(SSE1.2/30.3)
resid = annualprec - annualprechat2
SigmaE. = sum(resid^2)/(35 - annPrecTemp$df)
SigmaE = SigmaE.*diag(rep(1, 35))
y2cMap = tempSmooth$y2cMap
stderrList = fRegress.stderr(annPrecTemp, y2cMap, SigmaE)
betafdPar = betaestlist2[[2]]
betafd = betafdPar$fd
betastderrList = stderrList$betastderrlist
betastderrfd = betastderrList[[2]]
As the penalty factor the authors use a certain lambda. The following code implements the search for an appropriate lambda:
loglam = seq(5, 15, 0.5)
nlam = length(loglam)
SSE.CV = matrix(0, nlam, 1)
for (ilam in 1:nlam) {
  lambda = 10^loglam[ilam]
  betalisti = betalist
  betafdPar2 = betalisti[[2]]
  betafdPar2$lambda = lambda
  betalisti[[2]] = betafdPar2
  fRegi = fRegress.CV(annualprec, templist, betalisti)
  SSE.CV[ilam] = fRegi$SSE.CV
}
By varying loglam and using cross-validation I am supposed to acquire the best lambda; yet if the length of loglam is too big, or its values drive the coefficient matrix towards singularity, I receive the following message:
Log10 Eigenvalues range from
-5.44495317739048 to 6.78194912518214
It is created by the function eigchk, as I have already mentioned above.
Now my question is: is there any way to catch this so-called warning? By catch I mean some function or method that alerts me when this has happened, so that I can adjust the values of loglam. Since there is no actual warning definition in the function besides this printed message, I have run out of ideas.
Thank you all a lot for your suggestions.
By "catch the warning", if you mean, will alert you that there is a potential problem with loglam, then you might want to look at try and tryCatch functions. Then you can define the behavior you want implemented if any warning condition is satisfied.
If you just want to store the output of the warning (which might be assumed from the question title, but may not be what you want), then try looking into capture.output.
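A sketch of that approach (grepping for the text that eigchk cat()s is my assumption about how you would detect it):

out <- capture.output(fRegi <- fRegress.CV(annualprec, templist, betalisti))
if (any(grepl("Eigenvalues range", out))) {
  message("eigchk reported near singularity for loglam = ", loglam[ilam])
}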
Using the PyQt library, I am currently having issues with binding a function to push buttons generated dynamically in a loop.
So far I have managed to generate the buttons and bind a function to them with a lambda. The problem is that, since I need every single button to open a different file, I find myself in an odd situation: all the buttons open the same one, the last value assigned to the variable.
Any idea on how to fix the situation? As a last note, sorry in case of stupid mistakes; I am new to OOP and PyQt.
def searchTheStuff(self):
    found = 0
    data = MainWindow.intermediateCall()
    f1 = open('Path.txt', 'r')
    path = f1.read()
    f1.close()

    Yinc = 25
    Y = 40
    X = 20
    results = 0
    for path, dirs, files in os.walk(path, topdown=True):
        for name in files:
            if name.endswith('.txt'):
                fullpath = os.path.join(path, name)
                mail = open_close(fullpath)
                if mail.find(data) != -1 and results < 3:
                    found = 1
                    self.buttons.append(QtGui.QPushButton(self))
                    print fullpath
                    command = lambda: webbrowser.open()
                    self.buttons[-1].clicked.connect(command)
                    self.buttons[-1].setText(_translate("self", name, None))
                    self.buttons[-1].setGeometry(QtCore.QRect(X, Y, 220, 20))
                    results = results + 1
                    if results == 33:
                        X = 260
                        Y = 15
                    Y = Y + Yinc
    if found == 0:
        self.label = QtGui.QLabel(self)
        self.label.setGeometry(QtCore.QRect(20, 40, 321, 21))
        self.label.setText(_translate("self", "No templates have been found:", None))
Assume that I have a (at least subjectively) complex function like this:
library(rgithub)

pull <- function(i) {
  commits <- get.pull.request.commits(owner = owner, repo = repo, id = i,
                                      ctx = get.github.context(), per_page = 100)
  links <- digest_header_links(commits)
  number_of_pages <- links[2, ]$page
  if (number_of_pages != 0)
    try_default(for (n in 1:number_of_pages) {
      if (as.integer(commits$headers$`x-ratelimit-remaining`) < 5)
        Sys.sleep(as.integer(commits$headers$`x-ratelimit-reset`) - as.POSIXct(Sys.time()) %>% as.integer())
      else
        get.pull.request.commits(owner = owner, repo = repo, id = i,
                                 ctx = get.github.context(), per_page = 100, page = n)
    }, default = NULL)
  else
    return(commits)
}
list <- c(500, 501, 502)
pull_lists <- lapply(list, pull)
Let's say that I want to attain a deeper understanding of what actually happens inside this function. How can I add some type of logging that will help me trace what goes on inside the function as it is being run?
You can use futile.logger.
You can then set the logging threshold level using:
flog.threshold(INFO)
Functions like flog.debug or flog.info are used to produce the logging output.
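For example, a minimal sketch of the pull function from the question instrumented with futile.logger (the message wording and log points are my own choices):

library(futile.logger)
flog.threshold(DEBUG)  # show DEBUG messages and above

pull <- function(i) {
  flog.info("Fetching commits for pull request %s", i)
  commits <- get.pull.request.commits(owner = owner, repo = repo, id = i,
                                      ctx = get.github.context(), per_page = 100)
  links <- digest_header_links(commits)
  number_of_pages <- links[2, ]$page
  flog.debug("Pagination header reports %s page(s)", number_of_pages)
  # ... rest of the function unchanged ...
  commits
}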
For further details see:
http://www.r-bloggers.com/better-logging-in-r-aka-futile-logger-1-3-0-released/
http://cran.r-project.org/web/packages/futile.logger/index.html