How to load a custom TensorRT .engine model in YOLOv5 - tensorrt

I have already converted my YOLOv5 .pt model to a .engine model.
How can I use this .engine model in the same way as the example code below?
Thanks for your help!
Example code:
import cv2
import torch
import torch.backends.cudnn as cudnn
from models.experimental import attempt_load
from utils.general import non_max_suppression

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = attempt_load('best.pt', map_location=device)  # model weight here, replace yolov5s.pt
stride = int(model.stride.max())
cudnn.benchmark = True
names = model.module.names if hasattr(model, 'module') else model.names
cap = cv2.VideoCapture(0)  # source: replace the 0 for other sources
while cap.isOpened():
    ret, frame = cap.read()
    if ret == True:
        img = torch.from_numpy(frame)
        img = img.permute(2, 0, 1).float().to(device)
        img /= 255.0  # 0 - 255 to 0.0 - 1.0
        if img.ndimension() == 3:
            img = img.unsqueeze(0)
        pred = model(img, augment=False)[0]
        pred = non_max_suppression(pred, 0.28, 0.45)  # img, conf, iou
        for i, det in enumerate(pred):
            if len(det):
                for d in det:  # d = (x1, y1, x2, y2, conf, cls)
                    x1 = int(d[0].item())
                    y1 = int(d[1].item())
                    x2 = int(d[2].item())
                    y2 = int(d[3].item())
                    conf = round(d[4].item(), 2)
                    c = int(d[5].item())
                    cv2.rectangle(frame, (x1, y1), (x2, y2), (0,255,0), 3)  # box
                    .....
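One possible approach (not from the question): recent versions of the ultralytics/yolov5 repository ship a DetectMultiBackend wrapper that loads .pt, .onnx and .engine files behind the same interface, so the loop above changes very little. Below is a minimal sketch, assuming a v6.x-or-later clone of the repo, a working TensorRT install, and that best.engine was exported at 640x640:

import cv2
import torch
from models.common import DetectMultiBackend
from utils.augmentations import letterbox
from utils.general import non_max_suppression, scale_coords

device = torch.device('cuda')                              # TensorRT engines need a GPU
model = DetectMultiBackend('best.engine', device=device)   # replace with your engine path
stride, names = model.stride, model.names
imgsz = 640                                                # must match the export size

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    img = letterbox(frame, imgsz, stride=stride, auto=False)[0]  # engines expect a fixed input size
    img = img.transpose((2, 0, 1))[::-1]                   # HWC BGR -> CHW RGB
    img = torch.from_numpy(img.copy()).to(device).float() / 255.0
    img = img.unsqueeze(0)                                 # add batch dimension
    # if the engine was exported with --half, the input may also need img = img.half()
    pred = model(img)                                      # inference through the TensorRT backend
    pred = non_max_suppression(pred, 0.28, 0.45)           # conf, iou -- same as for the .pt model
    for det in pred:
        if len(det):
            # map boxes from the letterboxed image back to the original frame
            det[:, :4] = scale_coords(img.shape[2:], det[:, :4], frame.shape).round()
            for *xyxy, conf, cls in det:
                x1, y1, x2, y2 = map(int, xyxy)
                cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 3)

The main differences from the .pt version are that a TensorRT engine runs at a fixed input size (hence the letterbox call with auto=False) and always on the GPU, and that boxes must be rescaled back to the original frame before drawing.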

Related

RuntimeError: Format PNG-PIL cannot read in I mode, Jupyter Notebook/Google Colab

I'm using .png files with the Albumentations library, but now I'm getting this error. Can someone help me please? I've checked the files and none of them seem to be corrupted.
Errors:
RuntimeError: Format PNG-PIL cannot read in I mode
ValueError: Could not find a format to read the specified file in multi-image mode
Here's the function I made to apply Albumentations to my images:
import os
import cv2
from imageio import mimread
from tqdm import tqdm
from albumentations import (HorizontalFlip, VerticalFlip, OpticalDistortion,
                            ElasticTransform, GridDistortion)

def novas_imagens(imagens, mascaras, dir_salvo, img_altura=256, img_largura=256, augmentation=True):
    for idx, (x, y) in tqdm(enumerate(zip(imagens, mascaras)), total=len(imagens)):
        nome = x.split('\\')[-1].split('.')[0]
        #print(nome)
        x = cv2.imread(x)
        y = mimread(y)[0]
        if augmentation:
            aug = HorizontalFlip(p=1.0)
            augmentation = aug(image=x, mask=y)
            x1 = augmentation['image']
            y1 = augmentation['mask']
            aug = VerticalFlip(p=1.0)
            augmentation = aug(image=x, mask=y)
            x2 = augmentation['image']
            y2 = augmentation['mask']
            aug = OpticalDistortion(p=1.0, distort_limit=2, shift_limit=0.5)
            augmentation = aug(image=x, mask=y)
            x3 = augmentation['image']
            y3 = augmentation['mask']
            aug = ElasticTransform(p=1.0, alpha=120, sigma=120 * 0.05, alpha_affine=120 * 0.03)
            augmentation = aug(image=x, mask=y)
            x4 = augmentation['image']
            y4 = augmentation['mask']
            aug = GridDistortion(p=1.0)
            augmentation = aug(image=x, mask=y)
            x5 = augmentation['image']
            y5 = augmentation['mask']
            X = [x, x1, x2, x3, x4, x5]
            y = [y, y1, y2, y3, y4, y5]
            X = [x, x1]
            y = [y, y1]
        else:
            X = [x]
            y = [y]
        indice = 0
        for img, mask in zip(X, y):
            img = cv2.resize(img, (img_largura, img_altura))
            mask = cv2.resize(mask, (img_largura, img_altura))
            if len(X) == 1:
                tmp_img_nome = f"{nome}.png"
                tmp_mask_nome = f"{nome}.png"
            else:
                tmp_img_nome = f"{nome}_{indice}.png"
                tmp_mask_nome = f"{nome}_{indice}.png"
            path_imagem = os.path.join(dir_salvo, "images", tmp_img_nome)
            path_mascara = os.path.join(dir_salvo, "mask", tmp_mask_nome)
            cv2.imwrite(path_imagem, img)
            cv2.imwrite(path_mascara, mask)
            indice += 1
When I run the code on my computer I don't get this error, but since I need a stronger machine I uploaded the code to Google Colab to run on the GPU, and I've fixed the file paths.
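One observation, not an official fix: the "I" in the first error refers to imageio's multi-image read mode, which is what mimread() requests, and the PNG-PIL plugin rejects that mode in some imageio versions, which also explains the second error. This would fit the Colab-only symptom, since the two environments presumably have different imageio versions or plugin priorities. If each mask is a single-frame PNG, reading it as a single image sidesteps the problem; the helper name ler_mascara below is only an illustration:

# Sketch only, assuming the masks are ordinary single-frame PNG files:
# imageio.mimread() requests multi-image ("I") mode, which the PNG-PIL plugin
# rejects here; reading each mask as a single image avoids the error.
import imageio

def ler_mascara(caminho):  # hypothetical helper; would replace `y = mimread(y)[0]` in novas_imagens
    return imageio.imread(caminho)  # or: cv2.imread(caminho, cv2.IMREAD_UNCHANGED)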

Incorporating forcing functions in the ODE model for Bayesian estimation

I am new to the Turing package in Julia and need some help!
I have been trying to estimate the parameters of a model with a discrete forcing function q(t), whose values are read from a file. The code throws a BoundsError when solving the ODE model.
The full Julia script is given below:
#=
Section 1: Import required packages
=#
using Turing, Distributions, DifferentialEquations, Interpolations
using MCMCChains, Plots, StatsPlots
using CSV, XLSX, DataFrames
using Random
Random.seed!(18431)
#=
Section 2: Read the data file containing observation data and get the NPI data into arrays
=#
my_data = DataFrame(XLSX.readtable("observation_data.xlsx","Sheet1"; infer_eltypes = true)...);
total_weeks = 36; # Total number of time points
N = 67081000; # Population
y_time = 1:1:total_weeks; # Timepoints (weeks)
y_S = Float64.(my_data.Susceptible); # Susceptible
y_S = y_S[1:total_weeks];
y_D = Float64.(my_data.Deceased); # Deceased
y_D = y_D[1:total_weeks];
y_HC = Float64.(my_data.Hosp_critical); # Critical hospitalizations
y_HC = y_HC[1:total_weeks];
y_T = Float64.(my_data.Hosp_total); # Total hospitalizations
y_T = y_T[1:total_weeks];
y_HNC = y_T - y_HC; # Non-critical hospitalizations
observation_data = [y_S y_D y_HC y_HNC];
wet_data = DataFrame(XLSX.readtable("Wetdata.xlsx","Wetdata"; infer_eltypes = true)...);
# IPTCC is a forcing function
IPTCC = wet_data.Normalized_IPTCC;
IPTCC = IPTCC[1:total_weeks];
mobil_data = DataFrame(XLSX.readtable("Mobdata.xlsx","Mobdata"; infer_eltypes = true)...);
# mobil is another forcing function
mobil = mobil_data.Mean;
mobil = mobil[1:total_weeks];
wet_forcing = interpolate(IPTCC, BSpline(Linear()));
mobil_forcing = interpolate(mobil, BSpline(Linear()));
forcing_params = (wet_forcing, mobil_forcing);
#=
Section 3: Define the model and the respective parameters
=#
function epidemic_wildtype(dy, y, p, t)
    S, E, I, Hᵪ, Hₙ, R, D = y;
    β, λ, α, γ, θᵪ, θₙ, γᵪ, γₙ, δᵪ, w, m = p;
    N = 67081000;
    dy[1] = -β*w(t)*m(t)*I*S/N + λ*R;   # S
    dy[2] = β*w(t)*m(t)*I*S/N - α*E;    # E
    dy[3] = α*E - (γ + θᵪ + θₙ)*I;      # I
    dy[4] = θₙ*I - γₙ*Hᵪ;               # HNC
    dy[5] = θᵪ*I - (γᵪ + δᵪ)*Hₙ;        # HC
    dy[6] = γ*I + γₙ*Hₙ + γᵪ*Hᵪ - λ*R;  # R
    dy[7] = δᵪ*Hᵪ;                      # D
end
#=
Section 4: Define the priors and the Bayesian model
=#
Turing.setadbackend(:forwarddiff)
@model function fitting_epidemic_wildtype(observ_data, w_forcing, m_forcing)
    # Priors of model parameters
    β ~ truncated(Normal(0.65, 0.1), 0, 2)
    λ ~ truncated(Normal(0.5, 0.1), 0, 5)
    α ~ truncated(Normal(0.25, 0.1), 0.1, 0.5)
    γ ~ truncated(Normal(0.05, 0.1), 0, 5)
    γₙ ~ Uniform(0.05, 0.1)
    γᵪ ~ Uniform(0.05, 0.1)
    θₙ ~ Uniform(0.09, 0.75)
    θᵪ ~ Uniform(0.09, 0.75)
    δᵪ ~ Uniform(0.1, 0.8)
    p = (β, λ, α, γ, θᵪ, θₙ, γᵪ, γₙ, δᵪ, w_forcing, m_forcing);

    # Priors of standard deviations
    σ₁ ~ InverseGamma(1, 1) # Susceptible
    σ₂ ~ InverseGamma(1, 1) # Deceased
    σ₃ ~ InverseGamma(2, 3) # Critically hospitalized
    σ₄ ~ InverseGamma(1, 1) # Non-critically hospitalized

    # Initial conditions
    N = 67081000;
    S0 = N;
    I0 = 100;
    y0 = [S0, 0, I0, 0, 0, 0, 0];
    #show typeof(y0)
    #show eltype(p)
    y0 = typeof(β).(y0);

    # Solve the model and compare with observed data
    problem = ODEProblem(epidemic_wildtype, y0, (1, 36), p)
    predicted = solve(problem, Tsit5(), saveat=1)
    for i = 1:length(predicted)
        observ_data[i,1] ~ Normal(predicted[1,i], σ₁)
        observ_data[i,2] ~ Normal(predicted[7,i], σ₂)
        observ_data[i,3] ~ Normal(predicted[5,i], σ₃)
        observ_data[i,4] ~ Normal(predicted[4,i], σ₄)
    end
end
#=
Section 5: Run the model-inference system and save the chains
=#
model = fitting_epidemic_wildtype(observation_data, wet_forcing, mobil_forcing);
number_of_chains = 1;
chain = sample(model, NUTS(0.65), MCMCThreads(), 10000, number_of_chains);
Since the stack trace is pretty long, I am sharing a link to view it: Full error.
In the error:
Line 62 is dy[1] = -β*w(t)*m(t)*I*S/N + λ*R; # S
Line 107 is predicted = solve(problem, Tsit5(), saveat=1.0)
Line 123 is chain = sample(model, NUTS(0.65), MCMCThreads(), 10000, number_of_chains);
Can someone pretty please help me out? 😢 Thanks in advance!

Given two Gaussian density curves, how do I identify v such that v equally splits the area under the overlap?

Given two Gaussian density curves, how do I identify 'v' such that 'v' equally splits the area under the overlap?
The following code creates a visualisation of my problem. I am interested in calculating the area 'A' and then finding the x-value 'v' which splits that area exactly in two.
# Define Gaussian parameters
mu1 = 10
sd1 = 0.9
mu2 = 12
sd2 = 0.6
# Visualise, set values
sprd = 3
xmin = min(c(mu1-sprd*sd1,mu2-sprd*sd2))
xmax = max(c(mu1+sprd*sd1,mu2+sprd*sd2))
x = seq(xmin,xmax,length.out=1000)
y1 = dnorm(x,mean=mu1,sd=sd1)
y2 = dnorm(x,mean=mu2,sd=sd2)
ymin = min(c(y1,y2))
ymax = max(c(y1,y2))
# Visualise, plot
plot(x,y1,xlim=c(xmin,xmax),ylim=c(ymin,ymax),type="l",col=2,ylab="Density(x)")
lines(x,y2,col=3)
abline(v=c(mu1,mu2),lty=2,col=c(2,3))
abline(h=0,lty=2)
legend("topleft",legend=c("N(mu1,sd1)","N(mu2,sd2)","mu1","mu2"),lty=c(1,1,2,2),col=c(2,3))
text(11,0.05,"A",cex=2)
Based on the comments on this post, I have written my own proposal for a solution:
gaussIsect = function(mu1, mu2, sd1, sd2){
  sd12 = sd1**2
  sd22 = sd2**2
  sqdi = sd12 - sd22
  x1 = (mu2 * sd12 - sd2*( mu1*sd2 + sd1*sqrt( (mu1-mu2)**2 + 2*sqdi * log(sd1/sd2) ) )) / sqdi
  x2 = (mu2 * sd12 - sd2*( mu1*sd2 - sd1*sqrt( (mu1-mu2)**2 + 2*sqdi * log(sd1/sd2) ) )) / sqdi
  return(c(x1, x2))
}
gaussSplitOlap = function(mu1, mu2, sd1, sd2){
  if( mu1 > mu2 ){
    tmp = c(mu1, mu2)
    mu1 = tmp[2]
    mu2 = tmp[1]
    tmp = c(sd1, sd2)
    sd1 = tmp[2]
    sd2 = tmp[1]
  }
  isct = gaussIsect(mu1=mu1, mu2=mu2, sd1=sd1, sd2=sd2)
  isct = isct[which(mu1 < isct & isct < mu2)]
  a1 = 1 - pnorm(isct, mean=mu1, sd=sd1)
  a2 = pnorm(isct, mean=mu2, sd=sd2)
  A = a1 + a2
  v1 = qnorm(1 - A/2, mean=mu1, sd=sd1)
  v2 = qnorm(A/2, mean=mu2, sd=sd2)
  results = list(isct=isct, A=A, v1=v1, v2=v2)
  return(results)
}
test = gaussSplitOlap(mu1 = 10,sd1 = 0.9,mu2 = 12,sd2 = 0.6)
print(test)
The output from running this test is as follows
$isct
[1] 11.09291
$A
[1] 0.1775984
$v1
[1] 11.21337
$v2
[1] 11.19109
I would have assumed that the v1 and v2 values would be equal?
First solve analytically for the point x where the two densities intersect (this is a degree-2 polynomial equation).
Then, given this x, the area is the sum of the two tails:
area = min(pnorm(x, mean = mu1, sd = sd1), 1 - pnorm(x, mean = mu1, sd = sd1)) +
       min(pnorm(x, mean = mu2, sd = sd2), 1 - pnorm(x, mean = mu2, sd = sd2))
Like I said in the comment, you can do this using a simple Monte Carlo simulation:
prob <- c()
med <- c()
for(i in 1:1000){
  randomX <- runif(1000000, xmin, xmax)
  randomY <- runif(1000000, 0, 0.3)
  cond <- (randomY < dnorm(randomX, mean=mu1, sd=sd1) & randomY < dnorm(randomX, mean=mu2, sd=sd2))
  prob <- c(prob, sum(cond)/1000000*(xmax-xmin)*0.3)
  med <- c(med, median(randomX[which(cond==1)]))
}
cat("Area of A is equal to: ", mean(prob),"\n")
# Area of A is equal to: 0.1778459
cat("Value of v is equal to: ",mean(med),"\n")
# Value of v is equal to: 11.21008
plot(x,y1,xlim=c(xmin,xmax),ylim=c(ymin,ymax),type="l",col=2,ylab="Density(x)")
lines(x,y2,col=3)
abline(v=c(mu1,mu2,mean(med)),lty=2,col=c(2,3,4))
abline(h=0,lty=2)
legend("topleft",legend=c("N(mu1,sd1)","N(mu2,sd2)","mu1","mu2"),lty=c(1,1,2,2),col=c(2,3))
text(11,0.05,"A",cex=2)
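Both answers can also be cross-checked deterministically by integrating the pointwise minimum of the two densities: the overlap area A is its total integral, and v is the point where the integral up to v reaches A/2, which is exactly the quantity whose median the Monte Carlo code estimates. A sketch of that idea, written in Python/SciPy rather than R, assuming the same parameters as the question:

# Sketch (Python/SciPy): deterministic version of the Monte Carlo estimate above.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad
from scipy.optimize import brentq

mu1, sd1 = 10.0, 0.9
mu2, sd2 = 12.0, 0.6
lo, hi = mu1 - 6 * sd1, mu2 + 6 * sd2           # integration range covering both curves

def overlap(x):
    # density of the overlap region: pointwise minimum of the two Gaussian densities
    return min(norm.pdf(x, mu1, sd1), norm.pdf(x, mu2, sd2))

A = quad(overlap, lo, hi)[0]                    # area of the overlap

def half_area(v):
    # positive once the overlap area to the left of v exceeds A/2
    return quad(overlap, lo, v)[0] - A / 2

v = brentq(half_area, mu1, mu2)                 # v splits the overlap area in two
print(A, v)                                     # should be close to the Monte Carlo values above

Incidentally, for these parameters this also explains the question's puzzle: to the right of the intersection point the overlap is bounded by the N(mu1, sd1) density alone, so v1 = qnorm(1 - A/2, mu1, sd1) is effectively the correct split point (matching the Monte Carlo estimate of about 11.21), while v2 = qnorm(A/2, mu2, sd2) implicitly assumes the N(mu2, sd2) density bounds the overlap all the way up to v, which it does not.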

Non-finite function value in integrate (R): linear regression with error decomposition

I'm working on a linear regression where the error term is decomposed into a normal random variable and a beta random variable (for more details see, for instance, http://www.eea-esem.com/files/papers/eea-esem/2012/405/Cardak-Johnston-Martin_20111110.pdf). The problem is that when I run MLE with maxLik in R, it returns errors and the process stops. Here is the code in R:
#######################DGP################
library(maxLik)
set.seed(1357)
n=1000
x1 = matrix(rnorm(n,3,1.5),n,1)
x2 = matrix(rnorm(n,1,2.5),n,1)
x3 = sample(c(0,1), n, replace = TRUE)
X = cbind(1,x1,x2)
b = matrix(c(1,2,-3),1,3)
y = X%*%t(b)+rnorm(n,0,2)
g0 = 2
g1 = 1
g2 = 1.5
g3 = 2.4
#######################
p = exp(g0+g1*x3)
q = exp(g2+g3*x3)
#cbind(p,q)
eta = rep(0,n)
for (i in 1:n){
  eta[i] = rbeta(1, shape1=p[i], shape2=q[i])
}
plot(density(eta))
y2 = y+eta*2
summary(lm(y~X-1))
ols = lm(y2~X-1)
summary(ols)
#######################Model specification################
normbeta = function(u, nu, sigma2, p, q, k){
  eta = (u - nu)/k
  ff = dnorm(nu, 0, sqrt(sigma2))*dbeta(eta, shape1=p, shape2=q)/k
  return(ff)
}
obj = function(u, sigma2, p, q, k){
  integrate(normbeta, u=u, lower=u-k, upper=u, sigma2=sigma2, p=p, q=q, k=k)$value
}
objv = Vectorize(obj)
############MLE#################
loglik = function(b){
  a = split(b, rep(1:7, c(3,1,1,1,1,1,1)))
  beta = a[[1]]
  sigma2 = a[[2]]^2
  k2 = a[[3]]^2
  k = sqrt(k2)
  g0 = a[[4]]
  g1 = a[[5]]
  g2 = a[[6]]
  g3 = a[[7]]
  p = exp(g0 + g1*x3)
  q = exp(g2 + g3*x3)
  u = y - X%*%beta
  l = objv(u=u, sigma2=sigma2, p=p, q=q, k=k)
  return(log(l))
}
a = list()
a[[1]] = coef(ols)
a[[2]] = sqrt(deviance(ols)/df.residual(ols))
a[[3]] = c(1,2,1,1.5,2.4)
b = unlist(a)
sum(loglik(b))
MLE = maxLik(logLik=loglik,print.level=2,start=b,method="BHHH")
summary(MLE)
With the error reported:
----- Initial parameters: -----
fcn value: -2331.714
parameter initial gradient free
X1 2.042012 -338.04135 1
X2 1.953121 -1008.58628 1
X3 -2.969475 -310.82131 1
2.008750 212.29245 1
1.000000 -161.51965 1
2.000000 -75.76826 1
1.000000 -29.25573 1
1.500000 75.10844 1
2.400000 29.20714 1
Condition number of the (active) hessian: 1.657308e+14
Error in integrate(normbeta, u = u, lower = u - k, upper = u, sigma2 = sigma2, :
non-finite function value
In addition: Warning message:
In dbeta(eta, shape1 = p, shape2 = q) : NaNs produced
I believe that the error might come from dbeta(eta, shape1 = p, shape2 = q), since the beta density parameters p and q should be positive. However, I've already constrained them with the exp() function. Does anyone know what is going on here? Many thanks.

Solving for the steady state of a PDE using steady.1D (rootSolve, R)

I am trying to obtain the steady state for a spatially explicit Lotka-Volterra competition model of two competing species (with spatial diffusion). Here is the model (without the diffusion term):
http://en.wikipedia.org/wiki/Competitive_Lotka%E2%80%93Volterra_equations
where I let r1 = r2 = rG and alpha12 = alpha21 = a. The carrying capacity of species 1 is assumed to vary linearly across space x, i.e. K1 = x (while K2 = 0.5), and we assume Neumann boundary conditions. The spatial domain x runs from 0 to 1.
Here is example R code for this model:
library(rootSolve)

LVcomp1D <- function (time, state, parms, N, Da, x, dx) {
  with (as.list(parms), {
    S1 <- state[1:N]
    S2 <- state[(N+1):(2*N)]
    ## Dispersive fluxes; zero-gradient boundaries
    FluxS1 <- -Da * diff(c(S1[1], S1, S1[N]))/dx
    FluxS2 <- -Da * diff(c(S2[1], S2, S2[N]))/dx
    ## LV Competition
    InteractS1 <- rG * S1 * (1- (S1/x)- ((a*S2)/x))
    InteractS2 <- rG * S2 * (1- (S2/(K2))- ((a*S1)/(K2)))
    ## Rate of change = -Flux gradient + Interaction
    dS1 <- -diff(FluxS1)/dx + InteractS1
    dS2 <- -diff(FluxS2)/dx + InteractS2
    return (list(c(dS1, dS2)))
  })
}
pars <- c(rG = 1.0, a = 0.8, K2 = 0.5)
dx <- 0.001
x <- seq(0, 1, by = dx)
N <- length(x)
Da <- 0.001
state <- c(rep(0.5, N), rep(0.5, N))
print(system.time(
out <- steady.1D (y = state, func = LVcomp1D, parms = pars,
nspec = 2, N = N, x = x, dx = dx, Da = Da, pos = TRUE)
))
mf <- par(mfrow = c(2, 2))
plot(out, grid = x, xlab = "x", mfrow = NULL,
ylab = "N(x)", main = c("Species 1", "Species 2"), type = "l")
par(mfrow = mf)
The problem is that I cannot get the steady-state solutions of the model; I keep getting a horizontal line lying on the x-axis. Can you please help me, since I do not know what is wrong with this code?
Thank you

Resources