How can I implement NMS (non-maximum suppression) on YOLOv4

I'm training on my own dataset with YOLOv4 from AlexeyAB, but I get multiple bounding boxes per object, as in the image below.
I googled NMS (non-maximum suppression), but all I can find is how to write the code in PyTorch or TensorFlow.
I'm new to object detection, so I have no idea how to implement this. All I want to do is produce just one bounding box per class.
Please help me. Thank you.

I think NMS is easy to code; you can find an explanation here. The code below is the per-class NMS I've seen used in Fast R-CNN.
import numpy as np

def nms(dets, thresh):
    # dets: N x 5 array of [x1, y1, x2, y2, score] rows
    x1 = dets[:, 0]
    y1 = dets[:, 1]
    x2 = dets[:, 2]
    y2 = dets[:, 3]
    scores = dets[:, 4]

    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
    order = scores.argsort()[::-1]  # indices sorted by descending score

    keep = []
    while order.size > 0:
        i = order[0]       # box with the highest remaining score
        keep.append(i)
        # intersection of box i with all lower-scored boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])

        w = np.maximum(0.0, xx2 - xx1 + 1)
        h = np.maximum(0.0, yy2 - yy1 + 1)
        inter = w * h
        # IoU of box i with each remaining box
        ovr = inter / (areas[i] + areas[order[1:]] - inter)

        # keep only boxes whose overlap with box i is below the threshold
        inds = np.where(ovr <= thresh)[0]
        order = order[inds + 1]
    return keep

# Example usage with made-up boxes: two heavily overlapping detections collapse to one.
dets = np.array([[10, 10, 50, 50, 0.9],
                 [12, 12, 52, 52, 0.8],
                 [100, 100, 150, 150, 0.7]])
print(nms(dets, thresh=0.5))  # boxes 0 and 2 survive; box 1 is suppressed


R variable scope using gradient function

I get the error message "argument is missing, with no default" when using the gradient function. It seems that variables are not being passed to the other functions. pi(1, 3) works, but gradient(pi, 1, 3) results in the error "Error in s(p1, p2) : argument "p2" is missing, with no default". Can anyone explain why this happens and how to fix it? Thanks. See the code below.
rm(list = ls())
library(pracma)  # for gradient()

n = 2000
# 1 for T, 2 for S, 0 for T, 1 for S
v1 = 8
v2 = 10
mc1 = 1
mc2 = 2
tc = 2  # travel cost

s = function(p1, p2) {  # share for two markets
  u1 = function(x) v1 - p1 - tc * x
  u2 = function(x) v2 - p2 - tc * (1 - x)
  udiff = function(x) u1(x) - u2(x)  # x prefer 1, (1 - x) prefer 2
  # previous if ensures a root in uniroot function
  xbar = ifelse(u1(0) < u2(0), 0, ifelse(u1(1) > u2(1), 1,
                uniroot(udiff, interval = c(0, 1))$root))
  # in case utility is negative
  x1 = ifelse(u1(0) < 0, 0, ifelse(u1(1) >= 0, 1, uniroot(u1, interval = c(0, 1))$root))
  x2 = ifelse(u2(1) < 0, 0, ifelse(u2(0) >= 0, 1, 1 - uniroot(u2, interval = c(0, 1))$root))
  s = c(min(xbar, x1), min(1 - xbar, x2))
}

pi = function(p1, p2) {
  pi1 = (p1 - mc1) * s(p1, p2)[1]
  pi2 = (p2 - mc2) * s(p1, p2)[2]
  return(c(pi1, pi2))
}

g = function(p1, p2) diag(gradient(pi, p1, p2))
gradient(pi, 1, 3)
If you type ?pracma::gradient, you will see
Usage: gradient(F, h1 = 1, h2 = 1)
with
F: vector of function values, or a matrix of values of a function of two variables.
In your case pi is just the name of a function, not the numeric function values that gradient expects as its first argument.
It is not clear what your exact objective is for the gradient or g function, but an example that makes your code run might be something like this:
g = function(p1, p2) diag(gradient(pi(p1,p2), p1,p2))
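If the intent behind g was the derivative of each profit with respect to its own price (which the diag() suggests), a sketch using pracma::jacobian, which does accept a function rather than function values, might look like the following. This is a guess at the objective, not part of the original answer:

library(pracma)
# Hedged sketch: jacobian() numerically differentiates a function of a vector;
# diag() then extracts the own-price derivatives d(pi1)/d(p1) and d(pi2)/d(p2).
# Here pi is the profit function defined above, not the constant base::pi.
g2 <- function(p1, p2) {
  J <- jacobian(function(p) pi(p[1], p[2]), c(p1, p2))
  diag(J)
}
g2(1, 3)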

Fused Lasso with Genlasso package in R

library(genlasso)

Nrofreps = 1  # not defined in the original post; assumed here so the code runs
n = 150
g <- matrix(0, n, Nrofreps)
X <- array(0, dim = c(Nrofreps, n, 1))
e <- rnorm(100 + n, 1)
a <- c(rep(0, 100 + 0.3*n), rep(5, 0.4*n), rep(5, 0.3*n))
beta <- c(rep(0.0, 100 + 0.7*n), rep(0.0, 0.3*n))
y <- rep(0, 101 + n)
for (z in 1:(100 + n)) {
  y[z + 1] <- a[z] + beta[z]*y[z] + e[z]
}
y2 <- y[101:(100 + n)]
g[, 1] = y2
X[1, , 1] = rep(5, n)
a1 = fusedlasso1d(g[, 1], X = as.matrix(X[1, , ]), minlam = 3, gamma = 0.1)
This is my code so far. In the future I want to extend it and be able to do more redraws; that is the reason some variables have more dimensions than necessary.
This code gives an error:
Error: (mak <- max(k)) <= m - 1 is not TRUE.
I do not know why this happens or how to fix it. Please help.
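As a hedged starting point rather than a diagnosis: a minimal fusedlasso1d call that is known to run is to fuse the signal y2 directly, with no design matrix. One guess is that a single constant predictor column leaves the fused penalty nothing to act on, but that is only a guess:

library(genlasso)
# Illustrative sketch (an assumption, not a fix for the X-based call above):
# run the 1d fused lasso on the signal itself.
out <- fusedlasso1d(y2, minlam = 3)
plot(out)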

How can I write for loop in R for creating an additional variable?

I'm fairly new to R and currently trying to write a for loop that calculates a new variable and adds it to an existing dataset.
It's not exactly what I'm trying to do but the idea is as follows.
R code below:
x1 = rnorm(n = 100, m = 0, sd = 1)
x2 = rnorm(n = 100, m = 3, sd = 2)
x3 = x1 * 4.12
d = data.frame(x1, x2, x3)
d$result[1] = (d[1, 1] + d[1, 2]) / d[1, 3]
d$result[2] = (d[2, 1] + d[2, 2]) / d[2, 3]
d$result[3] = (d[3, 1] + d[3, 2]) / d[3, 3]
...
d$result[100] = (d[100, 1] + d[100, 2]) / d[100, 3]
And I'm fully aware that I could add a result variable by simply writing
d$result = (x1 + x2) / x3
but, as mentioned, this isn't what I'm currently trying to do, so it'd be much appreciated if someone could help me write the for loop described above.
Many thanks in advance.
Try any of these:

transform(d, result = (x1 + x2) / x3)

d$result <- (d[, 1] + d[, 2]) / d[, 3]

d$result <- (d[[1]] + d[[2]]) / d[[3]]

d$result <- (d$x1 + d$x2) / d$x3

d$result <- with(d, (x1 + x2) / x3)

n <- nrow(d)
d$result <- sapply(seq_len(n), function(i) (d[i, 1] + d[i, 2]) / d[i, 3])

n <- nrow(d)
d$result <- NA_real_  # optional, but pre-allocation is more efficient
for (i in seq_len(n)) d$result[i] <- (d[i, 1] + d[i, 2]) / d[i, 3]
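A quick sanity check, added here as an illustration, that the looped version agrees with the vectorised forms:

stopifnot(all.equal(d$result, (d$x1 + d$x2) / d$x3))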
One option, if there are NA elements, is to use rowSums:
d$result <- rowSums(d[1:2], na.rm = TRUE) / d[, 3]

fminsearch in R is worse than in Matlab

Here is my data (the x and y columns are the relevant ones):
https://www.dropbox.com/s/b61a7enhoa0p57p/Simple1.csv
What I need is to fit a polyline (piecewise-linear function) to the data. The MATLAB code that does this is:
spline_fit.m:
function [score, params] = spline_fit(points, x, y)
    min_f = min(x) - 1;
    max_f = max(x);
    points = [min_f points max_f];
    params = zeros(length(points) - 1, 2);
    score = 0;
    for i = 1:length(points) - 1
        in = (x > points(i)) & (x <= points(i+1));
        if sum(in) > 2
            p = polyfit(x(in), y(in), 1);
            pred = p(1)*x(in) + p(2);
            score = score + norm(pred - y(in));
            params(i, :) = p;
        else
            params(i, :) = nan;
        end
    end
end
test.m:
% Find the parameters
r = [100, 250, 400];
p = fminsearch('spline_fit', r, [], x, y)
[score, param] = spline_fit(p, x, y)

% Plot the result
y1 = zeros(size(x));
p1 = [-inf, p, inf];
for i = 1:size(param, 1)
    in = (x > p1(i)) & (x <= p1(i+1));
    y1(in) = x(in)*param(i,1) + param(i,2);
end
[x1, I] = sort(x);
y1 = y1(I);
plot(x, y, 'x', x1, y1, 'k', 'LineWidth', 2)
And this works fine, producing the following optimised break points: [102.9842, 191.0006, 421.9912]
I've implemented the same idea in R:
library(pracma)

spline_fit <- function(x, xx, yy) {
  min_f = min(xx) - 1
  max_f = max(xx)
  points = c(min_f, x, max_f)
  params = array(0, c(length(points) - 1, 2))
  score = 0
  for (i in 1:length(points) - 1)  # note: this parses as (1:length(points)) - 1, i.e. 0, 1, ..., n-1
  {
    inn <- (xx > points[i]) & (xx <= points[i+1])
    if (sum(inn) > 2)
    {
      p <- polyfit(xx[inn], yy[inn], 1)
      pred <- p[1]*xx[inn] + p[2]
      score <- score + norm(as.matrix(pred - yy[inn]), "F")
      params[i, ] <- p
    }
    else
      params[i, ] <- NA
  }
  score
}
But I get very bad results:
> fminsearch(spline_fit,c(100,250,400), xx = Simple1$x, yy = Simple1$y)
$xval
[1] 100.1667 250.0000 400.0000
$fval
[1] 4452.761
$niter
[1] 2
As you can see, it stops after 2 iterations and doesn't produce good break points.
I'd be very glad for any help in resolving this issue.
Also, if anyone knows how to implement this in C# using any free library, that would be even better. I know where to get polyfit, but not fminsearch.
The problem here is that the likelihood surface is very badly behaved -- there are both multiple minima and discontinuous jumps -- which will make the results you get with different optimizers almost arbitrary. I will admit that MATLAB's optimizers are remarkably robust, but I would say that it's pretty much a matter of chance (and where you start) whether an optimizer will get to the global minimum for this case, unless you use some form of stochastic global optimization such as simulated annealing.
I chose to use R's built-in optimizer (which uses Nelder-Mead by default) rather than fminsearch from the pracma package.
spline_fit <- function(x, xx = Simple1$x, yy = Simple1$y) {
  min_f = min(xx) - 1
  max_f = max(xx)
  points = c(min_f, x, max_f)
  params = array(0, c(length(points) - 1, 2))
  score = 0
  for (i in 1:(length(points) - 1))
  {
    inn <- (xx > points[i]) & (xx <= points[i+1])
    if (sum(inn) > 2)
    {
      p <- polyfit(xx[inn], yy[inn], 1)
      pred <- p[1]*xx[inn] + p[2]
      score <- score + norm(as.matrix(pred - yy[inn]), "F")
      params[i, ] <- p
    }
    else
      params[i, ] <- NA
  }
  score
}
library(pracma)  ## for polyfit
Simple1 <- read.csv("Simple1.csv")
opt1 <- optim(c(100, 250, 400), fn = spline_fit, xx = Simple1$x, yy = Simple1$y)
opt1$par
## [1] 102.4365 201.5835 422.2503
This is better than the fminsearch results, but still different from the MATLAB results, and worse than them:
## MATLAB results:
matlab_fit <- c(102.9842, 191.0006, 421.9912)
spline_fit(matlab_fit, xx = Simple1$x, yy = Simple1$y)
## 3724.3
opt1$value
## 3755.5 (worse)
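Following up on the simulated-annealing suggestion above, here is a minimal sketch using base R's optim(method = "SANN") as the stochastic global step, followed by a local Nelder-Mead polish. The maxit and temp settings are illustrative guesses, not tuned values:

set.seed(1)
# Stochastic global step: simulated annealing (control values are guesses)
opt_sann <- optim(par = c(100, 250, 400), fn = spline_fit,
                  xx = Simple1$x, yy = Simple1$y,
                  method = "SANN", control = list(maxit = 20000, temp = 50))
# Local polish from wherever SANN ends up
opt2 <- optim(par = opt_sann$par, fn = spline_fit,
              xx = Simple1$x, yy = Simple1$y)
opt2$par
opt2$value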
The bbmle package offers an experimental/not very well documented set of tools for exploring optimization surfaces:
library(bbmle)
ss <- slice2D(fun=spline_fit,opt1$par,nt=51)
library(lattice)
A 2D "slice" around the optim-estimated parameters. The circles show the optim fit (solid) and the minimum value within each slice (open).
png("splom1.png")
print(splom(ss))
dev.off()
A 'slice' between the matlab and optim fits shows that the surface is quite rugged:
ss2 <- bbmle:::slicetrans(matlab_fit,opt1$par,spline_fit)
png("slice1.png")
print(plot(ss2))
dev.off()

Writing the results from a nested loop into another vector in R

I'm pretty new to R, and am struggling a bit with it. I have the following code:
t <- 1  # t must be initialised before the repeat loop
repeat {
  if (t > 1000)
    break
  else {
    y1 <- rpois(50, 15)
    y2 <- rpois(50, 15)
    y <- c(y1, y2)

    p_0y <- matrix(nrow = max(y) - min(y), ncol = 1)
    i = min(y)
    while (i <= max(y)) {
      p_0y[i - min(y), ] = (length(which(y1 == i))/50)
      i <- i + 1
    }

    p_y <- matrix(nrow = max(y) - min(y), ncol = 1)
    j = min(y)
    while (j <= max(y)) {
      p_y[j - min(y), ] = (length(which(y == j))/100)
      j <- j + 1
    }

    p_0yx <- p_0y[rowSums(p_0y == 0) == 0]
    p_yx <- p_y[rowSums(p_0y == 0) == 0]

    g = 0
    logvect <- matrix(nrow = (length(p_yx)), ncol = 1)
    while (g <= (length(p_yx))) {
      logvect[g, ] = (p_0yx[g])/(p_yx[g])
      g <- g + 1
    }

    p_0yx %*% (log2(logvect))
    print(p_0yx %*% (log2(logvect)))
    t <- t + 1
  }
}
I am happy with everything up to the last line, but instead of printing the value of p_0yx %*% (log2(logvect)) to the screen I would like to store it as another vector. Any ideas? I have tried doing it in a similar way as in the nested loop, but it doesn't seem to work.
Thanks
The brief answer is to first declare a variable. Put it before everything you've posted here. I'm going to call it temp. It will hold all of the values.
temp <- numeric(1000)
Then, instead of your print line use
temp[t] <- p_0yx %*% log2(logvect)
As an aside, your code is doing some weird things. Look at the first index of p_0y: it is effectively an index to item 0 in that matrix, but R starts indexing at 1. Also, when you create the number of rows in that matrix you use max(y) - min(y); if the max is 10 and the min is 1, then there are only 9 rows, so I'm betting you really wanted to add one. Finally, your code is very un-R-like, with all of the unnecessary while loops. For example, your whole last loop (and the initialization of logvect) can be replaced with:
logvect = (p_0yx)/(p_yx)
But back to the errors, and some more R-ness: could the following code...
p_0y <- matrix(nrow = max(y) - min(y), ncol = 1)
i = min(y)
while (i <= max(y)) {
  p_0y[i - min(y), ] = (length(which(y1 == i))/50)
  i <- i + 1
}
...perhaps be replaced, more correctly, with this?
p_0y <- numeric(max(y) - min(y) + 1)
p_0y[sort(unique(y1)) - min(y1) + 1] = table(y1)/50
p_0y <- matrix(p_0y, ncol = 1)
(similar rethinking of the rest of your code could eliminate the rest of the loops as well)
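Putting those suggestions together, here is a hedged sketch of the whole loop without the while loops, using the temp storage variable from above. Names follow the original post, and the off-by-one row count is avoided by indexing over min(y):max(y) directly; this is illustrative, not a drop-in replacement:

temp <- numeric(1000)
for (t in 1:1000) {
  y1 <- rpois(50, 15)
  y2 <- rpois(50, 15)
  y  <- c(y1, y2)
  vals <- min(y):max(y)
  p_0y <- sapply(vals, function(v) sum(y1 == v) / 50)  # empirical pmf of y1
  p_y  <- sapply(vals, function(v) sum(y == v) / 100)  # empirical pmf of y
  keep <- p_0y != 0                                    # drop zero rows, as before
  temp[t] <- sum(p_0y[keep] * log2(p_0y[keep] / p_y[keep]))
}
head(temp)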
