I asked this question a year ago and got code for this "probability heatmap":
numbet <- 32
numtri <- 1e5
prob <- 5/6
# Fill a matrix
xcum <- matrix(NA, nrow = numtri, ncol = numbet + 1)
for (i in 1:numtri) {
  x <- sample(c(0, 1), numbet, prob = c(prob, 1 - prob), replace = TRUE)
  xcum[i, ] <- c(i, cumsum(x) / cumsum(1:numbet))
}
colnames(xcum) <- c("trial", paste("bet", 1:numbet, sep=""))
mxcum <- reshape(data.frame(xcum), varying = 1 + 1:numbet,
                 idvar = "trial", v.names = "outcome", direction = "long", timevar = "bet")
library(plyr)
mxcum2 <- ddply(mxcum, .(bet, outcome), nrow)
mxcum3 <- ddply(mxcum2, .(bet), summarize,
                ymin = c(0, head(seq_along(V1)/length(V1), -1)),
                ymax = seq_along(V1)/length(V1),
                fill = (V1/sum(V1)))
head(mxcum3)
library(ggplot2)
p <- ggplot(mxcum3, aes(xmin = bet - 0.5, xmax = bet + 0.5, ymin = ymin, ymax = ymax)) +
  geom_rect(aes(fill = fill), colour = "grey80") +
  scale_fill_gradient("Outcome", formatter = "percent", low = "red", high = "blue") +
  scale_y_continuous(formatter = "percent") +
  xlab("Bet")
print(p)
(This code may need slight changes with current ggplot2 versions: the `formatter` argument has since been removed in favour of `labels = scales::percent`.)
This is almost exactly what I want, except that each vertical shaft should have a different number of bins, i.e. the first should have 2, the second 3, the third 4 (N+1). In the graph, shafts 6 and 7 have the same number of bins (7), whereas shaft 7 should have 8 (N+1).
If I'm right, the code does this because it shows only the observed data: if I ran more trials, we would get more bins. I don't want to rely on the number of trials to get the correct number of bins.
How can I adapt this code to give the correct number of bins?
I have used R's dbinom to generate the frequency of heads for n = 1:32 trials and plotted the graph; it is what you would expect. I have read some of your earlier posts here on SO and on math.stackexchange, but I still don't understand why you'd want to simulate the experiment rather than generate from a binomial random variable directly. If you could explain it, that would be great! I'll try to work on the simulated solution from @Andrie to check whether I can match the output shown below. For now, here's something you might be interested in.
set.seed(42)
numbet <- 32
numtri <- 1e5
prob <- 5/6
require(plyr)
out <- ldply(1:numbet, function(idx) {
  outcome <- dbinom(idx:0, size = idx, prob = prob)
  bet <- rep(idx, length(outcome))
  N <- round(outcome * numtri)
  ymin <- c(0, head(seq_along(N)/length(N), -1))
  ymax <- seq_along(N)/length(N)
  data.frame(bet, fill = outcome, ymin, ymax)
})
require(ggplot2)
p <- ggplot(out, aes(xmin = bet - 0.5, xmax = bet + 0.5, ymin = ymin, ymax = ymax)) +
  geom_rect(aes(fill = fill), colour = "grey80") +
  scale_fill_gradient("Outcome", low = "red", high = "blue") +
  xlab("Bet")
The plot:
Edit: Explanation of how your old code from Andrie works and why it doesn't give what you intend.
Basically, what Andrie did (or rather, one way to look at it) is to use the fact that if you have two binomial distributions X ~ B(n, p) and Y ~ B(m, p), where n, m = size and p = probability of success, then their sum is also binomial: X + Y ~ B(n + m, p) (1). So, the purpose of xcum is to obtain the outcome for all n = 1:32 tosses, but to explain it better, let me construct the code step by step. Along with the explanation, the code for xcum will also become obvious and can be constructed in no time (without any need for a for-loop, or for constructing a cumsum every time).
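As a quick numerical sanity check of (1) (my own illustration, not part of the original answer): the sum of a B(3, 1/6) draw and a B(2, 1/6) draw should be distributed as B(5, 1/6).
set.seed(1)
x <- rbinom(1e5, size = 3, prob = 1/6)
y <- rbinom(1e5, size = 2, prob = 1/6)
round(prop.table(table(x + y)), 3)           # empirical distribution of X + Y
round(dbinom(0:5, size = 5, prob = 1/6), 3)  # theoretical B(5, 1/6) pmf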
If you have followed me so far, the idea is first to create a numtri * numbet matrix, with each column (of length numtri) having 0's and 1's with probability 5/6 and 1/6 respectively. That is, if you have numtri = 1000, you'll have ~833 0's and ~167 1's in each of the numbet columns (= 32 here). Let's construct this and test it first.
numtri <- 1e3
numbet <- 32
set.seed(45)
xcum <- t(replicate(numtri, sample(0:1, numbet, prob=c(5/6,1/6), replace = TRUE)))
# check for count of 1's
> apply(xcum, 2, sum)
 [1] 169 158 166 166 160 182 164 181 168 140 154 142 169 168 159 187 176 155 151 151 166
[22] 163 164 176 162 160 177 157 163 166 146 170
# So, the counts of 1's are "approximately" what we expect (around 1000/6 ≈ 167).
Now, each of these columns is a sample of numtri draws from a binomial distribution with size n = 1 (i.e., Bernoulli trials with success probability 1/6). If we were to add the first two columns and replace the second column with this sum, then, from (1), since the probabilities are equal, we'd end up with a binomial distribution with n = 2. Similarly, if you instead added the first three columns and replaced the 3rd column with this sum, you would obtain a binomial distribution with n = 3, and so on.
The concept is that if you cumulatively add each column, then you end up with numbet number of binomial distributions (1 to 32 here). So, let's do that.
xcum <- t(apply(xcum, 1, cumsum))
# you can verify that the second column has similar probabilities by this:
# calculate the frequency of all values in 2nd column.
> table(xcum[,2])
0 1 2
694 285 21
> round(numtri * dbinom(2:0, 2, prob=5/6))
[1] 694 278 28
# more or less identical, good!
If you divide the xcum we have generated thus far by cumsum(1:numbet) over each row, in this manner:
xcum <- xcum/matrix(rep(cumsum(1:numbet), each=numtri), ncol = numbet)
this will be identical to the xcum matrix that comes out of the for-loop (if you generate it with the same seed). However, I don't quite understand the reason for this division by Andrie, as it is not necessary to generate the graph you require. I suppose it has something to do with the frequency values you talked about in an earlier post on math.stackexchange.
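One way to verify that claim (a sketch; it assumes numtri, numbet and the divided xcum from the steps above are still in the workspace):
set.seed(45)  # same seed as the vectorised construction above
xcum_loop <- matrix(NA, nrow = numtri, ncol = numbet + 1)
for (i in 1:numtri) {
  x <- sample(c(0, 1), numbet, prob = c(5/6, 1/6), replace = TRUE)
  xcum_loop[i, ] <- c(i, cumsum(x) / cumsum(1:numbet))
}
all.equal(unname(xcum_loop[, -1]), unname(xcum))  # should be TRUE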
Now on to why you have difficulties obtaining the graph I had attached (with n+1 bins):
For a binomial distribution with n = 1:32 trials, 5/6 as the probability of tails (failure) and 1/6 as the probability of heads (success), the probability of k heads is given by:
nCk * (1/6)^k * (5/6)^(n-k) # where nCk is n choose k
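A one-line check of this closed form against dbinom (my addition, for completeness):
n <- 7; k <- 0:n
all.equal(choose(n, k) * (1/6)^k * (5/6)^(n - k), dbinom(k, n, prob = 1/6))  # TRUE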
For the test data we've generated, the observed relative frequencies of k heads for n = 7 and n = 8 (trials) are:
# n=7
0 1 2 3 4 5
.278 .394 .233 .077 .016 .002
# n=8
0 1 2 3 4 5
.229 .375 .254 .111 .025 .006
Why do they both have 6 bins, and not 8 and 9 bins respectively? Of course, this has to do with the value of numtri = 1000. Let's see what the probabilities of each of these 8 and 9 bins are, by generating them directly from the binomial distribution using dbinom, to understand why this happens.
# n = 7
dbinom(7:0, 7, prob=5/6)
# output rounded to 3 decimal places
[1] 0.279 0.391 0.234 0.078 0.016 0.002 0.000 0.000
# n = 8
dbinom(8:0, 8, prob=5/6)
# output rounded to 3 decimal places
[1] 0.233 0.372 0.260 0.104 0.026 0.004 0.000 0.000 0.000
You see that the probabilities corresponding to k = 6, 7 (for n = 7) and k = 6, 7, 8 (for n = 8) are ~0. They are very low in value; the minimum here is actually about 5.95e-7 (n = 8, k = 8). This means you can expect to see one such value only once in roughly 1/(5.95e-7) ≈ 1.7 million simulations. If you check the same for n = 32 and k = 32, the value is about 1.26e-25, so you'd have to simulate on the order of 1e25 trials to get at least one result where all 32 outcomes are heads.
This is why your results had no values for certain bins: the probability of observing them is very low for the given numtri. And for the same reason, generating the probabilities directly from the binomial distribution overcomes this problem/limitation.
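The same point in code (a small illustration of the reasoning above): the expected count per bin is numtri times the bin probability, and bins whose expected count rounds to 0 will almost always stay empty.
numtri <- 1000
round(numtri * dbinom(0:7, 7, prob = 1/6), 3)  # expected counts, n = 7, k = 0:7
round(numtri * dbinom(0:8, 8, prob = 1/6), 3)  # expected counts, n = 8, k = 0:8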
I hope I've managed to write this with enough clarity for you to follow. Let me know if you have trouble going through it.
Edit 2:
When I run the simulation above with numtri = 1e6 and count the number of heads k = 0:7 and k = 0:8, I get this for n = 7 and n = 8:
# n = 7
0 1 2 3 4 5 6 7
279347 391386 233771 77698 15763 1915 117 3
# n = 8
0 1 2 3 4 5 6 7 8
232835 372466 259856 104116 26041 4271 392 22 1
Note that there are now observations for k = 6 and k = 7 for both n = 7 and n = 8. Also, for n = 8, there is a single observation with k = 8. With increasing numtri you'll obtain more of the other missing bins, but it'll require a huge amount of time/memory (if it's feasible at all).
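A sketch of how such counts can be reproduced (my reconstruction; the original code may differ, and with a different seed the exact counts will too). The number of heads after n tosses is just the row sum of the first n toss columns, so only 8 columns are needed for n = 7 and n = 8:
set.seed(42)
numtri <- 1e6
tosses <- matrix(sample(0:1, numtri * 8, prob = c(5/6, 1/6), replace = TRUE),
                 nrow = numtri)
table(rowSums(tosses[, 1:7]))  # counts of k heads for n = 7
table(rowSums(tosses))         # counts of k heads for n = 8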
I'm using the MCMCglmm function from the package of the same name to fit a mixed-effects model for a three-level categorical response. The goal is to get probabilities of each of the three classes in response for each row of data in the 5000-row test_week test set.
library(MCMCglmm)
prior6 <- list(R = list(V = diag(2), nu = 0.0001),
               G = list(G1 = list(V = diag(4), nu = 0.0001, alpha.mu = rep(0, 4), alpha.V = diag(4) * 25),
                        G2 = list(V = diag(1), nu = 0.0001, alpha.mu = rep(0, 1), alpha.V = diag(1) * 25),
                        G3 = list(V = diag(2), nu = 0.0001, alpha.mu = rep(0, 2), alpha.V = diag(2) * 25),
                        G4 = list(V = diag(1), nu = 0.0001, alpha.mu = rep(0, 1), alpha.V = diag(1) * 25),
                        G5 = list(V = diag(2), nu = 0.0001, alpha.mu = rep(0, 2), alpha.V = diag(2) * 25),
                        G6 = list(V = diag(1), nu = 0.0001, alpha.mu = rep(0, 1), alpha.V = diag(1) * 25)))
mix_mod_fit6 <- MCMCglmm(response ~ 1 + x + y + z,
                         random = ~ us(1 + x + y + z):a + us(1):d + us(1 + x):b +
                           us(1):e + us(1 + z):c + us(1):f,
                         rcov = ~ us(trait):units, prior = prior6, family = 'categorical',
                         data = train_weeks_sample1, nitt = 3850, thin = 45, burnin = 2500)
mixed_final_predictions6 <- predict.MCMCglmm(mix_mod_fit6, test_week,
                                             type = 'response', interval = 'prediction')
The issue arises with the predict function, which returns a 10000x1 matrix of numbers that very roughly mirror the probabilities of the 2nd and 3rd levels of response (the first 5000 rows corresponding to the 2nd level and the following 5000 to the 3rd level) after I split it into a 5000x2 matrix. I can then back-solve to get the predictions for the 1st level, but the predictions are problematic: they only come in multiples of 1/30, as shown below (and as illustrated in the histograms in the picture).
temp6
0 0.0333333333333333 0.0666666666666667 0.1 0.133333333333333 0.166666666666667
7935 3914 2199 1901 1883 2173
0.2 0.233333333333333 0.266666666666667 0.3 0.333333333333333 0.366666666666667
2257 2198 1991 1703 1465 1184
0.4 0.433333333333333 0.466666666666667 0.5 0.533333333333333 0.566666666666667
987 756 527 410 268 164
0.6 0.633333333333333 0.666666666666667 0.7
98 35 24 6
Any insight or examples on how the predict.MCMCglmm function works, how to receive predictions more granular than just thirtieths, as well as predictions for all three levels rather than just two would be greatly appreciated! Thank you!
Note: the prior's V values were specifically selected to match the element sizes in the MCMCglmm call, and cannot be changed. However, altering the value of nu has no effect on the predictions, nor does omitting alpha.mu and alpha.V.
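One arithmetic observation that may be relevant here (my note, not part of the original post): the MCMC settings in the call retain only 30 posterior samples, which matches the 1/30 granularity of the predictions, so increasing nitt (or reducing thin) may be worth investigating.
# Number of retained posterior samples implied by the settings above
(3850 - 2500) / 45  # (nitt - burnin) / thin = 30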
Current results vs ideal results distribution example:
Consider the following dataset:
fictional.df <- data.frame(L1 = c(0,0,0,0,0,0,0,0),
                           L2 = c(0,1,0,0,0,1,1,0),
                           L3 = c(1,1,0,1,1,1,1,1),
                           L4 = c(0,0,1,1,0,0,0,0))
I converted this to a phyDat object and then created a pairwise distance matrix as follows:
library(phangorn)
fictional.phydat <- as.phyDat(fictional.df,
                              type = "USER", levels = c("1", "0"),
                              names = names(fictional.df))
fictional.hamming <- dist.hamming(fictional.phydat)
From this distance matrix, I then estimated a UPGMA tree:
fictional.upgma <- upgma(fictional.hamming)
I then created bootstrap datasets:
set.seed(187)
fictional.upgma.bs <- bootstrap.phyDat(fictional.phydat,
                                       FUN = function(xx) upgma(dist.hamming(xx)), bs = 100)
I then calculated the proportion of partitions in the bootstrap set:
upgma.bs.part <- prop.part(fictional.upgma.bs)
So far so good. Here is where I would appreciate some help. When I call the function prop.clades, I do not understand the result:
prop.clades(fictional.upgma,fictional.upgma.bs)
[1] 100 NA 71
Why does this function return NA when there is evidence for that clade in the set of bootstrap trees?
A second question:
prop.clades(fictional.upgma,part=upgma.bs.part)
[1] 100 49 112
If there are only 100 bootstrap samples, why is the value for the final clade 112?
Your tree fictional.upgma is rooted, and by default prop.clades returns how often each bipartition occurs. In a rooted tree, the two edges leading to the root both refer to the same bipartition or split:
prop.clades(unroot(fictional.upgma), fictional.upgma.bs)
[1] 100 71
For rooted trees you sometimes want to count the number of identical clades instead:
prop.clades(fictional.upgma, fictional.upgma.bs, rooted=TRUE)
[1] 100 49 71
This seems to be a bug, and you'd best report it to Emmanuel Paradis:
prop.clades(fictional.upgma,part=upgma.bs.part)
[1] 100 49 112
I'd like to run a program in R that asks the user to pick a number of dice, then runs a simulation on the dice and determines the probability of rolling each total from the minimum to the maximum.
For example, if the user picks 5 dice, then the minimum roll would be 5x1 = 5 and the maximum roll would be 5x6 = 30. I already have code for a set number of dice and a set total; I just need to know how to augment it. 'd' is the number of dice, 'k' is the total of the roll, and 'nreps' is the number of simulated runs (1,000,000 for example). I'd like to store each probability in a vector and then plot probability vs. total of roll (from min to max).
probtotk <- function(d, k, nreps) {
  count <- 0
  # do the experiment nreps times
  for (rep in 1:nreps) {
    total <- sum(sample(1:6, d, replace = TRUE))
    if (total == k) count <- count + 1
  }
  return(count / nreps)
}
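For concreteness, here is a minimal sketch of the requested augmentation built directly on probtotk (the wrapper name is mine; the answers below do the same thing far more efficiently):
all_tot_probs <- function(d, nreps) {
  k <- d:(6 * d)  # all attainable totals, min to max
  prob <- vapply(k, function(kk) probtotk(d, kk, nreps), numeric(1))
  plot(k, prob, type = "h", xlab = "Total of roll", ylab = "Probability")
  setNames(prob, k)
}
all_tot_probs(5, 1e4)  # note: this re-runs the whole simulation for every k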
We can use R's vectorization to do this very quickly. As my comments suggest, I will not use k.
For d dice and nreps simulations, we will have d * nreps total die rolls. We simulate these all at once with sample(6, size = d * nreps, replace = TRUE). We put the results in a matrix with nreps columns and d rows, so each column represents one roll of d dice. The column sums then give the totals for each roll. The table function counts the occurrences of each total, and the prop.table function turns the counts into proportions.
dice_tot_prob <- function(d, nreps) {
  rolls <- matrix(sample(6, size = d * nreps, replace = TRUE), ncol = nreps)
  totals <- colSums(rolls)
  return(prop.table(table(totals)))
}
dice_tot_prob(5, 1e5)
totals
5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
0.00015 0.00066 0.00200 0.00446 0.00904 0.01615 0.02655 0.03958 0.05456 0.07013 0.08379 0.09511 0.10065 0.10068 0.09214 0.08391 0.06936 0.05384 0.03891
24 25 26 27 28 29 30
0.02576 0.01664 0.00880 0.00474 0.00180 0.00044 0.00015
The prop.table result is nice because it has a default plotting method:
plot(dice_tot_prob(5, 1e5))
I think what you need is this (with d and nreps set to the values from your question):
library(magrittr)
d <- 5        # number of dice
nreps <- 1e6  # number of simulated rolls
sample(1:6, nreps * d, replace = TRUE) %>%
  matrix(nrow = d) %>%
  colSums() %>%
  table() %>%
  divide_by(nreps)
I have a parameter space given by (x,y) with x values from 1:5 and y values from 1:8. Let's say my current point p is located at (2,5) (it is colored in red). My goal is to try to pull all the points within one unit distance away from point p (the points in blue).
I was wondering if there was an efficient way to do this. Let's say my variables are stored in the following way:
xrange <- 1:5
yrange <- 1:8
grid <- expand.grid(xrange,yrange)
p <- data.frame(x=2,y=5)
I would like to store the other points below p in this fashion:
res <- data.frame(x=c(1,1,1,2,2,3,3,3),y=c(4,6,4,5,6,4,5,6))
res <- rbind(p,res)
> res
x y
1 2 5
2 1 4
3 1 6
4 1 4
5 2 5
6 2 6
7 3 4
8 3 5
9 3 6
The ultimate goal is to have a parameter space that is more than 2-dimensional. So I would eventually like to find all points within some Euclidean distance s, and similarly have a resulting data frame with each column being a parameter in the parameter space and each row being a point with coordinates (x, y, z, ..., etc.) taken from its columns.
EDIT: I have tried the following implementation for a circle of Euclidean radius s, and it seems to work. I am not sure how efficient the solution is, though.
eucdist <- function(z, p) {
  return(dist(rbind(z, p)))
}
# in this case s = 1, since that is the <= condition
res <- do.call(rbind, lapply(1:nrow(grid), function(m) {
  if (eucdist(as.numeric(grid[m, ]), as.numeric(p[1, ])) <= 1) return(grid[m, ])
}))
More information: for now, my parameter space is discretized, like the one in the picture above. Eventually some parameters will be continuous, mixed in with discrete parameters as well. Thank you so much!
The euclidean distance of each point on the grid from the target point p can be efficiently computed with:
dist <- sqrt(rowSums(mapply(function(x,y) (x-y)^2, grid, p)))
Basically, the inner mapply call results in a matrix of the same size as grid, holding each point's squared distance from the target point in each dimension; rowSums and sqrt then efficiently combine these into the Euclidean distance.
In this case you are including anything within sqrt(2) Euclidean distance of the target point:
grid[dist < 1.5,]
# Var1 Var2
# 16 1 4
# 17 2 4
# 18 3 4
# 21 1 5
# 22 2 5
# 23 3 5
# 26 1 6
# 27 2 6
# 28 3 6
The use of mapply (operating over dimensions) and rowSums makes this much more efficient than an approach that loops through individual points on the grid, computing the distance to the target point. To see this, consider a slightly larger example with 1000 randomly distributed points in three dimensions:
set.seed(144)
grid <- data.frame(x=rnorm(1000), y=rnorm(1000), z=rnorm(1000))
p <- data.frame(x=rnorm(1), y=rnorm(1), z=rnorm(1))
lim <- 1.5
byrow <- function(grid, p, lim) grid[apply(grid, 1, function(x) sqrt(sum((x-p)^2))) < lim,]
vectorized <- function(grid, p, lim) grid[sqrt(rowSums(mapply(function(x,y) (x-y)^2, grid, p))) < lim,]
identical(byrow(grid, p, lim), vectorized(grid, p, lim))
[1] TRUE
library(microbenchmark)
microbenchmark(byrow(grid, p, lim), vectorized(grid, p, lim), times = 10)
# Unit: microseconds
# expr min lq mean median uq max neval
# byrow(grid, p, lim) 446792.71 473428.137 500680.0431 495824.7765 521185.093 579999.745 10
# vectorized(grid, p, lim) 855.33 881.981 954.1773 907.3805 1081.658 1108.679 10
The vectorized approach is 500 times faster than the approach that loops through the rows.
This approach can be used in cases where you have many more points (1 million in this example):
set.seed(144)
grid <- data.frame(x=rnorm(1000000), y=rnorm(1000000), z=rnorm(1000000))
p <- data.frame(x=rnorm(1), y=rnorm(1), z=rnorm(1))
lim <- 1.5
system.time(vectorized(grid, p, lim))
# user system elapsed
# 3.466 0.136 3.632
Here's how to do it with package FNN. The result differs from yours because your expected res lists (1,4) and (2,5) twice. This solution also handles border data: you will only get 6 nearest neighbours if your x or y is 1 or otherwise on the edge of your grid.
library(FNN)
x <- 2
y <- 5
pt <- grid[grid$Var1 == x & grid$Var2 == y, ]  # target point
distance <- knnx.dist(grid, pt, k = 9)   # distances from pt
k <- length(distance[distance < 2])      # keep distances less than 2; useful for border data
nearest <- knnx.index(grid, pt, k = k)   # find indices of the k nearest neighbours
grid[nearest, ]
Var1 Var2
22 2 5
23 3 5
27 2 6
21 1 5
17 2 4
26 1 6
28 3 6
18 3 4
16 1 4
I see that you also asked about higher dimensions. It would still work, with the following changes:
x <- 2
y <- 5
z <- 3
pt <- grid[grid$Var1 == x & grid$Var2 == y & grid$Var3 == z, ]  # 3-dimensional point
distance <- knnx.dist(grid, pt, k = 27)  # increase to k = 27
k <- length(distance[distance < 2])
nearest <- knnx.index(grid, pt, k = k)
grid[nearest, ]
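Note that this assumes grid itself has a third column Var3; a minimal sketch of such a setup (the ranges here are my choice, for illustration only):
grid <- expand.grid(1:5, 1:8, 1:4)  # default column names: Var1, Var2, Var3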
I have a simple (indeed, standard in economics) nonlinear constrained discrete maximisation problem to solve in R and am having trouble. I found solutions for parts of the problem (nonlinear maximisation; discrete maximisation) but not for the combination of all of them.
Here is the problem. A consumer wants to buy three products (ananas, banana, cookie), knows the prices, and has a budget of 20€. He likes variety (i.e., he wants to have all three products if possible), and his satisfaction is decreasing in the amount consumed (he likes his first cookie way more than his 100th).
The function he wishes to maximise is
u(N, M, K) = sqrt(N) * sqrt(M) * sqrt(K)
where N, M, K are the quantities of ananas, bananas and cookies. Of course, since each product has a price and he has a limited budget, he maximises this function under the constraint that
2.17*N + 0.75*M + 1.34*K <= 20
What I want to do is find the optimal buying list (N ananas, M bananas, K cookies) that satisfies the constraint.
If the problem were linear, I would simply use linprog::solveLP(). But the objective function is nonlinear.
If the problem were of a continuous nature, there would be a simple analytic solution to it.
The question being discrete and nonlinear, I do not know how to proceed.
Here is some toy data to play with.
df <- data.frame(rbind(c("ananas",2.17),c("banana",0.75),c("cookie",1.34)))
names(df) <- c("product","price")
I'd like to have an optimization routine that gives me an optimal buying list of (N,M,K).
Any hints?
1) No packages. This can be done by brute force. Using df from the question as input, ensure that price is numeric (it's a factor in the question's df) and calculate the largest feasible count mx for each variable. Then create a grid g of variable counts and compute the total price and the associated objective for each, giving gg. Now sort gg in descending order of objective and keep those solutions satisfying the constraint. head will show the top few solutions.
price <- as.numeric(as.character(df$price))
mx <- ceiling(20/price)
g <- expand.grid(ana = 0:mx[1], ban = 0:mx[2], cook = 0:mx[3])
gg <- transform(g, total = as.matrix(g) %*% price, objective = sqrt(ana * ban * cook))
best <- subset(gg[order(-gg$objective), ], total <= 20)
giving:
> head(best) # 1st row is best soln, 2nd row is next best, etc.
ana ban cook total objective
1643 3 9 5 19.96 11.61895
1929 3 7 6 19.80 11.22497
1346 3 10 4 19.37 10.95445
1611 4 6 5 19.88 10.95445
1632 3 8 5 19.21 10.95445
1961 2 10 6 19.88 10.95445
2) dplyr This can also be nicely expressed using the dplyr package. Using g and price from above:
library(dplyr)
g %>%
mutate(total = c(as.matrix(g) %*% price), objective = sqrt(ana * ban * cook)) %>%
filter(total <= 20) %>%
arrange(desc(objective)) %>%
top_n(6)
giving:
Selecting by objective
ana ban cook total objective
1 3 9 5 19.96 11.61895
2 3 7 6 19.80 11.22497
3 3 10 4 19.37 10.95445
4 4 6 5 19.88 10.95445
5 3 8 5 19.21 10.95445
6 2 10 6 19.88 10.95445
If you do not mind using a "by hand" solution:
uf <- function(x) prod(x)^.5  # utility
bf <- function(x, pr) {       # budget
  if (!is.null(dim(x))) apply(x, 1, bf, pr) else x %*% pr
}
budget <- 20
df <- data.frame(product = c("ananas", "banana", "cookie"),
                 price = c(2.17, 0.75, 1.34), stringsAsFactors = FALSE)
an <- 0:(budget/df$price[1])  # include 0 for all possibilities
bn <- 0:(budget/df$price[2])
co <- 0:(budget/df$price[3])
X <- expand.grid(an, bn, co)
colnames(X) <- df$product
EX <- apply(X, 1, bf, pr = df$price)
psX <- X[which(EX <= budget), ]                            # 1st restriction: budget
psX <- psX[apply(psX, 1, function(z) sum(z == 0)) == 0, ]  # 2nd restriction: no zeros
Ux <- apply(psX, 1, uf)
cbind(psX, Ux)
(sol <- psX[which.max(Ux), ])
uf(sol)            # utility
bf(sol, df$price)  # budget
> (sol=psX[which.max(Ux),])
ananas banana cookie
1444 3 9 5
> uf(sol) # utility
[1] 11.61895
> bf(sol,df$price) #budget
1444
19.96
I think this problem is very similar in nature to this question (Solve indeterminate equation system in R). The answer by Richie Cotton was the basis for this possible solution:
df <- data.frame(product=c("ananas","banana","cookie"),
price=c(2.17,0.75,1.34),stringsAsFactors = F)
FUN <- function(w, price = df$price) {
  total <- sum(price * w)
  errs <- c((total - 20)^2, -(sqrt(w[1]) * sqrt(w[2]) * sqrt(w[3])))
  sum(errs)
}
init_w <- rep(10,3)
res <- optim(init_w, FUN, lower=rep(0,3), method="L-BFGS-B")
res
res$par # 3.140093 9.085182 5.085095
sum(res$par*df$price) # 20.44192
Notice that the total cost (i.e. price) for this solution is 20.44, which exceeds the budget of 20. To fix this, we can weight the error terms to put more emphasis on the 1st term, which relates to the total cost:
### weighting of error terms
FUN2 <- function(w, price = df$price) {
  total <- sum(price * w)
  errs <- c(100 * (total - 20)^2, -(sqrt(w[1]) * sqrt(w[2]) * sqrt(w[3])))  # 1st term weighted by 100
  sum(errs)
}
init_w <- rep(10,3)
res <- optim(init_w, FUN2, lower=rep(0,3), method="L-BFGS-B")
res
res$par # 3.072868 8.890832 4.976212
sum(res$par*df$price) # 20.00437
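Since the underlying problem is integer-valued, one might additionally round this continuous solution and check it against the constraint (my sanity check, not part of the original answer):
w_int <- round(res$par)  # c(3, 9, 5)
sum(w_int * df$price)    # 19.96, within the budget of 20
sqrt(prod(w_int))        # 11.61895, matching the brute-force optimum above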
As LyzandeR remarked, there is no nonlinear integer programming solver available in R itself. Instead, you can use the R package rneos, which sends data to one of the NEOS solvers and returns the results into your R process.
Select one of the solvers for "Mixed Integer Nonlinearly Constrained Optimization" on the NEOS Solvers page, e.g., Bonmin or Couenne. For your example above, send the following files in the AMPL modeling language to one of these solvers:
[Note that maximising the product x1 * x2 * x3 is the same as maximising the product sqrt(x1) * sqrt(x2) * sqrt(x3).]
Model file:
param p{i in 1..3};
var x{i in 1..3} integer >= 1;
maximize profit: x[1] * x[2] * x[3];
subject to restr: sum{i in 1..3} p[i] * x[i] <= 20;
Data file:
param p:= 1 2.17 2 0.75 3 1.34 ;
Command file:
solve;
display x;
and you will receive the following solution:
x [*] :=
1 3
2 9
3 5
;
This approach will work for more extended examples where solutions "by hand" are not feasible and rounded optim solutions are not correct.
To look at a more demanding example, let me propose the following problem:
Find an integer vector x = (x_i), i=1,...,10, that maximizes x1 * ... * x10, such that p1*x1 + ... + p10*x10 <= 10, where p = (p_i), i=1,...,10, is the following price vector
p <- c(0.85, 0.22, 0.65, 0.73, 0.91, 0.11, 0.31, 0.47, 0.93, 0.71)
Using constrOptim for this nonlinear optimization problem with a linear inequality constraint, I get solutions like 900 for different starting points, but never the optimal solution, which is 960!
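For a problem of this size, the integer optimum can still be verified by exhaustive enumeration. Here is a sketch (my own code, not part of the original answer; it assumes each x_i >= 1 as in the AMPL model, and prunes by reserving budget for the remaining items):
p <- c(0.85, 0.22, 0.65, 0.73, 0.91, 0.11, 0.31, 0.47, 0.93, 0.71)
best <- 0
dfs <- function(i, budget, obj) {
  if (i > length(p)) {            # all ten counts chosen: record the objective
    if (obj > best) best <<- obj
    return(invisible(NULL))
  }
  reserve <- if (i < length(p)) sum(p[(i + 1):length(p)]) else 0
  kmax <- floor((budget - reserve) / p[i])  # leave room for one unit of each later item
  for (k in seq_len(kmax)) dfs(i + 1, budget - k * p[i], obj * k)
}
dfs(1, 10, 1)
best  # should recover the optimum (960, per the NEOS solvers)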