Does this problem have an "exact" solution?

I am working with R.
Suppose you have the following data:
#generate data
set.seed(123)
a1 = rnorm(1000,100,10)
b1 = rnorm(1000,100,10)
c1 = rnorm(1000,5,1)
train_data = data.frame(a1,b1,c1)
#view data
head(train_data)
         a1        b1       c1
1  94.39524  90.04201 4.488396
2  97.69823  89.60045 5.236938
3 115.58708  99.82020 4.458411
4 100.70508  98.67825 6.219228
5 101.29288  74.50657 5.174136
6 117.15065 110.40573 4.384732
We can visualize the data as follows:
#visualize data
par(mfrow=c(2,2))
plot(train_data$a1, train_data$b1, col = train_data$c1, main = "plot of a1 vs b1, points colored by c1")
hist(train_data$a1)
hist(train_data$b1)
hist(train_data$c1)
Here is the Problem :
From the data, take only the variables "a1" and "b1". Using only 2 "logical conditions", split the data into 3 regions (e.g. Region 1 WHERE 0 < a1 < 20 AND 0 < b1 < 25).
In each region, you want the "average value of c1" within that region to be as small as possible - but each region must have at least some minimum number of data points, e.g. 100 data points (to prevent trivial solutions)
Goal: Is it possible to determine the "boundaries" of these 3 regions that minimize:
the mean value of "c1" for region 1
the mean value of "c1" for region 2
the mean value of "c1" for region 3
the average "mean value of c1 for all 3 regions" (i.e. c_avg = (region1_c1_avg + region2_c1_avg + region3_c1_avg) / 3)
In the end, for a given combination, you would find the following, e.g. (made up numbers):
Region 1 : WHERE 0 < a1 < 20 AND 0 < b1 < 25 ; region1_c1_avg = 4
Region 2 : WHERE 20 < a1 < 50 AND 25 < b1 < 60 ; region2_c1_avg = 2.9
Region 3 : WHERE a1 > 50 AND b1 > 60 ; region3_c1_avg = 1.9
c_avg = (4 + 2.9 + 1.9) / 3 = 2.93
And hope that (region1_c1_avg, region2_c1_avg, region3_c1_avg and c_avg) are minimized
My Question:
Does this kind of problem have an "exact solution"? The only thing I can think of is performing a "random search" that considers many different definitions of (Region 1, Region 2 and Region 3) and compares the corresponding values of (region1_c1_avg, region2_c1_avg, region3_c1_avg and c_avg), until a minimum value is found. Is this an application of linear programming or multi-objective optimization (e.g. genetic algorithm)? Has anyone worked on something like this before?
I have done a lot of research and haven't found a similar problem. I decided to formulate this as a "multi-objective constrained optimization problem", and figured out how to implement algorithms like "random search" and "genetic algorithm".
Thanks
Note 1: In the context of multi-objective optimization, for a given set of definitions of (Region 1, Region 2, Region 3), the concept of "Pareto optimality" (https://en.wikipedia.org/wiki/Multi-objective_optimization#Visualization_of_the_Pareto_front) is often used to judge whether the resulting values of (region1_c1_avg, region2_c1_avg, region3_c1_avg, c_avg) are collectively satisfactory, and to compare different candidate sets of regions.
Note 2: Ultimately, these 3 regions can be defined by any set of 4 numbers. If each of these 4 numbers can be between 0 and 100 in 0.1 increments (e.g. 12, 12.1, 12.2, 12.3, etc.), there are roughly 1000^4 = 1e12 possible solutions (about a trillion) to compare. There are simply far too many to verify individually, so I am thinking a mathematically based search/optimization method could be used to search strategically for an optimal solution.
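For reference, here is a minimal random-search sketch of the kind of thing I mean, for a simplified variant that uses two cut points (one on a1, one on b1) instead of four numbers and scalarizes the objective to c_avg, under the 100-point minimum-size constraint (all names are illustrative):
#random-search sketch (simplified two-cut-point variant)
set.seed(123)
best <- list(c_avg = Inf)
for (i in 1:10000) {
    #draw candidate cut points from the observed ranges
    a_cut <- runif(1, min(train_data$a1), max(train_data$a1))
    b_cut <- runif(1, min(train_data$b1), max(train_data$b1))
    r1 <- train_data$a1 < a_cut & train_data$b1 < b_cut    #Region 1
    r2 <- train_data$a1 < a_cut & train_data$b1 >= b_cut   #Region 2
    r3 <- train_data$a1 >= a_cut                           #Region 3
    #enforce the minimum-size constraint to prevent trivial solutions
    if (min(sum(r1), sum(r2), sum(r3)) < 100) next
    means <- c(mean(train_data$c1[r1]), mean(train_data$c1[r2]), mean(train_data$c1[r3]))
    c_avg <- mean(means)
    if (c_avg < best$c_avg) best <- list(a_cut = a_cut, b_cut = b_cut, region_means = means, c_avg = c_avg)
}
best
A genetic algorithm (e.g. via packages such as GA or mco) would search the same space more strategically than this uniform sampling.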

Related

How to conduct LSD test with interactions in R?

I have field data:
sowing_date<- rep(c("Early" ,"Normal"), each=12)
herbicide<- rep (rep(c("No" ,"Yes"), each=6),2)
nitrogen<- rep (rep(c("No" ,"Yes"), each=3),4)
Block<- rep(c("Block 1" ,"Block 2", "Block 3"), times=8)
Yield<- c(30,27,25,40,41,42,37,38,40,48,47,46,25,27,26,
41,41,42,38,39,42,57,59,60)
DataA<- data.frame(sowing_date,herbicide,nitrogen,Block,Yield)
and I conducted 3-way ANOVA
anova3way <- aov (Yield ~ sowing_date + herbicide + nitrogen +
sowing_date:herbicide + sowing_date:nitrogen +
herbicide:nitrogen + sowing_date:herbicide:nitrogen +
factor(Block), data=DataA)
summary(anova3way)
There is a 3-way interaction among 3 factors. So, I'd like to see which combination shows the greatest yield.
I know how to compare mean differences for a single factor, as below, but how can I do that in the case of interactions?
library(agricolae)
LSD_Test<- LSD.test(anova3way,"sowing_date")
LSD_Test
For example, I'd like to check the mean differences under the 3-way interaction, and also the interaction between each pair of factors, and get this LSD result in R.
Could you tell me how can I do that?
Many thanks,
One way which does take some manual work is to encode the experimental parameters as -1 and 1 in order to properly separate the 2 and 3 parameter interactions.
Once you have the values encoded, you can pull the residual degrees of freedom and the mean square error from the ANOVA model and pass them to the LSD.test function.
See Example below:
sowing_date<- rep(c("Early" ,"Normal"), each=12)
herbicide<- rep (rep(c("No" ,"Yes"), each=6),2)
nitrogen<- rep (rep(c("No" ,"Yes"), each=3),4)
Block<- rep(c("Block 1" ,"Block 2", "Block 3"), times=8)
Yield<- c(30,27,25,40,41,42,37,38,40,48,47,46,25,27,26,
41,41,42,38,39,42,57,59,60)
DataA<- data.frame(sowing_date,herbicide,nitrogen,Block,Yield)
anova3way <- aov (Yield ~ sowing_date * herbicide * nitrogen +
factor(Block), data=DataA)
summary(anova3way)
#Encode the experiment's parameters as -1 and 1
DataA$codeSD <- ifelse(DataA$sowing_date == "Early", -1, 1)
DataA$codeherb <- ifelse(DataA$herbicide == "No", -1, 1)
DataA$codeN2 <- ifelse(DataA$nitrogen == "No", -1, 1)
library(agricolae)
LSD_Test<- LSD.test(anova3way, c("sowing_date"))
LSD_Test
#Manually defining the treatment and specifying the
# degrees of freedom and mean square error (from the residuals of the ANOVA)
print(LSD.test(y=DataA$Yield, trt=DataA$sowing_date, DFerror=14, MSerror=34.3))
#Example for a two parameter value
print(LSD.test(y=DataA$Yield, trt=(DataA$codeSD*DataA$codeherb), DFerror=14, MSerror=34.3))
print(LSD.test(y=DataA$Yield, trt=(DataA$codeSD*DataA$codeherb*DataA$codeN2), DFerror=14, MSerror=34.3))
#calculate the means and sd (as a check; requires dplyr)
#library(dplyr)
#DataA %>% group_by(sowing_date) %>% summarize(mean=mean(Yield), sd=sd(Yield))
#DataA %>% group_by(codeSD*codeherb*codeN2) %>% summarize(mean=mean(Yield), sd=sd(Yield))
You will need to manually track which run/condition goes with the -1 and 1 in the final report.
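As a sketch, you can also pull DFerror and MSerror from the fitted model instead of hard-coding 14 and 34.3 (the row and column names below are those of a standard anova() table for an aov fit):
#extract residual df and mean square from the model
aov_tab <- anova(anova3way)
df_err <- aov_tab["Residuals", "Df"]       #14
ms_err <- aov_tab["Residuals", "Mean Sq"]  #~34.3
print(LSD.test(y=DataA$Yield, trt=DataA$sowing_date, DFerror=df_err, MSerror=ms_err))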
Edit:
The answer above shows the overall effect of the interactions, for example how the interaction of herbicide and nitrogen affects yield.
Based on your comment that you want to determine which combination provides the greatest yield, you can use the LSD.test() function again, this time passing a vector of factor names.
LSD_Test<- LSD.test(anova3way, c("sowing_date", "herbicide", "nitrogen"))
LSD_Test
From the $groups part of the output you can see that Normal:Yes:Yes gives the greatest yield. The "groups" column identifies clusters of statistically similar results; for example, the last two rows provide similar yields.
...
$groups
Yield groups
Normal:Yes:Yes 58.66667 a
Early:Yes:Yes 47.00000 b
Normal:No:Yes 41.33333 c
Early:No:Yes 41.00000 cd
Normal:Yes:No 39.66667 cd
Early:Yes:No 38.33333 d
Early:No:No 27.33333 e
Normal:No:No 26.00000 e
...

Hierarchical clustering and k means

I want to run a hierarchical cluster analysis. I am aware of the hclust() function but not how to use this in practice; I'm stuck with supplying the data to the function and processing the output.
The main issue is that I would like to cluster based on a given measurement.
I would also like to compare the hierarchical clustering with that produced by kmeans(). Again I am not sure how to call this function or use/manipulate the output from it.
My data are similar to:
df<-structure(list(id=c(111,111,111,112,112,112), se=c(1,2,3,1,2,3),t1 = c(1, 2, 1, 1,1,3),
t2 = c(1, 2, 2, 1,1,4), t3 = c(1, 0, 0, 0,2,1), t4 = c(2, 5, 7, 7,1,2),
t5 = c(1, 0, 1, 1,1,1),t6 = c(1, 1, 1, 1,1,1), t7 = c(1, 1, 1 ,1,1,1), t8=c(0,0,0,0,0,0)), row.names = c(NA,
6L), class = "data.frame")
I would like to run the hierarchical cluster analysis to identify the optimum number of clusters.
How can I run clustering based on a predefined measurement - in this case for example to cluster measurement number 2?
For hierarchical clustering there is one essential element you have to define: the method for computing the distance between clusters of data points. Clustering is an exploratory technique, so you have to choose the number of clusters based on how the data points are distributed. The code below compares three linkage methods on your data df using the hclust() function.
The first is average linkage, which uses the mean of all pairwise distances between the points of two clusters. We will omit the first variable, as it is an id:
#Method 1
hc.average <- hclust(dist(df[,-1]),method='average')
The second is complete linkage, which uses the largest pairwise distance between the points of two clusters:
#Method 2
hc.complete<- hclust(dist(df[,-1]),method='complete')
The third is single linkage, which uses the smallest pairwise distance between the points of two clusters:
#Method 3
hc.single <- hclust(dist(df[,-1]),method='single')
With all three models fitted we can analyze the groups.
We can choose the number of clusters from the height of the hierarchical tree: at the largest height everything merges into a single cluster covering the whole dataset, so it is standard to cut at an intermediate height.
With the average method, a height of 3 produces four groups and a height of around 4.5 produces two groups:
plot(hc.average, xlab='')
Output: [dendrogram of the average-linkage tree]
With the complete method the results are similar, but the height scale changes:
plot(hc.complete, xlab='')
Output: [dendrogram of the complete-linkage tree]
Finally, the single method produces a different grouping scheme: there are three groups, and even with an intermediate choice of height you will always end up with that number of clusters:
plot(hc.single, xlab='')
Output: [dendrogram of the single-linkage tree]
You can pick whichever method you wish and then assign clusters with the cutree() function, where you pass the model object and the number of clusters. One way to judge clustering performance is to check how homogeneous the groups are, which depends on the researcher's criteria. The following adds the cluster to your data; I will use the last model and three groups:
#Add cluster
df$Cluster <- cutree(hc.single,k = 3)
Output:
id se t1 t2 t3 t4 t5 t6 t7 t8 Cluster
1 111 1 1 1 1 2 1 1 1 0 1
2 111 2 2 2 0 5 0 1 1 0 2
3 111 3 1 2 0 7 1 1 1 0 2
4 112 1 1 1 0 7 1 1 1 0 2
5 112 2 1 1 2 1 1 1 1 0 1
6 112 3 3 4 1 2 1 1 1 0 3
The cutree() function also has an argument h, where you can set the cut height discussed above instead of the number of clusters k.
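For example, cutting the average-linkage tree at the height of 3 discussed above (which should give four groups):
#cut by height instead of by number of clusters
df$Cluster_h <- cutree(hc.average, h = 3)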
Regarding your question about clustering based on a particular measurement: you could scale your data while excluding the desired variable, so that this variable keeps a different scale and thereby has more influence on the results of your clustering.
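Since you also asked about kmeans(), here is a minimal comparison on the same columns used for hclust() (a sketch; k = 3 to match the cut above, and nstart = 25 to stabilize the random initialization):
#compare hierarchical clusters with k-means
set.seed(1)
feat <- df[, c("se", paste0("t", 1:8))]
km <- kmeans(feat, centers = 3, nstart = 25)
#cross-tabulate the two assignments
table(hclust = df$Cluster, kmeans = km$cluster)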

Stata twoway graph of means with confidence intervals

Using
clear
input score group str1 test
2 0 A
3 0 B
6 0 B
8 0 A
2 0 A
2 0 A
10 1 B
7 1 B
8 1 A
5 1 A
10 1 A
11 1 B
end
I want to scatter plot mean score by group for each test (same graph) with confidence intervals (the real data has thousands of observations). The resulting graph would have two sets of two dots: one set for test=="A" (group==0 vs group==1) and one set for test=="B" (group==0 vs group==1).
My current approach works but it is laborious: I compute all of the needed statistics using egen (the mean, number of observations, standard deviations, and so on, for each group by test), then collapse the data and plot.
There has to be another way, no?
I assumed that Stata would be able to take the score, group, and test variables as input and then compute and present this pretty standard graph.
After spending a lot of time on Google, I had to ask.
Although there are user-written programs, I lean towards statsby as a basic approach here. Discussion is accessible in this paper.
This example takes your data example (almost executable code). Some choices depend on the large confidence intervals implied. Note that if your version of Stata is not up-to-date, the syntax of ci will be different. (Just omit means.)
clear
input score group str1 test
2 0 A
3 0 B
6 0 B
8 0 A
2 0 A
2 0 A
10 1 B
7 1 B
8 1 A
5 1 A
10 1 A
11 1 B
end
save cj12 , replace
* test A
statsby mean=r(mean) ub=r(ub) lb=r(lb) N=r(N), by(group) clear : ///
ci means score if test == "A"
gen test = "A"
save cj12results, replace
* test B
use cj12
statsby mean=r(mean) ub=r(ub) lb=r(lb) N=r(N), by(group) clear : ///
ci means score if test == "B"
gen test = "B"
append using cj12results
* graph; show sample sizes too, but where to show them is empirical
set scheme s1color
gen where = -20
scatter mean group, ms(O) mcolor(blue) || ///
rcap ub lb group, lcolor(blue) ///
by(test, note("95% confidence intervals") legend(off)) ///
subtitle(, fcolor(ltblue*0.2)) ///
ytitle(score) xla(0 1) xsc(r(-0.25 1.25)) yla(-10(10)10, ang(h)) || ///
scatter where group, ms(none) mla(N) mlabpos(12) mlabsize(*1.5)
We can't compare your complete code or your graph, because you show neither.

R multiclass/multinomial classification ROC using multiclass.roc (Package ‘pROC’)

I am having difficulty understanding what the multiclass.roc parameters should look like.
Here a snapshot of my data:
> head(testing.logist$cut.rank)
[1] 3 3 3 3 1 3
Levels: 1 2 3
> head(mnm.predict.test.probs)
1 2 3
9 1.013755e-04 3.713862e-02 0.96276001
10 1.904435e-11 3.153587e-02 0.96846413
12 6.445101e-23 1.119782e-11 1.00000000
13 1.238355e-04 2.882145e-02 0.97105472
22 9.027254e-01 7.259787e-07 0.09727389
26 1.365667e-01 4.034372e-01 0.45999610
>
I tried calling multiclass.roc with:
multiclass.roc(
response=testing.logist$cut.rank,
predictor=mnm.predict.test.probs,
formula=response~predictor
)
but naturally I get an error:
Error in roc.default(response, predictor, levels = X, percent = percent, :
Predictor must be numeric or ordered.
When it's a binary classification problem, I know that 'predictor' should contain probabilities (one per observation). However, in my case I have 3 classes, so my predictor is a matrix in which each row has 3 columns, one probability per class.
Does anyone know what my 'predictor' should look like, rather than what it currently looks like?
The pROC package is not really designed to handle this case, where you get multiple predictions (one probability per class). Typically you would assess one class at a time, e.g. P(class = 1):
multiclass.roc(
response=testing.logist$cut.rank,
predictor=mnm.predict.test.probs[,1])
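And then do it again with P(class = 2) and P(class = 3), for example in a loop (a sketch):
#one call per class-probability column
for (i in 1:3) {
  print(multiclass.roc(response = testing.logist$cut.rank,
                       predictor = mnm.predict.test.probs[, i]))
}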
Or better, determine the most likely class:
predicted.class <- apply(mnm.predict.test.probs, 1, which.max)
multiclass.roc(
response=testing.logist$cut.rank,
predictor=predicted.class)
Consider multiclass.roc as a toy that can sometimes be helpful but most likely won't really fit your needs.

Gompertz Aging analysis in R

I have survival data from an experiment in flies which examines rates of aging in various genotypes. The data is available to me in several layouts so the choice of which is up to you, whichever suits the answer best.
One dataframe (wide.df) looks like this, where each genotype (Exp, of which there are ~640) has a row, and the days run in sequence horizontally from day 4 to day 98, with counts of new deaths every two days.
Exp Day4 Day6 Day8 Day10 Day12 Day14 ...
A 0 0 0 2 3 1 ...
I make the example using this:
wide.df2<-data.frame("A",0,0,0,2,3,1,3,4,5,3,4,7,8,2,10,1,2)
colnames(wide.df2)<-c("Exp","Day4","Day6","Day8","Day10","Day12","Day14","Day16","Day18","Day20","Day22","Day24","Day26","Day28","Day30","Day32","Day34","Day36")
Another version is like this, where each day has a row for each 'Exp' and the number of deaths on that day are recorded.
Exp Deaths Day
A 0 4
A 0 6
A 0 8
A 2 10
A 3 12
.. .. ..
To make this example:
df2<-data.frame(c("A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A"),c(0,0,0,2,3,1,3,4,5,3,4,7,8,2,10,1,2),c(4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36))
colnames(df2)<-c("Exp","Deaths","Day")
What I would like to do is perform a Gompertz Analysis (See second paragraph of "the life table" here). The equation is:
μ_x = α * e^(β*x)
Where μ_x is the probability of death at a given age x, α is the initial mortality rate, and β is the rate of aging.
I would like to be able to get a dataframe which has α and β estimates for each of my ~640 genotypes for further analysis later.
I need help going from the above dataframes to an output of these values for each of my genotypes in R.
I have looked through the package flexsurv which may house the answer but I have failed in attempts to find and implement it.
This should get you started...
Firstly, for the flexsurvreg function to work, you need to specify your input data as a Surv object (from package:survival). This means one row per observation.
The first thing is to re-create the 'raw' data from the summary tables you provide.
(I know rbind is not efficient, but you can always switch to data.table for large sets).
### get rows with >1 death
df3 <- df2[df2$Deaths>1, 2:3]
### expand to give one row per death per time
df3 <- sapply(df3, FUN=function(x) rep(df3[, 2], df3[, 1]))
### each death is 1 (occurs once)
df3[, 1] <- 1
### add this to the rows with <=1 death
df3 <- rbind(df3, df2[!df2$Deaths>1, 2:3])
### convert to Surv object
library(survival)
s1 <- with(df3, Surv(Day, Deaths))
### get parameters for Gompertz distribution
library(flexsurv)
f1 <- flexsurvreg(s1 ~ 1, dist="gompertz")
giving
> f1$res
est L95% U95%
shape 0.165351912 0.1281016481 0.202602176
rate 0.001767956 0.0006902161 0.004528537
Note that this is an intercept-only model as all your genotypes are A.
You can loop this over multiple survival objects once you have re-created the per-observation data as above.
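As a sketch (dfAll here is a hypothetical long data frame with columns Exp, Deaths and Day, already expanded to one row per observation as above; survival and flexsurv are loaded earlier):
#fit one Gompertz model per genotype and collect the estimates
fits <- lapply(split(dfAll, dfAll$Exp), function(d)
  flexsurvreg(Surv(Day, Deaths) ~ 1, data = d, dist = "gompertz"))
params <- do.call(rbind, lapply(names(fits), function(g)
  data.frame(Exp = g, t(fits[[g]]$res[, "est"]))))
params  #one row per genotype: Exp, shape, rate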
From the flexsurv docs:
the Gompertz distribution with shape parameter a and rate parameter b has hazard function
h(x | a, b) = b * e^(a*x)
So it appears your alpha is b, the rate, and beta is a, the shape.
