Global multi-objective optimization function specification in R

I would like to use nsga2 from the mco package to solve an optimization problem with 3 objectives. In short, I am looking for optimal land uses to solve an environmental problem.
Here is my experiment:
- 100 land uses are possible in total (all.options in the code below), each land use being characterized by three performances (main.goal1, main.goal2 and main.goal3).
- I have 50 fields, whose characteristics (soil in fields.Kq) subset the 100 land uses (i.e., not all land uses are possible for each field) => options.soil1 and options.soil2
My objective is to assign a land use to each of my 50 fields, in order to minimize main.goal1, main.goal2 and main.goal3 altogether. From what I have read, genetic algorithms are very powerful for this type of problem.
So here are my virtual data.
set.seed(0)
all.options <- data.frame(num.option = 1:100,
                          main.goal1 = abs(rnorm(100)),
                          main.goal2 = abs(rnorm(100)),
                          main.goal3 = abs(rnorm(100))) # all possible combinations of the 3 goals
options.soil1 <- subset(all.options, main.goal1 > 0.5)  # possible combinations for soil 1
options.soil2 <- subset(all.options, main.goal3 < 0.5)  # possible combinations for soil 2
fields.Kq <- data.frame(num.field = 1:50, soil = round(runif(50, 0, 1), 0))
I guess that my objective function should look like
my.function <- function(x) {
  c(sum(A[, 1]),   # main.goal1 for the selected option of each field in fields.Kq
    sum(A[, 2]),   # main.goal2 for the selected options
    sum(A[, 3]))   # main.goal3 for the selected options
} # where A should be a matrix of 50 rows, one row per field, holding the "chosen" land use option
nsga2(my.function)
Unfortunately I could not go further, as I am new to optimization in R. How do I build the matrix A, with the chosen land use for each field?
And using nsga2, how do I return these land uses, together with the optimized (minimized) values of main.goal1, main.goal2 and main.goal3?
Thanks in advance for all the help you can provide; I am really looking forward to advice/links/books to advance on my optimization problem.
Best regards,
LH

Here is how I solved the problem:
library("mco")
set.seed(0)
all.options <- data.frame(num.option = 1:100,
                          main.goal1 = abs(rnorm(100)),
                          main.goal2 = abs(rnorm(100)),
                          main.goal3 = abs(rnorm(100)),
                          soil = c(rep("soilType1", 50), rep("soilType2", 50))) # all possible combinations of the 3 goals
fields.Kq <- data.frame(num.field = 1:50, soil = rep(c("soilType1", "soilType2"), 25))
main.goal1 <- function(x) {           # x - a vector of option indices
  sum(all.options[x, "main.goal1"])   # compute main.goal1
}
main.goal2 <- function(x) {
  sum(all.options[x, "main.goal2"])   # compute main.goal2
}
main.goal3 <- function(x) {
  sum(all.options[x, "main.goal3"])   # compute main.goal3
}
eval <- function(x) c(main.goal1(x), main.goal2(x), main.goal3(x)) # objective function
D  <- nrow(fields.Kq)      # number of fields
D2 <- nrow(fields.Kq) / 2  # number of fields per soil type (simplified)
D.soil1 <- max(which(all.options$soil == "soilType1")) # upper bound on option number for soil 1
D.soil2 <- min(which(all.options$soil == "soilType2")) # lower bound on option number for soil 2
G <- nsga2(fn = eval, idim = D, odim = 3,
           lower.bounds = c(rep(1, D2), rep(D.soil2, D2)),   # lower bound: min option number per field
           upper.bounds = c(rep(D.soil1, D2), rep(100, D2)), # upper bound: max option number per field
           popsize = 20, generations = 1:1000,
           cprob = 0.7, cdist = 5, mprob = 0.2, mdist = 10)
I defined it thanks to examples found in the very helpful and informative book "Modern Optimization with R" by Paulo Cortez.
LH

Related

RecordLinkage package in R - add weight to individual linking variables

I'm following the excellent tutorial on RPubs which uses the magnificent RecordLinkage package. I'm applying this to my own data but I'll just use the tutorial to explain my problem.
In the two datasets for comparison there are a number of common fields used in the linkage:
patents <- patents[,c("seq", "firstname", "lastname", "city", "state", "organization")]
nsf <- nsf[, c("InvestigatorId", "FirstName", "LastName", "CityName", "StateCode", "Name")]
names(nsf) <- names(patents)
These fields are then compared using the compare.linkage() function:
a <- compare.linkage(nsf, patents, blockfld = c("state"), strcmp = T, exclude=c(1))
This creates a large RecLinkData object called 'a' that contains a bunch of comparison pairs.
The next step is calculating the M and U weights (agreement weights) using the expectation maximisation (EM) algorithm:
b <- emWeights(a, cutoff = 0.8)
I think this is basically creating an overall agreement weight which is a product of all the individual linking variables.
My question is how can I add importance for one of the individual linking variables?
So for example, I might know that the "lastname" field is reliable and accurate in both datasets, so if the lastname agreed exactly then to give this more weight in the overall agreement score.
Even some pointers on where to look would be helpful, I'm a bit lost on this and don't even know what to attempt in terms of code.
You can't pass additional information to emWeights(), except perhaps cutoff =, which accepts a single value or a vector with the same length as the number of attributes. So you can choose a high cutoff value for attributes you know to be accurate, so that the number of chance matches is minimized.
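For instance, to lean harder on lastname on the tutorial data, something like this sketch could work (the attribute order firstname, lastname, city, state, organization after excluding seq is an assumption):
b <- emWeights(a, cutoff = c(0.8, 0.95, 0.8, 0.8, 0.8)) # stricter cutoff for the lastname comparison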
Apart from that, the EM algorithm in RecordLinkage allows no further customization.
There is, however, the epiWeights() pendant, which calculates weights between 0 and 1 using an estimated error rate (default e = 0.01) and the average frequency of values in each field (1/length(unique(all_values_in_a_field))). You can supply both to the function manually and in this way tweak the results.
Consider this example:
t1 <- data.frame(Vorname = c("Karl", "Fritz"), Name = c("Meister", "Schulz"), stringsAsFactors = FALSE)
t2 <- data.frame(Vorname = c("Karl", "Fritz"), Name = c("Meister", "Schulze"), stringsAsFactors = FALSE)
linkage <- compare.linkage(t1, t2, strcmp = TRUE)
> epiWeights(linkage)$Wdata # e = 0.01
[1] 1.0000000 0.0000000 0.0000000 0.3855691
> epiWeights(linkage, e = c(0.01, 0.3))$Wdata
[1] 1.0000000 0.0000000 0.0000000 0.3120078
If you assume a higher error rate for the field Name (the surname), it gets a lower weight.
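The frequency part can be tweaked in the same spirit. A hedged sketch on the toy linkage above, assuming the argument is called f as in the package documentation (one average frequency per attribute; lower values mean rarer, more discriminating values):
epiWeights(linkage, e = 0.01, f = c(0.5, 0.25))$Wdata # give more weight to agreement on Name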

Louvain community detection in R using igraph - format of edges and vertices

I have a correlation matrix of scores that I would like to run community detection on using the Louvain method in igraph, in R. I converted the correlation matrix to a distance matrix using cor2dist, as below:
distancematrix <- cor2dist(correlationmatrix)
This gives a 400 x 400 matrix of distances ranging from 0 to 2. I then made the list of edges (the distances) and vertices (each of the 400 individuals) using the method below from http://kateto.net/networks-r-igraph (section 3.1).
library(igraph)
test <- as.matrix(distancematrix)
mode(test) <- "numeric"
test2 <- graph.adjacency(test, mode = "undirected", weighted = TRUE, diag = TRUE)
E(test2)$weight
get.edgelist(test2)
From this I then wrote csv files of the 'from' and 'to' edge list, and corresponding weights:
edgeweights <-E(test2)$weight
write.csv(edgeweights, file = "edgeweights.csv")
fromtolist <- get.edgelist(test2)
write.csv(fromtolist, file = "fromtolist.csv")
From these two files I produced a .csv file called "nodes.csv" which simply had all the vertex IDs for the 400 individuals:
id
1
2
3
4
...
400
And a .csv file called "edges.csv", which detailed 'from' and 'to' between each node, and provided the weight (i.e. the distance measure) for each of these edges:
from to weight
1 2 0.99
1 3 1.20
1 4 1.48
...
399 400 0.70
I then tried to use this node and edge list to create an igraph object, and run louvain clustering in the following way:
nodes <- read.csv("nodes.csv", header = TRUE, as.is = TRUE)
edges <- read.csv("edges.csv", header = TRUE, as.is = TRUE)
clustergraph <- graph_from_data_frame(edges, directed = FALSE, vertices = nodes)
clusterlouvain <- cluster_louvain(clustergraph)
Unfortunately this did not do the louvain community detection correctly. I expected this to return around 2-4 different communities, which could be plotted similarly to here, but sizes(clusterlouvain) returned:
Community sizes
1
400
indicating that all individuals were sorted into the same community. The clustering also ran immediately (i.e. with almost no computation time), which also makes me think it was not working correctly.
My question is: Can anyone suggest why the cluster_louvain method did not work and identified just one community? I think I must be specifying the distance matrix or edges/nodes incorrectly, or in some other way not giving the correct input to the cluster_louvain method. I am relatively new to R so would be very grateful for any advice. I have successfully used other methods of community detection on the same distance matrix (i.e. k-means) which identified 2-3 communities, but would like to understand what I have done wrong here.
I'm aware there are multiple other queries about using igraph in R, but I have not found one which explicitly specifies the input format of the edges and nodes (from a correlation matrix) to get the louvain community detection working correctly.
Thank you for any advice! I can provide further information if helpful.
I believe that cluster_louvain did exactly what it should do with your data.
The problem is your graph. Your code included the line get.edgelist(test2), which must produce a lot of output. Instead, try this:
vcount(test2)
ecount(test2)
Since you say that your correlation matrix is 400x400, I expect that vcount will give 400 and ecount will give 79800 = 400 * 399 / 2. As you have constructed it, every node is directly connected to every other node. Of course there is only one big community.
I suspect that what you are trying to do is group variables that are correlated. If the correlation is near zero, the variables should be unconnected. What seems less clear is what to do with variables with correlation near -1. Do you want them to be connected or not? We can do it either way.
You do not provide any data, so I will illustrate with the Ionosphere data from the mlbench package. I will try to mimic your code pretty closely, but will change a few variable names. Also, for my purposes, it makes no sense to write the edges to a file and then read them back again, so I will just directly use the edges that are constructed.
First, assuming that you want variables with correlation near -1 to be connected:
library(igraph)
library(mlbench) # for Ionosphere data
library(psych) # for cor2dist
data(Ionosphere)
correlationmatrix = cor(Ionosphere[, which(sapply(Ionosphere, class) == 'numeric')])
distancematrix <- cor2dist(correlationmatrix)
DM1 <- as.matrix(distancematrix)
## Zero out connections where there is low (absolute) correlation
## Keeps connection for cor ~ -1
## You may wish to choose a different threshold
DM1[abs(correlationmatrix) < 0.33] = 0
G1 <- graph.adjacency(DM1, mode = "undirected", weighted = TRUE, diag = TRUE)
vcount(G1)
[1] 32
ecount(G1)
[1] 140
Not a fully connected graph! Now let's find the communities.
clusterlouvain <- cluster_louvain(G1)
plot(G1, vertex.color=rainbow(3, alpha=0.6)[clusterlouvain$membership])
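To mirror the check from the question, the communities object can also be inspected directly (a quick sketch):
sizes(clusterlouvain)              # community sizes; more than one community now
head(membership(clusterlouvain))   # community assignment of each variable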
If instead, you do not want variables with negative correlation to be connected,
just get rid of the absolute value above. This should be much less connected
DM2 <- as.matrix(distancematrix)
## Zero out connections where there is low correlation
DM2[correlationmatrix < 0.33] = 0
G2 <- graph.adjacency(DM2, mode = "undirected", weighted = TRUE, diag = TRUE)
clusterlouvain <- cluster_louvain(G2)
plot(G2, vertex.color=rainbow(4, alpha=0.6)[clusterlouvain$membership])

Iteration / Maximization Excel solver in R

I am trying to do a maximization in R that I previously did in Excel with the Solver. The problem is that I don't know how to deal with it (I don't have a good level in R).
Let's talk a bit about my data. I have 26 Swiss cantons and the Swiss government (which is the sum of the values of the 26 cantons), with their population and their "wealth". So I have 27 observations per variable. I'm not sure that the following descriptions are useful, but I give them anyway. From this, I calculate some variables with while loops. For each canton [i]:
resource potential = mean(wealth2011 [i],wealth2012 [i],wealth2013 [i])
population mean = mean(population2011 [i],population2012 [i],population2013 [i])
resource potential per capita = 1000*resource potential [i]/population [i]
resource index = 100*resource potential capita [i]/resource potential capita [swiss government]
Here is a little example of the kind of loops I used:
RI <- 0
i <- 1
while (i < 28) {
  RI[i] <- resource.potential.capita[i] / resource.potential.capita[27] * 100
  i <- i + 1
}
The resource index (RI) for the Swiss government (i = 27) is 100 because we divide the resource potential per capita of the Swiss government (i = 27) by itself and multiply by 100. Hence, all cantons with RI > 100 are rich cantons and the others (RI < 100) are poor cantons. Up to here there was no problem; I just explained how I built my dataset.
Now the problem that I face: I have to create the variable weighted difference (wd). It takes the value of:
0 if RI>100 (rich canton)
(100-RI[i])^(1+P)*Pop[i] if RI<100 (poor canton)
I create this variable like this (sorry for the weakness of the code, I did my best):
wd <- -1
i <- 1
a <- 0
c <- 0
tot <- 0
while (i < 28) {
  if (i == 27) {
    wd[i] <- a
  } else if (RI[i] < 100) {
    wd[i] <- (100 - RI[i])^(1 + P) * Pop[i]
    c <- wd[i]
    a <- a + c
  } else {
    wd[i] <- 0
  }
  i <- i + 1
}
However, I don't know the value of p. It is a value between 0 and 1. To find the value of p, I have to do a maximization using the following features:
RI_26 = 65.9, it is the minimum of RI in my data
RI_min = 100-((x*wd[27])/((1+p)*z*100))^(1/p), where x and z are fixed values (x = 8'677, z = 4'075'977'077) and wd[27] is the sum of wd over all cantons.
We have p in two equations: RI_min and wd. To solve this in Excel, I used the Excel Solver with the following setup:
p_dot = RI_26/RI_min* p ==> p_dot =[65.9/100-((x* wd [27])/((1+p)*z*100))^(1/p)]*p
RI_26 = RI_min ==>65.9 =100-((x*wd [27])/((1+p)*z*100))^(1/p)
In Excel, p is my variable cell (the only value allowed to change), p_dot is my objective to define and RI_26 = RI_min is my constraint.
So I would like to maximize p and I don't know how to do this in R. My main problem is the presence of p in both RI_min and wd. We need to do an iteration to solve it, but this is beyond my skills.
Is anyone able to help me with the information I provided?
You should look into the optim function.
Here I will try to give you a really simple explanation, since you said you don't have a very good level in R.
Assume I have a function f(x) that I want to maximize, and therefore I want to find the parameter x that gives me the maximum value of f(x).
First thing to do will be to define the function, in R you can do this with:
myfunction<- function(x) {...}
Having defined the function I can optimize it with the command:
optim(par,myfunction)
where par is the vector of initial parameters of the function, and myfunction is the function that needs to be optimized. Bear in mind that optim performs minimization; however, it will maximize if control$fnscale is negative. Another strategy is to change the function (i.e., flip its sign) to suit the problem.
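A minimal sketch with a toy function (not your actual problem), just to show the maximization mechanics:
f <- function(p) -(p - 0.4)^2                  # toy objective with its maximum at p = 0.4
res <- optim(par = 0.1, fn = f, method = "Brent", lower = 0, upper = 1,
             control = list(fnscale = -1))     # fnscale = -1 makes optim maximize
res$par    # about 0.4
res$value  # value of f at the maximum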
Hope that this helps,
Marco
From the description you provided, if I'm not mistaken, it looks like everything you need to do is solve an equation.
In particular you have the following two expressions:
RI_min = 100-((x*y)/((1+p)*z*100))^(1/p)
and, since x,y,z are fixed, the only variable is p.
Moreover, since RI_26 = RI_min, this yields:
65.9 = 100-((x*y)/((1+p)*z*100))^(1/p)
Plugging in the values of x, y and z you have provided, this yields
p=0.526639915936052
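If you prefer to let R do the root-finding, here is a sketch with uniroot (y stands for wd[27], which must be computed from your data as in the question; the search interval may need adjusting):
x <- 8677
z <- 4075977077
# y <- ...   # sum of wd over the cantons, as computed in the question
g <- function(p) 100 - ((x * y) / ((1 + p) * z * 100))^(1 / p) - 65.9
uniroot(g, interval = c(0.01, 1))$root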
I don't understand what exactly you are trying to maximize.

K-means: Initial centers are not distinct

I am using the GA package and my aim is to find the optimal initial centroid positions for the k-means clustering algorithm. My data is a sparse matrix of words with TF-IDF scores and is downloadable here. Below are some of the stages I have implemented:
0. Libraries and dataset
library(clusterSim) ## for index.DB()
library(GA) ## for ga()
corpus <- read.csv("Corpus_EnglishMalay_tfidf.csv") ## a dataset of 5000 x 1168
1. Binary encoding and generate initial population.
k_min <- 15
initial_population <- function(object) {
  ## generate a population that turns on exactly 15 cluster bits
  init <- t(replicate(object@popSize,
                      sample(rep(c(1, 0), c(k_min, object@nBits - k_min))), TRUE))
  return(init)
}
2. Fitness function: minimize the Davies-Bouldin (DB) index, where I evaluate the DBI for each solution generated from initial_population.
DBI2 <- function(x) {
  ## x is a binary solution vector of length nBits
  ## exclude the first column of corpus
  initial_centroid <- corpus[x == 1, -1]
  cl <- kmeans(corpus[-1], initial_centroid)
  dbi <- index.DB(corpus[-1], cl = cl$cluster, centrotypes = "centroids")
  score <- -dbi$DB
  return(score)
}
3. Running the GA with these settings:
g2<- ga(type = "binary",
fitness = DBI2,
population = initial_population,
selection = ga_rwSelection,
crossover = gabin_spCrossover,
pcrossover = 0.8,
pmutation = 0.1,
popSize = 100,
nBits = nrow(corpus),
seed = 123)
4. The problem. Error in kmeans(corpus[-1], initial_centroid) : initial centers are not distinct.
I found a similar problem here, where the user also had to use a parameter to dynamically pass in the number of clusters. It was solved by hard-coding the number of clusters. However, in my case I really need to pass in the number of clusters dynamically, since it comes from a randomly generated binary vector whose 1's represent the initial centroids.
Checking with the kmeans() code, I noticed that the error is caused by duplicated centers:
if(any(duplicated(centers)))
stop("initial centers are not distinct")
I edited the kmeans function with trace to print out the duplicated centers. The output:
[1] "206" "520" "564" "1803" "2059" "2163" "2652" "2702" "3195" "3206" "3254" "3362" "3375"
[14] "4063" "4186"
This shows no duplication among the randomly selected initial centroids, and I have no idea why this error keeps occurring. Is there anything else that could lead to this error?
P/S: I do understand that some may suggest GA + k-means is not a good idea, but I hope to finish what I have started. It is better to view this as a k-means problem (at least in terms of solving the "initial centers are not distinct" error).
Genetic algorithms are not well suited to optimizing k-means, by the nature of the problem - the initialization seeds interact too much, and a GA will not do better than taking a random sample of all possible seeds.
So my main advice is not to use genetic algorithms at all here!
If you insist, what you need to do is detect the bad parameters and simply return a bad score for bad initializations, so they don't "survive".
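A sketch of that guard, wrapping the DBI2 fitness from the question so that a solution whose centroids make kmeans fail simply receives a large penalty (the GA maximizes, so a very negative finite score works better than -Inf, which can upset some selection operators):
DBI2_safe <- function(x) {
  tryCatch(DBI2(x),
           error = function(e) -1e6) # bad initialization -> bad score, so it won't "survive"
}
## then pass fitness = DBI2_safe to ga()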
To answer your question just do:
any(corpus[520, -1] != corpus[564, -1])
Rows 520 and 564 of your corpus are the same; the only difference is in the row.names attribute, see:
identical(colnames(corpus[520, -1]), colnames(corpus[564, -1])) # just to be sure
rownames(corpus[520, -1])
rownames(corpus[564, -1])
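One way to remove the cause at the source (just a sketch) is to drop duplicated rows of the TF-IDF matrix, ignoring the id column, before building the GA population:
corpus.dedup <- corpus[!duplicated(corpus[, -1]), ]  # use this (and its nrow) in place of corpus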
Regarding the GA and k-means, see e.g.:
Bashar Al-Shboul, Myaeng Sung-Hyon, "Initializing K-Means using Genetic Algorithms", World Academy of Science, Engineering & Technology, June 2009, Issue 30, p. 114 (especially section II B); or
Bain Khusul Khotimah, Firli Irhamni, and Tri Sundarwati, "A Genetic Algorithm for Optimized Initial Centers K-Means Clustering in SMEs", Journal of Theoretical and Applied Information Technology, 2016, Vol. 90, No. 1

Chi squared goodness of fit for a geometric distribution

As an assignment I had to develop an algorithm and generate samples for a given geometric distribution with PMF P(X = n) = p(1 - p)^(n-1), for n = 1, 2, ...
Using the inverse transform method, I came up with the expression X = ceiling(log(U)/log(1 - p)) for generating the values,
where U represents a value, or n values depending on the size of the sample, drawn from a Unif(0,1) distribution, and p is 0.3 as stated in the PMF above.
I have the algorithm, the implementation in R, and I have already generated QQ plots to visually assess the fit of the empirical values to the theoretical ones (generated with R), i.e., whether the generated sample indeed follows the geometric distribution.
Now I want to submit the generated sample to a goodness-of-fit test, namely the chi-square test, yet I'm having trouble doing this in R.
[I think this was moved a little hastily, in spite of your response to whuber's question, since I think before solving the 'how do I write this algorithm in R' problem, it's probably more important to deal with the 'what you're doing is not the best approach to your problem' issue (which certainly belongs where you posted it). Since it's here, I will deal with the 'doing it in R' aspect, but I would urge you to go back and ask about the second question (as a new post).]
Firstly the chi-square test is a little different depending on whether you test
H0: the data come from a geometric distribution with parameter p
or
H0: the data come from a geometric distribution with parameter 0.3
If you want the second, it's quite straightforward. First, with the geometric, if you want to use the chi-square approximation to the distribution of the test statistic, you will need to group adjacent cells in the tail. The 'usual' rule - much too conservative - suggests that you need an expected count in every bin of at least 5.
I'll assume you have a nice large sample size. In that case, you'll have many bins with substantial expected counts and you don't need to worry so much about keeping it so high, but you will still need to choose how you will bin the tail (whether you just choose a single cut-off above which all values are grouped, for example).
I'll proceed as if n were say 1000 (though if you're testing your geometric random number generation, that's pretty low).
First, compute your expected counts:
dgeom(0:20,.3)*1000
[1] 300.0000000 210.0000000 147.0000000 102.9000000 72.0300000 50.4210000
[7] 35.2947000 24.7062900 17.2944030 12.1060821 8.4742575 5.9319802
[13] 4.1523862 2.9066703 2.0346692 1.4242685 0.9969879 0.6978915
[19] 0.4885241 0.3419669 0.2393768
Warning: dgeom and friends go from x=0, not x=1; while you can shift the inputs and outputs to the R functions, it's much easier if you subtract 1 from all your geometric values and test that. I will proceed as if your sample has had 1 subtracted, so that it starts from 0.
I'll cut that off at the 15th term (x=14), and group 15+ into its own group (a single group in this case). If you wanted to follow the 'greater than five' rule of thumb, you'd cut it off after the 12th term (x=11). In some cases (such as smaller p), you might want to split the tail across several bins rather than one.
> expec <- dgeom(0:14,.3)*1000
> expec <- c(expec, 1000-sum(expec))
> expec
[1] 300.000000 210.000000 147.000000 102.900000 72.030000 50.421000
[7] 35.294700 24.706290 17.294403 12.106082 8.474257 5.931980
[13] 4.152386 2.906670 2.034669 4.747562
The last cell is the "15+" category. We also need the probabilities.
Now we don't yet have a sample; I'll just generate one:
y <- rgeom(1000,0.3)
but now we want a table of observed counts:
(x <- table(factor(y,levels=0:14),exclude=NULL))
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 <NA>
292 203 150 96 79 59 47 25 16 10 6 7 0 2 5 3
Now you could compute the chi-square directly and then calculate the p-value:
> (chisqstat <- sum((x-expec)^2/expec))
[1] 17.76835
(pval <- pchisq(chisqstat,15,lower.tail=FALSE))
[1] 0.2750401
but you can also get R to do it:
> chisq.test(x,p=expec/1000)
Chi-squared test for given probabilities
data: x
X-squared = 17.7683, df = 15, p-value = 0.275
Warning message:
In chisq.test(x, p = expec/1000) :
Chi-squared approximation may be incorrect
Now the case for unspecified p is similar, but (to my knowledge) you can no longer get chisq.test to do it directly; you have to do it the first way. You also have to estimate the parameter from the data (by maximum likelihood or minimum chi-square), and then test as above, but with one fewer degree of freedom because of estimating the parameter.
See the example of doing a chi-square for a Poisson with an estimated parameter here; the geometric follows much the same approach as above, with the adjustments as at the link (dealing with the unknown parameter, including the loss of 1 degree of freedom).
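As a sketch of that adjustment only, reusing the sample y and the 15+ tail bin from above (the ML estimate of p for the 0-based geometric is 1/(1 + mean(y))):
p.hat <- 1 / (1 + mean(y))                                # ML estimate of p
expec.hat <- dgeom(0:14, p.hat) * length(y)
expec.hat <- c(expec.hat, length(y) - sum(expec.hat))     # "15+" bin
obs <- table(factor(pmin(y, 15), levels = 0:15))          # observed counts; 15 collects the tail
chisqstat <- sum((obs - expec.hat)^2 / expec.hat)
pchisq(chisqstat, df = length(expec.hat) - 1 - 1, lower.tail = FALSE) # one extra df lost for estimating p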
Let us assume you've got your randomly-generated variates in a vector x. You can do the following:
x <- rgeom(1000,0.2)
x_tbl <- table(x)
x_val <- as.numeric(names(x_tbl))
x_df <- data.frame(count=as.numeric(x_tbl), value=x_val)
# Expand to fill in "gaps" in the values caused by 0 counts
all_x_val <- data.frame(value = 0:max(x_val))
x_df <- merge(all_x_val, x_df, by="value", all.x=TRUE)
x_df$count[is.na(x_df$count)] <- 0
# Get theoretical probabilities
x_df$eprob <- dgeom(x_df$value, 0.2)
# Chi-square test: once with asymptotic dist'n,
# once with bootstrap evaluation of chi-sq test statistic
chisq.test(x=x_df$count, p=x_df$eprob, rescale.p=TRUE)
chisq.test(x=x_df$count, p=x_df$eprob, rescale.p=TRUE,
simulate.p.value=TRUE, B=10000)
There's a "goodfit" function described as "Goodness-of-fit Tests for Discrete Data" in package "vcd".
G.fit <- goodfit(x, type = "nbinomial", par = list(size = 1))
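The fitted object can then be summarised; a brief sketch (the geometric is treated here as a negative binomial with size fixed at 1):
summary(G.fit)   # likelihood-ratio goodness-of-fit test
plot(G.fit)      # rootogram of observed versus fitted counts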
I was going to use the code you had posted in an earlier question, but it now appears that you have deleted that code. I find that offensive. Are you using this forum to gather homework answers and then defacing it to remove the evidence? (Deleted questions can still be seen by those of us with sufficient rep, and the interface prevents deletion of questions with upvoted answers, so you should not be able to delete this one.)
Generate a QQ Plot for testing a geometrically distributed sample
--- question---
I have a sample of n elements generated in R with
sim.geometric <- function(nvals)
{
p <- 0.3
u <- runif(nvals)
ceiling(log(u)/log(1-p))
}
for which I want to test its distribution, specifically whether it indeed follows a geometric distribution. I want to generate a QQ plot but have no idea how to.
--------reposted answer----------
A QQ-plot should be a straight line when the sample is compared to a "true" sample drawn from a geometric distribution with the same probability parameter. One gives two vectors to the function, which essentially compares their inverse ECDFs at each quantile. (Your attempt is not particularly successful:)
sim.res <- sim.geometric(100)
sim.rgeom <- rgeom(100, 0.3)
qqplot(sim.res, sim.rgeom)
Here I follow the lead of the authors of qqplot's help page (which results in flipping that upper curve around the line of identity):
png("QQ.png")
qqplot(qgeom(ppoints(100),prob=0.3), sim.res,
main = expression("Q-Q plot for" ~~ {G}[n == 100]))
dev.off()
---image not included---
You can add a "line of good fit" by plotting a line through the 25th and 75th percentile points for each distribution. (I added a jittering feature to this to get a better idea where the "probability mass" was located:)
sim.res <- sim.geometric(500)
qqplot(jitter(qgeom(ppoints(500), prob = 0.3)), jitter(sim.res),
       main = expression("Q-Q plot for" ~~ {G}[n == 500]),
       ylim = c(0, max(qgeom(ppoints(500), prob = 0.3), sim.res)),
       xlim = c(0, max(qgeom(ppoints(500), prob = 0.3), sim.res)))
qqline(sim.res, distribution = function(p) qgeom(p, 0.3),
prob = c(0.25, 0.75), col = "red")
