I am doing PCA. Here is my code:
### Read .csv file #####
data<-read.csv(file.choose(),header=T,sep=",")
names(data)
data$qcountry
#### for the country-ARGENTINA#######
ar_data<-data[which(data$qcountry=="ar"),]
ar_data$qcountry<-NULL
names(ar_data)
names(ar_data) <- c("01_insufficient_efficacy", "02_safety_issues",
                    "03_inconvenient_dosage_regimen", "04_price_issues",
                    "05_not_reimbursed", "06_not_inculed_govt",
                    "07_insuficient_clinicaldata", "08_previously_used",
                    "09_prescription_opted_for_some_patients", "10_scientific_info_NA",
                    "12_involved_in_diff_clinical_trial", "13_patient_inappropriate_for_TT",
                    "14_patient_inappropriate_Erb", "16_patient_over_65",
                    "17_Erbitux_alternative", "95_Others")
names(ar_data)
ar_data_wdt_zero_columns<-ar_data[, colSums(ar_data != 0) > 0]
#### Testing multicollinearity ####
library(usdm)   # vif() is not in base R; usdm::vif(), for example, accepts a data frame
vif(ar_data_wdt_zero_columns)
#### Testing appropriateness of PCA ####
library(psych)  # provides KMO() and cortest.bartlett()
KMO(ar_data_wdt_zero_columns)
cortest.bartlett(ar_data_wdt_zero_columns)
#### Run PCA ####
pca<-prcomp(ar_data_wdt_zero_columns,center=F,scale=F)
summary(pca)
#### Compute the loadings for deciding the top4 most correlated variables###
load<-pca$rotation
write.csv(load,"loadings_argentina_2015_Q4.csv")
I have shown the code for one country here, but I have done this for 9 countries, and for each country I have to run this code again. I am sure there must be an easier way to automate it. Please suggest!!
Thanks!!
Yes, this is doable for every country. You can write a custom function which takes appropriate parameters, e.g. country name and data. You do the magic inside and return an appropriate object (or not). Apply this function to the processed data, which you import and make pretty only once. The code below is not tested but should get you started.
A few comments.
Don't use file.choose() as it breaks your code 3 days down the line. How do you know what file to choose? Why click every time you run the script when you can make the script work for you? Be lazy in that sense.
You have a lot of clutter in your script. Adhere to some style and don't leave in random lines you try out for "shits and giggles". Use spaces in your code, at least.
Be more imaginative when choosing object names, and try a name out first in case it already exists as a base function, e.g. load.
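A quick way to check whether a name is already taken on the search path (hypothetical names, for illustration only):
exists("load")   # TRUE  -> already a base function, pick another name
exists("myPCA")  # FALSE -> free to use (before you define it)
With that in mind, a skeleton: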
myPCA <- function(my.country, my.data) {
  # assumes usdm (vif) and psych (KMO, cortest.bartlett) are loaded
  country_data <- my.data[my.data$qcountry %in% my.country, ]
  country_data$qcountry <- NULL
  country_data_nonzero <- country_data[, colSums(country_data != 0) > 0]
  #### Run PCA ####
  pca <- prcomp(country_data_nonzero, center = FALSE, scale. = FALSE)
  #### Write out the loadings for picking the top 4 most correlated variables ####
  write.csv(pca$rotation, paste("loadings_", my.country, ".csv", sep = "")) # may need tweaking
  return(list(pca = pca,
              vif = vif(country_data_nonzero),
              kmo = KMO(country_data_nonzero),
              correlation = cortest.bartlett(country_data_nonzero)))
}
data <- read.csv("relative_link_to_file", header = TRUE, sep = ",")
# note: qcountry is still a column at this point, so it needs a name too
# (assumed to come first here; adjust to your column order)
names(data) <- c("qcountry",
                 "01_insufficient_efficacy", "02_safety_issues",
                 "03_inconvenient_dosage_regimen", "04_price_issues",
                 "05_not_reimbursed", "06_not_inculed_govt",
                 "07_insuficient_clinicaldata", "08_previously_used",
                 "09_prescription_opted_for_some_patients", "10_scientific_info_NA",
                 "12_involved_in_diff_clinical_trial", "13_patient_inappropriate_for_TT",
                 "14_patient_inappropriate_Erb", "16_patient_over_65",
                 "17_Erbitux_alternative", "95_Others")
results <- lapply(unique(data$qcountry), FUN = myPCA, my.data = data)
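Naming the resulting list then lets you pull out one country's output directly (hypothetical usage, with "ar" as in the question):
names(results) <- unique(data$qcountry)
results[["ar"]]$pca            # the prcomp object for Argentina
summary(results[["ar"]]$pca)   # variance explained per component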
I am very new to R and would like to write some code that runs a batch operation over all of the datasets I have available. It should be clear what I'm trying to do:
# library(PerformanceAnalytics)
# mydata <- mtcars[, c('mpg', 'cyl', 'disp', 'hp', 'carb')];
# chart.Correlation(mydata, histogram=TRUE, pch=19)
library(MASS)
M_names = data(package = "MASS")$result[, "Item"]
for (i in 1:length(M_names)) {
eval(paste("MASS::", M_names[i], sep=""));
}
The commented-out part is some code I found that I haven't been able to integrate yet. chart.Correlation draws a very nice correlation matrix, and I'm attempting to funnel every single dataset I have access to through it so I can review them quickly instead of doing it all manually. I guess I will need to save them all to PNGs to have a practical workflow around that, as it's clear there's no way to coax the X windows to appear or stay put when running R code as a script.
The behavior I observe as I execute this on my Mac is:
> library(MASS)
> M_names = data(package = "MASS")$result[, "Item"]
> for (i in 1:length(M_names)) {
+ eval(paste("MASS::", M_names[i], sep=""));
+ }
>
>
I don't know for sure what the silent + indicator means, but I'm pretty sure it just means the code line is inside the for loop's scope. But the eval is swallowing the command I assembled. For now, I'm just trying to get it to print out the contents of the data at each iteration of the loop.
I also noticed this:
> eval("MASS::ships")
[1] "MASS::ships"
It just prints it when I try to eval it.
I also hope there is a way to programmatically print individual datasets. I'm already hacking really hard at this, and there is no way that what I am doing here is a good idea.
If you have the package dataset names in a vector, the key to accessing them by their character names is the get function:
library(MASS)
M_names = data(package = "MASS")$result[, "Item"]
head(get(M_names[1]), 1)
# state sex diag death status T.categ age
# 1 NSW M 10905 11081 D hs 35
You can then loop through the vector of names
for (DATA in M_names) print(summary(get(DATA)))
Another option is to use the envir argument of the data function to load the datasets into a specific environment. It may be worth adding the data to a new environment instead of polluting your workspace. You can do that with
data(list = M_names, package = "MASS", envir = list_of_dataframes <- new.env())
You can then loop through list_of_dataframes as you would with any other list object:
lapply(list_of_dataframes, summary)
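To tie this back to saving the correlation charts: a rough sketch of one way to do it (untested; it assumes PerformanceAnalytics is installed and simply skips datasets that aren't plain numeric data frames):
library(MASS)
library(PerformanceAnalytics)

M_names <- data(package = "MASS")$result[, "Item"]

for (name in M_names) {
  if (!exists(name)) next                  # some items may not match object names
  d <- get(name)
  if (!is.data.frame(d)) next              # skip matrices, tables, time series, ...
  num <- d[sapply(d, is.numeric)]          # keep numeric columns only
  if (ncol(num) < 2) next                  # need at least two columns to correlate
  png(paste0(name, "_correlation.png"), width = 800, height = 800)
  chart.Correlation(num, histogram = TRUE, pch = 19)
  dev.off()
}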
I am trying to create a loop that selects one file name from a list of file names and uses that file to run read.capthist, then discretize, secr.fit, and derived, saving the outputs with save. The list contains 10 files with identical rows and columns; the only difference between them is the geographical coordinates in each row.
The issue I am running into is that capt needs to be a single file (in the secr package these are 'captfile' arguments), but I don't know how to select a single file from this list and get my loop to recognize it as a single entity.
This is the error I get when I try and select only one file:
Error in read.capthist(female[[i]], simtraps, fmt = "XY", detector = "polygon") :
requires single 'captfile'
I am not a programmer by training; I've learned R on my own and have used Stack Overflow a lot to solve my issues, but I haven't been able to figure this out. Here is the code I've come up with so far:
library(secr)
setwd("./")
files = list.files(pattern = "female*")
lst <- vector("list", length(files))
names(lst) <- files
for (i in 1:length(lst)) {
capt <- lst[i]
femsimCH <- read.capthist(capt, simtraps, fmt = 'XY', detector = "polygon")
femsimdiscCH <- discretize(femsimCH, spacing = 2500, outputdetector = 'proximity')
fit <- secr.fit(femsimdiscCH, buffer = 15000, detectfn = 'HEX', method = 'BFGS', trace = FALSE, CL = TRUE)
save(fit, file="C:/temp/fit.Rdata")
D.fit <- derived(fit)
save(D.fit, file="C:/temp/D.fit.Rdata")
}
simtraps is a list of coordinates.
Ideally I would also like my outputs to have unique identifiers, since I am simulating data and will have to compare all the results; I don't want each iteration to overwrite the previous output.
I know I can run this code by bringing in each file separately (it works for non-simulation runs on a couple of data sets), but as I'm hoping to run 100 simulations, that would be laborious and prone to mistakes.
Any tips would be greatly appreciated for an R novice!
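A minimal sketch of one possible fix (untested; it assumes the file names returned by list.files can be passed directly as the captfile argument, exactly as in the single-file runs, and it indexes the output file names so nothing gets overwritten):
library(secr)

# note: list.files patterns are regular expressions, so "^female" rather than "female*"
files <- list.files(pattern = "^female")

for (i in seq_along(files)) {
  capt <- files[i]  # a single file name, i.e. a length-one character vector
  femsimCH <- read.capthist(capt, simtraps, fmt = "XY", detector = "polygon")
  femsimdiscCH <- discretize(femsimCH, spacing = 2500, outputdetector = "proximity")
  fit <- secr.fit(femsimdiscCH, buffer = 15000, detectfn = "HEX",
                  method = "BFGS", trace = FALSE, CL = TRUE)
  # unique output names per iteration so runs don't overwrite each other
  save(fit, file = paste0("C:/temp/fit_", i, ".Rdata"))
  D.fit <- derived(fit)
  save(D.fit, file = paste0("C:/temp/D.fit_", i, ".Rdata"))
}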
I'm learning R by using it on a project where I need to extract unique paths from logs.
My workaround (the lower part of the code) works, but I had to split the log into two files and perform the grouping on them separately; when I tried the same on the in-memory variables, I got all the data in all three path counts.
Can someone point me to what is wrong in the first approach? I doubt that physically writing files to disk is the intended way.
a = read.csv('download-report-06-10-2017.csv')
yesterdays_data <- a[grepl("2017-10-05", a$Download.Time), ]
todays_data <- a[grepl("2017-10-06", a$Download.Time), ]
write.csv(yesterdays_data, "yesterdays.csv")
write.csv(todays_data, "todays.csv")
path_count <- as.data.frame(table(a$Path))
path_count_today <- as.data.frame(table(todays_data$Path))
path_count_yday <- as.data.frame(table(yesterdays_data$Path))
#### path_count, path_count_today & path_count_yday contain the same values and I expect them to be different ???
yd = read.csv('yesterdays.csv')
td = read.csv('todays.csv')
path_count_td <- as.data.frame(table(td$Path))
path_count_yd <- as.data.frame(table(yd$Path))
#### path_count_td and path_count_yd are different, as I'd expect in upper three variables
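A likely explanation (an assumption, since it depends on how read.csv parsed the file): Path was read in as a factor, and subsetting a data frame keeps all of the original factor levels, so table() still reports every path, just with zero counts for the ones missing from the subset. Re-reading the subsets from disk rebuilds the factors from scratch, which is why the detour through CSV files "works". Reading the column as character, or dropping the unused levels, avoids the round trip:
a <- read.csv('download-report-06-10-2017.csv', stringsAsFactors = FALSE)

yesterdays_data <- a[grepl("2017-10-05", a$Download.Time), ]
todays_data     <- a[grepl("2017-10-06", a$Download.Time), ]

path_count       <- as.data.frame(table(a$Path))
path_count_today <- as.data.frame(table(todays_data$Path))     # only today's paths now
path_count_yday  <- as.data.frame(table(yesterdays_data$Path)) # only yesterday's
# alternatively, keep the factor and wrap it: table(droplevels(todays_data$Path))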
I have worked a lot with MaxEnt in R recently (the dismo package), but only using cross-validation to validate my model of bird habitats (a single species). Now I want to use a self-created test sample file. I had to pick these validation points by hand and can't use random test points.
So my R script looks like this:
library(raster)
library(dismo)
setwd("H:/MaxEnt")
memory.limit(size = 400000)
punkteVG <- read.csv("Validierung_FL_XY_2016.csv", header=T, sep=";", dec=",")
punkteTG <- read.csv("Training_FL_XY_2016.csv", header=T, sep=";", dec=",")
punkteVG$X <- as.numeric(punkteVG$X)
punkteVG$Y <- as.numeric(punkteVG$Y)
punkteTG$X <- as.numeric(punkteTG$X)
punkteTG$Y <- as.numeric(punkteTG$Y)
##### mask NA ######
mask <- raster("final_merge_8class+le_bb_mask.img")
dataframe_VG <- extract(mask, punkteVG)
dataframe_VG[dataframe_VG == 0] <- NA
dataframe_TG <- extract(mask, punkteTG)
dataframe_TG[dataframe_TG == 0] <- NA
punkteVG <- punkteVG*dataframe_VG
punkteTG <- punkteTG*dataframe_TG
#### add the raster dataset ####
habitat_all <- stack("blockstats_stack_8class+le+area_8bit.img")
#### MODEL FITTING #####
library(rJava)
system.file(package = "dismo")
options(java.parameters = "-Xmx1g" )
setwd("H:/MaxEnt/results_8class_LE_AREA")
### backgroundpoints ###
set.seed(0)
backgrVMmax <- randomPoints(habitat_all, 100000, tryf=30)
backgrVM <- randomPoints(habitat_all, 1000, tryf=30)
### Renner (2015) PPM modelfitting Maxent ###
maxentVMmax_Renner<-maxent(habitat_all,punkteTG,backgrVMmax, path=paste('H:/MaxEnt/Ergebnisse_8class_LE_AREA/maxVMmax_Renner',sep=""),
args=c("-P",
"noautofeature",
"nothreshold",
"noproduct",
"maximumbackground=400000",
"noaddsamplestobackground",
"noremoveduplicates",
"replicates=10",
"replicatetype=subsample",
"randomtestpoints=20",
"randomseed=true",
"testsamplesfile=H:/MaxEnt/Validierung_FL_XY_2016_swd_NA"))
After the maxent() command I ran into multiple errors. First I got an error stating that it needs more than 0 (the default) "randomtestpoints", so I added "randomtestpoints=20" (which hopefully doesn't stop the program from using the file). Then I got:
Error: Test samples need to be in SWD format when background data is in SWD format
Error in file(file, "rt") : cannot open the connection
The thing is, when I ran the script with the default cross-validation like this:
maxentVMmax_Renner<-maxent(habitat_all,punkteTG,backgrVMmax, path=paste('H:/MaxEnt/Ergebnisse_8class_LE_AREA/maxVMmax_Renner',sep=""),
args=c("-P",
"noautofeature",
"nothreshold",
"noproduct",
"maximumbackground=400000",
"noaddsamplestobackground",
"noremoveduplicates",
"replicates=10"))
...all works fine.
I also tried multiple things to get my CSV validation data into the correct format: two columns (labeled X and Y), three columns (labeled species, X and Y), and other layouts. I would rather use the "punkteVG" vector (the validation data) I created with read.csv, but it seems MaxEnt wants its own file.
I can't imagine my problem is so uncommon. Someone must have used the "testsamplesfile" argument before.
I found out what the problem was. So here it is, for others to enjoy:
The correct maxent command for a subsample file looks like this:
maxentVMmax_Renner<-maxent(habitat_all, punkteTG, backgrVMmax, path=paste('H:/MaxEnt',sep=""),
args=c("-P",
"noautofeature",
"nothreshold",
"noproduct",
"maximumbackground=400000",
"noaddsamplestobackground",
"noremoveduplicates",
"replicates=1",
"replicatetype=Subsample",
"testsamplesfile=H:/MaxEnt/swd.csv"))
Of course, there cannot be multiple replicates, since you have only one subsample.
Most importantly, the "swd.csv" subsample file has to include:
the X and Y coordinates
the values at the respective points (e.g. obtained with "extract(habitat_all, punkteVG)")
a first column consisting of the word "species" with the header "Species" (since MaxEnt uses the default "species" if you don't define one in the occurrence data)
The last point was the issue here. Basically, if you don't define the species column in the subsample file, MaxEnt will not know how to assign the data.
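For illustration, a sketch of how such a file could be assembled from the objects above (the exact layout is an assumption; column order follows the description, and the path is an example):
swd <- data.frame(Species = "species",
                  X = punkteVG$X,
                  Y = punkteVG$Y,
                  extract(habitat_all, punkteVG))
write.csv(swd, "H:/MaxEnt/swd.csv", row.names = FALSE)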
I have a question that I can't seem to find the answer to anywhere online; I apologize if it's already been answered. I've written a script in R that goes through the process of forecasting for me and returns the best point forecast based on cross-validation and other criteria. I want to save this script as a function so that I don't have to run through the full script every time I forecast. The basic setup of my script is the following:
output <- read.csv("C:/Users/data.csv", header = T)
colnames(output)
month_count = length(output[,1]) ##used in calculations throughout code
current_year = output[1,1]
current_month = output[1,2]
months = 5 #months to forecast out
m = 0
data <- ts(output[,3][c(1:(month_count-m))],
frequency = 12, start = c(current_year,current_month))
#runs all the other steps from here on
The function I'm writing will look like this, where it takes various inputs and then runs the script and prints back my forecasts:
forecastMe = function(sourcefile,months,m)
{
#runs the data prints out the result
}
The problem I'm having is that I want to be able to pass a directory and file name, such as C:/Users/documents/data1.csv, into the function (as the sourcefile argument) and have it picked up at this step of my R script:
output <- read.csv("C:/Users/sourcefile.csv", header = T)
I can't seem to find a way to get this right. Any ideas or suggestions?
So...
function(sourcefile, etc) {
output <- read.csv(sourcefile, header = T)
etc
}
...that? I don't really see what you're asking exactly.
You were almost there. All you have to do is replace your constants with the variable names you want to pass to the function, and delete the declarations you don't need anymore.
forecastMe = function(sourcefile,months,m) {
output <- read.csv(sourcefile, header = T)
colnames(output)
month_count = length(output[,1]) ##used in calculations throughout code
current_year = output[1,1]
current_month = output[1,2]
data <- ts(output[,3][c(1:(month_count-m))],
frequency = 12, start = c(current_year,current_month))
#runs all the other steps from here on
}
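A hypothetical call (the path and argument values are examples only, and it assumes the omitted steps return or print the forecasts):
result <- forecastMe("C:/Users/documents/data1.csv", months = 5, m = 0)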