I'm loading a csv file that contains colleges and their conferences into R. When I read the file and create a data frame, the conference column is automatically made class factor. All I want is to pull the conference names, but I can only pull the "levels", which are seemingly random numbers, and when I use as.character it still returns those numbers. This issue has taken me a long time with zero progress to show for it, so I'd greatly appreciate guidance or assistance.
> data <- read.csv("Regression Data Working File.csv", stringsAsFactors = FALSE)
# the file is essentially just a list of colleges in one column and their corresponding conference in the other column
> class(data$conference)  # conference is a vector of college conferences (SEC, ACC, etc.)
[1] "character"
> data$conference[2]
[1] "7"  # should be "ACC", and it is "ACC" when I use View(data)
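For reference, the usual version of this problem can be reproduced in isolation (toy data, not the OP's file): converting a factor straight to numeric yields its internal level codes, while as.character() recovers the labels.

```r
# Toy example: a factor stores its labels as integer level codes
conference <- factor(c("SEC", "ACC", "Big Ten"))

as.numeric(conference)    # level codes: 3 1 2 (levels sort alphabetically)
as.character(conference)  # the labels:  "SEC" "ACC" "Big Ten"
```

Here as.character() does return the labels, which suggests the OP's underlying data really did contain the numbers, as the resolution below confirms.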
OK, here's what I did to fix this. My original file populated the conference column with a VLOOKUP, though I had copied and pasted the results as values (not knowing whether keeping the formulas instead of plain values would affect the data in the csv file / R). In response to the comment above asking for a sample data file, I copied and pasted the values into a new Excel file, tried that data in R, and it worked. So I went back to my previous data file, deleted the VLOOKUP data array on a different sheet to try to find an explanation, and that resolved the issue. My guess is that the conversion from an Excel file to a csv file somehow used the data array behind the VLOOKUP and stored those values instead. Thanks for your help in troubleshooting this! Have a great weekend.
Thanks,
OP
As I'm dealing with a huge dataset, I had to split my data into different buckets, and I want to save some interim results to a csv so I can recall them later. However, my data file contains some columns with lists, which according to R cannot be exported (see snapshot). Do you guys know a simple way for an R newbie to make this work?
Thank you so much!
I guess the best way to solve your problem is to switch to a more appropriate file format. I recommend using write_rds() from the readr package, which creates .rds files. A file created with readr::write_rds(your_object, 'your_file_path') can be read back in with readr::read_rds('your_file_path').
The base R functions are saveRDS() and readRDS(); the readr functions mentioned above are just wrappers around them with some convenience features.
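A minimal sketch of the round trip with base R (made-up data frame; a list column survives RDS, where write.csv() would fail):

```r
# Data frame with a list column, the case that write.csv() rejects
df <- data.frame(id = 1:2)
df$items <- list(c("a", "b"), "c")

path <- tempfile(fileext = ".rds")
saveRDS(df, path)       # base R; readr::write_rds(df, path) is the equivalent
df2 <- readRDS(path)

identical(df, df2)      # TRUE: the list column is preserved exactly
```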
Just right-click in the folder where you want to save your work and create a new csv file there, then set the csv's separator to a comma. Input all the data in column form; you can later make it a matrix in your R program.
I have a large xlsx file called Run.xlsx. It contains multiple sheets, and I want the sheet called "Factors". From that sheet I also want to extract specific rows and columns: columns Z:AB and rows 15:71.
I have tried the readxl package, but it doesn't work for me.
If the column order in your Excel file may change, it is best to have the code select columns by name automatically instead of by column number.
You could try
1- importing your xlsx file with the "read.xlsx" function from the "openxlsx" library
2- selecting the columns with specific names
# 1 - import
library(openxlsx)
yourFile <- read.xlsx("yourPathway/yourFile.xlsx", sheet = "yourSheet")

# 2 - column selection
vectorNameColumns <- c("Age", "BMI", ..., "Gender")
vectorNameRows <- 15:71
refinedFile <- yourFile[vectorNameRows, vectorNameColumns]
For safety, and to save time, it would also be best to select rows automatically by row name or ID instead of by row number, in case your Excel file is modified or you want to apply the same code to another Excel file.
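If the column positions are in fact fixed, readxl can also address the exact block from the question through its range argument (range = "Z15:AB71" on sheet "Factors"). A self-contained sketch with a toy workbook standing in for Run.xlsx:

```r
library(openxlsx)  # used here only to create a demo workbook
library(readxl)

# Toy stand-in for Run.xlsx with a sheet named "Factors"
path <- tempfile(fileext = ".xlsx")
write.xlsx(list(Factors = data.frame(Age = 1:3, BMI = 4:6)), path)

# On the real file this would be:
#   read_excel("Run.xlsx", sheet = "Factors", range = "Z15:AB71")
factors <- read_excel(path, sheet = "Factors", range = "A1:B4")
```

The range string follows Excel notation and includes the header row, so "A1:B4" here yields a 3-row, 2-column tibble.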
I'm quite new to R and currently stuck on the following task. I have spatial data in the following format:
lat long
1 49,6837508756316 8,97846155698244
2 49,9917393661473 8,2382869720459
3 51,308416699361 12,4118696787101
4 50,7048668720388 6,62725165486336
...
and so on; it's a pretty large data set.
I've been advised to convert my data set into sf data to work with it properly. Can somebody help me with that? I think one problem might also be that the decimal mark is a comma (,).
Thanks for your help guys!
I assume the data is in a data.frame called sf:
sf <- data.frame(lat=c("49,6837508756316","49,9917393661473","51,308416699361","50,7048668720388"),long=c("8,97846155698244","8,2382869720459","12,4118696787101","6,62725165486336"), stringsAsFactors = FALSE)
The problem is that the entries are characters, so you have to convert them to numeric. This can be done via as.numeric, but that function expects the decimals to be separated by a dot (.), so you first have to convert the commas to dots and then call as.numeric. The conversion can be done with the function gsub.
sf$lat <- as.numeric(gsub(",",".",sf$lat))
sf$long <- as.numeric(gsub(",",".",sf$long))
If you have many columns and you don't want to copy-paste the above for every column, I would suggest:
sf[] <- lapply(sf, function(colValues) as.numeric(gsub(",",".",colValues)))
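Once the columns are numeric, the conversion to an actual sf object (what the question asks about) is one call, assuming the coordinates are WGS84 longitude/latitude (EPSG:4326). A self-contained sketch with the same sample values:

```r
library(sf)

# numeric coordinates, as produced by the gsub/as.numeric step
d <- data.frame(lat  = c(49.6837508756316, 49.9917393661473),
                long = c(8.97846155698244, 8.2382869720459))

# coords names the longitude column first, then latitude; crs = 4326 is WGS84
pts <- st_as_sf(d, coords = c("long", "lat"), crs = 4326)
```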
First off, I have looked at all the examples and previous questions and have not been able to find a usable answer to my situation.
I have a data set of 300ish independent variables I'm trying to bring into R. The variables are all classified as factors. In the csv file I'm uploading, all of the variables are pricing data with two decimal places. I have used the following code, and some of the variables have been converted with decimals. However, many of the converted columns are filled with NAs; in fact, some columns are entirely NA.
dsl$price = as.numeric(as.factor(dsl$price)) # <- this completely changes the data into something unrecognizable
dsl$price = as.numeric(as.character(dsl$price)) # <- lots of NAs, or entirely NA
I've tried to recode the variables in the original CSV file to numeric, but with no luck.
Convert the factor into a character vector, which can then be converted into numeric:
dsl$price <- as.numeric(as.character(dsl$price))
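If that still produces NAs, the labels themselves usually contain non-numeric characters such as currency symbols or thousands separators, and those have to be stripped first. A sketch with made-up price strings (not the OP's data):

```r
price <- factor(c("$1,234.50", "$99.99"))

as.numeric(as.character(price))                    # NA NA, with a coercion warning

# remove "$" and "," before the numeric conversion
as.numeric(gsub("[$,]", "", as.character(price)))  # 1234.50 99.99
```

Checking a column with which(is.na(...)) after the conversion shows exactly which entries failed to parse.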
I have the following link:
[1] https://drive.google.com/open?id=0ByCmoyvCype7ODBMQjFTSlNtTzQ
It is a PDF file; the author of a paper provided the list of mutations in this format.
I need to annotate the mutations in this file, so I need a txt, TSV, or VCF file that can be read by ANNOVAR.
Can you help me convert it using R or other software on Ubuntu?
In principle this is a job for tabulizer, but I couldn't get it to work in this instance; I suspect the single table spanning so many pages confused it.
You can read it into R as text with the pdftools package easily enough:
library(pdftools)
txt <- pdf_text("selection.pdf")
Now txt is a character vector, with each element holding the text of a single page of the original document as one string. You might be able to do something fancy with regular expressions to convert this to more meaningful data.
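As a first step in that direction, each page's text can be split into lines and then into whitespace-separated fields. A sketch on made-up text, since the real table's layout is unknown:

```r
# stand-in for one page of pdf_text() output (invented mutation-like rows)
page <- "chr1  12345  A  G\nchr2  67890  C  T"

lines  <- strsplit(page, "\n")[[1]]
fields <- lapply(trimws(lines), function(l) strsplit(l, "\\s+")[[1]])

fields[[1]]   # "chr1" "12345" "A" "G"
```

Rows that split into the wrong number of fields flag where the table's layout breaks this simple approach.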
However, it makes more sense to ask the original author for their data in an appropriate format. Publishing a 561 page PDF of tabular data is just nuts.