Excel and R do not see two values as being equal

I loaded data into two Excel sheets from online tables. Both tables include distinct information about the same group of baseball players, who are named in column B (or column 2 when converted to R) of each table. Neither Excel (VLOOKUP/MATCH) nor R will match up the players' names between the two tables, despite those names looking exactly the same in every way.
Yes, I have checked for extra spaces, capitalization, etc. I have tried reformatting the cells in Excel that contain the players' names. Please see the input and output below from R (the data was loaded from CSV files):
> as.character(freeagentvalue$Name)[3064]
[1] "Travis Hafner"
> as.character(freeagentdata$Name)[294]
[1] "Travis Hafner"
> as.character(freeagentdata$Name)[294] == as.character(freeagentvalue$Name)[3064]
[1] FALSE
I would appreciate any information on why Excel and R are finding differences like the one above. Otherwise I have to retype a lot of names. Thank you in advance.

The two Travis Hafner strings in your example differ in that the first has a NBSP (non-breaking space) between the two names, while the second has a normal space.
I suggest preprocessing the tables by replacing all NBSPs with ordinary spaces. You can do that either on the worksheet, using the SUBSTITUTE function (e.g. =SUBSTITUTE(A1, CHAR(160), " ")), or in VBA, using Replace.
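In R, a minimal sketch of the same cleanup, using the data frame names from your session (the columns are assumed to come in as character or factor):

# Replace non-breaking spaces (U+00A0) with ordinary spaces, then compare again
freeagentvalue$Name <- gsub("\u00A0", " ", as.character(freeagentvalue$Name))
freeagentdata$Name  <- gsub("\u00A0", " ", as.character(freeagentdata$Name))
freeagentdata$Name[294] == freeagentvalue$Name[3064]  # should now be TRUE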


Import excel (csv) data into R conducting bioinformatics task

I'm a newcomer exploring bioinformatics via R. I've run into a problem: I imported my Excel data into R by converting it to CSV format and using the read.csv command. As you can see in the picture, there are 37 variables (columns), where the first column is supposed to be treated as a fixed factor. I would like to match it with another matrix that has only 36 variables in the downstream processing. What should I do to reduce the number of variables by fixing the first column?
Many thanks in advance.
Sure, I added the str() output of my data here.
If I am not mistaken, what you are looking for is setting the "Gene" column as metadata, indicating which gene the values in each row correspond to. You can try deleting the word "Gene" from the header in the Excel file, because when you import the file with the read.csv() function, the first column is used for the row names by default when "there is a header and the first row contains one fewer field than the number of columns".
You can find more information about this function using ?read.csv
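As a quick sketch (the file name is hypothetical), you can also set the row names explicitly rather than relying on the header having one fewer field:

# Read the CSV using the first column (gene names) as row names
expr <- read.csv("genes.csv", row.names = 1)
dim(expr)  # now 36 data columns; the gene names are in rownames(expr)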

Export dataframe from R to SAS

I have a data.frame in R, and I want to export it to a SAS file. I am using write.xport to do that. The column names are like:
a.b.c, a.b.d, a.f.g, ...
When I get the data in SAS, column names are like: a(1),a(2),..
How can I keep the original labels in exported SAS file?
I get these warnings:
Warning messages:
1: In makeSASNames(colnames(df)) :
Truncated 119 long names to 8 characters.
2: In makeSASNames(colnames(df)) : Made 106 duplicate names unique.
In addition to the length issue, it seems your column names contain the '.' character? SAS doesn't allow names like that. SAS uses the . to represent, e.g., the library.dataset notation, and it has many other uses. Column names cannot contain characters like +, -, or & either.
So, to summarize: make your column names SAS-compatible. See the SAS documentation for more.
SAS uses column labels, which allow for more complexity, only for output purposes, as far as I know. Thus, if you want to manipulate data in SAS, you need to rethink your column names first.
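A minimal sketch of the renaming step, assuming df is the data frame you are exporting (truncating to 7 characters leaves room for a digit suffix on any duplicates):

library(SASxport)

# Make the names SAS-compatible before write.xport() has to mangle them
nm <- substr(gsub("\\.", "_", names(df)), 1, 7)  # dots are illegal in SAS names
names(df) <- make.unique(nm, sep = "")           # "a_b_c", "a_b_c1", "a_b_c2", ...
write.xport(df, file = "out.xpt")                # output file name is a placeholder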

How do I export a custom list of numbers and letters to Excel from R?

To help with some regular label-making and printing I need to do, I am looking to write a script that lets me enter a range of sequential numbers (some with string identifiers) and export them in a specific format to Excel. For example, if I entered the range '1:16', I would want the Excel output to look exactly like this:
[Image: Example Excel Output]
For each unique sequential number (i.e., 1 to 16), the first five rows must be labeled with a 'U', the next three rows with an 'F', and the last two rows must be the number alone. The final exported matrix will be n columns x 21 rows, where n will vary depending on the number range I enter.
My main problem is in writing to Excel. I can't figure out how to customize this output and write to specific rows and columns as in the example above. I am limited to 'openxlsx', since I work on a corporate secure workstation. Here is what I have so far:
[Image: Example Code]
Any help you may have would be very appreciated, thanks in advance!
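For what it's worth, here is a minimal sketch with openxlsx, assuming labels of the form "U1"/"F1" (swap in whatever format your template actually uses) and one column per number; the startRow/startCol arguments of writeData() are what let you place a block at specific rows and columns:

library(openxlsx)

# Build one column per number: five 'U' rows, three 'F' rows, two bare numbers
make_column <- function(i) c(rep(paste0("U", i), 5),
                             rep(paste0("F", i), 3),
                             rep(as.character(i), 2))

labels <- sapply(1:16, make_column)  # 10 rows x 16 columns
wb <- createWorkbook()
addWorksheet(wb, "Labels")
writeData(wb, "Labels", as.data.frame(labels), startRow = 1, startCol = 1, colNames = FALSE)
saveWorkbook(wb, "labels.xlsx", overwrite = TRUE)  # hypothetical file name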

R XLConnect getting index/formula to a chunk of data using content found in first cell

Sorry if this is difficult to understand - I don't have enough karma to add a picture, so I will do my best to describe this! I am using the XLConnect package within R to read from and write to Excel spreadsheets.
I am working on a project in which I am trying to take columns of data out of many workbooks and concatenate them into rows of a new workbook based on which workbook they came from (each workbook is data from a consecutive business day). The snag is that the data I seek is only a small part (10 rows x 3 columns) of each workbook/worksheet and is not always located in the same place within the worksheet, due to sloppiness on the part of the person who originally created the spreadsheets. (e.g. I can't just start at cell A2, because the dataset that starts at A2 in one workbook might start at B12 or C3 in another workbook.)
I am wondering if it is possible to search for a cell based on its contents (e.g. a cell containing the title "Table of Arb Prices") and return either the index or reference formula to be able to access that cell.
I am also wondering whether, once I have referenced that cell based on its contents, there is a way to adjust that reference to reach another cell whose position I know relative to the first. For example, if a cell with known contents is always located 2 rows above and 3 columns to the left of the cell where I wish to start collecting data, is it possible to take that first reference and increment it by 2 rows and 3 columns to get a reference to the cell I want?
Thanks for any help and please advise me if you need further information to be able to understand my questions!
You can just read the entire worksheet in as a matrix with something like
library(XLConnect)
demoExcelFile <- system.file("demoFiles/mtcars.xlsx", package = "XLConnect")
mm <- as.matrix(readWorksheetFromFile(demoExcelFile, sheet=1))
class(mm) <- "character"  # convert all to character
Then you can search for values and get the row/column:
which(mm=="3.435", arr.ind=T)
# row col
# [1,] 23 6
Then you can offset those indices and extract values from the matrix however you like. In the end, when you know where you want to read from, you can convert to a cleaner data frame with
read.table(text=apply(mm[25:27, 6:8],1,paste, collapse="\t"), sep="\t")
Hopefully that gives you a general idea of something you can try. It's hard to be more specific without knowing exactly what your input data looks like.
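On the offset part of your question: which() returns numeric row/column indices, so the "2 rows and 3 columns" adjustment is plain arithmetic. A sketch, assuming the title cell sits 2 rows above and 3 columns to the left of a 10 x 3 data block, and that the title occurs exactly once:

idx <- which(mm == "Table of Arb Prices", arr.ind = TRUE)  # 1 x 2 matrix: row, col
start <- idx + c(2, 3)                                     # move down 2 rows, right 3 columns
block <- mm[start[1] + 0:9, start[2] + 0:2]                # the 10-row x 3-column chunk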

Optimizing File reading in R

My R application reads input data from large txt files. It does not read the entire file in one shot. Users specify the name of the gene (3 or 4 at a time), and based on the user input, the app goes to the appropriate row and reads the data.
File format: 32,000 rows (one gene per row; the first two columns contain info about the gene name, etc.) and 35,000 columns of numerical data (decimal numbers).
I used read.table(filename, skip = 10000), etc., to go to the right row, then read the 35,000 columns of data. I then do this again for the 2nd and 3rd gene (up to 4 genes max), and then process the numerical results.
The file reading operations take about 1.5 to 2.0 minutes. I am experimenting with reading the entire file and then taking the data for the desired genes.
Is there any way to accelerate this? I can rewrite the gene data in another format (one-time processing) if that will speed up future reading operations.
You can use the colClasses argument to read.table to speed things up, if you know the exact format of your files. For 2 character columns and 34,998 (?) numeric columns, you would use
colClasses = c(rep("character",2), rep("numeric",34998))
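Put together, a sketch of the full call (the file name and skip value are placeholders; nrows = 1 stops reading after the one gene row, which helps as well):

gene <- read.table("genes.txt", skip = 10000, nrows = 1,
                   colClasses = c(rep("character", 2), rep("numeric", 34998)))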
This would be more efficient if you used a database interface. There are several available via the RODBC package, but a particularly well-integrated-with-R option is the sqldf package, which by default uses SQLite. You would then be able to use the indexing capability of the database to look up the correct rows and read all the columns in one operation.
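A sketch of that approach using DBI/RSQLite directly (the database file and the gene_name column are assumed names; note that SQLite caps the number of columns per table, by default at 2000, so a table this wide may first need reshaping to long format): import once, index the gene column, and each subsequent lookup becomes a cheap indexed query instead of a file scan.

library(DBI)
library(RSQLite)

con <- dbConnect(SQLite(), "genes.sqlite")       # one-time, on-disk database
genes <- read.table("genes.txt", header = TRUE)  # one-time import of the txt file
dbWriteTable(con, "genes", genes)
dbExecute(con, "CREATE INDEX idx_gene ON genes (gene_name)")

hit <- dbGetQuery(con, "SELECT * FROM genes WHERE gene_name = 'BRCA1'")  # example gene
dbDisconnect(con)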
