I am interested in (functional) vector manipulation in R. Specifically, what are R's equivalents to Perl's map and grep?
The following Perl script greps the even array elements and multiplies them by 2:
@a1 = (1..8);
@a2 = map {$_ * 2} grep {$_ % 2 == 0} @a1;
print join(" ", @a2);
# 4 8 12 16
How can I do that in R? I got this far, using sapply for Perl's map:
> a1 <- c(1:8)
> sapply(a1, function(x){x * 2})
[1] 2 4 6 8 10 12 14 16
Where can I read more about such functional array manipulations in R?
Also, is there a Perl to R phrase book, similar to the Perl Python Phrasebook?
Quick ones:
Besides sapply, there are also lapply(), tapply(), by(), aggregate() and more in base R. Then there are loads of add-on packages on CRAN, such as plyr.
For basic functional programming as in other languages: Reduce(), Map(), Filter(), ... all of which are on the same help page; try help(Reduce) to get started.
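For example, the Perl snippet above translates almost directly, with Filter() playing the role of grep and Map() the role of map (Map() returns a list, hence the unlist()):
a1 <- 1:8
a2 <- unlist(Map(function(x) x * 2, Filter(function(x) x %% 2 == 0, a1)))
a2
# [1]  4  8 12 16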
As noted in the earlier answer, vectorisation is even more appropriate here.
As for grep, R actually has three regexp engines built-in, including a Perl-based version from libpcre.
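A quick sketch of the PCRE engine in action; note that R's grep() matches a regular expression against a character vector rather than filtering by an arbitrary predicate the way Perl's grep does:
x <- c("apple", "banana", "cherry")
grep("an+a", x, perl = TRUE)        # indices of matching elements: 2
grepl("an+a", x, perl = TRUE)       # logical vector, handy for subsetting
x[grepl("^[ab]", x, perl = TRUE)]   # "apple" "banana"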
You seem to be missing a few features that R already has. I'd suggest a good recent book on R and the S language; my recommendation would be Chambers (2008), "Software for Data Analysis".
R has "grep", but it works entirely different than what you're used to. R has something much better built in: it has the ability to create array slices with a boolean expression:
a1 <- c(1:8)
a2 <- a1[a1 %% 2 == 0]
a2
[1] 2 4 6 8
For map, you can apply a function as you did above, but it's much simpler to just write:
a2 * 2
[1] 4 8 12 16
Or in one step:
a1[a1 %% 2 == 0] * 2
[1] 4 8 12 16
I have never heard of a Perl-to-R phrase book; if you ever find one, let me know! In general, R has less documentation than either Perl or Python, because it's such a niche language.
I am looking for a very efficient alternative to a for loop in R,
where data_papers is
data_papers <- c(1, 3, 47276, 77012, 77012, 79468, ...)
paper_author:
paper_id author_id
1 1 521630
2 1 972575
3 1 1528710
4 1 1611750
5 2 1682088
I need to find the authors in paper_author for each paper in data_papers. There are around 350,000 papers in data_papers and around 2,100,000 papers in paper_author.
So my output would be a list of author_id vectors, one for each paper_id in data_papers:
authors:
[[1]]
[1] 521630 972575 1528710 1611710
[[2]]
[1] 826 338038 788465 1256860 1671245 2164912
[[3]]
[1] 366653 1570981 1603466
The simplest way to do this would be
authors <- vector("list", length(data_papers))
for (i in 1:length(data_papers)) {
  authors[[i]] <- paper_author$author_id[which(paper_author$paper_id %in% data_papers[i])]
}
But the computation time is very high
The other alternative is something like the code below, taken from efficient programming in R:
i=1:length(data_papers)
authors[i]<-as.data.frame(paper_author$author_id[which(paper_author$paper_id%in%data_papers[i])])
But I am not able to make this work.
How could this be done? Thanks.
with(paper_author, split(author_id,paper_id))
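For example, a minimal sketch building on that split() call: it builds the whole paper-to-authors lookup in one pass and then indexes it by the papers of interest (this assumes the paper ids appear as the names of the split list):
authors_by_paper <- with(paper_author, split(author_id, paper_id))
authors <- authors_by_paper[as.character(data_papers)]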
Or you could use R's merge function?
merge(data_papers, paper_author, by=1)
Why are you not able to use this second solution you mentioned? Information on why would be useful.
In any case, what you want to do is to join two tables (data_papers and paper_authors). Doing it with pure nested loops, as your sample code does in either R for loops or the C for loops underlying vector operations, is pretty inefficient. You could use some kind of index data structure, based on e.g. the hash package, but it's a lot of work.
Instead, just use a database. They're built for this sort of thing. sqldf even lets you embed one into R.
install.packages("sqldf")
require(sqldf)
#you probably want to dig into the indexing options available here as well
combined <- sqldf("select distinct author_id from paper_author pa inner join data_papers dp on dp.paper_id = pa.paper_id where dp.paper_id = 1234;")
Once again, I am having a great time with Notebook and the emerging rmagic infrastructure, but I have another question about the bridge between the two. Currently I am attempting to pass several subsets of a pandas DataFrame to R for visualization with ggplot2. Just to be clear upfront, I know that I could pass the entire DataFrame and perform additional subsetting in R. My preference, however, is to leverage the data management capability of Python and the subset-wise operations I am performing are just easier and faster using pandas than the equivalent operations in R. So for the sake of efficiency and morbid curiosity...
I have been trying to figure out if there is a way to push several objects at once. The wrinkle is that sometimes I don't know in advance how many items will need to be pushed. To retain flexibility, I have been populating dictionaries with DataFrames throughout the front end of the script. The following code provides a reasonable facsimile of what I am working through (I have not converted via com.convert_to_r_dataframe for simplicity, but my real code does take this step):
import numpy as np
import pandas as pd
from pandas import DataFrame
%load_ext rmagic
d1 = DataFrame(np.arange(16).reshape(4, 4))
d2 = DataFrame(np.arange(20).reshape(5, 4))
d_list = [d1, d2]
names = ['n1', 'n2']
d_dict = dict(zip(names, d_list))
for name in d_dict.keys():
    exec '%s=d_dict[name]' % name
%Rpush n1
As can be seen, I can assign a static name and push the DataFrame into the R namespace individually (as well as several at once: %Rpush n1 n2). What I cannot do is something like the following:
for name in d_dict.keys():
    %Rpush d_dict[name]
That snippet raises an exception: KeyError: u'd_dict[name]'. I also tried to deposit the dynamically named DataFrames in a list, but the list entries end up holding the data themselves rather than references by name:
df_list = []
for name in d_dict.keys():
    exec '%s=d_dict[name]' % name
    exec 'df_list.append(%s)' % name
print df_list
for df in df_list:
    %Rpush df
[ 0 1 2 3
0 0 1 2 3
1 4 5 6 7
2 8 9 10 11
3 12 13 14 15,
0 1 2 3
0 0 1 2 3
1 4 5 6 7
2 8 9 10 11
3 12 13 14 15
4 16 17 18 19]
%Rpush did not throw an exception when I looped through the list's contents, but the DataFrames could not be found in the R namespace. I have not been able to find much discussion of this topic beyond talk about the conversion of lists to R vectors. Any help would be greatly appreciated!
Rmagic's push uses the name that you give it both to look up the Python variable, and to name the R variable it creates. So it needs a valid name, not just any expression, on both sides.
There's a trick you can do to get the name from a Python variable:
d1=DataFrame(np.arange(16).reshape(4,4))
name = 'd1'
%Rpush {name}
# equivalent to %Rpush d1
But if you want to do more advanced things, it's best to get hold of the r object and use that to put your objects in. Rmagic is just a convenience wrapper over rpy2, which is a full API. So you can do:
from rpy2.robjects import r
r.assign('a', 1)
You can mix and match which interface you use - rmagic and rpy2 are talking to the same instance of R.
I have the following situation, where I'm pretty desperate.
paste("crossdata","$geno$'",1:4,"'$data",sep="")
generates 4 strings which look like that:
"crossdata$geno$'1'$data" "crossdata$geno$'2'$data" "crossdata$geno$'3'$data" "crossdata$geno$'4'$data"
I want to retrieve the corresponding data.frames for these 4 strings by evaluating them and then combine them via cbind. However, when I do something like this:
cbind(sapply(parse(text=paste("crossdata","$geno$'",i,"'$data",sep="")),eval))
that does not work. Can anybody help me out?
Thanks
datlist <- list(adat=data.frame(u=1:5,v=6:10),bdat=data.frame(x=11:15,y=16:20))
extdat <- c("datlist$adat","datlist$bdat")
do.call('cbind',lapply(extdat,function(i) eval(parse(text=i))))
u v x y
1 1 6 11 16
2 2 7 12 17
3 3 8 13 18
4 4 9 14 19
5 5 10 15 20
Of course this uses eval + parse, which usually means you are on the wrong track.
Using the combination of parse and eval is like saying that you know how to get from New York City to Boston and therefore making all your travel plans by going from your origin to New York, then to Boston, then to your destination. In some cases this may not be too bad, but it is a bit of a long detour if you are traveling from London to Paris.
You should first learn the relationship and difference between subsetting lists using $ and [[ (see ?'[[' for the documentation), and when it is, and more importantly is not, appropriate to use $. Once you understand that, you should be able to find solutions that do not require parse and eval.
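As a small illustration with a toy list (not your crossdata object): $ takes a literal name, while [[ evaluates its argument, which is what you need when the component name is held in a variable.
x <- list(geno = list(`1` = list(data = 1:3)))
nm <- "geno"
x$nm        # NULL: $ looks for an element literally called "nm"
x[[nm]]     # the geno component: [[ uses the value of nm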
Your problem may be as simple as (untested since your example is not reproducible):
do.call( cbind, lapply( 1:4, function(x) crossdata[['geno']][[x]][['data']] ) )
or possibly
do.call(cbind, lapply(as.character(1:4), function(x) crossdata$geno[[x]]$data ) )
I use R for most of my statistical analysis. However, cleaning/processing data, especially when dealing with sizes of 1Gb+, is quite cumbersome. So I use common UNIX tools for that. But my question is: is it possible to, say, run them interactively in the middle of an R session? An example: let's say file1 is the output dataset from an R process, with 100 rows. From this, for my next R process, I need a specific subset of columns 1 and 2, file2, which can be easily extracted through cut and awk. So the workflow is something like:
Some R process => file1
cut --fields=1,2 <file1 | awk something something >file2
Next R process using file2
Apologies in advance if this is a foolish question.
Try this (adding other read.table arguments if needed):
# 1
DF <- read.table(pipe("cut --fields=1,2 < data.txt | awk something_else"))
or in pure R:
# 2
DF <- read.table("data.txt")[1:2]
or to not even read the unwanted fields assuming there are 4 fields:
# 3
DF <- read.table("data.txt", colClasses = c(NA, NA, "NULL", "NULL"))
The last line could be modified for the case where we know we want the first two fields but don't know how many other fields there are:
# 3a
n <- count.fields("data.txt")[1]
read.table("data.txt", header = TRUE, colClasses = c(NA, NA, rep("NULL", n-2)))
The sqldf package can also be used. In this example we assume a csv file, data.csv, and that the desired fields are called a and b. If it's not a csv file, then use appropriate arguments to read.csv.sql to specify another separator, etc.:
# 4
library(sqldf)
DF <- read.csv.sql("data.csv", sql = "select a, b from file")
I think you may be looking for littler, which integrates R into Unix command-line pipelines.
Here is a simple example computing the file size distribution of /bin:
edd@max:~/svn/littler/examples$ ls -l /bin/ | awk '{print $5}' | ./fsizes.r
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
4 5736 23580 61180 55820 1965000 1
The decimal point is 5 digit(s) to the right of the |
0 | 00000000000000000000000000000000111111111111111111111111111122222222+36
1 | 01111112233459
2 | 3
3 | 15
4 |
5 |
6 |
7 |
8 |
9 | 5
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 | 6
edd@max:~/svn/littler/examples$
and all it takes for that is three lines:
edd@max:~/svn/littler/examples$ cat fsizes.r
#!/usr/bin/r -i
fsizes <- as.integer(readLines())
print(summary(fsizes))
stem(fsizes)
See ?system for how to run shell commands from within R.
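For instance, a minimal sketch (the file names and the awk condition are placeholders) that runs the shell step from inside the R session and reads the result back in:
system("cut --fields=1,2 < file1 | awk '$2 > 0' > file2")  # hypothetical filter step
file2_data <- read.table("file2")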
Staying in the tradition of literate programming, using e.g. org-mode and org-babel will do the job perfectly:
You can combine several different programming languages in one script and execute them separately or in sequence, export the results or the code, and so on.
It is a little bit like Sweave, except that the code blocks can be Python, bash, R, SQL, and numerous others. Check it out: org-mode and babel, and an example using different programming languages.
Apart from that, I think org-mode and babel are the perfect way of writing even pure R scripts.
Preparing data before working with it in R is quite common, and I have a lot of scripts for Unix and Perl pre-processing, and have, at various times, maintained scripts/programs for MySQL, MongoDB, Hadoop, C, etc. for pre-processing.
However, you may get better mileage for portability if you do some kinds of pre-processing in R. You might try asking new questions focused on some of these particulars. For instance, to load large amounts of data into memory mapped files, I seem to evangelize bigmemory. Another example is found in the answers (especially JD Long's) to this question.
Does anyone have any good thoughts on how to code complex tabulations in R?
I am afraid I might be a little vague on this, but I want to set up a script to create a bunch of tables of a complexity analogous to the Statistical Abstract of the United States.
e.g.: http://www.census.gov/compendia/statab/tables/09s0015.pdf
And I would like to avoid a whole bunch of rbind and cbind statements.
In SAS, I have heard, there is a table creation specification language; I was wondering if there was something of similar power for R?
Thanks!
It looks like you want to apply a number of different calculations to some data, grouping it by one field (in the example, by state)?
There are many ways to do this. See this related question.
You could use Hadley Wickham's reshape package (see the reshape homepage). For instance, if you wanted the mean, sum, and count functions applied to some data grouped by a value (this is meaningless, but it uses the airquality dataset that ships with R):
> library(reshape)
> names(airquality) <- tolower(names(airquality))
> # melt the data to just include month and temp
> aqm <- melt(airquality, id="month", measure="temp", na.rm=TRUE)
> # cast by month with the various relevant functions
> cast(aqm, month ~ ., function(x) c(mean(x),sum(x),length(x)))
month X1 X2 X3
1 5 66 2032 31
2 6 79 2373 30
3 7 84 2601 31
4 8 84 2603 31
5 9 77 2307 30
Or you can use the by() function, where the index will represent the states. In your case, rather than apply one function (e.g. mean), you can apply your own function that will do multiple tasks (depending upon your needs): for instance, function(x) { c(mean(x), length(x)) }. Then run do.call("rbind", ...) (for instance) on the output, as in the sketch below.
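A hedged sketch of that by() approach, reusing the airquality data from the reshape answer above (grouping temp by month rather than by state):
names(airquality) <- tolower(names(airquality))
res <- by(airquality$temp, airquality$month,
          function(x) c(mean = mean(x, na.rm = TRUE), n = length(x)))
do.call("rbind", res)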
Also, you might give some consideration to using a reporting package such as Sweave (with xtable) or Jeffrey Horner's brew package. There is a great post on the learnr blog about creating repetitive reports that shows how to use it.
Another option is the plyr package.
library(plyr)
names(airquality) <- tolower(names(airquality))
ddply(airquality, "month", function(x) {
  with(x, c(meantemp = mean(temp), maxtemp = max(temp), nonsense = max(temp) - min(solar.r)))
})
Here is an interesting blog posting on this topic. The author tries to create a report analogous to the United Nation's World Population Prospects: The 2008 Revision report.
Hope that helps,
Charlie