I would like to sum a single column of data that was output by an sqldf call in R.
I have a .csv file that contains groupings of sites, each with a unique ID and an associated area. For example:
occurrenceID sarea
{0255531B-904F-4E2D-B81D-797A21165A2F} 0.30626786
{0255531B-904F-4E2D-B81D-797A21165A2F} 0.49235953
{0255531B-904F-4E2D-B81D-797A21165A2F} 0.03490536
{0255531B-904F-4E2D-B81D-797A21165A2F} 0.00001389
{175A4B1C-CA8C-49F6-9CD6-CED9187579DC} 0.0302389
{175A4B1C-CA8C-49F6-9CD6-CED9187579DC} 0.01360811
{1EC60400-0AD0-4DB5-B815-221C4123AE7F} 0.08412911
{1EC60400-0AD0-4DB5-B815-221C4123AE7F} 0.01852466
I used the code below in R to pull out the largest area from each grouping of unique IDs.
> MyData <- read.csv(file="sacandaga2.csv", header=TRUE, sep=",")
> sqldf("select max(sarea),occurrenceID from MyData group by occurrenceID")
This produced the following output:
max(sarea) occurrenceID
1 0.49235953 {0255531B-904F-4E2D-B81D-797A21165A2F}
2 0.03023890 {175A4B1C-CA8C-49F6-9CD6-CED9187579DC}
3 0.08412911 {1EC60400-0AD0-4DB5-B815-221C4123AE7F}
4 0.00548259 {2412E244-2E9A-4477-ACC6-1EB02503BE75}
5 0.00295924 {40450574-ABEB-48E3-9BE5-09B5AB65B465}
6 0.01403846 {473FB631-D398-46B7-8E85-E63540BDFF92}
7 0.00257519 {4BABDE22-E8E0-435E-B60D-0BB9A84E1489}
8 0.02158115 {5F616A33-B028-46B1-AD92-89EAC1660C41}
9 0.00191211 {70067496-25B6-4337-8C70-782143909EF9}
10 0.03049355 {7F858EBB-132E-483F-BA36-80CE889373F5}
11 0.03947298 {9A579565-57EC-4E46-95ED-79724FA6F2AB}
12 0.02464722 {A9010BA3-0FE1-40B1-96A7-21122261A003}
13 0.00136672 {AAD710BF-1539-4235-87F1-34B66CF90781}
14 0.01139146 {AB1286C3-DBE3-467B-99E1-AEEF88A1B5B2}
15 0.07954269 {BED0433A-7167-4184-A25F-B9DBD358AFFB}
16 0.08401067 {C4EF0F45-5BF7-4F7C-BED8-D6B2DB718CB2}
17 0.04289261 {C58AC2C6-BDBE-4FE5-BD51-D70BBDFB4DB5}
18 0.03151558 {D4230F9C-80E4-454A-9D5D-0E373C6DCD9A}
19 0.00403585 {DD76A03A-CFBF-41E9-A571-03DA707BEBDA}
20 0.00007336 {E20DE254-8A0F-40BE-90D2-D6B71880E2A8}
21 9.81847859 {F382D5A6-F385-426B-A543-F5DE13F94564}
22 0.00815881 {F9032905-074A-468F-B60E-26371CF480BB}
23 0.24717113 {F9E5DC3C-4602-4C80-B00B-2AF1D605A265}
Now I would like to sum all the values in the max(sarea) column. What is the best way to accomplish this?
You can do this either inside sqldf or in plain R. For example, keep your existing query and sum its result in R:
# assign your original query result
grouped_sum = sqldf("select max(sarea),occurrenceID from MyData group by occurrenceID")
# and sum in R
sum(grouped_sum$`max(sarea)`)
# you might prefer to use a standard column name so you don't need backticks
grouped_sum = sqldf(
  "select max(sarea) as max_sarea, occurrenceID
   from MyData
   group by occurrenceID"
)
sum(grouped_sum$max_sarea)
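If you want to skip SQL for this step entirely, a plain base-R sketch over the same MyData gives the same total:
# base R: take the max area within each occurrenceID, then sum those maxima
sum(tapply(MyData$sarea, MyData$occurrenceID, max))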
If the intention is to do this in a single sqldf call, use a WITH clause (a common table expression):
library(sqldf)
sqldf("with tmpdat as (
         select max(sarea) as mxarea, occurrenceID
         from MyData
         group by occurrenceID
       )
       select sum(mxarea) as smxarea
       from tmpdat")
#     smxarea
# 1 0.6067275
data
MyData <-
structure(list(occurrenceID = c("{0255531B-904F-4E2D-B81D-797A21165A2F}",
"{0255531B-904F-4E2D-B81D-797A21165A2F}", "{0255531B-904F-4E2D-B81D-797A21165A2F}",
"{0255531B-904F-4E2D-B81D-797A21165A2F}", "{175A4B1C-CA8C-49F6-9CD6-CED9187579DC}",
"{175A4B1C-CA8C-49F6-9CD6-CED9187579DC}", "{1EC60400-0AD0-4DB5-B815-221C4123AE7F}",
"{1EC60400-0AD0-4DB5-B815-221C4123AE7F}"), sarea = c(0.30626786,
0.49235953, 0.03490536, 1.389e-05, 0.0302389, 0.01360811, 0.08412911,
0.01852466)), class = "data.frame", row.names = c(NA, -8L))
You can do it in a single query by taking the sum of the per-group maximum values:
sqldf("select sum(max_sarea) as sum_of_max_sarea
       from (select max(sarea) as max_sarea, occurrenceID
             from Mydata
             group by occurrenceID)")
# sum_of_max_sarea
# 1 0.6067275
Data:
Mydata <- structure(list(occurrenceID = structure(c(1L, 1L, 1L, 1L, 2L, 2L, 3L, 3L),
.Label = c("0255531B-904F-4E2D-B81D-797A21165A2F", "175A4B1C-CA8C-49F6-9CD6-CED9187579DC",
"1EC60400-0AD0-4DB5-B815-221C4123AE7F"), class = "factor"),
sarea = c(0.30626786, 0.49235953, 0.03490536, 1.389e-05, 0.0302389,
0.01360811, 0.08412911, 0.01852466)), class = "data.frame",
row.names = c(NA, -8L))
If DF is the last data frame shown in the question, this sums the numeric column:
sqldf("select sum([max(sarea)]) as sum from DF")
## sum
## 1 11.07853
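If DF is already in your R session and you just want the total without another sqldf call, a one-line base-R sketch (the non-syntactic column name has to be quoted):
sum(DF[["max(sarea)"]])
## [1] 11.07853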
Note
We assume this data frame, shown here in reproducible form:
Lines <- "max(sarea) occurrenceID
1 0.49235953 {0255531B-904F-4E2D-B81D-797A21165A2F}
2 0.03023890 {175A4B1C-CA8C-49F6-9CD6-CED9187579DC}
3 0.08412911 {1EC60400-0AD0-4DB5-B815-221C4123AE7F}
4 0.00548259 {2412E244-2E9A-4477-ACC6-1EB02503BE75}
5 0.00295924 {40450574-ABEB-48E3-9BE5-09B5AB65B465}
6 0.01403846 {473FB631-D398-46B7-8E85-E63540BDFF92}
7 0.00257519 {4BABDE22-E8E0-435E-B60D-0BB9A84E1489}
8 0.02158115 {5F616A33-B028-46B1-AD92-89EAC1660C41}
9 0.00191211 {70067496-25B6-4337-8C70-782143909EF9}
10 0.03049355 {7F858EBB-132E-483F-BA36-80CE889373F5}
11 0.03947298 {9A579565-57EC-4E46-95ED-79724FA6F2AB}
12 0.02464722 {A9010BA3-0FE1-40B1-96A7-21122261A003}
13 0.00136672 {AAD710BF-1539-4235-87F1-34B66CF90781}
14 0.01139146 {AB1286C3-DBE3-467B-99E1-AEEF88A1B5B2}
15 0.07954269 {BED0433A-7167-4184-A25F-B9DBD358AFFB}
16 0.08401067 {C4EF0F45-5BF7-4F7C-BED8-D6B2DB718CB2}
17 0.04289261 {C58AC2C6-BDBE-4FE5-BD51-D70BBDFB4DB5}
18 0.03151558 {D4230F9C-80E4-454A-9D5D-0E373C6DCD9A}
19 0.00403585 {DD76A03A-CFBF-41E9-A571-03DA707BEBDA}
20 0.00007336 {E20DE254-8A0F-40BE-90D2-D6B71880E2A8}
21 9.81847859 {F382D5A6-F385-426B-A543-F5DE13F94564}
22 0.00815881 {F9032905-074A-468F-B60E-26371CF480BB}
23 0.24717113 {F9E5DC3C-4602-4C80-B00B-2AF1D605A265}"
DF <- read.table(text = Lines, check.names = FALSE)
Related
I am trying to calculate a ratio using this formula: log2(_5p/_3p).
I have a data frame in R whose entries have the same name except for the last part, which is either _3p or _5p. I want to apply this operation, log2(_5p/_3p), for each specific name.
For instance for the first two rows the result will be like this:
LQNS02277998.1_30988 log2(40/148)= -1.887525
Ideally I want to create a new data frame with the results where only the common part of the name is kept.
LQNS02277998.1_30988 -1.887525
How can I do this in R?
> head(dup_res_LC1_b_2)
# A tibble: 6 x 2
microRNAs n
<chr> <int>
1 LQNS02277998.1_30988_3p 148
2 LQNS02277998.1_30988_5p 40
3 Dpu-Mir-279-o6_LQNS02278070.1_31942_3p 4
4 Dpu-Mir-279-o6_LQNS02278070.1_31942_5p 4
5 LQNS02000138.1_777_3p 73
6 LQNS02000138.1_777_5p 12
structure(list(microRNAs = c("LQNS02277998.1_30988_3p",
"LQNS02277998.1_30988_5p", "Dpu-Mir-279-o6_LQNS02278070.1_31942_3p",
"Dpu-Mir-279-o6_LQNS02278070.1_31942_5p", "LQNS02000138.1_777_3p",
"LQNS02000138.1_777_5p"), n = c(148L, 40L, 4L, 4L, 73L, 12L)), row.names = c(NA,
-6L), class = c("tbl_df", "tbl", "data.frame"))
We can do a grouped operation: remove the trailing substring (_3p or _5p) with str_remove to get the common name, group by it, and then take the log2 of the ratio of the paired 'n' values:
library(dplyr)
library(stringr)
df1 %>%
  group_by(grp = str_remove(microRNAs, "_[^_]+$")) %>%
  mutate(new = log2(last(n) / first(n)))
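Since the question asks for a new data frame that keeps only the common part of the name, a summarise() variant of the same idea is one possible follow-up (a sketch only; like the code above, it relies on the _3p row coming before the _5p row within each group, as in the posted data):
df1 %>%
  group_by(grp = str_remove(microRNAs, "_[^_]+$")) %>%
  summarise(log2_ratio = log2(last(n) / first(n)))
# e.g. for LQNS02277998.1_30988: log2(40/148) = -1.887525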
I just need to remove the replicate numbers and the letter "R" from the end of every value in the strain column and put the results in a new column, mutant, preferably using dplyr so I can pipe the results forward.
For example
print(df)
strain measurement
1 CK522R1 75
2 CN344attBR1 50
3 GL065R13 32
4 GL078R100 27
Desired Output
strain measurement mutant
1 CK522R1 75 CK522
2 CN344attBR1 50 CN344attB
3 GL065R13 32 GL065
4 GL078R100 27 GL078
Reproducible Data
structure(list(strain = structure(1:4, .Label = c("CK522R1",
"CN344attBR1", "GL065R13", "GL078R100"), class = "factor"), measurement = c(75,
50, 32, 27)), class = "data.frame", row.names = c(NA, -4L))
From d.b's comment:
library(dplyr)
df %>%
  mutate(mutant = sub("R\\d+$", "", strain),
         replicate = regmatches(strain, regexpr("R\\d+$", strain)))
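An equivalent sketch with tidyr::extract pulls out both pieces in one step (the mutant and replicate names match the code above; remove = FALSE keeps the original strain column):
library(dplyr)
library(tidyr)
df %>%
  mutate(strain = as.character(strain)) %>%   # strain is a factor in the posted data
  extract(strain, into = c("mutant", "replicate"),
          regex = "^(.*?)(R\\d+)$", remove = FALSE)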
I am having some trouble cleaning up my data. It consists of a list of sold houses and is made up of the sale price, number of rooms, m2, and the address.
As seen below, the address is in one string.
head(DF, 4)
Address                        Price   m2  Rooms
Petersvej 1772900 Hoersholm    10.000  210  5
Annasvej 2B2900 Hoersholm      15.000  230  4
Krænsvej 125800 Lyngby C       10.000  210  5
A Mivs Alle 119800 Hjoerring    1.300   70  3
The format of the address column is: road name, road number, followed by a 4-digit postal code and the city name (sometimes two words).
I also need to extract the postal code. I have been looking at the 'stringi' package but have not been able to find any suitable examples.
Any pointers are very much appreciated.
1) Using separate from tidyr, split the subfields of Address into 3 fields, merging anything left over into the last one, and then use separate again to split off the last 4 digits of the Number column generated by the first separate.
library(dplyr)
library(tidyr)
DF %>%
  separate(Address, into = c("Road", "Number", "City"), extra = "merge") %>%
  separate(Number, into = c("StreetNo", "Postal"), sep = -4)
giving:
       Road StreetNo Postal      City Price  m2 Rooms      CITY
1 Petersvej       77   2900 Hoersholm    10 210     5 Hoersholm
2  Annasvej     121B   2900 Hoersholm    15 230     4 Hoersholm
3  Krænsvej       12   5800  Lyngby C    10 210     5         C
2) Alternatively, insert commas between the subfields of Address and then use separate to split the subfields out. It gives the same result as (1) on the input shown in the Note below.
DF %>%
  mutate(Address = sub("(\\S.*) +(\\S+)(\\d{4}) +(.*)", "\\1,\\2,\\3,\\4", Address)) %>%
  separate(Address, into = c("Road", "Number", "Postal", "City"), sep = ",")
Note
The input DF in reproducible form is:
DF <-
structure(list(Address = structure(c(3L, 1L, 2L), .Label = c("Annasvej 121B2900 Hoersholm",
"Krænsvej 125800 Lyngby C", "Petersvej 772900 Hoersholm"), class = "factor"),
Price = c(10, 15, 10), m2 = c(210L, 230L, 210L), Rooms = c(5L,
4L, 5L), CITY = structure(c(2L, 2L, 1L), .Label = c("C",
"Hoersholm"), class = "factor")), class = "data.frame", row.names = c(NA,
-3L))
Update
Added and fixed (2).
Check out the cSplit function from the splitstackshape package.
library(splitstackshape)
# split the Address column at each space; this creates columns Address_1 ... Address_4
df_new <- cSplit(df, splitCols = "Address", sep = " ")
# then use ifelse() to combine the last two columns into the city
# (one-word city names leave Address_4 as NA)
df_new$City <- ifelse(is.na(df_new$Address_4),
                      as.character(df_new$Address_3),
                      paste(df_new$Address_3, df_new$Address_4, sep = " "))
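The question also asks for the postal code, which after the split is still fused to the street number in Address_2; a small follow-up sketch (assuming the postal code is always the trailing four digits of that field):
# peel the trailing 4-digit postal code off the combined number/postal field
df_new$Postal <- sub("^.*(\\d{4})$", "\\1", df_new$Address_2)
df_new$StreetNo <- sub("\\d{4}$", "", df_new$Address_2)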
One way to do this is with a regular expression.
In this instance you can use a simple regular expression that matches the run of alphabetical characters and spaces leading up to the end of the string, and then trim the whitespace off.
library(stringr)
DF <- data.frame(Address=c("Petersvej 772900 Hoersholm",
"Annasvej 121B2900 Hoersholm",
"Krænsvej 125800 Lyngby C"))
DF$CITY <- str_trim(str_extract(DF$Address, "[a-zA-Z ]+$"))
This will give you the following output:
                      Address      CITY
1  Petersvej 772900 Hoersholm Hoersholm
2 Annasvej 121B2900 Hoersholm Hoersholm
3    Krænsvej 125800 Lyngby C  Lyngby C
In R the stringr package is convenient for regex work like this because it supports multiple-group capture, which in this example would let you separate every component of the address with a single expression.
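For example, a minimal sketch with str_match, assuming every address ends with a street number, a fused 4-digit postal code, and then the city name:
library(stringr)
# column 1 of the result is the full match; columns 2-5 are the capture groups
parts <- str_match(as.character(DF$Address), "^(.*) (\\S+?)(\\d{4}) (.+)$")
DF$Road <- parts[, 2]
DF$StreetNo <- parts[, 3]
DF$Postal <- parts[, 4]
DF$City <- parts[, 5]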
As part of a project, I am using R to analyze some data. I am stuck on deriving a few values from an existing dataset that I imported from a csv file.
The structure of the file is shown below under "Mydata structure".
For my analysis, I want to create another column that holds the current value of x minus its previous value; for the first row of every unique i, the new column should simply keep the current value of x. I am new to R and have been trying various approaches for some time but still cannot figure out a way to do this. Any suggestions on an approach I could follow would be appreciated.
Mydata structure
structure(list(t = 1:10, x = c(34450L, 34469L, 34470L, 34483L,
34488L, 34512L, 34530L, 34553L, 34575L, 34589L), y = c(268880.73342868,
268902.322359863, 268938.194698248, 268553.521856105, 269175.38273083,
268901.619719038, 268920.864512966, 269636.604121984, 270191.206593437,
269295.344751692), i = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L)), .Names = c("t", "x", "y", "i"), row.names = c(NA, 10L), class = "data.frame")
You can use the package data.table to obtain what you want:
library(data.table)
setDT(MyData)[, x_diff := c(x[1], diff(x)), by=i]
MyData
# t x i x_diff
# 1: 1 34287 1 34287
# 2: 2 34789 1 502
# 3: 3 34409 1 -380
# 4: 4 34883 1 474
# 5: 5 34941 1 58
# 6: 6 34045 2 34045
# 7: 7 34528 2 483
# 8: 8 34893 2 365
# 9: 9 34551 2 -342
# 10: 10 34457 2 -94
Data:
set.seed(123)
MyData <- data.frame(t=1:10, x=sample(34000:35000, 10, replace=T), i=rep(1:2, e=5))
You can use the diff() function. Note that if you want to add a new column to your existing data frame, diff() returns a vector one element shorter than the column it is applied to, so in your case you can try this:
# if your data frame is called MyData
MyData$newX = c(NA,diff(MyData$x))
That inserts an NA as the first entry of the new column, and the remaining values are the differences between consecutive values of your "x" column.
UPDATE:
You can write a simple loop that subsets the data for each unique value of "i" and then calculates the differences of the x values within that subset:
# initialize an empty data frame to collect the results
newdf = NULL
values = unique(MyData$i)
for(j in 1:length(values)){
  data1 = MyData[MyData$i == values[j], ]   # subset the rows for this value of i
  data1$newX = c(NA, diff(data1$x))         # NA first, then consecutive differences
  newdf = rbind(newdf, data1)
}
# and then if you want to overwrite your original data frame with the result
MyData = newdf
# remove some variables
rm(data1, newdf, values)
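As a side note, the same grouped calculation can be done without an explicit loop; a base-R sketch with ave(), keeping the first value of x per group as the question asks (the same c(x[1], diff(x)) idea as the data.table answer above):
MyData$newX <- ave(MyData$x, MyData$i, FUN = function(v) c(v[1], diff(v)))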
I am trying to figure out how to get the time between consecutive events when events are stored as a column of dates in a dataframe.
sampledf=structure(list(cust = c(1L, 1L, 1L, 1L), date = structure(c(9862,
9879, 10075, 10207), class = "Date")), .Names = c("cust", "date"
), row.names = c(NA, -4L), class = "data.frame")
I can get an answer with
as.numeric(rev(rev(difftime(c(sampledf$date[-1],0),sampledf$date))[-1]))
# [1] 17 196 132
but it is really ugly. Among other things, I only know how to exclude the first item in a vector, but not the last, so I have to rev() twice to drop the last value.
Is there a better way?
By the way, I will use ddply to do this to a larger set of data for each cust id, so the solution would need to work with ddply.
library(plyr)
ddply(sampledf,
c("cust"),
summarize,
daysBetween = as.numeric(rev(rev(difftime(c(date[-1],0),date))[-1]))
)
Thank you!
Are you looking for this?
as.numeric(diff(sampledf$date))
# [1] 17 196 132
To remove the last element, use head:
head(as.numeric(diff(sampledf$date)), -1)
# [1] 17 196
require(plyr)
ddply(sampledf, .(cust), summarise, daysBetween = as.numeric(diff(date)))
# cust daysBetween
# 1 1 17
# 2 1 196
# 3 1 132
You can just use diff.
as.numeric(diff(sampledf$date))
To leave off the last element, you can do:
vec[-length(vec)]   # where vec is your vector
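For example, applied to the diff() result above:
d <- as.numeric(diff(sampledf$date))
d[-length(d)]
# [1] 17 196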
In this case I don't think you need to leave anything off though, because diff is already one element shorter:
test <- ddply(sampledf,
              c("cust"),
              summarize,
              daysBetween = as.numeric(diff(date)))
test
# cust daysBetween
#1 1 17
#2 1 196
#3 1 132