Selecting a range in a data.frame of factors - r

If I have a data.frame like this, but much bigger:
> df
# df
# 1 G0100
# 2 G0546
# 3 G1573
# 4 G1748
# 5 G2214
# 6 G2473
# 7 G2764
# 8 G3421
# 9 G5748
# 10 G8943
Is there a beautiful way to select a range between G1500 and G2500 in a much bigger data set?

We can use parse_number with between. Since the column is a factor, wrap it in as.character first (parse_number expects character input):
library(dplyr)
library(readr)
df %>%
  filter(between(parse_number(as.character(df)), 1500, 2500))

A data.table option
> setDT(df)[, .SD[between(as.numeric(gsub("\\D", "", df)), 1500, 2500)]]
df
1: G1573
2: G1748
3: G2214
4: G2473

It is not really clear from the question what the general case is, but we provide a variety of solutions based on different assumptions.
1) Assuming:
- the input shown reproducibly in the Note at the end, and
- the lower and upper bounds are both 5 characters, as in the question,
then use subset as shown. If all values in the data frame are 5 characters, the first condition can be omitted.
subset(df, nchar(df) == 5 & df >= "G1500" & df <= "G2500")
giving:
df
3 G1573
4 G1748
5 G2214
6 G2473
2) Another possibility, which relaxes the second assumption above, is the following, which gives the same output as above. The second argument of strapply is a function given in formula notation: x is the first argument, corresponding to the first capture group, and y is the second argument, corresponding to the second capture group.
library(gsubfn)
subset(df, strapply(df, "(.)(.*)",
  ~ x == 'G' & as.numeric(y) >= 1500 & as.numeric(y) <= 2500,
  simplify = TRUE))
3) If every entry in the data frame begins with G, or if we can ignore the letter, then we could just omit it.
num <- as.numeric(sub("G", "", df$df))
subset(df, num >= 1500 & num <= 2500)
4) Another variation is to read the first character and the rest into separate columns of a new data frame DF and then use subset:
DF <- read.table(text = sub("(.)", "\\1 ", df$df))
subset(df, DF$V1 == "G" & DF$V2 >= 1500 & DF$V2 <= 2500)
Note
Lines <- "
df
1 G0100
2 G0546
3 G1573
4 G1748
5 G2214
6 G2473
7 G2764
8 G3421
9 G5748
10 G8943"
df <- read.table(text = Lines)

Related

How to add a function inside sum() in R language

I have a dataframe:
SampleName <- c("A", "A", "A", "A", "B")
NumberofSample <- c(1, 2, 3, 1, 4)
SampleResult <- c(3, 6, 12, 12, 14)
Data <- data.frame(SampleName, NumberofSample, SampleResult)
head(Data)
SampleName NumberofSample SampleResult
1 A 1 3
2 A 2 6
3 A 3 12
4 A 1 12
5 B 4 14
My idea is: when SampleResult < 15 and SampleResult > 5, sample A has 6 sample sites which match the condition, and sample B has 4 sample sites which match it. So the ideal result would look like this:
SampleName Frequency
1 A 6
2 B 4
I wrote something like:
D1<- aggregate(SampleResult~SampleName, Data, function(x)sum(x<15 && x>5))
But I feel this lacks something like
x * Data$NumberofSample[x]
So my question is what's the right way to code? Thank you
We can use dplyr. Grouped by 'SampleName', subset the 'NumberofSample' that meets the condition based on 'SampleResult' and get the sum
library(dplyr)
Data %>%
  group_by(SampleName) %>%
  summarise(Frequency = sum(NumberofSample[SampleResult < 15 &
                                             SampleResult > 5]))
# A tibble: 2 x 2
# SampleName Frequency
# <chr> <int>
#1 A 6
#2 B 4
If we prefer aggregate:
aggregate(cbind(Frequency = NumberofSample * (SampleResult < 15 &
                  SampleResult > 5)) ~ SampleName, Data, sum)
# SampleName Frequency
#1 A 6
#2 B 4
Note that && returns a single TRUE/FALSE value
(1:3 > 1) && (2:4 > 2)
instead of a logical vector of the same length, which is why the element-wise & is used above. (In R >= 4.3, && on vectors longer than one is an error.)
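To see the difference (a minimal sketch):
x <- 1:3 > 1   # FALSE TRUE TRUE
y <- 2:4 > 2   # FALSE TRUE TRUE
x & y          # element-wise: FALSE TRUE TRUE
sum(x & y)     # 2 -- counts positions meeting both conditions
# x && y       # one value in older R; an error in R >= 4.3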
akrun's solution is spot-on. But it so happens that {dplyr} offers a convenience function for this kind of computation: count.
In its most common form it counts the number of rows in each group. However, it can also perform a weighted sum, and in your case we simply weight by NumberofSample, restricted to the rows where SampleResult lies between your chosen bounds:
Data %>% count(
  SampleName,
  wt = NumberofSample[SampleResult > 5 & SampleResult < 15]
)
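On the example data this gives the desired frequencies (count stores the weighted sum in a column named n):
#   SampleName     n
# 1 A              6
# 2 B              4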
Maybe the following form of aggregate is simpler. I subset Data based on the condition you want and then take the length of each group (note that this counts matching rows rather than summing NumberofSample):
inx <- with(Data, 5 < SampleResult & SampleResult < 15)
aggregate(SampleResult ~ SampleName, Data[inx, ], length)
#SampleName SampleResult
#1 A 3
#2 B 1
Another possibility would be
subData <- subset(Data, 5 < SampleResult & SampleResult < 15)
aggregate(SampleResult ~ SampleName, subData, length)
but I think the logical index solution is better since its memory usage is smaller.

Replace nth consecutive occurrence of a value

I want to replace the nth consecutive occurrence of a particular code in my data frame. This should be a relatively easy task but I can't think of a solution.
Given a data frame
df <- data.frame(Values = c(1, 4, 5, 6, 3, 3, 2),
                 Code = c(1, 1, 2, 2, 2, 1, 1))
I want a result
df_result <- data.frame(Values = c(1, 4, 5, 6, 3, 3, 2),
                        Code = c(1, 0, 2, 2, 2, 1, 0))
The data frame is time-ordered so I need to keep the same order after replacing the values. I guess the nth() or duplicated() functions could be useful here, but I'm not sure how to use them. What I'm missing is a function that counts the number of consecutive occurrences of a given value. Once I have it, I could use it to replace the nth occurrence.
This question had some ideas that I explored but still didn't solve my problem.
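For reference, such a within-run counter can be built in base R with rle (a minimal sketch; rleid from data.table, used in the answers below, packages the same idea):
r <- rle(df$Code)                 # runs of identical Code values
within_run <- sequence(r$lengths) # 1 2 1 2 3 1 2 -- position within each run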
EDIT:
After an answer by @Gregor I wrote the following function which solves the problem:
library(data.table)
library(dplyr)
replace_nth <- function(x, nth, code) {
  y <- data.table(x)
  y <- y[, code_rleid := rleid(y$Code)]
  y <- y[, seq := seq_along(Code), by = code_rleid]
  y <- y[seq == nth & Code == code, Code := 0]
  drop.cols <- c("code_rleid", "seq")
  y %>% select(-one_of(drop.cols)) %>% data.frame() %>% return()
}
To get the solution, simply run replace_nth(df, 2, 1)
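This reproduces df_result from above:
replace_nth(df, 2, 1)
#   Values Code
# 1      1    1
# 2      4    0
# 3      5    2
# 4      6    2
# 5      3    2
# 6      3    1
# 7      2    0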
Using data.table:
library(data.table)
setDT(df)
df[, code_rleid := rleid(df$Code)]
df[, seq := seq_along(Code), by = code_rleid]
df[seq == 2 & Code == 1, Code := 0]
df
# Values Code code_rleid seq
# 1: 1 1 1 1
# 2: 4 0 1 2
# 3: 5 2 2 1
# 4: 6 2 2 2
# 5: 3 2 2 3
# 6: 3 1 3 1
# 7: 2 0 3 2
You could combine some of these steps (and drop the extra columns afterwards); I'll leave it spelled out so you can modify it as you like.
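A combined version might look like this (a sketch: rleid is computed inline in by, and the helper column is dropped at the end):
setDT(df)[, seq := seq_along(Code), by = rleid(Code)][
  seq == 2 & Code == 1, Code := 0][, seq := NULL]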

How to count unique values from a vector? [duplicate]

Let's say I have:
v = rep(c(1,2, 2, 2), 25)
Now, I want to count the number of times each unique value appears. unique(v) returns what the unique values are, but not how many times each of them appears.
> unique(v)
[1] 1 2
I want something that gives me
length(v[v==1])
[1] 25
length(v[v==2])
[1] 75
but as a more general one-liner :) Something close (but not quite) like this:
#<doesn't work right> length(v[v==unique(v)])
Perhaps table is what you are after?
dummyData = rep(c(1,2, 2, 2), 25)
table(dummyData)
# dummyData
# 1 2
# 25 75
## or another presentation of the same data
as.data.frame(table(dummyData))
# dummyData Freq
# 1 1 25
# 2 2 75
If you have multiple factors (i.e., a multi-dimensional data frame), you can use the dplyr package to count unique values in each combination of factors:
library("dplyr")
data %>% group_by(factor1, factor2) %>% summarize(count=n())
It uses the pipe operator %>% to chain method calls on the data frame data.
Here is a one-line approach using aggregate:
> aggregate(data.frame(count = v), list(value = v), length)
value count
1 1 25
2 2 75
length(unique(df$col)) is the simplest way I can see.
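Note that this gives the number of distinct values, not how often each one occurs:
length(unique(v))  # 2 -- two distinct values (1 and 2), not their frequencies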
The table() function is a good way to go, as Chase suggested. If you are analyzing a large dataset, an alternative is the .N function in the data.table package. Make sure you have installed the data.table package first:
install.packages("data.table")
Code:
# Import the data.table package
library(data.table)
# Generate a data table object, which draws a number 10^7 times
# from 1 to 10 with replacement
DT<-data.table(x=sample(1:10,1E7,TRUE))
# Count Frequency of each factor level
DT[,.N,by=x]
To get an un-dimensioned integer vector that contains the count of unique values, use c().
dummyData = rep(c(1, 2, 2, 2), 25) # Chase's reproducible data
c(table(dummyData)) # get un-dimensioned integer vector
1 2
25 75
str(c(table(dummyData)) ) # confirm structure
Named int [1:2] 25 75
- attr(*, "names")= chr [1:2] "1" "2"
This may be useful if you need to feed the counts of unique values into another function, and is shorter and more idiomatic than the t(as.data.frame(table(dummyData)))[,2] posted in a comment to Chase's answer. Thanks to Ricardo Saporta who pointed this out to me here.
This works for me. Take your vector v
length(summary(as.factor(v),maxsum=50000))
Comment: set maxsum to be large enough to capture the number of unique values
or with the magrittr package
v %>% as.factor %>% summary(maxsum=50000) %>% length
Also making the values categorical and calling summary() would work.
> v = rep(as.factor(c(1,2, 2, 2)), 25)
> summary(v)
1 2
25 75
You can also try a tidyverse approach (using as_tibble; the older as.tibble spelling is deprecated):
library(tidyverse)
dummyData %>%
  as_tibble() %>%
  count(value)
# A tibble: 2 x 2
value n
<dbl> <int>
1 1 25
2 2 75
If you need to have the number of unique values as an additional column in the data frame containing your values (a column which may represent sample size for example), plyr provides a neat way:
data_frame <- data.frame(v = rep(c(1,2, 2, 2), 25))
library("plyr")
data_frame <- ddply(data_frame, .(v), transform, n = length(v))
You can also try dplyr::count
df <- tibble(x=c('a','b','b','c','c','d'), y=1:6)
dplyr::count(df, x, sort = TRUE)
# A tibble: 4 x 2
x n
<chr> <int>
1 b 2
2 c 2
3 a 1
4 d 1
If you want to run unique on a data.frame (e.g., train.data), and also get the counts (which can be used as the weight in classifiers), you can do the following:
unique.count = function(train.data, all.numeric = FALSE) {
  # first convert each row in the data.frame to a string
  train.data.str = apply(train.data, 1, function(x) paste(x, collapse = ','))
  # use table to index and count the strings
  train.data.str.t = table(train.data.str)
  # get the unique data string from the row.names
  train.data.str.uniq = row.names(train.data.str.t)
  weight = as.numeric(train.data.str.t)
  # convert the unique data string to data.frame
  if (all.numeric) {
    train.data.uniq = as.data.frame(t(apply(cbind(train.data.str.uniq), 1,
      function(x) as.numeric(unlist(strsplit(x, split = ","))))))
  } else {
    train.data.uniq = as.data.frame(t(apply(cbind(train.data.str.uniq), 1,
      function(x) unlist(strsplit(x, split = ",")))))
  }
  names(train.data.uniq) = names(train.data)
  list(data = train.data.uniq, weight = weight)
}
I know there are many other answers, but here is another way to do it using the sort and rle functions. The function rle stands for Run Length Encoding. It can be used for counts of runs of numbers (see the R man docs on rle), but can also be applied here.
test.data = rep(c(1, 2, 2, 2), 25)
rle(sort(test.data))
## Run Length Encoding
## lengths: int [1:2] 25 75
## values : num [1:2] 1 2
If you capture the result, you can access the lengths and values as follows:
## rle returns a list with two items.
result.counts <- rle(sort(test.data))
result.counts$lengths
## [1] 25 75
result.counts$values
## [1] 1 2
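If you want the result in the same named-vector shape that table gives, you can combine the two components (a small convenience, not part of rle itself):
setNames(result.counts$lengths, result.counts$values)
##  1  2
## 25 75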
count_unique_words <- function(wlist) {
  ucountlist = list()
  unamelist = c()
  for (i in wlist) {
    if (is.element(i, unamelist)) {
      ucountlist[[i]] <- ucountlist[[i]] + 1
    } else {
      ucountlist[[i]] <- 1
      unamelist <- c(unamelist, i)
    }
  }
  ucountlist
}
expt_counts <- count_unique_words(population)
for (i in names(expt_counts))
  cat(i, expt_counts[[i]], "\n")
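Base R's table computes the same counts in a single call (assuming population is the character vector used above; table returns the counts sorted by value):
table(population)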

How to find the largest range from a series of numbers using R?

I have a data set where length and age correspond with individual items (ID #); there are 4 different items, as you can see in the data set below.
range(dataset$length)
gives me the overall range of the length for all items. But I need to compare ranges to determine which item (ID #) has the largest range in length relative to the other 3.
length age ID #
3.5 5 1
7 10 1
10 15 1
4 5 2
8 10 2
13 15 2
3 5 3
7 10 3
9 15 3
4 5 4
5 10 4
7 15 4
This gives you the differences in ranges (assuming the data frame is named dat):
lapply(with(dat, tapply(length, ID, range)), diff)
And you can wrap which.max around that list to get the ID associated with the largest value:
which.max(lapply(with(dat, tapply(length, ID, range)), diff))
2
2
In base R:
mins <- tapply(df$length, df$ID, min)
maxs <- tapply(df$length, df$ID, max)
unique(df$ID)[which.max(maxs - mins)]
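The same idea in one step with tapply (a compact variant, again assuming the data frame is named df):
ranges <- tapply(df$length, df$ID, function(x) diff(range(x)))
names(ranges)[which.max(ranges)]
# "2"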
group_by in dplyr may be helpful:
library(dplyr)
dataset %>%
  group_by(ID) %>%
  summarize(ID_range = max(length) - min(length))
The above code is equivalent to the following; the only difference is that the first version chains the steps with %>%:
library(dplyr)
dataset <- group_by(dataset, ID)
summarize(dataset, ID_range = max(length) - min(length))
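To return only the item with the largest range, you can add slice_max (available in dplyr >= 1.0; in older versions, arrange(desc(ID_range)) and take the first row):
dataset %>%
  group_by(ID) %>%
  summarize(ID_range = max(length) - min(length)) %>%
  slice_max(ID_range, n = 1)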
An easy approach which doesn't use dplyr, though perhaps less elegant, is the which function.
range(dataset$length[which(dataset$ID == 1)])
range(dataset$length[which(dataset$ID == 2)])
range(dataset$length[which(dataset$ID == 3)])
range(dataset$length[which(dataset$ID == 4)])
You could also make a function that gives you the actual range (the difference between the max and the min) and use lapply to show you the IDs paired with their ranges.
largest_range <- function(id) {
  rbind(id,
        max(data$length[which(data$id == id)]) -
        min(data$length[which(data$id == id)]))
}
lapply(X = unique(data$id), FUN = largest_range)
