R function to count the number of times a value changes

I am new to R. I have 3 columns in a dataset, named A1, A2 and ChangeInA, that look like this:
A1  A2  ChangeInA
10  20  10
24  30  24
22  35  35
54  65  65
15  29  15
The value of 'ChangeInA' in each row is taken from either A1 or A2.
I want to determine the number of times the 3rd column ('ChangeInA') changes.
Is there any function in R to do that?
Let me explain:
From the table, we can see that the 'ChangeInA' column switched twice: first at row 3, and again at row 5 (note that 'ChangeInA' can only take the value of A1 or A2). So I want an R function that prints how many times the switch happened. I can see the changes in the dataset, but I need to show it in R.
Below is code I tried, based on previous answers:
change <- rleid(rawData$ChangeInA == rawData$A1)
This gave me a run ID for every row.
change <- max(rleid(rawData$ChangeInA == rawData$A1))
This gave me the total number of runs rather than the number of changes.

One option is to use rleid from data.table to keep track of when a change occurs in ChangeInA, applied to a logical vector indicating whether ChangeInA equals A1. The maximum run ID minus one then gives the total number of changes.
library(data.table)
max(rleid(df$ChangeInA == df$A1) - 1)
# 2
Or we could use dplyr with rleid:
library(dplyr)
df %>%
  mutate(rlid = rleid(A1 == ChangeInA) - 1) %>%
  pull(rlid) %>%
  last()
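If you'd rather avoid the data.table dependency, base R's rle() gives the same count (a sketch, using the df from the Data section):

```r
# df as in the Data section below
df <- data.frame(A1 = c(10, 24, 22, 54, 15),
                 A2 = c(20, 30, 35, 65, 29),
                 ChangeInA = c(10, 24, 35, 65, 15))
# Runs of TRUE/FALSE mark stretches where ChangeInA tracks A1;
# the number of runs minus one is the number of switches
runs <- rle(df$ChangeInA == df$A1)
length(runs$lengths) - 1
# 2
```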
Data
df <- structure(list(A1 = c(10L, 24L, 22L, 54L, 15L), A2 = c(20L, 30L,
35L, 65L, 29L), ChangeInA = c(10L, 24L, 35L, 65L, 15L)), class = "data.frame", row.names = c(NA,
-5L))

Paste value from for loop into data frame R

I have two dataframes in R, recurrent and L1HS. I am trying to find a way to do this:
If a sequence in recurrent matches sequence in L1HS, paste a value from a column in recurrent into new column in L1HS.
The recurrent dataframe looks like this:
> head(recurrent)
chr start end X Y level unique
1: chr4 56707846 56708347 0 38 03 chr4_56707846_56708347
2: chr1 20252181 20252682 0 37 03 chr1_20252181_20252682
3: chr2 224560903 224561404 0 37 03 chr2_224560903_224561404
4: chr5 131849595 131850096 0 36 03 chr5_131849595_131850096
5: chr7 46361610 46362111 0 36 03 chr7_46361610_46362111
6: chr1 20251169 20251670 0 36 03 chr1_20251169_20251670
The L1HS dataset contains many columns containing genetic sequence basepairs and a column "Sequence" that should hopefully have some matches with "unique" in the recurrent data frame, like so:
> head(L1HS$Sequence)
"chr1_35031657_35037706"
"chr1_67544575_67550598"
"chr1_81404889_81410942"
"chr1_84518073_84524089"
"chr1_87144764_87150794"
I know how to search for matches using
test <- recurrent$unique %in% L1HS$Sequence
to get the Booleans:
> head(test)
[1] FALSE FALSE FALSE FALSE FALSE FALSE
But I have a couple of problems from here. If the sequence is found, I want to copy the "level" value from the recurrent dataset to the L1HS dataset in a new column. For example, if the sequence "chr4_56707846_56708347" from the recurrent data was found in the full-length data, I'd like the full-length data frame to look like:
Sequence level other_columns
chr4_56707846_56708347 03 gggtttcatgaccc....
I was thinking of trying something like:
for (i in L1HS){
if (recurrent$unique %in% L1HS$Sequence{
L1HS$level <- paste(recurrent$level[i])}
}
but of course this isn't working and I can't figure it out.
I am wondering what the best approach is here: whether merge/intersect/apply might be easier or better, or what best practice looks like for a fairly simple question like this. I've found some similar examples for Python/pandas, but am stuck here.
Thanks in advance!
You can do a simple left_join to add level to L1HS with dplyr.
library(dplyr)
L1HS %>%
  left_join(., recurrent %>% select(unique, level), by = c("Sequence" = "unique"))
Or with merge:
merge(x = L1HS, y = recurrent[, c("unique", "level")], by.x = "Sequence", by.y = "unique", all.x = TRUE)
Output
Sequence level
1 chr1_35031657_35037706 4
2 chr1_67544575_67550598 2
3 chr1_81404889_81410942 NA
4 chr1_84518073_84524089 3
5 chr1_87144764_87150794 NA
*Note: This will still retain all the columns in L1HS. I just didn't create any additional columns in the example data below.
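If all you need is the one extra column, base R's match() does the same lookup without a full join (a sketch, using the same data as in the Data section below):

```r
# recurrent and L1HS as in the Data section (abbreviated to the two
# columns the lookup needs)
recurrent <- data.frame(
  unique = c("chr4_56707846_56708347", "chr1_67544575_67550598",
             "chr2_224560903_224561404", "chr5_131849595_131850096",
             "chr1_84518073_84524089", "chr1_35031657_35037706"),
  level  = c(3L, 2L, 3L, 3L, 3L, 4L))
L1HS <- data.frame(
  Sequence = c("chr1_35031657_35037706", "chr1_67544575_67550598",
               "chr1_81404889_81410942", "chr1_84518073_84524089",
               "chr1_87144764_87150794"))
# match() returns the position of each Sequence in recurrent$unique
# (NA where there is no match), which indexes straight into level
L1HS$level <- recurrent$level[match(L1HS$Sequence, recurrent$unique)]
L1HS$level
# 4 2 NA 3 NA
```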
Data
recurrent <- structure(list(chr = c("chr4", "chr1", "chr2", "chr5", "chr7",
"chr1"), start = c(56707846L, 20252181L, 224560903L, 131849595L,
46361610L, 20251169L), end = c(56708347L, 20252682L, 224561404L,
131850096L, 46362111L, 20251670L), X = c(0L, 0L, 0L, 0L, 0L,
0L), Y = c(38L, 37L, 37L, 36L, 36L, 36L), level = c(3L, 2L, 3L,
3L, 3L, 4L), unique = c("chr4_56707846_56708347", "chr1_67544575_67550598",
"chr2_224560903_224561404", "chr5_131849595_131850096", "chr1_84518073_84524089",
"chr1_35031657_35037706")), class = "data.frame", row.names = c(NA,
-6L))
L1HS <- structure(list(Sequence = c("chr1_35031657_35037706", "chr1_67544575_67550598",
"chr1_81404889_81410942", "chr1_84518073_84524089", "chr1_87144764_87150794"
)), class = "data.frame", row.names = c(NA, -5L))

R find nearest neighbor for selected point

I have a csv file with only 20 datapoints, and I'd like to know the nearest neighbor for a new data point.
My csv file looks like this
temp rain
79 12
81 13
79 4
61 9
60 15
45 5
34 5
100 9
101 3
59 11
58 16
So I would like to know the proper way to find the nearest neighbour of the point (65, 7) using the Euclidean distance and KNN. Most of the algorithms available online use large datasets such as iris or german from R, but this dataset is so small it does not require cleaning, so I feel those solutions over-complicate the problem. I am still very new to R, so I may have overlooked a solution. Thank you for taking the time to read this!
I have tried the following code, but it keeps throwing an error. Again, I think I am just over-complicating this:
df <- read.csv("data.csv", header = FALSE, sep = ',')
head(df)
ran <- sample(1:nrow(df), 0.9 * nrow(df))
nor <-function(x) { (x -min(x))/(max(x)-min(x)) }
df_train <- df[ran,]
df_test <- df[-ran,]
##extract 2nd column of train dataset because it will be used as 'cl' argument in knn function.
df_target_category <- df[ran,2]
##extract 2nd column of test dataset to measure the accuracy
df_test_category <- df[-ran,2]
library(class)
pr <- knn(df_train,df_test,cl=df_target_category,k=13)
##create confusion matrix
tab <- table(pr,df_test_category)
accuracy <- function(x){sum(diag(x)/(sum(rowSums(x)))) * 100}
accuracy(tab)
I think base R is sufficient to calculate the euclidean distance, i.e.,
distance <- sqrt(rowSums((df - do.call(rbind, replicate(nrow(df), p, simplify = FALSE)))^2))
nearest <- df[which.min(distance),]
such that
> nearest
temp rain
4 61 9
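If you do eventually want a KNN-style tool rather than a hand-rolled distance, the FNN package's get.knnx() finds nearest neighbours of new query points (a sketch, assuming FNN is installed; df and p as in the DATA below):

```r
library(FNN)  # assumption: the FNN package is installed
df <- data.frame(temp = c(79, 81, 79, 61, 60, 45, 34, 100, 101, 59, 58),
                 rain = c(12, 13, 4, 9, 15, 5, 5, 9, 3, 11, 16))
# k = 1 nearest neighbour of the query point (65, 7) among the rows of df
nn <- get.knnx(data = as.matrix(df), query = matrix(c(65, 7), nrow = 1), k = 1)
nn$nn.index  # row 4 of df (temp = 61, rain = 9)
nn$nn.dist   # its Euclidean distance, about 4.47
```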
DATA
df <- structure(list(temp = c(79L, 81L, 79L, 61L, 60L, 45L, 34L, 100L,
101L, 59L, 58L), rain = c(12L, 13L, 4L, 9L, 15L, 5L, 5L, 9L,
3L, 11L, 16L)), class = "data.frame", row.names = c(NA, -11L))
p <- structure(list(temp = 65, rain = 7), class = "data.frame", row.names = c(NA,
-1L))
I'm not sure how your question is related to KNN. Why not simply calculate the Euclidean distance of the new point to all other points in df, and then determine which point in df is closest? To do so, we can use R's dist, which returns a (by default, Euclidean) distance matrix.
Here is a minimal example in two steps, based on the sample you give.
# Calculate Euclidean distances of `pt` to all points in `df`
dist_to_pt <- as.matrix(dist(rbind(df, pt)))[nrow(df) + 1, 1:nrow(df)]
# Determine the point in `df` with minimal distance to `pt`
dist_to_pt[which.min(dist_to_pt)]
# 4
#4.472136
So point 4 in df is the nearest neighbour to the new point at (65, 7).
We can visualise old and new data
library(dplyr)
library(ggplot2)
rbind(df, pt) %>%
  mutate(
    pt_number = row_number(),
    source = ifelse(pt_number > nrow(df), "new", "ref")) %>%
  ggplot(aes(temp, rain, colour = source, label = pt_number)) +
  geom_point() +
  geom_text(position = position_nudge(y = -0.5))
Point 4 is the nearest neighbour of the new point 12 at (65, 7).
Sample data
df <- read.table(text =
"temp rain
79 12
81 13
79 4
61 9
60 15
45 5
34 5
100 9
101 3
59 11
58 16", header = T)
# New point
pt <- c(temp = 65, rain = 7)

R data.table get maximum value per row for multiple columns

I've got a data.table in R which looks like that one:
dat <- structure(list(de = c(1470L, 8511L, 3527L, 2846L, 2652L, 831L
), fr = c(14L, 81L, 36L, 16L, 30L, 6L), it = c(9L, 514L, 73L,
37L, 91L, 2L), ro = c(1L, 14L, 11L, 1L, 9L, 0L)), .Names = c("de",
"fr", "it", "ro"), class = c("data.table", "data.frame"), row.names = c(NA,
-6L))
I now wanna create a new data.table (having exactly the same columns) but holding only the maximum value per row. The values in the other columns should simply be NA.
The data.table could have any number of columns (the data.table above is just an example).
The desired output table would look like this:
de fr it ro
1: 1470 NA NA NA
2: 8511 NA NA NA
3: 3527 NA NA NA
4: 2846 NA NA NA
5: 2652 NA NA NA
6: 831 NA NA NA
There are several issues with what the OP is attempting here: (1) this really looks like a case where data should be kept in a matrix rather than a data.frame or data.table; (2) there's no reason to want this sort of output that I can think of; and (3) doing any standard operations with the output will be a hassle.
With that said...
dat2 = dat
is.na(dat2)[-( 1:nrow(dat) + (max.col(dat)-1)*nrow(dat) )] <- TRUE
# or, as #PierreLafortune suggested
is.na(dat2)[col(dat) != max.col(dat)] <- TRUE
# or using the data.table package
dat2 = dat[rep(NA_integer_, nrow(dat)), ]
mc = max.col(dat)
for (i in seq_along(mc)) set(dat2, i = i, j = mc[i], v = dat[i, mc[i]])
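In line with point (1) above, if the data is kept as an all-numeric matrix the masking becomes a two-liner (a sketch):

```r
# dat as in the question
dat <- data.frame(de = c(1470L, 8511L, 3527L, 2846L, 2652L, 831L),
                  fr = c(14L, 81L, 36L, 16L, 30L, 6L),
                  it = c(9L, 514L, 73L, 37L, 91L, 2L),
                  ro = c(1L, 14L, 11L, 1L, 9L, 0L))
m <- as.matrix(dat)
# col(m) holds each cell's column index; max.col() gives the column index
# of each row maximum (ties.method = "first" keeps it deterministic)
m[col(m) != max.col(m, ties.method = "first")] <- NA
m
```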
It's not clear to me whether you mean that you want to use the data.table package, or if you are satisfied with making a data.frame using only base functions. It is certainly possible to do the latter.
Here is one solution, which uses only max() and which.max() and relies on the fact that an empty data.frame will fill in all of the remaining cells with NA to achieve a rectangular structure.
maxdat <- data.frame()
for (col in names(dat)) {
  # dat[[col]] extracts the column as a vector even when dat is a data.table
  maxdat[which.max(dat[[col]]), col] <- max(dat[[col]])
}

How can you loop this higher-order function in R?

This question relates to the reply I received here with a nice little function from thelatemail.
The dataframe I'm using is not optimal, but it's what I've got and I'm simply trying to loop this function across all rows.
This is my df
dput(SO_Example_v1)
structure(list(Type = structure(c(3L, 1L, 2L), .Label = c("Community",
"Contaminant", "Healthcare"), class = "factor"), hosp1_WoundAssocType = c(464L,
285L, 24L), hosp1_BloodAssocType = c(73L, 40L, 26L), hosp1_UrineAssocType = c(75L,
37L, 18L), hosp1_RespAssocType = c(137L, 77L, 2L), hosp1_CathAssocType = c(80L,
34L, 24L), hosp2_WoundAssocType = c(171L, 115L, 17L), hosp2_BloodAssocType = c(127L,
62L, 12L), hosp2_UrineAssocType = c(50L, 29L, 6L), hosp2_RespAssocType = c(135L,
142L, 6L), hosp2_CathAssocType = c(95L, 24L, 12L)), .Names = c("Type",
"hosp1_WoundAssocType", "hosp1_BloodAssocType", "hosp1_UrineAssocType",
"hosp1_RespAssocType", "hosp1_CathAssocType", "hosp2_WoundAssocType",
"hosp2_BloodAssocType", "hosp2_UrineAssocType", "hosp2_RespAssocType",
"hosp2_CathAssocType"), class = "data.frame", row.names = c(NA,
-3L))
####################
#what it looks like#
####################
require(dplyr)
df <- tbl_df(SO_Example_v1)
head(df)
Type hosp1_WoundAssocType hosp1_BloodAssocType hosp1_UrineAssocType
1 Healthcare 464 73 75
2 Community 285 40 37
3 Contaminant 24 26 18
Variables not shown: hosp1_RespAssocType (int), hosp1_CathAssocType (int), hosp2_WoundAssocType
(int), hosp2_BloodAssocType (int), hosp2_UrineAssocType (int), hosp2_RespAssocType (int),
hosp2_CathAssocType (int)
The function I have is to perform a chisq.test across all categories in df$Type. Ideally the function should switch to a fisher.test() if the cell count is <5, but that's a separate issue (extra brownie points for the person who comes up with how to do that though).
This is the function I'm using to go row by row
func <- Map(
  function(x, y) {
    out <- cbind(x, y)
    final <- rbind(out[1,], colSums(out[2:3,]))
    chisq <- chisq.test(final, correct = FALSE)
    chisq$p.value
  },
  SO_Example_v1[grepl("^hosp1", names(SO_Example_v1))],
  SO_Example_v1[grepl("^hosp2", names(SO_Example_v1))]
)
func
But ideally, i'd want it to be something like this
for(i in 1:nrow(df)){func}
But that doesn't work. A further wrinkle is that when, for example, row two is taken, the final call looks like this:
func <- Map(
  function(x, y) {
    out <- cbind(x, y)
    final <- rbind(out[2,], colSums(out[c(1,3),]))
    chisq <- chisq.test(final, correct = FALSE)
    chisq$p.value
  },
  SO_Example_v1[grepl("^hosp1", names(SO_Example_v1))],
  SO_Example_v1[grepl("^hosp2", names(SO_Example_v1))]
)
func
so the function should understand that the row it takes for out[x,] has to be excluded from colSums(). This data.frame only has 3 rows, so it's easy, but I've tried applying this function to a separate data.frame of mine with more than 200 rows, so it would be nice to be able to loop this somehow.
Any help appreciated.
Cheers
You were missing two things:
1. To select line i, and everything except line i, use u[i] and u[-i].
2. If an item passed to Map is not the same length as the others, it is recycled, a very general property of the language. You then just have to add an argument to the function corresponding to the line you want to oppose to the others; it will be recycled across all the items of the vectors passed.
The following does what you asked for
# the function doing the stats
FisherOrChisq <- function(x, y, lineComp) {
  out <- cbind(x, y)
  final <- rbind(out[lineComp,], colSums(out[-lineComp,]))
  test <- chisq.test(final, correct = FALSE)
  return(test$p.value)
}
# test of the stat function
FisherOrChisq(SO_Example_v1[grep("^hosp1", names(SO_Example_v1))[1]],
              SO_Example_v1[grep("^hosp2", names(SO_Example_v1))[1]], 2)
# making the loop
result <- c()
for (type in SO_Example_v1$Type) {
  line <- which(SO_Example_v1$Type == type)
  res <- Map(FisherOrChisq,
             SO_Example_v1[grepl("^hosp1", names(SO_Example_v1))],
             SO_Example_v1[grepl("^hosp2", names(SO_Example_v1))],
             line)
  result <- rbind(result, res)
}
colnames(result) <- gsub("^hosp[0-9]+", "", colnames(result))
rownames(result) <- SO_Example_v1$Type
That said, what you are doing is very heavy multiple testing. I would be extremely cautious with the resulting p-values; you need to at least apply a multiple-testing correction such as what is suggested here.
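On the brownie-points question from the original post: one hedged way to add the Fisher fallback is to check chisq.test()'s expected counts and switch to fisher.test() when any falls below 5 (a sketch extending FisherOrChisq above; whether 5 is the right threshold is a statistical judgement call):

```r
# Variant of FisherOrChisq that falls back to Fisher's exact test
# when any expected cell count is below 5 (a common rule of thumb)
FisherOrChisq <- function(x, y, lineComp) {
  out <- cbind(x, y)
  final <- rbind(out[lineComp, ], colSums(out[-lineComp, ]))
  # suppress chisq.test's small-count warning; we handle that case below
  test <- suppressWarnings(chisq.test(final, correct = FALSE))
  if (any(test$expected < 5)) {
    test <- fisher.test(final)
  }
  test$p.value
}
# e.g. Healthcare (row 1) for the Wound columns of SO_Example_v1:
FisherOrChisq(c(464, 285, 24), c(171, 115, 17), 1)
# 0.2815
```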

How do summarize this data table with dplyr, then run a chisq.test (or similar) on the results and loop it all into one neat function?

This question was embedded in another question I asked here, but as it goes beyond the scope of what I wanted to know in the initial inquiry, I thought it might deserve a separate thread.
I've been trying to come up with a solution for this problem based on the answers I have received here and here using dplyr and the functions written by Khashaa and Jaap.
Using the solutions provided to me (especially from Jaap), I have been able to summarize the raw data I received into a matrix-looking data table
dput(SO_Example_v1)
structure(list(Type = structure(c(3L, 1L, 2L), .Label = c("Community",
"Contaminant", "Healthcare"), class = "factor"), hosp1_WoundAssocType = c(464L,
285L, 24L), hosp1_BloodAssocType = c(73L, 40L, 26L), hosp1_UrineAssocType = c(75L,
37L, 18L), hosp1_RespAssocType = c(137L, 77L, 2L), hosp1_CathAssocType = c(80L,
34L, 24L), hosp2_WoundAssocType = c(171L, 115L, 17L), hosp2_BloodAssocType = c(127L,
62L, 12L), hosp2_UrineAssocType = c(50L, 29L, 6L), hosp2_RespAssocType = c(135L,
142L, 6L), hosp2_CathAssocType = c(95L, 24L, 12L)), .Names = c("Type",
"hosp1_WoundAssocType", "hosp1_BloodAssocType", "hosp1_UrineAssocType",
"hosp1_RespAssocType", "hosp1_CathAssocType", "hosp2_WoundAssocType",
"hosp2_BloodAssocType", "hosp2_UrineAssocType", "hosp2_RespAssocType",
"hosp2_CathAssocType"), class = "data.frame", row.names = c(NA,
-3L))
Which looks as follows
require(dplyr)
df <- tbl_df(SO_Example_v1)
head(df)
Type hosp1_WoundAssocType hosp1_BloodAssocType hosp1_UrineAssocType
1 Healthcare 464 73 75
2 Community 285 40 37
3 Contaminant 24 26 18
Variables not shown: hosp1_RespAssocType (int), hosp1_CathAssocType (int), hosp2_WoundAssocType
(int), hosp2_BloodAssocType (int), hosp2_UrineAssocType (int), hosp2_RespAssocType (int),
hosp2_CathAssocType (int)
The column Type is the type of bacteria, the following columns represent where they were cultured. The digits represent the number of times the respective type of bacteria were detected.
I know what my final table should look like, but until now I have been producing it step by step for each comparison and variable. There must undoubtedly be a way to do this by piping multiple functions in dplyr, but alas, I have not found the answer to this on SO.
Example of what final table should look like
Wound
Type n Hospital 1 (%) n Hospital 2 (%) p-val
Healthcare associated bacteria 464 (60.0) 171 (56.4) 0.28
Community associated bacteria 285 (36.9) 115 (38.0) 0.74
Contaminants 24 (3.1) 17 (5.6) 0.05
Where the first grouping variable "Wound" is then subsequently replaced by "Urine", "Respiratory", ... and then there's a final column termed "All/Total", which is the total number of times each variable in the rows of "Type" was found and summarized across Hospital 1 and 2 and then compared.
What I have done until now is the following, and it is very tedious, as everything is calculated "by hand" and I then populate the table with all of the results manually.
### Wound cultures & healthcare associated (extracted manually)
# hosp1 464 (yes), 309 (no), 773 wound isolates in total; (% = 464 / 773 * 100)
# hosp2 171 (yes), 132 (no), 303 wound isolates in total; (% = 171 / 303 * 100)
### Then the chisq.test of my contingency table
chisq.test(cbind(c(464,309), c(171,132)), correct = FALSE)
I appreciate that if I run a piped dplyr on the raw data.frame I won't be able to get the exact formatting of my desired table, but there must be a way to at least automate all the steps here and put the results together in a final table that I can export as a .csv file and then just do some final column editing etc.?
Any help is greatly appreciated.
It's ugly, but it works (Sam in the comments is right that this whole issue should probably be addressed by adjusting your data to a clean format before analysing, but anyway):
Map(
  function(x, y) {
    out <- cbind(x, y)
    final <- rbind(out[1,], colSums(out[2:3,]))
    chisq.test(final, correct = FALSE)
  },
  SO_Example_v1[grepl("^hosp1", names(SO_Example_v1))],
  SO_Example_v1[grepl("^hosp2", names(SO_Example_v1))]
)
#$hosp1_WoundAssocType
#
# Pearson's Chi-squared test
#
#data: final
#X-squared = 1.16, df = 1, p-value = 0.2815
# etc etc...
Matches your intended result:
chisq.test(cbind(c(464,309),c(171,132)),correct=FALSE)
#
# Pearson's Chi-squared test
#
#data: cbind(c(464, 309), c(171, 132))
#X-squared = 1.16, df = 1, p-value = 0.2815
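If you want to automate the asker's final table rather than populate it by hand, here is a hedged sketch for a single site; the helper name summarise_site is made up for illustration, and the example counts are the Wound columns of SO_Example_v1:

```r
summarise_site <- function(h1, h2, types) {
  # one "this type vs all other types" chi-squared test per row
  pvals <- sapply(seq_along(types), function(i) {
    final <- rbind(c(h1[i], h2[i]), c(sum(h1[-i]), sum(h2[-i])))
    chisq.test(final, correct = FALSE)$p.value
  })
  data.frame(
    Type  = types,
    hosp1 = sprintf("%d (%.1f)", h1, 100 * h1 / sum(h1)),
    hosp2 = sprintf("%d (%.1f)", h2, 100 * h2 / sum(h2)),
    p.val = round(pvals, 2)
  )
}
# e.g. the Wound columns of SO_Example_v1:
summarise_site(c(464L, 285L, 24L), c(171L, 115L, 17L),
               c("Healthcare", "Community", "Contaminant"))
# reproduces the Wound block of the desired table, e.g. the Healthcare
# row gives 464 (60.0), 171 (56.4) and p = 0.28
```

Binding such per-site data frames together (and a final "Total" column) would then give the full table for export with write.csv.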
