In R, I tried several approaches, but they all produced error messages. Even when some finally returned a result, it does not look correct; it looks a bit messed up. Why?
I tried to find the max value in the 'no. of ratings' column for the different smart water bottle products, but got back the contents of the 'volumne (oz)' column instead.
> Smart_Water_Bottle_Review[which.max(Smart_Water_Bottle_Review$`no. of ratings`)]
# A tibble: 9 x 1
`volumne (oz)`
<chr>
1 16
2 17, 21
3 20
4 20.3
......
Warning message:
In which.max(Smart_Water_Bottle_Review$`no. of ratings`) :
NAs introduced by coercion
And when I switched to another column, once again a different column's values showed up.
Smart_Water_Bottle_Review[which.max(Smart_Water_Bottle_Review$`volumne (oz)`)]
# A tibble: 9 x 1
`keep hot (hrs)`
<chr>
1 NIL
2 0
3 NIL
......
Warning message:
In which.max(Smart_Water_Bottle_Review$`volumne (oz)`) :
NAs introduced by coercion
Simply put: I asked for the max value of 'no. of ratings' and it gave me 'volumne (oz)'; I asked for 'volumne (oz)' and it gave me 'keep hot (hrs)'.
Plus, I asked for the max, yet it returns every row.
Please advise how to correct this, or what the right syntax is. Thanks.
There are several issues here.
It looks like you have your variables stored as character, rather than numeric. The NAs introduced by coercion warning is saying that the variable was converted to numeric, but that some values weren't number-like and couldn't be converted (e.g., the NIL value). You should convert your variable to numeric first with as.numeric() and verify that all values are correctly converted.
Second, when you subset a data frame or tibble with [ using a single number and no comma, it selects the column at that index, not the row. So the row number returned by which.max() is being used as a column index, which is why a different column keeps coming back.
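To see both issues at once, here is a minimal sketch with a made-up two-row tibble (hypothetical data, not your file):
library(tibble)

toy <- tibble(a = c("x", "y"), b = c("1", "9"), c = c("2", "3"))

# The character column is coerced before taking the max; every value here
# converts cleanly, so no warning appears
idx <- which.max(as.numeric(toy$b))  # idx is 2 -- a ROW number

# A single index with no comma selects a COLUMN, so this returns
# column 2 (b) rather than row 2 -- the same surprise you are seeing
toy[idx]

# With the comma you get the intended row
toy[idx, ]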
Here is how I would recommend solving this problem:
library(dplyr)
Smart_Water_Bottle_Review |>
mutate(`no. of ratings` = as.numeric(`no. of ratings`)) |>
filter(`no. of ratings` == max(`no. of ratings`, na.rm = TRUE))
# alternative, notice the comma indicating to use [row, column] indexing
Smart_Water_Bottle_Review[which.max(as.numeric(Smart_Water_Bottle_Review$`no. of ratings`)) , ]
You mentioned that you wanted the top 3 values. That is most easily done by sorting the data frame:
library(dplyr)
Smart_Water_Bottle_Review |>
mutate(`no. of ratings` = as.numeric(`no. of ratings`)) |>
arrange(-`no. of ratings`) |>
slice(1:3)
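If your dplyr is 1.0.0 or newer, slice_max() does the sorting and slicing in one step (a sketch using the same column name):
Smart_Water_Bottle_Review |>
  mutate(`no. of ratings` = as.numeric(`no. of ratings`)) |>
  slice_max(`no. of ratings`, n = 3)  # top 3 rows by number of ratings (ties keep extra rows)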
For no_of_ratings_100s:
smartrating %>%
mutate(no_of_ratings_100s = as.numeric(no_of_ratings_100s)) %>%
arrange(-no_of_ratings_100s)
# A tibble: 9 x 2
Products no_of_ratings_100s
<chr> <dbl>
1 Hydrate Spark 3 Tracks (not insultaed) 37.3
2 Hydrate Spark Stainless Steel 21 oz 16.5
3 CrazyCap Self Cleaning, UV water purifyer 11.9
4 Thermos 24 oz hydration bottle w smart lid 11.5
5 LARQ 8.29
6 Philips Water GoZero UV Self-Cleaning Vacuum Insulated 3.22
7 Bellabeat 0.37
8 Equa Smart Water 0.02
For amazon_ratings_5 (no mutate() is needed, since the column is already numeric):
smartamazon %>%
arrange(-amazon_ratings_5)
# A tibble: 9 x 2
Products amazon_ratings_5
<chr> <dbl>
1 LARQ 4.5
2 CrazyCap Self Cleaning, UV water purifyer 4.4
3 Hydrate Spark Stainless Steel 21 oz 4.4
4 Hydrate Spark 3 Tracks (not insultaed) 4.4
5 Philips Water GoZero UV Self-Cleaning Vacuum Insulated 4.1
6 Equa Smart Water 4
7 Bellabeat 3.8
8 Thermos 24 oz hydration bottle w smart lid 3.7
I have a dataset with currently 4 rows/subjects (more to come, as this is ongoing research) and 259 variables/columns. 240 variables of this dataset are ratings of fit ("How well does the following adjective match dimension X?") and 19 variables are sociodemographic.
For these 240 rating variables, my subjects could give a rating ranging from 1 ("fits very badly") to 7 ("fits very well"). Consequently, I have 240 variables with values from 1 to 7. I would like to change these numeric values as follows (the procedure being the same for all 240 columns):
1 should change to 0, 2 should change to 1/6, 3 should change to 2/6, 4 should change to 3/6, 5 should change to 4/6, 6 should change to 5/6, and 7 should change to 1. So no matter where in the 240 columns, a 1 should change to 0, and so on.
I have tried the following approaches:
Recode numeric values in R
In this post, it says that
x <- 1:10
# With recode function using backquotes as arguments
dplyr::recode(x, `2` = 20L, `4` = 40L)
# [1] 1 20 3 40 5 6 7 8 9 10
# With case_when function
dplyr::case_when(
x %in% 2 ~ 20,
x %in% 4 ~ 40,
TRUE ~ as.numeric(x)
)
# [1] 1 20 3 40 5 6 7 8 9 10
Consequently, I tried this:
df = ds %>% select(AD01_01:AD01_20,AD02_01:AD02_20,AD03_01:AD03_20,AD04_01:AD04_20,AD05_01:AD05_20,AD06_01:AD06_20, AD09_01:AD09_20,AD10_01:AD10_20,AD11_01:AD11_20,AD12_01:AD12_20,AD13_01:AD13_20,AD14_01:AD14_20)
%>% recode(.,`1`=0,`2`=-1/6,`3`=-2/6, `4`=3/6,`5`=4/6, `6`=5/6, `7`=1))
with AD01_01 etc. being the column names for the adjectives my subjects rated. I also tried it without the . after recode(, to no avail.
This code is flawed because it drops the 19 columns of sociodemographic data I want to keep in my dataset. Moreover, I get the error unexpected SPECIAL in "%>%".
I thought R might accept my selected columns with the pipe operator as the "x" in recode. Apparently, this is not the case. I also tried to read up on the R documentation of recode but it made things much more confusing for me, as there were a lot of technical terms I don't understand.
As there is another option mentioned in the post, I also tried this:
df = df %>% select(AD01_01:AD01_20,AD02_01:AD02_20,AD03_01:AD03_20,AD04_01:AD04_20,AD05_01:AD05_20,AD06_01:AD06_20, AD09_01:AD09_20,AD10_01:AD10_20,AD11_01:AD11_20,AD12_01:AD12_20,AD13_01:AD13_20,AD14_01:AD14_20) %>% case_when (.,%in% 1~0,%in% 2~1/6,%in%3~2/6,%in%4~3/6,%in%5~4/6,%in%6~5/6,%in%7~1)
I thought I could give the output of the select function to the case_when function. Apparently, this is also not the case.
When I execute this command, I get
Error: unexpected SPECIAL in:
"df = df %>% select(AD01_01:AD01_20,AD02_01:AD02_20,AD03_01:AD03_20,AD04_01:AD04_20,AD05_01:AD05_20,AD06_01:AD06_20, AD09_01:AD09_20,AD10_01:AD10_20,AD11_01:AD11_20,AD12_01:AD12_20,AD13_01:AD13_20,AD14_01:AD14_20) %>% case_when (%in%"
Reading up on other possibilities, I found this
https://rstudio-education.github.io/hopr/modify.html
Example dataset:
head(dplyr::storms)
## # A tibble: 6 x 13
## name year month day hour lat long status category wind pressure
## <chr> <dbl> <dbl> <int> <dbl> <dbl> <dbl> <chr> <ord> <int> <int>
## 1 Amy 1975 6 27 0 27.5 -79 tropi… -1 25 1013
## 2 Amy 1975 6 27 6 28.5 -79 tropi… -1 25 1013
## 3 Amy 1975 6 27 12 29.5 -79 tropi… -1 25 1013
## 4 Amy 1975 6 27 18 30.5 -79 tropi… -1 25 1013
## 5 Amy 1975 6 28 0 31.5 -78.8 tropi… -1 25 1012
## 6 Amy 1975 6 28 6 32.4 -78.7 tropi… -1 25 1012
## # ... with 2 more variables: ts_diameter <dbl>, hu_diameter <dbl>
# We decide that we want to recode all NAs to 9999.
storm <- storms
storm$ts_diameter[is.na(storm$ts_diameter)] <- 9999
summary(storm$ts_diameter)
ds$AD01_01:AD01_20[1(ds$AD01_01:AD01_20)] <- 0, ds$AD01_01:AD01_20[2(ds$AD01_01:AD01_20)] <- 1/6, ds$AD01_01:AD01_20[3(ds$AD01_01:AD01_20)] <- 2/6,
ds$AD01_01:AD01_20[4(ds$AD01_01:AD01_20)] <- 3/6, ds$AD01_01:AD01_20[5(ds$AD01_01:AD01_20)] <- 4/6, ds$AD01_01:AD01_20[6(ds$AD01_01:AD01_20)] <- 5/6,
ds$AD01_01:AD01_20[7(ds$AD01_01:AD01_20)] <- 1
My idea in this case was to use assignment for multiple columns at a time (this attempt just concerns 20 of my 240 columns), and it also didn't work. I got the error
could not find function ":<-", which is weird because I thought this was a basic command. The only noteworthy thing that might explain it is that I executed library(readr) and library(tidyverse) beforehand.
Disclaimer: I am an R newbie and have spent two hours trying to solve this issue. I would also like to know where I went wrong and why my code doesn't work.
How about using mutate(across())? For example, if all your "adjective rating" columns start with "AD", you can do something like this:
library(dplyr)
ds %>% mutate(across(starts_with("AD"), ~(.x-1)/6))
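As a quick sanity check, applying that transformation to the whole 1-7 scale directly (no data frame needed) reproduces the mapping you described:
((1:7) - 1) / 6
# [1] 0.0000000 0.1666667 0.3333333 0.5000000 0.6666667 0.8333333 1.0000000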
Explanation of where you went wrong with your code:
First, your select(...) %>% recode(...) was close. However, when you use select, you are reducing ds to only the selected columns, thus recoding those values and assigning to df will result in df not having the demographic variables.
Second, if you want to use recode you can, but you can't feed it an entire data frame/tibble, which is what happens when you pipe (%>%) the selected columns into it. Instead, you can apply recode() to each column via the .fns argument of across(), with the columns chosen in .cols, like this:
ds %>%
  mutate(across(
    .cols = starts_with("AD"),
    .fns = ~ recode(.x, `1` = 0, `2` = 1/6, `3` = 2/6, `4` = 3/6, `5` = 4/6, `6` = 5/6, `7` = 1)
  ))
The title is vague, but let me explain:
I have a non-vectorized function that outputs a 15-row table of volume estimates for a tree. Each row is a different measurement unit or portion of the input tree. I have a Tables argument to help the user decide what units and measurement protocol they're looking to find, but in 99% of use case scenarios, the output for a single tree's volume estimate is a tibble with more than one row.
I've removed ~20 other arguments from the function for demonstration's sake. DBH is a tree's diameter at breast height. The Vol column values are arbitrary.
Est1 <- TreeVol(Tables = "All", DBH = 7)
Est1
# A tibble: 15 x 3
Tables DBH Vol
<chr> <dbl> <dbl>
1 1. Total_Above_Ground_Cubic_Volume 7 2
2 2. Gross_Inter_1/4inch_Vol 7 4
3 3. Net_Scribner_Vol 7 6
4 4. Gross_Merchantable_Vol 7 8
5 5. Net_Merchantable_Vol 7 10
6 6. Merchantable_Vol 7 12
7 7. Gross_SecondaryProduct_Vol 7 14
8 8. Net_SecondaryProduct_Vol 7 16
9 9. SecondaryProduct 7 18
10 10. Gross_Inter_1/4inch_Vol 7 20
11 11. Net_Inter_1/4inch_Vol 7 22
12 12. Gross_Scribner_SecondaryProduct 7 24
13 13. Net_Scribner_SecondaryProduct 7 26
14 14. Stump_Volume 7 28
15 15. Tip_Volume 7 30
The user can use the Tables argument like so:
Est2 <- TreeVol(Tables = "Scribner_BF", DBH = 7)
# A tibble: 3 x 3
Tables DBH Vol
<chr> <dbl> <dbl>
1 3. Net_Scribner_Vol 7 6
2 12. Gross_Scribner_SecondaryProduct 7 24
3 13. Net_Scribner_SecondaryProduct 7 26
The problem arises in that I'd like to write a vectorized version of this function that can calculate the volume for an entire .csv of tree inventory data. Ideally, I'd like the multi-row outputs that relate to a single tree to output as one long tibble, with each 15-row default output filtered by what the user passes to the Tables argument as so:
Est3 <- VectorizedTreeVol(Tables = "Scribner_BF", DBH = c(7, 21, 26))
# A tibble: 9 x 3
Tables DBH Vol
<chr> <dbl> <dbl>
1 3. Net_Scribner_Vol 7 6
2 12. Gross_Scribner_SecondaryProduct 7 24
3 13. Net_Scribner_SecondaryProduct 7 26
4 3. Net_Scribner_Vol 21 18
5 12. Gross_Scribner_SecondaryProduct 21 72
6 13. Net_Scribner_SecondaryProduct 21 76
7 3. Net_Scribner_Vol 26 8
8 12. Gross_Scribner_SecondaryProduct 26 78
9 13. Net_Scribner_SecondaryProduct 26 84
To achieve this, I wrote a for() loop that acts as the heart of the vectorized function. I've heard from multiple people that it's very inefficient (and I agree), but in theory it captures the principle I'd like to achieve. Nothing I've found on this topic has suggested a better approach for a vectorized function like mine.
The general setup for the loop looks like this:
for (i in 1:length(DBH)) {
  Output <- VectorizedTreeVol(Tables = Tables[[i]], DBH = DBH[[i]]) %>%
    purrr::reduce(dplyr::full_join, by = NULL) %>%
    suppressWarnings()
}
and in functions where the non-vectorized output is always a single row, the heart of its respective vectorized function doesn't need to be encased in a for() loop and looks like this:
Output <- OtherVectorizedFunction(Tables = Tables, DBH = DBH) %>%
  purrr::reduce(dplyr::full_join, by = ColumnNames) %>% # ColumnNames is a vector with all of the output's column names
  suppressWarnings()
This specific call to reduce() has worked pretty well when I've used it to vectorize the other functions in the project, but I'm open to suggestions regarding how to join the output tables. I've been stuck on this dilemma for a few months now, and any help regarding how to achieve what this for() loop is striving for in theory would be awesome. Is having a vectorized function that outputs a tibble like Est3 even possible? Any feedback/comments are much appreciated.
Given this function:
TreeVol <- function(DBH) {
data.frame(Tables = c("Tree_Vol", "Intercapillary_transfusion", "Woodiness"),
Vol = c(DBH^2, sqrt(DBH) + 3, sin(DBH)),
DBH)
}
We could put our DBH parameters into purrr::map and then bind_rows to get a data.frame.
library(dplyr)  # for %>% and bind_rows()

VecTreeVol <- function(DBH) {
  DBH %>%
    purrr::map(TreeVol) %>%  # one data.frame per DBH value
    bind_rows()              # stack them into one long table
}
Result
> VecTreeVol(DBH = 1:3)
Tables Vol DBH
1 Tree_Vol 1.0000000 1
2 Intercapillary_transfusion 4.0000000 1
3 Woodiness 0.8414710 1
4 Tree_Vol 4.0000000 2
5 Intercapillary_transfusion 4.4142136 2
6 Woodiness 0.9092974 2
7 Tree_Vol 9.0000000 3
8 Intercapillary_transfusion 4.7320508 3
9 Woodiness 0.1411200 3
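If the real TreeVol() also takes the Tables argument from the question, one way to thread it through (a sketch that assumes Tables is a single string applying to every tree and that TreeVol() itself does the filtering) could be:
library(dplyr)
library(purrr)

# Hypothetical wrapper around the real TreeVol(Tables, DBH) from the question,
# not the simplified stub above
VectorizedTreeVol <- function(Tables, DBH) {
  DBH %>%
    map(~ TreeVol(Tables = Tables, DBH = .x)) %>%  # one filtered tibble per tree
    bind_rows()                                    # stack into one long tibble, like Est3
}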
I'm trying to get the name of the animal that has the max value of REM sleep. This is what I'm doing right now, but I'm hoping for a better way that returns the exact value.
msleep = ggplot2::msleep
msleep[order(msleep$sleep_rem, na.last=TRUE, decreasing=TRUE), ]
The above returns the sorted data, but it's hard to read in the RStudio console. Is there a better way to do this?
We can use which.max to get the index of the max value in 'sleep_rem' and use that to subset 'name':
msleep$name[which.max(msleep$sleep_rem)]
#[1] "Thick-tailed opposum"
For better viewing, you can arrange the data and select only the columns of interest:
library(dplyr)
msleep %>% arrange(desc(sleep_rem)) %>% select(name, sleep_rem)
# A tibble: 83 x 2
# name sleep_rem
# <chr> <dbl>
# 1 Thick-tailed opposum 6.6
# 2 Giant armadillo 6.1
# 3 North American Opossum 4.9
# 4 Big brown bat 3.9
# 5 European hedgehog 3.5
# 6 Thirteen-lined ground squirrel 3.4
# 7 Domestic cat 3.2
# 8 Long-nosed armadillo 3.1
# 9 Golden hamster 3.1
#10 Golden-mantled ground squirrel 3
# … with 73 more rows
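If you are on dplyr 1.0.0 or newer, slice_max() is another compact option (a small sketch):
library(dplyr)

msleep %>%
  slice_max(sleep_rem, n = 1) %>%  # row(s) with the largest sleep_rem
  pull(name)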
I am trying to look at baseball data from 1903 through 1960 from the Lahman database. I am doing this for my own research. I want to use the Batting table, which does not include batting average, slugging, OBP, or OPS.
I want to calculate those, but I first need to get total bases. I am having trouble getting the program to calculate total bases using the X2B and X3B columns.
I've looked into trying as.numeric, but I couldn't get it to work. This is in R and RStudio. I've tried the doubles and triples both with quotes around X2B and X3B and without quotes.
batting_1960 <- batting_1903 %>%
filter(yearID <= 1960 & G >= 90) %>%
mutate(Batting_Average = H/AB, TB = (2*"X2B")+(3*"X3B")+HR+(H-"X2B"-"X3B"-HR)) %>%
arrange(yearID, desc(Batting_Average))
I expect that, for each row of data, total bases will be calculated in a new column, but I get the error:
Error in 2 * "X2B" : non-numeric argument to binary operator
This is so that I can eventually calculate OPS, OBP, and slugging.
Your code is trying to multiply 2 by the literal string "X2B", which is not going to work. Column names should be unquoted in mutate().
Your error:
> tibble(X2B = 1:10) %>% mutate(TB = 2 * "X2B")
Error in 2 * "X2B" : non-numeric argument to binary operator
Should be, for example:
> tibble(X2B = 1:10) %>% mutate(TB = 2 * X2B)
# A tibble: 10 x 2
X2B TB
<int> <dbl>
1 1 2
2 2 4
3 3 6
4 4 8
5 5 10
6 6 12
7 7 14
8 8 16
9 9 18
10 10 20
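Applied to the pipeline from the question (only unquoting the column names, keeping the rest of the arithmetic as written), that would look like:
library(dplyr)

batting_1960 <- batting_1903 %>%
  filter(yearID <= 1960 & G >= 90) %>%
  mutate(
    Batting_Average = H / AB,
    # X2B and X3B are unquoted, so they now refer to the columns
    TB = (2 * X2B) + (3 * X3B) + HR + (H - X2B - X3B - HR)
  ) %>%
  arrange(yearID, desc(Batting_Average))
As a side note, the conventional total-bases formula counts a home run as four bases (4 * HR rather than HR), so you may want to double-check that term.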
How is it possible to sum up consecutive depth data with R?
For instance:
a <- data.frame(label = as.factor(c("Air","Air","Air","Air","Air","Air","Wood","Wood","Wood","Wood","Wood","Air","Air","Air","Air","Stone","Stone","Stone","Stone","Air","Air","Air","Air","Air","Wood","Wood")),
depth = as.numeric(c(1,2,3,-1,4,5,4,5,4,6,8,9,8,9,10,9,10,11,10,11,12,10,12,13,14,14)))
The desired output should be something like:
Label Depth
Air 7
Wood 3
Stone 1
First, decreases (including the negative value) are removed with cummax(), because depth can only increase in this special case. Hence:
label depth
1 Air 1
2 Air 2
3 Air 3
4 Air 3
5 Air 4
6 Air 5
7 Wood 5
8 Wood 5
9 Wood 5
10 Wood 6
11 Wood 8
12 Air 9
13 Air 9
14 Air 9
15 Air 10
16 Stone 10
17 Stone 10
18 Stone 11
19 Stone 11
20 Air 11
21 Air 12
22 Air 12
23 Air 12
24 Air 13
25 Wood 14
26 Wood 14
Now, taking max minus min of depth within every consecutive run, you would get (the question is how to do this step):
label depth
1 Air 4
2 Wood 3
3 Air 1
4 Stone 1
5 Air 2
6 Wood 0
And finally, summing up those max-min values per label gives the output presented above.
Steps tried to achieve the output:
The first obvious solution, for instance for Air, would be:
diff(cummax(a[a$label=="Air",]$depth))
This solution gets rid of the negative data, which is necessary due to an expected constant increase in depth.
The problem is the output also takes into account the big steps in between each consecutive subset. Hence, the sum for Air would be 12 instead of 7.
[1] 1 1 0 1 1 4 0 0 1 1 1 0 0 1
Even worse would be a solution with aggregate, e.g.:
aggregate(depth~label, a, FUN=function(x){sum(x>0)})
Note: solutions that filter out big jumps are not what I'm looking for. Sure, you could hard-code a limit, for instance < 2, for the Air example once again:
sum(diff(cummax(a[a$label=="Air",]$depth))[diff(cummax(a[a$label=="Air",]$depth))<2])
This gives you almost the right result, but it does not work as expected here. I'm pretty sure there is already a function for what I'm looking for, because this is not an uncommon problem across many different tasks.
I guess taking the minimum and maximum value of each set of consecutive rows per material and summing those up would be one possible solution, but I'm not sure how to apply a function to only the consecutive subsets.
You can use data.table::rleid to quickly group by run, or reconstruct the run ids with base rle if you prefer (a small sketch of that follows the dplyr output below). After that, aggregating is fairly easy in any grammar. In dplyr,
library(dplyr)
a <- data.frame(label = c("Air","Air","Air","Air","Air","Air","Wood","Wood","Wood","Wood","Wood","Air","Air","Air","Air","Stone","Stone","Stone","Stone","Air","Air","Air","Air","Air","Wood","Wood"),
depth = c(1,2,3,-1,4,5,4,5,4,6,8,9,8,9,10,9,10,11,10,11,12,10,12,13,14,14))
a2 <- a %>%
# filter to rows where previous value is lower, equal, or NA
filter(depth >= lag(depth) | is.na(lag(depth))) %>%
# group by label and its run
group_by(label, run = data.table::rleid(label)) %>%
summarise(depth = max(depth) - min(depth)) # aggregate
a2 %>% arrange(run) # sort to make it pretty
#> # A tibble: 6 x 3
#> # Groups: label [3]
#> label run depth
#> <fctr> <int> <dbl>
#> 1 Air 1 4
#> 2 Wood 2 3
#> 3 Air 3 1
#> 4 Stone 4 1
#> 5 Air 5 2
#> 6 Wood 6 0
a3 <- a2 %>% summarise(depth = sum(depth)) # a2 is still grouped, so aggregate more
a3
#> # A tibble: 3 x 2
#> label depth
#> <fctr> <dbl>
#> 1 Air 7
#> 2 Stone 1
#> 3 Wood 3
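The rle() route mentioned above, if you would rather not pull in data.table just for rleid(), could look like this minimal sketch (it builds the same run ids on the unfiltered data):
# Base-R stand-in for data.table::rleid(): one id per run of equal labels
r <- rle(as.character(a$label))
run_id <- rep(seq_along(r$lengths), r$lengths)
run_id
#> [1] 1 1 1 1 1 1 2 2 2 2 2 3 3 3 3 4 4 4 4 5 5 5 5 5 6 6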
A base R method using aggregate is
aggregate(cbind(val=cummax(a$depth)),
list(label=a$label, ID=c(0, cumsum(diff(as.integer(a$label)) != 0))),
function(x) diff(range(x)))
The first argument to aggregate computes the cumulative maximum of the depth vector, just as the OP does above; wrapping it in cbind gives the calculated vector a name (val) in the final output. The second argument is the grouping argument: instead of rle, it builds run ids by taking the cumulative sum of the points where the label changes. Finally, the third argument provides the function that computes the desired value, the difference of the range (max minus min) within each group.
This returns
label ID val
1 Air 0 4
2 Wood 1 3
3 Air 2 1
4 Stone 3 1
5 Air 4 2
6 Wood 5 0
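To see what that grouping vector does (this assumes label is a factor, as in the question's original data frame, since as.integer() on a character column would produce NAs):
# 0 where the label repeats, 1 where it changes; the cumulative sum
# then yields one id per consecutive run
c(0, cumsum(diff(as.integer(a$label)) != 0))
#> [1] 0 0 0 0 0 0 1 1 1 1 1 2 2 2 2 3 3 3 3 4 4 4 4 4 5 5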
The data.table way (borrowing in part from @alistaire):
library(data.table)

setDT(a)
a[, depth := cummax(depth)]
depth_gain <- a[,
list(
depth = max(depth) - depth[1], # Only need the starting and max values
label = label[1]
),
by = rleidv(label)
]
result <- depth_gain[, list(depth = sum(depth)), by = label]
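Printing result should then give the per-label totals from the question, roughly:
result
#>    label depth
#> 1:   Air     7
#> 2:  Wood     3
#> 3: Stone     1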