Calculations by Subgroup in a Column [duplicate] - r

This question already has answers here:
Calculate the mean by group
(9 answers)
Closed 5 years ago.
I have a dataset that looks approximately like this:
> dataSet
month detrend
1 Jan 315.71
2 Jan 317.45
3 Jan 317.5
4 Jan 317.1
5 Jan 315.71
6 Feb 317.45
7 Feb 313.5
8 Feb 317.1
9 Feb 314.37
10 Feb 315.41
11 March 316.44
12 March 315.73
13 March 318.73
14 March 315.55
15 March 312.64
.
.
.
How do I compute the average by month? E.g., I want something like
> by_month
month ave_detrend
1 Jan 315.71
2 Feb 317.45
3 March 317.5

What you need to focus on is a means to group your column of interest (the "detrend") by the month. There are ways to do this within "vanilla R", but the most effective way is to use tidyverse's dplyr.
I will adapt the grouped-summary example from the dplyr documentation:
mtcars %>%
  group_by(cyl) %>%
  summarise(mean_disp = mean(disp), sd_disp = sd(disp))
(Note the distinct output names: if you wrote disp = mean(disp), the later sd(disp) would see the just-created one-row value and return NA, since summarise evaluates its expressions sequentially.)
In your case, that would be:
by_month <- dataSet %>%
  group_by(month) %>%
  summarize(avg = mean(detrend))
This new "tidyverse" style looks quite different, and you seem quite new to R, so I'll explain what's happening (sorry if this is overly obvious):
First, we are grabbing the dataframe, which I'm calling dataSet.
Then we pipe that data frame to the next function, group_by. Piping takes the result of the previous expression (in this case just the data frame dataSet) and uses it as the first argument of the next function, so group_by receives the data frame as its first argument.
The result of the group_by is then piped to summarize (or summarise if, like the author, you're from down under). On its own, summarize computes over the entire column; after group_by, the column is partitioned, so the mean is calculated separately within each partition, here each month.
This is the key: group_by creates "flags" so that summarize calculates the function (mean, in this case) separately on each group. So, for instance, all of the Jan values are grouped together and then the mean is calculated only on them. Then for all of the Feb values, the mean is calculated, etc.
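To make the grouping concrete, here is the same split-then-average logic done by hand in base R, using a few rows of the question's data for illustration:

```r
# A few rows of the question's data, for illustration
dataSet <- data.frame(
  month   = c("Jan", "Jan", "Feb", "Feb", "March", "March"),
  detrend = c(315.71, 317.45, 317.45, 313.5, 316.44, 315.73)
)

# split() partitions the detrend column into one vector per month --
# essentially what group_by does behind the scenes
groups <- split(dataSet$detrend, dataSet$month)

# sapply() then applies mean() to each partition separately
means <- sapply(groups, mean)
means  # Feb: 315.475, Jan: 316.58, March: 316.085
```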
HTH!!

R has an inbuilt mean function: mean(x, trim = 0, na.rm = FALSE, ...)
I would do something like this:
january <- dataSet[dataSet$month == "Jan", ]
januaryVector <- january[, "detrend"]
januaryAVG <- mean(januaryVector)
(Note the names match the question's data: the data frame is dataSet and the month labels are abbreviated, e.g. "Jan".)
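To extend this beyond a single month without repeating the subset for each one, base R's aggregate() applies the function to every group at once. A sketch using the question's fifteen rows:

```r
# The question's data, reconstructed
dataSet <- data.frame(
  month = rep(c("Jan", "Feb", "March"), each = 5),
  detrend = c(315.71, 317.45, 317.5, 317.1, 315.71,
              317.45, 313.5, 317.1, 314.37, 315.41,
              316.44, 315.73, 318.73, 315.55, 312.64)
)

# aggregate() splits detrend by month and applies mean() to each group
by_month <- aggregate(detrend ~ month, data = dataSet, FUN = mean)
by_month
```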

Related

Using indexing to perform mathematical operations on data frame in r

I'm struggling to perform basic indexing on a data frame to perform mathematical operations. I have a data frame containing all 50 US states with an entry for each month of the year, so there are 600 observations. I wish to find the difference between a value for the month of December minus the January value for each of the states. My data looks like this:
> head(df)
state year month value
1 AL 2020 01 2.7
2 AK 2020 01 5
3 AZ 2020 01 4.8
4 AR 2020 01 3.7
5 CA 2020 01 4.2
7 CO 2020 01 2.7
For instance, AL has a value in Dec of 4.7 and Jan value of 2.7 so I'd like to return 2 for that state.
I have been trying to do this with the group_by and summarize functions, but can't figure out the indexing piece of it to grab values that correspond to a condition. I couldn't find a resource for performing these mathematical operations using indexing on a data frame, and would appreciate assistance as I have other transformations I'll be using.
With dplyr:
library(dplyr)
df %>%
  group_by(state) %>%
  summarize(year_change = value[month == "12"] - value[month == "01"])
This assumes your data is as described: every state has exactly one value for every month. If you have missing rows, or multiple observations for a state in a given month, I would not expect this code to work.
Another approach, based on row order rather than the month value, might look like this:
library(dplyr)
df %>%
  ## make sure things are in the right order
  arrange(state, month) %>%
  group_by(state) %>%
  summarize(year_change = last(value) - first(value))
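For comparison, the same computation can be done in base R by splitting the frame by state and subtracting within each piece. The values below are invented except AL's, which come from the question (Jan 2.7, Dec 4.7):

```r
# Two months of sample data; AL's values match the question
df <- data.frame(
  state = rep(c("AL", "AK"), times = 2),
  month = rep(c("01", "12"), each = 2),
  value = c(2.7, 5.0, 4.7, 6.1)
)

# For each state's sub-frame, subtract the January value from the December value
year_change <- sapply(split(df, df$state), function(d) {
  d$value[d$month == "12"] - d$value[d$month == "01"]
})
year_change["AL"]  # ~2, as in the question
```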

Dplyr filter based on less than equal to condition in R

I am trying to subset a data based on <= logic using dplyr in R. Even after running filter function, the data is not being filtered.
How can I fix this?
Code
library(tidyverse)
value = c("a,b,c,d,e,f")
Year = c(2020,2020,2020,2020,2020,2020)
Month = c(01,01,12,12,07,07)
dummy_df = data.frame(value, Year, Month)
dummy_df = dplyr::filter(dummy_df, Month <=07)
Now on a dummy data frame this does work, but when I use this on the actual data set, in which I created Year, Month and Day columns using lubridate, I still see data from months greater than 07.
It may be that the OP's original dataset has 'Month' stored as a character column. Convert it to numeric and it should work:
dummy_df = dplyr::filter(dummy_df, as.numeric(Month) <= 7)
Or in base R we could do:
subset(dummy_df, as.numeric(Month) <= 7)
value Year Month
1 a,b,c,d,e,f 2020 1
2 a,b,c,d,e,f 2020 1
5 a,b,c,d,e,f 2020 7
6 a,b,c,d,e,f 2020 7
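The underlying gotcha is that comparisons on a character column are lexicographic, not numeric. A quick illustration, assuming the real Month column held unpadded character strings:

```r
# String comparison: "2" sorts after "0", so the test silently goes wrong
"2" <= "07"           # FALSE: lexicographic order, "2" > "0"

# Converting first gives the intended numeric comparison
as.numeric("2") <= 7  # TRUE
```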

Updating Data Frames

I have the following dataset, which originates from two datasets taken from an API at different points in time. df1 simply shows the state after I appended them. My goal is to generate the newest version of my API data, without forgetting the old data. This means I am looking to create some kind of update mechanism. I thought about creating a unique number for each dataset to identify its state, append the new version to the old one and then filter out the duplicates while keeping the newer data.
The data frames look like this:
df (after simply appending the two)
"Year" "Month" "dataset"
2017 December 1
2018 January 1
2018 January 2
2018 February 1
2018 February 2
2018 March 2
2018 April 2
df2 (the update)
"Year" "Month" "dataset"
2017 December 1
2018 January 2
2018 February 2
2018 March 2
2018 April 2
As df2 shows, the update mechanism prefers the data from dataset 2: January and February appeared in both datasets, but only the dataset 2 rows are kept.
On the other hand, if there is no overlap between the datasets, it keeps the old and the new data.
Is there a simple solution in order to create the described update mechanism in R?
This is the Code for df1:
df1 <- data.frame(Year = c(2017,2018,2018,2018,2018,2018,2018),
Month =
c("December","January","January","February","February","March","April"),
Dataset = c(1,1,2,1,2,2,2))
Let me see if I have this right: you have 2 datasets (named 1 and 2) which you want to combine. Currently, you're getting the format shown above as df but you want the output to be df2. Is this correct? The below code should solve your problem. It is important that your newer dataset appears first in the full_join call. Whichever appears first will be given priority by distinct when it decides which duplicated rows to remove.
library(dplyr)
df <- data.frame(Year = c(2017,2018,2018,2018,2018,2018,2018),
Month = c("December","January","January","February",
"February","March","April"),
Dataset = c(1,1,2,1,2,2,2))
df1 <- df[df$Dataset == 1, ]
df2 <- df[df$Dataset == 2, ]
df.updated <- dplyr::full_join(df2, df1) %>%
distinct(Year, Month, .keep_all = TRUE)
df.updated
Year Month Dataset
1 2018 January 2
2 2018 February 2
3 2018 March 2
4 2018 April 2
5 2017 December 1
full_join joins the two data frames on matching variables, keeping all rows from both. Then distinct tosses out the duplicated rows. By specifying variable names in distinct, we tell it to only consider the values in Year and Month when determining uniqueness, so when a specific Year/Month combination appears in more than one dataset, only one row will be kept.
Normally, distinct only keeps the variables it uses to determine uniqueness. By providing the argument .keep_all = TRUE, it will keep all variables. When there are conflicts (for example, 2 rows from February 2018 with different values of Dataset) it will keep whichever row appears first in the data frame. This is why it's important for your newer dataset to appear first in the full_join: this gives rows that appear in df2 priority over rows that also appear in df1.
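The same keep-the-first-occurrence logic can be reproduced in base R with duplicated(), which may be handy without dplyr. A sketch on the question's rows:

```r
# Old and new snapshots of the API data, as in the question
old <- data.frame(Year = c(2017, 2018, 2018),
                  Month = c("December", "January", "February"),
                  Dataset = 1)
new <- data.frame(Year = c(2018, 2018, 2018, 2018),
                  Month = c("January", "February", "March", "April"),
                  Dataset = 2)

# Put the newer rows first, then drop any later row whose
# Year/Month combination has already appeared
combined <- rbind(new, old)
updated <- combined[!duplicated(combined[c("Year", "Month")]), ]
updated
```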

averaging by months with daily data [duplicate]

This question already has answers here:
Get monthly means from dataframe of several years of daily temps
(3 answers)
Closed 5 years ago.
I have daily data in a matrix with 6 columns: Years, months, days, SSTs, anoms, missing.
I want to calculate the monthly average of SST for each year (for example, for 1981, the September value would be the average SST over all days in September), and I want to do the same for every year. I have been trying to get my code to work, but am unable to do so.
You should use the dplyr package in R. Here, we will call your data df.
require(dplyr)
df.mnths <- df %>%
  group_by(Years, months) %>%
  summarise(Mean.SST = mean(SSTs))

df.years <- df %>%
  group_by(Years) %>%
  summarise(Mean.SST = mean(SSTs))
These are two new data sets: df.mnths holds the mean SST for each month of each year, and df.years holds the mean SST for each year.
In terms of data.table, you can perform the following action,
library(data.table)
dt[, average_sst := mean(SSTs), by = .(years,months)]
adding an extra column average_sst.
Just suppose that your data is stored in a data.frame named "data":
years months SSTs
1 1981 1 -0.46939368
2 1981 1 0.03226932
3 1981 1 -1.60266798
4 1981 1 -1.53095676
5 1981 1 1.71177023
6 1981 1 -0.61309846
tapply(data$SSTs, list(data$years, data$months), mean)  # mean per year/month cell
tapply(data$SSTs, factor(data$years), mean)             # mean per year
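For completeness, base R's aggregate() gives the same per-year, per-month means in long format (one row per combination) rather than tapply's year-by-month matrix. A sketch on invented sample values:

```r
# Invented daily sample: two months across two years
data <- data.frame(
  years  = rep(c(1981, 1982), each = 4),
  months = rep(c(1, 2), times = 4),
  SSTs   = c(1, 2, 3, 4, 5, 6, 7, 8)
)

# One row per year/month combination, in long format
monthly <- aggregate(SSTs ~ years + months, data = data, FUN = mean)
monthly
```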

R & dplyr: using mutate() to get yearly totals

Data like so:
data <- read.table(text="year level items
2014 a 12
2014 b 16
2014 c 7", header=TRUE)
(Without header=TRUE, read.table would name the columns V1, V2, V3 and read every column as character.)
Would like to run that through mutate() and I guess group_by(), so I end up with one row per year holding the total:
year items
2014 35
Feel like it should be 101 simple but I can't quite figure this one out.
library(dplyr)
out <- data %>%
  group_by(year) %>%
  summarize(items = sum(items, na.rm = TRUE))
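For reference, the base R equivalent is a one-liner with aggregate(), reconstructing the question's three-row data:

```r
# The question's data: three levels within one year
data <- data.frame(year = 2014, level = c("a", "b", "c"), items = c(12, 16, 7))

# Sum items within each year
totals <- aggregate(items ~ year, data = data, FUN = sum)
totals  # one row: year 2014, items 35
```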