Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
How do I calculate the mean, max, and min number of reviews per group in a data frame in R? I have a dataset that looks like this:
Firm Review
A Nice work
B Ok
C yes
A ok
B yes
A like
In this case, firm A has 3 reviews, B has 2, and C has 1. So max = 3, min = 1, and the average number of reviews per firm = 6/3 = 2.
The actual dataset has 2.1 million reviews across 50,000 firms. What would be an efficient way to group by firm and then calculate these statistics?
I generally use dplyr for this kind of thing, if I understand you correctly:

library(dplyr)

firm_reviews %>%
  group_by(firm) %>%
  count() %>%
  ungroup() %>%
  summarise(mean_n = mean(n), min_n = min(n), max_n = max(n))
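With 2.1 million rows, data.table is another option worth knowing. A minimal sketch, assuming the same (hypothetical) column name firm as in the dplyr answer above, with toy data mirroring the question's example:

```r
library(data.table)

# Toy data mirroring the example in the question
firm_reviews <- data.table(
  firm   = c("A", "B", "C", "A", "B", "A"),
  review = c("Nice work", "Ok", "yes", "ok", "yes", "like")
)

# Count reviews per firm (.N is the per-group row count), then summarise the counts
counts <- firm_reviews[, .N, by = firm]
stats  <- counts[, .(mean_n = mean(N), min_n = min(N), max_n = max(N))]
stats
# mean 2, min 1, max 3 for the toy data
```

data.table's grouped counting is done in C and tends to scale well to millions of rows.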
Closed 17 days ago.
I have a dataset with a time column, where the readings are 6 hours apart, and a column heaterstatus.
The dataset: (shown as a screenshot in the original post)
I would like to know the percentage of zeros that occurred each day in heaterstatus. I'm new to R; any suggestion will be helpful.
Not tested since you only provided the data as an image, but this should do what you want:

library(dplyr)

dat %>%
  group_by(day = as.Date(Time)) %>%
  summarize(pct_0 = mean(HeaterStatus == 0))
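Since the original data was only posted as an image, here is a self-contained sketch with made-up timestamps; the column names Time and HeaterStatus are assumptions taken from the answer above, and the mean is multiplied by 100 to report a percentage rather than a proportion:

```r
library(dplyr)

# Two days of readings at 6-hour intervals, with a 0/1 heater status
dat <- data.frame(
  Time = seq(as.POSIXct("2023-01-01 00:00", tz = "UTC"),
             by = "6 hours", length.out = 8),
  HeaterStatus = c(0, 0, 1, 1, 0, 1, 1, 1)
)

out <- dat %>%
  group_by(day = as.Date(Time)) %>%
  summarize(pct_0 = 100 * mean(HeaterStatus == 0))
out
# day 1: 2 of 4 readings are 0 -> 50; day 2: 1 of 4 -> 25
```

The trick is that mean() of a logical vector is the fraction of TRUE values, so no explicit counting is needed.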
Closed 3 months ago.
I need some help with my data frame (df) in R. I need to transform existing row values with a calculation, and I don't know how to apply it to every row I need. I have gross domestic product (GDP), and I need to replace its obsValue with a calculation that uses the CPI row values: (GDP/CPI)*100. The difficult part is that GDP and CPI are in rows, not in columns...
My data frame looks like this (screenshots in the original post):
Thanks in advance!
We may do the following, grouping by obsTime so each period's GDP value is divided by that same period's CPI value:

library(dplyr)
library(stringr)

df1 <- df1 %>%
  group_by(obsTime) %>%
  mutate(obsValue = replace(obsValue,
                            SUBJECT_DEF == "Gross domestic product",
                            obsValue[SUBJECT_DEF == "Gross domestic product"] /
                              obsValue[str_detect(SUBJECT_DEF, "CPI:")] * 100)) %>%
  ungroup()
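Untested against the real data (posted only as images), but a toy frame shows the mechanics; the column names obsTime, obsValue, and SUBJECT_DEF and the subject labels are assumptions carried over from the answer above:

```r
library(dplyr)
library(stringr)

# Toy frame: one GDP row and one CPI row per year
df1 <- data.frame(
  obsTime     = c(2020, 2020, 2021, 2021),
  SUBJECT_DEF = c("Gross domestic product", "CPI: all items",
                  "Gross domestic product", "CPI: all items"),
  obsValue    = c(500, 104, 520, 108)
)

out <- df1 %>%
  group_by(obsTime) %>%
  mutate(obsValue = replace(
    obsValue,
    SUBJECT_DEF == "Gross domestic product",
    obsValue[SUBJECT_DEF == "Gross domestic product"] /
      obsValue[str_detect(SUBJECT_DEF, "CPI:")] * 100
  )) %>%
  ungroup()
out
# GDP rows become 500/104*100 and 520/108*100; CPI rows are unchanged
```

Within each obsTime group, the logical subsetting picks out the single GDP value and the single CPI value, so the replacement is a scalar division per year.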
Closed 2 years ago.
I am trying to look at prediction accuracy for timeframes around hospital discharge.
For example, I think Mr. Smith will be discharged within 3-7 days, which would mean any discharge day from 11/9-11/13 would be correct. If he discharges in 2 days, I would say I was 1 day off, and if he discharges in 10 days, I was 3 days off...
Is there any good method to do this using dplyr, base R, and lubridate? TIA. Sample data is at the link:
Sample data
A possible solution would be to express your need in a case_when, wrapping the date differences in as.numeric() so every branch returns the same type:

library(dplyr)

df %>%
  dplyr::mutate(DIF = case_when(
    discharge_calender_date < discharge_prediction_lower_bound ~
      as.numeric(discharge_calender_date - discharge_prediction_lower_bound),
    discharge_calender_date <= discharge_prediction_upper_bound ~ 0,
    TRUE ~ as.numeric(discharge_calender_date - discharge_prediction_upper_bound)
  ))
This way you get a negative value if the patient left before the lower bound, zero if they left within the predicted window, and a positive value if they left after it.
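A self-contained sketch with hypothetical dates (column names taken from the answer above), covering the early, on-time, and late cases:

```r
library(dplyr)

# Three patients: discharged 2 days early, within the window, and 3 days late
df <- data.frame(
  discharge_calender_date          = as.Date(c("2020-11-07", "2020-11-11", "2020-11-16")),
  discharge_prediction_lower_bound = as.Date("2020-11-09"),
  discharge_prediction_upper_bound = as.Date("2020-11-13")
)

out <- df %>%
  mutate(DIF = case_when(
    discharge_calender_date < discharge_prediction_lower_bound ~
      as.numeric(discharge_calender_date - discharge_prediction_lower_bound),
    discharge_calender_date <= discharge_prediction_upper_bound ~ 0,
    TRUE ~ as.numeric(discharge_calender_date - discharge_prediction_upper_bound)
  ))
out$DIF
# -2 (two days early), 0 (within the window), 3 (three days late)
```

The third row matches the question's own example: a 3-7 day prediction and a discharge after 10 days counts as 3 days off.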
Closed 2 years ago.
I processed the dataset.
df <- read.csv('https://raw.githubusercontent.com/ulklc/covid19-timeseries/master/countryReport/raw/rawReport.csv')
df$countryName = as.character(df$countryName)
Considering the cases announced, what percentage of the total cases do the top 3 countries (those reporting the most cases) account for? Can we find it?
Here is one way to do this with base R. Since the statistics are cumulative for each country by day, we subset to the most recent day's data with the [ form of the extract operator, sort by descending confirmed cases, then calculate the percentages and sum them for the first 3 rows.
df <- read.csv('https://raw.githubusercontent.com/ulklc/covid19-timeseries/master/countryReport/raw/rawReport.csv')
df$countryName <- as.character(df$countryName)

# subset to the most recent day
today <- df[df$day == max(df$day), ]
today <- today[order(today$confirmed, decreasing = TRUE), ]
today$pct <- today$confirmed / sum(today$confirmed)
paste("top 3 countries percentage as of", today$day[1], "is:",
      sprintf("%3.2f%%", sum(today$pct[1:3] * 100)))
...and the output:
> paste("top 3 countries percentage as of",today$day[1],"is:",
+ sprintf("%3.2f%%",sum(today$pct[1:3]*100)))
[1] "top 3 countries percentage as of 2020/05/30 is: 44.09%"
We can print selected data for the top 3 countries as follows, first defining the column list:

colList <- c("countryName", "day", "confirmed", "pct")
today[1:3, colList]

        countryName        day confirmed        pct
26000 United States 2020/05/30   1816117 0.29531640
3640         Brazil 2020/05/30    498440 0.08105067
21710        Russia 2020/05/30    396575 0.06448654
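For comparison, the same computation can be expressed in dplyr. A sketch using a toy stand-in for the final-day snapshot (the real data comes from the CSV above; the numbers here are made up):

```r
library(dplyr)

# Toy stand-in for the most recent day's per-country totals
today <- data.frame(
  countryName = c("US", "Brazil", "Russia", "India", "UK"),
  confirmed   = c(1800000, 500000, 400000, 180000, 270000)
)

out <- today %>%
  mutate(pct = confirmed / sum(confirmed)) %>%  # share of the world total
  slice_max(confirmed, n = 3) %>%               # keep the 3 largest
  summarise(top3_pct = sum(pct) * 100)
out
```

slice_max() (dplyr >= 1.0) replaces the manual order()-then-subset step of the base R version.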
Closed 8 years ago.
I have a dataset of around 1.5 lakh (150,000) observations and 2 variables: name and amount. name can have the same value again and again; for example, a name ABC can appear 50 times in the dataset.
I want a new data frame with two variables, name and total amount, where each name is unique and total amount is the sum of all its amounts in the previous dataset. For example, if ABC appears three times with amount == 1, 2, and 3 respectively in the previous dataset, then in the new dataset ABC will appear only once with total amount == 6.
You can use data.table for big datasets:

library(data.table)
res <- setDT(df)[, list(Total_Amount = sum(amount)), by = name]
Or use dplyr:

library(dplyr)
df %>%
  group_by(name) %>%
  summarise(Total_Amount = sum(amount))
Or, as suggested by @hrbrmstr:

count(df, name, wt = amount)
data

set.seed(24)
df <- data.frame(name   = sample(LETTERS[1:5], 25, replace = TRUE),
                 amount = sample(150, 25, replace = TRUE))
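For completeness, the same aggregation can also be done in base R with aggregate(); a sketch re-creating the simulated data above so it runs on its own:

```r
# Same simulated data as in the answer's "data" section
set.seed(24)
df <- data.frame(name   = sample(LETTERS[1:5], 25, replace = TRUE),
                 amount = sample(150, 25, replace = TRUE))

# One row per name, summing amount within each group
res <- aggregate(amount ~ name, data = df, FUN = sum)
names(res)[2] <- "Total_Amount"
res
```

aggregate() needs no extra packages, though data.table will be noticeably faster at the 150,000-row scale mentioned in the question.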