I want to create a table of the 10 most frequent reasons people discontinue a course. There are around 2,000 responses to my discontinuation survey, in a dataset entitled 'Discontinued', and 35 categories describing the 'Reason'. The code below works, but it gives me the frequency for all 35 discontinuation codes rather than just the top 10.
Discontinued[, list(Count = .N), by = reason][order(-Count)]
The data.table way to sort is setorder(), which reorders by reference rather than making a sorted copy. So instead of
Discontinued[,list(Count= .N), by = reason][order(-Count)][1:10]
it should be faster to use
setorder(Discontinued[, list(Count= .N), by = reason], -Count)[1L:10L]
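For a quick self-contained check (toy data; I'm assuming the column is literally named reason):

library(data.table)
# toy stand-in for the survey: 2,000 responses, one categorical reason each
Discontinued <- data.table(reason = sample(letters, 2000, replace = TRUE))
setorder(Discontinued[, .(Count = .N), by = reason], -Count)[1:10]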
Very new to R here, and also very new to the idea of coding and computer stuff in general.
It's my second week of class and I need to find some summary statistics from a set of data my professor provided. I downloaded the chart of data and tried to follow along with his verbal instructions during class, but I am one of the only people from a non-computer-science background in my degree program (I am an RN going for a degree in Health Informatics), so he went way too fast for me.
I was hoping for some input on just where to start with his list of tasks. I downloaded his data into an Excel file, then loaded it into R, where it is now a matrix. However, everything I try for getting the mean and standard deviation of the columns he wants comes up with an error. I understand that I need to convert these columns into some sort of vector, but every website online tells me to do these tasks differently, and I don't know where to start with this assignment.
Any help on how to get myself started would be greatly appreciated. I've included a screenshot of his instructions and of my matrix. And please excuse my ignorance/lack of familiarity compared to most of you here... this is my second week into my master's; I'm hoping I begin to pick this up soon, I'm just not there yet.
the instructions include:
# * Import the dataset
# * Summarize the dataset; compute the mean and standard deviation for the three variables (columns): age, height, weight
# * Tabulate the smoke and age.level data with the variable and its frequency. How many smokers are in each age category?
# * Subset the dataset to the mothers that smoke and weigh less than 100 kg. How many mothers meet these requirements?
# * Compute the mean and standard deviation for the three variables (columns): age, height, weight
# * Plot a histogram
Stack Overflow is not a place for homework, but I feel your pain. Let's go piece by piece.
First let's use a package that helps us do those tasks:
library(data.table) # if not installed, install it with install.packages("data.table")
Then, let's load the data:
library(readxl) #again, install it if not installed
dt = setDT(read_excel("path/to/your/file/here.xlsx"))
Now to the calculations:
1. Summarize the dataset. Here you'll see the ranges, means, medians and other interesting facts about your table:
summary(dt)
1A. Mean and standard deviation of age, height and weight (replace age in the line below with height or weight to get the others):
dt[, .(meanValue = mean(age, na.rm = TRUE), stdDev = sd(age, na.rm = TRUE))]
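If you'd rather not repeat that line per column, one shortcut (assuming the columns really are named age, height and weight) is:

# all three at once, using .SDcols to pick the columns
dt[, .(variable  = c("age", "height", "weight"),
       meanValue = sapply(.SD, mean, na.rm = TRUE),
       stdDev    = sapply(.SD, sd, na.rm = TRUE)),
   .SDcols = c("age", "height", "weight")]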
2. Tabulate smokers and age.level, getting the counts for each combination:
dt[, .N, by = .(smoke, age.level)]
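If you prefer a classic contingency table over long-format counts, base R's table() gives the same information:

# cross-tabulation: smokers (rows) by age category (columns)
with(dt, table(smoke, age.level))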
3. Subset smoker mothers with weight < 100 (I'm assuming non-pregnant mothers have NA in the gestation field; adjust as necessary):
dt[smoke == 1 & weight < 100 & !is.na(gestation), .N]
4. The same as 1A.
5. Plot a histogram (you don't specify of which variable, so let's say it's age):
hist(dt$age)
Keep on studying R; it's not that difficult. The book recommended in the comments is a very good start.
This is a bit of a complicated one, but I'll do my best to explain. I have a dataset built from data that I scrape from a particular video-on-demand interface every day. Each day there are around 120 titles on display (a grid of 12 x 10), and the data includes a range of variables: date of scrape, title of programme, vertical/horizontal position of programme, genre, synopsis, etc.
One of the things I want to do is analyse the similarity of what's on offer on a day-to-day basis. What I mean by this is that I want to compare how many of the titles on a given day appeared on the previous date (ideally expressed as a percentage). So if 40 (out of 120) titles were the same as the previous day, the similarity would be about 33%.
Here's the thing - I know how to do this (thanks to some kindly stranger on this very site who helped me write a script using R). You can see the post here which gives some more detail: Calculate similarity within a dataframe across specific rows (R)
However, this method creates a similarity score based on the total number of titles on a day-to-day basis, whereas I also want to be able to explore the similarity after applying other filters. Specifically, I want to narrow the focus to titles that appear within the first four rows and columns; in other words, how many of the titles in those positions are the same as the previous day? I could do this by modifying the R script, but it seems the better way would be to do it within Tableau, so that I can change these parameters in "real time", so to speak. That is, if I want to focus on the top 6 rows and columns, I don't want to have to run the R script all over again and update the underlying data!
It feels as though I'm missing something very obvious here - maybe it's a simple table calculation? Or I need to somehow tell Tableau how to subset the data?
Hopefully this all makes sense, but I'm happy to clarify if not. Also, I can't provide the underlying data (for research reasons!), but I can provide a sample if it would help.
Thanks in advance :)
You can have the best of both worlds. Use Tableau to connect to your data, filter as desired, then have Tableau call an R script to calculate similarity and return the results to Tableau for display.
If this fits your use case, you need to learn the mechanics to put it into play. On the Tableau side, you'll be using the functions that start with the word SCRIPT to call your R code, for example SCRIPT_REAL(), SCRIPT_INT(), etc. Those are table calculations, so you'll need to learn how table calculations work, in particular with regard to partitioning and addressing; this is described in the Tableau help. You'll also have to point Tableau at the host for your R code by managing external services under the Help -> Settings and Performance menu.
On the R side, you'll have to write your function, of course, and then use the function Rserve() to make it accessible to Tableau. Tableau sends vectors of arguments to R and expects a vector in response. The partitioning and addressing mentioned above control the size and ordering of those vectors.
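As a minimal sketch of the plumbing (the field name [Similarity] and the formula here are hypothetical, just to show the shape of a SCRIPT_ call):

# R side: start Rserve so Tableau can reach it (install.packages("Rserve") if needed)
library(Rserve)
Rserve()   # listens on localhost:6311 by default

# Tableau side: a calculated field (a table calculation) along the lines of
#   SCRIPT_REAL("mean(.arg1)", SUM([Similarity]))
# where .arg1 receives the vector of SUM([Similarity]) values for each partition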
It can be a bit tricky to get the mechanics working, but they do work. Practice on something simple first.
See Tableau's website resources for more information. The official name for this functionality is Tableau "analytic extensions".
I am sharing a strategy to solve this in R.
Step-1 Load the libraries and data:
library(tidyverse)
library(lubridate)
movies <- as_tibble(read.csv("movies.csv"))
movies$date <- as.Date(movies$date, format = "%d-%m-%Y")
Step-2 Set the rows and columns you want to restrict your similarity search to in two variables. Say you are restricting the search to 4 rows and 5 columns only:
filter_for_row <- 4
filter_for_col <- 5
Step-3 Get the final result:
movies %>%
  filter(rank <= filter_for_col, row <= filter_for_row) %>% # restrict the search to the designated rows and columns
  group_by(Title, date) %>%
  mutate(d_id = row_number()) %>%
  filter(d_id == 1) %>% # remove duplicate titles screened on any given day
  group_by(Title) %>%
  mutate(similarity = ifelse(lag(date) == date - lubridate::days(1), 1, 0)) %>% # was it screened the previous day?
  group_by(date) %>%
  summarise(total_movies_displayed = sum(d_id),
            similar_movies = sum(similarity, na.rm = TRUE),
            similarity_percent = similar_movies / total_movies_displayed)
# A tibble: 3 x 4
date total_movies_displayed similar_movies similarity_percent
<date> <int> <dbl> <dbl>
1 2018-08-13 17 0 0
2 2018-08-14 17 10 0.588
3 2018-08-15 17 9 0.529
If you change the filters to 12 and 12 respectively and re-run the same pipeline as above:
filter_for_row <- 12
filter_for_col <- 12
# A tibble: 3 x 4
date total_movies_displayed similar_movies similarity_percent
<date> <int> <dbl> <dbl>
1 2018-08-13 68 0 0
2 2018-08-14 75 61 0.813
3 2018-08-15 72 54 0.75
Good luck!
As Alex has suggested, you can have the best of both worlds. But to the best of my knowledge, Tableau Desktop only interfaces with R (or Python, etc.) through calculated fields, i.e. SCRIPT_INT(), SCRIPT_REAL(), and so on. These functions are table calculations, which in Tableau work only within a context; we cannot hard-code the resulting values as fields/columns, so we are not free to use them independently of that context. Moreover, table calculations in Tableau can neither be further aggregated nor mixed with LOD expressions. Thus, in your use case (again, to the best of my knowledge), you can build a parameter-dependent view in Tableau after hard-coding the values through any programming language of your choice. I therefore suggest creating a new column in your dataset before importing it into Tableau, by running the following (or the equivalent in your preferred language):
movies_edited <- movies %>%
  group_by(Title) %>%
  mutate(similarity = ifelse(lag(date) == date - lubridate::days(1), 1, 0)) %>%
  ungroup()
write.csv(movies_edited, "movies_edited.csv", row.names = FALSE) # row.names = FALSE avoids a stray index column in Tableau
This creates a new column named similarity in the dataset, where 1 means the title was screened on the previous day, 0 means it was not screened on the immediately preceding day, and NA means it is the first day of its screening.
I have imported this dataset into Tableau and created a parameter-dependent view, as you desired.
I have a data set which has products and their quantity sold. I want to write R code which tells me the best-selling product.
Products Quantity
Laminated 520
Laminated 150
Laminated 639
Laminated 702
SUPERSTAR 3
TAMAX 500
TAMAX 20
TAMAX 40
GreenDragon 40
GreenDragon 50
XPLODE 40
XPLODE 20
EXPERT 40
KHANJARBIOSL 40
Here, just by looking at the data set, we can say Laminated is the best product in terms of quantity sold. Can we write R code for this?
Thanks
There could be multiple ways to do this. One way, using tapply, is to get the sum of Quantity for each Product, then take the name of the maximum value:
names(which.max(tapply(df$Quantity, df$Products, sum, na.rm = TRUE)))
#[1] "Laminated"
You can use the data.table package. First do the sum, then sort in descending order on the aggregated value, then fetch the first row.
tb = data.frame(
  Products = c("Laminated", "Laminated", "Laminated", "Laminated", "SUPERSTAR",
               "TAMAX", "TAMAX", "TAMAX", "GreenDragon", "GreenDragon",
               "XPLODE", "XPLODE", "EXPERT", "KHANJARBIOSL"),
  Quantity = c(520, 150, 639, 702, 3, 500, 20, 40, 40, 50, 40, 20, 40, 40)
)
library(data.table)
tb = data.table(tb)
tb[,sum(Quantity), by="Products"][order(-V1)][1]
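The same logic reads a little better with a named aggregate instead of the default V1 column:

# identical result, but the sum gets a readable name
tb[, .(Total = sum(Quantity)), by = Products][order(-Total)][1]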
Hello, I have a data frame with more than 3,632,200 observations, and I'm trying to get some useful information out of it. I have cleaned it a bit, so this is what the data looks like now:
Order Lane Days
18852324 796005 - Ahmedabad 2
232313 796008 - Delhi 5
63963231 796005 - Ahmedabad 5
23501231 788152 - Chennai 1
2498732 796008 - Delhi 2
231413 796005 - Ahmedabad 3
75876876 796012 - Chennai 4
14598676 796008 - Delhi 4
Order holds distinct order IDs (they are all unique), Lane gives the different paths on which orders were delivered (lanes can repeat across orders), and Days is calculated with the difftime function in R as the difference between the order's delivered and created dates.
Now what I'm trying to achieve is something like this:
I can already calculate the 98th-percentile fulfilment time per lane using the quantile function in R.
But how do I get the % of orders fulfilled by day 1 through day 5 across the various lanes?
Any help would be highly appreciated.
Thank You
Hard to tell without the data, but maybe something like this:
library(purrr)
# df = your data; column names Days and Lane match the sample above
max_days = max(df$Days)
aggregate_fun = function(x){
  days = factor(x$Days, levels = 1:max_days)
  prop.table(table(days))
}
df = split(df, df$Lane)
results = reduce(lapply(df, aggregate_fun), rbind)
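One thing worth noting: "fulfilled by day N" is cumulative, while prop.table() above gives the share fulfilled on each exact day. A sketch of the conversion (assuming results is the lane-by-day matrix built above, one row per lane):

# row-wise cumulative sums turn per-day shares into "fulfilled by day N" percentages
results_cum <- t(apply(results, 1, cumsum))
round(100 * results_cum, 1)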
I am currently working on the so-called "Moneyball" problem. I am basically trying to select the best combination of three baseball players (based on certain baseball-relevant statistics) for the least amount of money.
I have the following dataset (OBP, SLG, and AB are statistics that describe the performance of a player):
# the table has about 100 observations;
# the data frame is called "batting.2001"
playerID OBP SLG AB salary
giambja01 0.3569001 0.6096154 20 410333
heltoto01 0.4316547 0.4948382 57 4950000
berkmla01 0.2102326 0.6204506 277 305000
gonzalu01 0.4285714 0.3880131 409 9200000
martied01 0.4234079 0.5425532 100 5500000
My goal is to pick three players who in combination have the highest possible sum of OBP, SLG, and AB, but at the same time do not exceed a total salary of $15,000,000.
My approach so far has been rather simple... I just arranged the columns OBP, SLG, and AB in descending order and picked the three players at the top that in combination do not exceed the salary restriction of $15 million:
batting.2001 %>%
  arrange(desc(OBP), desc(SLG), desc(AB))
Can any of you think of a better solution? Also, what if I wanted to get the best combination of three players for the least amount of money? What approach would you use in that scenario?
Thanks in advance, and looking forward to reading your solutions.
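For scale: with ~100 players, brute force over all choose(100, 3) = 161,700 trios is instant. One possible sketch (assuming "best" simply means the largest raw OBP + SLG + AB sum, and that salary is in dollars):

# exhaustive search: all 3-player combinations under the $15M cap,
# scored by the combined OBP + SLG + AB total
ids   <- combn(nrow(batting.2001), 3)    # one column per candidate trio
under <- apply(ids, 2, function(i) sum(batting.2001$salary[i]) <= 15e6)
trios <- ids[, under, drop = FALSE]
score <- apply(trios, 2, function(i)
  sum(batting.2001$OBP[i]) + sum(batting.2001$SLG[i]) + sum(batting.2001$AB[i]))
batting.2001[trios[, which.max(score)], ]

Note that AB is on a much larger scale than OBP and SLG, so a raw sum is dominated by AB; you would probably want to rescale the columns (e.g. with scale()) before summing. For "best value for money", one could similarly rank the trios by score divided by total salary.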