Limit the height of bar chart in R

I have data divided into about 25 categories. Almost all of the categories have 1000 or fewer products each, except for 2 or 3 categories which have more than 3000 or 4000 products. So when I plot the barchartGC of products vs. categories, these 2 or 3 categories make the rest of the bars look very small.
I was wondering if there is any way to divide the height of the bar chart into 2 or 3 levels: a first level for categories with fewer than 1000 products, and a second level for those with more.
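One common workaround, rather than literally breaking the y-axis, is to facet the chart by magnitude so each panel gets its own scale. A minimal sketch with ggplot2 (swapping barchartGC for ggplot2, and using a hypothetical data frame df with columns category and count):

library(ggplot2)

# hypothetical example data: 22 small categories and 3 large ones
df <- data.frame(category = paste0("cat", 1:25),
                 count    = c(seq(100, 940, 40), 3500, 4200, 3900))

# label each category by magnitude, then facet on that label;
# scales = "free" gives each panel its own y-axis
df$size_group <- ifelse(df$count > 1000, "> 1000 products", "<= 1000 products")

ggplot(df, aes(x = category, y = count)) +
  geom_col() +
  facet_wrap(~ size_group, scales = "free") +
  labs(x = "Category", y = "Products")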

Related

Calculating a ratio in a ggplot2 graph while retaining faceting variables

So I don't think this has been asked before, but SO search might just be getting confused by combinations of 'ratio' and 'faceting'. I'm trying to calculate a productivity ratio: the number of widgets produced for the number of workers on a given day or period. I've got my data structured in a single data frame, with each widget produced each day by each worker in its own record, and workers that worked that day but didn't produce a widget also in their own records, along with various metadata.
Something like this:
widget_ind  employee_active_ind  employee_id  day       product_type  employee_bu
1           1                    123          6/1/2021  pc            americas
0           1                    234          6/1/2021  mac           emea
0           1                    345          6/1/2021  mac           apac
1           1                    444          6/1/2021  mac           americas
1           1                    333          6/1/2021  pc            emea
0           1                    356          6/1/2021  pc            americas
I'm trying to find the ratio of widget_inds to employee_active_inds over time, while retaining the metadata, so that I can filter or facet within the ggplot2 code, something like:
plot <- ggplot(data = df[df$employee_bu == 'americas',], aes(y = (widget_ind/employee_active_ind), x = day)) +
  geom_bar(stat = 'identity', position = 'stack') +
  facet_wrap(product_type ~ ., scales = 'fixed') #change these to look at different cuts of metadata
print(plot)
Retaining the metadata is appealing, rather than making individual data frames summarized by the various combinations, but the results with no faceting aren't even correct (e.g. the ggplot shows a bar chart with a height of ~18 widgets per person, while a summarized data frame with no faceting shows a ratio of less than 1 widget per person).
I'm currently getting this error when I run the ggplot code:
Warning message:
Removed 9865 rows containing missing values (geom_bar).
Which doesn't make sense, since in my data frame neither widget_ind nor employee_active_ind has any NA values, so calculating the ratio of the two should always work.
Edit 1: Clarifying employee_active_ind: I should not have any employee_active_ind = 0, but my current joins produce them (and it passes the reality sniff test; the process we are trying to model allows you to do work on day 1 that results in a widget on day 2, when you may not do any work, so you wouldn't be counted as active on that day). I think I need to re-think my data structure. Even so, I'm assuming here that ggplot2 is acting like it would for a given bar chart: it takes the number in each widget_ind record for a given day (along with any facets and filters), then sums that set and displays the result. The wrinkle I'm adding is dividing by the number of active employees on that day, and while you can have someone out on a given day, you'd never have everyone out. But that isn't what ggplot is doing, is it?
I agree with MrFlick, especially about the question concerning employee_active_ind values of 0. If you have them, the division could produce Inf or NaN values wherever something is divided by 0, and ggplot removes those rows.
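For what it's worth, with stat = 'identity' geom_bar stacks the per-row ratios within each day, which is not the same as sum(widget_ind) / sum(employee_active_ind) for that day and would explain the inflated ~18 value. A sketch of summarizing first and then plotting, assuming dplyr and the column names above:

library(dplyr)
library(ggplot2)

# compute the daily ratio per product_type before plotting
daily <- df %>%
  filter(employee_bu == "americas") %>%
  group_by(day, product_type) %>%
  summarise(ratio = sum(widget_ind) / sum(employee_active_ind),
            .groups = "drop")

ggplot(daily, aes(x = day, y = ratio)) +
  geom_col() +
  facet_wrap(~ product_type)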

How to resize grid to remove area where there is no data when using factors in R?

I hope I can explain this correctly... basically, what I'm trying to do is to remove points on a grid where there is no data... but the issue is, I'm trying to do this with 2 factors!
Hopefully I can explain more clearly below.
To begin, I have 2 factors, drink and food, as shown below. Then I'm creating a grid (which I'm using to calculate something else), but I'm trying to remove 'points' from the grid where there is no data... for example:
drink = as.factor(c("A","A","A","A","A","A","A","A","A","A","A","B"))
food = as.factor(c('pizza','pizza','pizza','fries','fries','taco','taco','pizza','taco','pizza','taco','fries'))
# looking at a contingency table
table(drink, food)
     food
drink fries pizza taco
    A     2     5    4
    B     1     0    0
Now I'm creating the grid that spans the entire range of the data, like so:
# create the grid
gridvals1 <- levels(drink)
gridvals2 <- levels(food)
gridvalsNew <- expand.grid(gridvals1, gridvals2)
If we plot the data and the grid side-by-side, we can see that the grid covers area where there is no data:
par(mfrow=c(1,2))
plot(drink, food)
plot(gridvalsNew)
What I'm trying to do is resize the grid so it removes the area where there is no data (i.e., where the count is zero), but I can't figure it out.
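One way to get there is to build the reduced grid from the contingency table itself, keeping only the combinations with a non-zero count. A minimal sketch using the drink/food data above:

# one row per drink/food combination, with its count in Freq
counts <- as.data.frame(table(drink, food))

# keep only the combinations that actually occur in the data
gridvalsNew <- counts[counts$Freq > 0, c("drink", "food")]
gridvalsNew
#   drink  food
# 1     A fries
# 2     B fries
# 3     A pizza
# 5     A  taco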

Stacked bar chart of 4 variables with ggplot

I am very new to this, and I have tried various ways to reshape/melt the data. Here is my data in three different variations:
Version 1:
year,type,total,action,perc
2015,v,"1,199,310",crime,42.16
2015,p,"8,024,115",crime,18.24
2015,v,"505,681",arrest,42.16
2015,p,"1,463,213",arrest,18.24
2016,v,"1,250,162",crime,32.85
2016,p,"7,928,530",crime,17.07
2016,v,"410,717",arrest,32.85
2016,p,"1,353,283",arrest,17.07
2017,v,"1,247,321",crime,41.58
2017,p,"7,694,086",crime,16.24
2017,v,"518,617",arrest,41.58
2017,p,"1,249,757",arrest,16.24
Version 2:
year,type,crime,arrest,perc
2015,1,"1,199,310","505,681",42.16
2015,2,"8,024,115","1,463,213",18.24
2016,1,"1,250,162","410,717",32.85
2016,2,"7,928,530","1,353,283",17.07
2017,1,"1,247,321","518,617",41.58
2017,2,"7,694,086","1,249,757",16.24
Version 3:
df <- vpcrimetotal
year,vcrime,varrest,varrestperc,pcrime,parrest,parrestperc
2017,"1,247,321","518,617",0.4158,"7,694,086","1,249,757",0.1624
2016,"1,250,162","410,717",0.3285,"7,928,530","1,353,283",0.1707
2015,"1,199,310","505,681",0.4216,"8,024,115","1,463,213",0.1824
The idea is to show the total number of violent crimes versus property crimes from 1990-2017, with the number of arrests (labeled as a percent) inside each bar based on crime type (property or violent). The preference is to stack all four into one bar per year with different colors for each.
I found these questions, which helped, but I was still confused about how to fit my data into them: how to create stacked bar charts for multiple variables with percentages, and, for the look I'm after, Count and Percent Together using Stack Bar in R.
I have tried to adapt these to my data sets, but it would probably be confusing if I posted all the different attempts that don't work.
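As a starting point, Version 1 is already in long format, which is what ggplot2 wants for stacking. A minimal sketch, assuming Version 1 is saved as crime.csv (hypothetical filename); the quoted totals contain commas, so they need to be stripped before converting to numeric:

library(ggplot2)

df <- read.csv("crime.csv", stringsAsFactors = FALSE)
df$total <- as.numeric(gsub(",", "", df$total))

# one bar per year, stacked by the type/action combination
ggplot(df, aes(x = factor(year), y = total, fill = interaction(type, action))) +
  geom_col(position = "stack") +
  labs(x = "Year", y = "Total", fill = "Type / action")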

How do I put more than 1 condition on a set of data?

I've got a set of data, olympic_height.txt, where each row corresponds to a person. There are 3 columns that tell you their height, their gender and what sport they play respectively. How do I obtain a subset of the data that only contains people that are male and play basketball for example?
I tried this
MBP=read.table(file="olympic_height.txt", header=T)
MBP$sex=="M"
MBP$sport=="Basketball"
t=MBP
boxplot(t)
My goal is to have one boxplot of heights of male basketball players and one of heights of male football players. When I try this, I end up with 2 identical boxplots and I'm certain they should be very different. What am I doing wrong?
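The two comparison lines on their own just print logical vectors; they don't change MBP, so t is the full data set both times, and the two boxplots come out identical. A sketch of using the comparisons as row indexes instead (assuming the height column is named height):

MBP <- read.table(file = "olympic_height.txt", header = TRUE)

# combine the conditions with & and use them to subset the rows
basketball <- MBP[MBP$sex == "M" & MBP$sport == "Basketball", ]
football   <- MBP[MBP$sex == "M" & MBP$sport == "Football", ]

boxplot(basketball$height, football$height,
        names = c("Basketball", "Football"))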

Creating subsets of the highest 25% of values using them in a Venn Diagram

Here is an example of my space delimited data, which has 796 rows in total:
Locus GSL Barents Ireland
1 cgpGmo-S1001 0.25805 0.00339 0.02252
2 cgpGmo-S1006 0.11041 0.04298 0.06036
3 cgpGmo-S1007 0.24085 0.08937 0.03964
4 cgpGmo-S1008 0.07428 0.10824 0.01802
5 cgpGmo-S1009 0.08524 0.01471 0.00000
6 cgpGmo-S1013 0.03547 0.05091 0.00991
What I am seeking to do is isolate the top quartile (25%) for each of the three categories, and then draw a Venn diagram showing the number of loci (rows) whose values are in the top 25% for 1, 2, or all three categories.
I am fairly sure I can use the VennDiagram package to create the diagrams, but I am unsure how to generate the lists of loci that fall in the top 25% of each category to use as inputs for the Venn.
A simple case - sort and get the lowest 25%:
a <- seq(100, 1, -1)
b <- seq(100, 1, -1)
d <- data.frame(col1 = a, col2 = b)
sort(d$col1)[1:(length(d$col1) / 4)]
will sort and give you the lowest 25% of the values.
(Or, to avoid sorting, which could be memory intensive, use order:
d$col1[order(d$col1)][1:(length(d$col1) / 4)]
)
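For the top quartile specifically, quantile() gives the cutoff directly: loci whose value is at or above the 75th percentile can be collected per category and passed to the VennDiagram package. A minimal sketch, assuming the data above has been read into df:

# loci in the top 25% of each category
top_gsl     <- df$Locus[df$GSL     >= quantile(df$GSL,     0.75)]
top_barents <- df$Locus[df$Barents >= quantile(df$Barents, 0.75)]
top_ireland <- df$Locus[df$Ireland >= quantile(df$Ireland, 0.75)]

# these vectors can then go straight into the Venn diagram, e.g.
# VennDiagram::venn.diagram(list(GSL = top_gsl, Barents = top_barents,
#                                Ireland = top_ireland), filename = NULL)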
