Can someone please explain the Golden Grid System grid to me? - css

I'm trying to do a responsive two-column layout (content and sidebar) with the Golden Grid System grid and am having trouble understanding it. I really like the ideas behind this system (no fixed width, zoomable baseline, etc...) but don't know how to do the columns. I would like sidebar and content columns that sit side by side on desktop, with the sidebar on top and the content below on tablet/mobile. Any help is appreciated.

Creating the columns can be a little tricky when you first look at the GGS, because the example provided on the website doesn't illustrate well how to use the grid to build columns.
The most important thing to understand about the GGS is that it's not a grid framework; it only suggests column widths and the like. If you've downloaded the CSS, you'll see these suggestions outlined in the comments.
Four-column grid active
----------------------------------------
Margin | # 1 2 3 4 | Margin
5.55555% | % 25 50 75 100 | 5.55555%
Eight-column grid active
----------------------------------------------------------------------
Margin | # 1 2 3 4 5 6 7 8 | Margin
5.55555% | % 12.5 25.0 37.5 50.0 62.5 75.0 87.5 100 | 5.55555%
Sixteen-column grid active
----------------------------------------------------------------------------------------------------------------------
Margin | # 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 | Margin
5.55555% | % 6.25 12.5 18.75 25.0 31.25 37.5 43.75 50.0 56.25 62.5 68.75 75.0 81.25 87.5 93.75 100 | 5.55555%
To create the grid, you need to choose which variant best fits your needs. Say you've chosen the eight-column grid: the simplest way is to build from the first column width (12.5%) and add that width for each subsequent column. To make it responsive, simply wrap the appropriate column definitions in a media query that corresponds to the appropriate breakpoint.
See fiddle example: http://jsfiddle.net/ricebunny/C6QEu/12/
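To make this concrete, here is a minimal sketch of that approach for your two-column case. The class names and the 768px breakpoint are my own assumptions, not part of the GGS download: the sidebar and content stack by default (sidebar first, so it lands on top on mobile) and sit side by side above the breakpoint, using the eight-column width suggestions.

```css
/* Sketch only: class names and the 768px breakpoint are assumptions. */
.wrapper {
  margin: 0 5.55555%;       /* the GGS outer margins */
  overflow: hidden;         /* contain the floats */
}
.sidebar,
.content {
  float: left;
  width: 100%;              /* stacked by default (mobile/tablet) */
}
@media screen and (min-width: 768px) {
  .sidebar { width: 25%; }  /* 2 of 8 columns */
  .content { width: 75%; }  /* 6 of 8 columns */
}
```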

On the page you provided there are 4 files that you can download. I would suggest using those and adapting them.
If you only want to do a 2 column layout without a fixed width, why not create your own layout with 2 simple columns and use a percentage as the width?

Related

Problems with stem - can't stem correctly, and with identify in R using plot

I'm trying to use the stem function to create a stem-and-leaf plot, but as far as I can tell it isn't working correctly, and I don't know why.
I'm doing something like this:
d= c(60,85,72,59,37,75,93,7,98,63,41,90,5,17,97)
stem(d,scale=1)
And I'm getting stem like this:
0 | 577
2 | 7
4 | 19
6 | 0325
8 | 50378
As far as I know there isn't any 27 in d, or two 7s...
It's behaving weirdly and incorrectly, and I don't know why. Additionally, I know there should be one more column with the counts of observations, and my stem doesn't have it...
The data are being collapsed into groups of 20, not 10. You can see that the stem goes up in 2s, so each stem line covers two tens.
If you lengthen the plot with scale, e.g. stem(d, scale = 2), this becomes obvious and the diagram looks correct. You can see that what looks like a '27' in your plot above is actually the 37.
0 | 57
1 | 7
2 |
3 | 7
4 | 1
5 | 9
6 | 03
7 | 25
8 | 5
9 | 0378

Referencing a different column as ranges between two data frames

I have one data frame/list that gives an ID and a number
1. 25
2. 36
3. 10
4. 18
5. 12
This first list is effectively a list of objects with the number of objects contained in each, e.g. bricks in a wall: a list of walls with the number of bricks in each.
I have a second that contains a full list of the objects referred to in the list above, with a second attribute for each.
1. 3
2. 4
3. 2
4. 8
5. 5
etc.
In the weak example I'm stringing together, this would be a list of the weight of each brick in all walls.
So my first list gives me the ranges I would like to average over in the second list; as an end result I would like a list of walls with the average brick weight per wall.
i.e. average the attributes of rows 1-25, 26-62 ... 89-101
My idea so far was to create a data frame with two columns
1. 1 25
2. 26 62
3. n
4. n
5. 89 101
and then attempt to create a third column that uses the first two as x and y in a mean(table2$column1[x:y]) type formula, but I can't get anything to work.
The end result would probably look something like this:
1. 3.2
2. 6.5
3. 3
4. 7.9
5. 8.5
Is there a way to do it like this, or does anyone have a more elegant solution?
You could do something like this... set the low and high limits of your ranges and then use mapply to work out the mean over the appropriate rows of df2.
df1 <- data.frame(id = c(1, 2, 3, 4, 5), no = c(25, 36, 10, 18, 12))
df2 <- data.frame(obj = 1:100, att = sample(1:10, 100, replace = TRUE))
df1$low <- cumsum(c(1, df1$no[-nrow(df1)]))  # where each range starts: 1, 26, 62, ...
df1$high <- pmin(cumsum(df1$no), nrow(df2))  # where each range ends, capped at nrow(df2)
df1$meanatt <- mapply(function(l, h) mean(df2$att[l:h]), df1$low, df1$high)
df1
id no low high meanatt
1 1 25 1 25 4.760000
2 2 36 26 61 5.527778
3 3 10 62 71 5.800000
4 4 18 72 89 5.111111
5 5 12 90 100 4.454545

Doing a series of operations on every subset of the data obtained from a dataframe

This is a noob question in the R world. I tried searching and there were quite a few solutions that came close (e.g. aggregate, by, etc.), but I lacked the understanding to apply them to my problem. I would really appreciate it if someone could guide me in a more detailed way.
Hypothetical Dataset
Name Wheels Color Mileage seat_capacity
1 2 Red 70 2
2 3 Black 60 7
3 4 Blue 12 5
4 4 White 15 6
5 3 Yellow 45 6
6 2 Green 70 2
7 3 Silver 45 6
8 6 Silver 5 4
9 14 Red 12 2
10 2 Black 70 7
11 4 Blue 70 5
12 3 White 60 6
13 4 Yellow 12 6
14 4 Green 15 2
I have initially created subsets of data based on color using split.
color <- split(df,df$color)
For each of the subsets created I would be doing more operations e.g
finding the vehicles with highest mileage among the vehicles with lowest number of wheels in each subset.....etc
I have written all the rules pertaining to the latter half as well. I am struggling to find a way to run all the operations on each of the subsets in the variable color.
Any help would be appreciated.
The following worked for me, and I would sincerely like to thank @Imo and @aosmith for guiding me.
Assume I would want to first group the df by color, then group further by wheels, and then within each such subgroup (wheels) pick the top 2 vehicles by Mileage. I used the dplyr library to achieve this.
my_list <- df %>% group_by(color, wheels) %>% top_n(2,Mileage)
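For completeness, the split() approach from the question works too: once you have the list of subsets, lapply() runs the same operations on each one. Here is a base-R sketch (column names are taken from the example data, so the capitalization is an assumption), implementing the rule from the question of picking the highest-mileage vehicle among those with the fewest wheels in each subset:

```r
# Split by color, then apply the same rule to every subset.
color <- split(df, df$Color)
picks <- lapply(color, function(sub) {
  fewest <- sub[sub$Wheels == min(sub$Wheels), ]  # lowest wheel count in this subset
  fewest[which.max(fewest$Mileage), ]             # highest mileage among those
})
do.call(rbind, picks)  # bind the per-color picks back into one data frame
```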
HTH

R - Rank and Group [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
This is going to be a long shot, but I'll try anyway. I want to build a centile (100 groups) or decile (10 groups) ranking based on the data.frame available.
In this example, I have a data frame with 891 records. In this data.frame, I have the following variables.
Unique_ID (numerical). i.e. unique member number
xbeta (numerical) Given credit score. (which allows ranking to be performed)
Good (numerical). Binary Flag (0 or 1). An indicator if member is delinquent
Bad (numerical). Binary Flag (0 or 1) inverse of good
I need your help to build an equivalent of the table below. By changing the number of groups, I'd be able to split it into either 10 or 100 groups using xbeta. With the top row being the total (identifiable via TYPE), I'd like to produce the following table (see the table below for more details).
r_xbeta is just row number based on the # of groups.
TYPE to identify total or group rank
n = Total Count
count of Good | Bad flag within the rank
xbeta stats, min | max | mean | median
GB_Odds = GOOD / BAD for the rank
LN_GB_ODDs = Log(GB_Odds)
The rest should be self-explanatory.
Your help is much appreciated.
Jim learning R
r_xbeta _TYPE_ n GOOD BAD xbeta_min xbeta_max xbeta_mean xbeta_MEDIAN GB_ODDS LN_GB_ODDS Cummu_Good Cummu_Bad Cummu_Good_pct Cummu_Bad_pct
. 0 891 342 549 -4.42 3.63 -0.7 -1.09 0.62295 -0.47329 342 549 100% 100%
0 1 89 4 85 -4.42 -2.7 -3.6 -3.57 0.04706 -3.05636 4 85 1.20% 15%
1 1 89 12 77 -2.69 -2.37 -2.55 -2.54 0.15584 -1.8589 16 162 4.70% 30%
2 1 87 12 75 -2.35 -1.95 -2.16 -2.2 0.16 -1.83258 28 237 8.20% 43%
3 1 93 14 79 -1.95 -1.54 -1.75 -1.79 0.17722 -1.73039 42 316 12% 58%
4 1 88 10 78 -1.53 -1.09 -1.33 -1.33 0.12821 -2.05412 52 394 15% 72%
5 1 89 27 62 -1.03 -0.25 -0.67 -0.69 0.43548 -0.8313 79 456 23% 83%
6 1 89 44 45 -0.24 0.33 0.05 0.03 0.97778 -0.02247 123 501 36% 91%
7 1 89 54 35 0.37 1.07 0.66 0.63 1.54286 0.43364 177 536 52% 98%
8 1 88 77 11 1.08 2.15 1.56 1.5 7 1.94591 254 547 74% 100%
9 1 90 88 2 2.18 3.63 2.77 2.76 44 3.78419 342 549 100% 100%
A reproducible example would be great, i.e. something we can copy-paste to our terminal that demonstrates your problem. For example, here is the dataframe I'll work with:
set.seed(1) # so you get the same random numbers as me
my_dataframe <- data.frame(Unique_ID = 1:891,
xbeta=rnorm(891, sd=10),
Good=round(runif(891) < 0.5),
Bad=round(runif(891) < 0.5))
head(my_dataframe)
# Unique_ID xbeta Good Bad
# 1 1 -6.264538 1 0
# 2 2 1.836433 1 0
# 3 3 -8.356286 0 1
# 4 4 15.952808 1 1
# 5 5 3.295078 1 0
# 6 6 -8.204684 1 1
(The particular numbers don't matter to your question which is why I made up random ones).
The idea is to:
work out which quantile each row belongs to: see ?quantile. You can specify which quantiles you want (I've shown deciles)
quantile(my_dataframe$xbeta, seq(0, 1, by=.1))
# 0% 10% 20% 30% 40% 50% 60% 70% 80% 90% 100%
# -30.0804860 -13.3880074 -8.7326454 -5.1121923 -3.0097613 -0.4493361 2.3680366 5.3732613 8.7867326 13.2425863 38.1027668
This gives the quantile cutoffs; if you use cut on these you can add a variable that says which quantile each row is in (?cut):
my_dataframe$quantile <- cut(my_dataframe$xbeta,
quantile(my_dataframe$xbeta, seq(0, 1, by=.1)))
Have a look at head(my_dataframe) to see what this did. The quantile column is a factor.
split up your dataframe by quantile, and calculate the stats for each. You can use the plyr, dplyr or data.table packages for this; I recommend one of the first two as you are new to R. If you need to do massive merges and calculations on huge tables efficiently (thousands of rows) use data.table, but the learning curve is much steeper. I will show you plyr purely because it's the one I find easiest. dplyr is very similar, but just has a different syntax.
# The idea: `ddply(my_dataframe, .(quantile), FUNCTION)` applies FUNCTION
# to each subset of `my_dataframe`, where we split it up into unique
# `quantile`s.
# For us, `FUNCTION` is `summarize`, which calculates summary stats
# on each subset of the dataframe.
# The arguments after `summarize` are the new summary columns we
# wish to calculate.
library(plyr)
output = ddply(my_dataframe, .(quantile), summarize,
n=length(Unique_ID), GOOD=sum(Good), BAD=sum(Bad),
xbeta_min=min(xbeta), xbeta_max=max(xbeta),
GB_ODDS=GOOD/BAD) # you can calculate the rest yourself,
# "the rest should be self explanatory".
> head(output, 3)
quantile n GOOD BAD xbeta_min xbeta_max GB_ODDS
1 (-30.1,-13.4] 89 41 39 -29.397737 -13.388007 1.0512821
2 (-13.4,-8.73] 89 49 45 -13.353714 -8.732645 1.0888889
3 (-8.73,-5.11] 89 46 48 -8.667335 -5.112192 0.9583333
Calculate the other columns. See (E.g.) ?cumsum for cumulative sums. e.g. output$cummu_good <- cumsum(output$GOOD).
Add the 'total' row. You should be able to do this. You can add an extra row to output using rbind.
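One possible sketch of that last step (the column set matches the summary columns computed by the ddply call above; converting the quantile column to character first avoids factor-level issues when binding):

```r
# Sketch: build the 'total' row from the whole dataframe and prepend it.
output$quantile <- as.character(output$quantile)  # avoid factor-level clashes in rbind
total <- data.frame(quantile = "TOTAL",
                    n = nrow(my_dataframe),
                    GOOD = sum(my_dataframe$Good),
                    BAD  = sum(my_dataframe$Bad),
                    xbeta_min = min(my_dataframe$xbeta),
                    xbeta_max = max(my_dataframe$xbeta),
                    GB_ODDS = sum(my_dataframe$Good) / sum(my_dataframe$Bad))
output <- rbind(total, output)
```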
Here is the final version of my script with math coffee's guidance. I had to use .bincode instead of the suggested cut due to a "'breaks' are not unique" error.
Thanks everyone.
set.seed(1) # so you get the same random numbers as me
my_dataframe <- data.frame(Unique_ID = 1:891,
xbeta=rnorm(891, sd=10),
Good=round(runif(891) < 0.5),
Bad=round(runif(891) < 0.5))
head(my_dataframe)
quantile(my_dataframe$xbeta, seq(0, 1, by=.1))
my_dataframe$quantile = .bincode(my_dataframe$xbeta,quantile(my_dataframe$xbeta,seq(0,1,by=.1)))
library(plyr)
output = ddply(my_dataframe, .(quantile), summarize,
n=length(Unique_ID), GOOD=sum(Good), BAD=sum(Bad),
xbeta_min=min(xbeta), xbeta_max=max(xbeta), xbeta_median=median(xbeta), xbeta_mean=mean(xbeta),
GB_ODDS=GOOD/BAD, LN_GB_ODDS = log(GOOD/BAD))
output$cummu_good = cumsum(output$GOOD)
output$cummu_bad = cumsum(output$BAD)
output$cummu_n = cumsum(output$n)
output$sum_good = sum(output$GOOD)
output$sum_bad = sum(output$BAD)
output$cummu_good_pct = cumsum(output$GOOD/output$sum_good)
output$cummu_bad_pct = cumsum(output$BAD/output$sum_bad)
output[["sum_good"]]=NULL
output[["sum_bad"]]=NULL
output

SAP BO XI Desktop Intelligence Aggregate Calculations

I am new to Business Objects and more specifically Desktop Intelligence. We are trying to use it as a reporting tool for our scientific data but running into issues when performing calculations to "create" objects and then trying to perform statistical or aggregate functions on them. For example I run a query that pulls the columns subject name, result day, parameter, and result value. In a table it would look like this:
SUBJECT DAY PARAM RV
10001 0 Length 5.32
10001 0 Width 4.68
10002 0 Length 3.98
10002 0 Width 1.64
10001 7 Length 8.89
10001 7 Width 7.30
10002 7 Length 4.17
10002 7 Width 2.19
We then use the equation for Volume, L*W^2*0.52, defined in the report as a measure variable. Using a cross tab with days across the top and subjects down the side, I display Length, Width and Tumor Volume like so:
0 7
SUBJECT L W V L W V
10001 5.32 4.68 60.59 8.89 7.30 246.35
10002 3.98 1.64 5.57 4.17 2.19 10.40
COUNT # #
MEAN # #
Within the footers I'd like to display aggregates such as count, standard deviation, and percent change from day zero, but they all come out wrong. It's not simply that the n is doubled to account for the fact that Length and Width make up Volume. I have no clue and am at a loss. Any advice, suggestions or guidance would be welcome.
Thanks in advance,
Jeff
I assume that your cross tab looks like the following in Slice and Dice:
¦ <DAY> (Break)
¦ <PARAM>
--------------------
<SUBJECT> ¦ <RV>
So your table should look something like:
0 7
Length Width Volume Length Width Volume
10001 5.32 4.68 60.59 8.89 7.30 246.35
10002 3.98 1.64 5.57 4.17 2.19 10.40
With <DAY>'s break footer having the volume variable.
For your volume calculation I've used the formula: =(<RV> Where (<PARAM>="Length"))*(Power(<RV> Where (<PARAM>="Width") , 2))*0.52
Right click on the cross tab edge and select Format Crosstab... Then check the Show Footer check box in the Down Edge Display section of the General tab. Add extra rows in the footer if you need them.
Then manually add the formulas for count =Count(<VOLUME>) and mean =Average(<VOLUME>)
For me the final table now looks like this (With values rounded to 2dp):
0 7
Length Width Volume Length Width Volume
10001 5.32 4.68 60.59 8.89 7.30 246.35
10002 3.98 1.64 5.57 4.17 2.19 10.40
Count 2.00 2.00
Mean 33.08 128.37
The trick is making sure the calculations happen in the right context (that is, with respect to the header variables in the different sections of the table). You can add and remove variables and context with the functions In, ForAll and ForEach, although I haven't needed to use them for this table.
