I'm not sure exactly what to call this, but I'm trying to achieve a sort of "broken histogram" or "axis gap" effect: http://gnuplot-tricks.blogspot.com/2009/11/broken-histograms.html (example is in gnuplot) with R.
It looks like I should be using the gap.plot() function from the plotrix package, but I've only seen examples of doing that with scatter and line plots. I've been able to add a break in the box around my plot and put a zigzag in there, but I can't figure out how to rescale my axes to zoom in on the part below the break.
The whole point is to be able to show the top value for one really big bar in my histogram while zooming into the majority of my bins which are significantly shorter. (Yes, I know this could potentially be misleading, but I still want to do it if possible)
Any suggestions?
Update 5/10/2012 1040 EST:
If I make a regular histogram with the data and use <- to save it into a variable (hdata <- hist(...)), I get the following values for the following variables:
hdata$breaks
[1] 0.00 0.20 0.21 0.22 0.23 0.24 0.25 0.26 0.27 0.28 0.29 0.30 0.31 0.32 0.33
[16] 0.34 0.35 0.36 0.37 0.38 0.39 0.40 0.41 0.42 0.43 0.44 0.45 0.46 0.47 0.48
[31] 0.49 0.50 0.51 0.52 0.53 0.54 0.55 0.56 0.57 0.58 0.59 0.60 0.61 0.62 0.63
[46] 0.64 0.65 0.66 0.67 0.68 0.69 0.70 0.71 0.72 0.73 0.74 0.75 0.76 0.77 0.78
[61] 0.79 0.80 0.81 0.82 0.83 0.84 0.85 0.86 0.87 0.88 0.89 0.90 0.91 0.92 0.93
[76] 0.94 0.95 0.96 0.97 0.98 0.99 1.00
hdata$counts
[1] 675 1 0 1 2 2 0 1 0 2
[11] 1 1 1 2 5 2 1 0 2 0
[21] 2 1 2 2 1 2 2 2 6 1
[31] 0 2 2 2 2 3 5 4 0 1
[41] 5 8 6 4 10 3 7 7 4 3
[51] 7 6 16 11 15 15 16 25 20 22
[61] 31 42 48 62 57 45 69 70 98 104
[71] 79 155 214 277 389 333 626 937 1629 3471
[81] 175786
I believe I want to use $breaks as my x-axis and $counts as my y-axis.
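(Side note, in case it trips anyone up: $breaks has one more element than $counts because breaks are bin edges; hist() also returns $mids, the bin midpoints, which pair one-to-one with $counts. A minimal sketch of a plain barplot built from those pieces:)
# assumes hdata came from hdata <- hist(...) as above
barplot(hdata$counts, names.arg = round(hdata$mids, 2),
        xlab = "Value", ylab = "Frequency")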
You could use the gap.barplot() function from the plotrix package.
# install.packages('plotrix', dependencies = TRUE)
require(plotrix)
example(gap.barplot)
or
twogrp <- c(rnorm(10) + 4, rnorm(10) + 20)
gap.barplot(twogrp, gap = c(8, 16), xlab = "Index", ytics = c(3, 6, 17, 20),
            ylab = "Group values", main = "Barplot with gap")
This will give you a barplot with the y axis broken between 8 and 16.
update 2012-05-09 19:15:42 PDT
Would it be an option to use facet_wrap with "free" (or "free_y") scales? That way you would be able to compare the data side by side, but with different y scales.
Here is my quick example,
library('ggplot2')
source("http://www.ling.upenn.edu/~joseff/rstudy/data/coins.R")
coins$foo <- ifelse(coins$Mass.g >= 10, "high", "low")
m <- ggplot(coins, aes(x = Mass.g))
m + geom_histogram(binwidth = 2) + facet_wrap(~ foo, scales = "free")
The above gives two side-by-side histogram panels, each with its own y scale.
This seems to work:
gap.barplot(hdata$counts, gap = c(4000, 175000), xlab = "Counts",
            ytics = c(0, 3500, 175000), ylab = "Frequency",
            main = "Barplot with gap", xtics = hdata$counts)
Here are the first 20 rows of my dataframe:
x y z
1 0.50 0.50 48530.98
2 0.50 0.51 49029.34
3 0.50 0.52 49576.12
4 0.50 0.53 50161.22
5 0.50 0.54 50752.05
6 0.50 0.55 51354.43
7 0.50 0.56 51965.09
8 0.50 0.57 38756.51
9 0.50 0.58 39262.34
10 0.50 0.59 39783.68
11 0.51 0.60 41052.09
12 0.51 0.61 41447.51
13 0.51 0.62 26972.85
14 0.51 0.63 27134.74
15 0.51 0.64 27297.85
16 0.51 0.65 27462.82
17 0.51 0.66 27632.45
18 0.51 0.67 27806.77
19 0.51 0.68 27988.12
20 0.51 0.69 25514.42
I need to create a 3D surface plot to view it.
Ideally it would be one that I can rotate to view from all angles.
Thanks.
You can use plotly to create a 3D surface plot; plotly output is interactive, so you can drag to rotate the surface and view it from any angle. Use xtabs() to turn your data into a suitable matrix:
library(plotly)
plot_ly(z = ~xtabs(z ~ x + y, data = df)) %>% add_surface()
Sample data
df <- read.table(text =
" x y z
1 0.50 0.50 48530.98
2 0.50 0.51 49029.34
3 0.50 0.52 49576.12
4 0.50 0.53 50161.22
5 0.50 0.54 50752.05
6 0.50 0.55 51354.43
7 0.50 0.56 51965.09
8 0.50 0.57 38756.51
9 0.50 0.58 39262.34
10 0.50 0.59 39783.68
11 0.51 0.60 41052.09
12 0.51 0.61 41447.51
13 0.51 0.62 26972.85
14 0.51 0.63 27134.74
15 0.51 0.64 27297.85
16 0.51 0.65 27462.82
17 0.51 0.66 27632.45
18 0.51 0.67 27806.77
19 0.51 0.68 27988.12
20 0.51 0.69 25514.42", header = T)
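One caveat worth knowing: xtabs() fills x/y combinations that don't occur in the data with 0, which can punch artificial valleys into the surface. If that matters for your data, you can blank those cells out first — a minimal sketch:
m <- xtabs(z ~ x + y, data = df)
m[m == 0] <- NA   # treat absent x/y combinations as missing, not zero
plot_ly(z = ~m) %>% add_surface()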
Suppose I have a dataframe as follows:
df <- data.frame(
  alpha = 0:20,
  beta = 30:50,
  gamma = 100:120
)
I have a custom function that makes new columns. (Note, my actual function is a lot more complex and can't be vectorized without a custom function, so please ignore the substance of the transformation here.) For example:
newfun <- function(var = NULL) {
  newname <- paste0(var, "NEW")
  df[[newname]] <- df[[var]] / 100
  return(df)
}
I want to apply this over many columns of the dataset repeatedly and have the dataset "build up." This happens just fine when I do the following:
df <- newfun("alpha")
df <- newfun("beta")
df <- newfun("gamma")
Obviously this is redundant and a case for map. But when I do the following I get back a list of dataframes, which is not what I want:
library(purrr)

df <- data.frame(
  alpha = 0:20,
  beta = 30:50,
  gamma = 100:120
)

out <- c("alpha", "beta", "gamma") %>%
  map(function(x) newfun(x))
How can I iterate over a vector of column names AND see the changes repeatedly applied to the same dataframe?
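(For what it's worth, the behavior I'm after — threading one accumulating data frame through a sequence of calls — is what purrr::reduce() does. A minimal sketch, with newfun_df as a hypothetical variant of newfun that takes the data frame explicitly:)
library(purrr)

# hypothetical variant of newfun that takes the data frame as an
# argument, so each call receives the result of the previous one
newfun_df <- function(d, var) {
  d[[paste0(var, "NEW")]] <- d[[var]] / 100
  d
}

df <- reduce(c("alpha", "beta", "gamma"), newfun_df, .init = df)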
Writing the function to reach outside of its scope to find some df is risky and will eventually bite you, especially when you see something like:
df[['a']] <- 2
# Error in df[["a"]] <- 2 : object of type 'closure' is not subsettable
You will get this error when it doesn't find your variable named df, and instead finds the base function named df. Two morals from this discovery:
While I admit to using df myself, it's generally bad practice to name variables the same as R functions (especially from base); and
Scope-breach is sloppy and renders a workflow unreproducible, making problems and changes difficult to troubleshoot.
To remedy this, and since your function relies on knowing what the old/new variable names are or should be, I think pmap or base R Map may work better. Further, I suggest that you name the new variables outside of the function, making it "data-only".
cols <- c("alpha", "beta", "gamma")
dat <- data.frame(alpha = 0:20, beta = 30:50, gamma = 100:120)  # same data, renamed to avoid base::df
myfunc <- function(x) x / 100
setNames(lapply(dat[, cols], myfunc), paste0("new", cols))
# $newalpha
# [1] 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 0.10 0.11 0.12 0.13 0.14 0.15 0.16 0.17
# [19] 0.18 0.19 0.20
# $newbeta
# [1] 0.30 0.31 0.32 0.33 0.34 0.35 0.36 0.37 0.38 0.39 0.40 0.41 0.42 0.43 0.44 0.45 0.46 0.47
# [19] 0.48 0.49 0.50
# $newgamma
# [1] 1.00 1.01 1.02 1.03 1.04 1.05 1.06 1.07 1.08 1.09 1.10 1.11 1.12 1.13 1.14 1.15 1.16 1.17
# [19] 1.18 1.19 1.20
From here, we just need to column-bind (cbind) it:
cbind(dat, setNames(lapply(dat[,cols], myfunc), paste0("new", cols)))
# alpha beta gamma newalpha newbeta newgamma
# 1 0 30 100 0.00 0.30 1.00
# 2 1 31 101 0.01 0.31 1.01
# 3 2 32 102 0.02 0.32 1.02
# 4 3 33 103 0.03 0.33 1.03
# 5 4 34 104 0.04 0.34 1.04
# ...
Special note: if you plan on doing this iteratively (repeatedly), it is generally bad to grow a frame by repeatedly adding rows, and I suspect (without proof at the moment) that repeatedly adding columns scales just as poorly. For that reason, if you do this a lot, consider using do.call(cbind, c(list(dat), ...)) where ... is the list of things to add. This results in a single call to cbind and therefore only a single memory-copy of the original dat. (Contrast that with iteratively calling the *bind functions, which make a complete copy with each pass, scaling poorly.)
additions <- lapply(1:3, function(i) setNames(lapply(dat[,cols], myfunc), paste0("new", i, cols)))
str(additions)
# List of 3
# $ :List of 3
# ..$ new1alpha: num [1:21] 0 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 ...
# ..$ new1beta : num [1:21] 0.3 0.31 0.32 0.33 0.34 0.35 0.36 0.37 0.38 0.39 ...
# ..$ new1gamma: num [1:21] 1 1.01 1.02 1.03 1.04 1.05 1.06 1.07 1.08 1.09 ...
# $ :List of 3
# ..$ new2alpha: num [1:21] 0 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 ...
# ..$ new2beta : num [1:21] 0.3 0.31 0.32 0.33 0.34 0.35 0.36 0.37 0.38 0.39 ...
# ..$ new2gamma: num [1:21] 1 1.01 1.02 1.03 1.04 1.05 1.06 1.07 1.08 1.09 ...
# $ :List of 3
# ..$ new3alpha: num [1:21] 0 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 ...
# ..$ new3beta : num [1:21] 0.3 0.31 0.32 0.33 0.34 0.35 0.36 0.37 0.38 0.39 ...
# ..$ new3gamma: num [1:21] 1 1.01 1.02 1.03 1.04 1.05 1.06 1.07 1.08 1.09 ...
do.call(cbind, c(list(dat), additions))
# alpha beta gamma new1alpha new1beta new1gamma new2alpha new2beta new2gamma new3alpha new3beta new3gamma
# 1 0 30 100 0.00 0.30 1.00 0.00 0.30 1.00 0.00 0.30 1.00
# 2 1 31 101 0.01 0.31 1.01 0.01 0.31 1.01 0.01 0.31 1.01
# 3 2 32 102 0.02 0.32 1.02 0.02 0.32 1.02 0.02 0.32 1.02
# 4 3 33 103 0.03 0.33 1.03 0.03 0.33 1.03 0.03 0.33 1.03
# 5 4 34 104 0.04 0.34 1.04 0.04 0.34 1.04 0.04 0.34 1.04
# 6 5 35 105 0.05 0.35 1.05 0.05 0.35 1.05 0.05 0.35 1.05
# ...
An alternative approach is to change your function to only return a vector:
newfun2 <- function(var = NULL) {
  df[[var]] / 100
}
newfun2('alpha')
# [1] 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 0.10 0.11 0.12 0.13
#[15] 0.14 0.15 0.16 0.17 0.18 0.19 0.20
Then, using base R, you can use lapply() to loop through your vector of column names:
cols <- c("alpha", "beta", "gamma")
df[, paste0(cols, 'NEW')] <- lapply(cols, newfun2)
#or
#df[, paste0(cols, 'NEW')] <- purrr::map(cols, newfun2)
df
alpha beta gamma alphaNEW betaNEW gammaNEW
1 0 30 100 0.00 0.30 1.00
2 1 31 101 0.01 0.31 1.01
3 2 32 102 0.02 0.32 1.02
4 3 33 103 0.03 0.33 1.03
5 4 34 104 0.04 0.34 1.04
6 5 35 105 0.05 0.35 1.05
7 6 36 106 0.06 0.36 1.06
8 7 37 107 0.07 0.37 1.07
9 8 38 108 0.08 0.38 1.08
10 9 39 109 0.09 0.39 1.09
11 10 40 110 0.10 0.40 1.10
12 11 41 111 0.11 0.41 1.11
13 12 42 112 0.12 0.42 1.12
14 13 43 113 0.13 0.43 1.13
15 14 44 114 0.14 0.44 1.14
16 15 45 115 0.15 0.45 1.15
17 16 46 116 0.16 0.46 1.16
18 17 47 117 0.17 0.47 1.17
19 18 48 118 0.18 0.48 1.18
20 19 49 119 0.19 0.49 1.19
21 20 50 120 0.20 0.50 1.20
Based on the way you wrote your function, a for loop that repeatedly assigns the result of newfun back to df works pretty well.
vars <- names(df)
for (i in vars) {
  df <- newfun(i)
}
df
# alpha beta gamma alphaNEW betaNEW gammaNEW
# 1 0 30 100 0.00 0.30 1.00
# 2 1 31 101 0.01 0.31 1.01
# 3 2 32 102 0.02 0.32 1.02
# 4 3 33 103 0.03 0.33 1.03
# 5 4 34 104 0.04 0.34 1.04
# 6 5 35 105 0.05 0.35 1.05
# 7 6 36 106 0.06 0.36 1.06
# 8 7 37 107 0.07 0.37 1.07
# 9 8 38 108 0.08 0.38 1.08
# 10 9 39 109 0.09 0.39 1.09
# 11 10 40 110 0.10 0.40 1.10
# 12 11 41 111 0.11 0.41 1.11
# 13 12 42 112 0.12 0.42 1.12
# 14 13 43 113 0.13 0.43 1.13
# 15 14 44 114 0.14 0.44 1.14
# 16 15 45 115 0.15 0.45 1.15
# 17 16 46 116 0.16 0.46 1.16
# 18 17 47 117 0.17 0.47 1.17
# 19 18 48 118 0.18 0.48 1.18
# 20 19 49 119 0.19 0.49 1.19
# 21 20 50 120 0.20 0.50 1.20
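As a further aside, if you are open to dplyr (version 1.0 or later), across() with its .names argument expresses the same transformation in one call — a sketch assuming the same df and column names as above:
library(dplyr)

cols <- c("alpha", "beta", "gamma")
df <- df %>%
  mutate(across(all_of(cols), ~ .x / 100, .names = "{.col}NEW"))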
I am having some problems sorting my dataset into bins based on the numeric value of the data. I tried doing it with the shingle() function from the lattice package, which seems to split it accurately.
However, I can't seem to extract the desired output, namely how the data are divided into the predefined bins; I only seem able to print it.
bin_interval = matrix(c(0.38,0.42,0.46,0.50,0.54,0.58,0.62,0.66,0.70,0.74,0.78,0.82,0.86,0.90,0.94,0.98,
0.40,0.44,0.48,0.52,0.56,0.60,0.64,0.68,0.72,0.76,0.80,0.84,0.88,0.92,0.96,1.0),
ncol = 2, nrow = 16)
bin_1 = shingle(data_1,intervals = bin_interval)
How do I extract the intervals output by the shingle() function, rather than only printing them?
The intervals being this output:
Intervals:
min max count
1 0.38 0.40 0
2 0.42 0.44 6
3 0.46 0.48 46
4 0.50 0.52 251
5 0.54 0.56 697
6 0.58 0.60 1062
7 0.62 0.64 1215
8 0.66 0.68 1227
9 0.70 0.72 1231
10 0.74 0.76 1293
11 0.78 0.80 1330
12 0.82 0.84 1739
13 0.86 0.88 2454
14 0.90 0.92 3048
15 0.94 0.96 8936
16 0.98 1.00 71446
That is, as a variable that can be fed to another function.
The shingle() function stores these values as attributes, which you can inspect with attributes().
The levels are specifically given by attr(bin_1, "levels").
So:
set.seed(1337)
data_1 = runif(100)
bin_interval = matrix(c(0.38,0.42,0.46,0.50,0.54,0.58,0.62,0.66,0.70,0.74,0.78,0.82,0.86,0.90,0.94,0.98,
0.40,0.44,0.48,0.52,0.56,0.60,0.64,0.68,0.72,0.76,0.80,0.84,0.88,0.92,0.96,1.0),
ncol = 2, nrow = 16)
bin_1 = shingle(data_1,intervals = bin_interval)
attr(bin_1,"levels")
This gives:
[,1] [,2]
[1,] 0.38 0.40
[2,] 0.42 0.44
[3,] 0.46 0.48
[4,] 0.50 0.52
[5,] 0.54 0.56
[6,] 0.58 0.60
[7,] 0.62 0.64
[8,] 0.66 0.68
[9,] 0.70 0.72
[10,] 0.74 0.76
[11,] 0.78 0.80
[12,] 0.82 0.84
[13,] 0.86 0.88
[14,] 0.90 0.92
[15,] 0.94 0.96
[16,] 0.98 1.00
Edit
The count information for each interval is only computed within the print.shingle method. Thus, you would need to run the following code:
count.shingle <- function(x) {
  l <- levels(x)
  n <- nlevels(x)
  int <- data.frame(min = numeric(n), max = numeric(n),
                    count = numeric(n))
  for (i in 1:n) {
    int$min[i]   <- l[[i]][1]
    int$max[i]   <- l[[i]][2]
    int$count[i] <- length(x[x >= l[[i]][1] & x <= l[[i]][2]])
  }
  int
}
a = count.shingle(bin_1)
This gives:
> a
min max count
1 0.38 0.40 0
2 0.42 0.44 1
3 0.46 0.48 3
4 0.50 0.52 1
5 0.54 0.56 2
6 0.58 0.60 2
7 0.62 0.64 2
8 0.66 0.68 4
9 0.70 0.72 1
10 0.74 0.76 3
11 0.78 0.80 2
12 0.82 0.84 2
13 0.86 0.88 5
14 0.90 0.92 1
15 0.94 0.96 1
16 0.98 1.00 2
where a$min is the lower bound, a$max is the upper bound, and a$count is the number of observations in each bin.
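As an aside, if all you need are the per-interval counts (and not the shingle object itself), you can compute them directly from bin_interval in base R — a minimal sketch using the same inclusive-bounds rule as count.shingle():
counts <- apply(bin_interval, 1, function(iv)
  sum(data_1 >= iv[1] & data_1 <= iv[2]))
data.frame(min = bin_interval[, 1], max = bin_interval[, 2], count = counts)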
The name of this question does not do it justice; this is best explained by a numerical example. Let's say I have the following portfolio data, called data.
> data
Stdev AvgReturn
1 1.92 0.35
2 1.53 0.34
3 1.39 0.31
4 1.74 0.31
5 1.16 0.30
6 1.27 0.29
7 1.78 0.28
8 1.59 0.27
9 1.05 0.27
10 1.17 0.26
11 1.62 0.25
12 1.33 0.25
13 0.96 0.24
14 1.47 0.24
15 1.09 0.24
16 1.20 0.24
17 1.49 0.23
18 1.01 0.23
19 0.88 0.22
20 1.21 0.22
21 1.37 0.22
22 1.09 0.22
23 0.95 0.21
24 0.81 0.21
I have already sorted the data data.frame by AvgReturn to make this easier (I believe). My goal is essentially to eliminate all the points that do not make sense to choose, i.e., I would not want a portfolio where I choose a lower AvgReturn but receive a higher Stdev (assuming, for now, that Stdev is an appropriate measure of risk).
Essentially, does anyone know of an efficient (in the code sense) way to select the "rational" portfolio choices? I have manually created a third column in this data frame to show which portfolio choices should be kept. I would want to remove portfolio 4 because I would never choose it: I can choose portfolio 3 and receive the same return with a lower Stdev. Similarly, I would never choose 8 because I can choose 5 with a higher return and a lower Stdev.
> res
Stdev AvgReturn Keep
1 1.92 0.35 TRUE
2 1.53 0.34 TRUE
3 1.39 0.31 TRUE
4 1.74 0.31 FALSE
5 1.16 0.30 TRUE
6 1.27 0.29 FALSE
7 1.78 0.28 FALSE
8 1.59 0.27 FALSE
9 1.05 0.27 TRUE
10 1.17 0.26 FALSE
11 1.62 0.25 FALSE
12 1.33 0.25 FALSE
13 0.96 0.24 TRUE
14 1.47 0.24 FALSE
15 1.09 0.24 FALSE
16 1.20 0.24 FALSE
17 1.49 0.23 FALSE
18 1.01 0.23 FALSE
19 0.88 0.22 TRUE
20 1.21 0.22 FALSE
21 1.37 0.22 FALSE
22 1.09 0.22 FALSE
23 0.95 0.21 FALSE
24 0.81 0.21 TRUE
The only way I can think of solving this issue is by looping through and checking each condition. This, however, will be relatively inefficient in R, my preferred language for this solution. I am having difficulty thinking of a vectorized solution. Any help is appreciated!
EDIT
Here I believe is a solution:
domstrat <- function(data) {
  # assumes data is sorted by AvgReturn (descending) and column 1 is Stdev:
  # cummin tracks the lowest Stdev seen so far; diff/sign flags rows that
  # set a new minimum (kept) versus rows that don't (dropped)
  keep <- c(-1, sign(diff(cummin(data[[1]]))))
  data[keep != 0, ]
}

domstrat(data)
Stdev AvgReturn
1 1.92 0.35
2 1.53 0.34
3 1.39 0.31
5 1.16 0.30
9 1.05 0.27
13 0.96 0.24
19 0.88 0.22
24 0.81 0.21
This uses cummax to identify the qualifying points, by testing the data (sorted by Stdev) against the cumulative maximum of AvgReturn:
> data <- data[order(data$Stdev),]
> data[ which(data$AvgReturn == cummax(data$AvgReturn)) , ]
Stdev AvgReturn
24 0.81 0.21
19 0.88 0.22
13 0.96 0.24
9 1.05 0.27
5 1.16 0.30
3 1.39 0.31
2 1.53 0.34
1 1.92 0.35
> plot(data)
> points( data[ which(data$AvgReturn == cummax(data$AvgReturn)) , ] , col="green")
It's not actually the convex hull but what might be called the "monotonically increasing hull".
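If you want to reuse this, the same cummax trick wraps naturally into a helper (a sketch; efficient_set is just a name I'm introducing):
efficient_set <- function(d) {
  d <- d[order(d$Stdev), ]                 # cheapest (lowest-risk) portfolios first
  d[d$AvgReturn == cummax(d$AvgReturn), ]  # keep rows that raise the best return seen so far
}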
You can define a custom R function which contains some logic to decide whether or not to keep a certain portfolio depending on the standard deviation and the average return:
portfolioKeep <- function(x) {
  # x[1] contains the Stdev for the input row
  # x[2] contains the AvgReturn for the input row
  # make your decision based on these inputs here and
  # return either TRUE or FALSE; as a placeholder rule
  # (not the dominance test itself), keep portfolios whose
  # return per unit of risk clears a threshold:
  x[2] / x[1] > 0.2
}
Next we can use an apply function on your input data frame to come up with the Keep column you want:
# your 'input' data frame
input.mat <- data.matrix(input)
# apply custom function to rows
keep <- apply(input.mat, 1, portfolioKeep)
# bind keep vector to input data frame
input <- cbind(input, keep)
The above code first converts the input data frame into a numeric matrix so that we can use the apply function on it. The apply call runs portfolioKeep on each row, returning either TRUE or FALSE. Finally, we bind the keep vector onto the original data frame for convenience.
And now you can do your reporting easily with the data frame input with which you started.