Adjust area from geom_area to a line from geom_line - r

I'm trying to make an hourly dispatch curve from generation and energy consumption data. The data have the property that a power balance (generation minus consumption) comes out nearly zero at every hour.
The generation data also include net interchange values, which are negative when the power system is exporting energy and positive when the system is importing energy to cover consumption.
So, for the plot built with geom_area and geom_line to be right, the black line (consumption) needs to sit exactly on top of the generation area, with no gap between the area and the black line. In my attempts I couldn't achieve that: as you can see, even though the energy balance is zero, there is a gap between 19:00 and 20:00. I don't know what is wrong. Does anyone have an idea how to do this?
Thanks in advance.
Data for the plot:
generation <- data.frame(
  'dayHour' = c('18/11/2018 18:00', '18/11/2018 19:00', '18/11/2018 20:00', '18/11/2018 21:00',
                '18/11/2018 18:00', '18/11/2018 19:00', '18/11/2018 20:00', '18/11/2018 21:00',
                '18/11/2018 18:00', '18/11/2018 19:00', '18/11/2018 20:00', '18/11/2018 21:00',
                '18/11/2018 18:00', '18/11/2018 19:00', '18/11/2018 20:00', '18/11/2018 21:00'),
  'power' = c(-1364.290, -433.110, 1132.39, 749.48, 463.75, 467.8, 469.35, 436.51,
              2025.5, 2133.07, 2306.85, 2304.91, 211.52, 213.16, 214.33, 214.59),
  'label' = c('net interchange', 'net interchange', 'net interchange', 'net interchange',
              'gas', 'gas', 'gas', 'gas',
              'hydro', 'hydro', 'hydro', 'hydro',
              'biomass', 'biomass', 'biomass', 'biomass'))

generation$label <- factor(generation$label, levels = c('net interchange', 'gas', 'hydro', 'biomass'))

net.load <- data.frame(
  'dayHour' = c('18/11/2018 18:00', '18/11/2018 19:00', '18/11/2018 20:00', '18/11/2018 21:00'),
  'power' = c(1336.48, 2380.91, 4122.91, 3705.49),
  'label' = c('net load', 'net load', 'net load', 'net load'))

generation$dayHour <- as.POSIXct(strptime(generation$dayHour, format = '%d/%m/%Y %H:%M'))
net.load$dayHour <- as.POSIXct(strptime(net.load$dayHour, format = '%d/%m/%Y %H:%M'))
Power balance
library(dplyr)
pb <-
  filter(generation, label == "biomass")$power +
  filter(generation, label == "hydro")$power +
  filter(generation, label == "gas")$power +
  filter(generation, label == "net interchange")$power -
  net.load$power
summary(pb)
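For reference, working the sums by hand (not output from the code above) gives values within 0.01 of zero at every hour, so the data themselves balance and the gap is not caused by the numbers:
# 18:00: -1364.29 + 463.75 + 2025.50 + 211.52 - 1336.48 = 0.00
# 19:00:  -433.11 + 467.80 + 2133.07 + 213.16 - 2380.91 = 0.01
# 20:00:  1132.39 + 469.35 + 2306.85 + 214.33 - 4122.91 = 0.01
# 21:00:   749.48 + 436.51 + 2304.91 + 214.59 - 3705.49 = 0.00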
Dispatch curve
ggplot() +
  geom_area(data = generation,
            aes(y = power, x = dayHour, fill = label)) +
  geom_line(data = net.load,
            aes(y = power, x = dayHour, colour = label),
            size = 1.2, colour = "black") +
  labs(fill = "generation", colour = 'net load')

It looks like position_stack is getting confused when the interpolation crosses the x-axis: the stacked position of each band is only computed at the observed data points, and the net interchange series changes sign between 19:00 and 20:00, so its band jumps from the negative part of the stack to the top of it, and the straight-line interpolation between those two hours opens up a gap between the bands.
To fix it, you can interpolate manually before plotting (e.g. with approx), so that the jump happens over a negligibly small interval:
library(tidyverse)

generation <- data.frame(
  dayHour = structure(c(1542585600, 1542589200, 1542592800, 1542596400,
                        1542585600, 1542589200, 1542592800, 1542596400,
                        1542585600, 1542589200, 1542592800, 1542596400,
                        1542585600, 1542589200, 1542592800, 1542596400),
                      class = c("POSIXct", "POSIXt"), tzone = ""),
  power = c(-1364.29, -433.11, 1132.39, 749.48, 463.75, 467.8, 469.35, 436.51,
            2025.5, 2133.07, 2306.85, 2304.91, 211.52, 213.16, 214.33, 214.59),
  label = structure(c(1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L),
                    .Label = c("net interchange", "gas", "hydro", "biomass"),
                    class = "factor")
)

generation_interpolated <- generation %>%
  group_by(label) %>%
  summarise(data = list(as_tibble(approx(dayHour, power, n = 501)))) %>%
  unnest() %>%
  mutate(x = as.POSIXct(x, origin = '1970-01-01', tz = 'UTC'))

net_power_interpolated <- generation_interpolated %>%
  group_by(x) %>%
  summarise(y = sum(y))

ggplot(generation_interpolated, aes(x, y)) +
  geom_area(aes(fill = label)) +
  geom_line(data = net_power_interpolated)
To see how approx works, a simpler, ungrouped example:
df <- data.frame(x = c(0, 5, 10), y = c(0, 20, 10))
interpolated <- approx(df$x, df$y, n = 11)
str(interpolated)
#> List of 2
#> $ x: int [1:11] 0 1 2 3 4 5 6 7 8 9 ...
#> $ y: num [1:11] 0 4 8 12 16 20 18 16 14 12 ...
ggplot(as.data.frame(interpolated), aes(x, y)) +
  geom_line() +
  geom_point() +
  geom_point(data = df, color = 'dodgerblue', size = 4)

Related

ggplot2::coord_cartesian on facets

coord_cartesian doesn't allow one to set per-facet coordinates, and other range-limiting approaches tend to produce a flat line at the clipped extreme. Since we have widely varying y-ranges, we can't set the limits on all facets identically; limiting the data before plotting is not as friendly with geom_line/geom_path (https://stackoverflow.com/a/27319786/3358272), as it takes a lot more effort to interpolate the data out to the edge and then insert NAs in order to break up the line (a crude sketch of that NA-breaking idea follows). Ultimately, the only way to get the desired result is to do exactly this, which can be a bit onerous with other data.
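For illustration only, here is a minimal sketch (mine, not from the linked answer) of that clip-the-data idea, without the edge interpolation: values outside a y-range are replaced with NA so geom_line breaks instead of drawing a flat segment along the limit. It uses the test_data object defined further down and a single hypothetical limit applied to every panel.
library(dplyr)
library(ggplot2)
ylim_panel <- c(1, 6)   # hypothetical y-limits, just to show the idea
clipped <- test_data %>%
  mutate_at(vars(Odds, Upper, Lower),
            list(~ ifelse(. < ylim_panel[1] | . > ylim_panel[2], NA, .)))
ggplot(clipped, aes(Nsubjects, Odds, color = EffectSize)) +
  facet_wrap(DataType ~ ExpType, labeller = label_both, scales = "free") +
  geom_line(size = 2)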
One workaround is suggested in https://gist.github.com/burchill/d780d3e8663ad15bcbda7869394a348a, where it starts with
test_data %>%
  ggplot(aes(x = Nsubjects, y = Odds, color = EffectSize)) +
  facet_wrap(DataType ~ ExpType, labeller = label_both, scales = "free") +
  geom_line(size = 2) +
  geom_ribbon(aes(ymax = Upper, ymin = Lower, fill = EffectSize, color = NULL), alpha = 0.2)
and in previous versions of ggplot2, that gist defined coord_panel_ranges and was able to control coordinates per-facet. The two right facets should narrow down to a 1-6(ish) y-axis so that the exploding confidence interval goes off-screen and allows the facet to focus primarily on the "normal range" of data. (Note: the test_data and this vis are not mine; they're taken from the gist. While my needs are somewhat similar, I thought it better to stay within the confines of the gist's data and code.)
Unfortunately, this now fails for me with ggplot2-3.3.0. Initial errors related to the recent loss of ggplot2::scale_range, which I tried to mitigate with this adaptation of burchill's code (that uses other ggplot2::: internal functions):
UniquePanelCoords <- ggplot2::ggproto(
  "UniquePanelCoords", ggplot2::CoordCartesian,

  num_of_panels = 1,
  panel_counter = 1,
  panel_ranges = NULL,

  setup_layout = function(self, layout, params) {
    self$num_of_panels <- length(unique(layout$PANEL))
    self$panel_counter <- 1
    layout
  },

  setup_panel_params = function(self, scale_x, scale_y, params = list()) {
    if (!is.null(self$panel_ranges) & length(self$panel_ranges) != self$num_of_panels)
      stop("Number of panel ranges does not equal the number supplied")

    train_cartesian <- function(scale, limits, name, given_range = NULL) {
      if (is.null(given_range)) {
        expansion <- ggplot2:::default_expansion(scale, expand = self$expand)
        range <- ggplot2:::expand_limits_scale(scale, expansion,
                                               coord_limits = self$limits[[name]])
      } else {
        range <- given_range
      }
      out <- scale$break_info(range)
      out$arrange <- scale$axis_order()
      names(out) <- paste(name, names(out), sep = ".")
      out
    }

    cur_panel_ranges <- self$panel_ranges[[self$panel_counter]]
    if (self$panel_counter < self$num_of_panels)
      self$panel_counter <- self$panel_counter + 1
    else
      self$panel_counter <- 1

    c(train_cartesian(scale_x, self$limits$x, "x", cur_panel_ranges$x),
      train_cartesian(scale_y, self$limits$y, "y", cur_panel_ranges$y))
  }
)

coord_panel_ranges <- function(panel_ranges, expand = TRUE, default = FALSE, clip = "on") {
  ggplot2::ggproto(NULL, UniquePanelCoords, panel_ranges = panel_ranges,
                   expand = expand, default = default, clip = clip)
}
but this is still failing with
test_data %>%
  ggplot(aes(x = Nsubjects, y = Odds, color = EffectSize)) +
  facet_wrap(DataType ~ ExpType, labeller = label_both, scales = "free") +
  geom_line(size = 2) +
  geom_ribbon(aes(ymax = Upper, ymin = Lower, fill = EffectSize, color = NULL), alpha = 0.2) +
  coord_panel_ranges(panel_ranges = list(
    list(x = c(8, 64), y = c(1, 4)),  # Panel 1
    list(x = c(8, 64), y = c(1, 6)),  # Panel 2
    list(NULL),                       # Panel 3, an empty list falls back on the default values
    list(x = c(8, 64), y = c(1, 7))   # Panel 4
  ))
# Error in panel_params$x$break_positions_minor() :
#   attempt to apply non-function
I'm not very familiar with extending ggplot2, and I suspect there is something I'm missing from the ggproto. Here's what the return value from the proto looks like:
str(c(train_cartesian(scale_x, self$limits$x, "x", cur_panel_ranges$x),
train_cartesian(scale_y, self$limits$y, "y", cur_panel_ranges$y)))
# List of 14
# $ x.range : num [1:2] 8 64
# $ x.labels : chr [1:3] "20" "40" "60"
# $ x.major : num [1:3] 0.214 0.571 0.929
# $ x.minor : num [1:6] 0.0357 0.2143 0.3929 0.5714 0.75 ...
# $ x.major_source: num [1:3] 20 40 60
# $ x.minor_source: num [1:6] 10 20 30 40 50 60
# $ x.arrange : chr [1:2] "secondary" "primary"
# $ y.range : num [1:2] 1 4
# $ y.labels : chr [1:4] "1" "2" "3" "4"
# $ y.major : num [1:4] 0 0.333 0.667 1
# $ y.minor : num [1:7] 0 0.167 0.333 0.5 0.667 ...
# $ y.major_source: num [1:4] 1 2 3 4
# $ y.minor_source: num [1:7] 1 1.5 2 2.5 3 3.5 4
# $ y.arrange : chr [1:2] "primary" "secondary"
Do I need to have an x element that's a list with at least a break_positions_minor function, or is there something else that needs to be inherited in order to ensure panel_params$x$break_positions_minor exists or a reasonable default is used?
Data:
test_data <- structure(list(DataType = structure(c(1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L,
2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L), .Label = c("A", "B"), class = "factor"),
ExpType = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L,
2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L), .Label = c("X", "Y"), class = "factor"),
EffectSize = structure(c(1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L,
1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 2L, 2L, 2L,
2L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L), .Label = c("15", "35"
), class = "factor"), Nsubjects = c(8, 16, 32, 64, 8, 16,
32, 64, 8, 16, 32, 64, 8, 16, 32, 64, 8, 16, 32, 64, 8, 16,
32, 64, 8, 16, 32, 64, 8, 16, 32, 64), Odds = c(1.06248116259846,
1.09482076720863, 1.23086993413208, 1.76749340505612, 1.06641831731573,
1.12616954196688, 1.48351814320987, 3.50755080416964, 1.11601399761081,
1.18352602009495, 1.45705466646283, 2.53384744810515, 1.13847061762186,
1.24983742407086, 1.97075900741022, 6.01497152563726, 1.02798821372378,
1.06297006279249, 1.19432835697453, 1.7320754674107, 1.02813271730924,
1.09355953747203, 1.44830680332583, 3.4732692664923, 1.06295915758305,
1.12008443626365, 1.3887632112682, 2.46321037334, 1.06722652223114,
1.1874936754725, 1.89870184372054, 5.943747409114), Upper = c(1.72895843644471,
2.09878774769559, 2.59771794965346, 5.08513435549015, 1.72999898901071,
1.8702196882561, 3.85385388850167, 5.92564404180303, 1.99113042576373,
2.61074135841984, 3.45852331828636, 4.83900142207583, 1.57897154221764,
1.8957409107653, 10, 75, 2.3763918424135, 2.50181951057562,
3.45037180395673, 3.99515276392065, 2.04584535265976, 2.39317394040066,
2.832526733659, 5.38414183471915, 1.40569501856836, 2.6778044191832,
2.98023068052396, 4.75934650422069, 1.54116883311054, 2.50647989271592,
3.48517589981551, 100), Lower = c(0.396003888752214, 0.0908537867216577,
-0.135978081389309, -1.55014754537791, 0.40283764562075,
0.382119395677663, -0.88681760208193, 1.08945756653624, 0.240897569457892,
-0.243689318229938, -0.544413985360706, 0.228693474134466,
0.69796969302609, 0.603933937376415, 0.183548809738402, 3.57236968943798,
-0.320415414965949, -0.375879384990643, -1.06171509000767,
-0.531001829099242, 0.010420081958713, -0.206054865456611,
0.0640868729926525, 1.56239669826544, 0.720223296597732,
-0.437635546655903, -0.202704257987574, 0.167074242459314,
0.593284211351745, -0.131492541770921, 0.312227787625573,
3.76692741957876)), .Names = c("DataType", "ExpType", "EffectSize",
"Nsubjects", "Odds", "Upper", "Lower"), class = c("tbl_df", "tbl",
"data.frame"), row.names = c(NA, -32L))
I modified the function train_cartesian to match the output format of view_scales_from_scale (defined here), which seems to work:
train_cartesian <- function(scale, limits, name, given_range = NULL) {
  if (is.null(given_range)) {
    expansion <- ggplot2:::default_expansion(scale, expand = self$expand)
    range <- ggplot2:::expand_limits_scale(scale, expansion,
                                           coord_limits = self$limits[[name]])
  } else {
    range <- given_range
  }

  out <- list(
    ggplot2:::view_scale_primary(scale, limits, range),
    sec = ggplot2:::view_scale_secondary(scale, limits, range),
    arrange = scale$axis_order(),
    range = range
  )
  names(out) <- c(name, paste0(name, ".", names(out)[-1]))
  out
}
p <- test_data %>%
  ggplot(aes(x = Nsubjects, y = Odds, color = EffectSize)) +
  facet_wrap(DataType ~ ExpType, labeller = label_both, scales = "free") +
  geom_line(size = 2) +
  geom_ribbon(aes(ymax = Upper, ymin = Lower, fill = EffectSize, color = NULL), alpha = 0.2)

p +
  coord_panel_ranges(panel_ranges = list(
    list(x = c(8, 64), y = c(1, 4)),  # Panel 1
    list(x = c(8, 64), y = c(1, 6)),  # Panel 2
    list(NULL),                       # Panel 3, an empty list falls back on the default values
    list(x = c(8, 64), y = c(1, 7))   # Panel 4
  ))
Original answer
I've cheated my way out of a similar problem before.
# alternate version of plot with data truncated to desired range for each facet
p.alt <- p %+% {test_data %>%
    mutate(facet = as.integer(interaction(DataType, ExpType, lex.order = TRUE))) %>%
    left_join(data.frame(facet = 1:4,
                         ymin = c(1, 1, -Inf, 1),  # change values here to enforce
                         ymax = c(4, 6, Inf, 7)),  # different axis limits
              by = "facet") %>%
    mutate_at(vars(Odds, Upper, Lower), list(~ ifelse(. < ymin, ymin, .))) %>%
    mutate_at(vars(Odds, Upper, Lower), list(~ ifelse(. > ymax, ymax, .)))}

# copy alternate version's panel parameters to original plot & plot the result
p1 <- ggplot_build(p)
p1.alt <- ggplot_build(p.alt)
p1$layout$panel_params <- p1.alt$layout$panel_params
p2 <- ggplot_gtable(p1)
grid::grid.draw(p2)
Many thanks go to Z.Lin for starting the fix to my question, and that answer certainly helped me get past the errors and learn a more appropriate way of working with ggproto objects.
This answer is posted as more of a flexible method of fixing the underlying problem of per-panel limits within a faceted plot. The major issue I had with my first batch of code was that it relies on the ordering of the facets, which in some of my other (private) use-cases is not always known (well, not controlled) a priori. Because of this, I wanted an unambiguous determination of per-panel limits.
I've changed the function name (and the args) to reflect two points: (1) this is intended to mimic/replace coord_cartesian, and (2) I don't know that it will translate to other coord_* functions without adjustment. Comments/patches are welcome at my gist.
Up front, a perfect duplication of Z.Lin's results can be had with:
p <- test_data %>%
  ggplot(aes(x = Nsubjects, y = Odds, color = EffectSize)) +
  facet_wrap(DataType ~ ExpType, labeller = label_both, scales = "free") +
  geom_line(size = 2) +
  geom_ribbon(aes(ymax = Upper, ymin = Lower, fill = EffectSize, color = NULL), alpha = 0.2)

p + coord_cartesian_panels(
  panel_limits = tibble::tribble(
    ~DataType, ~ExpType, ~ymin, ~ymax
    , "A"    , "X"     ,     1,     4
    , "A"    , "Y"     ,     1,     6
    , "B"    , "Y"     ,     1,     7
  )
)
and gone is the ambiguity (that the original code introduced) of which panel is which argument in the list. Since it uses a data.frame to match (usually merge) with the layout of the plot, the order of rows does not matter.
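To make that matching concrete, here is a tiny illustration (the layout values are hypothetical, but of the shape ggplot2 builds) of why row order in panel_limits does not matter: the limits are attached to panels by merging on the shared faceting columns.
layout <- data.frame(PANEL = 1:4,
                     DataType = c("A", "A", "B", "B"),
                     ExpType  = c("X", "Y", "X", "Y"))
panel_limits <- tibble::tribble(
  ~DataType, ~ExpType, ~ymin, ~ymax
  , "B"    , "Y"     ,     1,     7   # rows given in a scrambled order...
  , "A"    , "X"     ,     1,     4
  , "A"    , "Y"     ,     1,     6
)
merge(layout, panel_limits, all.x = TRUE)
# ...still attaches the right ymin/ymax to each PANEL; the unmatched
# panel (B, X) gets NA, i.e. the default limits.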
Notes:
the panel_limits fields referenced are: xmin, xmax, ymin, and ymax, on top of whichever faceting variables are desired;
an NA in a particular field (or a missing field) means to use the previously-defined limit;
when all faceting-variables match (between panel_limits and the layout defined by facet_*), the limits are set on individual panels; this one-to-one mapping is the going-in assumption about this function;
when some (but not all) variables match, the limits are set on a subset of panels (e.g., on one axis of the panels, depending on the faceting method);
when no variables match and panel_limits is a single row, then set the limits for all panels indiscriminately; and
faceting rows in panel_limits that match nothing in layout are silently ignored.
Errors occur when:
any faceting variables in panel_limits do not exist in the layout (i.e., are not specified within facet_*); or
more than one row in panel_limits matches a particular panel.
As an extension, this also handles a subset of the faceting variables, so if we want to limit all facets by ExpType only, then
# set the limits on panels based on one faceting variable only
p + coord_cartesian_panels(
  panel_limits = tibble::tribble(
    ~ExpType, ~ymin, ~ymax
    , "X"   ,    NA,     4
    , "Y"   ,     1,     5
  )
) + labs(title = "panel_limits, one variable")

# set the limits on all panels
p + coord_cartesian_panels(
  panel_limits = tibble::tribble(
    ~ymin, ~ymax
    ,  NA,     5
  )
) + labs(title = "panel_limits, no variables")
(The last example seems silly, but if the facets/plots are being built programmatically and it is not guaranteed a priori that there are individual facets, then this will result in a reasonable default behavior, assuming that everything is otherwise unambiguous.)
A further extension might allow for an NA in a facet variable to match all, such as
# does not work
p + coord_cartesian_panels(
  panel_limits = tibble::tribble(
    ~DataType, ~ExpType, ~ymin, ~ymax
    , "A"    , NA      ,     1,     4
    , NA     , "Y"     ,     1,     6
  )
)
This would require that merge understand that NA means "all/any", not a literal NA. I'm not going to extend merge at the moment to handle that, so I'm not going to complicate this function to attempt to do that. If there is a reasonable merge replacement that does this kind of calculus, let me know :-)
Many Thanks to ...
burchill for the original effort and gist; and
Z.Lin, for helping to bring the function up to ggplot2-3.3.0.
UniquePanelCoords <- ggplot2::ggproto(
  "UniquePanelCoords", ggplot2::CoordCartesian,

  num_of_panels = 1,
  panel_counter = 1,
  layout = NULL,

  setup_layout = function(self, layout, params) {
    self$num_of_panels <- length(unique(layout$PANEL))
    self$panel_counter <- 1
    self$layout <- layout # store for later
    layout
  },

  setup_panel_params = function(self, scale_x, scale_y, params = list()) {
    train_cartesian <- function(scale, limits, name, given_range = c(NA, NA)) {
      if (anyNA(given_range)) {
        expansion <- ggplot2:::default_expansion(scale, expand = self$expand)
        range <- ggplot2:::expand_limits_scale(scale, expansion, coord_limits = limits)
        isna <- is.na(given_range)
        given_range[isna] <- range[isna]
      }
      out <- list(
        ggplot2:::view_scale_primary(scale, limits, given_range),
        sec = ggplot2:::view_scale_secondary(scale, limits, given_range),
        arrange = scale$axis_order(),
        range = given_range
      )
      names(out) <- c(name, paste0(name, ".", names(out)[-1]))
      out
    }

    this_layout <- self$layout[ self$panel_counter, , drop = FALSE ]
    self$panel_counter <-
      if (self$panel_counter < self$num_of_panels) {
        self$panel_counter + 1
      } else 1

    # determine merge column names by removing all "standard" names
    layout_names <- setdiff(names(this_layout),
                            c("PANEL", "ROW", "COL", "SCALE_X", "SCALE_Y"))
    limits_names <- setdiff(names(self$panel_limits),
                            c("xmin", "xmax", "ymin", "ymax"))

    limit_extras <- setdiff(limits_names, layout_names)
    if (length(limit_extras) > 0) {
      stop("facet names in 'panel_limits' not found in 'layout': ",
           paste(sQuote(limit_extras), collapse = ","))
    } else if (length(limits_names) == 0 && NROW(self$panel_limits) == 1) {
      # no panels in 'panel_limits'
      this_panel_limits <- cbind(this_layout, self$panel_limits)
    } else {
      this_panel_limits <- merge(this_layout, self$panel_limits, all.x = TRUE, by = limits_names)
    }

    if (isTRUE(NROW(this_panel_limits) > 1)) {
      stop("multiple matches for current panel in 'panel_limits'")
    }

    # add missing min/max columns, default to "no override" (NA)
    this_panel_limits[, setdiff(c("xmin", "xmax", "ymin", "ymax"),
                                names(this_panel_limits)) ] <- NA

    c(train_cartesian(scale_x, self$limits$x, "x",
                      unlist(this_panel_limits[, c("xmin", "xmax"), drop = TRUE])),
      train_cartesian(scale_y, self$limits$y, "y",
                      unlist(this_panel_limits[, c("ymin", "ymax"), drop = TRUE])))
  }
)

coord_cartesian_panels <- function(panel_limits, expand = TRUE, default = FALSE, clip = "on") {
  ggplot2::ggproto(NULL, UniquePanelCoords,
                   panel_limits = panel_limits,
                   expand = expand, default = default, clip = clip)
}
At some point I had a similar problem to this. The result was a slightly more verbose but also more flexible option that can customize many aspects of position scales on a per-facet basis. Due to some technicality it uses the equivalent of scales::oob_keep() as oob arguments on the scales, thereby acting as if the coordinates determined the limits.
library(ggh4x)
library(tidyverse)

p <- test_data %>%
  ggplot(aes(x = Nsubjects, y = Odds, color = EffectSize)) +
  facet_wrap(DataType ~ ExpType, labeller = label_both, scales = "free") +
  geom_line(size = 2) +
  geom_ribbon(aes(ymax = Upper, ymin = Lower, fill = EffectSize, color = NULL), alpha = 0.2) +
  facetted_pos_scales(
    x = list(
      scale_x_continuous(limits = c(8, 64)),
      scale_x_continuous(limits = c(64, 8), trans = "reverse"),
      NULL,
      scale_x_continuous(limits = c(8, 64), labels = scales::dollar_format())
    ),
    y = list(
      scale_y_continuous(limits = c(1, 4), guide = "none"),
      scale_y_continuous(limits = c(1, 6), breaks = 1:3),
      NULL,
      scale_y_continuous(limits = c(1, 7), position = "right")
    )
  )
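For what it's worth, a tiny illustration of the out-of-bounds behaviour mentioned above (assuming scales >= 1.1.0, where these helpers live): oob_keep() retains out-of-bounds values, which is the equivalent of what facetted_pos_scales relies on, whereas the usual default turns them into NA.
library(scales)
oob_censor(c(0, 3, 10), range = c(1, 6))  # default behaviour: 0 and 10 become NA
oob_keep(c(0, 3, 10), range = c(1, 6))    # values are kept as-is, only the view range changes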

Building legends with time series data, in ggplot

Aloha all,
I've struggled to build a legend for a mix/match of time series data I'm making. Here is some code:
My understanding is that I need to somehow clean my data and put it all in the same data frame, but the time series don't line up very well: some are at 15-minute intervals, others at one hour. Is there any way to force a legend for these datasets? I don't know what else to post here, since the 5 datasets are quite large.
Plot I'm working on:
q <- ggplot(subset(cr200_Auwai1, timedate > startd & timedate < endd), aes(timedate, Turb_SS)) +
  geom_point(color = "coral4") +
  geom_point(data = subset(dsloi_wl, timedate > startd & timedate < endd), aes(timedate, level), color = "blue") +
  #geom_point(data = subset(flow_data, mdate > startd & mdate < endd), aes(as.POSIXct(mdate), flow_cfs * 1000), color = "red") +
  geom_point(data = subset(cr300_Wai1, timedate > startd & timedate < endd), aes(timedate, Lvl_m * 1000), color = "forestgreen", size = 1) + #aquamarine3
  geom_point(data = subset(cr300_Wai1, timedate > startd & timedate < endd), aes(timedate, Turb_SS), color = "orange") +
  #geom_point(data = subset(hihimanu_wl, timedate > startd & timedate < endd), aes(timedate, level), color = "azure4", size = 0.1) +
  #geom_point(data = subset(rain_data, timedate > startd & timedate < endd), aes(timedate, rainmm), color = "red", size = 5) +
  geom_point(data = subset(haptuk_ysi, datetime > startd & datetime < endd), aes(datetime, Turb), color = "pink") +
  #scale_x_date(breaks = date_breaks("month"), labels = date_format("%b-%y")) +
  xlab("Date") +
  ylab("Turbidity (NTU) and Water Level (mm)") +
  coord_cartesian(ylim = c(0, 1500)) +
  theme_bw() +
  theme(axis.text = element_text(size = 14),
        axis.title = element_text(size = 16, face = "bold"),
        legend.justification = c(1, 1),
        legend.position = c(1, 1),
        legend.title = element_text(size = 14),
        legend.text = element_text(size = 12))
Here is a sample of two of the datasets: Note that the times don't line up at all... since I'm mixing sources.
dsloi_wl:
structure(list(ReceptionTime = c(1533895414.1134, 1533895414.1733,
1533895414.19397, 1533895414.20708, 1533895414.22283, 1533895414.23634,
1533895414.25135, 1533895414.26387, 1533895414.27653, 1533895414.29126,
1533896013.68755, 1533896013.7638, 1533896013.79232, 1533896013.80917,
1533896013.82312, 1533896013.83648, 1533896013.84988, 1533896013.8648,
1533896013.87724, 1533896013.8894), d2w = c(776.7, 789.7, 790.2,
777.1, 777.2, 777.7, 778.4, 793.4, 779.6, 794.1, 819.9, 780.7,
794.1, 806.9, 781.9, 781.9, 782.7, 782.8, 783.1, 783.4), timedate = structure(c(1533895414.1134,
1533895414.1733, 1533895414.19397, 1533895414.20708, 1533895414.22283,
1533895414.23634, 1533895414.25135, 1533895414.26387, 1533895414.27653,
1533895414.29126, 1533896013.68755, 1533896013.7638, 1533896013.79232,
1533896013.80917, 1533896013.82312, 1533896013.83648, 1533896013.84988,
1533896013.8648, 1533896013.87724, 1533896013.8894), class = c("POSIXct",
"POSIXt"), tzone = ""), level = c(723.3, 710.3, 709.8, 722.9,
722.8, 722.3, 721.6, 706.6, 720.4, 705.9, 680.1, 719.3, 705.9,
693.1, 718.1, 718.1, 717.3, 717.2, 716.9, 716.6)), .Names = c("ReceptionTime",
"d2w", "timedate", "level"), row.names = c(NA, 20L), class = "data.frame")
CR300_Wai1
structure(list(RECORD = 73027:73046, Temp_C = c(24.62861, 24.62332,
24.61533, 24.60857, 24.60189, 24.59733, 24.59068, 24.58404, 24.57869,
24.57327, 24.56781, 24.5606, 24.55551, 24.55218, 24.54648, 24.5416,
24.5358, 24.5319, 24.52781, 24.52294), Turb_BS = c(94.50522,
88.65939, 109.354, 57.71527, 134.1903, 46.37191, 78.17719, 52.22319,
58.07111, 96.95719, 51.47488, 44.65616, 70.43825, 99.58217, 93.68374,
87.4787, 175.5395, 167.6757, 110.8119, 132.5971), Turb_SS = c(36.63349,
34.31228, 37.02223, 32.97258, 36.68553, 33.82083, 37.43391, 33.43639,
31.17306, 33.6327, 34.69954, 30.99891, 34.69988, 33.64369, 32.54948,
32.1177, 32.86558, 48.97706, 30.65004, 33.71646), Temp_C_2 = c(24.9014,
24.89474, 24.88837, 24.88279, 24.87574, 24.86852, 24.86357, 24.85751,
24.85236, 24.84759, 24.84091, 24.83577, 24.83192, 24.82713, 24.8229,
24.81832, 24.81237, 24.80821, 24.8051, 24.80015), WD_OBS = c(0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L), Lvl_m = c(0.6907353, 0.6905226, 0.6896195, 0.6890779,
0.6881586, 0.6878724, 0.6862501, 0.6848835, 0.6844589, 0.6837503,
0.6836612, 0.6831629, 0.6821692, 0.6812283, 0.6799452, 0.6791196,
0.6782504, 0.6772775, 0.6763596, 0.6755115), timedate = structure(c(1533895500,
1533895800, 1533896100, 1533896400, 1533896700, 1533897000, 1533897300,
1533897600, 1533897900, 1533898200, 1533898500, 1533898800, 1533899100,
1533899400, 1533899700, 1533900000, 1533900300, 1533900600, 1533900900,
1533901200), class = c("POSIXct", "POSIXt"), tzone = "")), .Names = c("RECORD",
"Temp_C", "Turb_BS", "Turb_SS", "Temp_C_2", "WD_OBS", "Lvl_m",
"timedate"), row.names = c(NA, 20L), class = "data.frame")
Here is a solution using mock data (next time, please provide a sample of your data):
library(tidyverse)
library(lubridate)
#>
#> Attachement du package : 'lubridate'
#> The following object is masked from 'package:base':
#>
#>     date

# mock data
time_15m <- seq(as.POSIXct("2018-08-30 00:00:00"), as.POSIXct("2018-08-31 00:00:00"), by = "15 min")
time_30m <- seq(as.POSIXct("2018-08-30 00:00:00"), as.POSIXct("2018-08-31 00:00:00"), by = "30 min")
time_60m <- seq(as.POSIXct("2018-08-30 00:00:00"), as.POSIXct("2018-08-31 00:00:00"), by = "60 min")

data_1 <- data.frame(time = time_15m,
                     var_1 = cos(hour(time_15m) + minute(time_15m)))
data_2 <- data.frame(time = time_30m,
                     var_2 = sin(hour(time_30m) + minute(time_30m)))
data_3 <- data.frame(time = time_60m,
                     var_3 = cos(1 - hour(time_60m) + minute(time_60m)))

# the kind of plot you have (prefer the 2nd version)
ggplot(data_1, aes(x = time, y = var_1)) +
  geom_point(color = "red") +
  geom_point(data = data_2, aes(time, var_2), color = "green") +
  geom_point(data = data_3, aes(time, var_3), color = "blue") +
  theme_bw()

# a version with long format data and use of gather function
data_1 %>%
  left_join(data_2) %>%   # join data from data_2 (timestep = 30m), missing data is NA
  left_join(data_3) %>%   # join data from data_3 (timestep = 60m), missing data is NA
  gather(variable_name, variable_value, var_1, var_2, var_3) %>%  # gather var_1, var_2 and var_3 in a single column
  ggplot(., aes(x = time, y = variable_value, color = variable_name)) +
  theme_bw() +
  geom_point(size = 2)
#> Joining, by = "time"
#> Joining, by = "time"
#> Warning: Removed 120 rows containing missing values (geom_point).
Created on 2018-08-22 by the reprex package (v0.2.0).
EDIT 1 (include provided datasets)
library(tidyverse)

dsloi_wl %>%
  full_join(cr300_Wai1) %>%
  mutate(Lvl_m = 100 * Lvl_m) %>%
  gather(variable_name, variable_value, level, Lvl_m, Turb_SS) %>%
  ggplot(., aes(x = timedate, y = variable_value, color = variable_name)) +
  geom_point() +
  scale_color_manual("Legend title",
                     values = c("level" = "blue",
                                "Lvl_m" = "forestgreen",
                                "Turb_SS" = "orange"))
#> Joining, by = "timedate"
#> Warning: Removed 60 rows containing missing values (geom_point).
Created on 2018-08-23 by the reprex package (v0.2.0).
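If you also want friendlier legend text than the raw column names, scale_color_manual() accepts a labels argument as well; as a small tweak you could swap the scale in the plot above for something like this (the label text here is made up, adjust to your variables):
scale_color_manual("Legend title",
                   values = c("level" = "blue",
                              "Lvl_m" = "forestgreen",
                              "Turb_SS" = "orange"),
                   labels = c("level" = "Water level (dsloi)",
                              "Lvl_m" = "Water level (Wai1)",
                              "Turb_SS" = "Turbidity (SS)"))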

R barplot cumulative - x date, y freq

I'm trying to recreate this kind of plot in R, but haven't been very successful.
X is the date and Y is the frequency of a discrete variable, stacked cumulatively on one bar.
Also, I'm trying to put it in a function so it would be easier to reuse this kind of plot for different variables.
Link to the plot image <---
I'd appreciate any help!
Data example:
Excel plot example <---
Purchase_date Phone
2014-10-23 Sony
2014-10-23 Apple
2014-10-23 Nokia
2014-10-23 Nokia
2014-10-24 NA
2014-10-24 Nokia
2014-10-24 Sony
2014-10-24 Other
2014-10-24 Apple
2014-10-25 Sony
2014-10-25 NA
2014-10-25 Apple
2014-10-25 Sony
2014-10-25 Nokia
Also, I have something like this already, but it's definitely far from a universal method for different variables:
base_table %>%
  filter(year(as.Date(BUY_DATE)) >= 2014, year(as.Date(BUY_DATE)) <= 2017) %>%
  mutate(BUY_DATE = as.yearmon(as.Date(BUY_DATE))) %>%
  group_by(PHONETYPE, BUY_DATE) %>%
  summarise(n = n()) -> applPerTypeAndMonth
applPerTypeAndMonth %>% pull(PHONETYPE) %>% table()
filter(applPerTypeAndMonth, PHONETYPE == '') -> x
xts(x$n, order.by = x$BUY_DATE) -> type1
filter(applPerTypeAndMonth, PHONETYPE == 'NOKIA') -> x
xts(x$n, order.by = x$BUY_DATE) -> type2
filter(applPerTypeAndMonth, PHONETYPE == 'APPLE') -> x
xts(x$n, order.by = x$BUY_DATE) -> type3
filter(applPerTypeAndMonth, PHONETYPE == 'SONY') -> x
xts(x$n, order.by = x$BUY_DATE) -> type4
filter(applPerTypeAndMonth, PHONETYPE == 'HUAWEI') -> x
xts(x$n, order.by = x$BUY_DATE) -> type5
filter(applPerTypeAndMonth, PHONETYPE == 'LG') -> x
xts(x$n, order.by = x$BUY_DATE) -> type6
filter(applPerTypeAndMonth, PHONETYPE == 'OTHER') -> x
xts(x$n, order.by = x$BUY_DATE) -> type7
merge(type1,type2,type3,type4,type5,type6,type7) -> types
na.fill(types, fill = 0.0) -> types
barplot(types, col = rainbow(7))
types %>% apply(1, function(x) x / sum(x)) %>% barplot(col = rainbow(7))
# legend("topright", legend = names(types), fill = rainbow(7))
Using data.table (with the question's data held in a data.table called purchases), first create a summary table that gives the frequency of each phone on each day.
library(data.table)
summary = purchases[, list(Purchases = .N), by = list(Purchase_date, Phone)]
Then split this out by phone type, and in each sub-dataset order by date and add in a cumulative purchases variable.
splitted = split(summary, summary$Phone)
splitted = lapply(splitted, function(x){
  x = x[order(Purchase_date)]
  x$CumulativePurchases = cumsum(x$Purchases)
  return(x)})
Then rbindlist back together into a single data.table, and you can use ggplot easily.
library(ggplot2)
summary = rbindlist(splitted)
plotted = ggplot(summary, aes(x = Purchase_date, y = CumulativePurchases, fill = Phone)) +
  geom_bar(stat = "identity")
something along the lines of this,
dta <- structure(list(Purchase_date = structure(c(1L, 1L, 1L, 1L, 2L,
2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L), .Label = c("2014-10-23",
"2014-10-24", "2014-10-25"), class = "factor"), Phone = structure(c(4L,
1L, 2L, 2L, NA, 2L, 4L, 3L, 1L, 4L, NA, 1L, 4L, 2L), .Label = c("Apple",
"Nokia", "Other", "Sony"), class = "factor")), .Names = c("Purchase_date",
"Phone"), class = "data.frame", row.names = c(NA, -14L))
# install.packages(c("ggplot2"), dependencies = TRUE)
library(ggplot2)
g <- ggplot(dta, aes(Purchase_date))
g + geom_bar(aes(fill = Phone))
Updated: here's the plot wrapped in a function,
function.name <- function(df) {
  require(ggplot2)
  p <- ggplot(df, aes(x = Purchase_date))
  p + geom_bar(aes(fill = Phone))
}
function.name(dta)
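Since the question asks for something reusable across variables, here is a hedged sketch of a more general helper (my own naming, not from the answer above; it assumes ggplot2 >= 3.0 for the .data pronoun) that takes the date and fill columns as strings:
library(ggplot2)
plot_count_bars <- function(df, date_col, fill_col) {
  # hypothetical helper: stacked counts of `fill_col` per value of `date_col`
  ggplot(df, aes(x = .data[[date_col]], fill = .data[[fill_col]])) +
    geom_bar() +
    labs(x = date_col, fill = fill_col)
}
plot_count_bars(dta, "Purchase_date", "Phone")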
I'll obviously recommend you take a look at this site to learn how to label, color, reorder, etc.
# load packages
library(tidyverse)
library(lubridate)
# create a dataframe from your data
df <- frame_data(
~Purchase_date, ~Phone
, "2014-10-23", "Sony"
, "2014-10-23", "Apple"
, "2014-10-23", "Nokia"
, "2014-10-23", "Nokia"
, "2014-10-24", "NA"
, "2014-10-24", "Nokia"
, "2014-10-24", "Sony"
, "2014-10-24", "Other"
, "2014-10-24", "Apple"
, "2014-10-25", "Sony"
, "2014-10-25", NA
, "2014-10-25", "Apple"
, "2014-10-25", "Sony"
, "2014-10-25", "Nokia"
)
# make dates dates, if you want to
df <- df %>%
  mutate(Purchase_date = as_date(Purchase_date))
# and plot it
df %>%
  ggplot(aes(Purchase_date, fill = Phone)) +
  geom_bar()
ggplot() and geom_bar() ARE functions, and they do what you want (and actually a whole lot more, if desired). How to plot can be read up on, e.g., in the R Graphics Cookbook, which really helps whenever you need it.

how to make a merged heatmap between each two columns of values

How can I put two columns in one heatmap?
Let's say I have the following data:
data<- structure(list(names = structure(c(5L, 1L, 10L, 2L, 6L, 4L, 9L,
7L, 11L, 3L, 8L), .Label = c("Bin", "Dari", "Down", "How", "India",
"Karachi", "Left", "middle", "Right", "Trash", "Up"), class = "factor"),
X1Huor = c(1.555555556, 5.2555556, 2.256544, 2.3654225, 1.2665545,
0, 1.889822365, 2.37232101, -1, -1.885618083, 1.128576187
), X2Hour = c(1.36558854, 2.254887, 2.3333333, 0.22255444,
2.256588, 5.66666, -0.377964473, 0.107211253, -1, 0, 0),
X3Hour = c(0, 1.222222222, 5.336666, 1.179323788, 0.832050294,
-0.397359707, 0.185695338, 1.393746295, -1, -2.121320344,
1.523019248), X4Hour = c(3.988620176, 3.544745039, -2.365555,
2.366666, 1.000000225, -0.662266179, -0.557086015, 0.862662186,
0, -1.305459824, 1.929157714), X5Hour = c(2.366666, 2.333365,
4.22222, 0.823333333, 0.980196059, -2.516611478, 2.267786838,
0.32163376, 0, -2.592724864, 0.816496581)), .Names = c("names",
"X1Huor", "X2Hour", "X3Hour", "X4Hour", "X5Hour"), class = "data.frame", row.names = c(NA,
-11L))
These data have 5 columns of values. I want to make a heatmap in which one half of each cell shows the value from the first column and the other half shows the value from the second column.
The same goes for the third and fourth columns,
and for the fifth and sixth (there is no sixth column, but I can leave that half empty).
This is just an example to show what I am looking for. I have searched a lot but could not find anything like this.
The colors range from red to green: if the value is higher than 2 the color is red, and if the value is lower than -2 the color is green.
Any thoughts on how to do this?
This is a somewhat hacky solution, but it might work for you, so check this out.
The idea is to utilize geom_polygon to create the triangles and stack them. To do that we first need to generate the triangle coordinates
library(dplyr)
library(tidyr)
library(stringr)
library(ggplot2)

# the following two lines create the triangle coordinates
x = rep(c(1, 2, 2, 1, 1, 2), nrow(data))
y = rep(c(1, 1, 2, 1, 2, 2), nrow(data)) + rep(0:10, each = 6)
Now that we have our coordinates we need to generate their ids, which are the names. But because we want each triangle to be unique, we need to create two unique versions of each name:
names <- data %>%
  select(names, X1Huor, X2Hour) %>%
  gather("key", "value", X1Huor, X2Hour) %>%
  arrange(names, key) %>%
  mutate(name = str_c(names, key)) %>%
  .$name %>%
  rep(each = 3)
And now we do the same with the hours:
hour <- data %>%
  select(names, X1Huor, X2Hour) %>%
  gather("key", "value", X1Huor, X2Hour) %>%
  arrange(names, key) %>%
  .$value %>%
  rep(each = 3)

datapoly <- data.frame(x = x, y = y, hour = hour, names = names)
Since there are no proper labels for the plot in our datapoly df, we need to create one:
name_labels <- data %>%
  select(names) %>%
  arrange(names) %>%
  .$names
The scene is now set for our graph:
ggplot(datapoly, aes(x = x, y = y)) +
  geom_polygon(aes(group = names, fill = hour), color = "black") +
  scale_fill_continuous(low = "green", high = "red") +
  scale_y_continuous(breaks = 1:nrow(data), labels = name_labels) +
  theme(axis.text.y = element_text(vjust = -2),
        axis.ticks = element_blank(),
        axis.text.x = element_blank(),
        axis.title = element_blank())
The output looks like this:
Several points to keep in mind: is this really a plot you want to be creating and using? Is it really useful for your purposes? Perhaps other, more traditional visualization methods are more suitable. Also, I didn't bother doing the same for the other hour columns as that gets quite tedious, but the method for achieving it should be clear enough (I hope); one possible generalization is sketched below.
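For completeness, here is one hedged way to generalize the construction above (make_pair_polys is my own helper name, not from the original code): each call builds the triangle polygons for one pair of hour columns, with an x offset so the pairs sit side by side. The lone fifth column would need a single-triangle variant, which I leave out.
make_pair_polys <- function(data, col_a, col_b, x_offset) {
  # long format: one row per (name, column) pair, ordered like the code above
  # (by the names factor, then by column name)
  long <- data.frame(
    names = rep(data$names, times = 2),
    key   = rep(c(col_a, col_b), each = nrow(data)),
    value = c(data[[col_a]], data[[col_b]])
  )
  long <- long[order(long$names, long$key), ]
  data.frame(
    x = rep(c(1, 2, 2, 1, 1, 2), nrow(data)) + x_offset,
    y = rep(c(1, 1, 2, 1, 2, 2), nrow(data)) + rep(seq_len(nrow(data)) - 1, each = 6),
    hour  = rep(long$value, each = 3),
    names = rep(paste0(long$names, long$key), each = 3)
  )
}

datapoly_all <- rbind(
  make_pair_polys(data, "X1Huor", "X2Hour", 0),
  make_pair_polys(data, "X3Hour", "X4Hour", 1)
)

ggplot(datapoly_all, aes(x = x, y = y)) +
  geom_polygon(aes(group = names, fill = hour), color = "black") +
  scale_fill_continuous(low = "green", high = "red") +
  scale_y_continuous(breaks = seq_len(nrow(data)), labels = levels(data$names)) +
  theme(axis.ticks = element_blank(),
        axis.text.x = element_blank(),
        axis.title = element_blank())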

ggplot2 scatter plot with overlay of means and bidirectional SD bars

This question is a direct successor to a previous question asked here called "ggplot scatter plot of two groups with superimposed means with X and Y error bars". That question's answer looks to do exactly what I am trying to accomplish; however, the code provided results in an error which I can't get around. I will use my data as an example here, but I have tried the original question's code as well, with the same result.
I have a data frame which looks like this:
structure(list(Meta_ID = structure(c(15L, 22L, 31L, 17L), .Label = c("NM*624-46",
"NM*624-54", "NM*624-56", "NM*624-61", "NM*624-70", "NM624-36",
"NM624-38", "NM624-39", "NM624-40", "NM624-41", "NM624-43", "NM624-46",
"NM624-47", "NM624-51", "NM624-54 ", "NM624-56", "NM624-57",
"NM624-59", "NM624-61", "NM624-64", "NM624-70", "NM624-73", "NM624-75",
"NM624-77", "NM624-81", "NM624-82", "NM624-83", "NM624-84", "NM625-02",
"NM625-10", "NM625-11", "SM621-43", "SM621-44", "SM621-46", "SM621-47",
"SM621-48", "SM621-52", "SM621-53", "SM621-55", "SM621-56", "SM621-96",
"SM621-97", "SM622-51", "SM622-52", "SM623-14", "SM623-23", "SM623-26",
"SM623-27", "SM623-32", "SM623-33", "SM623-34", "SM623-55", "SM623-56",
"SM623-57", "SM623-58", "SM623-59", "SM623-61", "SM623-62", "SM623-64",
"SM623-65", "SM623-66", "SM623-67", "SM680-74", "SM681-16"), class = "factor"),
Region = structure(c(1L, 1L, 1L, 1L), .Label = c("N", "S"
), class = "factor"), Tissue = structure(c(1L, 2L, 1L, 1L
), .Label = c("M", "M*"), class = "factor"), Tag_Num = structure(c(41L,
48L, 57L, 43L), .Label = c("621-43", "621-44", "621-46",
"621-47", "621-48", "621-52", "621-53", "621-55", "621-56",
"621-96", "621-97", "622-51", "622-52", "623-14", "623-23",
"623-26", "623-27", "623-32", "623-33", "623-34", "623-55",
"623-56", "623-57", "623-58", "623-59", "623-61", "623-62",
"623-64", "623-65", "623-66", "623-67", "624-36", "624-38",
"624-39", "624-40", "624-41", "624-43", "624-46", "624-47",
"624-51", "624-54", "624-56", "624-57", "624-59", "624-61",
"624-64", "624-70", "624-73", "624-75", "624-77", "624-81",
"624-82", "624-83", "624-84", "625-02", "625-10", "625-11",
"680-74", "681-16"), class = "factor"), Lab_Num = structure(1:4, .Label = c("C4683",
"C4684", "C4685", "C4686", "C4687", "C4688", "C4689", "C4690",
"C4691", "C4692", "C4693", "C4694", "C4695", "C4696", "C4697",
"C4698", "C4699", "C4700", "C4701", "C4702", "C4703", "C4704",
"C4705", "C4706", "C4707", "C4708", "C4709", "C4710", "C4711",
"C4712", "C4713", "C4714", "C4715", "C4716", "C4717", "C4718",
"C4719", "C4720", "C4721", "C4722", "C4723", "C4724", "C4725",
"C4726", "C4727", "C4728", "C4729", "C4730", "C4731", "C4732",
"C4733", "C4734", "C4735", "C4736", "C4737", "C4738", "C4739",
"C4740", "C4741", "C4742", "C4743", "C4744", "C4745", "C4746",
"C4747", "C4748"), class = "factor"), C = c(46.5, 46.7, 45,
43.6), N = c(12.9, 13.7, 14.5, 13.4), C.N = c(3.6, 3.4, 3.1,
3.3), d13C = c(-19.7, -19.5, -19.4, -19.2), d15N = c(13.3,
12.4, 11.7, 11.9)), .Names = c("Meta_ID", "Region", "Tissue",
"Tag_Num", "Lab_Num", "C", "N", "C.N", "d13C", "d15N"), row.names = c(NA,
4L), class = "data.frame")
What I want to produce is a scatter plot of the raw data with an overlay of the data means for each “Region” with bidirectional error bars. To accomplish that I use plyr to summarize my data and generate the means and SD’s. Then I use ggplot2:
library(plyr)
Basic <- ddply(First.run, .(Region), summarise,
               N = length(d13C),
               d13C.mean = mean(d13C),
               d15N.mean = mean(d15N),
               d13C.SD = sd(d13C),
               d15N.SD = sd(d15N))
ggplot(data = First.run, aes(x = First.run$d13C, y = First.run$d15N)) +
  geom_point(aes(colour = Region)) +
  geom_point(data = Basic, aes(colour = Region)) +
  geom_errorbarh(data = Basic, aes(xmin = d13C.mean + d13C.SD, xmax = d13C.mean - d13C.SD,
                                   y = d15N.mean, colour = Region, height = 0.01)) +
  geom_errorbar(data = Basic, aes(ymin = d15N.mean - d15N.SD, ymax = d15N.mean + d15N.SD,
                                  x = d13C.mean, colour = Region))
But each time I run this code I get the same error and can’t figure out what the problem is.
Error: Aesthetics must either be length one, or the same length as the data
Problems: Region
Any help would be much appreciated.
Edit: Since my example data is taken from the head of my full dataset it only includes samples from the "N" Region. With only this one region the code works fine but if you use fix() to change the provided dataset so that at least one other Region is included (in my data the other Region is "S") then the error I get shows up. My mistake in not including some data from each Region.
I ended up changing two of the "N" Regions to "S" so I could calculate standard deviation for both groups.
I think the problem was that you were missing required aesthetics in some of your geoms (geom_point was missing x and y, for example). At least getting all the required aesthetics into each geom seemed to get everything working. I cleaned up a few other things while I was at it to shorten the code up a bit.
ggplot(data = First.run, aes(x = d13C, y = d15N, colour = Region)) +
  geom_point() +
  geom_point(data = Basic, aes(x = d13C.mean, y = d15N.mean)) +
  geom_errorbarh(data = Basic, aes(xmin = d13C.mean + d13C.SD, xmax = d13C.mean - d13C.SD,
                                   y = d15N.mean, x = d13C.mean), height = .5) +
  geom_errorbar(data = Basic, aes(ymin = d15N.mean - d15N.SD, ymax = d15N.mean + d15N.SD,
                                  x = d13C.mean, y = d15N.mean), width = .01)
