I'd like to combine 2 different posts of mine - Set marker size based on coordinate values, not pixels, in plotly R - and - add_trace in plotly, which trace to use for my particular graph - into one here.
At a high level, I'd like to create interactive hexagonal heatmaps in R using plotly. I am considering the following two approaches:
using scatter trace with markers
using heatmap trace with hexagons
I am running into difficulties with both approaches. For the heatmap trace, I cannot find a parameter to change the shapes from squares to hexagons (does this exist?). For the scatter trace, I cannot fix the size of the markers to the values of the axes (only to pixel counts), and therefore the shapes do not scale the same size when I increase the size of the graph (see example at bottom).
Any thoughts on this would be appreciated!
EDIT: here's an example of my code using scatter trace with markers:
mydf <- data.frame(x = rep(1:20, times = 20), y = rep(1:20, each = 20),
                   thesize = 15)
plot_ly(mydf) %>%
  add_trace(x = ~x, y = ~y, type = 'scatter', mode = 'markers',
            marker = list(symbol = 'hexagon', size = ~thesize, opacity = 0.6))
# note that the size is 15 pixels. I'd prefer that the size is pinned to values on the X and/or Y axis.
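To illustrate the behaviour I'm after: layout shapes, unlike markers, are positioned and sized in data coordinates, so they rescale with the axes. A rough sketch with circles standing in for hexagons (a true hexagon would need type = 'path' with an SVG path string built from the x/y values; also, hundreds of shapes can render slowly):
half <- 0.25  # half-width in axis units, so each shape spans 0.5 on both axes
shapes <- lapply(seq_len(nrow(mydf)), function(i) {
  list(type = "circle", xref = "x", yref = "y",
       x0 = mydf$x[i] - half, x1 = mydf$x[i] + half,
       y0 = mydf$y[i] - half, y1 = mydf$y[i] + half,
       fillcolor = "steelblue", opacity = 0.6, line = list(width = 0))
})
plot_ly(mydf) %>%
  add_trace(x = ~x, y = ~y, type = 'scatter', mode = 'markers',
            marker = list(size = 1)) %>%  # tiny markers kept only for hover
  layout(shapes = shapes)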
Related
I have a data set showing the differences between two measurements of the same structure, made with two different methods, as distance in meters and direction in degrees. I found the scatterpolar trace of the plot_ly function from the plotly package in R, which produced almost what I wanted, but with some problems in the layout. Here is the code I used:
library(plotly)
data <- data.frame(measurements_compare); data
fig <- plot_ly(
  type = 'scatterpolar',
  r = c(data$distance),
  theta = c(data$rotation),
  text = c(data$id),
  mode = 'markers'
)
fig
What I got from that was this plot, which is already quite close to what I want:
Now I would like to rotate the plot so that 0° is at the top instead of 90°; I would also like the degrees to ascend clockwise instead of counterclockwise. I found code examples for that in the archive where the function update_layout was used, but those examples use Python instead of R. I could not find something similar for R, but I am pretty sure there must be.
In plotly you can often use the same arguments in R and Python. For your code, R's layout function is needed instead of Python's update_layout.
Example Data
data <- data.frame(distance = sample(seq(.1, .5, by = .01), 15, T),
                   rotation = sample(0:360, 15, T),
                   id = paste0(1:15))
Code
fig <- plot_ly(
  type = 'scatterpolar',
  r = c(data$distance),
  theta = c(data$rotation),
  text = c(data$id),
  mode = 'markers'
) %>%
  layout(polar = list(
    angularaxis = list(
      rotation = 90,
      direction = "clockwise"
    )
  ))
fig
Plot: the resulting chart, with 0° at the top and angles increasing clockwise.
I am having trouble understanding the default axis range for bars and lines in plotly for R. They seem to be different. To be precise, the default y-axis range for bars is not based on the extrema of the input data while it is based as such for lines. A little bit of background follows.
So I am making a plot of different economic time series. As is often the case with visualization of economic data, I often need two y-axes to show different variables which might be related. Currently I am making a line chart on the primary y axis and a bar chart on the secondary axis. The problem is that the secondary axis bar chart does not adequately represent the data because it selects a very wide range for the axis on a default basis. E.g. the specific variable on the sec. axis ranges from 3500 to 4000 but the range is shown from 0 to 4000. For line charts there is no such problem.
I can of course change these ranges manually using the attribute "range" in the "layout" function (a sketch of that fallback appears at the end of this question), but I want to get my desired plot without much manual input. It also helps if plotly figures out the extrema by itself, because the input data changes quite frequently. Here is my current code:
plot_ly(data = filter(dlx_df3, month_date >= "2013-01-01", month_date <= "2014-01-01")) %>%
  add_lines(x = ~month_date, y = ~walr, name = "walr") %>%
  add_bars(x = ~month_date, y = ~advances, yaxis = "y2", name = "adv") %>%
  layout(
    xaxis = list(ticks = "outside"),
    yaxis2 = list(
      side = "right",
      autotick = TRUE,
      ticks = "outside",
      rangemode = "normal"
    ),
    yaxis = list(
      overlaying = "y2",
      autotick = TRUE,
      ticks = "outside"
    ),
    legend = list(x = 1.08, y = 0.7)
  )
You can see that the bars do not show much "variation". But this changes if I change the add_bars to add_lines. See below:
How do I change this axis modification for bars?
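For reference, the manual fallback mentioned above would look roughly like this, with the secondary-axis range computed from the data rather than typed in (a sketch only; it assumes dlx_df3 and its columns from the snippet above, plus dplyr for filter):
sub_df <- filter(dlx_df3, month_date >= "2013-01-01", month_date <= "2014-01-01")
pad <- 0.05 * diff(range(sub_df$advances))  # 5% headroom so the bars are not clipped at the edges
plot_ly(data = sub_df) %>%
  add_lines(x = ~month_date, y = ~walr, name = "walr") %>%
  add_bars(x = ~month_date, y = ~advances, yaxis = "y2", name = "adv") %>%
  layout(
    xaxis = list(ticks = "outside"),
    yaxis2 = list(side = "right", ticks = "outside",
                  range = c(min(sub_df$advances) - pad, max(sub_df$advances) + pad)),
    yaxis = list(overlaying = "y2", ticks = "outside"),
    legend = list(x = 1.08, y = 0.7)
  )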
This is a simple question, that I've asked about previously, but that I wanted to revisit. See the following graph made in R's plotly:
mydf = data.frame(x = 1:5, y = 1:5)
plot_ly(mydf) %>%
  add_trace(x = ~x, y = ~y, type = 'scatter', mode = 'markers',
            marker = list(size = 24)) %>%
  layout(annotations = list(text = 'Over Here',
                            x = 2, y = 3,
                            font = list(size = 24)))
When this plot is zoomed, the marker sizes remain the same, as does the annotation, and this is because the size parameters are set in pixel sizes. I am interested in having the marker sizes be based on the axis values, rather than a pixel count.
For example, I would like for the size of the markers to always have a height and width of 0.5 relative to the axes. The marker plotted at x = 3, y = 3 would then have a size such that it extends from x = 2.75 to x = 3.25, and from y = 2.75 to y = 3.25.
I do not see any way currently to set the marker sizes based on the coordinate scale of the plot, only based on the pixel width. My previous post here - Set marker size based on coordinate values, not pixels, in plotly R - got no responses, but this is an issue I'd still very much like to resolve.
Thanks!
I have a problem with the color range on my plot and legend.
This is the code I use:
data.ch4 <- read.csv2("v42_CH4_1970_TOT.txt",skip = 3,stringsAsFactors = FALSE, header = F)
num_data <- data.frame(data.matrix(data.ch4))
library(maptools)
library(lattice)
library(png)
#map loading
map1 <- readShapePoly("CNTR_2014_03M_SH/Data/CNTR_RG_03M_2014.shp")
coordinates(num_data) <- ~V2+V1
gridded(num_data) <- TRUE
#plotting
png(file="Map2.png",width=35,height=30,unit="cm", res=200, type = "cairo")
spplot(num_data["V3"], xlim=c(-5,35), ylim=c(35,70),
sp.layout = list("sp.polygons",map1),contour=F)
dev.off()
Here is the file with the data: https://www.sendspace.com/file/hjtatp (compressed because normally it weighs 57 MB).
The map is from here (but the map has secondary priority; it can be skipped).
This is how it looks without any scale modifications:
So everything is blue. Obviously the scale spans too wide a range, from the minimum to the maximum value. I would like to fix the scale so that, for example, the last value reads "higher than x". I tried to do that, and the result looks much better. This is how I did it:
# Fixed breakpoints (?)
at <- c(0e+0, 1.5e-5, 1.0e-4, 1.0e-3, 1.0e-2, 1.0e-1, 1.0e+0, 2.0e+0, 1.0e+1, 1.0e+2, 2.0e+2, 5.0e+2)
spplot(num_data["V3"], xlim = c(-5, 35), ylim = c(35, 70),
       sp.layout = list("sp.polygons", map1),
       contour = F,
       at = at)  # right there
So I added the at values manually (though the scale is not accurate). Everything looks much better, but...
As you can see, the scale on the right is not uniformly distributed: I cannot see any blue-purple colors, only orange and yellow.
Also, some spots on the map are bright yellow (the Germany area) because the values are highest there, but sadly there is no such color on the scale.
Probably I didn't do it properly. I don't know how to set the scale so it looks good. I would like to have a scale like this:
I achieved this by adding:
spplot(num_data["V3"], xlim=c(-5,35), ylim=c(35,70),
sp.layout = list("sp.polygons",map1),
contour=F,at=at,
colorkey=list(at=seq(0, 400, 30)) #right there
)
But again, this is just a fake scale; it won't work.
And a second quick question: how do I add the country contours on top of the spplotted data? Right now the contours are buried under the colorful data :c
Converting the data to a factor gives the legend regular intervals, and you can change the labels and their positions with colorkey = list(labels = list(at = ..., labels = ...)).
[Edited: I noticed that some values are over 500, sorry.]
## convert numeric to factor
num_data@data$cutV3 <- cut(num_data@data$V3, breaks = c(at, Inf))  # I modified the breaks
spplot(num_data["cutV3"], xlim = c(-5, 35), ylim = c(35, 70),
       colorkey = list(height = 1, labels = list(at = seq(0.5, length(at) - 0.5), labels = at)),
       sp.layout = list("sp.polygons", map1, first = F), contour = F)  # polygons drawn after the main plot
I am making a density map in R using ggmap and stat_density2d. The code looks like this:
library(ggmap)  # also attaches ggplot2
riverside <- get_map('Riverside, IL', zoom = 14, color = 'bw')
RiversideMap <- ggmap(riverside, extent = 'device', legend = 'topleft')
# make the map:
RiversideMap +
  stat_density2d(aes(x = lon, y = lat,
                     fill = ..level.., alpha = ..level..),
                 size = .01, bins = 16,
                 data = myData, geom = 'polygon') +
  scale_fill_gradient(low = "yellow", high = "blue") +
  scale_alpha(range = c(.0, 0.3), guide = FALSE)
The density shown in the map's color legend is normalized in stat_density2d by requiring the integral of the density over area equals 1.
In the map, the units of the x and y axes are decimal degrees. (For example, a point is specified by the coordinates lat = 41.81888 and lon = -87.84147).
For ease of interpretation, I'd like to make two changes to the density values displayed in the map legend.
First, I'd like the integral of the density to be N (the number of data points - or addresses - in the data set) rather than 1. So the values displayed in the legend need to be multiplied by N = nrow(myData).
Second, I'd like the unit of distance to be kilometers rather than decimal degrees. For the latitudes and longitudes that I am plotting, this requires dividing the values displayed in the legend by 9203 (roughly the number of square kilometers in one square degree here, since a degree of latitude is about 111 km and a degree of longitude at this latitude is about 83 km).
With the default normalization of density in stat_density2d, I get these numbers in the legend: c(2000,1500,1000,500).
Taking N = 1600 and performing the above re-scalings, this becomes c(348, 261, 174, 87) (= 1600/9203 * 2000 etc). Obviously, these are not nice round numbers, so it would be even better if the legend numbers were say c(400,300,200,100) with their locations in the legend color bar adjusted accordingly.
The advantage of making these re-scalings is that the density in the map becomes easy to interpret: it is just the number of people per square km (rather than the probability density of people per square degree).
Is there an easy way to do this? I am new to ggmap and ggplot2. Thanks in advance.
In brief, use:
scale_fill_continuous(labels = scales::unit_format(unit = "k", scale = 1e-3))
This link is a great help for managing scales, axes, and labels: https://ggplot2-book.org/scales.html
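If the goal is the exact rescaling described in the question (multiply by N and divide by 9203 so the legend reads people per square km), the labels argument of the fill scale also accepts a function; a sketch, reusing myData and the scale_fill_gradient call from the question:
N <- nrow(myData)  # number of addresses in the data set
RiversideMap +
  stat_density2d(aes(x = lon, y = lat, fill = ..level.., alpha = ..level..),
                 size = .01, bins = 16, data = myData, geom = 'polygon') +
  scale_fill_gradient(low = "yellow", high = "blue",
                      # relabel the legend as (approximate) people per square km
                      labels = function(x) round(x * N / 9203)) +
  scale_alpha(range = c(.0, 0.3), guide = FALSE)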