R rgl text3d() artifacts block objects and change plot extent

R 3.5.1
RStudio 1.1.463
rgl 0.99.16
extrafont 0.17
Windows 10 build 1809
When I plot a shape with quads3d() and then add a text3d() object, I find 3 problems (I think they are related, thus a single post here):
1. The text3d() object produces artifacts that interfere with the drawing; they look like surfaces behind the text that intersect the drawn objects in strange ways.
2. The plot zooms way out (and the extent/bbox changes). I can make it not zoom using ignoreExtent=T, but if I then do axes3d(), the bbox is seen to be much bigger than the one with no text. This seems to indicate that a comparatively very large piece of geometry was inserted. What about a few characters of text is so large dimensionally?
3. The text looks crappy: pixellated, weak math symbols, etc.
I have tried different font families (including the basic four), colors, and other text parameters. I use the extrafont package and have loaded my Windows fonts using font_import(). It doesn't matter if it's a fresh R session/environment, and it happens with usePlotmath=T or F.
The geometry I'm working with is smallish (it fits roughly into a unit cube); should that make any difference?
How can I get rid of the artifacts and get decent-looking text that doesn't change the plot dimensions? Thanks.
Here's example code:
# Draw a 3D Shape and Label it
library(rgl)
library(extrafont)
# Open a new device in which to display the diagram
open3d(windowRect=c(900,200,1700,800))
# Define vertices of the faces
A0 <- c(0, 0.1, -0.02)
B0 <- c(0, -0.1, -0.02)
C0 <- c(0, -0.1, 0.02)
D0 <- c(0, 0.1, 0.02)
Al <- c(1, 0.02, -0.1)
Bl <- c(1, -0.02, -0.1)
Cl <- c(1, -0.02, 0.1)
Dl <- c(1, 0.02, 0.1)
# Define the quadrangles to be visualized
Face0 <- c(A0, B0, C0, D0)
Facel <- c(Bl, Al, Dl, Cl)
Side1 <- c(A0, Al, Bl, B0)
Side2 <- c(B0, Bl, Cl, C0)
Side3 <- c(C0, Cl, Dl, D0)
Side4 <- c(D0, Dl, Al, A0)
# Draw faces and sides
TColor <- "steelblue"
TAlpha <- .25
F0 <- quads3d(matrix(Face0, nrow=4, byrow=T), col=TColor, alpha=TAlpha)
Fl <- quads3d(matrix(Facel, nrow=4, byrow=T), col=TColor, alpha=TAlpha)
S1 <- quads3d(matrix(Side1, nrow=4, byrow=T), col=TColor, alpha=TAlpha)
S2 <- quads3d(matrix(Side2, nrow=4, byrow=T), col=TColor, alpha=TAlpha)
S3 <- quads3d(matrix(Side3, nrow=4, byrow=T), col=TColor, alpha=TAlpha)
S4 <- quads3d(matrix(Side4, nrow=4, byrow=T), col=TColor, alpha=TAlpha)
Running this much results in a nice image of a 3d shape:
If I run the following code to add a text label,
# Label a point
Cx <- c(.6,-0.052,0.068)
Xcolor <- "#000000"
points3d(Cx[1], Cx[2], Cx[3], col=Xcolor, size=5)
points3d(matrix(Cx, nrow=1), col=Xcolor, size=5)
XVertexColor <- "darkseagreen4"
par3d(ignoreExtent=F)
labelCx <- text3d(x=Cx[1], y=Cx[2], z=Cx[3], adj=c(0,0), family="Calibri", cex=1, font=2, text=expression(bold(sqrt(1/C[3](x)))), usePlotmath=T, col=XVertexColor)
...it looks like this (with ignoreExtent=F):
This is the same rgl device; the only change is the added point and the text3d() label:
Zooming and rotating the image shows the text artifacts that interfere with the view of the geometry:
Note that the square root symbol is barely visible; this is true no matter what font family, and whether or not bold() is applied.

A known limitation of the way rgl draws transparent (i.e. alpha < 1) objects is that they don't always interact well. The problem is that transparent objects need to be drawn in order from furthest to closest in the current view, but if you have two transparent polygons that intersect, some parts need to be drawn in one order, and some parts need to be drawn in the opposite order. Since rgl doesn't split them into separate pieces, some part will be drawn incorrectly.
This affects text because text is drawn as a quad with the background drawn with alpha = 0 and the text drawn with alpha = 1. If the quad holding the text intersects a transparent polygon, some part of one of them will be drawn poorly.
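Here is a tiny stand-alone illustration of the ordering problem (not the question's scene, just two transparent quads that cross): rotate the result and parts of the intersection will render in the wrong order from some viewpoints.
library(rgl)
open3d()
# two transparent quads that intersect along the x axis
quads3d(matrix(c(-1,-1,0,  1,-1,0,  1,1,0,  -1,1,0), ncol = 3, byrow = TRUE),
        col = "red", alpha = 0.5)
quads3d(matrix(c(-1,0,-1,  1,0,-1,  1,0,1,  -1,0,1), ncol = 3, byrow = TRUE),
        col = "blue", alpha = 0.5)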
You can reduce the pixellation of your text by increasing the initCex argument; see ?plotmath3d for a discussion. Unfortunately, this makes the square root symbol look even worse: I think it is drawn at a constant width regardless of size (by the base graphics functions, not by rgl). You can see this in base graphics using
plot(1,1, type="n")
text(1,1,expression(bold(sqrt(1/C[3](x)))), cex = 5)
Using a smaller initCex will give a better proportioned square root, but it will be blurry or pixellated (depending on the size). (NB: see the addition below.)
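For instance, with the question's scene still open, something like the following shows the trade-off (the position offsets and initCex values are arbitrary choices for illustration):
# larger initCex: sharper glyphs but a very thin square root bar;
# smaller initCex: better-proportioned square root but blurrier text
text3d(Cx[1], Cx[2], Cx[3] + 0.05, text = expression(sqrt(1/C[3](x))),
       cex = 1, initCex = 5, usePlotmath = TRUE, col = XVertexColor)
text3d(Cx[1], Cx[2], Cx[3] - 0.05, text = expression(sqrt(1/C[3](x))),
       cex = 1, initCex = 0.5, usePlotmath = TRUE, col = XVertexColor)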
EDITED TO ADD:
Regarding the bounding box changes: that definitely looks like a bug, but again it appears to be a limitation of the design. As mentioned, text is drawn on a transparent quad. This quad is drawn by sprites3d, which means it doesn't rotate with the scene: it always faces the viewer. If you have ignoreExtent = FALSE, then rgl attempts to make sure that the quad fits within the scene regardless of orientation, i.e. it takes up the same space as a sphere around the quad.
Your scene is much bigger in the X direction than in Y or Z, so a sphere really distorts things.
The solution here is to use ignoreExtent = TRUE so that the bounding box ignores that sphere. Remember to restore it afterwards.
One other improvement is possible. Since you don't want resizeable text, you can improve the way it is drawn by setting cex and initCex to the same value, but drawing with different material properties. Before adding the text, set both texminfilter and texmagfilter to "nearest", and things will look a little pixellated, but better than what you were seeing.
Putting both changes together:
That is, change your final two lines of code to this:
saveIgnore <- par3d(ignoreExtent = TRUE)
saveFilter <- material3d(texminfilter = "nearest", texmagfilter = "nearest")
labelCx <- text3d(x=Cx[1], y=Cx[2], z=Cx[3], adj=c(0,0),
                  family="Calibri", cex = 1, initCex = 1, font=2,
                  text=expression(bold(sqrt(1/C[3](x)))),
                  usePlotmath=TRUE, col=XVertexColor)
material3d(saveFilter)
par3d(saveIgnore)
2nd EDIT:
There are a few workarounds for your first problem. The simplest is moving the text away from anything that's transparent, or making the transparent things opaque. But if you really want to have text near transparent objects, setting the material property depth_mask = FALSE will mean the text's quad will never obscure anything behind it. This is probably a good default. Setting depth_test = "always" will mean nothing can obscure the text. This can lead to fairly weird looking displays so I wouldn't recommend it in general, but with your alpha = 0.25 surfaces it doesn't look too bad.
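For example, reusing Cx and XVertexColor from the question, a minimal sketch of the depth_mask workaround, following the same save/restore pattern as above:
saveIgnore <- par3d(ignoreExtent = TRUE)
saveMask <- material3d(depth_mask = FALSE)  # the text quad no longer hides objects behind it
text3d(x = Cx[1], y = Cx[2], z = Cx[3], adj = c(0, 0),
       family = "Calibri", cex = 1, initCex = 1, font = 2,
       text = expression(bold(sqrt(1/C[3](x)))),
       usePlotmath = TRUE, col = XVertexColor)
material3d(saveMask)
par3d(saveIgnore)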

Related

Finding xy coordinates of shelves in a store floorplan in r

I'm working on the following: I have a store layout (example below; I can't add the real thing for GDPR reasons, but the example should do the trick) on which I have xy coordinates of visitors (anonymous, of course).
I have already placed a grid on the picture so I can see which route they take through the store. That works fine. The origin is bottom left, and x & y are scaled from 0-100.
So far so good. The next step is identifying the coordinates of the shelves, the rectangles in the picture. Is there a way to do this without doing it manually? The real store layout contains more than 900 shelves, or am I pushing the boat out too far?
The output I'm looking for is a dataframe that contains a shelf ID and the coordinates of its corners. The idea is to create some heatmaps of the store to see whether there are blind spots, hotspots, ...
The second analysis also needs the integer points. The idea is to create vectors from the visitor points so we get the direction in which they are looking. Using the scope of what a human being can see, I would compute percentages of products "seen" based on intersection with the integer points.
thx!
JL
One approach is to perform clustering on the black pixels of the image. The clusters are then the shelves. If the shelves are axis-parallel, you can find the rectangles by just taking the min/max in each direction. This works quite well:
Sample code (I converted the image to PNG as it is easier to read than gif):
library(png)
library(dbscan)
library(tidyverse)
library(RColorBrewer)
img <- readPNG("G18JU.png")
is_black <-
  img %>%
  apply(c(1, 2), sum) %>%   # sum all color channels
  {. < 2.5} %>%             # we assume black if the sum is lower than 2.5 (max value is 3)
  which(arr.ind=TRUE)       # the indices of the black pixels
clust <- dbscan(is_black, 2) # identify clusters
rects <-
  as.tibble(is_black) %>%
  mutate(cluster = clust$cluster) %>% # add cluster information
  group_by(cluster) %>%
  ## find corner points of rectangles normalized to [0, 1]
  summarise(xleft = max(col) / dim(img)[2],
            ybottom = 1 - min(row) / dim(img)[1],
            xright = min(col) / dim(img)[2],
            ytop = 1 - max(row) / dim(img)[1])
## plot the image and the rectangles
plot(c(0, 1), c(0, 1), type="n")
rasterImage(img, 0, 0, 1, 1)
for (i in seq_len(nrow(rects))) {
  rect(rects$xleft[i], rects$ybottom[i], rects$xright[i], rects$ytop[i],
       border = brewer.pal(nrow(rects), "Paired")[i], lwd = 2)
}
Of course this approach also detects other black lines as "rectangles" (e.g. the black border). But I guess you can easily create a "clean" image.
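One possible cleanup, as a rough illustration (the 0.9 cut-off is an arbitrary choice of mine): drop clusters whose bounding box spans almost the whole image, which catches the outer border.
rects_clean <- rects %>%
  filter(xleft - xright < 0.9,   # width: xleft >= xright in the code above
         ybottom - ytop < 0.9)   # height: ybottom >= ytop in the code above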
Edit: extend method to find shelves that share a black line
To extend the method such that it can separate shelves that share a black line:
First, identify the rectangles in the way outlined above.
Then, extract each rectangle from the image and compute the row means. This gives you a 1d image (= line) for each rectangle. In this line apply thresholding and clustering as before. The clusters are now the black line segments, and the mean of each cluster corresponds to a vertical line shared by two shelves.
To find horizontal shared lines, the same procedure can be applied, but with column means instead of row means.
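A rough, untested sketch of the vertical-line step, reusing img and dbscan() from above; the helper name, the thresh and eps defaults, and minPts = 1 are illustrative assumptions. With readPNG's row = y, column = x layout, the per-x profile comes from averaging each column of the cropped rectangle.
find_shared_vlines <- function(img, row_range, col_range, thresh = 1.5, eps = 2) {
  sub <- img[row_range, col_range, , drop = FALSE]  # crop one rectangle
  gray <- apply(sub, c(1, 2), sum)                  # sum the color channels
  profile <- colMeans(gray)                         # one value per x position in the crop
  dark <- which(profile < thresh)                   # x positions that are mostly black
  if (length(dark) == 0) return(integer(0))
  cl <- dbscan(matrix(dark, ncol = 1), eps, minPts = 1)
  # the mean x of each cluster is a candidate line shared by two shelves,
  # shifted back to coordinates of the full image
  round(tapply(dark, cl$cluster, mean)) + min(col_range) - 1
}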

Control persp mesh tile border colors

I'm having some trouble creating a perspective plot that looks exactly how I want it to. In particular, I am trying to get the mesh not to be visible at all. If you look at the image on the left, you can see faint lines running between the tiles. I want it to look like the right image, with no lines visible:
I specifically want a solution with graphics::persp or other base R function. I am not interested in 3rd party packages like rgl.
I obtained the right-hand image by using polygon and specifying a border color to match the col color. If I leave border=NA with polygon, I get the same result as with persp. However, it seems persp just takes the first border value and re-uses it, unlike polygon, which matches border colors to the polygons.
This is the code used to generate the image:
nr <- nc <- 10
mx <- matrix(numeric(nr * nc), nr)
par(mai=numeric(4))
col <- gray((row(mx[-1,-1]) * col(mx[-1,-1])/((nr-1)*(nc-1))))
par(mfrow=c(1,3), mai=c(0, 0, .25, 0), pty='s')
persp(
  mx, phi=90, theta=0, border=NA, col=col, r=1e9, zlim=c(0,1),
  axes=FALSE, box=FALSE
)
title('Persp border=NA')
persp(
  mx, phi=90, theta=0, border=col, col=col, r=1e9, zlim=c(0,1),
  axes=FALSE, box=FALSE
)
title('Persp border=col')
plot.new()
mxpoly.x <- rbind(
  c(row(mx)[-nr, -nc]), c(row(mx)[-1, -nc]), c(row(mx)[-1, -1]),
  c(row(mx)[-nr, -1]), NA
)
mxpoly.y <- rbind(
  c(col(mx)[-nr, -nc]), c(col(mx)[-1, -nc]), c(col(mx)[-1, -1]),
  c(col(mx)[-nr, -1]), NA
)
title('Polygon')
polygon(
  ((mxpoly.x - 1) / (max(mxpoly.x, na.rm=TRUE) - 1)),
  ((mxpoly.y - 1) / (max(mxpoly.y, na.rm=TRUE) - 1)),
  col=col, border=col
)
That looks like a result of antialiasing. When each cell is drawn, the background is white, so antialiasing means the border pixels are drawn in a lighter colour.
On a Mac, you can fix this by turning antialiasing off. Your first example gives
by default, but if I open the graphics device using
quartz(antialias = FALSE)
and then run the identical code, I get
Turning off antialiasing can cause jagged edges, so this might not really be an acceptable solution to your real problem if it has diagonal lines.
You might be able to get things to work by drawing the surface twice with antialiasing: the first time will show borders, the second time might still show something, but should show less. However, persp() has no add = TRUE argument, so drawing things the second time is likely to be tricky.
If you're not on a Mac, you'll need to read about the device you're using to find if it allows control of antialiasing.
Edited to add: I tried modifying the C source to the persp function to draw the surface 2 or 3 times. The boundaries were still slightly visible when it was drawn twice, but invisible with 3 draws.
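For reference, a hedged R-level sketch of the same redraw idea without touching the C source: par(new = TRUE) keeps the current plot, so calling persp() again with identical arguments (reusing mx and col from the code above) overpaints the same pixels and darkens the antialiased borders.
par(mai = c(0, 0, .25, 0), pty = 's')
for (i in 1:3) {                 # three passes, as in the note above
  persp(mx, phi = 90, theta = 0, border = NA, col = col, r = 1e9,
        zlim = c(0, 1), axes = FALSE, box = FALSE)
  par(new = TRUE)                # don't clear the device before the next draw
}
par(new = FALSE)                 # restore normal behaviour
title('Persp drawn three times')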

Sobel edge detector in R?

I am working on an R assignment about Sobel edge detection. Unfortunately the video tutorial I was following uses R for every other task, but switches to Python for image processing - I am guessing he did not find any useful R package for image-convolution-type work (the tutorial is from last year). I have tried EBImage and magick (this one seems new), but did not find much. This magick vignette talks about image_convolve('Sobel') (about halfway down the page), but only for vertical edges, not horizontal. Can someone suggest some good material that I can use? I am fairly new to image processing.
Update:
I have managed to get as far as detecting vertical and horizontal edges separately using the magick package (code pasted below), but do not know how to combine them to generate a single image:
library(magick)
# get image
img <- image_read("https://www.r-project.org/logo/Rlogo.png")
print(image_info(img))
# define horizontal and vertical Sobel kernel
Shoriz <- matrix(c(1, 2, 1, 0, 0, 0, -1, -2, -1), nrow = 3)
Svert <- t(Shoriz)
# get horizontal and vertical edges
imgH <- image_convolve(img, Shoriz)
imgV <- image_convolve(img, Svert)
print(plot(as.raster(img))) # view original image
print(plot(as.raster(imgH))) # view horizontal edges
print(plot(as.raster(imgV))) # view vertical edges
From the tutorial, next I need to combine imgH and imgV by computing the Euclidean distance between these edges, but dist() won't work with the image objects themselves. I need to get the data out of these images, but do not know how. Something similar to imageData() in the EBImage package would help, but I cannot find it in magick. It has an image_data() function, but its output looks complicated.
Update(2):
I have (hopefully) got what I wanted with the EBImage package (code below). I'd still like to get it working with the magick package, once I figure out how to get pixel data from the edge images as described above, and also how to transform the final edge data back into an image.
library(EBImage)
# get image
img <- readImage("https://www.r-project.org/logo/Rlogo.png")
print(img, short = T)
# define horizontal and vertical Sobel kernel
Shoriz <- matrix(c(1, 2, 1, 0, 0, 0, -1, -2, -1), nrow = 3)
Svert <- t(Shoriz)
# get horizontal and vertical edges
imgH <- filter2(img, Shoriz)
imgV <- filter2(img, Svert)
# combine edge pixel data to get overall edge data
hdata <- imageData(imgH)
vdata <- imageData(imgV)
edata <- sqrt(hdata^2 + vdata^2)
# transform edge data to image
imgE <- Image(edata, colormode = 2)
print(display(combine(img, imgH, imgV, imgE), method = "raster", all = T))
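For the magick route mentioned above, a rough sketch of one way to get numeric pixel data (assuming image_data() returns a raw channels x width x height bitmap; the "gray" channel choice and the array() reshaping are assumptions, not verified):
h_raw <- image_data(imgH, channels = "gray")   # raw array: 1 x width x height
v_raw <- image_data(imgV, channels = "gray")
hmat <- array(as.integer(h_raw), dim = dim(h_raw))[1, , ]
vmat <- array(as.integer(v_raw), dim = dim(v_raw))[1, , ]
emat <- sqrt(hmat^2 + vmat^2)   # combine the edges (Euclidean distance)
emat <- emat / max(emat)        # rescale to [0, 1]
plot(as.raster(t(emat)))        # transpose because magick stores width x height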
Thanks.

How do I make planes in RGL thicker?

I am going to try 3D printing some data to make a nice visual illustration for a binary classification example.
Here is my 3D plot:
require(rgl)
#Get example data from mtcars and normalize to range 0:1
fun_norm <- function(k){(k-min(k))/(max(k)-min(k))}
x_norm <- fun_norm(mtcars$drat)
y_norm <- fun_norm(mtcars$mpg)
z_norm <- fun_norm(mtcars$qsec)
#Plot nice big spheres with rgl that I hope will look good after 3D printing
plot3d(x_norm, y_norm, z_norm, type="s", radius = 0.02, aspect = T)
#The sticks are meant to suspend the spheres in the air
plot3d(x_norm, y_norm, z_norm, type="h", lwd = 5, aspect = T, add = T)
#Nice thick gridline that will also be printed
grid3d(c("x","y","z"), lwd = 5)
Next, I wanted to add a z=0 plane, inspired by this blog post describing the r2stl package written by Ian Walker. It is supposed to be the foundation of the printed structure that holds everything together.
planes3d(a=0, b=0, c=1, d=0)
However, it has no volume; it is a thin slab with height 0. I want it to form a solid base for the printed structure, which is meant to keep everything together (check out the aforementioned blog for more details; his examples are great). How do I increase the thickness of my z=0 plane to achieve the same effect?
Here is the final step to exporting as STL:
writeSTL("test.stl")
One can view the final product really nicely using the open-source MeshLab, as recommended by Ian in the blog.
Additional remark: I noticed that the thin plane is also separate from the grids that I added on the -z face of the cube and is floating. This might also cause a problem when printing. How can I merge the grids with the z=0 plane? (I will be sending the STL file to a friend who will print it for me, and I want to make things as easy for him as possible.)
You can't make a plane thicker. You can make a solid shape (extrude3d() is the function to use). It won't adapt itself to the bounding box the way a plane does, so you would need to draw it last.
For example,
example(plot3d)
bbox <- par3d("bbox")
slab <- translate3d(extrude3d(bbox[c(1,2,2,1)], bbox[c(3,3,4,4)], 0.5),
                    0, 0, bbox[5])
shade3d(slab, col = "gray")
produces this output:
This still isn't printable (the points have no support), but it should get you started.
In the matlib package, there's a function regvec3d() that draws a vector space representation of a 2-predictor multiple regression model. The plot method for the result has an argument show.base that draws the base x1-x2 plane if show.base > 0, and draws it thicker if show.base > 1.
It is a simple hack that just draws a second version of the plane at a small offset. Maybe this will be enough for your application.
if (show.base > 0) planes3d(0, 0, 1, 0, color=col.plane, alpha=0.2)
if (show.base > 1) planes3d(0, 0, 1, -.01, color=col.plane, alpha=0.1)

Scanning and storing a simple image in a complex matrix

I have been playing with linear algebra transformations in R, moving around a bunch of points plotted in the complex plane. I have posted the results here; the code is linked in the first sentence.
I would like to do the same operations on a real image. Evidently I don't want to get into Fourier transforming the image, or dealing with color or grayscale. I would like to get any old jpeg, turn it into a summarized plot of black and white dots, locate each dot in terms of its position in the complex plane, and then apply the linear algebra operations as I did to my drawing of a house.
The questions are: 1. What is the name for the type of stripped-down, basic black-and-white image that I am describing? 2. How can I turn a regular jpeg (or other file) into that type of image, and how can I then store every one of the thousands of dots the image will contain in a matrix of complex numbers?
Is there software to do this? Is there code in R or Python to do it?
It's not clear what you're trying to do with those complex vectors that wouldn't be more easily done using standard x,y coordinates, but here is a possible starting point:
library(jpeg)
im <- readJPEG(system.file("img", "Rlogo.jpg", package="jpeg"))
gr <- apply(im, 1:2, mean)
bw <- which(gr < 0.5, arr.ind = TRUE)
conjure_matrix_of_darkness <- function(bw, xlim=c(-2, 2), ylim=c(-2, 2)){
  x <- (bw[,1] - min(bw[,1]))/diff(range(bw[,1])) * diff(xlim) + min(xlim)
  y <- (bw[,2] - min(bw[,2]))/diff(range(bw[,2])) * diff(ylim) + min(ylim)
  x + 1i*y
}
test <- conjure_matrix_of_darkness(bw)
par(mfrow=c(2,1), mar=c(0,0,0,0))
plot(test, pch=19, xaxt="n", yaxt="n")
plot(test*exp(1i*pi), pch=19, xaxt="n", yaxt="n")
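As a further usage note (my own example), multiplying by a complex number rotates and scales the whole point cloud, and adding a complex constant translates it:
# rotate by 45 degrees, shrink to half size, then shift right and up
plot(test * 0.5 * exp(1i * pi / 4) + (1 + 0.5i),
     pch = 19, xaxt = "n", yaxt = "n")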
