Export R object for 3D printing

If I have a data set in R, what would be a good way to export it so I could get it to a service like Shapeways for 3D printing?
I don't have any "real" CAD software, but I've used Google Sketchup before.
In my case the object can be described by two surface plots, something like this:
x <- y <- seq(0,1,by=0.01)
persp(x, y, outer(x, y, function(x,y) (x+y)^2))
persp(x, y, outer(x, y, function(x,y) rep(0,length(x))), zlim=c(-1,1))
...which I would like to appear together as one object to be printed. Any ideas?

Shapeways says it can take output from MeshLab: http://sourceforge.net/projects/meshlab/files/meshlab
MeshLab, an open-source, free-as-in-beer project, can import a point cloud exported from R via its .asc format option:
dat <- data.frame(x = x,  # will be recycled 101 times
                  y = rep(y, each = 101),
                  z = as.vector(outer(x, y, function(x, y) (x + y)^2)))
write.table(dat, file = "out.asc", row.names = FALSE, col.names = FALSE)
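Since the question wants both surfaces printed together as one object, the two point sets can simply be stacked into a single .asc file. A sketch along the same lines as the export above (the merging into a watertight mesh would still happen in MeshLab):

```r
x <- y <- seq(0, 1, by = 0.01)
grid <- expand.grid(x = x, y = y)   # x varies fastest, matching outer()

## Upper surface and flat base, stacked into one point set
top    <- cbind(grid, z = as.vector(outer(x, y, function(x, y) (x + y)^2)))
bottom <- cbind(grid, z = 0)
dat <- rbind(top, bottom)

write.table(dat, file = "out.asc", row.names = FALSE, col.names = FALSE)
```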
I probably should have done an sos search first:
library(sos)
findFn("3d printing")
.... did bring up the r2stl package, whose sole function has the same name. It also found convex hull functions that might be useful to anyone who wants to build other 3D shapes from data.

DWin has already made one suggestion for the mesh. If you need to export the resulting object from MeshLab and manipulate it in an extraordinarily intuitive 3D application that doesn't cost the earth, then you should try MoI 3D.
I mention this because MoI has a very competent mesh engine and many of MoI's users seem to be involved in 3D printing (see for example this thread).
The developer Michael Gibson often responds to forum questions in, literally, minutes and other users in the forum are very supportive. There is a full 30-day trial version that allows you to experiment at no cost. MoI can also be scripted using JavaScript.
By its nature 3D printing is irrevocably real so it pays to be sure before you commit!

R copies for no apparent reason

Some R functions will make R copy the object AFTER the function call, like nrow, while others don't, like sum.
For example, consider the following code:
x <- as.double(1:1e8)
system.time(x[1] <- 100)
y <- sum(x)
system.time(x[1] <- 200)  ## Fast (takes 0s), after calling sum
foo <- function(x) {
  return(sum(x))
}
y <- foo(x)
system.time(x[1] <- 300)  ## Slow (takes 0.35s), after calling foo
Calling foo itself is NOT slow, because x isn't copied. However, changing x afterwards is very slow, because x is copied then. My guess is that calling foo leaves a reference to x, so that when x is changed afterwards, R makes another copy.
Does anyone know why R does this, even when the function doesn't change x at all? Thanks.
I definitely recommend Hadley's Advanced R book, as it digs into some of the internals that you will likely find interesting and relevant. Most relevant to your question (and as mentioned by @joran and @lmo), the reason for the slow-down was an additional reference that forced copy-on-modify.
An excerpt from the Memory chapter (section "Modification in place") that might be beneficial:
There are two possibilities:

1. R modifies x in place.
2. R makes a copy of x to a new location, modifies the copy, and then uses the name x to point to the new location.

It turns out that R can do either depending on the circumstances. In the example above, it will modify in place. But if another variable also points to x, then R will copy it to a new location. To explore what’s going on in greater detail, we use two tools from the pryr package. Given the name of a variable, address() will tell us the variable’s location in memory and refs() will tell us how many names point to that location.
Also of interest are the sections on R's C interface and Performance. The pryr package also has tools for working with these sorts of internals in an easier fashion.
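The same behaviour can also be seen without pryr, using base R's tracemem() (available in standard builds compiled with memory profiling), which prints a message each time the traced object is duplicated. A small sketch:

```r
x <- runif(10)
tracemem(x)    # report whenever R copies x
x[1] <- 0      # in recent R (>= 4.0) this typically modifies in place
y <- x         # a second name now points at the same vector
x[2] <- 0      # copy-on-modify: tracemem reports a duplication here
untracemem(x)
```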
One last note from Hadley's book (same Memory section) that might be helpful:
While determining that copies are being made is not hard, preventing such behaviour is. If you find yourself resorting to exotic tricks to avoid copies, it may be time to rewrite your function in C++, as described in Rcpp.

How to compute the volume of an unstructured mesh using TVTK and Python?

I'm trying to calculate the volume of an unstructured grid using mayavi and tvtk. My idea was to tetrahedralize the point cloud by means of the Delaunay3D filter. Then I need to somehow extract the tetrahedra from this dataset while ignoring other cell types such as lines and triangles.
But how can I accomplish this? My Python code so far looks as follows:
import numpy as np
from mayavi import mlab
x, y, z = np.random.random((3, 100))
data = x**2 + y**2 + z**2
src = mlab.pipeline.scalar_scatter(x, y, z, data)
field = mlab.pipeline.delaunay3d(src)
Can I use the field object to retrieve the polyhedra's vertices?
Thanks in advance.
frank.
Is this the best way to go about it? scipy.spatial has Delaunay functionality as well. Having recently worked with both myself, I would note that scipy is a much lighter dependency, easier to use, and better documented. Note that either method will work on the convex hull of the point cloud, which may not be what you want. The scipy version also easily allows you to compute the boundary primitives, amongst other things, which may be useful for further processing.
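To make the scipy suggestion concrete, here is a sketch of computing the total volume (random unit-cube points stand in for your data; the result is the volume of the convex hull of the point cloud):

```python
import numpy as np
from scipy.spatial import Delaunay

# Stand-in point cloud, as in the question
points = np.random.random((100, 3))
tri = Delaunay(points)

# tri.simplices lists each tetrahedron as 4 indices into `points`,
# so no filtering of line/triangle cells is needed
tets = points[tri.simplices]          # shape (n_tets, 4, 3)

# Volume of a tetrahedron (a, b, c, d) is |det([b-a, c-a, d-a])| / 6
a, b, c, d = tets[:, 0], tets[:, 1], tets[:, 2], tets[:, 3]
vols = np.abs(np.einsum('ij,ij->i', np.cross(b - a, c - a), d - a)) / 6.0
total_volume = vols.sum()
```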

Analog to utility classes in R?

I have a couple of functions that convert between coordinate systems, and they all rely on constants from the WGS84 ellipsoid, etc. I'd rather not have these constants pollute the global namespace. Similarly, not all of the functions need to be visible globally.
In Java, I'd encapsulate all the coordinate stuff in a utility class and only expose the coordinate transformation methods.
What's a low-overhead way to do this in R? Ideally, I could:
source("coordinateStuff.R")
at the top of my file and call the "public" functions as needed. It might make a nice package down the road, but that's not a concern right now.
Edit for initial approach:
I started coords.R with:
coords <- new.env()
with(coords, {
  ## Semi-major axis (center to equator)
  a <- 6378137.0
  ## And so on...
})
The with statement and indentation clearly indicate that something is different about the assignment variables. And it sure beats typing a zillion assign statements.
The first cut at functions looked like:
ecef2geodetic <- function(x, y, z) {
  attach(coords)
  on.exit(detach(coords))
  ## ...conversion code using the constants in coords...
}
The on.exit() ensures that we'll leave coords when the function exits. But the attach() statements caused trouble when one function in coords called another in coords. See this question to see how things went from there.
Utility classes in Java are a code smell. This is not what you want in R.
There are several ways of solving this in R. For medium / large scale things, the way to go is to put your stuff into a package and use it in the remaining code. That encapsulates your “private” variables nicely and exposes a well-defined interface.
For smaller things, an excellent way of doing this is to put your code into a local call which, as the name suggests, executes its argument in a local scope:
x <- 23
result <- local({
  foo <- 42
  bar <- x
  foo * bar
})
Finally, you can put your objects into a list or environment (there are differences but you may ignore them for now), and then just access them via listname$objname:
coordinateStuff <- list(
  foo = function() cat('42\n'),
  bar = 23
)
coordinateStuff$foo()
If you want something similar to your source statement, take a look at my xsource command which solves this to some extent (although it’s work in progress and has several issues!). This would allow you to write
cs <- xsource(coordinateStuff)
# Use cs as if it were an environment, e.g.
cs$public_function()
# or even:
cs::public_function()
A package is the solution... but for a fast solution you could use environments: http://stat.ethz.ch/R-manual/R-devel/library/base/html/environment.html
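Combining the local() and list() approaches above gives a lightweight encapsulation pattern: constants and helpers live in a local scope, and only the "public" functions are returned. A sketch (the helper and its name are illustrative, not from the original code; only the two WGS84 constants are real):

```r
coords <- local({
  a <- 6378137.0           # WGS84 semi-major axis (m)
  f <- 1 / 298.257223563   # WGS84 flattening

  e2 <- function() 2 * f - f^2   # private helper, not exposed

  ## Return only the public interface
  list(eccentricity2 = function() e2())
})

coords$eccentricity2()  # works, while a and f stay out of the global namespace
```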

Sliding FFT in R

Is there a function or package in R for calculating the Sliding FFT of a sample? By this I mean that given the output of fft(x[n:m]), calculate fft(x[1+(n:m)]) efficiently.
Ideally I'd find both an online version (where I don't have access to the full time series at the beginning, or it's too big to fit in memory, and I'm not going to try to save the whole running FFT in memory either) and a batch version (where I give it the whole sample x and tell it the running window width w, resulting in a complex matrix of dimension c(w,length(x)/w)).
An example of such an algorithm is presented here (but I've never tried implementing it in any language yet):
http://cnx.org/content/m12029/latest/
If no such thingy exists already in R, that doesn't look too hard to implement I guess.
As usually happens when I post something here, I kept working on it and came up with a solution:
fft.up <- function(x1, xn, prev) {
  b <- length(prev)
  vec <- exp(2i * pi * seq.int(0, b - 1) / b)
  (prev - x1 + xn) * vec
}
# Test it out
x <- runif(6)
all.equal(fft.up(x[1], x[6], fft(x[1:5])), fft(x[2:6]))
# [1] TRUE
Still interested to know if some library offers this, because then it might offer other handy things too. =) But for now my problem's solved.
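For the batch version asked about above, the update can be wrapped in a loop seeded with one full fft(). A sketch for a sliding (one-sample hop) window, with fft.up repeated so the snippet is self-contained:

```r
fft.up <- function(x1, xn, prev) {
  b <- length(prev)
  vec <- exp(2i * pi * seq.int(0, b - 1) / b)
  (prev - x1 + xn) * vec
}

## One column per window position: column i holds fft(x[i:(i + w - 1)])
running.fft <- function(x, w) {
  n <- length(x) - w + 1
  out <- matrix(0i, nrow = w, ncol = n)
  out[, 1] <- fft(x[1:w])
  for (i in seq_len(n - 1)) {
    out[, i + 1] <- fft.up(x[i], x[i + w], out[, i])
  }
  out
}

x <- runif(8)
all.equal(running.fft(x, 4)[, 3], fft(x[3:6]))
# [1] TRUE
```

For long series the rounding errors accumulate across updates, so in practice you may want to re-seed with a full fft() every so often.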

Plotting complex functions using the Symbolic Math Toolbox?

How should I plot this function:
z^(1/n) [complex roots of z]
with ezsurf(), ezmesh(), ...? The official documentation clearly states that ezsurf() and ezsurfc(), for example, do not accept complex inputs.
I understand the trick is probably in using both real() and imag() functions, but even so, I can't get rid of the problem.
The basic idea seems to work for me alright. Of course you can go on to tweak the axis limits, grid spacing, color look-up table, etc. The online documentation at http://www.mathworks.com/help/techdoc/ref/ezsurf.html has some nice examples that aren't found in the built-in help system. Good luck!
syms z n
subplot(2,1,1)
ezsurf(real(z^(1/n)))
subplot(2,1,2)
ezsurf(imag(z^(1/n)))
