I'd like to display our own images (*.jpg) in AR... They are 2D (because I don't want to convert them into 3D models - that makes no sense to me).
I do know how to display 3D models on our own web page (see e.g. https://developers.google.com/ar/develop/scene-viewer), but I need to display a 2D picture in people's camera view... Any hint, please?
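If a trivially flat "model" is acceptable (Scene Viewer loads glTF models), one workaround is to bake the JPEG onto a two-triangle plane and serve that. Below is a minimal Python sketch using trimesh; the file names are hypothetical:

```python
# Minimal sketch (hypothetical file names): bake a JPEG onto a flat,
# two-triangle quad and export it as a .glb that Scene Viewer can load.
# The "model" is just a textured plane, so the photo still looks 2D in AR.
import numpy as np
import trimesh
from PIL import Image

img = Image.open("photo.jpg")          # the 2D picture to show in AR
aspect = img.height / img.width

# A one-unit-wide quad in the XY plane, scaled to the photo's aspect ratio.
vertices = np.array([[0, 0, 0], [1, 0, 0], [1, aspect, 0], [0, aspect, 0]], dtype=float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
uv = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)

quad = trimesh.Trimesh(vertices=vertices, faces=faces,
                       visual=trimesh.visual.TextureVisuals(uv=uv, image=img))
quad.export("photo.glb")               # hand this URL to Scene Viewer
```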
I have developed a framework in R to automatically measure vegetation structure variables from whiteboard photos taken on grasslands for ecology-related studies. Until now we have preprocessed the images by hand, but now we need to automate the rotation and cropping of the images.
The idea is the following: use reference marks on the whiteboard, and detect these markings to rotate and crop the original photographs. I need help detecting the reference markings. Once we know the positions of the reference markings (their centroids), we can calculate the coordinates/pixels at which to crop the image. In the end, we want to get a picture like that.
We can use some special colour for the markings, but these can be obstructed by the vegetation. The bottom of the whiteboard is always obstructed; the cropped part (without the reference markings) should be 25×100 cm.
Possibly edge detection could be a solution. I'm only familiar with R programming.
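For what it's worth, here is a minimal sketch of the marker-detection step in Python with OpenCV (the same thresholding and connected-components operations exist in R, e.g. via the imager or magick packages). The file name, the choice of red markings, and the HSV thresholds are assumptions to adapt to the actual photos:

```python
# Minimal sketch: detect saturated-colour reference markings (assumed red),
# take their centroids, and rotate the photo so the markings are level.
import cv2
import numpy as np

img = cv2.imread("whiteboard.jpg")                     # hypothetical file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Threshold the marking colour; red wraps around hue 0, so use two ranges.
mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))

# Connected components give one centroid per visible marking.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
marks = [tuple(c) for c, s in zip(centroids[1:], stats[1:])
         if s[cv2.CC_STAT_AREA] > 50]                  # drop tiny noise blobs

# Assumes at least two markings survived the vegetation; the angle between
# them gives the rotation needed to level the whiteboard.
(x1, y1), (x2, y2) = marks[:2]
angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
M = cv2.getRotationMatrix2D((x1, y1), angle, 1.0)
rotated = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))
```

From the rotated image, the 25×100 cm crop rectangle then follows from the marking centroids and the known spacing of the marks on the board.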
I'm looking to auto-generate a UML class model in virtual reality using A-Frame.io (or another technology) by passing in values. Has anyone ever done something similar in the past? Not sure where to start.
Thanks
You might want to look into plantuml, which is a nice UML generator. Most of its diagrams are generated as input to graphviz's dot. Dot is a layout engine - it takes a list of nodes and connections, puts them into 2D space, and then renders them to one of its output formats - or just returns the graph, but this time with coordinates on where to draw what. You could meddle with this data and render the elements with volume, but on a 2D plane with dot-generated coordinates. Perhaps you could even modify it to place them in 3D space instead of on a plane.
Or you could just render the plantuml output on a 2D plane, place it in 3D space, and it would probably be good enough. There are even online generators for plantuml.
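A minimal sketch of the first idea in Python: ask dot for its plain-text layout, then lift each node's 2D coordinates into 3D and emit A-Frame entities. The input file name and the z-assignment rule are assumptions; dot's `-Tplain` format lists one `node name x y width height ...` line per node:

```python
# Minimal sketch: run dot's plain-text layout and lift each node into 3D.
import subprocess

out = subprocess.run(["dot", "-Tplain", "classes.dot"],   # hypothetical input
                     capture_output=True, text=True, check=True).stdout

positions = {}
for line in out.splitlines():
    parts = line.split()
    if parts and parts[0] == "node":
        name, x, y = parts[1], float(parts[2]), float(parts[3])
        # dot lays everything out in 2D; invent a z coordinate here,
        # e.g. one layer per inheritance depth if you encode it somewhere.
        positions[name] = (x, 0.0, -y)

# These (x, y, z) triples could then become A-Frame entities.
for name, (x, y, z) in positions.items():
    print(f'<a-box position="{x} {y} {z}" depth="0.1" width="1" height="0.5"></a-box>  <!-- {name} -->')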
I'm currently rendering a 3D model (Wavefront .obj format) in my Qt program. Right now, I'm rendering the model using Scene3D in QML, and I'm able to get it to display in the viewing area. What I would like to do is have the user click on the model and generate a 2D cross-section of that slice, which I would then plot in a different window. I'm quite new to 3D rendering, and a lot of the Qt documentation isn't very descriptive. I've been reading Qt documentation, experimenting, and searching online with no luck. How can I create 2D slices of a 3D model in Qt 3D, preferably in QML? What Qt libraries or classes can I use to achieve this?
Unfortunately, the fact that models are stored as a set of surfaces makes this hard. Qt probably doesn't have a built-in method for this.
Consider, for example, that a model made of faces might be missing a face. What now? Can you interpolate across that gap consistently from different angles? And the cutting plane probably won't pass through any of the mesh's vertices.
But, of course, it can be solved. First, just don't allow un-closed surfaces (meshes with holes). Second, to find the vertices of your cross-section, intersect every edge in your model with the plane you're using; wherever there's an intersection, there's a point. Third, to find the edges, look at the list of intersection points: any two that come from edges of the same polygon in the mesh should be connected by an edge in the cross-section. To find which direction the edge should go, project the normal of that polygon onto the plane you're using. For filling, I don't really know what to do. I guess that's whatever you want it to be.
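A minimal sketch of the edge/plane intersection step in Python, for a triangle mesh given as plain vertex and face arrays (the names and the tetrahedron example are assumptions; degenerate cases such as a vertex lying exactly on the plane are ignored):

```python
import numpy as np

def cross_section(vertices, faces, plane_point, plane_normal):
    """Return the line segments where the plane cuts a triangle mesh."""
    verts = np.asarray(vertices, dtype=float)
    normal = np.asarray(plane_normal, dtype=float)
    dist = (verts - plane_point) @ normal        # signed distance per vertex
    segments = []
    for tri in faces:
        points = []
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            da, db = dist[a], dist[b]
            if da * db < 0:                      # this edge straddles the plane
                t = da / (da - db)               # linear interpolation parameter
                points.append(verts[a] + t * (verts[b] - verts[a]))
        if len(points) == 2:                     # each cut triangle contributes one segment
            segments.append((points[0], points[1]))
    return segments

# Example: slice a unit tetrahedron halfway up the z axis.
V = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
F = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(cross_section(V, F, plane_point=(0, 0, 0.5), plane_normal=(0, 0, 1)))
```

The resulting segments can then be chained into a closed polygon and drawn in the second window with any 2D drawing class, e.g. QPainter.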
The Wikipedia page for L-systems describes many of them, including a couple of rules that converge toward the Sierpinski triangle. That particular fractal also has a 3D version, which basically uses pyramids instead of triangles. Is there a way to reach this one with an L-system? That same Wikipedia page mentions the existence of 3D L-systems, but doesn't explain how they work or give any example of what their rules would look like.
So first, how do 3D L-systems differ from their 2D counterpart (if there are major differences), and second, can they be used to create these Sierpinski pyramids?
I'm trying to create it in Processing, as I managed to draw the 2D version in this software using an L-system before. An example of making a 3D L-system work would be appreciated, but is not necessary.
A 2D L-system is a set of instructions for creating recursive 2D trees, with branches that have a number of sub-branches, an angle, and a length. A 3D version extends the branches to have roll, pitch, and yaw. It's easiest to create one with turtle graphics. (If you just use an orthographic projection, you can see the tree, which is of course flattened back to 2D, but it looks more complex and less symmetrical than a 2D tree.)
Otherwise the system is the same.
I don't know the instruction sequence specifically for creating a Sierpinski pyramid. Presumably you start at the apex pointing down, then do a pitch of 45°, and four rolls with four A's between them.
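To make the turtle mechanics concrete, here is a minimal sketch in Python (the same logic ports directly to Processing): the turtle's heading is a 3x3 frame, and the common L-system symbols +/- (yaw), &/^ (pitch), and \ (roll) rotate it about its own axes. The example rule is an arbitrary branching toy, not the Sierpinski pyramid sequence:

```python
# Minimal 3D turtle for L-systems: the heading is a 3x3 matrix whose rows
# are the turtle's forward, left, and up vectors in world coordinates.
import numpy as np

def rot(axis, deg):
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return {"yaw":   np.array([[ c, s, 0], [-s, c, 0], [ 0, 0, 1]]),
            "pitch": np.array([[ c, 0,-s], [ 0, 1, 0], [ s, 0, c]]),
            "roll":  np.array([[ 1, 0, 0], [ 0, c, s], [ 0,-s, c]])}[axis]

def expand(axiom, rules, depth):
    """Rewrite the axiom string `depth` times using the production rules."""
    for _ in range(depth):
        axiom = "".join(rules.get(ch, ch) for ch in axiom)
    return axiom

def walk(commands, step=1.0, angle=45.0):
    """Interpret the string with a 3D turtle; returns drawn line segments."""
    pos, heading = np.zeros(3), np.eye(3)      # rows: forward, left, up
    stack, lines = [], []
    for ch in commands:
        if ch == "F":                          # move forward, drawing a segment
            new = pos + step * heading[0]
            lines.append((pos, new)); pos = new
        elif ch == "+":  heading = rot("yaw",    angle) @ heading
        elif ch == "-":  heading = rot("yaw",   -angle) @ heading
        elif ch == "&":  heading = rot("pitch",  angle) @ heading
        elif ch == "^":  heading = rot("pitch", -angle) @ heading
        elif ch == "\\": heading = rot("roll",   angle) @ heading
        elif ch == "[":  stack.append((pos.copy(), heading.copy()))
        elif ch == "]":  pos, heading = stack.pop()
    return lines

# Toy rule for illustration only, NOT the Sierpinski pyramid sequence.
segments = walk(expand("F", {"F": "F[+F][&F][\\F]"}, depth=3))
```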
Do you know what would be the best approach to generate 3D output for one of these new "3D ready" televisions from software? Our application has some nice 3D visualizations, and we want these to look good.
Also, how feasible is it to generate it from a Flash (Flex) app?
I believe that the gaming and 3DTV industries have paved the way for you. As long as your app already outputs 3D visualizations, it may just be a matter of installing a driver. You can get started with this NVIDIA 3D Stereo User’s Guide, but I believe there's tons of other stuff out there if you look.
See also the answers to this question.
3D televisions can display 3D output only for content that was produced in 3D. This means "intended for stereoscopic 3D," not just a two-dimensional projection of a 3D scene.
Stereoscopy is produced by generating two completely separate images per frame (one for each eye) in which the foreground objects are offset to simulate depth. You cannot take a 2D image and make it into a 3D image; the source frames must be produced as stereo pairs from the beginning.
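For software that already renders 3D scenes, "producing stereo pairs" just means rendering every frame twice from two horizontally offset cameras. A minimal sketch in Python/NumPy of the two view matrices (the look_at construction and the 6.5 cm eye separation are illustrative assumptions):

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Standard right-handed view matrix looking from eye toward target."""
    f = target - eye; f /= np.linalg.norm(f)            # forward
    r = np.cross(f, up); r /= np.linalg.norm(r)         # right
    u = np.cross(r, f)                                  # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f
    view[:3, 3] = -view[:3, :3] @ eye
    return view

def stereo_views(eye, target, separation=0.065):
    """Left/right eye view matrices, shifted along the camera's right axis."""
    f = target - eye; f /= np.linalg.norm(f)
    r = np.cross(f, (0.0, 1.0, 0.0)); r /= np.linalg.norm(r)
    offset = 0.5 * separation * r
    return look_at(eye - offset, target), look_at(eye + offset, target)

left, right = stereo_views(np.array([0.0, 1.7, 5.0]), np.array([0.0, 1.0, 0.0]))
# Render the scene once per matrix and pack the two frames side by side
# (or top/bottom); the TV's 3D mode then routes one image to each eye.
```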
More information:
http://en.wikipedia.org/wiki/3D_television
http://en.wikipedia.org/wiki/Stereoscopy