Engine: A-Frame.
I have a 3D model with two surfaces: metal and glass.
How can I assign two materials with different settings to one model?
Update October 2017 - A-Frame 0.7.x:
A-Frame 0.7.x made glTF 2.0 the standard over glTF 1.0. All you have to do is use the gltf-model component directly.
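For example (a minimal sketch; the file name is hypothetical, and both of your materials, metal and glass, travel inside the glTF file itself):

<a-scene>
  <a-assets>
    <!-- hypothetical model file exported with both materials -->
    <a-asset-item id="ship" src="ship.gltf"></a-asset-item>
  </a-assets>
  <a-entity gltf-model="#ship"></a-entity>
</a-scene>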
Old answer - A-Frame 0.6.x:
You can do this in your modelling software and export your asset in glTF 2.0 format. To load it into A-Frame 0.6.x, use the aframe-extras package.
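Something along these lines (a sketch only; if I remember correctly the glTF 2.0 loader in aframe-extras was exposed as the gltf-model-next component at the time, and the script paths and file name are placeholders):

<script src="aframe.min.js"></script>
<script src="aframe-extras.min.js"></script> <!-- provides the glTF 2.0 loader -->
<a-scene>
  <a-entity gltf-model-next="url(ship.gltf)"></a-entity>
</a-scene>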
Related
I have an A-Frame scene with many copies of the same entity (a glTF model). Is there a way to reduce draw calls by using instancing from three.js?
There are a few A-Frame components that support some level of instancing:
https://github.com/diarmidmackenzie/instanced-mesh - most recent
https://github.com/EX3D/aframe-InstancedMesh - basic support
https://github.com/takahirox/aframe-instancing - experimental, out of date
https://github.com/gftruj/webzamples/blob/master/aframe/models/instanced.html - code sample from How to use Three.InstancedMesh in Aframe
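To illustrate the underlying technique these components build on (a sketch only: the component name and scatter logic are made up, and it assumes an A-Frame build whose bundled three.js includes THREE.InstancedMesh, r109 or later):

AFRAME.registerComponent('instanced-boxes', {
  schema: {count: {default: 100}},
  init: function () {
    var geometry = new THREE.BoxGeometry(0.2, 0.2, 0.2);
    var material = new THREE.MeshStandardMaterial({color: '#4CC3D9'});
    var mesh = new THREE.InstancedMesh(geometry, material, this.data.count);
    var dummy = new THREE.Object3D();
    for (var i = 0; i < this.data.count; i++) {
      // Scatter the copies; a real component would take positions as data.
      dummy.position.set(Math.random() * 10 - 5, Math.random() * 3, Math.random() * 10 - 5);
      dummy.updateMatrix();
      mesh.setMatrixAt(i, dummy.matrix);
    }
    this.el.setObject3D('mesh', mesh); // all copies render as a single draw call
  }
});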
Does anybody know of a simple sample that uses a custom allocator/presenter with the VMR-9?
Or how to get access to the default allocator/presenter?
I'm not interested in drawing on 3D surfaces. I just want full control over stretching/shrinking/moving the various input streams on the VMR-9 composition space.
The traditional custom allocator/presenter sample is \Samples\multimedia\directshow\vmr9\vmr9allocator from the Windows SDK 7 (also mentioned here). Compared to the EVR presenter it is actually a simple one.
A scene and Direct3D are inevitably involved anyway, because that is what the VMR-9 is: it uses a Direct3D device and surfaces for presentation.
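For orientation, the wiring in that sample boils down to something like this (a sketch with error handling omitted; pMyAllocator stands for your class implementing IVMRSurfaceAllocator9 and IVMRImagePresenter9, and kUserId is an arbitrary cookie of your choosing):

// Put VMR-9 into renderless mode, then advise the custom allocator/presenter.
CComQIPtr<IVMRFilterConfig9> pConfig(pVmr); // pVmr: the VMR-9 filter in the graph
pConfig->SetRenderingMode(VMR9Mode_Renderless);
CComQIPtr<IVMRSurfaceAllocatorNotify9> pNotify(pVmr);
pNotify->AdviseSurfaceAllocator(kUserId, pMyAllocator);
pMyAllocator->AdviseNotify(pNotify);

For moving and stretching individual streams on the composition space specifically, also look at IVMRMixerControl9::SetOutputRect, which positions a stream within the normalized composition rectangle.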
Given Qt3D's structure, is it possible to integrate the Oculus SDK with a Qt3D application?
I have tried, but my two main obstacles are:
I can't use the textures from the texture swap chain created by the Oculus SDK as a render target attachment.
I am not able to call ovr_SubmitFrame at the end of each frame, since Qt3D doesn't have a signal that would allow me to do so.
Has anyone successfully gotten the Oculus SDK to work with Qt3D? If so, how did you overcome these issues?
Are there any plans to allow the integration of VR SDKs (not just Oculus') into Qt3D in future releases?
You could probably do it with some sort of custom framegraph that encapsulates the stereo rendering functionality and includes a custom component that takes the currently rendered content and submits it to the SDK prior to the swapbuffers call.
Alternatively, you could dive into the code that processes the framegraph itself and see how hard it would be to customize it to work against a VR API. I've done significant work integrating Qt apps with VR, but not specifically with Qt3D.
The frame graph will indeed provide one part of the solution for the stereoscopic rendering setup. There is already an anaglyph stereo example that ships with Qt 3D and shows most of what you need.
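In C++ terms, the branching structure from that example looks roughly like this (a sketch only; Qt3DRender headers are assumed, and leftEyeCamera/rightEyeCamera are hypothetical per-eye cameras with their own projection matrices):

auto *surfaceSelector = new Qt3DRender::QRenderSurfaceSelector;
auto *clearBuffers = new Qt3DRender::QClearBuffers(surfaceSelector);
clearBuffers->setBuffers(Qt3DRender::QClearBuffers::ColorDepthBuffer);
for (int eye = 0; eye < 2; ++eye) {
    // One framegraph branch per eye, rendering into one half of the target.
    auto *viewport = new Qt3DRender::QViewport(clearBuffers);
    viewport->setNormalizedRect(QRectF(eye * 0.5, 0.0, 0.5, 1.0));
    auto *cameraSelector = new Qt3DRender::QCameraSelector(viewport);
    cameraSelector->setCamera(eye == 0 ? leftEyeCamera : rightEyeCamera);
}
// then: renderSettings->setActiveFrameGraph(surfaceSelector);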
Integrating the swap chain of the Oculus SDK will require deeper work. I do not know the details of the Oculus SDK as yet, but we can take a look.
From what I can see, you should be able to do something analogous to the Scene3D custom Qt Quick 2 item to render to the textures provided by the Oculus SDK and to tell Qt 3D which OpenGL context to use. See
http://code.qt.io/cgit/qt/qt3d.git/tree/src/quick3d/imports/scene3d?h=5.7
Nicolas, I also do not appreciate you publicly saying that KDAB is not much help. I only received an email from Karsten on Friday, which I responded to despite being on vacation, saying that we can help but that it will be on a best-efforts basis, since you are not paying and I have a very full workload preparing Qt 3D for release at the end of the month along with Qt 5.7. Today is a public holiday in the UK, as you are aware, yet you are already saying detrimental things about us.
You were also directed on the qt-forums to post to the interest#qt-project.org mailing list, as I do not tend to monitor SO or the qt-forums on a regular basis. You could also have emailed us directly or via the development#qt-interest mailing list.
We would be more than happy to set up a support agreement with you.
I'm trying to use the SketchUp API to navigate around 3D models (zoom, pan, rotate, etc.). My ultimate aim is to integrate it with a Leap Motion app.
However, right now I think my first step is to figure out how to control the basic navigation gestures via the SketchUp API. After a bit of research, I see that there are the Camera and Animation interfaces, but I think they are more suited to hardcoded paths and motions within the script.
Therefore I was wondering: does anyone know how I can write a plugin that accepts input from another program (my eventual Leap Motion app in this case) and translates it into specific navigation commands using the SketchUp API (like pan, zoom, etc.)? Can this be done using the Camera and Animation interfaces (in some sort of step increments), or are there other interfaces I should be looking at?
As usual, any examples would be most helpful.
Thanks!
View, Camera and the Animation class are what you are looking for. Maybe you don't even need the Animation class; you might be fine just using a timer in the UI class (UI.start_timer). It depends on the details of what you will be doing.
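For example, here is a rough sketch of stepwise panning driven by a timer; in your case the step would come from the Leap Motion bridge instead of being hard-coded:

view = Sketchup.active_model.active_view
step = Geom::Vector3d.new(1, 0, 0) # hypothetical pan increment per tick
timer_id = UI.start_timer(0.05, true) do
  camera = view.camera
  # Shift eye and target together so the view pans without rotating.
  view.camera = Sketchup::Camera.new(camera.eye.offset(step),
                                     camera.target.offset(step),
                                     camera.up)
end
# Later, stop the motion with: UI.stop_timer(timer_id)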
You can set the camera directly like so:
Sketchup.active_model.active_view.camera.set(ORIGIN, Z_AXIS, Y_AXIS)
or you can use View.camera=, which also accepts a transition time argument if you find that useful.
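For example (a sketch; as far as I recall, the transition form takes an array of the camera and the duration in seconds):

new_camera = Sketchup::Camera.new([100, 100, 50], ORIGIN, Z_AXIS)
Sketchup.active_model.active_view.camera = [new_camera, 1.5] # 1.5 second transition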
For bridging input you could always create a Ruby C Extension that takes care of the communication between the applications. There are some quirks in getting C Extensions to work for SketchUp Ruby as opposed to regular Ruby, though, depending on how you compile it.
I wrote a hello world example a couple of years ago: https://bitbucket.org/thomthom/sketchup-ruby-c-extension
Though note that I have since found a better solution for Windows, using the Development Kit from RubyInstaller: http://rubyinstaller.org/
This answer is related to my comment above about the view seemingly 'jumping' when I assign a new camera to the current view using camera=, but not when I use camera.set.
I figured out this was happening because the FOV of the original camera was different, and the new camera was defaulting to an FOV of 30. Explicitly creating the camera with the optional perspective and FOV arguments taken from the initial camera solves the problem:
new_camera = Sketchup::Camera.new new_eye, new_target, curr_camera.up, curr_camera.perspective?, curr_camera.fov
Hope people find this useful!
I'm developing a site on Adobe Day CQ. My site needs many components that cannot be found in the standard Day CQ library. Are there any third-party component libraries for this CMS, or will I need to create them myself?
Surprisingly, not as many as one might expect, at least at present. It certainly seems like fertile ground for a startup. But here are a few, if they happen to mesh with your particular needs:
Video component - http://www.brightcove.com/en/partners/cq5 and http://blog.brightcove.com/en/2011/12/brightcove-integrates-video-cloud-adobe%E2%80%99s-cq5-cms and http://coresecure.com/bc-cq5-component/
Google maps component - http://blogs.adobe.com/livecyclepost/2011/09/google-maps-component-for-adep-wemcq5/
Adobe third-party references - http://dev.day.com/docs/en/crx/current/developing/developers_getting_started.html#Third%20Party%20Libraries
There are consultants as well whom you could tap to create custom components. They likely have some libraries they have developed but not (yet?) packaged for resale, which they may be able to leverage:
http://www.headstandmedia.com/what-we-do/cms/adobe-cq5/cq5-development/
http://www.ensemble.com/expertise/digitalmarketing/cq5.shtml