I have been using A-Frame since 2018 to show my scenographies to the directors I work for. I used Collada files for my objects, everything was working great, and my textures looked the way I wanted. But since the WebXR builds (aframe-v1.1.0.min) appeared, it is no longer possible to use Collada. I have tried glTF files, but they are too heavy and not satisfactory. So I wanted to know how to put Collada back into my A-Frame scripts. I tried "collada-model-legacy.js"
with "ColladaLoader.js" in another folder, but it doesn't work. Do you have a solution?
Thank you
You can use any of the loaders from the three.js repository, including the ColladaLoader.
You could create a wrapper component which uses the three.js loader and adds the model to the scene:
AFRAME.registerComponent("foo", {
init: function() {
const el = this.el;
// create a loader
const loader = new THREE.ColladaLoader();
// load the model
loader.load("MODEL_URL", function(model) {
el.object3D.add(collada.scene);
})
}
})
A simple example could be this component (check it out with static and animated Collada models here).
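For completeness, usage might look roughly like the snippet below. It assumes ColladaLoader.js comes from the three.js examples folder matching A-Frame's bundled three.js revision and is included after A-Frame (so that THREE.ColladaLoader is defined), and that the component above has been registered; the A-Frame version and model URL are placeholders.
<script src="https://aframe.io/releases/1.1.0/aframe.min.js"></script>
<!-- ColladaLoader.js from the matching three.js examples folder -->
<script src="ColladaLoader.js"></script>
<!-- a script registering the "foo" component above goes here -->
<a-scene>
  <a-entity foo position="0 1.5 -3"></a-entity>
</a-scene>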
So I saw an old post here:
https://discourse.threejs.org/t/is-there-a-way-to-increase-scene-environment-map-exposure-without-affecting-unlit-materials/13458/4
That says... "If you apply the env map to Scene.environment, it is automatically used as the environment map for all physical materials in the scene (assumed the material’s envmap is not set)."
So I tried this using an A-Frame component on the scene:
AFRAME.registerComponent('setenvironment', {
  init: function () {
    var sceneEl = this.el;
    var loader = new THREE.CubeTextureLoader();
    loader.setPath('./');
    var textureCube = loader.load([
      './images/py.png', './images/pz.png',
      './images/nx.png', './images/ny.png',
      './images/px.png', './images/nz.png'
    ]);
    textureCube.encoding = THREE.sRGBEncoding;
    sceneEl.object3D.environment = textureCube;
  }
});
The environment attribute is successfully set, but the other objects' materials still have envMap set to null, and the environment lighting does not take effect on the materials.
Any ideas?
aframe 1.0.4 uses three.js revision 111dev. The scene's environment property was introduced in revision 112 (source).
If you use the A-Frame master build, it seems to work properly (as it's based on three.js r119).
Otherwise, you'll have to iterate through the meshes, and set the material.envMap property manually.
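A rough sketch of that manual fallback, attached to each entity that has a model (the component name and image paths are just placeholders):
AFRAME.registerComponent('set-env-map', {
  init: function () {
    // load the six cube faces (order: px, nx, py, ny, pz, nz)
    var textureCube = new THREE.CubeTextureLoader()
      .setPath('./images/')
      .load(['px.png', 'nx.png', 'py.png', 'ny.png', 'pz.png', 'nz.png']);
    textureCube.encoding = THREE.sRGBEncoding;

    // once the model is loaded, walk its meshes and set the envMap by hand
    this.el.addEventListener('model-loaded', () => {
      this.el.getObject3D('mesh').traverse(function (node) {
        if (node.material) {
          node.material.envMap = textureCube;
          node.material.needsUpdate = true;
        }
      });
    });
  }
});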
I want to use SVG images as icons on a TabbedPage in Xamarin.Forms for iOS.
The documentation for the TabbedPage class provides the following tip:
The TabbedRenderer for iOS has an overridable GetIcon method that can be used to load tab icons from a specified source. This override makes it possible to use SVG images as icons on a TabbedPage. In addition, selected and unselected versions of an icon can be provided.
I created the following class to perform the override:
[assembly: ExportRenderer(typeof(TabsRoot), typeof(TabbedPageCustomRenderer))]
namespace MyProject.iOS.Renderers
{
    public class TabbedPageCustomRenderer : TabbedRenderer
    {
        protected override Task<Tuple<UIImage, UIImage>> GetIcon(Page page)
        {
            var image = UIImage.FromFile("home-black-18dp.svg");
            return Task.FromResult(new Tuple<UIImage, UIImage>(image, image));
        }
    }
}
The accepted answer in this thread recommends creating a UIImage from an SVG file by doing something like this: var myImage = UIImage.FromFile(<<file name>>) where <<filename>> is an SVG. This other thread contradicts the previous thread, saying that UIImages cannot be made from SVG files. Sure enough, when I provide an SVG file, UIImage.FromFile() returns null and no icon is shown at all, just as the latter thread predicted. When I provide a PNG file, the override works as expected.
Another way I've tried to square this circle is to use the SvgCachedImage provided by FFImageLoading.Svg.Forms, but I haven't figured out how to 'wrap' a UIImage around an SvgCachedImage or whether that is even appropriate in this case.
Thank you for your help!
Add the Xamarin.FFImageLoading.Svg NuGet package to your iOS project, then create the UIImage in your GetIcon override like below:
protected override async Task<Tuple<UIImage, UIImage>> GetIcon(Page page)
{
    // render the SVG at roughly half the tab bar height
    UIImage img = await ImageService.Instance
        .LoadFile("timer.svg")
        .WithCustomDataResolver(new SvgDataResolver((int)(TabBar?.Bounds.Height / 2 ?? 30), (int)(TabBar?.Bounds.Height / 2 ?? 30), false))
        .AsUIImageAsync();
    UIImage newImg = img.ImageWithRenderingMode(UIImageRenderingMode.AlwaysOriginal);
    return new Tuple<UIImage, UIImage>(newImg, newImg);
}
It's very convenient to load a glTF model in A-Frame, but I haven't found an example that uses an envMap texture. I'd like the official docs to provide the same kind of example as the official three.js ones, where the pmremGenerator.fromEquirectangular(texture) function is used to give the glTF model a realistic reflection effect:
https://threejs.org/examples/#webgl_loader_gltf
https://threejs.org/examples/#webgl_materials_envmaps_hdr
One way would be creating a custom component, which will:
wait until the model is loaded
traverse through the object's children
if they have a material property - apply the envMap
The envMap needs to be a CubeTexture, which adds another level of complication when you want to use a panorama. You can use the WebGLRenderTargetCube - it's an object which provides a texture from a cube camera 'watching' the panorama.
Overall, the component code could look like this:
AFRAME.registerComponent("env-map", { // component name is arbitrary
  init: function() {
    // create the cube render target which will hold the generated cube texture
    var targetCube = new THREE.WebGLRenderTargetCube(512, 512);
    var renderer = this.el.sceneEl.renderer;
    // wait until the model is loaded
    this.el.addEventListener("model-loaded", e => {
      let mesh = this.el.getObject3D("mesh");
      // load the panorama texture
      var texture = new THREE.TextureLoader().load("PANORAMA_URL",
        function(texture) {
          // create a cube texture from the equirectangular panorama
          var cubeTex = targetCube.fromEquirectangularTexture(renderer, texture);
          mesh.traverse(function(node) {
            // if a node has a material attribute - it can have an envMap
            if (node.material) {
              node.material.envMap = cubeTex.texture;
              node.material.envMapIntensity = 3;
              node.material.needsUpdate = true;
            }
          });
        });
    });
  }
});
Check it out in this glitch.
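For reference, attaching the component to a glTF entity could look like this (the component name and model URL are placeholders):
<a-entity gltf-model="url(model.glb)" env-map></a-entity>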
I was having the same issue and I found that cube-env-map from aframe-extras works like a charm.
View component on GitHub
Its docs describe it as:
Applies a CubeTexture as the envMap of an entity, without otherwise
modifying the preset materials
And the code is super simple:
yarn add aframe-extras
import 'aframe-extras'
<a-entity
gltf-model="src: url('/path/to/file.glb')"
cube-env-map="path: /cubeMapFolder/;
extension: jpg;
reflectivity: 0.9;">
</a-entity>
In the three.js demos, I remember that WebGLRenderTargetCube used to be used to produce the envMap, but recently I found that PMREMGenerator is now used to generate an envMap texture with mipmaps. It also supports the HDR image format, which makes the glTF model look better than with a JPG texture.
I don't know how the PMREMGenerator and RGBELoader modules are used together with A-Frame components. Can someone provide such an example in A-Frame? Thanks.
That's the same "High dynamic range (RGBE) Image-based Lighting (IBL) using run-time generated pre-filtered roughness mipmaps (PMREM)" setup as in the three.js example.
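For what it's worth, a rough sketch of wiring those two together in an A-Frame component could look like the following. It assumes a recent A-Frame build whose bundled three.js is r112 or newer, that RGBELoader.js from the three.js examples is included (so THREE.RGBELoader exists), and that the component name and the .hdr URL are placeholders.
AFRAME.registerComponent("hdr-env", {
  init: function () {
    var renderer = this.el.sceneEl.renderer;
    // PMREMGenerator turns an equirectangular HDR into a prefiltered, mipmapped env map
    var pmremGenerator = new THREE.PMREMGenerator(renderer);
    pmremGenerator.compileEquirectangularShader();

    new THREE.RGBELoader()
      .setDataType(THREE.UnsignedByteType)
      .load("ENVIRONMENT_URL.hdr", (hdrTexture) => {
        var envMap = pmremGenerator.fromEquirectangular(hdrTexture).texture;
        hdrTexture.dispose();
        pmremGenerator.dispose();

        // apply the env map to the glTF meshes once the model is available
        var apply = () => {
          var mesh = this.el.getObject3D("mesh");
          if (!mesh) return;
          mesh.traverse((node) => {
            if (node.material) {
              node.material.envMap = envMap;
              node.material.needsUpdate = true;
            }
          });
        };
        if (this.el.getObject3D("mesh")) { apply(); }
        else { this.el.addEventListener("model-loaded", apply); }
      });
  }
});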
I am currently experimenting with A-Frame and AR.js for a project I'm working on. I was wondering if it's possible to animate a series of PNG files, e.g. img-1.png, img-2.png, and so on, in A-Frame without individually adding an animation for each frame?
I'm aware of the A-Frame GIF component, but GIFs are harder to maintain and can only output a limited number of colors (and also have trouble with opacity).
Any insights/help would be appreciated. Thanks!
How about a component which loads up the .pngs as textures and swaps them at a fixed interval:
AFRAME.registerComponent("slideshow", {
init: function() {
load up and store the images
var loader = new THREE.TextureLoader()
this.array = []
this.array.push(loader.load("one.png"))
this.array.push(loader.load("two.png"))
Instead of doing this one by one, you could do this in a loop ("img-" + i + ".png").
Also you could provide a list using the schema.
Wait until the entity is loaded:
this.el.addEventListener('loaded', e => {
  let mesh = this.el.getObject3D('mesh')
  this.material = mesh.material
swap the material.map texture in the tick() or within an interval:
let i = 0
setInterval(e => {
// if we're at the last element - swap to the first one
if (i >= this.array.length) i = 0
this.material.map = this.array[i++]
this.material.needsUpdate = true
and it should be working like in this fiddle, when attached to an entity:
<a-box slideshow></a-box>
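Putting the fragments together, the whole component could look roughly like this (the image file names and the 500 ms interval are just placeholders):
AFRAME.registerComponent("slideshow", {
  init: function() {
    // load up and store the images
    var loader = new THREE.TextureLoader()
    this.array = []
    this.array.push(loader.load("img-1.png"))
    this.array.push(loader.load("img-2.png"))

    // wait until the entity is loaded
    this.el.addEventListener('loaded', e => {
      let mesh = this.el.getObject3D('mesh')
      this.material = mesh.material

      // swap the material.map texture at a fixed interval
      let i = 0
      this.interval = setInterval(e => {
        // if we're at the last element - swap to the first one
        if (i >= this.array.length) i = 0
        this.material.map = this.array[i++]
        this.material.needsUpdate = true
      }, 500)
    })
  },
  remove: function() {
    // stop swapping and free the texture memory
    clearInterval(this.interval)
    this.array.forEach(tex => tex.dispose())
  }
})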
Why this.array? For example, you can easily access it in the remove() function and dispose of the textures to free up memory.
Why not just do setAttribute('material', 'src', 'img-' + i + '.png')? I believe with more images it may be highly inefficient.
I am trying to load a simple cube with per-vertex color information from a Stanford PLY file using QML.
My entity looks like this:
Entity
{
    id: circle

    property Material materialPoint: Material {
        effect: Effect {
            techniques: Technique {
                renderPasses: RenderPass {
                    shaderProgram: ShaderProgram {
                        vertexShaderCode: loadSource("qrc:/imports/org/aid/shared/geometry/shaders/point.vert")
                        fragmentShaderCode: loadSource("qrc:/imports/org/aid/shared/geometry/shaders/point.frag")
                    }
                }
            }
        }
        parameters: Parameter { name: "pointSize"; value: 2 }
    }

    property alias translation: circleTransform.translation
    property alias rotation: circleTransform.rotationZ

    Mesh
    {
        id: circleMesh
        source: "qrc:/resources/models/rg.ply"
    }

    Transform
    {
        id: circleTransform
        scale: 1
    }

    components: [materialPoint, circleTransform, circleMesh]
}
I have also tried replacing the material property with the default Qt material purposely created to solve this problem:
property Material materialPoint: PerVertexColorMaterial {}.
Unfortunately, there are no per-vertex colors visible in the scene.
Is there any recommended way of importing a PLY file with vertex color data in QML? (I suppose it is possible to achieve this if one writes the logic in C++ and creates a specialized QML entity for doing so, but the functionality should be available already).
Loading PLY in Qt3D doesn't include color, as you've noticed. Par for the course for Qt3D at the moment, I'm afraid.
You can either:
build and load the Qt Assimp scene-parser plugin, which does support color attributes in PLY, or
write your own Qt3D geometry loader in C++. I have done something similar when I needed to load a custom OBJ model with extra data in each vertex. The loader code is pretty straightforward to work with; you only need to modify it to read the extra data, and you can either modify the code in Qt3D itself or create a plugin and load it in your application for this to work.
Note: it is not necessary to create a specialized QML entity. The loader will read your file in as QMesh.