I am currently experimenting with A-Frame and AR.js for a project I'm working on. I was wondering: is it possible to animate a series of PNG files, e.g. img-1.png, img-2.png, and so on, in A-Frame without individually adding an animation for each frame?
I'm aware of an A-Frame GIF component, but GIFs are harder to maintain, can only output a limited number of colors, and also have trouble with opacity.
Any insights/help would be appreciated. Thanks!
How about a component which loads up the .pngs as textures and swaps them at a fixed interval:
AFRAME.registerComponent("slideshow", {
  init: function() {
    // load up and store the images
    var loader = new THREE.TextureLoader()
    this.array = []
    this.array.push(loader.load("one.png"))
    this.array.push(loader.load("two.png"))
Instead of doing this one by one, you could do it in a loop ("img-" + i + ".png").
You could also provide the list of image sources via the component's schema, for example as sketched below.
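A minimal sketch of such a schema (the property name srcs is my own placeholder):
  schema: {
    srcs: { type: 'array', default: [] }
  },
A-Frame would then parse slideshow="srcs: one.png, two.png" into an array of strings you can feed to the loader in a loop.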
Wait until the entity is loaded:
    this.el.addEventListener('loaded', e => {
      let mesh = this.el.getObject3D('mesh')
      let material = mesh.material
then swap the material.map texture in the tick() handler or within an interval:
      let i = 0
      setInterval(e => {
        // if we're at the last element - swap to the first one
        if (i >= this.array.length) i = 0
        material.map = this.array[i++]
        material.needsUpdate = true
      }, 500) // swap every 500 ms - pick any interval you like
    })
  }
})
and it should work like in this fiddle when attached to an entity:
<a-box slideshow></a-box>
Why this.array? For example, you can easily access it in the remove() function and dispose the textures to free up memory:
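A minimal sketch of such a remove() handler:
  remove: function() {
    // free the GPU memory held by the preloaded textures
    this.array.forEach(texture => texture.dispose())
  }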
Why not just do setAttribute('material', 'src', 'img-' + i + '.png')?
I believe with more images it may be highly inefficient, since every call goes through A-Frame's attribute-update pipeline instead of simply swapping an already loaded texture.
I am trying to make a web app where I can use world tracking to view a 3D model. I am trying to change the opacity of the model using the material property of the a-entity tag; however, it does not seem to be working. Any idea how to change the opacity of my 3D object in order to make it translucent?
<a-entity
id="model"
gltf-model="#3dmodel"
class="cantap"
geometry="primitive: box"
scale="0.5 0.5 0.5"
material="transparent: true; opacity 0.1"
animation-mixer="clip: idle; loop: repeat"
hold-drag two-finger-spin pinch-scale>
</a-entity>
GLTF models come with their own materials included in the model file. These are handled by THREE.js rather than A-Frame, so in order to access their properties you have to search through the element's object3D, using something like this:
this.el.object3D.traverse((child) => {
if (child.type === 'Mesh') {
const material = child.material;
// Do stuff with the material
material.opacity = 0.1;
}
})
See the THREE.js documentation on Materials and Object3D for more detail.
Also, I noticed you have both a geometry primitive and a gltf-model on that entity. I'm not sure how well those two can co-exist; if you want both a box and the model, you should probably make one a child of the other, for example as sketched below.
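A minimal sketch of that parent/child structure (the attribute values are placeholders):
<a-entity gltf-model="#3dmodel" scale="0.5 0.5 0.5">
  <a-box material="transparent: true; opacity: 0.1"></a-box>
</a-entity>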
E Purwanto's answer above works for me, with the addition of setting transparent to true:
this.el.object3D.traverse((child) => {
if (child.type === 'Mesh') {
const material = child.material;
// Do stuff with the material
material.transparent = true; // must be enabled for the opacity change to take effect
material.opacity = 0.1;
}
})
It's very convenient to load a glTF model in A-Frame, but I haven't found an example that includes an envmap texture. I'd like the official docs to provide the same kind of example as the official three.js ones, where the pmremGenerator.fromEquirectangular(texture) function is used to give the glTF model a realistic reflection effect:
https://threejs.org/examples/#webgl_loader_gltf
https://threejs.org/examples/#webgl_materials_envmaps_hdr
One way would be creating a custom component, which will:
wait until the model is loaded
traverse through the object's children
if they have a material property - apply the envMap
The envmap needs to be a CubeTexture, which adds another level of complication when you want to use a panorama. You can use the WebGLRenderTargetCube (renamed WebGLCubeRenderTarget in newer three.js releases) - it's an object which provides a texture from a cube camera 'watching' the panorama.
Overall, the component code could look like this:
// create the 'cubecamera' render target
var targetCube = new THREE.WebGLRenderTargetCube(512, 512);
var renderer = this.el.sceneEl.renderer;
// wait until the model is loaded
this.el.addEventListener("model-loaded", e => {
  let mesh = this.el.getObject3D("mesh");
  // load the panorama (URL is a placeholder for your equirectangular image)
  var texture = new THREE.TextureLoader().load(URL,
    function() {
      // create a cube texture from the panorama
      var cubeTex = targetCube.fromEquirectangularTexture(renderer, texture);
      mesh.traverse(function(node) {
        // if a node has a material - it can have an envMap
        if (node.material) {
          node.material.envMap = cubeTex.texture;
          node.material.envMapIntensity = 3;
          node.material.needsUpdate = true;
        }
      });
    });
});
Check it out in this glitch.
I was having the same issue, and I found that cube-env-map from aframe-extras works like a charm.
View component on GitHub
Its docs describe it as:
Applies a CubeTexture as the envMap of an entity, without otherwise modifying the preset materials
And the code is super simple:
yarn add aframe-extras
import 'aframe-extras'
<a-entity
gltf-model="src: url('/path/to/file.glb')"
cube-env-map="path: /cubeMapFolder/;
extension: jpg;
reflectivity: 0.9;">
</a-entity>
In the THREE demos, I remember that WebGLRenderTargetCube was used to produce the envmap, but recently I found that PMREMGenerator is now used to generate an envmap texture with mipmaps. It also supports the HDR image format, which makes the glTF model look better than a JPG texture does.
I don't know how the PMREMGenerator and RGBELoader modules are used together with A-Frame components. Can someone provide such an example in A-Frame? Thanks.
That's the same "High dynamic range (RGBE) Image-based Lighting (IBL) using run-time generated pre-filtered roughness mipmaps (PMREM)" as in the three.js example above.
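For illustration, a minimal, untested sketch of how the two could be wired into an A-Frame component - assuming a three.js build where RGBELoader is attached to the THREE namespace, and a made-up component name hdr-envmap:
AFRAME.registerComponent('hdr-envmap', {
  schema: { url: { type: 'string' } },
  init: function () {
    const renderer = this.el.sceneEl.renderer;
    const pmrem = new THREE.PMREMGenerator(renderer);
    // RGBELoader ships with the three.js examples, not the core build
    new THREE.RGBELoader().load(this.data.url, (hdrTexture) => {
      // pre-filtered, mipmapped radiance environment map from the HDR panorama
      const envMap = pmrem.fromEquirectangular(hdrTexture).texture;
      hdrTexture.dispose();
      pmrem.dispose();
      // apply once the model is in (if it already loaded, apply immediately instead)
      this.el.addEventListener('model-loaded', () => {
        this.el.getObject3D('mesh').traverse((node) => {
          if (node.material) {
            node.material.envMap = envMap;
            node.material.needsUpdate = true;
          }
        });
      });
    });
  }
});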
Say, a shape of a human outline.
Ideally it could be converted to 3D by extruding, but even if it has no depth, that's fine for my use case.
I think the easiest way would be to take a transparent png image (the human outline) and use it as a source for an a-plane:
<a-plane material="src: img.png; transparent: true"></a-plane>
Glitch here.
...but if you want to create a geometry with a custom shape, which will be helpful for extrusion, then check this out:
Creating a simple shape with the underlying THREE.js
First you need an array of 2D points:
let points = [];
points.push(new THREE.Vector2(0, 0));
// and so on for as many as you want
Create a THREE.Shape object whose vertices will be created from the array:
var shape = new THREE.Shape(points);
Create a mesh with the shape geometry and any material, and add it to the scene or entity:
var geometry = new THREE.ShapeGeometry(shape);
var material = new THREE.MeshBasicMaterial({
color: 0x00ff00
});
var mesh = new THREE.Mesh(geometry, material);
entity.object3D.add(mesh);
More on:
1) THREE.Shape
2) THREE.ShapeGeometry
3) THREE.Mesh
Extrusion
Instead of the ShapeGeometry you can use the ExtrudeGeometry object:
var extrudedGeometry = new THREE.ExtrudeGeometry(shape, {amount: 5, bevelEnabled: false});
Where amount is basically the "thickness" (in newer three.js releases this option is named depth). More on THREE.ExtrudeGeometry here.
Usage with AFRAME
I'd recommend creating an AFRAME custom component:
JS
AFRAME.registerComponent('foo', {
init: function() {
// mesh creation
this.el.object3D.add(mesh);
}
})
HTML
<a-entity foo></a-entity>
2D shape here.
Extruded 2D shape here.
Three.js examples here. They are quite a bit more complicated than my polygons :)
There are also a couple of pre-built A-Frame components you could use to help with extrusion.
https://github.com/JosePedroDias/aframe-extrude-and-lathe
https://github.com/luiguild/aframe-svg-extruder
You can find examples of usage in each of those repos.
For example, if you go to Twitter and click on an image, you can see they have a nice background color that is close to what you see in the image. I tried looking up ways to achieve this, as well as trying to figure it out on my own, but no luck. I'm not sure if there's a color: relative property or not.
If you want to use a colour that exists in your image and set it as a background colour, you need to use the canvas element in the following manner:
HTML (this is your image)
<img src="multicolour.jpg" id="mainImage">
JS
window.onload = function() {
// get the body element to set the background (this can change depending on your needs)
let body = document.getElementsByTagName("body")
// get references to the image element that contains the picture you want to match with background
let referenceImage = document.getElementById("mainImage");
// create a canvas element (but don't add it to the page)
let canvas = document.createElement("canvas");
// make the canvas size the same as your image
canvas.width = referenceImage.offsetWidth
canvas.height = referenceImage.offsetHeight
// create the canvas context
let context = canvas.getContext('2d')
// use your image reference to draw the image in the canvas
context.drawImage(referenceImage,0,0);
// select a random X and Y coordinates inside the drawn image in the canvas
// (you don't have to do this one, but I did to demonstrate the code)
let randomX = Math.floor(Math.random() * (referenceImage.offsetWidth - 1) + 1)
let randomY = Math.floor(Math.random() * (referenceImage.offsetHeight - 1) + 1)
// THIS IS THE MOST IMPORTANT LINE
// getImageData takes 4 arguments: coord x, coord y, sample size w, and sample size h.
// in our case the sample size is going to be of 1 pixel so it retrieves only 1 color
// the method gives you a data object which contains an array with the r, g, b colour data from the selected pixel
let color = context.getImageData(randomX, randomY, 1, 1).data
// use the data to dynamically add a background color extracted from your image
body[0].style.backgroundColor = `rgb(${color[0]},${color[1]},${color[2]})`
}
Here is a gif of the code working... hopefully this helps.
UPDATE
Here is the code to select two random points and create a CSS3 background gradient:
window.onload = function() {
// get the body element to set the background (this can change depending on your needs)
let body = document.getElementsByTagName("body")
// get references to the image element that contains the picture you want to match with background
let referenceImage = document.getElementById("mainImage");
// create a canvas element (but don't add it to the page)
let canvas = document.createElement("canvas");
// make the canvas size the same as your image
canvas.width = referenceImage.offsetWidth
canvas.height = referenceImage.offsetHeight
// create the canvas context
let context = canvas.getContext('2d')
// use your image reference to draw the image in the canvas
context.drawImage(referenceImage,0,0);
// select a random X and Y coordinates inside the drawn image in the canvas
// (you don't have to do this one, but I did to demonstrate the code)
let randomX = Math.floor(Math.random() * (referenceImage.offsetWidth - 1) + 1)
let randomY = Math.floor(Math.random() * (referenceImage.offsetHeight - 1) + 1)
// THIS IS THE MOST IMPORTANT LINE
// getImageData takes 4 arguments: coord x, coord y, sample size w, and sample size h.
// in our case the sample size is going to be of 1 pixel so it retrieves only 1 color
// the method gives you a data object which contains an array with the r, g, b colour data from the selected pixel
let colorOne = context.getImageData(randomX, randomY, 1, 1).data
// do the same to obtain another pixel's data
let randomX2 = Math.floor(Math.random() * (referenceImage.offsetWidth - 1) + 1)
let randomY2 = Math.floor(Math.random() * (referenceImage.offsetHeight - 1) + 1)
let colorTwo = context.getImageData(randomX2, randomY2, 1, 1).data
// use the two pixel colours to dynamically build a CSS gradient background extracted from your image
body[0].style.backgroundImage = `linear-gradient(to right, rgb(${colorOne[0]},${colorOne[1]},${colorOne[2]}), rgb(${colorTwo[0]},${colorTwo[1]},${colorTwo[2]}))`;
}
The following are your options.
1. Use an svg.
As far as I know there's no way to have javascript figure out what color is being used in a png and set it as a background color. But you can work the other way around. You can have javascript set the background color and an svg image to be the same color.
See this stackoverflow answer to learn more about modifying svgs with javascript.
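For instance, a minimal sketch of that idea (the #icon selector and the colour value are made up for illustration):
// drive both the page background and an inline SVG's fill from one colour
const colour = '#3b5998';
document.body.style.backgroundColor = colour;
document.querySelector('#icon path').setAttribute('fill', colour);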
2. Use a custom font.
There are fonts out there that provide a bunch of icons instead of letters, you can also create your own font if you feel so inclined to do so. With css you just have to set the font-color of that icon to be the same as the background-color of your other element.
Font Awesome provides a bunch of useful custom icons. If the image you need to use happens to be similar to one of theirs, you can just go with them.
3. Use canvas
If you really want to spend the time to code it up, you can use an HTML <canvas> element and draw the image into it. From there you can inspect certain details about the image, like its colors, then apply them to other elements. I won't go into too much detail about this method as it's probably overkill for what you're trying to do, but you can read more about it in this stackoverflow answer.
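As a rough sketch of that canvas approach (not taken from the linked answer; averageColor is a made-up helper, and the image must be same-origin or CORS-enabled):
// compute the average colour of an already-loaded <img> element
function averageColor(img) {
  const canvas = document.createElement('canvas');
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  const ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0);
  // note: getImageData throws on cross-origin images without CORS headers
  const data = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
  let r = 0, g = 0, b = 0;
  const count = data.length / 4; // data is a flat [r, g, b, a, r, g, b, a, ...] array
  for (let i = 0; i < data.length; i += 4) {
    r += data[i]; g += data[i + 1]; b += data[i + 2];
  }
  return `rgb(${Math.round(r / count)},${Math.round(g / count)},${Math.round(b / count)})`;
}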
4. Just live with it.
Not a fun solution, but this is usually the option I go with. You simply have to hard-code the color of the image into your css and live with it. If you ever need to modify the color of the image, you have to remember to update your css also.
I have a Xamarin.Forms app where I need to drag irregularly shaped controls (TwinTechForms SvgImageView) around, like this one:
I want it to respond only to touches on the black area and not on the transparent (checkered) areas.
I tried using the MR.Gestures package. Hooking up to the Panning event lets me drag the image, but it also starts dragging when I touch the transparent parts of it.
My setup looks like this:
<mr:ContentView x:Name="mrContentView" Panning="PanningEventHandler" Panned="PannedEventHandler" Background="transparent">
<ttf:SvgImageView x:Name="svgView" Background="transparent" SvgPath=.../>
</mr:ContentView>
and code-behind
private void PanningEventHandler(object sender, PanningEventParameters arg){
    svgView.TranslationX = arg.IsCancelled ? 0 : arg.TotalDistance.X;
    svgView.TranslationY = arg.IsCancelled ? 0 : arg.TotalDistance.Y;
}
private void PannedEventHandler(object sender, PanningEventParameters arg){
    if (!arg.IsCancelled){
        mrContentView.TranslationX = svgView.TranslationX;
        mrContentView.TranslationY = svgView.TranslationY;
    }
    svgView.TranslationX = 0;
    svgView.TranslationY = 0;
}
In this code-behind, how should I check whether I'm hitting a transparent point on the target object? And when that happens, how do I cancel the gesture so that another view under this one may respond to it? In the right-side image, touching the red inside the green O's hole should start dragging the red O.
Update: SOLVED
The accepted answer's suggestion worked but was not straightforward.
I had to fork and modify both NGraphics (github fork) and TwinTechsFormsLib (TTFL, github fork)
In the NGraphics fork I added an XDocument+filter ctor to SvgReader so the same XDocument can be passed into different SvgImageView instances with a different parse filter, effectively splitting up the original SVG into multiple SvgImageView objects that can be moved independently without too much of a memory hit. I had to fix some brush inheritance for my SVGs to show as expected.
The TTFL fork exposes the XDocument+filter ctor and adds platform-specific GetPixelColor to the renderers.
Then in my Xamarin.Forms page I can load the original SVG file into multiple SvgImageView instances:
List<SvgImageView> LoadSvgImages(string resourceName, int widthRequest = 500, int heightRequest = 500)
{
var svgImageViews = new List<SvgImageView>();
var assembly = this.GetType().GetTypeInfo().Assembly;
Stream stream = assembly.GetManifestResourceStream(resourceName);
XDocument xdoc = XDocument.Load(stream);
// only groups that don't have other groups
List<XElement> leafGroups = xdoc.Descendants()
.Where(x => x.Name.LocalName == "g" && x.HasElements && !x.Elements().Any(dx => dx.Name.LocalName == "g"))
.ToList();
leafGroups.Insert(0, new XElement("nonGroups")); // this one will
foreach (XElement leafGroup in leafGroups)
{
var svgImage = new SvgImageView
{
HeightRequest = heightRequest,
WidthRequest = widthRequest,
HorizontalOptions = LayoutOptions.Start,
VerticalOptions = LayoutOptions.End,
StyleId = leafGroup.Attribute("id")?.Value, // for debugging
};
// this loads the original SVG as if there's only one leaf group
// and its parent groups (to preserve transformations, brushes, opacity etc)
svgImage.LoadSvgFromXDocument(xdoc, (xe) =>
{
bool doRender = xe == leafGroup ||
xe.Ancestors().Contains(leafGroup) ||
xe.Descendants().Contains(leafGroup);
return doRender;
});
svgImageViews.Add(svgImage);
}
return svgImageViews;
}
Then I add all of the svgImageViews to a MR.Gestures <mr:Grid x:Name="movableHost"> and hook up Panning and Panned events to it.
SvgImageView dragSvgView = null;
Point originalPosition = Point.Zero;
movableHost.Panning += (sender, pcp) =>
{
// if we're not dragging anything - check whether any of the previously
// loaded SVG images has a non-transparent pixel at the touch point
if (dragSvgView==null){
dragSvgView = svgImages.FirstOrDefault(si => {
var c = si.GetPixelColor(pcp.Touches[0].X - si.TranslationX, pcp.Touches[0].Y - si.TranslationY);
return c.A > 0.0001;
});
if (dragSvgView != null)
{
// save the original position of this item so we can put it back in case dragging was canceled
originalPosition = new Point (dragSvgView.TranslationX, dragSvgView.TranslationY);
}
}
// if we're dragging something - move it along
if (dragSvgView != null)
{
dragSvgView.TranslationX += pcp.DeltaDistance.X;
dragSvgView.TranslationY += pcp.DeltaDistance.Y;
}
};
Neither MR.Gestures nor any underlying platform checks if a touched area within the view is transparent. The elements which listen to the touch gestures are always rectangular. So you have to do the hit testing yourself.
The PanningEventParameters contain a Point[] Touches with the coordinates of all touching fingers. With these coordinates you can check if they match any visible area in the SVG.
Hit-testing for the donut from your sample is easy, but testing for a general shape is not (and I think that's what you want). If you're lucky, then SvgImage already supports it. If not, then you may find the principles of how this can be done in the SVG Rendering Engine, in Point-In-Polygon Algorithm — Determining Whether A Point Is Inside A Complex Polygon, or in 2D collision detection.
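The core of such a point-in-polygon test is compact in any language; here is a sketch of the standard even-odd ray-casting check (shown in JavaScript like the rest of this page, with a made-up vertex format; the C# port is mechanical):
// true if point (x, y) lies inside the polygon given as [{x, y}, ...]
function pointInPolygon(x, y, vertices) {
  let inside = false;
  // walk every edge (j -> i) and count crossings of a ray cast to the right
  for (let i = 0, j = vertices.length - 1; i < vertices.length; j = i++) {
    const xi = vertices[i].x, yi = vertices[i].y;
    const xj = vertices[j].x, yj = vertices[j].y;
    const crosses = (yi > y) !== (yj > y) &&
      x < ((xj - xi) * (y - yi)) / (yj - yi) + xi;
    if (crosses) inside = !inside;
  }
  return inside;
}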
Unfortunately overlapping elements are a bit of a problem. I tried to implement that with the Handled flag when I originally wrote MR.Gestures, but I couldn't get it to work on all platforms. As I thought it's more important to be consistent than to make it work on just one platform, I ignore Handled on all platforms and rather raise the events for all overlapping elements. (I should've removed the flag altogether)
In your specific case I'd propose you use a structure like this for multiple SVGs:
<mr:ContentView x:Name="mrContentView" Panning="PanningEventHandler" Panned="PannedEventHandler" Background="transparent">
<ttf:SvgImageView x:Name="svgView1" Background="transparent" SvgPath=.../>
<ttf:SvgImageView x:Name="svgView2" Background="transparent" SvgPath=.../>
<ttf:SvgImageView x:Name="svgView3" Background="transparent" SvgPath=.../>
</mr:ContentView>
In the PanningEventHandler you can check if the Touches are on any SVG and if yes, which one is on top.
If you used multiple ContentViews, each with only one SVG in it, then the PanningEventHandler would be called for each overlapping rectangular element, which is not what you want.