I want to show an image with a frame. Using <a-image> gives me a plane with the image.
<a-box src="path/to/img.jpg"> however gives me the image repeated on all six faces. Is it possible to get a box with the image on the front and any color on all other sides?
I don't know if you can do this with A-Frame alone (I don't think so), but you can do it with three.js by making an array of materials, one for each box face, and applying that array to the box mesh.
const loadManager = new THREE.LoadingManager();
const loader = new THREE.TextureLoader(loadManager);

// One material per box face, each with its own texture
const materials = [
  new THREE.MeshBasicMaterial({map: loader.load('resources/images/flower-1.jpg')}),
  new THREE.MeshBasicMaterial({map: loader.load('resources/images/flower-2.jpg')}),
  new THREE.MeshBasicMaterial({map: loader.load('resources/images/flower-3.jpg')}),
  new THREE.MeshBasicMaterial({map: loader.load('resources/images/flower-4.jpg')}),
  new THREE.MeshBasicMaterial({map: loader.load('resources/images/flower-5.jpg')}),
  new THREE.MeshBasicMaterial({map: loader.load('resources/images/flower-6.jpg')}),
];

// geometry, scene, and cubes come from the tutorial's setup code,
// e.g. const geometry = new THREE.BoxGeometry(1, 1, 1);
loadManager.onLoad = () => {
  const cube = new THREE.Mesh(geometry, materials);
  scene.add(cube);
  cubes.push(cube); // add to our list of cubes to rotate
};
You can place this code inside a custom A-Frame component attached to the box entity.
Here is the threejsfundamentals tutorial the above code was taken from.
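For the original question (image on the front face, plain color everywhere else), a minimal sketch of such a component; the component name is made up, and the image path is the one from the question:

AFRAME.registerComponent('framed-image', {
  init: function () {
    var el = this.el;
    var apply = function () {
      var mesh = el.getObject3D('mesh');
      var side = new THREE.MeshBasicMaterial({color: 0x333333});
      var front = new THREE.MeshBasicMaterial({
        map: new THREE.TextureLoader().load('path/to/img.jpg')
      });
      // BoxGeometry material order: +x, -x, +y, -y, +z (front), -z
      mesh.material = [side, side, side, side, front, side];
    };
    // The box mesh may not exist yet when this component initializes
    if (el.getObject3D('mesh')) { apply(); }
    else { el.addEventListener('object3dset', apply, {once: true}); }
  }
});

Then use it as <a-box framed-image></a-box>: the front material carries the texture and the shared side material supplies the frame color.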
I want to inspect pixel band values. For example, when I click on the mNDWI image displayed on the map in Earth Engine, I want to see the red, green, and blue values.
var geometry=ee.Geometry.Polygon([[38.877002459052335,40.75574968156597],
[41.206104021552335,41.17882292442983],
[40.645801287177335,41.59918091806734],
[40.052539568427335,41.84517989453356],
[39.569141130927335,41.886088143011904],
[38.800098162177335,41.48405920501165],
[38.877002459052335,40.75574968156597],
]);
var s2SR = ee.ImageCollection('COPERNICUS/S2_SR')
//filter start and end date
.filter(ee.Filter.calendarRange(2020,2020,'year'))
.filter(ee.Filter.calendarRange(8,8,'month'))
//filter according to drawn boundary
.filterBounds(geometry)
.filterMetadata('CLOUD_COVERAGE_ASSESSMENT', 'less_than',10);
//Map.addLayer(s2SR, {bands:['B4', 'B3', 'B2'], min:0, max:8000}, 's2SR');
// adding mNDWI function
var addMNDWI = function(image) {
var mndwi = ee.Image(image).normalizedDifference(['B3', 'B11']).rename('MNDWI');
return ee.Image(image).addBands(mndwi);
};
var mndwi = s2SR.map(addMNDWI);
Map.addLayer(mndwi.first(), {min: 245, max: 5000}, 'mndwi');
It is simple to view the values for any displayed image. First, click on the “Inspector” tab in the top right pane of the Earth Engine Code Editor.
Then, click wherever you want on the map. The Inspector tab will display:
The coordinates of the location you clicked.
The values of every band of every image under that point (when there are many images, as a chart).
The details of the image (or feature), including properties.
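If you want the same values in a script rather than through the UI, a minimal sketch using reduceRegion; the point coordinates here are made up, so substitute the location you would otherwise click:

// Hypothetical point; replace with the location you want to inspect
var point = ee.Geometry.Point([39.5, 41.3]);
// Sample the first mNDWI image at that point (10 m matches Sentinel-2's finest bands)
var values = mndwi.first().reduceRegion({
  reducer: ee.Reducer.first(),
  geometry: point,
  scale: 10
});
print('Band values at point:', values);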
Say I want to render a shape of a human outline.
Ideally it could be converted to 3D by extruding, but even if it has no depth, that's fine for my use case.
I think the easiest way would be to take a transparent PNG image (the human outline) and use it as the source for an a-plane:
<a-plane material="src: img.png; transparent: true"></a-plane>
Glitch here.
...but if you want to create a geometry with a custom shape, which will be helpful for extrusion, then check this out:
Creating a simple shape with the underlying THREE.js
First you need an array of 2D points:
let points = [];
points.push(new THREE.Vector2(0, 0));
// and so on for as many as you want
Create a THREE.Shape object whose vertices will be created from the array:
var shape = new THREE.Shape(points);
Create a mesh with the shape geometry and any material, and add it to the scene or entity:
var geometry = new THREE.ShapeGeometry(shape);
var material = new THREE.MeshBasicMaterial({
color: 0x00ff00
});
var mesh = new THREE.Mesh(geometry, material);
entity.object3D.add(mesh);
More on:
1) THREE.Shape
2) THREE.ShapeGeometry
3) THREE.Mesh
Extrusion
Instead of the ShapeGeometry you can use the ExtrudeGeometry object:
var extrudedGeometry = new THREE.ExtrudeGeometry(shape, {amount: 5, bevelEnabled: false});
Where amount is basically the "thickness" (the option was renamed depth in newer three.js versions). More on THREE.ExtrudeGeometry here.
Usage with AFRAME
I'd recommend creating an AFRAME custom component:
JS:
AFRAME.registerComponent('foo', {
  init: function() {
    // mesh creation goes here
    this.el.object3D.add(mesh);
  }
});
HTML:
<a-entity foo></a-entity>
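Putting the pieces together, a minimal sketch of a component that builds and extrudes a shape (the triangle points are arbitrary placeholders for the human-outline points):

AFRAME.registerComponent('extruded-shape', {
  init: function () {
    // Arbitrary 2D outline; replace with your own points
    var points = [
      new THREE.Vector2(0, 0),
      new THREE.Vector2(1, 0),
      new THREE.Vector2(0.5, 1.5)
    ];
    var shape = new THREE.Shape(points);
    // Use {amount: 0.2} instead of {depth: 0.2} on older three.js versions
    var geometry = new THREE.ExtrudeGeometry(shape, {depth: 0.2, bevelEnabled: false});
    var material = new THREE.MeshBasicMaterial({color: 0x00ff00});
    this.el.object3D.add(new THREE.Mesh(geometry, material));
  }
});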
2D shape here.
Extruded 2D shape here.
Three.js examples here. They are quite a bit more complicated than my polygons :)
There are also a couple of pre-built A-Frame components you could use to help with extrusion.
https://github.com/JosePedroDias/aframe-extrude-and-lathe
https://github.com/luiguild/aframe-svg-extruder
You can find examples of usage in each of those repos.
I have created a dice entity from six plane entities. However, when I click and drag the dice entity, instead of the whole dice moving, only the one face I grabbed gets dragged.
This can be tried hands-on at http://shrouded-chamber-73425.herokuapp.com/
Rather than creating a box from six planes, you should create one box and use a material that will render the dice faces for you. You can use three.js CubeTexture:
AFRAME.registerComponent('dice-texture', {
  init: function () {
    var box = this.el.getOrCreateObject3D('mesh');
    var loader = new THREE.CubeTextureLoader();
    loader.setPath('/images/diceTextures/');
    // One image per cube face, in +x, -x, +y, -y, +z, -z order
    var textureCube = loader.load([
      '1.png', '2.png',
      '3.png', '4.png',
      '5.png', '6.png'
    ]);
    box.material = new THREE.MeshStandardMaterial({envMap: textureCube});
  }
});
<a-entity geometry="primitive: box" dice-texture></a-entity>
Then you can further optimize so that every box shares the same material, so you aren't creating a new one each time, as sketched below.
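A minimal sketch of that optimization, assuming the same texture path as above: build the material once and hand it to every box.

var sharedDiceMaterial = null; // created once, reused by every dice entity

AFRAME.registerComponent('dice-texture', {
  init: function () {
    if (!sharedDiceMaterial) {
      var loader = new THREE.CubeTextureLoader().setPath('/images/diceTextures/');
      sharedDiceMaterial = new THREE.MeshStandardMaterial({
        envMap: loader.load(['1.png', '2.png', '3.png', '4.png', '5.png', '6.png'])
      });
    }
    this.el.getOrCreateObject3D('mesh').material = sharedDiceMaterial;
  }
});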
I have found many examples of three.js reflecting images, but is it at all possible to reflect lines, triangles, and shapes? I want to create a mirror pyramid that reflects lines.
For example: http://www.gus.graphics/buffer.html (this page has lots of lines).
I want to reflect them onto a 3D shape that sits in the middle.
For example: http://www.gus.graphics/ball1.html (this page has a mirror ball).
These are the sort of lines of code I am looking at. Not sure if it's even possible:
var textureCube = THREE.ImageUtils.loadTextureCube( urls );
var material = new THREE.MeshBasicMaterial( { color: 0xffffff, envMap: textureCube } )
shader.uniforms[ "tCube" ].value = textureCube;
At the moment that code above is taking in a bunch of images "urls", but as you probably know by now I want to reflect the geometry in the first link I provided.
You can take a look at THREE.CubeCamera. It creates six cameras that render to a WebGLRenderTargetCube, which you can then use as an envMap. An example would be:
//Create cube camera
var cubeCamera = new THREE.CubeCamera( 1, 100000, 128 );
scene.add( cubeCamera );
//Create material and mesh
var chromeMaterial = new THREE.MeshLambertMaterial( { color: 0xffffff, envMap: cubeCamera.renderTarget } );
var car = new THREE.Mesh( carGeometry, chromeMaterial );
scene.add( car );
//Update the render target cube; hide the car so it doesn't appear in its own reflection
car.visible = false;
cubeCamera.position.copy( car.position );
cubeCamera.updateCubeMap( renderer, scene ); // renamed update() in newer three.js versions
//Render the scene
car.visible = true;
renderer.render( scene, camera );
See this: http://threejs.org/docs/#Reference/Cameras/CubeCamera
There are also usage examples here:
http://threejs.org/examples/#webgl_materials_cubemap_dynamic
http://threejs.org/examples/#webgl_materials_cubemap_dynamic2
I have an image loaded from a URL and added to the canvas as a child. Then I drag and drop another image onto it, which also uses the Senocular transform tool so the image can be transformed on the canvas. I have coded it such that the transform handles show up only after the image is dropped on the canvas. The image shows up correctly. But when I try to save the resulting image (the main image plus the dropped image on top of it), I only end up with the main image that was loaded earlier. The dropped image doesn't show up.
Below is the code for handleDrop(), which is fired on the dragDrop event and prepares the final image. What am I doing wrong?
var dragInitiator:IUIComponent = dragEvent.dragInitiator;
var dropTarget:IUIComponent = dragEvent.currentTarget as IUIComponent;
var tool:TransformTool = new TransformTool(new ControlSetStandard());
var items:String = dragEvent.dragSource.dataForFormat("items") as String;
var img:Image = new Image();
img.x=50;
img.y=50;
img.width=55;
img.height=55;
img.source=items.toString();
var bitmap:Bitmap= Bitmap(img.content);
var component:UIComponent = new UIComponent( );
tool.target = img;
tool.x=myCanvas.x;
tool.y=myCanvas.y;
addElement(component);
myCanvas.addChild(img);
img.z=myCanvas.z+1;
component.addChild(tool);
original = new BitmapData(bmd.width, bmd.height, true, 0x000000FF); // bmd is assumed to be defined elsewhere in this class
original.draw(myCanvas);
Just because you added the image to the canvas doesn't mean it has been drawn yet. Either listen for the updateComplete event on the image, or use callLater with a function that then draws the bitmap.
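For example, a minimal sketch of the updateComplete route, assuming it runs where img, myCanvas, and original are in scope, with mx.events.FlexEvent imported:

// Capture the canvas only after the dropped image has actually rendered
img.addEventListener(FlexEvent.UPDATE_COMPLETE, function(event:FlexEvent):void {
    original = new BitmapData(myCanvas.width, myCanvas.height, true, 0x000000FF);
    original.draw(myCanvas);
});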