DirectX 11 Alpha Blending Not Working

Okay, so I have been trying to get alpha blending to work in my 3D application, but it just doesn't want to happen. I am drawing 2D images with an orthographic projection at the very end of the rendering loop (depth testing remains enabled), and the image textures have transparent parts, but those parts render black.
Here is my blending code:
D3D11_BLEND_DESC blendStateDesc;
ZeroMemory(&blendStateDesc, sizeof(D3D11_BLEND_DESC));
blendStateDesc.AlphaToCoverageEnable = FALSE;
blendStateDesc.IndependentBlendEnable = FALSE;
blendStateDesc.RenderTarget[0].BlendEnable = TRUE;
blendStateDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
blendStateDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
blendStateDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
blendStateDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_SRC_ALPHA;
blendStateDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_DEST_ALPHA;
blendStateDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
blendStateDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
if (FAILED(device->CreateBlendState(&blendStateDesc, &blendState))) {
    printf("Failed To Create Blend State\n");
}
deviceContext->OMSetBlendState(blendState, NULL, 0xFFFFFF);
And if it helps, here is the texture description:
D3D11_TEXTURE2D_DESC texDesc;
texDesc.Width = textureWidth;
texDesc.Height = textureHeight;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D11_USAGE_IMMUTABLE;
texDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
texDesc.CPUAccessFlags = 0;
texDesc.MiscFlags = 0;
I am only using a single render target and not pre-multiplying alpha inside the shaders. I have looked everywhere and tried all manner of different combinations for the D3D11_BLEND_DESC, but nothing has worked.
The closest I can get is when I set AlphaToCoverageEnable to TRUE, but then it doesn't respond when I change the alpha of the vertices, and I know that for what I'm doing AlphaToCoverage should be FALSE.

I know this is a very old question, but I had a similar issue.
What I had to do was make sure the blend state is set/enabled before drawing and then disabled after the draw call for the transparent 2D image. More information can be found here: DirectX Image texture quad displays underlying controls color where it is transparent
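As a rough illustration, here is a minimal sketch of that ordering, reusing deviceContext and blendState from the question; DrawTransparent2DImages is a hypothetical stand-in for your 2D pass, and the sample mask is written out to all 32 bits:
float blendFactor[4] = { 0.0f, 0.0f, 0.0f, 0.0f }; // ignored by SRC_ALPHA/INV_SRC_ALPHA blending
deviceContext->OMSetBlendState(blendState, blendFactor, 0xFFFFFFFF); // enable just before the 2D pass

DrawTransparent2DImages(); // hypothetical: your orthographic 2D draw call(s)

deviceContext->OMSetBlendState(NULL, NULL, 0xFFFFFFFF); // restore the default (no blending) state afterwards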

Related

How to add a FontAwesome icon as a layer in GeoTools using JavaFX (desktop app)?

I have been using Java 17 and I'm unable to add icons to the map as a layer. Please help me.
void drawTarget(double x, double y) {
    SimpleFeatureTypeBuilder builder = new SimpleFeatureTypeBuilder();
    builder.setName("MyFeatureType");
    builder.setCRS(DefaultGeographicCRS.WGS84); // set crs
    builder.add("location", LineString.class); // add geometry
    // build the type
    SimpleFeatureType TYPE = builder.buildFeatureType();
    // create features using the type defined
    SimpleFeatureBuilder featureBuilder = new SimpleFeatureBuilder(TYPE);
    // GeometryFactory geometryFactory = JTSFactoryFinder.getGeometryFactory();
    // Coordinate[] coords =
    //     new Coordinate[] {new Coordinate(79, 25.00), new Coordinate(x, y)};
    // line = geometryFactory.createLineString(coords);
    // ln = new javafx.scene.shape.Line();
    FontAwesomeIcon faico = new FontAwesomeIcon();
    faico.setIconName("FIGHTER_JET");
    faico.setX(76);
    faico.setY(25);
    faico.setVisible(true);
    // TranslateTransition trans = new TranslateTransition();
    // trans.setNode(faico);
    featureBuilder.add(faico);
    SimpleFeature feature = featureBuilder.buildFeature("FeaturePoint");
    DefaultFeatureCollection featureCollection = new DefaultFeatureCollection("external", TYPE);
    featureCollection.add(feature); // Add feature 1, 2, 3, etc
    Style style5 = SLD.createLineStyle(Color.YELLOW, 2f);
    Layer layer5 = new FeatureLayer(featureCollection, style5);
    map.addLayer(layer5);
    // mapFrame.getMapPane().repaint();
}
I want to add a FontAwesome icon to the map.
Currently, your code is attempting to use an Icon as a Geometry in your feature. I'm guessing that's what isn't working, since you don't say.
If you want to use an Icon to display the location of a Feature then you will need two things.
A valid geometry in your feature, probably a point (since an Icon is normally a point)
A valid Style to be used by the Renderer to draw your feature(s) on the map. Currently, you are asking for the line in your feature to be drawn using a yellow line (style5 = SLD.createLineStyle(Color.YELLOW, 2f);).
I can't really help with step 1, since I don't know where your fighter jet currently is.
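That said, since the feature type in the question declares a LineString while an icon marks a single location, a minimal sketch of step 1 might look like this (the coordinates are placeholders, and the attribute is named the_geom to match the symbolizer further down):
// Declare a Point geometry; the icon itself is not the geometry.
SimpleFeatureTypeBuilder builder = new SimpleFeatureTypeBuilder();
builder.setName("MyFeatureType");
builder.setCRS(DefaultGeographicCRS.WGS84);
builder.add("the_geom", Point.class);
SimpleFeatureType TYPE = builder.buildFeatureType();

// Put the jet's position (placeholder coordinates) into the feature.
GeometryFactory geometryFactory = JTSFactoryFinder.getGeometryFactory();
SimpleFeatureBuilder featureBuilder = new SimpleFeatureBuilder(TYPE);
featureBuilder.add(geometryFactory.createPoint(new Coordinate(76, 25)));
SimpleFeature feature = featureBuilder.buildFeature("FeaturePoint");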
For step 2, I suggest you look at the SLD resources to get some clues about how the styling system works before going on to the manual to see how GeoTools implements it.
Since you are trying to add an Icon, I suspect you'd need something like:
// sf is a StyleFactory and ff a FilterFactory; svg/png point at static image versions of the icon
List<GraphicalSymbol> symbols = new ArrayList<>();
symbols.add(sf.externalGraphic(svg, "svg", null)); // svg preferred
symbols.add(sf.externalGraphic(png, "png", null)); // png next
symbols.add(sf.mark(ff.literal("circle"), fill, stroke)); // simple circle backup plan
Expression opacity = null; // use default
Expression size = ff.literal(10);
Expression rotation = null; // use default
AnchorPoint anchor = null; // use default
Displacement displacement = null; // use default
// define a point symbolizer drawing the graphic above
Graphic city = sf.graphic(symbols, opacity, size, rotation, anchor, displacement);
PointSymbolizer pointSymbolizer =
        sf.pointSymbolizer("point", ff.property("the_geom"), null, null, city);
rule1.symbolizers().add(pointSymbolizer);
featureTypeStyle.rules().add(rule1);
But that assumes you can convert your FontAwesomeIcon into a static representation that the renderer can draw (PNG, SVG). If it doesn't work like that (I don't use JavaFX), then you may need to add a new MarkFactory to handle them.
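For a rough idea of what that could look like, here is a hedged sketch of a custom MarkFactory; it assumes the org.geotools.renderer.style.MarkFactory SPI interface, a made-up fontawesome:// mark name, and a FontAwesome TTF that you bundle yourself:
import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.Shape;
import java.awt.font.FontRenderContext;
import org.geotools.renderer.style.MarkFactory;
import org.opengis.feature.Feature;
import org.opengis.filter.expression.Expression;

public class FontAwesomeMarkFactory implements MarkFactory {
    @Override
    public Shape getShape(Graphics2D graphics, Expression symbolUrl, Feature feature) throws Exception {
        String wellKnownName = symbolUrl.evaluate(feature, String.class);
        if (wellKnownName == null || !wellKnownName.startsWith("fontawesome://")) {
            return null; // not ours; let the other factories have a try
        }
        // Hypothetical: look up the glyph in a FontAwesome TTF shipped with the app.
        Font font = Font.createFont(Font.TRUETYPE_FONT,
                getClass().getResourceAsStream("/fonts/fa-solid-900.ttf")).deriveFont(1f);
        String glyph = wellKnownName.substring("fontawesome://".length());
        return font.createGlyphVector(new FontRenderContext(null, true, true), glyph)
                .getOutline(); // the renderer scales this shape to the symbolizer's size
    }
}
The factory would then be registered via SPI (a META-INF/services/org.geotools.renderer.style.MarkFactory file naming the class) and referenced from a mark's well-known name; treat the details above as a starting point rather than tested code.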

How can I merge geometries in A-Frame without losing material information?

I have a large set of block objects using a custom geometry that I am hoping to merge into a smaller number of larger geometries, as I believe this will reduce rendering costs.
I have been following guidance here: https://aframe.io/docs/1.2.0/introduction/best-practices.html#performance which has led me to the geometry-merger component here:
https://github.com/supermedium/superframe/tree/master/components/geometry-merger/
The A-Frame docs say:
"You can use geometry-merger and then make use a three.js material with vertex colors enabled. three.js geometries keep data such as color, uvs per vertex."
The geometry-merger component also says:
"Useful if using vertex or face coloring as individual geometries' colors can still be manipulated individually since this component keeps a faceIndex and vertexIndex."
However, I have a couple of problems.
If I set vertexColors on my material (as suggested by the A-Frame docs), then this ruins the appearance of my blocks.
Whether or not I set vertexColors on my material, all material information seems to be lost when the geometries are merged, and everything just ends up white.
See this glitch for a demonstration of both problems.
https://tundra-mercurial-garden.glitch.me/
My suspicion is that the A-Frame geometry-merger component just won't do what I need here, and I need to implement something myself using the underlying three.js functions.
Is that right, or is there a way that I could make this work using geometry-merger?
For vertexColors to work, you need to have your vertices coloured :)
More specifically, the BufferGeometry expects an array of rgb values for each vertex, which will be used as the color for the material.
In this bit of code:
var geometry = new THREE.BoxGeometry();
var mat = new THREE.MeshStandardMaterial({color: 0xffffff, vertexColors: THREE.VertexColors}); // VertexColors, not FaceColors, for a per-vertex color attribute
var mesh = new THREE.Mesh(geometry, mat);
The mesh will be black unless the geometry contains information about the vertex colors:
// create a color attribute in the geometry: one rgb triplet per vertex
geometry.setAttribute('color', new THREE.BufferAttribute(new Float32Array(vertices_count * 3), 3));
// grab the array
const colors = geometry.attributes.color.array;
// fill the array with rgb values
const faceColor = new THREE.Color(color_hex);
for (let i = 0; i < colors.length; i += 3) {
    colors[i + 0] = faceColor.r;
    colors[i + 1] = faceColor.g;
    colors[i + 2] = faceColor.b;
}
// tell the geometry to update the color attribute
geometry.attributes.color.needsUpdate = true;
I can't make the buffer-geometry-merger component work for some reason, but its core seems to be valid:
AFRAME.registerComponent("merger", {
    init: function() {
        // replace with an event where all child entities are ready
        setTimeout(this.mergeChildren.bind(this), 500);
    },
    mergeChildren: function() {
        const geometries = [];
        // traverse the children and store all geometries
        this.el.object3D.traverse(node => {
            if (node.type === "Mesh") {
                const geometry = node.geometry.clone();
                geometry.applyMatrix4(node.parent.matrix);
                geometries.push(geometry);
                // dispose of the merged meshes
                node.parent.remove(node);
                node.geometry.dispose();
                node.material.dispose();
            }
        });
        // create a single mesh from the merged geometry
        const mergedGeo = THREE.BufferGeometryUtils.mergeBufferGeometries(geometries);
        const mergedMaterial = new THREE.MeshStandardMaterial({color: 0xffffff, roughness: 0.3, vertexColors: THREE.VertexColors});
        const mergedMesh = new THREE.Mesh(mergedGeo, mergedMaterial);
        this.el.object3D.add(mergedMesh);
    }
})
You can check it out in this glitch. There is also an example of using the vertex colors here (source).
I agree it sounds like you need to consider other solutions. Here are two different instances of instancing with A-Frame:
https://github.com/takahirox/aframe-instancing
https://github.com/EX3D/aframe-InstancedMesh
Neither are perfect or even fully finished, but can hopefully get you started as a guide.
Although my original question was about geometry merging, I now believe that Instanced Meshes were a better solution in this case.
Based on this suggestion I implemented this new A-Frame Component:
https://github.com/diarmidmackenzie/instanced-mesh
This glitch shows the scene from the original glitch being rendered with just 19 draw calls using this component. That compares pretty well with the 200+ calls that would have been required if every object were rendered individually.
https://dull-stump-psychology.glitch.me/
A key limitation is that I was not able to use a single mesh for all the different block colors, but had to use one mesh per color (7 meshes total).
InstancedMesh can support differently colored elements, but each element must have a single color, whereas the elements in this scene had 2 colors each (black frame + face color).
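For reference, the per-instance color mechanism looks roughly like this in plain three.js (a minimal sketch; the geometry, layout, and colors are placeholders, and it assumes an existing scene):
// One InstancedMesh draws many boxes in a single call, each with one color.
const geometry = new THREE.BoxGeometry(1, 1, 1);
const material = new THREE.MeshStandardMaterial();
const count = 100;
const mesh = new THREE.InstancedMesh(geometry, material, count);

const dummy = new THREE.Object3D();
const color = new THREE.Color();
for (let i = 0; i < count; i++) {
    dummy.position.set(i % 10, 0, Math.floor(i / 10)); // placeholder grid layout
    dummy.updateMatrix();
    mesh.setMatrixAt(i, dummy.matrix);
    mesh.setColorAt(i, color.setHSL(i / count, 0.6, 0.5)); // exactly one color per instance
}
mesh.instanceMatrix.needsUpdate = true;
scene.add(mesh); // assumes an existing `scene`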

Altering code to place 2D UIImage as opposed to 3D shape geometry

I've been following a tutorial to create an AR ruler, so with the code below I'm able to place 3D spheres in my scene (which I'm looking to keep for the tracking functionality). However, instead of a 3D shape, I'm looking to place a 2D image. I attempted changing dotGeometry to a UIImage and commenting out the material code, but wasn't sure how to deal with the dotNode piece of code. How would I be able to set my image as the resulting on-screen addition?
let dotGeometry = SCNSphere(radius: 0.005)
let material = SCNMaterial()
material.diffuse.contents = UIColor.red
dotGeometry.materials = [material]
let dotNode = SCNNode(geometry: dotGeometry)
dotNode.position = SCNVector3(hitResult.worldTransform.columns.3.x,
                              hitResult.worldTransform.columns.3.y,
                              hitResult.worldTransform.columns.3.z)
sceneView.scene.rootNode.addChildNode(dotNode)
You could create an SCNBox and set its length to a very small value to make it appear flat.
let box = SCNBox(width: 0.2, height: 0.2, length: 0.005, chamferRadius: 0)
let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "image.png")
box.materials = [material]

let boxNode = SCNNode(geometry: box)
boxNode.opacity = 1.0
boxNode.position = SCNVector3(0, 0, -0.5)
scene.rootNode.addChildNode(boxNode)
Or you could follow another approach and add an overlay SKScene (2D) to the SCNScene (3D), as described here: How can I overlay a SKScene over a SCNScene in Swift?
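Another option, sketched below under the same assumptions as the snippet above (an image named "image.png", plus the hitResult from the question), is an SCNPlane, which has no side faces to hide; the billboard constraint line is optional:
// Sketch: a flat plane instead of a thin box.
let plane = SCNPlane(width: 0.2, height: 0.2)
let planeMaterial = SCNMaterial()
planeMaterial.diffuse.contents = UIImage(named: "image.png")
planeMaterial.isDoubleSided = true // stay visible from both sides
plane.materials = [planeMaterial]

let planeNode = SCNNode(geometry: plane)
planeNode.position = SCNVector3(hitResult.worldTransform.columns.3.x,
                                hitResult.worldTransform.columns.3.y,
                                hitResult.worldTransform.columns.3.z)
planeNode.constraints = [SCNBillboardConstraint()] // optional: keep the image facing the camera
sceneView.scene.rootNode.addChildNode(planeNode)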

three.js mirror not reflecting all meshes

Objective:
To simulate a reflective floor (like this) in three.js.
Idea:
Make the floor translucent by setting opacity to 0.5.
Place a Mirror below it to reflect the meshes above it.
Expected Output:
To be able to see reflections of the house via the floor mirror.
Obtained Output:
It doesn't reflect the meshes that are part of the house.
Instead, it reflects only the skybox, and only at certain camera angles.
Screenshots:
Mirror reflecting skybox fully - http://prntscr.com/6yn52y
Mirror reflecting skybox partially - http://prntscr.com/6yn5f7
Mirror not reflecting anything - http://prntscr.com/6yn5qy
Questions:
Why aren't the other meshes of the house reflected through the mirror?
Why is the mirror not reflecting in certain orientations of the camera?
Code Attached:
.......
.......
function getReflectiveFloorMesh(floorMesh) {
    var WIDTH = window.innerWidth;
    var HEIGHT = window.innerHeight;
    floorMirror = new THREE.Mirror(renderer, firstPerson.camera, {
        clipBias: 0.003,
        textureWidth: WIDTH,
        textureHeight: HEIGHT,
        color: 0x889999
    });
    var mirrorMesh = floorMesh.clone();
    mirrorMesh.position.y -= 10; // Placing the mirror just below the actual translucent floor; Fixme: To be tuned
    mirrorMesh.material = floorMirror.material;
    mirrorMesh.material.side = THREE.BackSide; // Fixme: Normals were flipped. How to decide on normals?
    mirrorMesh.material.needsUpdate = true;
    mirrorMesh.add(floorMirror);
    return mirrorMesh;
}

function getSkybox() {
    var urlPrefix = "/img/skybox/sunset/";
    var urls = [urlPrefix + "px.png", urlPrefix + "nx.png",
                urlPrefix + "py.png", urlPrefix + "ny.png",
                urlPrefix + "pz.png", urlPrefix + "nz.png"];
    var textureCube = THREE.ImageUtils.loadTextureCube(urls);

    // init the cube shader
    var shader = THREE.ShaderLib["cube"];
    shader.uniforms["tCube"].value = textureCube;
    var material = new THREE.ShaderMaterial({
        fragmentShader: shader.fragmentShader,
        vertexShader: shader.vertexShader,
        uniforms: shader.uniforms,
        side: THREE.BackSide
    });

    // build the skybox mesh
    var skyboxMesh = new THREE.Mesh(new THREE.CubeGeometry(10000, 10000, 10000, 1, 1, 1, null, true), material);
    return skyboxMesh;
}

function setupScene(model, floor) {
    scene.add(model); // Adding the house, which contains the translucent floor
    scene.add(getSkybox()); // Adding the skybox
    scene.add(getReflectiveFloorMesh(floor)); // Adds mirror just below the floor
    scope.animate();
}

....
....

this.animate = function () {
    // Render the mirrors
    if (floorMirror)
        floorMirror.render();

    renderer.render(scene, firstPerson.camera);
};
You have to attach the mirror to the mesh before doing any transformation.
So the code would be:
floorMirror = new THREE.Mirror( ... );
var mirrorMesh = floorMesh.clone();
mirrorMesh.add(floorMirror); // attach first!
mirrorMesh.position.y -= 10;
...
But another problem here is that you are cloning mirrorMesh from floorMesh, which has (probably) already been transformed.
At creation, a mirror object has the same default transform matrix as a regular Mesh with a plane geometry (which is 'vertical' by default).
When you attach the mirror to a floor (or any horizontal mesh), the matrices don't match, and that's why you don't see the reflections, or see them only from a certain angle.
So always attach a mirror to a non-transformed plane mesh, before you apply your transformations (translations or rotations).
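A sketch of that order of operations, using the THREE.Mirror API from the question (the plane size and the floorY offset are placeholders):
// Attach the mirror to an untransformed plane, then transform the mesh.
var floorMirror = new THREE.Mirror(renderer, firstPerson.camera, {
    clipBias: 0.003,
    textureWidth: window.innerWidth,
    textureHeight: window.innerHeight,
    color: 0x889999
});

var mirrorGeometry = new THREE.PlaneGeometry(100, 100); // fresh, untransformed plane
var mirrorMesh = new THREE.Mesh(mirrorGeometry, floorMirror.material);
mirrorMesh.add(floorMirror);         // attach first, while the mesh is untransformed
mirrorMesh.rotateX(-Math.PI / 2);    // now lay it flat
mirrorMesh.position.y = floorY - 10; // hypothetical: just below the real floor
scene.add(mirrorMesh);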
Idea:
"Make the floor translucent by setting opacity to 0.5.
Place a Mirror below it to reflect the meshes above it".
I suggest another way: make the floor solid, add the mirror on top, and change the alpha of the mirror instead. I think you have issues with the translucent floor restricting the mirror projection through the alpha.
If you move the mirror out from under the translucent floor, or into an empty scene with just a cube or sphere geometry with a basic material attached, does it reflect as expected?
You may need 2 mirrors: one for the room, assuming you want polished floorboards, and one for general outside reflection.

Zoom into group of points in Flex

I have an application in Flex 4 with a map, a database of points, and a search tool.
When the user types something and runs the search, it returns the name, details, and coordinates of the matching objects in my database.
I have a function that, when I click one of the results of my search, zooms to the selected point on the map.
The question is: I want a function that zooms to all the result points at once. For example, if I search "tall trees" and it returns 10 points, I want the map to zoom to a position where I can see all 10 points at once.
Below is the code I'm using to zoom to one point at a time. I thought Flex would have some kind of "zoom to group of points" function, but I can't find anything like it.
private function ResultDG_Click(event:ListEvent):void
{
    if (event.rowIndex < 0) return;
    var obj:Object = ResultDG.selectedItem;
    if (lastIdentifyResultGraphic != null)
    {
        graphicsLayer.remove(lastIdentifyResultGraphic);
    }
    if (obj != null)
    {
        lastIdentifyResultGraphic = obj.graphic as Graphic;
        switch (lastIdentifyResultGraphic.geometry.type)
        {
            case Geometry.MAPPOINT:
                lastIdentifyResultGraphic.symbol = objPointSymbol;
                var point:MapPoint = lastIdentifyResultGraphic.geometry as MapPoint;
                _map.extent = new Extent(point.x - 0.05, point.y - 0.05,
                                         point.x + 0.05, point.y + 0.05,
                                         new SpatialReference(29101)).expand(0.001);
                break;
            case Geometry.POLYLINE:
                lastIdentifyResultGraphic.symbol = objPolyLineSymbol;
                _map.extent = lastIdentifyResultGraphic.geometry.extent.expand(0.001);
                break;
            case Geometry.POLYGON:
                lastIdentifyResultGraphic.symbol = objPolygonSymbol;
                _map.extent = lastIdentifyResultGraphic.geometry.extent.expand(0.001);
                break;
        }
        graphicsLayer.add(lastIdentifyResultGraphic);
    }
}
See the GraphicUtil class from the com.esri.ags.utils package. You can use its getGraphicsExtent method to generate an extent from an array of Graphics, and then use that extent to set the zoom of your map:
var graphics:ArrayCollection = graphicsLayer.graphicProvider as ArrayCollection;
var graphicsArr:Array = graphics.toArray();

// Create an extent from the currently selected graphics
var uExtent:Extent = GraphicUtil.getGraphicsExtent(graphicsArr);

// Zoom to the extent created
if (uExtent)
{
    map.extent = uExtent;
}
In this case, it would zoom to the full content of your graphics layer. You can always create an array containing only the features you want to zoom to (see the sketch after the note below). If you find that the zoom is too close to your data, you can also use map.zoomOut() after setting the extent.
Note: Be careful if you've got TextSymbols in your graphics, as they will break GraphicUtil. In that case, you need to filter out the Graphics with TextSymbols.
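For instance, a hedged sketch along those lines, where searchResults is a hypothetical array of the Graphic objects returned by the search:
// Build an extent from just the search-result graphics.
var resultGraphics:Array = [];
for each (var g:Graphic in searchResults) // hypothetical: your search hits
{
    if (!(g.symbol is TextSymbol)) // skip TextSymbols, which break GraphicUtil
    {
        resultGraphics.push(g);
    }
}
var uExtent:Extent = GraphicUtil.getGraphicsExtent(resultGraphics);
if (uExtent)
{
    map.extent = uExtent.expand(1.2); // a little padding around the points
}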
Derp: didn't see that the thread was 5 months old... Hope my answer helps other people.
