I'm currently working on a project in OpenLayers where we allow users to draw polygon shapes. These shapes may only be drawn inside another polygon, which for clarity I will call the Trackable Area. To meet one of the customer's requirements, I need to find the empty space of the Trackable Area. I've already managed to compute the area size of the empty space, but I am lost on how to obtain the empty space as a polygon.
Please see the following picture as an example. The "rectangle" is the Trackable Area polygon, and the colorful shapes are the polygons drawn by the user. The white space is what I aim to calculate as another polygon.
I found a solution by creating a multipolygon containing all drawn polygons and then using the JSTS library to calculate the difference between the Trackable Area polygon and the multipolygon:
import Feature from 'ol/Feature';
import GeoJSON from 'ol/format/GeoJSON';
import MultiPolygon from 'ol/geom/MultiPolygon';
import * as jsts from 'jsts';

// JSTS GeoJSON reader/writer
const jstsGeoJSONReader = new jsts.io.GeoJSONReader();
const jstsGeoJSONWriter = new jsts.io.GeoJSONWriter();
// OpenLayers GeoJSON format
const geoJSONFormat = new GeoJSON();
// parse the trackable-area geometry to JSTS
const trackableAreaLayer = this.mapService.getLayerByName(LAYERS.TRACKABLE_AREA);
const trackableAreaFeature = trackableAreaLayer.getSource().getFeatureById('Trackable Area');
const trackableAreaGeomJSTS = jstsGeoJSONReader.read(geoJSONFormat.writeFeatureObject(trackableAreaFeature)).geometry;
// create a MultiPolygon consisting of all drawn zones
const multiPolygon = new MultiPolygon([]);
const zoneSource = drawnAreaLayer.getSource();
const zoneFeatures = zoneSource.getFeatures();
zoneFeatures.forEach(zone => {
  multiPolygon.appendPolygon(zone.getGeometry());
});
// parse the MultiPolygon to JSTS
const multiPolygonJSTS = jstsGeoJSONReader.read(geoJSONFormat.writeGeometry(multiPolygon));
// calculate the difference between the trackable area and the MultiPolygon
const unmappedAreaJSTS = trackableAreaGeomJSTS.difference(multiPolygonJSTS);
const unmappedAreaGeoJSON = jstsGeoJSONWriter.write(unmappedAreaJSTS);
// parse the result from GeoJSON back to an OpenLayers geometry
const unmappedAreaMultiPolygonGeometry = geoJSONFormat.readGeometry(unmappedAreaGeoJSON);
// create an OpenLayers feature from the parsed geometry
const unmappedAreaFeature = new Feature({
  geometry: unmappedAreaMultiPolygonGeometry
});
// add the feature to the layer's source
zoneSource.addFeature(unmappedAreaFeature);
One possibility would be to subtract the summed area of all the polygons drawn by the user inside the trackable area (S) from the area of the trackable area (T):
E = T - S, where E is the empty area.
In the newest version of the library, you just need to use the function getArea (OL API Docs - Polygon getArea).
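As a minimal sketch of that subtraction, assuming the trackableAreaFeature and zoneFeatures variables from the snippet above:
const T = trackableAreaFeature.getGeometry().getArea();
const S = zoneFeatures.reduce((sum, zone) => sum + zone.getGeometry().getArea(), 0);
const E = T - S; // empty area, in squared units of the map projection
Note that getArea() returns a planar value in projected units; for geodesic areas, OpenLayers offers getArea() from ol/sphere instead.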
As a newbie to Google Earth Engine, I have been trying something (https://code.earthengine.google.com/6f45059a59b75757c88ce2d3869fc9fd) following a NASA tutorial (https://www.youtube.com/watch?v=JFvxudueT_k&ab_channel=NASAVideo). My last line (line 60) throws "image.filter is not a function", while the corresponding line in the tutorial (line 34) works. I am not sure what happened and how to sort this out?
//creating a new variable 'image' from the imported L8 collection data
var image = ee.Image (L8_tier1 //the details in the data will represent that the band resolution is 30m
  //.filterDate ("2019-07-01","2021-10-03") //for a specific date range; maybe good to remove it for the function
  .filterBounds (ROI) //for the region of interest we are interested in
  //.sort ("CLOUD_COVER") //for sorting the data by cloud cover, the metadata property we are interested in; another way to do this is using the function below
  //.first() //this will choose the first image with the least amount of cloud cover for the area; another way to do this is using the function below
);
//print ("Hague and Rotterdam", image); //printing the image in the console
//console on the right hand side will explain everything from the data
//id will show the image details and the date of the image, in this case 29th July 2019
//under the properties tab cloud cover can be found, this is the least we can get for this area during this period
// //visualization of the data in the map with true color rendering
// var trueColour = {
// bands:["SR_B4","SR_B3","SR_B2"],
// min: 5000,
// max: 12000
// };
// Map.centerObject (ROI, 12); //for the centering the area in the center of the map with required zoom level
// Map.addLayer (image, trueColour, "Hague and Rotterdam"); //for adding the image with the variable of bands we made and naming the image
//Alternate way
//Function to mask clouds using the qa_pixel band of Landsat 8 SR data. In this case bits 3 and 4 are clouds and cloud shadow respectively. This can be different for different image sets.
function maskL8sr(image) {
  var cloudsBitMask = 1 << 3; //remember to check this with the source
  var cloudShadowBitMask = 1 << 4; //remember to check this with the source
  var qa = image.select ('qa_pixel'); //creating the new variable from the band of the source image
  var mask = qa.bitwiseAnd(cloudsBitMask).eq(0) //making the cloud bits equal to zero to mask them out
    .and(qa.bitwiseAnd(cloudShadowBitMask).eq(0)); //making the cloud shadow bits equal to zero to mask them out
  return image.updateMask(mask).divide(10000)
    .select("SR_B[0-9]*")
    .copyProperties(image, ["system:time_start"]);
}
// print ("Hague and Rotterdam", image);// look into the console now. How many images the code have downloaded!!!
//filtering imagery for 2015 to 2021 summer date ranges
//creating joint filter and applying to image collection
var sum21 = ee.Filter.date ('2021-06-01','2021-09-30');
var sum20 = ee.Filter.date ('2020-06-01','2020-09-30');
var sum19 = ee.Filter.date ('2019-06-01','2019-09-30');
var sum18 = ee.Filter.date ('2018-06-01','2018-09-30');
var sum17 = ee.Filter.date ('2017-06-01','2017-09-30');
var sum16 = ee.Filter.date ('2016-06-01','2016-09-30');
var sum15 = ee.Filter.date ('2015-06-01','2015-09-30');
var SumFilter = ee.Filter.or(sum21, sum20, sum19, sum18, sum17, sum16, sum15);
var allsum = image.filter(SumFilter);
Filtering is an operation you can do on ImageCollections, not individual Images, because all filtering does is choose a subset of the images. Then, in your script, you have (with the comments removed):
var image = ee.Image (L8_tier1
.filterBounds (ROI)
);
The result of L8_tier1.filterBounds(ROI) is indeed an ImageCollection. But in this case, you have told the Earth Engine client that it should be treated as an Image, and it believed you. So the last line
var allsum = image.filter(SumFilter);
fails with the error you saw because there is no filter() on ee.Image.
The script will successfully run if you change ee.Image(...) to ee.ImageCollection(...), or even better, remove the cast because it's not necessary — that is,
var image = L8_tier1.filterBounds(ROI);
You should probably also rename var image, since it is confusing to call an ImageCollection by the name image. Naming things accurately helps avoid mistakes, both while you are working on the code and when others try to read it or build on it.
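Putting it together, a minimal sketch of the corrected ending of the script, assuming L8_tier1, ROI, and SumFilter are defined as above:
var l8Collection = L8_tier1.filterBounds(ROI); //an ImageCollection, so filter() is available
var allsum = l8Collection.filter(SumFilter);
print('Summer images', allsum.size()); //sanity check: count of images left after filtering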
I have a large set of block objects using a custom geometry that I am hoping to merge into a smaller number of larger geometries, as I believe this will reduce rendering costs.
I have been following guidance here: https://aframe.io/docs/1.2.0/introduction/best-practices.html#performance which has led me to the geometry-merger component here:
https://github.com/supermedium/superframe/tree/master/components/geometry-merger/
The A-Frame docs say:
"You can use geometry-merger and then make use a three.js material with vertex colors enabled. three.js geometries keep data such as color, uvs per vertex."
The geometry-merger component also says:
"Useful if using vertex or face coloring as individual geometries' colors can still be manipulated individually since this component keeps a faceIndex and vertexIndex."
However I have a couple of problems.
If I set vertexColors on my material (as suggested by the A-Frame docs), then this ruins the appearance of my blocks.
Whether or not I set vertexColors on my material, all material information seems to be lost when the geometries are merged, and everything just ends up white.
See this glitch for a demonstration of both problems.
https://tundra-mercurial-garden.glitch.me/
My suspicion is that the A-Frame geometry-merger component just won't do what I need here, and I need to implement something myself using the underlying three.js functions.
Is that right, or is there a way that I could make this work using geometry-merger?
For the vertexColors to work, you need to have your vertices coloured :)
More specifically - the BufferGeometry expects an array of RGB values for each vertex, which will be used as the color for the material.
In this bit of code:
var geometry = new THREE.BoxGeometry();
// vertex colors must be enabled on the material (THREE.VertexColors, or vertexColors: true in newer three.js)
var mat = new THREE.MeshStandardMaterial({color: 0xffffff, vertexColors: THREE.VertexColors});
var mesh = new THREE.Mesh(geometry, mat);
The mesh will be black unless the geometry contains information about the vertex colors:
// create a color attribute in the geometry: three components (r, g, b) per vertex
geometry.setAttribute('color', new THREE.BufferAttribute(new Float32Array(vertices_count * 3), 3));
// grab the underlying array
const colors = geometry.attributes.color.array;
// fill the array with rgb values
const faceColor = new THREE.Color(color_hex);
for (let i = 0; i < colors.length; i += 3) {
  colors[i + 0] = faceColor.r;
  colors[i + 1] = faceColor.g;
  colors[i + 2] = faceColor.b;
}
// tell the geometry to update the color attribute
geometry.attributes.color.needsUpdate = true;
I can't get the buffer-geometry-merger component to work for some reason, but its core seems to be valid:
AFRAME.registerComponent("merger", {
  init: function() {
    // replace with an event where all child entities are ready
    setTimeout(this.mergeChildren.bind(this), 500);
  },
  mergeChildren: function() {
    const geometries = [];
    // traverse the children and store all geometries
    this.el.object3D.traverse(node => {
      if (node.type === "Mesh") {
        const geometry = node.geometry.clone();
        geometry.applyMatrix4(node.parent.matrix);
        geometries.push(geometry);
        // dispose of the merged meshes
        node.parent.remove(node);
        node.geometry.dispose();
        node.material.dispose();
      }
    });
    // create a single mesh from the merged geometry
    const mergedGeo = THREE.BufferGeometryUtils.mergeBufferGeometries(geometries);
    const mergedMaterial = new THREE.MeshStandardMaterial({color: 0xffffff, roughness: 0.3, vertexColors: THREE.VertexColors});
    const mergedMesh = new THREE.Mesh(mergedGeo, mergedMaterial);
    this.el.object3D.add(mergedMesh);
  }
})
You can check it out in this glitch. There is also an example of using vertex colors here (source).
I agree it sounds like you need to consider other solutions. Here are two different implementations of instancing with A-Frame:
https://github.com/takahirox/aframe-instancing
https://github.com/EX3D/aframe-InstancedMesh
Neither are perfect or even fully finished, but can hopefully get you started as a guide.
Although my original question was about geometry merging, I now believe that Instanced Meshes are a better solution in this case.
Based on this suggestion I implemented this new A-Frame Component:
https://github.com/diarmidmackenzie/instanced-mesh
This glitch shows the scene from the original glitch being rendered with just 19 calls using this component. That compares pretty well with > 200 calls that would have been required if every object were rendered individually.
https://dull-stump-psychology.glitch.me/
A key limitation is that I was not able to use a single mesh for all the different block colors, but had to use one mesh per color (7 meshes total).
InstancedMesh can support different colored elements, but each element must have a single color, whereas the elements in this scene had 2 colors each (black frame + face color).
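To illustrate the approach, here is a rough three.js sketch of the per-color instancing idea (not the actual component; the geometry, count, positions, and scene variable are placeholders):
// one InstancedMesh per face color; each renders all of its blocks in a single draw call
const geometry = new THREE.BoxGeometry(1, 1, 1);
const material = new THREE.MeshStandardMaterial({ color: 0xff0000 });
const count = 30;
const instancedMesh = new THREE.InstancedMesh(geometry, material, count);
const dummy = new THREE.Object3D();
for (let i = 0; i < count; i++) {
  // position each block instance via its own transform matrix
  dummy.position.set(i % 5, Math.floor(i / 5), 0);
  dummy.updateMatrix();
  instancedMesh.setMatrixAt(i, dummy.matrix);
}
scene.add(instancedMesh);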
I have a big data set of coordinates and would like to place them into groups falling under 30-mile-radius circles. And I need these circles to cover the entire US land area. Overlapping circles are allowed. Is there a way to do this? Any help would be much appreciated. Thank you.
I wrote an npm package a while back that will help you work with locations on earth.
You can see a jsfiddle I made that generates random points on a circle centered at some location. The code below is pasted from the jsfiddle because SO wants code when you include a fiddle link, but you are better off experimenting with the fiddle. The function pointAtDistance() implements the Haversine formula.
For the packing, I'd attempt a hexagonal formation: make a grid and eliminate the circles that don't intersect land. As the earth is a sphere, you should probably find the number of degrees of longitude, at the latitude closest to the equator, that represents your offset so that 30-mile circles still overlap. Then, using that angle, the circles further north will overlap more than needed, but at least there won't be gaps, and the structure is easy to reason about. (A sketch of this spacing calculation follows the code below.)
// helper constants and conversions assumed by the snippet (not shown in the fiddle excerpt)
const TWO_PI = Math.PI * 2
const THREE_PI = Math.PI * 3
const EARTH_RADIUS = 3959 // miles; pass distance in the same unit

function toRadians(coords) {
  return { latitude: coords.latitude * Math.PI / 180, longitude: coords.longitude * Math.PI / 180 }
}

function toDegrees(coords) {
  return { latitude: coords.latitude * 180 / Math.PI, longitude: coords.longitude * 180 / Math.PI }
}

function pointAtDistance(inputCoords, distance) {
  const result = {}
  const coords = toRadians(inputCoords)
  const sinLat = Math.sin(coords.latitude)
  const cosLat = Math.cos(coords.latitude)
  const bearing = Math.random() * TWO_PI
  const theta = distance / EARTH_RADIUS
  const sinBearing = Math.sin(bearing)
  const cosBearing = Math.cos(bearing)
  const sinTheta = Math.sin(theta)
  const cosTheta = Math.cos(theta)
  result.latitude = Math.asin(sinLat * cosTheta + cosLat * sinTheta * cosBearing)
  result.longitude = coords.longitude +
    Math.atan2(sinBearing * sinTheta * cosLat, cosTheta - sinLat * Math.sin(result.latitude))
  // normalize longitude to the range [-PI, PI)
  result.longitude = ((result.longitude + THREE_PI) % TWO_PI) - Math.PI
  return toDegrees(result)
}
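And here is a rough sketch of the hexagonal spacing idea mentioned above (the 3959-mile earth radius and the ~24.5°N southernmost latitude of the contiguous US are assumptions):
// circles of radius r on a hexagonal lattice cover the plane when centers are at most r*sqrt(3) apart
const circleRadiusMiles = 30
const spacingMiles = circleRadiusMiles * Math.sqrt(3)
// convert the spacing to degrees of longitude at the latitude closest to the equator (~24.5°N),
// where a degree of longitude spans the most miles; circles further north then overlap more
const EARTH_RADIUS_MILES = 3959
const milesPerDegreeLongitude = (Math.PI / 180) * EARTH_RADIUS_MILES * Math.cos(24.5 * Math.PI / 180)
const longitudeStepDegrees = spacingMiles / milesPerDegreeLongitude // roughly 0.83 degrees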
If we have a NON-axis-aligned box, how can we best check whether a point lies inside it? (I'm using three.js, so any utility from there can be of help. three.js has a bounding-box concept, but that is an axis-aligned bounding box.)
If your box is a THREE.BoxGeometry that is rotated, translated and scaled, then you can use its transformation matrix m to find if it intersects your point v:
transform v and the box by the inverse of m
check if transformed v is inside the transformed box (which is now axis aligned)
Here is the code:
var box = <Your non-aligned box>
var point = <Your point>
box.geometry.computeBoundingBox(); // This is only necessary if not already computed
box.updateMatrixWorld(true); // This might be necessary if box is moved
var boxMatrixInverse = new THREE.Matrix4().getInverse(box.matrixWorld);
var inverseBox = box.clone();
var inversePoint = point.clone();
inverseBox.applyMatrix(boxMatrixInverse);
inversePoint.applyMatrix4(boxMatrixInverse);
var bb = new THREE.Box3().setFromObject(inverseBox);
var isInside = bb.containsPoint(inversePoint);
And here is a running demonstration: https://jsfiddle.net/holgerl/q0z979uy/
I was wondering if there was a way to obtain the bounding box for the models that are inserted via 3dio.js, or otherwise calculate their center points? I'm looking to center them on the origin.
The images below show two models relative to the scene origin indicated by the red box.
You can access the three.js object of the 3d.io entity like this:
var threeElem = document.getElementById("custom-id").components['io3d-data3d'].data3dView.threeParent
Then you can use the native bounding box from three.js:
var bbox = new THREE.Box3().setFromObject(threeElem)
That way you get the min/max bounds, which you can use to determine the origin.
I hope that answers your question. Let me know!
Edit:
for furniture it would probably be
var threeElem = document.getElementById("custom-id").components['io3d-furniture'].data3dView.threeParent
Based on Madlaina's answer, but I needed to ensure the model was loaded before calculating the bounding box:
addModelToScene(type) {
  let scene = document.querySelector('a-scene');
  let model = document.createElement('a-entity');
  model.setAttribute('io3d-data3d', getModelKey(type));
  model.addEventListener('model-loaded', () => {
    // access the three.js object of the 3d.io entity
    let threeElem = model.components['io3d-data3d'].data3dView.threeParent;
    // create the bounding box
    let bbox = new THREE.Box3().setFromObject(threeElem);
    // calculate the center-point offsets from the max and min points
    const offsetX = (bbox.max.x + bbox.min.x) / 2;
    const offsetY = (bbox.max.y + bbox.min.y) / 2;
    const offsetZ = (bbox.max.z + bbox.min.z) / 2;
    // apply the offset
    model.setAttribute('position', {x: -offsetX, y: -offsetY, z: -offsetZ});
  });
  scene.appendChild(model);
}
The result: