MGRS/USNG: Detecting zone changes

I am writing a coordinate conversion library in JavaScript (for both Node.js and browser JavaScript). I have quite a bit done already; check it out if you're interested. I didn't write most of the code, so I'm not exactly sure how everything works.
From what I can tell, MGRS/USNG zones are basically renamed UTM zones, which in turn are defined in terms of lat/long.
Let's say I have an MGRS coordinate and I move by a certain displacement. Can I accurately determine whether I crossed a zone boundary without first converting to lat/long? I know how to update coordinates as I move within a zone, because nearly everything is square within a zone.
Is there a way to detect zone changes without having to convert to lat/long?
Are there any libraries, in any language, that do this?
The existing code requires a latitude and longitude, so I guess I could convert to lat/long, apply the transformation, and then convert back to MGRS. That wouldn't be too bad if I only wanted to determine which zone I'm in; I could then keep the rest of the transformation in MGRS to maintain precision. The only issue is that I want to be as precise as possible.
Note:
I did find this article explaining coordinate conversions, but it doesn't really cover coordinate transformations.

The OpenMap Java library may be of some help (you could adapt the code to JavaScript). It has conversions for MGRS to Lat/Lon and more. As for detecting zone changes, you may try posting this question on the GIS Stack Exchange site; I'm sure someone there would have an answer.
MGRS uses grid reference numbers, not lat/lngs. See here. So it seems you should not need to convert to lat/lng to determine that you crossed a grid.

From what I've found, it is not possible to convert directly between MGRS and Lat/Long.
MGRS (or USNG, for that matter) uses the same zone definitions as UTM, which are defined by lat/long coordinates. UTM and MGRS (USNG) treat the world as essentially flat, using correction factors to maintain close accuracy (within a meter or so). Since these grids are treated as flat, they are not reliable for determining zone boundaries, which are defined along meridians and parallels on the ellipsoid rather than along straight grid lines.
Conversion between MGRS and UTM is easy (and pretty much lossless), and conversion between UTM and lat/long is pretty accurate, albeit not 100% precise.
From what I've read, the best way to accurately translate an MGRS coordinate is to first convert it to lat/long, apply the translation, then convert back. This is good for detecting zone changes, but a more accurate destination point can potentially be found by taking the zone from the lat/long conversion and applying the rest of the translation in MGRS coordinates. This also detects mid-square zone changes; a sketch of the workflow follows.
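To illustrate, here is a minimal JavaScript sketch of that workflow. The helpers mgrsToLatLon, latLonToMgrs and applyDisplacement are hypothetical names - substitute your library's own conversion functions.

// The grid zone designator (e.g. "33U" in "33UXP0500444444") encodes the
// 6-degree UTM zone column plus the 8-degree latitude band, so comparing
// it before and after a move reveals a crossed boundary.
function gridZoneDesignator(mgrs) {
  // 1-2 digit zone number followed by a band letter (C-X, skipping I and O).
  return mgrs.trim().match(/^\d{1,2}[C-HJ-NP-X]/i)[0].toUpperCase();
}

function crossedZoneBoundary(startMgrs, displacementMeters) {
  const { lat, lon } = mgrsToLatLon(startMgrs);                  // hypothetical helper
  const moved = applyDisplacement(lat, lon, displacementMeters); // hypothetical helper
  const endMgrs = latLonToMgrs(moved.lat, moved.lon);            // hypothetical helper
  return gridZoneDesignator(startMgrs) !== gridZoneDesignator(endMgrs);
}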
References/Resources:
http://en.wikipedia.org/wiki/Military_grid_reference_system
http://www.stellman-greene.com/mgrs_to_utm/

Related

Required Data for IFC

I'm working on a project where I need to generate an IFC file, and I'm given little more than geometry (I have access to the density and heat conductivity of materials, and basic labels for objects).
So far I could only find what IFC can store, never what IFC needs to store.
What do I need to include in an IFC file so it is properly functional?
What does an IFC file need besides basic geometry?
Disclaimer: I have not read (or bought) the standard. My knowledge primarily stems from working with IFC files, trying different things, and reading the buildingSMART documentation. So I can't give you a hard guarantee, but I am rather confident my information is correct/usable.
As an alternative to buying the official standards file, you could look into the official documentation by buildingSMART. (Also have a look here for more general information and the availability of other/more modern releases.)
Now, assuming you are familiar with the basic STEP file layout (header and data segment), let's jump to what an IFC file absolutely has to include to be considered correct - as far as I understand the documentation; there may be parsers/loaders which can load incorrect/incomplete files, but we aren't aiming for them. Note that I am building this example for IFC 4.0. It should also be correct for the current IFC 4.1 standard, but probably not for the older IFC2X3 standard (IFC4 relaxed some requirements compared to IFC2X3). I am also skipping names and descriptions - you can set those fields while testing to recognize your structures in a viewer (it's easier than comparing GUIDs).
IfcProject
The root of all elements is the IfcProject. It also contains the most basic properties and definitions for all other elements. The only attribute required by the documentation for this entity is the unique id (GlobalId). But for a working example you usually also need a minimal unit assignment and a representation context.
#20= IFCPROJECT('344O7vICcwH8qAEnwJDjSU',$,$,$,$,$,$,(#19),#13);
In the unit assignment you define the required units, ranging from geometric units to monetary, thermal, etc. The minimum is length, area and angle, which are needed to meaningfully define geometric items. So for our example we include only those: metre as length, square metre as area and radian as angle. If you need foot, inch or degree, you can define those as derived units.
#10= IFCSIUNIT(*,.LENGTHUNIT.,$,.METRE.);
#11= IFCSIUNIT(*,.AREAUNIT.,$,.SQUARE_METRE.);
#12= IFCSIUNIT(*,.PLANEANGLEUNIT.,$,.RADIAN.);
#13= IFCUNITASSIGNMENT((#10,#11,#12));
The representation context defines, for a given class of representations (= geometric/parametric descriptions), the basic coordinate system. The simple case is a 3-dimensional right-handed system at point zero. IFC works with the z-axis pointing up - this might be important if you are working with models/files originating from 3D/OpenGL applications, which usually assume the y-axis points upwards. You also need a precision value - I am using 1.0e-5 here, but you might want to test whether you can go with less or need more. The precision is usually applied when comparing points/edges while combining geometry (during constructive solid geometry steps). If you get errors, try a different precision value.
The second attribute of the representation context is the context type. This is a string identifying which representations this context applies to. The documentation states that the values are based on "implementers agreement" - which, AFAIK, means "look at what the others are using". From my experience, "Model" works for 3D geometry, and using "Plan" for 2D plans and sketches should work, too.
#14= IFCDIRECTION((1.,0.,0.));
#15= IFCDIRECTION((0.,0.,1.));
#16= IFCCARTESIANPOINT((0.,0.,0.));
#17= IFCAXIS2PLACEMENT3D(#16,#15,#14);
#18= IFCDIRECTION((0.,1.));
#19= IFCGEOMETRICREPRESENTATIONCONTEXT($,'Model',3,1.0E-5,#17,#18);
Spatial container for elements
Elements can't be added to the IfcProject directly - they need to be placed into a spatial element which is contained in the project. There are three possible choices: IfcSite, IfcBuilding and IfcSpatialZone (see the section Spatial Decomposition on the IfcProject page). The IfcSpatialZone is defined as a non-hierarchical spatial element - its usage is slightly different from the other two (elements are added using a different relation).
A single site is sufficient as the spatial container. Adding all elements to it might be semantically vague (mostly fences are added directly to a site; other elements usually sit inside a building) but it is not incorrect (IFC does not care if you have electrical appliances in your garden). As nearly all attributes of IfcSite are optional, we can skip them. But beware: if you give your site a representation (= some geometric shape), you will need to include a placement for it. The site is aggregated into the project to relate the two.
#30= IFCSITE('20FpTZCqJy2vhVJYtjuIce',$,$,$,$,$,$,$,.ELEMENT.,$,$,$,$,$);
#31= IFCRELAGGREGATES('0Du7$nzQXCktKlPUTLFSAT',$,$,$,#20,(#30));
Elements
That is actually all that is needed as the absolute minimum structure. Now you can add your elements - entities of some type derived from IfcProduct. As all those elements have some sort of meaning attached to them, you either need to select the types closely matching the objects you have, or you may want to use IfcBuildingElementProxy, which is the most "meaningless" object type (or better: the one with no specialized semantic meaning). The following code places one proxy without geometry. The placement references the same coordinate system definition that was used to create the representation context, out of convenience, as it doesn't transform or move anything. Your geometry would be added through a product definition shape, which has shape aspects and finally some geometry items. The buildingSMART documentation has a few examples with assigned geometry.
#40= IFCLOCALPLACEMENT($,#17);
#41= IFCBUILDINGELEMENTPROXY('3W29Drc$H6CxK3FGIxjJNl',$,$,$,$,#40,$,$,.NOTDEFINED.);
#42= IFCRELCONTAINEDINSPATIALSTRUCTURE('04ldtj6cp2dME6CiP80Bzh',$,$,$,(#41),#30);
Conclusion
So there isn't much needed as bare minimum to add elements:
a project
basic unit definitions
one spatial container
The complete example file would be:
ISO-10303-21;
HEADER;
FILE_DESCRIPTION(('IFC4'),'2;1');
FILE_NAME('example.ifc','2018-08-08',(''),(''),'','','');
FILE_SCHEMA(('IFC4'));
ENDSEC;
DATA;
#10= IFCSIUNIT(*,.LENGTHUNIT.,$,.METRE.);
#11= IFCSIUNIT(*,.AREAUNIT.,$,.SQUARE_METRE.);
#12= IFCSIUNIT(*,.PLANEANGLEUNIT.,$,.RADIAN.);
#13= IFCUNITASSIGNMENT((#10,#11,#12));
#14= IFCDIRECTION((1.,0.,0.));
#15= IFCDIRECTION((0.,0.,1.));
#16= IFCCARTESIANPOINT((0.,0.,0.));
#17= IFCAXIS2PLACEMENT3D(#16,#15,#14);
#18= IFCDIRECTION((0.,1.));
#19= IFCGEOMETRICREPRESENTATIONCONTEXT($,'Model',3,1.0E-5,#17,#18);
#20= IFCPROJECT('344O7vICcwH8qAEnwJDjSU',$,$,$,$,$,$,(#19),#13);
#30= IFCSITE('20FpTZCqJy2vhVJYtjuIce',$,$,$,$,$,$,$,.ELEMENT.,$,$,$,$,$);
#31= IFCRELAGGREGATES('0Du7$nzQXCktKlPUTLFSAT',$,$,$,#20,(#30));
#40= IFCLOCALPLACEMENT($,#17);
#41= IFCBUILDINGELEMENTPROXY('3W29Drc$H6CxK3FGIxjJNl',$,$,$,$,#40,$,$,.NOTDEFINED.);
#42= IFCRELCONTAINEDINSPATIALSTRUCTURE('04ldtj6cp2dME6CiP80Bzh',$,$,$,(#41),#30);
ENDSEC;
END-ISO-10303-21;
Note that loading this file doesn't show anything, because it doesn't contain any geometry. Also please note that I have not yet verified whether it is error-free - I currently don't have my IFC tools at hand. (If you would like to verify your files, have a look at stepcode, which can check whether your files are syntactically correct - it won't check semantic meaning or enforcement of the concepts mentioned in the buildingSMART documentation.)
Also good to know: the order of references/ids (like #20) can be arranged freely - you can reference elements that are added later in the file, and the references only need to be unique within this one file. This means the lines of the example file can be shuffled and it remains a valid file - parsers usually use a two-step approach to create an in-memory representation (1. parse into IFC classes, 2. resolve references), as sketched below.
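To make that two-step approach concrete, here is a minimal JavaScript sketch (an illustration only, not a spec-complete STEP parser - it ignores strings containing ';' or '#', among other things):

// Pass 1 indexes every "#id= TYPE(...)" record by its id; pass 2 resolves
// "#n" references, which is why instance order in the DATA section doesn't matter.
function parseStepData(dataSection) {
  const records = new Map();

  // Pass 1: index raw records by id.
  for (const stmt of dataSection.split(';')) {
    const m = stmt.match(/#(\d+)\s*=\s*([A-Z0-9_]+)\s*\((.*)\)/s);
    if (m) records.set(Number(m[1]), { type: m[2], raw: m[3], refs: [] });
  }

  // Pass 2: resolve every "#n" token; forward references now exist.
  for (const rec of records.values()) {
    for (const [, id] of rec.raw.matchAll(/#(\d+)/g)) {
      rec.refs.push(records.get(Number(id)));
    }
  }
  return records;
}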

Accelerometer using ADXL345 for Earthquake Detection

Well, I want to ask if the ADXL345 can be used to detect an earthquake occurrence based on its magnitude/intensity level. For more information: I want to use an accelerometer to create a device that can detect the intensity/magnitude level of an earthquake.
I have absolutely no experience in this field, but it looks useful and fascinating.
Questions are:
Is this device able to detect medium-scale earthquakes?
If yes, has anybody done it and is willing to share their experience?
If no to the previous, is there any guide which explains the algorithms, calculations and mechanical plans?
That sensor is not suitable. It has 13-bit resolution at a ±16 g full range. That gives a sensitivity of roughly 0.004 g (about 4 mg) for the LSB. To detect an earthquake directly below you, you need to resolve a few milli-g (e.g. see here), and even less for earthquakes with an epicentre elsewhere.
You want a sensor which is more sensitive by a factor of about 100, and probably with more resolution (a better ADC), too.
(And you should have been able to do this quick google-search analysis yourself ;) )
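For completeness, the arithmetic behind that estimate, as a back-of-the-envelope check in JavaScript:

// Resolution of the ADXL345 per its quoted figures: 13 bits over +/-16 g.
const bits = 13;
const fullScaleG = 16;
const lsbG = (2 * fullScaleG) / 2 ** bits;
console.log(lsbG); // ~0.0039 g, i.e. about 4 mg per LSB
// Ground acceleration from a moderate quake is on the order of a few mg,
// i.e. roughly one LSB - buried in the sensor's own noise floor.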
Accelerometer readings tell you nothing about the actual magnitude of the quake itself; they tell you the size of the quake at your location. Combining location and amplitude will give you a 'weighted' measurement, but that's still not useful without a calibration curve. Without knowing what acceleration at a certain distance corresponds to what magnitude, you will be unable to tell what the magnitude is. You can certainly conclude that your measured earthquake has a median amplitude of, say, 2000% of a non-earthquake reading, but you won't be able to turn that into a Richter measurement. To do so, you'd need to take some data during earthquakes of known magnitude and then work out how acceleration, distance and magnitude are related for your device. You could alternatively use a scale like the Shindo scale (just Google it).

ITK-SNAP segmentation displays same intensity value even after registration

I'm using ITK-SNAP to compare the intensities of several Regions of Interest between several conditions.
For some subjects, I need to realign one image to another by using the Registration tool.
However, I noticed that the intensity values of a specific segmentation that I drew on the reference image don't change no matter how I register.
The values differ between the two images, but even if I manually register the second image to something completely off, they stay the same.
Is it possible to get the actual mean intensity of my segmentation depending on where it is on the registered image?
The Segmentation menu, option "Volumes and Statistics...", should show you what you are looking for.
Registration does not affect the intensities. Depending on how you transform your image, it changes the locations and coordinates of your voxels; it does not touch the intensities. It may reshape, rotate, or translate the image. If you expect different intensities after registration, you need to apply some technique other than registration, because the transformation matrix is applied only to coordinates and locations. You should work with other features of your data.
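To illustrate that point with a toy JavaScript sketch (nearest-neighbour lookup for brevity; real tools interpolate): a rigid transform inverse-maps each output voxel into the source image and copies the intensity found there - the values themselves are never modified.

// image is a flat Float32Array of size width*height; angle in radians,
// (tx, ty) a translation in voxels. Hypothetical demo, not ITK-SNAP code.
function resampleRigid2D(image, width, height, angle, tx, ty) {
  const out = new Float32Array(width * height);
  const cos = Math.cos(angle), sin = Math.sin(angle);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      // Inverse-map the output voxel into the source image.
      const sx = Math.round(cos * (x - tx) + sin * (y - ty));
      const sy = Math.round(-sin * (x - tx) + cos * (y - ty));
      if (sx >= 0 && sx < width && sy >= 0 && sy < height) {
        out[y * width + x] = image[sy * width + sx]; // intensity copied as-is
      }
    }
  }
  return out;
}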
There are some registration methods which do influence the intensities, but they are not used in ITK-SNAP, for example; you would need to look for a dedicated package.
For example, this paper:
Intensity based image registration by minimizing the complexity of weighted subtraction under illumination changes
which specifically manipulates the intensities for fusion.
https://www.sciencedirect.com/science/article/abs/pii/S1746809415001755
Another example is this MATLAB script for intensity-based automatic registration: the process begins with the transform type you specify and an internally determined transformation matrix. Together, they determine the specific image transformation that is applied to the moving image with bilinear interpolation.

Why is there a limitation on the number of points a polygon can have on ST_WITHIN?

We are at a crossroads where we need to decide whether to store our geospatial data in DocumentDB or SQL Azure. According to this article, the polygon parameter of the ST_WITHIN function in a query can contain a maximum of 256 points. Our data will potentially contain polygons with millions of points, as we are mapping continents, countries, states/provinces, etc. We need to be able to use ST_WITHIN against all of these polygons. The article also mentions that we can have that limit adjusted by contacting Azure Support.
Why is this limitation in the first place? If Support does remove the limitation, are we going to bring DocumentDB down with so many points?
If you want to do it all in DocumentDB (as opposed to adding something like SQL Azure), you can narrow down the list by using ST_DISTANCE to get candidates and then running the equivalent of ST_WITHIN client-side (the ray-casting algorithm is simple and fast; see the sketch below). The trick involves storing denormalized metadata about each polygon, namely a center point (the accuracy of the center point is not critical) and the maximum radius from that center point. Then, if the distance between your point and the center minus the maximum radius is less than zero, the polygon goes into the candidate list. It works like a charm and is performant with some careful index design.
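A sketch of that client-side refinement in JavaScript (the field names center, maxRadiusMeters and ring are illustrative only, not a DocumentDB API):

// Standard even-odd ray casting; ring is [[x, y], ...], point is [x, y].
function pointInRing(point, ring) {
  const [px, py] = point;
  let inside = false;
  for (let i = 0, j = ring.length - 1; i < ring.length; j = i++) {
    const [xi, yi] = ring[i];
    const [xj, yj] = ring[j];
    const crosses =
      (yi > py) !== (yj > py) &&
      px < ((xj - xi) * (py - yi)) / (yj - yi) + xi;
    if (crosses) inside = !inside;
  }
  return inside;
}

// candidates: polygons whose stored center lies within maxRadiusMeters of
// the query point (the ST_DISTANCE prefilter); the exact test runs locally.
function refineCandidates(point, candidates) {
  return candidates.filter((poly) => pointInRing(point, poly.ring));
}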
One thing to worry about is the condition where the polygon intersects itself. Do you treat the intersecting space as outside the polygon or within it? We had a nasty bug that took forever to figure out and it boiled down to a self-intersecting polygon. This problem exists whether you implement your own algorithm or use the database's native "within" function.
The short answer to your question is yes: they are worried you will bring DocumentDB down with more than 256 points. It used to be limited to just 16 points, but they raised the limit to 256 recently. Perhaps they will raise it again in the future. We ran into a similar problem with polygons having more than 1,000 points. In the end, we decided to use SQL Server for our polygon searches and then used the data refined from SQL Server to pull the related data from DocumentDB.
The problem is that DocumentDB resources are shared between customers, so all of the operations you run against DocumentDB have to be governed by request units. That way, no one customer can bring the system down with massive queries. I don't know how to calculate the request units consumed by running ST_WITHIN on millions of points, but my guess is that even on the S3 tier it would push the limit of the allowed 2,500 request units. So even if they raised the 256-point limit to one million points, your query might not be able to finish because it would be too expensive. I therefore suggest you go with SQL Azure; that is what we settled on, and it performs great.

Storing pixel based world data

I am making a 2D game with destructible terrain. It will be on iOS, but I am looking for ideas or pseudocode, not actual code. I'm wondering how to store a large amount of data. (It will be a large world, approximately 64000 pixels wide and 9600 tall; each pixel needs a way to store what type of object it is.) I was hoping to use a 2D array, but a quick load test showed that this is not feasible (even using a 640x480 grid I dropped below 1 fps).
I also tried the method detailed here: http://gmc.yoyogames.com/index.php?showtopic=315851 (I used to use Game Maker and remembered this method); however, it seems a bit cumbersome, and recombining the objects again is nearly impossible.
So what other methods are there? Does anyone know how Worms worked? What about image editors, how do they store the colour of each pixel?
Thank you,
YM
Run-length encoding can help with your memory issues; see the sketch below.
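A minimal run-length encoder/decoder sketch in JavaScript - one way to shrink a 64000x9600 per-pixel material map when long horizontal runs share the same value (illustrative only):

// Encode one row of tile/material ids into {value, count} runs.
function rleEncodeRow(row) {
  const runs = [];
  for (const value of row) {
    const last = runs[runs.length - 1];
    if (last && last.value === value) last.count++;
    else runs.push({ value, count: 1 });
  }
  return runs;
}

// Expand runs back into a flat row.
function rleDecodeRow(runs) {
  const row = [];
  for (const { value, count } of runs) {
    for (let i = 0; i < count; i++) row.push(value);
  }
  return row;
}

// rleEncodeRow([1, 1, 1, 0, 0, 2])
//   -> [{ value: 1, count: 3 }, { value: 0, count: 2 }, { value: 2, count: 1 }]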
I am most likely going to use polygon-based storage.
