How to get features from a vector layer in OpenLayers 3

I am trying to get the features from my vector layer. The vector layer is built from a GeoJSON document served via GeoServer. I tried vector.features, but to no avail. Could anyone help with this?

The architecture of OL3 distinguishes between a layer and its source. So to get access to the features of a layer, you first have to access the layer's source. This is done via:
var source = layer.getSource();
In the case of a vector layer you will then get an ol.source.Vector object. From this object you can access your features via:
var features = source.getFeatures();
You can also access specific features via getFeatureById(id) or getFeaturesAtCoordinate(coordinate). For more information, see the API documentation: http://openlayers.org/en/v3.4.0/apidoc/ol.source.Vector.html
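For instance, a minimal sketch of those two lookups (the feature id and coordinate here are hypothetical):
// Look up a single feature by the id taken from the GeoJSON "id" member (hypothetical id):
var feature = source.getFeatureById('road.1');
// Or collect all features whose geometry contains a given coordinate:
var featuresAtPoint = source.getFeaturesAtCoordinate([859285.3, 6886796.5]);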

this.vectorLayer = new VectorLayer({
  source: new VectorSource({
    url: 'https://json.extendsclass.com/bin/d4f21e1dd8ed',
    format: new GeoJSON()
  }),
});
this.maps.addLayer(this.vectorLayer);
var source = this.vectorLayer.getSource();
var features = source.getFeatures();
console.log(features, 'FeatureData');
When we get the features from the source, it returns an empty array, even though there is one feature available at my URL.
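A likely cause: when a vector source is created with url and format, the GeoJSON is fetched asynchronously, so getFeatures() returns an empty array if it is called before loading finishes. A minimal sketch of one way to wait (featuresloadend exists in recent OpenLayers releases; on older ones, listening for the source's change event works similarly):
var source = this.vectorLayer.getSource();
// Fires once the GeoJSON response has been parsed and the features added:
source.on('featuresloadend', function () {
  console.log(source.getFeatures(), 'FeatureData');
});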

Related

Access Here XYZ Studio Map GeoJSON via API?

I'm wondering if there's a way to build a map in Here XYZ Studio but then get said map's GeoJSON via one of the Here APIs.
HERE Maps API for JavaScript can render your XYZ Studio spaces directly:
// Assumption: the platform is instantiated
var service = platform.getXYZService({
      token: 'authentication_token_goes_here'
    }),
    provider = new H.service.xyz.Provider(service, 'space_id_goes_here');
map.addLayer(new H.map.layer.TileLayer(provider));
doc: https://developer.here.com/documentation/maps/3.1.14.0/api_reference/H.service.xyz.Provider.html
It can also render GeoJSON features:
var reader = new H.data.geojson.Reader('/path/to/geojson/file.json');
reader.parse();
// Assumption: map already exists
map.addLayer(reader.getLayer());
doc: https://developer.here.com/documentation/maps/3.1.14.0/api_reference/H.data.geojson.Reader.html
Here is a simple jsfiddle example of loading GeoJSON features (country shapes):
https://jsfiddle.net/vhwfg8t4/
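As a possible extension, a hedged sketch of styling the parsed objects at load time (assuming the style option documented in H.data.geojson.Reader.Options; the path is a placeholder):
// Sketch: restyle country polygons as they are parsed.
var styledReader = new H.data.geojson.Reader('/path/to/geojson/file.json', {
  style: function (mapObject) {
    // Only override the style of polygons; other object types are left as-is.
    if (mapObject instanceof H.map.Polygon) {
      mapObject.setStyle({ fillColor: 'rgba(255, 0, 0, 0.5)', strokeColor: '#829' });
    }
  }
});
styledReader.parse();
map.addLayer(styledReader.getLayer());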

Reading JS library from CDN within Mirth

I'm doing some testing around Mirth Connect. I have a test channel whose data types are Raw for the source and the single destination. The destination is not doing anything right now. In the source, the connector type is JavaScript Reader, and the code does the following:
var url = new java.net.URL('https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.15/lodash.fp.min.js');
var conn = url.openConnection();
conn.setRequestMethod('GET');
if (conn.getResponseCode() === 200) {
  var body = org.apache.commons.io.IOUtils.toString(conn.getInputStream(), 'UTF-8');
  logger.debug('CONTENT: ' + body);
  globalMap.put('_', body);
}
conn.disconnect();
// This code is in source but also tested in destination
logger.debug('FROM GLOBAL: ' + $('_')); // library was found
var arr = [1, 2, 3, 4];
var _ = $('_');
var newArr = _.chunk(arr, 2);
The error I'm getting is: TypeError: Cannot find function chunk in object.
The reason I want to do this is to build custom/internal libraries with unit tests, serve them from an internal/company CDN, and allow Mirth to consume them.
How can I make the library available to Mirth?
Rhino actually has CommonJS support, but Mirth doesn't enable it by default. Here's how you can use it in your channel.
channel deploy script
with (JavaImporter(
  org.mozilla.javascript.Context,
  org.mozilla.javascript.commonjs.module.Require,
  org.mozilla.javascript.commonjs.module.provider.SoftCachingModuleScriptProvider,
  org.mozilla.javascript.commonjs.module.provider.UrlModuleSourceProvider,
  java.net.URI
)) {
  var require = new Require(
    Context.getCurrentContext(),
    this,
    new SoftCachingModuleScriptProvider(new UrlModuleSourceProvider([
      // Search path. You can add multiple URIs to this array
      new URI('https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.15/')
    ], null)),
    null,
    null,
    true
  );
} // end JavaImporter
var _ = require('lodash.min');
require('lodash.fp.min')(_); // convert lodash to fp
$gc('_', _);
Note: There's something funky with the cdnjs lodash fp packages that keeps them from detecting the environment correctly, which forces that weird two-stage import. If you use https://cdn.jsdelivr.net/npm/lodash@4.17.15/ instead, you only need to do var _ = require('fp'); and it loads everything in one step.
transformer
var _ = $gc('_');
logger.info(JSON.stringify(_.chunk(2)([1,2,3,4])));
Note: This is the correct way to use fp/chunk. In your OP you were calling it with the standard chunk syntax.
Additional Commentary
I think it's probably ok to do it this way where you download the library once at deploy time and store it in the globalChannelMap, then retrieve it from the map where needed. It would probably also work to store the require object itself in the map if you wanted to call it elsewhere. It will cache and reuse the object created for future calls to the same resource.
I would not create new Require objects anywhere but the deploy script, or you will be redownloading the resource on every message (or every poll, in the case of a JavaScript Reader).
Edit: I guess for an internal webhost, this could be desirable in a JavaScript Reader if you intend for it to pick up changes immediately on the next poll without a redeploy, assuming you would be upgrading the library in place instead of incrementing a version.
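A sketch of the store-the-require-object variant, reusing the $gc helper from above:
// In the deploy script, after building `require` as shown earlier:
$gc('require', require);
// Elsewhere (e.g. in a transformer), reuse the cached Require object;
// repeat calls for the same resource hit Rhino's module cache, not the network:
var require = $gc('require');
var _ = require('lodash.min');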
The benefit to using Code Templates, as Vibin suggested, is that they get compiled directly into your channel at deploy time, so there is no additional fetching step at runtime. Making the library available is as simple as assigning it to your channel.
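For illustration, a code template is just a plain JavaScript function defined in Mirth's Code Templates editor; a hypothetical example:
// Hypothetical code template: once its library is assigned to the channel,
// any channel script can call chunkPairs([1, 2, 3, 4]) directly.
function chunkPairs(arr) {
  var result = [];
  for (var i = 0; i < arr.length; i += 2) {
    result.push(arr.slice(i, i + 2));
  }
  return result;
}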
Even though importing third-party libraries could be an option, I was actually looking into this so our team could write our own custom functions, write unit tests for them, and then pull that code into Mirth. I was experimenting with lodash, but using it was never my end goal. My solution was to do a REST GET call with Java in the global script. Your URL would be the raw GitHub URL of the code you want to pull in. It's the same code as in my original question, but, like I said, the URL is the raw GitHub URL for the function I want to pull in.
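A minimal sketch of that approach, assuming the fetched file ends in an expression that evaluates to the library object (the repository URL and map key are placeholders):
// Global deploy script: fetch the internal library once and cache the evaluated result.
var url = new java.net.URL('https://raw.githubusercontent.com/my-org/my-lib/master/myLib.js');
var conn = url.openConnection();
conn.setRequestMethod('GET');
if (conn.getResponseCode() === 200) {
  var src = org.apache.commons.io.IOUtils.toString(conn.getInputStream(), 'UTF-8');
  // Evaluate the source so its functions are callable; storing the raw string
  // (as in the original attempt) leaves the code unexecuted.
  globalMap.put('myLib', eval(src));
}
conn.disconnect();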

Is there a way of saving a Google Doc so it has the same unique ID as an existing doc?

I have a need to create a copy of a Google Doc with a specific ID - not the "friendly" name like MyDocument, but the name that makes it unique in the GoogleSphere - the one like 1x_tfTiA9-b5UwAf3k2fg6y6hyZSYQIvhSNn-saaDs4c.
Here's the scenario why I would like to do this:
I have a newsletter which is in the form of a Google Doc. The newsletter is published on a website by embedding the document in a web page inside an <iframe> element. Also published in the same way is a "large print" version of the newsletter that is the same, apart from the fact that the default font size is 24pt, rather than 11pt.
I am trying to automate the production of the large print version, but in such a way that the unique ID of the large print document doesn't change, so that the embedded <iframe> for it still works.
I have experimented in the past with Google Apps Script routines for creating a deep copy of a document, but the deep-copy functions don't play nicely with images and tables, so I could never get a complete copy. If I could implement a "Save As" function where the operand was an existing unique ID, I think this would do what I want.
Anyone know how I might do this?
I delved into this, attempting to set the id of the "large print" version of the file in a variety of ways:
via copy(): var copiedFile = Drive.Files.copy(lpFile, spFile.id, options);
which yields the error:
Generated IDs are not currently supported for copy requests
via insert(): var newFile = Drive.Files.insert(lpFile, doc.getBlob(), options);
which yields the error:
Generated IDs are not supported for Google Docs formats
via update(): Drive.Files.update(lpFile, lpFile.id, doc.getBlob(), options);
This method successfully updates the "large print" file from the small print file. This particular line, however, uses the Document#getBlob() method, which has issues with formatting and rich content from the Document. In particular, as you mention, images and tables are not preserved (among other things, like changes to the font, etc.). Compare the pre and post results.
It seems that, if the appropriate method of exporting formatted byte content from the document can be found, the update() method has the most promise. Note that the update() method in the Apps Script client library requires a Blob input (i.e. doc.getBlob().getBytes() will not work), so the fundamental limitation may be the (lack of) support for rich format information in the produced Blob data. With this in mind, I tried a couple of methods for obtaining "formatted" Blob data from the "small print" file:
via Document#getAs(mimetype): Drive.Files.export(lpFile, lpFile.id, doc.getAs(<type>), options);
which fails for seemingly sensible types with the errors:
MimeType.GOOGLE_DOCS: We're sorry, a server error occurred. Please wait a bit and try again.
MimeType.MICROSOFT_WORD: Converting from application/vnd.google-apps.document to application/vnd.openxmlformats-officedocument.wordprocessingml.document is not supported.
These errors do make sense, since the internal Google Docs MimeType is not exportable (you can't "download as" this filetype since the data is kept however Google wants to keep it), and the documentation for Document#getAs(mimeType) indicates that only PDF export is supported by the Document Service. Indeed, attempting to coerce the Blob from doc.getBlob() with getAs(mimeType) fails, with the error:
Converting from application/pdf to application/vnd.openxmlformats-officedocument.wordprocessingml.document is not supported.
using DriveApp to get the Blob, rather than the Document Service:
Drive.Files.update(lpFile, lpFile.id, DriveApp.getFileById(smallPrintId).getBlob(), options);
This has the same issues as doc.getBlob(), and likely uses the same internal methods.
using DriveApp#getAs has the same errors as Document#getAs
Considering the limitations of the native Apps Script implementations, I then used the advanced service to obtain the Blob data. This is a bit trickier, since the File resource returned is not actually the file, but metadata about the file. Obtaining the Blob with the REST API requires exporting the file to a desired MimeType. We know from above that the PDF-formatted Blob fails to be properly imported, since that is the format used by the above attempts. We also know that the Google Docs format is not exportable, so the only one left is MS Word's .docx.
var blob = getBlobViaURL_(smallPrintId, MimeType.MICROSOFT_WORD);
Drive.Files.update(lpFile, lpFile.id, blob, options);
where getBlobViaURL_ implements the workaround from this SO question for the (still-broken) Drive.Files.export() Apps Script method.
This method successfully updates the existing "large print" file with the exact content from the "small print" file - at least for my test document. Given that it involves downloading content instead of using the internal, already-present data available to the export methods, it will likely fail for larger files.
Testing Script:
function copyContentFromAtoB() {
  var smallPrintId = "some id";
  var largePrintId = "some other id";
  // You must first enable the Drive "Advanced Service" before this will work.
  // Get the file metadata of the to-be-updated file.
  var lpFile = Drive.Files.get(largePrintId);
  // View available options on the relevant Drive REST API pages.
  var options = {
    updateViewedDate: false,
  };
  // Ideally this would use Drive.Files.export, but there is a bug in the Apps Script
  // client library's implementation: https://issuetracker.google.com/issues/36765129
  var blob = getBlobViaURL_(smallPrintId, MimeType.MICROSOFT_WORD);
  // Replace the contents of the large print version with that of the small print version.
  Drive.Files.update(lpFile, lpFile.id, blob, options);
}

// Below function derived from https://stackoverflow.com/a/42925916/9337071
function getBlobViaURL_(id, mimeType) {
  var url = "https://www.googleapis.com/drive/v2/files/" + id + "/export?mimeType=" + mimeType;
  var resp = UrlFetchApp.fetch(url, {
    headers: { Authorization: 'Bearer ' + ScriptApp.getOAuthToken() }
  });
  return resp.getBlob();
}

Argon.js: Error: A frame state has not yet been received

I am attempting to use argon.js server-side so that I can convert LLA coordinates (longitude, latitude, altitude) to a predefined reference frame. I'm not rendering any graphics, of course; I'm just using it to convert values. See the SO question "Using Geo-coordintes Instead of Cartesian to Draw in Argon and A-Frame" for details.
Per that thread, I am trying to create a Cesium entity for a fixed coordinate, which I will later use to create other entities relative to it. When I do, everything runs until the last line of the program, var gtrefEntityPose = app.context.getEntityPose(gtrefEntity);, where I receive Error: A frame state has not yet been received.
At first I thought this might be due to setting the default reference frame to app.context.localOriginEastUpSouth via app.context.setDefaultReferenceFrame(), since I did not yet have a local user, it being server-side. I looked up the documentation for setDefaultReferenceFrame, considered the possibility that I might need to use convertEntityReferenceFrame, and read the source code for each, but I am unable to make sense of it given my knowledge of the program.
I've put the error as well as my application code below.
Thanks for your help!
/home/path/to/folder/node_modules/@argonjs/argon/dist/argon.js:4323
if (!cesium_imports_1.defined(this.serializedFrameState)) throw new Error(
^
Error: A frame state has not yet been received
    at ContextService.Object.defineProperty.get [as frame] (/home/path/to/folder/node_modules/@argonjs/argon/dist/argon.js:4323:89)
    at ContextService.getTime (/home/path/to/folder/node_modules/@argonjs/argon/dist/argon.js:4343:32)
    at ContextService.getEntityPose (/home/path/to/folder/node_modules/@argonjs/argon/dist/argon.js:4381:37)
    at Object.<anonymous> (/home/path/to/folder/test.js:27:35)
    at Module._compile (module.js:460:26)
    at Object.Module._extensions..js (module.js:478:10)
    at Module.load (module.js:355:32)
    at Function.Module._load (module.js:310:12)
    at Function.Module.runMain (module.js:501:10)
    at startup (node.js:129:16)
Here is my code:
var Argon = require('@argonjs/argon');
var Cesium = Argon.Cesium;
var Cartesian3 = Cesium.Cartesian3;
var ConstantPositionProperty = Cesium.ConstantPositionProperty;
var ReferenceFrame = Cesium.ReferenceFrame;
var ReferenceEntity = Cesium.ReferenceEntity;
//var degToRad = THREE.Math.degToRad;
const app = Argon.init();
app.context.setDefaultReferenceFrame(app.context.localOriginEastUpSouth);
var data = { lla : { x : -84.398881, y : 33.778463, z : 276 }};
var gtref = Cartesian3.fromDegrees(data.lla.x, data.lla.y, data.lla.z);
var options = {
  position: new ConstantPositionProperty(gtref, ReferenceFrame.FIXED),
  orientation: Cesium.Quaternion.IDENTITY
};
var gtrefEntity = new Cesium.Entity(options);
var gtrefEntityPose = app.context.getEntityPose(gtrefEntity);
As it's currently designed, argon.js won't work server-side. In particular, the local coordinate frame is not set until argon.js has obtained a geospatial position and orientation for the "user", which is obtained either from Argon4 (if you are running in the Argon4 web browser) or from a combination of the web geolocation and deviceorientation API's (if you are running in a different browser).
Without a full 6D pose (3D position + 3D orientation), the system does not know the position and viewing direction of the user and cannot establish a local Euclidean coordinate frame. Thus app.context.localOriginEastUpSouth remains undefined (the Cesium Entity exists, but its position and orientation are not set), and app.context.getEntityPose(gtrefEntity) will fail.
To use argon.js server-side, you would need to modify it to allow the program to manually set the position and orientation of the viewer. I could imagine doing that, and even using it in a system that has the mobile client periodically sending the pose back to the server (via socket.io, for example). In situations where you don't care about the viewing direction (e.g., if you're just worrying about the position of the user) you could just set the orientation to identity and ignore the orientation in the returned pose.
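If the goal is only the coordinate conversion, one possible server-side workaround is to skip argon.js's context entirely and use the Cesium transforms it bundles. A hedged sketch (note this builds an east-north-up frame rather than the east-up-south frame argon.js uses, and the second point is made up for illustration):
var Argon = require('@argonjs/argon');
var Cesium = Argon.Cesium;
var Cartesian3 = Cesium.Cartesian3;
var Matrix4 = Cesium.Matrix4;

// Reference point from the question, in Earth-fixed (ECEF) coordinates.
var origin = Cartesian3.fromDegrees(-84.398881, 33.778463, 276);
// Local east-north-up frame at that point, and its inverse (fixed -> local).
var enuToFixed = Cesium.Transforms.eastNorthUpToFixedFrame(origin);
var fixedToEnu = Matrix4.inverseTransformation(enuToFixed, new Matrix4());

// Any other LLA point can now be expressed relative to the origin (illustrative point):
var point = Cartesian3.fromDegrees(-84.396, 33.777, 280);
var local = Matrix4.multiplyByPoint(fixedToEnu, point, new Cartesian3());
console.log(local); // east, north, up offsets in meters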

Google Maps API v3 weather information

How do we add the weather overlay option to a Google map, just like on Google My Maps?
I've searched for it, but I can't find the answer.
Do I have to call a weather web service?
You can display the WeatherLayer on your map by taking two steps:
First, load the WeatherLayer library by adding the libraries parameter to the URL you use to load the map:
<script type="text/javascript"
src="http://maps.googleapis.com/maps/api/js?libraries=weather&sensor=false">
</script>
Then, you must create an instance of the google.maps.weather.WeatherLayer class, passing any options in a WeatherLayerOptions object, and attach the WeatherLayer to the map:
var weatherLayer = new google.maps.weather.WeatherLayer({
  temperatureUnits: google.maps.weather.TemperatureUnit.FAHRENHEIT
});
weatherLayer.setMap(map);
In addition to the WeatherLayer, there is also a separate google.maps.weather.CloudLayer class that you may create in a similar way:
var cloudLayer = new google.maps.weather.CloudLayer();
cloudLayer.setMap(map);
There is no separate library that you must load to use the CloudLayer; adding the libraries=weather parameter to the URL that loads the map makes both the WeatherLayer and the CloudLayer available on your map. There is a Google Maps JavaScript API v3 example, Weather Layer, that shows the usage of both layers; a combined sketch follows below.
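Putting both snippets together, a minimal sketch (assuming the weather library was loaded as shown above; the div id and map center are placeholders):
// Hypothetical full setup: create the map, then attach both layers to it.
var map = new google.maps.Map(document.getElementById('map-canvas'), {
  center: new google.maps.LatLng(33.78, -84.40),
  zoom: 6,
  mapTypeId: google.maps.MapTypeId.ROADMAP
});
var weatherLayer = new google.maps.weather.WeatherLayer({
  temperatureUnits: google.maps.weather.TemperatureUnit.FAHRENHEIT
});
weatherLayer.setMap(map);
var cloudLayer = new google.maps.weather.CloudLayer();
cloudLayer.setMap(map);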
I ran into the same trouble when I tried to use the Google Maps API v3 with the weather layer. Luckily, I found this site, http://code.google.com/p/gmaps-samples-v3/, which provides working samples for all of the Google Maps API v3 features. Hope this helps.
Cuong Vo
