Get data (values) from a RasterLayer in ArcGIS JS - raster

I have a map server that contains TIFF files (a raster layer). These files contain temperature data. I want to know how to get data from this raster layer. I have looked at a lot of ArcGIS JS API samples, but none of them show how to read data from a raster layer.

The ArcGIS JS API supports WCS (Web Coverage Service), an OGC standard that is supported by virtually every map server, including MapServer.
With that approach, you would use a WCSLayer, which has several properties and methods for handling band and pixel information.
ArcGIS JS API - WCSLayer
MapServer - WCS
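A minimal sketch of that approach with the 4.x API (the URL below is a placeholder for the WCS endpoint exposed by your map server):
import WCSLayer from "@arcgis/core/layers/WCSLayer";

// Placeholder URL - point this at your own WCS endpoint.
const temperatureLayer = new WCSLayer({
  url: "https://example.com/mapserver/wcs",
  title: "Temperature"
});

// Add it to an existing Map; band and pixel information is then exposed
// through the layer's properties (see the WCSLayer documentation linked above).
map.add(temperatureLayer);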

If the raster is published through a map service, you can also read the value under a clicked point with the identify REST helper. A completed version of the snippet (event and view are assumed to come from the surrounding MapView code):
import * as identify from "@arcgis/core/rest/identify";
import IdentifyParameters from "@arcgis/core/rest/support/IdentifyParameters";

// Identify the pixel under the clicked map point
const params = new IdentifyParameters();
params.geometry = event.mapPoint;   // the point clicked in the MapView
params.mapExtent = view.extent;
params.tolerance = 3;
params.returnGeometry = false;

return identify
  .identify(url, params)
  .then(function (response) {
    const results = response.results;
    results.map((result, index) => {
      // Raster values come back as feature attributes,
      // e.g. "Pixel Value" (the attribute name may vary by service).
      console.log(index, result.feature.attributes);
    });
  });
This was the solution for me.

Related

How to export HERE maps to an image file for printing programmatically?

Just like Leaflet and the ArcGIS JS API have support for printing and exporting the map to image files, how can I do this in HERE Maps? I explored the HERE API and searched the web but found nothing.
You have to take a look at our Map Image REST API.
https://developer.here.com/documentation/map-image/topics/what-is.html
What Is the Map Image API?
The HERE Map Image API is a REST API that allows you to request static map images for all regions in the world. The map images show conventional map views, but can also include points of interest, routes (for example, with turning points and junction views), statistics and heat maps.
In addition, the API offers a variety of supplementary services for displaying location-based data. For example, it is possible to present road signs.
You can also request the map images in different formats:
0 - PNG
1 - JPEG (default)
2 - GIF
3 - BMP
4 - PNG8
5 - SVG (only for company logo)
If no format is given, JPEG is used as the default.
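As a rough sketch of how those formats are selected in practice (host, path, and parameter names here follow the legacy Map Image API documentation and should be verified against the current version; the key, coordinates, and sizes are placeholders):
// Request a 600x400 PNG (f=0) centred on Berlin and show it in an <img> element.
var imageUrl = 'https://image.maps.ls.hereapi.com/mitest/mapview' +
  '?apiKey=YOUR_API_KEY' +
  '&c=52.5159,13.3777' +  // centre as lat,lon
  '&z=14' +               // zoom level
  '&w=600&h=400' +        // width and height in pixels
  '&f=0';                 // image format: 0 = PNG (see list above)
document.getElementById('print-preview').src = imageUrl;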
Within JavaScript, you can configure a MapTileService to request tiles of the map.
https://developer.here.com/documentation/maps/topics_api/h-service-maptileservice.html
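For completeness, a minimal sketch of that JavaScript setup (assuming the HERE JS API 3.x and a valid credential; 'png8' mirrors the PNG8 format above):
var platform = new H.service.Platform({ apikey: 'YOUR_API_KEY' });
// Request 256px base-map tiles in the normal.day scheme, rendered as PNG8
var mapTileService = platform.getMapTileService({ type: 'base' });
var tileLayer = mapTileService.createTileLayer('maptile', 'normal.day', 256, 'png8');
var map = new H.Map(document.getElementById('map'), tileLayer, {
  zoom: 10,
  center: { lat: 52.5, lng: 13.4 }
});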
There is a capture functionality in the HERE JavaScript API.
I made a quick draft of how you can easily export the captured canvas element of the map, with anything rendered on top of it:
// overlay element containing captured canvas element
var captureBackground = document.createElement('div'),
    bgStyle = captureBackground.style;
bgStyle.width = '100%';
bgStyle.position = 'absolute';
bgStyle.top = '0';
bgStyle.bottom = '0';
bgStyle.background = 'rgba(0,0,0,0.7)';
bgStyle.padding = '30px';
bgStyle.zIndex = 1000;
captureBackground.addEventListener('click', function(e) {
  document.body.removeChild(this);
});

// capture the map:
map.capture(function(capturedCanvas) {
  // remove previously added canvas from the overlay
  captureBackground.innerHTML = '';
  captureBackground.appendChild(capturedCanvas);
  document.body.appendChild(captureBackground);
}, [], 50, 50, 700, 700);
For more information see https://developer.here.com/documentation/maps/topics_api/h-map.html#h-map__capture
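To get from the captured canvas to an actual image file (the printing/export part of the question), the standard Canvas API can be used on the canvas that capture() hands back; a minimal sketch:
// Turn the captured canvas into a downloadable PNG file.
map.capture(function(capturedCanvas) {
  var link = document.createElement('a');
  link.href = capturedCanvas.toDataURL('image/png');  // standard HTMLCanvasElement API
  link.download = 'map-export.png';                   // suggested file name
  link.click();
}, [], 50, 50, 700, 700);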

Is there a way of saving a Google Doc so it has the same unique ID as an existing doc?

I have a need to create a copy of a Google Doc with a specific ID - not the "friendly" name like MyDocument, but the name that makes it unique in the GoogleSphere - the one like 1x_tfTiA9-b5UwAf3k2fg6y6hyZSYQIvhSNn-saaDs4c.
Here's the scenario why I would like to do this:
I have a newsletter which is in the form of a Google Doc. The newsletter is published on a website by embedding the document in a web page inside an <iframe> element. Also published in the same way is a "large print" version of the newsletter that is the same, apart from the fact that the default font size is 24pt, rather than 11pt.
I am trying to automate the production of the large print version, but in such a way that the unique ID of the large print document doesn't change, so that the embedded <iframe> for it still works.
I have experimented in the past with Google Apps Scripts routines for creating a deep copy of a document but the deep copy functions don't play nicely with images and tables, so I could never get a complete copy. If I could implement a "Save As" function, where the operand was an existing unique ID, I think this would do what I want.
Anyone know how I might do this?
I delved into this, attempting to set the id of the "large print" version of the file in a variety of ways:
via copy(): var copiedFile = Drive.Files.copy(lpFile, spFile.id, options);
which yields the error:
Generated IDs are not currently supported for copy requests
via insert(): var newFile = Drive.Files.insert(lpFile, doc.getBlob(), options);
which yields the error:
Generated IDs are not supported for Google Docs formats
via update(): Drive.Files.update(lpFile, lpFile.id, doc.getBlob(), options);
This method successfully updates the "large print" file from the small print file. This particular line, however, uses the Document#getBlob() method, which has issues with formatting and rich content from the Document. In particular, as you mention, images and tables are not preserved (among other things, like changes to the font, etc.). Compare pre with post.
It seems that - if the appropriate method of exporting formatted byte content from the document can be found - the update() method has the most promise. Note that the update() method in the Apps Script client library requires a Blob input (i.e. doc.getBlob().getBytes() will not work), so the fundamental limitation may be the (lack of) support for rich format information in the produced Blob data. With this in mind, I tried a couple methods for obtaining "formatted" Blob data from the "small print" file:
via Document#getAs(mimetype): Drive.Files.export(lpFile, lpFile.id, doc.getAs(<type>), options);
which fails for seemingly sensible types with the errors:
MimeType.GOOGLE_DOCS: We're sorry, a server error occurred. Please wait a bit and try again.
MimeType.MICROSOFT_WORD: Converting from application/vnd.google-apps.document to application/vnd.openxmlformats-officedocument.wordprocessingml.document is not supported.
These errors do make sense, since the internal Google Docs MimeType is not exportable (you can't "download as" this filetype since the data is kept however Google wants to keep it), and the documentation for Document#getAs(mimeType) indicates that only PDF export is supported by the Document Service. Indeed, attempting to coerce the Blob from doc.getBlob() with getAs(mimeType) fails, with the error:
Converting from application/pdf to application/vnd.openxmlformats-officedocument.wordprocessingml.document is not supported.
using DriveApp to get the Blob, rather than the Document Service:
Drive.Files.update(lpFile, lpFile.id, DriveApp.getFileById(smallPrintId).getBlob(), options);
This has the same issues as doc.getBlob(), and likely uses the same internal methods.
using DriveApp#getAs has the same errors as Document#getAs
Considering the limitation of the native Apps Script implementations, I then used the advanced service to obtain the Blob data. This is a bit trickier, since the File resource returned is not actually the file, but metadata about the file. Obtaining the Blob with the REST API requires exporting the file to a desired MimeType. We know from above that the PDF-formatted Blob fails to be properly imported, since that is the format used by the above attempts. We also know that the Google Docs format is not exportable, so the only one left is MS Word's .docx.
var blob = getBlobViaURL_(smallPrintId, MimeType.MICROSOFT_WORD);
Drive.Files.update(lpFile, lpFile.id, blob, options);
where getBlobViaURL_ implements the workaround from this SO question for the (still-broken) Drive.Files.export() Apps Script method.
This method successfully updates the existing "large print" file with the exact content from the "small print" file - at least for my test document. Given that it involves downloading content instead of using the internal, already-present data available to the export methods, it will likely fail for larger files.
Testing Script:
function copyContentFromAtoB() {
  var smallPrintId = "some id";
  var largePrintId = "some other id";
  // You must first enable the Drive "Advanced Service" before this will work.
  // Get the file metadata of the to-be-updated file.
  var lpFile = Drive.Files.get(largePrintId);
  // View available options on the relevant Drive REST API pages.
  var options = {
    updateViewedDate: false,
  };
  // Ideally this would use Drive.Files.export, but there is a bug in the Apps Script
  // client library's implementation: https://issuetracker.google.com/issues/36765129
  var blob = getBlobViaURL_(smallPrintId, MimeType.MICROSOFT_WORD);
  // Replace the contents of the large print version with that of the small print version.
  Drive.Files.update(lpFile, lpFile.id, blob, options);
}

// Below function derived from https://stackoverflow.com/a/42925916/9337071
function getBlobViaURL_(id, mimeType) {
  var url = "https://www.googleapis.com/drive/v2/files/" + id + "/export?mimeType=" + mimeType;
  var resp = UrlFetchApp.fetch(url, {
    headers: { Authorization: 'Bearer ' + ScriptApp.getOAuthToken() }
  });
  return resp.getBlob();
}

Using bilateral filter with PCL

I'm trying to use the bilateral filter (not fast bilateral filter) with PCL 1.7, as I have an unordered point cloud. I have been able to make other PCL code snippets work (so it's not the conversion code), and I can't find documentation on how to make this particular filter work. I'm trying the following code, but I get a memory access violation when calling applyFilter:
pcl::PointCloud<pcl::PointXYZI>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZI> ());
// convert from custom format to pcl format
convert(world_pts, left_intensities, cloud);
pcl::search::KdTree<pcl::PointXYZI>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZI>);
pcl::PointCloud<pcl::PointXYZI> cloud_filtered;
pcl::BilateralFilter<pcl::PointXYZI> fbFilter;
fbFilter.setInputCloud(cloud);
fbFilter.setHalfSize(1.0);
fbFilter.setStdDev(0.2);
fbFilter.applyFilter(cloud_filtered);
The function
void pcl::BilateralFilter<PointT>::applyFilter(PointCloud &output)
expects a reference to the output point cloud, and that's why you get a memory access violation.
Use
fbFilter.applyFilter(*cloud_filtered);
instead ;)

How to get features from a vector layer in OpenLayers 3

I am trying to get the features from my vector layer. The vector layer is built from a GeoJSON document loaded via GeoServer. I tried vector.features, but in vain. Could anyone help with this?
The architecture of OL3 distinguishes between a layer and its source. So to get access to the features of a layer, you first have to access the source of the layer. This is done via:
var source = layer.getSource();
In the case of a vector layer, you will then get an ol.source.Vector object. From this object you can access your features via:
var features = source.getFeatures();
You can also access specific features via getFeatureById(id) or getFeaturesAtCoordinate(coordinate). For more information, see the API documentation: http://openlayers.org/en/v3.4.0/apidoc/ol.source.Vector.html
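Putting that together, a minimal sketch (assuming map is an existing ol.Map and the vector layer is its first layer; adjust the layer lookup to your own setup):
var layer = map.getLayers().item(0);   // however you reference your vector layer
var source = layer.getSource();        // ol.source.Vector
var features = source.getFeatures();   // array of ol.Feature

features.forEach(function (feature) {
  console.log(feature.getId(), feature.getProperties());
});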
this.vectorLayer = new VectorLayer({
  source: new VectorSource({
    url: 'https://json.extendsclass.com/bin/d4f21e1dd8ed',
    format: new GeoJSON()
  }),
});
this.maps.addLayer(this.vectorLayer);

var source = this.vectorLayer.getSource();
var features = source.getFeatures();
console.log(features, 'FeatureData');
When I get the features from the source, it returns an empty array, even though one feature is available at my URL.

Store Locator API + Geolocation

I am trying to create a Store Locator with the Google API, very similar to the one in the Google examples here:
http://storelocator.googlecode.com/git/examples/panel.html
However, I've hit a wall trying to get the Store Locator API to obtain the user's position through geolocation, so that when I click on Get Directions in the info window I get directions based on the user's position, instead of having to type my address into the "Where are you?" panel box.
From the documentation, what I have seen is that geolocation is a boolean in the view options that is set to true by default, but this does not solve my problem.
Does anyone have any idea on how to do this?
It seems that the Google Code page you linked no longer exists, so I can't give any further insight into exactly what you want to build.
However, going by your description: luckily, I made a similar site a few months ago, Grocery Store Near Me.
The concept is the HTML5 (actually W3C) Geolocation API, which is now built into most modern browsers. It is an API through which you can obtain the user's location (latitude, longitude, altitude, and accuracy).
You can call it simply with:
navigator.geolocation.getCurrentPosition(success, error, geo_options);
success and error are callback functions that you define.
In my case, the function looks like this:
function success(position) {
  var latitude = position.coords.latitude;
  var longitude = position.coords.longitude;
  // use the lat and long to call a function that fetches an API through AJAX
  fetchStoreData(latitude, longitude);
}
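For completeness, the error callback and the geo_options argument might look like the following (the option values are just examples):
function error(err) {
  // err.code / err.message describe why the position could not be obtained
  console.warn('Geolocation failed: ' + err.message);
}

var geo_options = {
  enableHighAccuracy: true,  // prefer GPS where available
  timeout: 10000,            // give up after 10 seconds
  maximumAge: 60000          // accept a cached position up to 1 minute old
};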
In the Store Locator schema (like mine on Grocery Store Near Me), Geolocation is used to obtain the user's latitude and longitude. The latitude and longitude are then sent to the server via AJAX to get data about nearby store locations.
The server exposes an API that accepts latitude and longitude as parameters, then fetches the store data (either from a database or from another external API like Foursquare), which you can then display either in a list or on a map.
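For illustration, the fetchStoreData function from the success callback above could be a plain AJAX call to your own backend; the endpoint and response shape below are hypothetical:
// Hypothetical endpoint and response shape - adjust to your own backend.
function fetchStoreData(latitude, longitude) {
  var url = '/api/stores/nearby?lat=' + latitude + '&lng=' + longitude;
  fetch(url)
    .then(function (response) { return response.json(); })
    .then(function (stores) {
      // render the returned stores in a list or as map markers
      stores.forEach(function (store) {
        console.log(store.name, store.lat, store.lng);
      });
    });
}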
