Unable to download ERA5-Land hourly data via the Google Earth Engine platform recently - google-earth-engine

Recently I found that I can't download ERA5-Land hourly data via Google Earth Engine; the following code only returns null. But if I replace the first line with "var era51 = ee.ImageCollection('ECMWF/ERA5/DAILY')", it returns the images. Is there something wrong with the ERA5-Land hourly dataset?
Here is the code:
var era51 = ee.ImageCollection("ECMWF/ERA5_LAND/HOURLY")
  .filterDate('2018-01-01', '2018-02-02')
  .select('total_precipitation');
function exportImageCollection(imgCol) {
  var indexList = imgCol.reduceColumns(ee.Reducer.toList(), ["system:index"])
    .get("list");
  indexList.evaluate(function(indexs) {
    for (var i = 0; i < indexs.length; i++) {
      var image = imgCol.filter(ee.Filter.eq("system:index", indexs[i])).first();
      print(image);
    }
  });
}
exportImageCollection(era51);
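If the intention is to actually export each image rather than just print it, a minimal sketch of the same loop with one export task per image could look like this (the folder name, description prefix and scale are assumptions, not from the original post):
function exportImageCollection(imgCol) {
  var indexList = imgCol.reduceColumns(ee.Reducer.toList(), ["system:index"]).get("list");
  indexList.evaluate(function(indexs) {
    for (var i = 0; i < indexs.length; i++) {
      var image = imgCol.filter(ee.Filter.eq("system:index", indexs[i])).first();
      // One Drive export task per hourly image; folder name and scale are placeholders.
      Export.image.toDrive({
        image: image,
        description: 'era5_land_' + indexs[i],
        folder: 'era5_land_hourly',
        scale: 11132,      // ~0.1 degree, the native ERA5-Land resolution
        maxPixels: 1e13
        // region: someGeometry  // optionally restrict the export area
      });
    }
  });
}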

EDIT: Turns out that it was a problem on the Dataset side. It is now fixed. Happy downloading :)
Same problem here: as of last week my script worked smoothly, but today it just fails. I usually work with the Python API, and I've been able to download Landsat 5-8 images with no problem.
I tried the Earth Engine JavaScript API to download the same area both via a URL (image.getDownloadURL()) and to Drive (Export.image.toDrive()), but both approaches also failed.
Tests in Javascript API:
var imgcol = ee.ImageCollection("ECMWF/ERA5_LAND/HOURLY");
var subset = imgcol.filterDate("2010-09-11T10", "2010-09-11T11").filterBounds(geometry);
var img = subset.map(function(x){ return x.clip(geometry); }).first();
Map.addLayer(subset.select("surface_latent_heat_flux"));
var url = img.getDownloadURL({
  name: 'single_band',
  bands: ['surface_latent_heat_flux'],
  region: geometry
});
print(url); // url is printed but fails in the download
Export.image.toDrive({
  image: img,
  description: 'LET',
  folder: 'ee_test',
  region: geometry,
  scale: 9000
});
Could it be an error on the Earth Engine end?
EDIT: Turns out that it was a problem on the Dataset side. It is now fixed. Happy downloading :)

Related

How to export a table as a Google Sheet in Google App Maker using a button

I've looked extensively and tried to modify multiple code samples found in different Stack Overflow posts as well as template documents in Google App Maker, but cannot for the life of me get an export and an email function to work.
UserRecords table:
This is the area where the data is collected and reviewed, the populated table:
These are the data fields I am working with:
This is what the exported Sheet looks like when I go through the motions and do an export through the Deployment tab:
Lastly, this is the email page that I've built based on tutorials and examples I've seen:
What I've learned so far (based on the circles I'm going round in):
Emails seem mostly straightforward, but I don't need to send a message, just an attachment with a subject, similar to this code:
function sendEmail_(to, subject, body) {
  var emailObj = {
    to: to,
    subject: subject,
    htmlBody: body,
    noReply: true
  };
  MailApp.sendEmail(emailObj);
}
I'm not sure how to change the "body" to the exported document.
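The closest I can think of (a sketch only, not verified; it converts the exported spreadsheet to a PDF blob so it can be attached, and the function name is just a placeholder) would be something like:
function sendSheetByEmail_(to, subject, spreadsheetId) {
  // Fetch the exported spreadsheet from Drive and convert it to PDF for the attachment.
  var blob = DriveApp.getFileById(spreadsheetId).getAs(MimeType.PDF);
  MailApp.sendEmail({
    to: to,
    subject: subject,
    body: 'Export attached.',
    attachments: [blob],
    noReply: true
  });
}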
To export and view the Sheet straight from a button click, the closest I've found to a solution is in the Document Sample, but the references in that code only speak to components on the page. I'm not sure how to modify it to use the table, or what to change to get a sheet instead of a doc.
This may seem trivial to some but I'm a beginner and am struggling to wrap my head around what I'm doing wrong. I've been looking at this for nearly a week. Any help will be greatly appreciated.
In its simplest form you can do a Google Sheet export with the following server script (this is based on a model called Employees):
function exportEmployeeTable() {
  // If only certain roles or individuals can perform this action, include proper validation here.
  var query = app.models.Employees.newQuery();
  var results = query.run();
  var fields = app.metadata.models.Employees.fields;
  var data = [];
  var header = [];
  for (var i in fields) {
    header.push(fields[i].displayName);
  }
  data.push(header);
  for (var j in results) {
    var rows = [];
    for (var k in fields) {
      rows.push(results[j][fields[k].name]);
    }
    data.push(rows);
  }
  if (data.length > 1) {
    var ss = SpreadsheetApp.create('Employee Export');
    var sheet = ss.getActiveSheet();
    sheet.getRange(1, 1, data.length, header.length).setValues(data);
    // Here you could return the URL for your spreadsheet back to your client
    // by setting up a success handler and a failure handler.
    return ss.getUrl();
  } else {
    throw new app.ManagedError('No Data to export!');
  }
}
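For example, the client side of that success handler could be (a sketch; App Maker client scripts call server scripts via google.script.run):
// Client script, e.g. in the button's onClick event handler.
google.script.run
  .withSuccessHandler(function(url) {
    // Open the newly created spreadsheet in a new tab.
    window.open(url, '_blank');
  })
  .withFailureHandler(function(err) {
    console.error(err);
  })
  .exportEmployeeTable();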

How to scrape any type of website

I am working on scraping websites and have tried many technologies for it.
First I used PHP cURL as a scraping tool and got reasonably far with it, but then I hit a problem: PHP cURL couldn't scrape websites that use Ajax to load their content/data. That's what stopped me scraping with PHP.
After some decent research I found another solution that goes beyond the limitation of Ajax-loaded websites and is very powerful and pleasant to use: PhantomJS and CasperJS. I have scraped a lot of sites with them.
The problem I face with these tools is that they are controlled through the command line; for example, when you want to run PhantomJS/CasperJS code, you need to run it from the command line. That is my basic problem. What I need is to write the code in PhantomJS/CasperJS and have a webpage with an admin panel from which I can control these scripts. Currently I am scraping career/job listing websites, and I want to automate these tools to scrape the sites after a given interval, to stay up to date with the employers' sites that post new jobs.
For instance, I have separate code for each website; I manually execute each file through the command line, wait for it to finish scraping, and then continue with the next one, and so on. What I want instead is to write a script in JavaScript (preferably Node.js, but not compulsory) that executes the scraper code at a specific interval and scrapes all of the websites in the background.
I can do the automation, that's not a problem; the problem is that I am unable to connect PhantomJS/CasperJS with the website. I even tried SpookyJS, which connects PhantomJS/CasperJS with Node.js, but unfortunately it doesn't work for me and it's a lot messier.
Is there any other tool that is as powerful as these two and that I can easily interact with through a webpage?
Continuing my own research on scraping sites, I was unable to find a perfect solution. But a powerful solution I came up with is to use the phantom module with Node.js. You can find this module here.
For the installation guide, follow this documentation. PhantomJS is used asynchronously in Node.js, which makes it a lot easier to get the results, and it is easy to interact with using Express on the server side and Ajax or Socket.io on the client side to enhance the functionality.
Below is the code I came up with:
const phantom = require('phantom');
const ev = require('events');
const event = new ev.EventEmitter();

var MAIN_URL,
  TOTAL_PAGES,
  TOTAL_JOBS,
  PAGE_DATA_COUNTER = 0,
  PAGE_COUNTER = 0,
  PAGE_JOBS_DETAILS = [],
  IND_JOB_DETAILS = [],
  JOB_NUMBER = 1,
  CURRENT_PAGE = 1,
  PAGE_WEIGHT_TIME,
  CLICK_NEXT_TIME,
  CURRENT_WEBSITE,
  CURR_WEBSITE_LINK,
  CURR_WEBSITE_NAME,
  CURR_WEBSITE_INDEX,
  PH_INSTANCE,
  PH_PAGE;

function InitScrap() {
  // Initiate the data
  this.init = async function(url) {
    MAIN_URL = url;
    PH_INSTANCE = await phantom.create();
    PH_PAGE = await PH_INSTANCE.createPage();
    console.log("Scraper initiated, please wait...");
    return "success";
  };

  // Load the basic page first
  this.loadPage = async function(pageLoadWait) {
    var status = await PH_PAGE.open(MAIN_URL);
    if (status == "success") {
      console.log("Page Loaded . . .");
      if (pageLoadWait !== undefined && pageLoadWait !== null && pageLoadWait !== false) {
        let p = new Promise(function(res, rej) {
          setTimeout(async function() {
            console.log("Page After 5 Seconds");
            PH_PAGE.render("new.png");
            TOTAL_PAGES = await PH_PAGE.evaluate(function() {
              return document.getElementsByClassName("flatten pagination useIconFonts")[0].textContent.match(/\d+/g)[1];
            });
            TOTAL_JOBS = await PH_PAGE.evaluate(function() {
              return document.getElementsByClassName("jobCount")[0].textContent.match(/\d+/g)[0];
            });
            res({
              p: TOTAL_PAGES,
              j: TOTAL_JOBS,
              s: true
            });
          }, pageLoadWait);
        });
        return await p;
      }
    }
  };

  // Note: this.evaluatePage and this.evaluateJobsDetails, used in ScrapData below,
  // are not shown in this excerpt.
}

function ScrapData(opts) {
  var scrap = new InitScrap();
  scrap.init("https://www.google.com/").then(function(init_res) {
    if (init_res == "success") {
      scrap.loadPage(opts.pageLoadWait).then(function(load_res) {
        console.log(load_res);
        if (load_res.s === true) {
          scrap.evaluatePage().then(function(ev_page_res) {
            console.log("Page Title : " + ev_page_res);
            scrap.evaluateJobsDetails().then(function(ev_jobs_res) {
              console.log(ev_jobs_res);
            });
          });
        }
      });
    }
  });
  return scrap;
}

module.exports = {
  ScrapData
};
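To cover the scheduling and admin-panel part of the question, the exported ScrapData function could be driven from Express, for example (a rough sketch; the file name scraper.js, the route, port and interval are placeholders):
const express = require('express');
const { ScrapData } = require('./scraper'); // the module shown above (assumed file name)

const app = express();

// Trigger a scrape on demand from an admin page.
app.get('/admin/scrape', function(req, res) {
  ScrapData({ pageLoadWait: 5000 });
  res.send('Scraping started');
});

// Re-run the scraper automatically every 6 hours.
setInterval(function() {
  ScrapData({ pageLoadWait: 5000 });
}, 6 * 60 * 60 * 1000);

app.listen(3000, function() {
  console.log('Admin server listening on port 3000');
});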

Google Analytics dimension pagePathLevel more than 4

I have very long web page paths reported to Google Analytics:
/#/legends_g01/games/legends_g01_02_academy_i-170909-55/notes/1/dynamics
/#/legends_02_academy_i/games/legends_g01_02_academy_i-170912-64/notes/12/players
/#/legends_05/games/legends_05-170912-84/notes/22/players
/#/legends_g01_02_academy_i/games/legends_g01_02_academy_i-170919-78/notes/34/levels
I'm using the Core Reporting API to create a query where I need the metric ga:users with a dimension on the last path part (the 7th). The starting part of the path doesn't matter here and should be ignored.
So if there is ga:pagePathLevel7 then I can use
dimension: ga:pagePathLevel7
metrics: ga:users
And see the result like this:
dynamics: 34
players: 45
levels: 87
How can I do this without ga:pagePathLevel7?
It seems that I'm the only one here with such a problem.
As I failed to find a direct solution, I ended up adding custom dimensions to my Google Analytics property. I added dimensions for the last important path parts and changed the code on the site to supply this data together with the pageview URL.
import ReactGA from 'react-ga';

export const statDimension = (dimensionName, value) => {
  if (value) {
    let obj = {};
    obj[dimensionName] = value;
    ReactGA.set(obj);
  }
};

export const statPageView = (url, game_id, clip_num) => {
  if (!url) {
    url = window.location.hash;
  }
  // set game_id
  statDimension(STAT_DIM_GAME_ID, game_id);
  // set clip number
  statDimension(STAT_DIM_CLIP_NUM, clip_num);
  ReactGA.pageview(url);
  return null;
};
I use the react-ga npm module for submitting data to Google Analytics.
Now I can use custom dimensions together with filters on my URLs to get stats based on the parts of the path with depth > 4.
Maybe that's not an elegant solution, but it's a working one.
Hope this will be helpful for somebody like me.
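For reference, the Core Reporting API query against such a custom dimension might then look like this (a sketch only; the view ID and the dimension index, here dimension1, are placeholders for whatever you configured):
gapi.client.analytics.data.ga.get({
  'ids': 'ga:XXXXXXXX',          // your view (profile) ID
  'start-date': '30daysAgo',
  'end-date': 'today',
  'metrics': 'ga:users',
  'dimensions': 'ga:dimension1'  // the custom dimension holding the last path part
}).then(function(response) {
  console.log(response.result.rows); // e.g. [['dynamics', '34'], ['players', '45'], ...]
});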

Enter zipcode and display users within a certain radius

I have a Meteor app where product providers enter their zip code when registering.
This data is stored in users.profile.zipcode.
Flow:
1. Anyone visiting the site can enter a zip code in a search field.
2. A list of product providers with zipcodes within 10 kilometers of that zip code is displayed.
The app will be for Norwegian users to begin with, but will maybe be expanded to different countries in the future.
Can someone provide me with example code for how this can be done, I guess using the Google API or something similar? I'm pretty new to JavaScript, so a complete example would be very much appreciated. Hopefully using Meteor.publish and Meteor.subscribe, including the display of the data.
Thank you in advance!
First you will have to convert the ZIP code to coordinates. There is a zipcodes lib for the US and Canada only; if you're targeting another region/country, libs can easily be found on NPM.
For example, here is a Meteor method which accepts a form with a zipcode field:
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';
import zipcodes from 'zipcodes';

// Create `2dsphere` index
const collection = new Mongo.Collection('someCollectionName');
collection._ensureIndex({ zipLoc: '2dsphere' });

Meteor.methods({
  formSubmit(data) {
    const zipData = zipcodes.lookup(data.zipcode);
    // Save location as geospatial data:
    collection.insert({
      zipLoc: {
        type: "Point",
        coordinates: [zipData.longitude, zipData.latitude]
      }
    });
  }
});
To search within a radius, use the following code:
const searchRadius = 10000; // 10km in meters
const zip = 90210;
const zipData = zipcodes.lookup(zip);
collection.find({
  zipLoc: {
    $near: {
      $geometry: {
        type: "Point",
        coordinates: [zipData.longitude, zipData.latitude]
      },
      $maxDistance: searchRadius
    }
  }
});
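Since the question also mentions Meteor.publish and Meteor.subscribe, a rough sketch of the pub/sub wiring could look like this (the publication name, template name and search-field variable are placeholders; it reuses the collection and zipcodes from above):
// Server: publish providers near a given zip code.
Meteor.publish('providersNearZip', function(zip) {
  const zipData = zipcodes.lookup(zip);
  if (!zipData) return this.ready();
  return collection.find({
    zipLoc: {
      $near: {
        $geometry: { type: "Point", coordinates: [zipData.longitude, zipData.latitude] },
        $maxDistance: 10000 // 10 km
      }
    }
  });
});

// Client: subscribe with the zip code entered in the search field, then list the results.
Meteor.subscribe('providersNearZip', enteredZip); // enteredZip comes from your search input
Template.providerList.helpers({
  providers() {
    return collection.find(); // the published documents
  }
});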
Further reading:
zipcodes NPM library
MongoDB: Geospatial Queries
MongoDB: $near
MongoDB: $geometry
MongoDB: 2dsphere Indexes

Google Maps PlacesService, PlaceResult is only returning one (1) photo for the photos property array

Having an issue with the results from the Google Maps PlacesService. The resultant PlaceResult object is now only returning one photo in the photos property array. In the past this was not the case and up to 10 photos were returned. Is this a change?
Example code:
var request = {
  reference: place.reference
};
var callback = function(details, status) {
  if (status == google.maps.places.PlacesServiceStatus.OK) {
    alert("Number of photos: " + details.photos.length);
  }
};
var service = new google.maps.places.PlacesService(map);
service.getDetails(request, callback);
fiddle showing an example
In a previous answer that has been deleted I said that it must be a bug on the Google side.
I just found this issue :
https://code.google.com/p/gmaps-api-issues/issues/detail?id=6825&sort=-id&colspec=ID%20Type%20Status%20Introduced%20Fixed%20Summary%20Stars%20ApiType%20Internal
If I am right, the Google Maps PlacesService is the JavaScript version of the Google Places API, so the backend code might be the same; that could explain why we see the same results (same bug?).
Hope this helps.
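As a side note, whatever number of photos does come back, each entry can be turned into an image URL with PlacePhoto.getUrl; a short sketch iterating over them (assuming the same details object as in the question):
if (details.photos) {
  details.photos.forEach(function(photo) {
    // getUrl returns a sized image URL for each photo.
    var imgUrl = photo.getUrl({ maxWidth: 400, maxHeight: 400 });
    console.log(imgUrl);
  });
}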
