I'm trying to get the restaurant's address from a website using IMPORTXML.
When I copy the XPath for the selected line (the street), I get this:
//*[@id="marmita-panel0-2"]/div/div[2]/p[2]
The formula I write in the Google Sheets cell is =IMPORTXML("URL Address","//*[@id='marmita-panel0-2']/div/div[2]/p[2]") and it returns blank.
The site is rendered by JavaScript on the client side, not on the server side, so you cannot retrieve the information with IMPORTXML. Fortunately, the data is contained in a JSON block inside the page source. So try:
function info() {
  var url = 'https://www.ifood.com.br/delivery/santo-andre-sp/hamburgueria-sabor-amigo-parque-das-nacoes/1d270c55-1158-49a7-8df4-f369402a07e0';
  // Fetch the raw page source.
  var source = UrlFetchApp.fetch(url).getContentText();
  // Pull the JSON-LD block that holds the restaurant data out of the source.
  var jsonString = source.split('<script type="application/ld+json">')[2].split('</script>')[0];
  var data = JSON.parse(jsonString);
  Logger.log(data.address.streetAddress);
  Logger.log(data.address.addressLocality);
}
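If you want the result directly in a cell instead of the log, the same logic can be wrapped in a custom function (a sketch only, assuming the JSON-LD block is still the third <script type="application/ld+json"> tag on that page; the GETADDRESS name is just an example):

// Custom function: use it in a cell as =GETADDRESS("https://www.ifood.com.br/delivery/...").
function GETADDRESS(url) {
  var source = UrlFetchApp.fetch(url).getContentText();
  // Same extraction as above: take the JSON-LD block from the page source.
  var jsonString = source.split('<script type="application/ld+json">')[2].split('</script>')[0];
  var data = JSON.parse(jsonString);
  return data.address.streetAddress + ', ' + data.address.addressLocality;
}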
I have a need to create a copy of a Google Doc with a specific ID - not the "friendly" name like MyDocument, but the name that makes it unique in the GoogleSphere - the one like 1x_tfTiA9-b5UwAf3k2fg6y6hyZSYQIvhSNn-saaDs4c.
Here's the scenario why I would like to do this:
I have a newsletter which is in the form of a Google Doc. The newsletter is published on a website by embedding the document in a web page inside an <iframe> element. Also published in the same way is a "large print" version of the newsletter that is the same, apart from the fact that the default font size is 24pt, rather than 11pt.
I am trying to automate the production of the large print version, but in such a way that the unique ID of the large print document doesn't change, so that the embedded <iframe> for it still works.
I have experimented in the past with Google Apps Script routines for creating a deep copy of a document, but the deep-copy functions don't play nicely with images and tables, so I could never get a complete copy. If I could implement a "Save As" function, where the operand was an existing unique ID, I think this would do what I want.
Anyone know how I might do this?
I delved into this, attempting to set the id of the "large print" version of the file in a variety of ways:
via copy(): var copiedFile = Drive.Files.copy(lpFile, spFile.id, options);
which yields the error:
Generated IDs are not currently supported for copy requests
via insert(): var newFile = Drive.Files.insert(lpFile, doc.getBlob(), options);
which yields the error:
Generated IDs are not supported for Google Docs formats
via update(): Drive.Files.update(lpFile, lpFile.id, doc.getBlob(), options);
This method successfully updates the "large print" file from the small print file. This particular line, however, uses the Document#getBlob() method, which has issues with formatting and rich content from the Document. In particular, as you mention, images and tables are not preserved (among other things, like changes to the font, etc.).
It seems that - if the appropriate method of exporting formatted byte content from the document can be found - the update() method has the most promise. Note that the update() method in the Apps Script client library requires a Blob input (i.e. doc.getBlob().getBytes() will not work), so the fundamental limitation may be the (lack of) support for rich format information in the produced Blob data. With this in mind, I tried a couple methods for obtaining "formatted" Blob data from the "small print" file:
via Document#getAs(mimetype): Drive.Files.export(lpFile, lpFile.id, doc.getAs(<type>), options);
which fails for seemingly sensible types with the errors:
MimeType.GOOGLE_DOCS: We're sorry, a server error occurred. Please wait a bit and try again.
MimeType.MICROSOFT_WORD: Converting from application/vnd.google-apps.document to application/vnd.openxmlformats-officedocument.wordprocessingml.document is not supported.
These errors do make sense, since the internal Google Docs MimeType is not exportable (you can't "download as" this filetype since the data is kept however Google wants to keep it), and the documentation for Document#getAs(mimeType) indicates that only PDF export is supported by the Document Service. Indeed, attempting to coerce the Blob from doc.getBlob() with getAs(mimeType) fails, with the error:
Converting from application/pdf to application/vnd.openxmlformats-officedocument.wordprocessingml.document is not supported.
using DriveApp to get the Blob, rather than the Document Service:
Drive.Files.update(lpFile, lpFile.id, DriveApp.getFileById(smallPrintId).getBlob(), options);
This has the same issues as doc.getBlob(), and likely uses the same internal methods.
using DriveApp#getAs has the same errors as Document#getAs
Considering the limitation of the native Apps Script implementations, I then used the advanced service to obtain the Blob data. This is a bit trickier, since the File resource returned is not actually the file, but metadata about the file. Obtaining the Blob with the REST API requires exporting the file to a desired MimeType. We know from above that the PDF-formatted Blob fails to be properly imported, since that is the format used by the above attempts. We also know that the Google Docs format is not exportable, so the only one left is MS Word's .docx.
var blob = getBlobViaURL_(smallPrintId, MimeType.MICROSOFT_WORD);
Drive.Files.update(lpFile, lpFile.id, blob, options);
where getBlobViaURL_ implements the workaround from this SO question for the (still-broken) Drive.Files.export() Apps Script method.
This method successfully updates the existing "large print" file with the exact content from the "small print" file - at least for my test document. Given that it involves downloading content instead of using the internal, already-present data available to the export methods, it will likely fail for larger files.
Testing Script:
function copyContentFromAtoB() {
  var smallPrintId = "some id";
  var largePrintId = "some other id";

  // You must first enable the Drive "Advanced Service" before this will work.
  // Get the file metadata of the to-be-updated file.
  var lpFile = Drive.Files.get(largePrintId);

  // View available options on the relevant Drive REST API pages.
  var options = {
    updateViewedDate: false,
  };

  // Ideally this would use Drive.Files.export, but there is a bug in the Apps Script
  // client library's implementation: https://issuetracker.google.com/issues/36765129
  var blob = getBlobViaURL_(smallPrintId, MimeType.MICROSOFT_WORD);

  // Replace the contents of the large print version with that of the small print version.
  Drive.Files.update(lpFile, lpFile.id, blob, options);
}
// Below function derived from https://stackoverflow.com/a/42925916/9337071
function getBlobViaURL_(id, mimeType) {
  var url = "https://www.googleapis.com/drive/v2/files/" + id + "/export?mimeType=" + mimeType;
  var resp = UrlFetchApp.fetch(url, {
    headers: { Authorization: 'Bearer ' + ScriptApp.getOAuthToken() }
  });
  return resp.getBlob();
}
I am trying to scrape data from a web page. Since the page has dynamic content, I used PhantomJS to handle it. But with the code I am using, I can only download the data currently shown on the web page. However, I need to input a date range and then submit it to get all the data I want.
Here is the code I used:
library(xml2)
library(rvest)

url <- "https://seffaflik.epias.com.tr/transparency/piyasalar/gop/ptf.xhtml"
connection <- "pr.js"

# Write a small PhantomJS script that dumps the rendered page source to pr.html.
writeLines(sprintf("var page = require('webpage').create();
var fs = require('fs');
page.open('%s', function(){
  console.log(page.content); // page source
  fs.write('pr.html', page.content, 'w');
  phantom.exit();
});", url), con = connection)

# 'path' is the directory containing the phantomjs binary.
system_input <- paste(path, "phantomjs", " ", connection, sep = "")
system(system_input)
Thanks to this code, I have the HTML output of the web page that is created dynamically.
And as I stated, I also need to enter a date range and submit it, but I couldn't achieve that.
The URL is: https://seffaflik.epias.com.tr/transparency/piyasalar/gop/ptf.xhtml
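One way to handle the date-range step (a rough sketch only; the #startDate, #endDate and #submitBtn selectors and the example dates below are placeholders, so inspect the page to find the real input and button elements) is to have the PhantomJS script fill in the fields and trigger the submit before dumping the rendered content:

var page = require('webpage').create();
var fs = require('fs');

page.open('https://seffaflik.epias.com.tr/transparency/piyasalar/gop/ptf.xhtml', function(){
  // Fill the date inputs and click the submit button inside the page context.
  // The selectors and dates here are placeholders -- replace them with the real ones.
  page.evaluate(function(){
    document.querySelector('#startDate').value = '01.01.2018';
    document.querySelector('#endDate').value = '31.01.2018';
    document.querySelector('#submitBtn').click();
  });
  // Give the page some time to re-render after the submit, then save it.
  window.setTimeout(function(){
    fs.write('pr.html', page.content, 'w');
    phantom.exit();
  }, 5000);
});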
I am trying to figure out how I can get database information involving the email column, make an array with all of the emails and then use the "button" feature to populate the "To:" part of an email page.
Any help is appreciated. I'm very new at this, so pointing me to where to get the info would be great. Thanks.
I recommend running a server script that queries the datasource that has the emails. The script will look something like this:
function getEmails(){
  var query = app.models.<yourmodel>.newQuery();
  var results = query.run();
  var allEmails = [];
  if(results.length > 0){
    for(var i = 0; i < results.length; i++){
      var uniqueEmail = results[i].<emailfieldname>;
      allEmails.push(uniqueEmail);
    }
  }
  return allEmails.join();
}
Then add a script to the button widget "onclick" event that will run the server script and manipulate the returned data. Something similar to this:
function populateToField(response){
  <widget path>.text/value = response;
}
google.script.run.withSuccessHandler(populateToField).getEmails();
The above widget path would be the path to the "To:" widget, which can be a text box, text area, etc. In my case, I used a text area and the path was this "widget.parent.descendants.TextArea1.value"
I hope this helps. If you have more questions, just let me know! :)
P.S. Please don't forget to review the official documentation for a better and more detailed explanation.
You can also use projections to get a list of items (emails) from your datasource, as per this article: https://developers.google.com/appmaker/ui/binding#projections
Projections let you access properties from records in a datasource's items list. Access projections with the ..projections.. option in the advanced binding wizard, or use the projection operator .. in a binding path. For example, for an Employees datasource with a name property, @datasources.Employee.items..name returns a list of all employees' names.
You can check the "Call Scripts" guide, which shows you how to send emails using App Maker; it is available here: https://developers.google.com/appmaker/tutorials/call-scripts/
To use projections while following the above guide, in Step #2 under "Create the UI" ("Add a text box for the recipient"):
c. In the Property Editor, instead of entering the "To" value in the Textbox widget, you can select Binding and bind the widget to the datasource projections following this path: datasource > items > ..projections.. > Email (the name of the datasource field where the emails are located)
For example, a projection will look like this: @datasource.items..email
This will automatically bind all emails that are available in your datasource to the text box widget. Then you can complete the guide and emails will be sent to all the email addresses in your datasource. Hope this helps.
I have set up a page using the Google Maps Places Autocomplete service. An address is entered into the text field controlled by Places Autocomplete, and this works fine.
When I click on the only option presented in the Google drop-down list, the place_changed handler is invoked (as it should be) and the following code is executed to get the place result object:
var place = m_autocomplete.getPlace();
Examining the place object in the debugger reveals that only one property is defined; the name property, below is the place object as a JSON string:
{"name":"2701 Riverside Dr, Ottawa, ON K1A 0B1, Canada"}
The place object is missing all other properties (ie: no geometry, no address_components, etc) as outlined in the Google maps documentation for a PlaceResult object.
So I added more code as a fallback, calling the geocoder service directly when the place result object is incomplete, i.e.:
var geocoder = new google.maps.Geocoder();
geocoder.geocode({ 'address': addr }, geocoder_callback_handler);
where addr contains the value from the text field (confirmed via the debugger). The geocoder callback function returns a status of ZERO_RESULTS.
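For reference, the setup described above looks roughly like this (a sketch; the getPlace() and geocode() calls are the ones shown earlier, while the surrounding listener wiring is assumed):

// Sketch of the place_changed handler plus the geocoder fallback described above.
m_autocomplete.addListener('place_changed', function () {
  var place = m_autocomplete.getPlace();
  if (!place.geometry) {
    // Only "name" is populated, so fall back to geocoding the text field value.
    var addr = place.name;
    var geocoder = new google.maps.Geocoder();
    geocoder.geocode({ 'address': addr }, geocoder_callback_handler);
    return;
  }
  // Normal case: place.geometry, place.address_components, etc. are available.
});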
Questions:
1 - Why does the autocomplete service present an address in the drop down list that results in an incomplete place result object?
2 - Why does the geocoder not recognize an address presented by the autocomplete service?
Any advice would be appreciated.
I am changing an image through Flex. Every time I change it, it is saved into the server directory with the same name (which I am referencing to display it). So when I refresh my page, my browser doesn't send a new request to the server since the image is already cached, so I don't get the new image. Tip: when I clear the browser history, it comes back with the new image.
You can try appending a timestamp to the image source each time you make a new request, which makes the request look different to the browser.
Example :
var src:String = "image.png";
// Append the current time so the browser treats each request as unique.
src = src + "?" + new Date().getTime().toString();
Since you mentioned that you're refreshing the browser, then I assume that your embedded SWF file will also need to be refreshed.
When you embed your SWF, you need to add a parameter that is unique each time (e.g. a datetime stamp):
var mySWF = "swf/YourEmbeddedFlashFile.swf?guid=" + rnd();
and declare a js function:
function rnd() {
  return String((new Date()).getTime()).replace(/\D/gi, '');
}