I'm writing an extension to push our API back to our server's doc store - but it seems that XMLHttpRequest isn't available, perhaps because of the missing window object.
Is there an alternative within Paw?
Bonus question: what is Paw built on top of, and what do we and don't we have access to compared to the browser?
The actual XMLHttpRequest isn't available in Paw, but instead we have NetworkHTTPRequest (doc here).
Here's an example:
var httpRequest = new NetworkHTTPRequest();
httpRequest.requestUrl = "http://httpbin.org/post";
httpRequest.requestMethod = "POST";
httpRequest.setRequestHeader('Content-Type', 'application/json')
httpRequest.requestBody = JSON.stringify({
name: 'Paw'
})
httpRequest.send()
console.log('HTTP ' + httpRequest.responseStatusCode)
var response = JSON.parse(httpRequest.responseBody)
console.log(JSON.stringify(response, null, 2));
To answer the bonus part, Paw is a native Cocoa application written in Objective-C (with a few parts in C), but Extensions run on JavaScriptCore, which is also the engine that powers Safari on Mac and iOS. However, it's plain JavaScript and doesn't expose all the methods that are available in web browsers. It also supports some ES6 features (see Safari 9 on the ECMAScript 6 compatibility table).
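For example, the same request could be written with some of that ES6 syntax (const and template literals) - this is just a sketch reusing the NetworkHTTPRequest API shown above:

// Same POST as above, rewritten with const and a template literal.
const request = new NetworkHTTPRequest();
request.requestUrl = "http://httpbin.org/post";
request.requestMethod = "POST";
request.setRequestHeader('Content-Type', 'application/json');
request.requestBody = JSON.stringify({ name: 'Paw' });
request.send();
console.log(`HTTP ${request.responseStatusCode}`);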
I am currently developing an app in Kotlin that uses the Face API of Azure. To identify faces on images I need to send the image to the server. I use Retrofit 2.7.0 for the REST requests. Whenever I google how to send an image with Retrofit, I come across the @Multipart annotation. For example here or here. None of the questions state why they do it. I found that apparently multipart is the standard way to send files via HTTP.
However, I do not seem to need it for my request. The simple approach seems to work just fine. Seeing as everyone else seems to use multipart, I am probably missing something. So my question is: why would I need to use @Multipart over the simple approach?
I currently use this approach:
interface FaceAPI {
    @Headers(value = ["$CONTENT_TYPE_HEADER: $CONTENT_TYPE_OCTET_STREAM"])
    @POST("face/v1.0/detect")
    suspend fun detectFace(
        @Query("recognitionModel") recognitionModel: String = RECOGNITION_MODEL_2,
        @Query("detectionModel") detectionModel: String = DETECTION_MODEL_2,
        @Query("returnRecognitionModel") returnRecognitionModel: Boolean = false,
        @Query("returnFaceId") returnFaceId: Boolean = true,
        @Query("returnFaceLandmarks") returnFaceLandmarks: Boolean = false,
        @Header(HEADER_SUBSCRIPTION_KEY) subscriptionKey: String = SubscriptionKeyProvider.getSubscriptionKey(),
        @Body image: RequestBody
    ): Array<DetectResponse>
}
And then I call it like this:
suspend fun detectFaces(image: InputStream): Array<DetectResponse> {
    return withContext(Dispatchers.IO) {
        val bytes = image.readAllBytes()
        val body = bytes.toRequestBody(CONTENT_TYPE_OCTET_STREAM.toMediaTypeOrNull(), 0, bytes.size)
        val faceApi = ApiFactory.createFaceAPI()
        faceApi.detectFace(image = body)
    }
}
This code works for images up to the 6 MB that Azure supports.
If you:
aren't generating the request by submitting an HTML form (which has native support for multipart but not for raw files)
don't need to convey multiple pieces of data (e.g. other form fields)
… then there is no need to use multipart.
Given the prevalence of multipart (due to its history with HTML form support), there are more server-side data-handling libraries that can handle it than there are for raw file uploads, so it may be easier to use multipart with some server-side environments.
I have a need to create a copy of a Google Doc with a specific ID - not the "friendly" name like MyDocument, but the name that makes it unique in the GoogleSphere - the one like 1x_tfTiA9-b5UwAf3k2fg6y6hyZSYQIvhSNn-saaDs4c.
Here's the scenario why I would like to do this:
I have a newsletter which is in the form of a Google Doc. The newsletter is published on a website by embedding the document in a web page inside an <iframe> element. Also published in the same way is a "large print" version of the newsletter that is the same, apart from the fact that the default font size is 24pt, rather than 11pt.
I am trying to automate the production of the large print version, but in such a way that the unique ID of the large print document doesn't change, so that the embedded <iframe> for it still works.
I have experimented in the past with Google Apps Scripts routines for creating a deep copy of a document but the deep copy functions don't play nicely with images and tables, so I could never get a complete copy. If I could implement a "Save As" function, where the operand was an existing unique ID, I think this would do what I want.
Anyone know how I might do this?
I delved into this, attempting to set the id of the "large print" version of the file in a variety of ways:
via copy(): var copiedFile = Drive.Files.copy(lpFile, spFile.id, options);
which yields the error:
Generated IDs are not currently supported for copy requests
via insert(): var newFile = Drive.Files.insert(lpFile, doc.getBlob(), options);
which yields the error:
Generated IDs are not supported for Google Docs formats
via update(): Drive.Files.update(lpFile, lpFile.id, doc.getBlob(), options);
This method successfully updates the "large print" file from the small print file. This particular line, however, uses the Document#getBlob() method, which has issues with formatting and rich content from the Document. In particular, as you mention, images and tables are not preserved (among other things, like changes to the font, etc.). Compare pre with post.
It seems that - if the appropriate method of exporting formatted byte content from the document can be found - the update() method has the most promise. Note that the update() method in the Apps Script client library requires a Blob input (i.e. doc.getBlob().getBytes() will not work), so the fundamental limitation may be the (lack of) support for rich format information in the produced Blob data. With this in mind, I tried a couple of methods for obtaining "formatted" Blob data from the "small print" file:
via Document#getAs(mimetype): Drive.Files.export(lpFile, lpFile.id, doc.getAs(<type>), options);
which fails for seemingly sensible types with the errors:
MimeType.GOOGLE_DOCS: We're sorry, a server error occurred. Please wait a bit and try again.
MimeType.MICROSOFT_WORD: Converting from application/vnd.google-apps.document to application/vnd.openxmlformats-officedocument.wordprocessingml.document is not supported.
These errors do make sense, since the internal Google Docs MimeType is not exportable (you can't "download as" this filetype since the data is kept however Google wants to keep it), and the documentation for Document#getAs(mimeType) indicates that only PDF export is supported by the Document Service. Indeed, attempting to coerce the Blob from doc.getBlob() with getAs(mimeType) fails, with the error:
Converting from application/pdf to application/vnd.openxmlformats-officedocument.wordprocessingml.document is not supported.
using DriveApp to get the Blob, rather than the Document Service:
Drive.Files.update(lpFile, lpFile.id, DriveApp.getFileById(smallPrintId).getBlob(), options);
This has the same issues as doc.getBlob(), and likely uses the same internal methods.
using DriveApp#getAs has the same errors as Document#getAs
Considering the limitation of the native Apps Script implementations, I then used the advanced service to obtain the Blob data. This is a bit trickier, since the File resource returned is not actually the file, but metadata about the file. Obtaining the Blob with the REST API requires exporting the file to a desired MimeType. We know from above that the PDF-formatted Blob fails to be properly imported, since that is the format used by the above attempts. We also know that the Google Docs format is not exportable, so the only one left is MS Word's .docx.
var blob = getBlobViaURL_(smallPrintId, MimeType.MICROSOFT_WORD);
Drive.Files.update(lpFile, lpFile.id, blob, options);
where getBlobViaURL_ implements the workaround from this SO question for the (still-broken) Drive.Files.export() Apps Script method.
This method successfully updates the existing "large print" file with the exact content from the "small print" file - at least for my test document. Given that it involves downloading content instead of using the internal, already-present data available to the export methods, it will likely fail for larger files.
Testing Script:
function copyContentFromAtoB() {
  var smallPrintId = "some id";
  var largePrintId = "some other id";

  // You must first enable the Drive "Advanced Service" before this will work.
  // Get the file metadata of the to-be-updated file.
  var lpFile = Drive.Files.get(largePrintId);

  // View available options on the relevant Drive REST API pages.
  var options = {
    updateViewedDate: false,
  };

  // Ideally this would use Drive.Files.export, but there is a bug in the Apps Script
  // client library's implementation: https://issuetracker.google.com/issues/36765129
  var blob = getBlobViaURL_(smallPrintId, MimeType.MICROSOFT_WORD);

  // Replace the contents of the large print version with that of the small print version.
  Drive.Files.update(lpFile, lpFile.id, blob, options);
}

// Below function derived from https://stackoverflow.com/a/42925916/9337071
function getBlobViaURL_(id, mimeType) {
  var url = "https://www.googleapis.com/drive/v2/files/" + id + "/export?mimeType=" + mimeType;
  var resp = UrlFetchApp.fetch(url, {
    headers: { Authorization: 'Bearer ' + ScriptApp.getOAuthToken() }
  });
  return resp.getBlob();
}
What is the JSON format to set aspects on a folder or document in Alfresco via the REST API?
You need to send a POST request to the following URL (the same endpoint applies to Alfresco 4.1.5 and Alfresco 5):
your_host/alfresco/s/slingshot/doclib/action/aspects/node/workspace/SpacesStore/{nodeUUID}
with the following body:
{
  "added": ["abc:doc"],
  "removed": []
}
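For example, a minimal sketch of sending that request from the browser with fetch - the host, credentials, and node UUID below are placeholders, and basic auth is shown only for brevity:

// Sketch: add the abc:doc aspect to a node via the slingshot web script.
// your_host, the credentials and nodeUUID are placeholders.
var nodeUUID = "REPLACE-WITH-NODE-UUID";
var url = "http://your_host/alfresco/s/slingshot/doclib/action/aspects/node/workspace/SpacesStore/" + nodeUUID;

fetch(url, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": "Basic " + btoa("admin:admin") // replace with real credentials or a ticket
  },
  body: JSON.stringify({ added: ["abc:doc"], removed: [] })
}).then(function (response) {
  return response.json();
}).then(function (result) {
  console.log(result);
});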
The preferred approach is to use CMIS rather than the internal slingshot web script. Using CMIS you can add an aspect in a standard way, and you can do it via the browser binding (JSON), the Atom Pub binding (XML), or Web Services.
You can use a CMIS client, such as one of the ones available from http://chemistry.apache.org, or you can do it using the raw binding directly over HTTP.
In CMIS 1.1 you add an aspect by adding its ID to the multi-value property named cmis:secondaryObjectTypeIds.
Here's a gist that shows what this looks like in Java: https://gist.github.com/jpotts/7242070
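If you'd rather hit the browser binding directly over HTTP instead of using a client library, a rough sketch looks like this - the endpoint path is the one Alfresco typically exposes for CMIS 1.1, and the object ID, credentials, and the P:abc:doc secondary type ID are assumptions to adapt to your repository. Note that the update replaces the whole cmis:secondaryObjectTypeIds list, so include any secondary types already applied to the object:

// Sketch: add an aspect by updating cmis:secondaryObjectTypeIds via the
// CMIS 1.1 browser binding. URL, credentials and IDs are placeholders.
var objectId = "REPLACE-WITH-CMIS-OBJECT-ID";
var url = "http://your_host/alfresco/api/-default-/public/cmis/versions/1.1/browser/root" +
    "?objectId=" + encodeURIComponent(objectId);

var form = new URLSearchParams();
form.append("cmisaction", "update");
form.append("propertyId[0]", "cmis:secondaryObjectTypeIds");
// Multi-valued property: one indexed value per secondary type to keep or add.
form.append("propertyValue[0][0]", "P:abc:doc");

fetch(url, {
  method: "POST",
  headers: { "Authorization": "Basic " + btoa("admin:admin") }, // replace with real auth
  body: form
}).then(function (response) {
  return response.json();
}).then(function (result) {
  console.log(result);
});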
You need to create a custom web script. The code of the web script will look like the example below. You can use the following link to learn about web scripts:
https://wiki.alfresco.com/wiki/Web_Scripts
var props = new Array(1);
props["cm:template"] = document.nodeRef;
document.addAspect("cm:templatable", props);
props = new Array(1);
props["cm:lockIsDeep"] = true;
document.addAspect("cm:lockable", props);
props = new Array(1);
props["cm:hits"] = 1;
document.addAspect("cm:countable", props);
I'm testing the routing service in API 3.0 and I can't find the attribute "direction" in the maneuver. This attribute exists in API 2.5; it indicates the direction of the instruction, for example "forward", "straight", "right", etc.
Does anybody know if there is an attribute that indicates the direction of the instruction in API 3.0?
Thanks.
As discussed in the migration guide, there is a fundamental shift in the use of services between the 2.x and 3.0 HERE Maps APIs for JavaScript - previously the Manager objects decided a fixed format for the request to the underlying REST APIs and encapsulated the response, whereas now the full range of parameters can (and should) be set by the developer.
In the routing case the question is not so much "What can the 3.0 API do?" as "How was the REST request fixed by the 2.x API and how can I mimic the parts of that request that I need?".
Looking at the Legacy API playground simple routing example, the underlying REST request is:
http://route.cit.api.here.com/routing/7.2/calculateroute.json?routeattributes=shape&maneuverattributes=all&jsonAttributes=1&waypoint0=geo!52.516,13.388&waypoint1=geo!52.517,13.395&language=en-GB&mode=shortest;car;traffic:default;tollroad:-1&app_id=APP_ID&app_code=TOKEN...
This can be reproduced precisely in the 3.x API with the following:
var router = platform.getRoutingService(),
    routeRequestParams = {
      routeattributes: 'shape',
      maneuverattributes: 'all',
      jsonAttributes: '1',
      waypoint0: '52.516,13.388',
      waypoint1: '52.517,13.395',
      language: 'en-GB',
      mode: 'shortest;car;traffic:default;tollroad:-1'
    };
router.calculateRoute(...);
The next question here is: what parameters do you really need for your application? The list for the calculateRoute endpoint of the underlying REST Routing 7.2 API includes the description of maneuverattributes, which shows how to obtain directions - with maneuverattributes=...,direction
So it may be possible to reduce the routeRequestParams to something like:
var routeRequestParams = {
  routeattributes: 'shape',
  maneuverattributes: 'position,length,direction',
  ...etc...
So in summary, you'll need to consult the REST Routing API documentation to define what you need first, before passing those parameters into the query of the Maps API for JavaScript calculateRoute() call.
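For illustration, here's a sketch of a complete 3.x call that requests the direction attribute and reads it from each maneuver - the callback names are placeholders, and the response layout assumed here is the standard Routing 7.2 JSON format:

// Sketch: request the direction attribute explicitly and log it per maneuver.
var router = platform.getRoutingService(),
    routeRequestParams = {
      routeattributes: 'shape',
      maneuverattributes: 'position,length,direction',
      waypoint0: '52.516,13.388',
      waypoint1: '52.517,13.395',
      mode: 'shortest;car;traffic:default;tollroad:-1'
    };

function onResult(result) {
  // The service passes the raw Routing 7.2 JSON response through.
  var maneuvers = result.response.route[0].leg[0].maneuver;
  maneuvers.forEach(function (m) {
    console.log(m.direction);
  });
}

function onError(error) {
  console.log('Routing request failed:', error);
}

router.calculateRoute(routeRequestParams, onResult, onError);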
I'm developing a PhoneGap application using jQuery Mobile. It's a very basic application; its purpose is to show information about a big organization in Spanish and English. On the first page the application shows 2 options, Spanish and English. If the user selects Spanish, the information displayed must be in Spanish, and vice versa.
Using a SQLite DB will probably give you some problems on Windows Phone since it is not yet supported (see PhoneGap Storage).
There is the File storage option too, and raw JSON files as well.
The way I do it is to create a language-specific JSON file to hold all my strings - in this case english.json and spanish.json. Structure the JSON like:
{
  "help": "Help",
  "ok": "Okay"
}
On the first page of your app when the user clicks the Spanish button for instance it should set a value in localStorage.
localStorage.setItem("lang", "spanish");
Then on the second page, once you get the "deviceready" event, you should load the correct JSON file using XHR.
var langStrings;

var request = new XMLHttpRequest();
request.open("GET", localStorage.getItem("lang") + ".json", true);
// Call a function when the state changes.
request.onreadystatechange = function () {
  if (request.readyState == 4) {
    if (request.status == 200 || request.status == 0) {
      langStrings = JSON.parse(request.responseText);
    }
  }
};
request.send();
Now whenever you want to use a translated string you get it from langStrings.
langStrings.ok;
Make sense?
For persistence I successfully used HTML5 Local Storage.
It works on Android, iOS and Windows Phone 7 (I tried it on these platforms).
I use it like this.
For i18n you can use any JavaScript i18n library. I created my own simple solution.
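For illustration, a minimal sketch of what such a home-grown solution might look like - the dictionaries and the t() helper below are hypothetical, not taken from a specific library:

// Hypothetical minimal i18n helper: one dictionary per language, the chosen
// language persisted in localStorage, and a lookup function with a fallback.
var translations = {
  english: { help: "Help", ok: "Okay" },
  spanish: { help: "Ayuda", ok: "Aceptar" }
};

function t(key) {
  var lang = localStorage.getItem("lang") || "english";
  var strings = translations[lang] || {};
  // Fall back to the key itself if no translation exists.
  return strings[key] || key;
}

// Usage: after localStorage.setItem("lang", "spanish"), t("help") returns "Ayuda".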