Why use multipart to upload an image?

I am currently developing an app in Kotlin that uses the Azure Face API. To identify faces in images I need to send the image to the server. I use Retrofit 2.7.0 for the REST requests. Whenever I google about sending an image with Retrofit, I come across the @Multipart annotation. For example here or here. None of the questions state why they do it. I found that multipart is apparently the standard way to send files over HTTP.
However, I do not seem to need it for my request. The simple approach appears to work just fine. Seeing as everyone else seems to use multipart, I am probably missing something. So my question is: why would I need to use multipart over the simple approach?
I currently use this approach:
interface FaceAPI {
    @Headers(value = ["$CONTENT_TYPE_HEADER: $CONTENT_TYPE_OCTET_STREAM"])
    @POST("face/v1.0/detect")
    suspend fun detectFace(
        @Query("recognitionModel") recognitionModel: String = RECOGNITION_MODEL_2,
        @Query("detectionModel") detectionModel: String = DETECTION_MODEL_2,
        @Query("returnRecognitionModel") returnRecognitionModel: Boolean = false,
        @Query("returnFaceId") returnFaceId: Boolean = true,
        @Query("returnFaceLandmarks") returnFaceLandmarks: Boolean = false,
        @Header(HEADER_SUBSCRIPTION_KEY) subscriptionKey: String = SubscriptionKeyProvider.getSubscriptionKey(),
        @Body image: RequestBody
    ): Array<DetectResponse>
}
And then I call it like this:
suspend fun detectFaces(image: InputStream): Array<DetectResponse> {
    return withContext(Dispatchers.IO) {
        val bytes = image.readAllBytes()
        val body = bytes.toRequestBody(CONTENT_TYPE_OCTET_STREAM.toMediaTypeOrNull(), 0, bytes.size)
        val faceApi = ApiFactory.createFaceAPI()
        faceApi.detectFace(image = body)
    }
}
This code works for images up to the 6 MB that Azure supports.

If you:
- aren't generating the request by submitting an HTML form (which has native support for multipart but not for raw files), and
- don't need to convey multiple pieces of data (e.g. other form fields),
then there is no need to use multipart.
Given the prevalence of multipart (due to its history of HTML form support), there are more server-side data-handling libraries that can parse it than there are for raw file bodies, so multipart may be easier to work with in some server-side environments.
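For contrast, here is a minimal browser-side sketch of the kind of multipart request HTML forms produce. This is not Retrofit code, and the endpoint and field names are made up, but it shows the main thing multipart buys you: several pieces of data travelling in one request body.
const fileInput = document.querySelector("input[type=file]");
const form = new FormData();
form.append("image", fileInput.files[0]); // the binary file part
form.append("userId", "12345");           // extra form fields ride along in the same body
form.append("caption", "Team photo");

// The browser sets "Content-Type: multipart/form-data; boundary=..." automatically.
fetch("https://example.com/upload", { method: "POST", body: form });
If all you ever send is the image itself, as in the question, a raw application/octet-stream body is simpler and perfectly valid.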

Related

Evernote iOS SDK fetchResourceByHashWith throws exception

Working with the Evernote iOS SDK 3.0, I would like to retrieve a specific resource from a note using fetchResourceByHashWith.
This is how I am using it. Just for this example, to be 100% sure the hash is correct, I first download the note with a single resource using fetchNote, and then request that resource by its unique hash using fetchResourceByHashWith (the hash looks correct when I print it):
ENSession.shared.primaryNoteStore()?.fetchNote(withGuid: guid, includingContent: true, resourceOptions: ENResourceFetchOption.includeData, completion: { note, error in
    if error != nil {
        print(error)
        seal.reject(error!)
    } else {
        let hash = note?.resources[0].data.bodyHash
        ENSession.shared.primaryNoteStore()?.fetchResourceByHashWith(guid: guid, contentHash: hash, options: ENResourceFetchOption.includeData, completion: { res, error in
            if error != nil {
                print(error)
                seal.reject(error!)
            } else {
                print("works")
                seal.fulfill(res!)
            }
        })
    }
})
Call to fetchResourceByHashWith fails with
Optional(Error Domain=ENErrorDomain Code=0 "Unknown error" UserInfo={EDAMErrorCode=0, NSLocalizedDescription=Unknown error})
The equivalent setup works in the Android SDK.
Everything else works so far in the iOS SDK (chunkSync, auth, getting notebooks, etc.), so this is not an issue with auth tokens.
It would be great to know whether this is an SDK bug or I am still doing something wrong.
Thanks
This is a bug in the SDK's "EDAM" Thrift client stub code. First the analysis and then your workarounds.
Evernote's underlying API transport uses a Thrift protocol with a documented schema. The SDK framework includes a layer of autogenerated stub code that is supposed to marshal input and output params correctly for each request and response. You are invoking the underlying getResourceByHash API method on the note store, which is defined per the docs to accept a string type for the contentHash argument. But it turns out the client is sending the hash value as a purely binary field. The service is failing to parse the request, so you're seeing a generic error on the client.
This could reflect evolution in the API definition, but more likely this has always been broken in the iOS SDK (getResourceByHash probably doesn't see a lot of usage). If you dig into the more recent Python version of the SDK, or indeed also the Java/Android version, you can see a different pattern for this method: it says it's going to write a string-type field, and then actually emits a binary one. Weirdly, this works. And if you hack up the iOS SDK to do the same thing, it will work, too.
Workarounds:
1. Best advice is to report the bug and just avoid this method on the note store. You can get resource data in different ways. First of all, you actually got all the data you needed in the response to your fetchNote call, i.e. let resourceData = note?.resources[0].data.body and you're good! You can also pull individual resources by their own guid (not their hash), using fetchResource (use note?.resources[0].guid as the param). Of course, you may really want to use the access-by-hash pattern. In that case...
2. You can hack in the correct protocol behavior. In the SDK files, which you'll need to build as part of your project, find the ObjC file called ENTProtocol.m. Find the method +sendMessage:toProtocol:withArguments.
It has one line like this:
[outProtocol writeFieldBeginWithName:field.name type:field.type fieldID:field.index];
Replace that line with:
[outProtocol writeFieldBeginWithName:field.name type:(field.type == TType_BINARY ? TType_STRING : field.type) fieldID:field.index];
Rebuild the project and you should find that your code snippet works as expected. This is a massive hack, however, and although I don't think any other note store methods will be impacted adversely by it, it's possible that other internal user store calls (or others) will suddenly start acting funny. You'd also have to maintain the hack through updates. It is probably better to report the bug and avoid the method until Evernote publishes a proper fix.

Is there a way of saving a Google Doc so it has the same unique ID as an existing doc?

I have a need to create a copy of a Google Doc with a specific ID - not the "friendly" name like MyDocument, but the name that makes it unique in the GoogleSphere - the one like 1x_tfTiA9-b5UwAf3k2fg6y6hyZSYQIvhSNn-saaDs4c.
Here's the scenario why I would like to do this:
I have a newsletter which is in the form of a Google Doc. The newsletter is published on a website by embedding the document in a web page inside an <iframe> element. Also published in the same way is a "large print" version of the newsletter that is the same, apart from the fact that the default font size is 24pt, rather than 11pt.
I am trying to automate the production of the large print version, but in such a way that the unique ID of the large print document doesn't change, so that the embedded <iframe> for it still works.
I have experimented in the past with Google Apps Scripts routines for creating a deep copy of a document but the deep copy functions don't play nicely with images and tables, so I could never get a complete copy. If I could implement a "Save As" function, where the operand was an existing unique ID, I think this would do what I want.
Anyone know how I might do this?
I delved into this, attempting to set the id of the "large print" version of the file in a variety of ways:
via copy(): var copiedFile = Drive.Files.copy(lpFile, spFile.id, options);
which yields the error:
Generated IDs are not currently supported for copy requests
via insert(): var newFile = Drive.Files.insert(lpFile, doc.getBlob(), options);
which yields the error:
Generated IDs are not supported for Google Docs formats
via update(): Drive.Files.update(lpFile, lpFile.id, doc.getBlob(), options);
This method successfully updates the "large print" file from the small print file. This particular line, however, uses the Document#getBlob() method, which has issues with formatting and rich content from the Document. In particular, as you mention, images and tables are not preserved (among other things, like changes to the font, etc.). Compare pre with post.
It seems that - if the appropriate method of exporting formatted byte content from the document can be found - the update() method has the most promise. Note that the update() method in the Apps Script client library requires a Blob input (i.e. doc.getBlob().getBytes() will not work), so the fundamental limitation may be the (lack of) support for rich format information in the produced Blob data. With this in mind, I tried a couple methods for obtaining "formatted" Blob data from the "small print" file:
via Document#getAs(mimetype): Drive.Files.export(lpFile, lpFile.id, doc.getAs(<type>), options);
which fails for seemingly sensible types with the errors:
MimeType.GOOGLE_DOCS: We're sorry, a server error occurred. Please wait a bit and try again.
MimeType.MICROSOFT_WORD: Converting from application/vnd.google-apps.document to application/vnd.openxmlformats-officedocument.wordprocessingml.document is not supported.
These errors do make sense, since the internal Google Docs MimeType is not exportable (you can't "download as" this filetype since the data is kept however Google wants to keep it), and the documentation for Document#getAs(mimeType) indicates that only PDF export is supported by the Document Service. Indeed, attempting to coerce the Blob from doc.getBlob() with getAs(mimeType) fails, with the error:
Converting from application/pdf to application/vnd.openxmlformats-officedocument.wordprocessingml.document is not supported.
using DriveApp to get the Blob, rather than the Document Service:
Drive.Files.update(lpFile, lpFile.id, DriveApp.getFileById(smallPrintId).getBlob(), options);
This has the same issues as doc.getBlob(), and likely uses the same internal methods.
using DriveApp#getAs has the same errors as Document#getAs
Considering the limitation of the native Apps Script implementations, I then used the advanced service to obtain the Blob data. This is a bit trickier, since the File resource returned is not actually the file, but metadata about the file. Obtaining the Blob with the REST API requires exporting the file to a desired MimeType. We know from above that the PDF-formatted Blob fails to be properly imported, since that is the format used by the above attempts. We also know that the Google Docs format is not exportable, so the only one left is MS Word's .docx.
var blob = getBlobViaURL_(smallPrintId, MimeType.MICROSOFT_WORD);
Drive.Files.update(lpFile, lpFile.id, blob, options);
where getBlobViaURL_ implements the workaround from this SO question for the (still-broken) Drive.Files.export() Apps Script method.
This method successfully updates the existing "large print" file with the exact content from the "small print" file - at least for my test document. Given that it involves downloading content instead of using the internal, already-present data available to the export methods, it will likely fail for larger files.
Testing Script:
function copyContentFromAtoB() {
    var smallPrintId = "some id";
    var largePrintId = "some other id";

    // You must first enable the Drive "Advanced Service" before this will work.
    // Get the file metadata of the to-be-updated file.
    var lpFile = Drive.Files.get(largePrintId);

    // View available options on the relevant Drive REST API pages.
    var options = {
        updateViewedDate: false,
    };

    // Ideally this would use Drive.Files.export, but there is a bug in the Apps Script
    // client library's implementation: https://issuetracker.google.com/issues/36765129
    var blob = getBlobViaURL_(smallPrintId, MimeType.MICROSOFT_WORD);

    // Replace the contents of the large print version with that of the small print version.
    Drive.Files.update(lpFile, lpFile.id, blob, options);
}
// Below function derived from https://stackoverflow.com/a/42925916/9337071
function getBlobViaURL_(id, mimeType) {
    var url = "https://www.googleapis.com/drive/v2/files/" + id + "/export?mimeType=" + mimeType;
    var resp = UrlFetchApp.fetch(url, {
        headers: { Authorization: 'Bearer ' + ScriptApp.getOAuthToken() }
    });
    return resp.getBlob();
}

How to set request headers asynchronously in typeahead/bloodhound

Environment:
- I am using typeahead/bloodhound for a search field in my mobile app (steroids/cordova).
- Every request from my app to the API needs to be signed and the signature added to the auth headers.
Obviously setting the headers in the ajax settings won't work as each request bloodhound sends will be different and require different signatures.
In my first implementation, I was using the beforeSend ajax setting to achieve this. Simply calculate the signature in that function and add it to the request headers.
However, this was not very secure, so I decided to place the secret and the signature calculation into a custom Cordova plugin's native code to be compiled. Not bulletproof, but a reasonable amount of security.
As Cordova plugins are asynchronous, beforeSend became useless in this case: the function completes before the signing and setting of the headers are done.
So, in summary, the question is: how can I asynchronously calculate and set those headers with typeahead/bloodhound?
OK, the solution seems to be to fork and hack. First, modify _getFromRemote to remove the need for beforeSend by adding a remote.headers option, similar to remote.replace, except that it returns a deferred object:
if (this.remote.headers) {
    $.when(
        this.remote.headers(url, query, this.remote.ajax)
    ).done(function(headers) {
        that.remote.ajax.headers = headers;
        deferred.resolve(that.transport.get(url, that.remote.ajax, handleRemoteResponse));
    });
} else {
    deferred.resolve(this.transport.get(url, this.remote.ajax, handleRemoteResponse));
}
and then modify the get function that uses this to handle the deferred:
if (matches.length < this.limit && this.transport) {
    var that = this; // capture the dataset; `this` is rebound inside the callback below
    cacheHitPromise = this._getFromRemote(query, returnRemoteMatches);
    cacheHitPromise.done(function(hit) {
        if (!hit) {
            (matches.length > 0 || !that.transport) && cb && cb(matches);
        }
    });
}
Now I'm free to use asynchronous native code to sign and set request auth headers :)
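For completeness, here is a sketch of how the forked remote.headers option might be wired up. signRequest is a hypothetical wrapper around the Cordova plugin call, and the header name is made up:
var engine = new Bloodhound({
    datumTokenizer: Bloodhound.tokenizers.whitespace,
    queryTokenizer: Bloodhound.tokenizers.whitespace,
    remote: {
        url: '/api/search?q=%QUERY',
        // Returns a deferred that resolves with the auth headers once the
        // native plugin has produced a signature for this specific request.
        headers: function(url, query, ajax) {
            var deferred = $.Deferred();
            signRequest(url, function(signature) { // hypothetical native plugin wrapper
                deferred.resolve({ 'X-Auth-Signature': signature });
            });
            return deferred.promise();
        }
    }
});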

Seeking not working in HTML5 audio tag

I have a lighttpd server running locally. If I load a static file on the server (through an html5 audio tag), it plays and seeks fine.
However, seeking doesn't work when running a dev server (web.py/CherryPy) or if I return the bytes via a defined action URL instead of as a static file. It won't load the duration either.
According to the "HTTP byte range requests" section on this Opera page, it has something to do with support for byte range requests/partial content responses. The content is treated as streaming instead.
What I don't understand is:
- If the browser has the whole file downloaded, surely it can display the duration, and surely it can seek.
- What I need to do on the web server to enable byte range requests (for non-static URLs).
Any advice would be most gratefully received.
Here's some web.py code to get you started (I just happened to need this as well and ran into your question):
## Experimental partial content support.
## Perhaps this shouldn't be enabled by default.
range_header = web.ctx.env.get('HTTP_RANGE')
if range_header is None:
    return result
total = len(result)
# Parse a header of the form "bytes=start-end".
# (Suffix ranges such as "bytes=-500" and multiple ranges are not handled here.)
_, r = range_header.split("=")
partial_start, partial_end = r.split("-")
start = int(partial_start)
if not partial_end:
    # Open-ended range such as "bytes=100-": serve through to the last byte.
    end = total - 1
else:
    end = int(partial_end)
chunksize = (end - start) + 1
web.ctx.status = "206 Partial Content"
web.header("Content-Range", "bytes %d-%d/%d" % (start, end, total))
web.header("Accept-Ranges", "bytes")
web.header("Content-Length", str(chunksize))
return result[start:end + 1]
Google tells me you have to use the staticFilter for byte ranges to work in CherryPy - but that is for static files only. Luckily this posting also includes pointers on how to do it for non-static data :-)
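As a quick client-side sanity check, you can request a byte range directly and inspect the response; a server with range support answers 206 Partial Content rather than 200 (the URL below is a placeholder):
fetch("http://localhost:8080/audio.mp3", { headers: { Range: "bytes=0-99" } })
    .then(function(res) {
        console.log(res.status);                       // 206 if ranges are honoured, 200 if not
        console.log(res.headers.get("Content-Range")); // e.g. "bytes 0-99/1234567"
        console.log(res.headers.get("Accept-Ranges")); // "bytes" when supported
    });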

Drupal node.save and JSONP

I am having an issue calling Drupal's node.save using MooTools' JSONP. Here is an example.
Here is my request:
callback: Request.JSONP.request_map.request_1
method: node.save
sessid: 123123123123123
node: {"type":"blog","title":"New Title","body":"This is the blog body"}
Here is my result:
HTTP/1.0 500 Internal Server Error
I got this working before, but I used AMFPHP and was able to send objects to Drupal. I am assuming this has to do with Drupal expecting an object, but since it is a GET request the node gets transformed into a string. Is there any way of getting around this without hacking the code?
Here is my code:
$('newBlogSubmit').addEvent('click', function() {
    var node = {
        type: "blog",
        title: "New Title",
        body: "This is the blog body"
    };
    var string = JSON.encode(node);
    string.escapeRegExp();
    var sessID = _sessID;
    DrupalService.getInstance().node_save(string, sessID, drupal_handleBlogSubmit);
});
My Drupal Service JS Code:
// NODE
DrupalService.prototype.node_save = function(node, sessid, callback) {
    var dataObj = {
        method: "node.save",
        sessid: sessid,
        node: node
    };
    DrupalService.getInstance().request(dataObj, callback);
};

// SEND REQUEST AND CALLBACK FUNCTION
DrupalService.prototype.request = function(dataObject, callback) {
    new JsonP('http://myDrupalSite.com/services/json', { data: dataObject, onComplete: callback }).request();
};
I am trying to connect the dots, but I am not too familiar with Drupal. I would guess all I need to do is turn the string back into an object. Any ideas where I should be looking, or whether there is an existing patch?
A first question could be why you use MooTools, since Drupal comes with jQuery and uses it extensively throughout its modules and in Drupal core itself.
Anyway, I don't know MooTools, so I can't help you there, but if your request is ending in an internal server error, you have a problem with your Drupal code or your JS code. So even if I knew exactly what you were doing, I couldn't tell you the problem without looking at the Drupal code behind your http://myDrupalSite.com/services/json callback.
In general, what you want to make sure is:
- You make a POST request, as Drupal will cache GETs, and the semantics here are that you are posting data - the node - to the server.
- Your data should be sent as POST params; this will make it end up in the PHP $_POST variable.
- Your callback should validate the data and act accordingly, creating a node when the data is intact. You don't need session IDs, since the script will have the same session the browser has.
I've answered a similar question in detail, which was about altering a field instead of saving a node, but much of the work is the same. You can take a look at that post, although it uses jQuery rather than MooTools.
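To make that concrete, here is a minimal sketch of the POST approach using jQuery. The endpoint path is the one from the question, and the response handling is illustrative only:
var node = {
    type: "blog",
    title: "New Title",
    body: "This is the blog body"
};

// Send the data as POST params so it ends up in PHP's $_POST on the Drupal side.
$.post("http://myDrupalSite.com/services/json", {
    method: "node.save",
    node: JSON.stringify(node)
}, function(response) {
    console.log("node saved", response);
});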
