Error downloading - Qt

When I try to start a download, the console returns the following error:
Frame load interrupted by policy change
Example: clicking "Start Download" produces the error above in the console preview.
Should I configure something in the Compiler or QWebSettings?

I figured it out.
In conventional WebKit browsers the console shows the download request as canceled: the page load is canceled before the request is handed off to the browser's download manager. QWebView has no such download manager, so you have to stop the load yourself and handle the download in your own slot.
Solution:
// Replace [QWebView] with your QWebView instance and [main class] with the
// class that owns the WebView and the slot below.
[QWebView]->page()->setForwardUnsupportedContent(true); // otherwise unsupportedContent() is never emitted
connect([QWebView]->page(), SIGNAL(unsupportedContent(QNetworkReply*)),
        this, SLOT(downloadContent(QNetworkReply*)));
...
void [main class]::downloadContent(QNetworkReply *reply)
{
    [QWebView]->stop();   // solution: stop loading the unsupported content
    /* download the reply here */
}
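A minimal sketch of the download step itself, assuming the class is a MainWindow and that writing the whole reply to a user-chosen file is enough; the names MainWindow and saveReply are mine, not from the original answer (needs <QFileDialog>, <QFileInfo>, <QFile> and <QNetworkReply>):
// Inside downloadContent(), after stop(): save the reply once it has finished.
connect(reply, SIGNAL(finished()), this, SLOT(saveReply()));

// Hypothetical slot that writes the finished reply to a file chosen by the user.
void MainWindow::saveReply()
{
    QNetworkReply *reply = qobject_cast<QNetworkReply*>(sender());
    if (!reply)
        return;

    const QString suggested = QFileInfo(reply->url().path()).fileName();
    const QString fileName  = QFileDialog::getSaveFileName(this, tr("Save file"), suggested);
    if (!fileName.isEmpty()) {
        QFile file(fileName);
        if (file.open(QIODevice::WriteOnly))
            file.write(reply->readAll());   // whole body in memory; fine for small files
    }
    reply->deleteLater();
}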

Edit: it is hard to tell without the proper backtrace I requested in the comments, but it looks like the warning might actually be harmless.
Original:
That's because the QWebView doesn't know what to do with your app.exe file -- it's not an HTML page or a text/plain document or a supported image, after all. The QWebView class is not a web browser; you apparently want to start a download of some file, but there's no full-blown download manager in that class. You will have to provide your own code for this -- the code will have to ask for a proper location to save it, etc.
You can start with QWebPage::setLinkDelegationPolicy and handle this particular click yourself.
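A hedged sketch of that approach; webView, handleLink and startDownload are assumed names, not part of the answer:
// Delegate all link clicks so your own code decides what happens to them.
webView->page()->setLinkDelegationPolicy(QWebPage::DelegateAllLinks);
connect(webView->page(), SIGNAL(linkClicked(QUrl)),
        this, SLOT(handleLink(QUrl)));

void MainWindow::handleLink(const QUrl &url)
{
    if (url.path().endsWith(".exe"))   // assumed check; adapt to the links you care about
        startDownload(url);            // hypothetical: your own download code
    else
        webView->load(url);            // let QWebView render ordinary pages
}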

Related

Robot Framework Browser Library: Save and access PDF file returned by server

In my robot framework script I use the browser library to open a webpage and click on a button to get a receipt in PDF.
The button does not contain the direct link to the PDF: when you click the button, it opens a new page and after ~0.5 second the page returns a PDF file generated by the server.
I was not able to get and access the generated file in the Robot Framework script.
My attempt #1
I thought the PDF file could be captured by downloading it with a promise but it failed:
Click    button.okButton
Switch Page    NEW
${dl_promise}    Promise To    Wait For Download    ./${SUITE_NAME}/downloads
${file_obj}=    Wait For    ${dl_promise}
File Should Exist    ${file_obj}[saveAs]
Failure: TimeoutError: page.waitForEvent: Timeout 20000ms exceeded while waiting for event "download"
Attempt #1bis
${dl_promise}    Promise To    Wait For Download    ./${SUITE_NAME}/downloads
Click    button.okButton
Switch Page    NEW
${file_obj}=    Wait For    ${dl_promise}
File Should Exist    ${file_obj}[saveAs]
Same failure (TimeoutError: page.waitForEvent: Timeout 20000ms exceeded while waiting for event "download")
My attempt #2
The new page source code is the following:
<html><head></head><body><embed name="9AF27FA0E167C8860EB51FD926BE211B" src="about:blank" type="application/pdf" internalid="9AF27FA0E167C8860EB51FD926BE211B"></body></html>
I thought it might be a local storage ID; am I right? So I tried the following:
Click    button.okButton
Switch Page    NEW
${sourceCode} =    Get Page Source
${storageItem} =    Get Regexp Matches    ${sourceCode}    (?<=(internalid="))(.*)(?=(">))
${myPDFfile} =    Local Storage Get Item    ${storageItem}
Log    ${myPDFfile}
But it seems this does not work either. Any idea how I should proceed? Thanks so much for your help and suggestions.
The keyword Promise To Wait For Download should be called before the triggering action, so the logic should be the following (see the sketch after this list):
Promise
Trigger download
Wait for promise
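In Browser-library terms the pattern looks roughly like this; the selector and download path are taken from the question, but this is a generic sketch of the order rather than a verified fix for this particular page:
${dl_promise}    Promise To    Wait For Download    ./${SUITE_NAME}/downloads
Click    button.okButton
${file_obj}=    Wait For    ${dl_promise}
File Should Exist    ${file_obj}[saveAs]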

IIS 7.0+ HTTP PUT Completes, but No File Saved

I'm struggling to figure out what exactly is happening. I am using GdPicture to save a scanned document through JavaScript, using their COM+ code and source project as my starting point. Long story short, their function issues an HTTP PUT command specifying the file name to be saved.
When I execute the command I see that the request reaches my server, and it even has the appropriate content size to include the PDF document. I also get a 200 response back in my browser, with no errors or anything... yet the PDF doesn't get saved. Is that because PUT isn't the right way to do this? I don't have the option to POST the file, because the transfer is wrapped in GdPicture's API.
I have done the following:
Ensured that IIS_IUSRS group has write permissions to the "Upload" virtual directory
Added a handler that specifically allows the PUT verb for "*.pdf"
Removed the StaticFileHandler for the "Upload" virtual directory
I apologize for the links, but I don't have 10 rep points yet.
PUT Request from Fiddler
Response
** Edit **
More information about GdPicture: I have already contacted them, and their function is not the problem. The implementation is as simple as:
var status = oGdViewer.SaveDocumentToPDF_2("http://domain.com/Annotation/Upload/" + FileName, "user", "pass");
Thanks!
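One thing worth noting for anyone hitting the same wall: mapping the PUT verb to a handler does not by itself persist anything; the handler has to write the request body to disk. A minimal sketch of such a handler, where PutUpload and the ~/Upload path are assumptions rather than anything from the question:
// Hypothetical PutUpload.ashx: saves the body of a PUT request into the Upload directory.
using System;
using System.IO;
using System.Web;

public class PutUpload : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        if (context.Request.HttpMethod != "PUT")
        {
            context.Response.StatusCode = 405;   // only PUT is handled here
            return;
        }

        // File name comes from the request path, e.g. /Annotation/Upload/scan.pdf
        string fileName = Path.GetFileName(context.Request.Path);
        string target   = context.Server.MapPath("~/Upload/" + fileName);

        using (FileStream fs = File.Create(target))
        {
            context.Request.InputStream.CopyTo(fs);   // persist the uploaded bytes
        }

        context.Response.StatusCode = 200;
    }

    public bool IsReusable { get { return false; } }
}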

Displaying KMZ files behind protected networks

I'm trying to display a KMZ file which resides in a password-protected folder served on a port other than 80. The URL looks like this:
http://localhost:8080/assets/data/3641
That will return a KMZ file with the valid MIME type, and I can save and open it in Google Earth if I access this link in the browser.
Google Earth's API has the following methods for displaying KMZ/KML:
KmlNetworkLink - you provide the URL of the KMZ/KML and then attach this object to the GE instance
parseKml() - you provide it a KML string, it gives you back a KmlFeature to attach
fetchKml() - you provide it a URL to a KML/KMZ, it attaches it for you
Another handy method is displayKml() from the Google Earth API Utility library, which uses fetchKml()
fetchKml()
My first attempt was to use fetchKml, but this gives no response - it fails silently. I'm surprised this is considered normal behaviour by the plugin (why doesn't it throw an exception, or provide a second callback to handle errors?). This method works fine if I provide it a sample kmz in the form:
http://localhost/somefile.kmz
I believe the issue is the fact that my first URL is password protected - it will redirect to a login screen if no login session is present, and I suspect that the Google Earth plugin doesn't share the same browser session as the browser - so it runs into a login screen and fails because it receives an HTML file instead of a KMZ/KML.
parseKml()
Pressing on undeterred, I made another API method to unzip the KMZ on the server side and return the KML string:
http://localhost:8080/assets/data/unzip/3641
The beauty of this method is that I write my own JavaScript to perform the GET request - it doesn't go through Google Earth, so the login session I have opened is used and the KMZ can be downloaded. The downside is that KMZs can contain images and music that the KML file can reference; as far as I can tell from the documentation, these can't be passed along with the KML string.
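A rough sketch of that approach, assuming ge is the GEPlugin instance created in the init callback; the URL is the unzip endpoint from above and error handling is omitted:
// Fetch the unzipped KML with the page's own session (so the login cookie is sent),
// then hand the string to the plugin instead of letting it fetch the URL itself.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://localhost:8080/assets/data/unzip/3641', true);
xhr.onload = function () {
    var kmlObject = ge.parseKml(xhr.responseText);   // returns a KmlFeature
    ge.getFeatures().appendChild(kmlObject);         // attach it to the Earth instance
};
xhr.send();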
KmlNetworkLink
My last attempt was to use KmlNetworkLink and KmlLink. This has the same effect as fetchKml - nothing happens.
UPDATE: Also, it will fail when using "https" without a valid certificate.
Yes, the issue is that the URL is password protected. You can get fetchKml() to give some indication of the error if you use it like so:
google.earth.fetchKml(ge, 'http://localhost:8080/assets/data/3641', finishFetchKml);

function finishFetchKml(kmlObject) {
    // check if the KML was fetched properly
    if (kmlObject) {
        // add the fetched KML to Earth
        currentKmlObject = kmlObject;
    } else {
        // setTimeout prevents a deadlock in some browsers
        setTimeout(function() {
            alert('Bad or null KML.');
        }, 0);
    }
}
Kml is designed to be a free open format - if you wish to use it privately on a secure system then you should look at using the enterprise version of the Google Earth Plugin.

Can I request scripts for use in a Spotify app?

I'm trying to use socket.io in my spotify app and the get request for [domain]/socket.io/socket.io.js keeps getting canceled. I've added the domain to the manifest and everything.
Thanks!
Try restarting Spotify. Your app's manifest.json file is loaded when you first view your app, and cached until you quit, even if you modify it.
Note: How external resource permissions work
In order to request external resources, your application needs to specify each domain it plans to connect to in its manifest.json file.
Add a line like this:
{
// ...
"RequiredPermissions": [ "http://*.spotify.com", "http://spotify.com", "http://test.example.com" ]
// ...
}
For the full details check out the Permissions section of the Spotify Apps API Guide.
I can add that when you use socket.io, it will try to initialize Flash to check whether Flash is available. So if you see a white box in Spotify (only on Windows), remove the swfobject initialization in socket.io.js on the Node server.

Error handling in ASHX code

I created an ASHX file and use it to handle async file uploads.
Since the site might not be hosted on our servers, I want to check for write permissions and delete permissions and supply the end user (site content editor in this case) with an error they can deal with.
I'm using Uploadify for the upload; I'm not sure, but I'm guessing this complicates returning a message that can be shown on the page, but maybe not.
I ended up using the C# code in the ASHX file to check the permissions on the directory and returned the different statuses as JSON objects:
context.Response.Write("{\"success\": false, \"message\": \"" + ex.Message + "\"}");
And in the client-side JS I just access response.message when response.success is false.
Everything works well.
Thank you!
Before the user is able to attempt an upload, try writing and reading a small file at the destination on the server (on the server side); if this fails, you can supply them with an appropriate message.
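A minimal sketch of that probe inside the ASHX handler, combined with the JSON status responses described above; UploadHandler and the ~/Uploads path are assumed names, not from the thread:
// Hypothetical handler: verify write/delete permissions with a throwaway file,
// then report the outcome as JSON that the client-side JS can inspect.
using System;
using System.IO;
using System.Web;

public class UploadHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "application/json";
        string uploadDir = context.Server.MapPath("~/Uploads");

        try
        {
            string probe = Path.Combine(uploadDir, Path.GetRandomFileName());
            File.WriteAllText(probe, "probe");   // fails here if write permission is missing
            File.Delete(probe);                  // fails here if delete permission is missing

            // ... save the real uploaded file here ...
            context.Response.Write("{\"success\": true}");
        }
        catch (Exception ex)
        {
            context.Response.Write(
                "{\"success\": false, \"message\": \"" + ex.Message.Replace("\"", "'") + "\"}");
        }
    }

    public bool IsReusable { get { return false; } }
}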
