I'm using ASP.NET WebClient.DownloadFile(url) to obtain images from the Image Servers of several of our clients. The 'url' is usually simple, like "http://somewhere.com/images/image01.jpg".
This works great for 99% of our clients. But one gives me "An existing connection was forcibly closed by the remote host" every time.
I tried using DownloadData() instead; same issue. I can get the image via a browser, but not with WebClient.
Does anyone have any recommendations?
David
Some servers look for specific user-agent strings to prevent bots and other leeching sites from downloading images. Check the user-agent that you're setting on the WebClient.
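For example, a quick sketch (the header values below are just examples, not magic strings; some servers also check the Referer):

```csharp
// Minimal sketch: set a browser-like User-Agent (and optionally Referer)
// before downloading. Header values are illustrative only.
using System.Net;

class ImageFetch
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.UserAgent] =
                "Mozilla/5.0 (Windows NT 10.0; Win64; x64)";   // example value
            client.Headers[HttpRequestHeader.Referer] =
                "http://somewhere.com/";                       // example value
            client.DownloadFile("http://somewhere.com/images/image01.jpg", "image01.jpg");
        }
    }
}
```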
It might be worth using something like the HttpFox FireFox addon to see exactly what the server is doing when you request that file as it might be doing something "odd" like setting/reading a tracking cookie before it will download (just a random example).
It won't fix the problem, but it might give you an idea why the WebClient isn't handling it.
I'm hosting a website that serves global regions, and recently a weird issue came up.
I've already checked other posts on the Internet, including the much-discussed Stack Overflow question Chrome net::ERR_HTTP2_PROTOCOL_ERROR 200 after a reconnect, but none of the answers helped.
The website is built on legacy ASP.NET Web Forms as a "website" project (not a web application project).
There's an important function that performs several processes once the user clicks a button on the website.
Let's say there are 100 lines of code in that function, and I've added some flags to log which steps have been hit and processed.
The weird situation is:
Only users in China are facing the issue (the website is not hosted in China).
Some users are on Firefox, and it returns the error shown below; in English it is "Secure Connection Failed".
I checked several posts, including the Firefox documentation, and there should be an error code on screen such as ssl_error_no_cypher_overlap, but there is nothing.
[Firefox error screenshot]
Other users are on Chrome-based browsers, which return:
[Chrome error screenshot]
Additionally, I checked the process logs from these user reports, and most of them do not finish all the code; in other words, if there are 100 lines of code, some of them just stopped at line 50.
The website has TLS 1.2 enabled, and HTTP/2 (h2) is in use when I check via the Chrome Network tab.
I'm wondering whether the client browser shutting down the connection for some reason could produce the result I see (the code stopping in the middle of the flow). In my opinion, once a request is posted to the server, the process should finish the entire flow no matter what the client does.
Any ideas or thoughts will be appreciated!
I was just dealing with that exact situation.
From what I read in various posts on the HTTP2_PROTOCOL_ERROR, I think what happens is the response is started but code problem(s) prevent the server from completing the response. The incomplete response gives the protocol error in Chrome, and, because it's over TLS, Firefox sees it as a security error. (I'd share links, but I've already closed all those windows - sorry.)
Somehow my code was preventing the server from completing the response without causing an exception.
I was able to track down the offending code by commenting out the body of every code-behind procedure on the page and then bringing them back one at a time.
Good luck to you!
I can't give you a concrete example, but in my case, there was no problem on the application side.
Has your in-house infrastructure engineer recently added or changed any settings?
For example, have WAF rules been added? You may want to check.
FYI
The question says it. Does anybody know the answer to this? We're running into problems when 3rd-party cookies are disabled.
If the browser does not accept cookies, the application server should maintain the session using a jsessionid passed in the URL. BlazeDS is aware of that, and it will also add the jsessionid to the AMF messages (on the client it will be read and added to the other requests).
If that's the case, you can check this post; there are links to a couple of articles. If you still receive the error after reading the articles (and applying the suggestions), it would be good to create a running test case (and I can take a look at it).
It is quite easy to update the interface by sending a jQuery AJAX request and updating the page with new content. But I need something more specific.
I want to send a response to clients without their having requested it, and update the content whenever the server has found something new, without the clients sending an AJAX request every time. When the server has new data, it sends a response to every client.
Is there any way to do this using HTTP or some specific functionality inside the browser?
Websockets, Comet, HTTP long polling.
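As a rough illustration of the long-polling option, here is a minimal sketch using HttpListener (the port, endpoint, timings and message source are all made up). The idea is that the server holds each request open until it has something new to say, and the client issues a fresh request as soon as it gets a reply.

```csharp
// Minimal HTTP long-polling sketch (illustrative only).
// The server holds each request open until new data appears or a timeout expires;
// clients simply issue a new request as soon as the previous one completes.
using System;
using System.Net;
using System.Text;
using System.Threading;

class LongPollServer
{
    static string latestMessage;               // hypothetical "new data" slot
    static readonly object gate = new object();

    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/poll/");   // made-up endpoint
        listener.Start();

        // Simulate the server producing new data every 10 seconds.
        new Thread(() =>
        {
            var i = 0;
            while (true)
            {
                Thread.Sleep(10000);
                lock (gate) { latestMessage = "update #" + ++i; }
            }
        }) { IsBackground = true }.Start();

        while (true)
        {
            var ctx = listener.GetContext();    // a client is now "parked" here
            ThreadPool.QueueUserWorkItem(_ =>
            {
                string payload = null;
                for (var waited = 0; waited < 30000; waited += 500)   // hold up to ~30 s
                {
                    lock (gate)
                    {
                        if (latestMessage != null) { payload = latestMessage; latestMessage = null; break; }
                    }
                    Thread.Sleep(500);
                }

                ctx.Response.StatusCode = payload != null ? 200 : 204;  // 204 = nothing new, re-poll
                var bytes = Encoding.UTF8.GetBytes(payload ?? "");
                ctx.Response.OutputStream.Write(bytes, 0, bytes.Length);
                ctx.Response.Close();
            });
        }
    }
}
```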
This is called server push (you will also find it under the name Comet). Search for those keywords and you will find plenty of examples, tools and so on. No special protocol is required for it.
Aaah! You are trying to break the principles of the web :) You see if the web was pure MVC (model-view-controller) the 'server' could actually send messages to the client(s) and ask them to update. The issue is that the server could be load balanced and the same request could be sent to different servers. Now if you were to send a message back to the client you'll have to know who all are connected to the server. Let's say the site is quite popular and you have about 100,000 people connecting to it every day. You'll actually have to store the IPs of each of them to know where on the internet they are located and to be able to "push" them a message.
Caveats:
What if they are no longer browsing your website? You see, currently there is no way to log out automatically when you close your browser. The server needs to check after a fixed timeout whether you have logged out (or you send a new nonce with every response to prevent the server from doing that check).
What about a system restart/crash etc? You'd lose all the IPs that you were keeping track of and you are back to square one - you have people connected to you but until you receive new requests you can't really "send" them data when they may be expecting it as per your model.
Let's take the example of Facebook's news feed or the "Most recent" link close to the top right: sometimes while you are browsing your wall you see the number next to "Most recent" go up, or a new feed item come to the top of your wall! It's the client sending periodic requests to the server to find out what was updated, rather than the other way round.
You see, it keeps it simple and restful. You may feel it's inefficient for the client to "poll" the server to pull the data and you'd prefer push, but the design of the server gets simplified :)
I suggest AJAX polling is the best way to go: you are distributing computation to the clients and keeping things simple (the KISS principle :)
Of course you can get around it, the question is, is it worth it?
Hope this helps :)
RFC 6202 might be a good read.
We currently have fairly robust error handling functionality in our ASP.Net application.
We log all errors to the database and to a text file on the server,
and we also send automated emails containing the error details to our support people.
This all happens on the server of course.
We would like to capture (and retrieve) an image of the client browser at the time the error occurred, to provide additional information for troubleshooting.
Is this at all possible?
If so what would be an elegant approach to this problem?
This is not technically impossible, but it is so impractical for nearly all purposes that it might as well be impossible. You would need a plugin running on the client's machine which can receive instructions from your error page to take the screenshot, connect to the server and upload it.
If your client screens have complex data which affects the state surrounding the exception, you should revisit your design to ensure all of that is recorded before it's sent to the client, so you can keep all relevant state tracked with a given exception.
Saying something is "impractical" is usually easier than actually trying to solve something that is difficult, but not technically impossible.
I have done some more research and have come across
an approach that allows one to get hold of the rendered HTML server-side.
Furthermore, there are ways to convert HTML to images.
I will implement the solution using a combination of the two.
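For what it's worth, a rough sketch of the first half of that approach is below: capturing the rendered HTML on the server by overriding Page.Render in Web Forms. The page class name and the logging hook are placeholders, and converting the HTML to an image would still need a separate rendering library.

```csharp
// Sketch: buffer the page's rendered HTML so it can be attached to an error report.
// The class name and the commented-out logging call are placeholders.
using System.IO;
using System.Web.UI;

public partial class ErrorProbePage : Page
{
    protected override void Render(HtmlTextWriter writer)
    {
        using (var buffer = new StringWriter())
        using (var bufferedWriter = new HtmlTextWriter(buffer))
        {
            base.Render(bufferedWriter);       // render the page into the buffer first
            string renderedHtml = buffer.ToString();

            // Hypothetical hook: store the markup alongside the exception details.
            // ErrorLogger.AttachRenderedHtml(renderedHtml);

            writer.Write(renderedHtml);        // still send the page to the client
        }
    }
}
```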
Capturing a client browser screenshot is not possible for security and privacy reasons. What you can (and IMHO should) do is capture the URL and the browser version, and try to reproduce the problem in the same environment.
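A small sketch of that, assuming the standard Application_Error hook in Global.asax (the ErrorLogger call is a hypothetical stand-in for the existing database/file/email pipeline):

```csharp
// Sketch: record the URL and browser details alongside the exception.
using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Error(object sender, EventArgs e)
    {
        Exception ex = Server.GetLastError();
        HttpRequest req = Request;

        string details =
            "Error: " + (ex != null ? ex.Message : "(none)") + Environment.NewLine +
            "URL: " + req.Url + Environment.NewLine +
            "User-Agent: " + req.UserAgent + Environment.NewLine +
            "Browser: " + req.Browser.Browser + " " + req.Browser.Version;

        // Hypothetical hook into the existing logging pipeline (DB / file / email).
        // ErrorLogger.Log(details);
    }
}
```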
Is HTTP partial GET a reliable mechanism? If it is, how come it seems like modern browsers still start from the beginning instead of resuming the download?
In my experience this feature is not ubiquitous across web servers, probably because it is not widely used by web clients. It is a bit like HTTP HEAD requests, which may or may not be implemented. As always, YMMV depending on the clients and servers involved.
The download-resumption mechanism is based on HTTP Range request headers that specify which part of the content you want (see here). I have not messed with this much in the last few years, so you may be better served doing a little more Google research. Here is a link to a blog post that talks about some of the latest developments regarding this feature.
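If it helps, here is a rough sketch of resuming a download with a Range header (the URL and file name are placeholders). The resume only works if the server answers with 206 Partial Content; otherwise the sketch falls back to rewriting the whole file.

```csharp
// Sketch: resume a download using an HTTP Range request.
// URL and file name are placeholders; assumes the server may support ranges.
using System.IO;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ResumeDownload
{
    static async Task Main()
    {
        const string url = "http://somewhere.com/files/big-file.bin";   // placeholder
        const string localPath = "big-file.bin";

        long existing = File.Exists(localPath) ? new FileInfo(localPath).Length : 0;

        using (var client = new HttpClient())
        using (var request = new HttpRequestMessage(HttpMethod.Get, url))
        {
            if (existing > 0)
                request.Headers.Range = new RangeHeaderValue(existing, null);   // "bytes=<existing>-"

            using (var response = await client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead))
            {
                // 206 Partial Content: the server honoured the range, so append to the file.
                // 200 OK: the server ignored the range, so start the file over.
                bool resumed = response.StatusCode == HttpStatusCode.PartialContent;
                using (var file = new FileStream(localPath, resumed ? FileMode.Append : FileMode.Create))
                using (var body = await response.Content.ReadAsStreamAsync())
                {
                    await body.CopyToAsync(file);
                }
            }
        }
    }
}
```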
Whenever I download big files with wget, I might interrupt them and resume with -c. I don't remember ever getting a corrupted file. Safari also lets you resume (instead of restart) a stopped download, and it works fine there too.
Yes, when done properly (validating the entity with an ETag, e.g. If-Match), it is reliable.