Windows 8 built-in WebDAV client ignores 401 Unauthorized - webdav

I create a WebDAV connection with the Windows 8 built-in WebDAV client (Microsoft-WebDAV-MiniRedir).
I have only read permission for the files and try to delete one.
Via right-click I can open the context menu and delete the file, although my WebDAV server returns 401 Unauthorized. The file disappears in Explorer as if it had been deleted.
If I close the Explorer window and open it again, the file is back, which is correct.
Why is the deletion not refused, and why doesn't the WebDAV client give me an error message like "401 Unauthorized"?
Here are the request and response.
Request:
DELETE https://xxx.yyy.zz/webdav/mysharedfolder/file1.txt HTTP/1.1
Connection: Keep-Alive
User-Agent: Microsoft-WebDAV-MiniRedir/6.3.9600
translate: f
Host: xxx.yyy.zz
Authorization: Basic dlk7uXNvcmt1QHdlYi5kZTpRd2VyMTIzNA==
Cookie: JSESSIONID=A7497F42472ECC676E44A90E3C5D5E7
Response:
HTTP/1.1 401 Unauthorized
Date: Thu, 13 Nov 2014 23:21:43 GMT
Server: Apache-Coyote/1.1
WWW-Authenticate: Basic realm="https://xxx.yyy.zz/webdav/mysharedfolder/file1.txt"
Content-Length: 0
Connection: close
Content-Type: text/plain; charset=UTF-8

A redirect on an OPTIONS request (or on any WebDAV request, actually) is suspicious, and I wouldn't assume Windows handles that correctly, so that might be something to look at. But I also vaguely remember encountering something similar with Win7 years ago. A workaround might be to return a different 4xx error code for the mini-redirector agent, as sketched below.
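The server in the question is Tomcat (Apache-Coyote), so this is only a sketch of the idea, written here as a Node/Express handler in TypeScript; the route, the permission check, and the choice of 403 are all assumptions:

import express from 'express';
import type { Request, Response } from 'express';

const app = express();

// Hypothetical ACL check - the share in the question is read-only.
function hasWritePermission(req: Request): boolean {
  return false;
}

function denyWrite(req: Request, res: Response): void {
  const ua = req.headers['user-agent'] ?? '';
  if (ua.includes('Microsoft-WebDAV-MiniRedir')) {
    // A 403 carries no WWW-Authenticate challenge, so the mini-redirector
    // reports the failure instead of pretending the DELETE succeeded.
    res.status(403).send('Forbidden');
  } else {
    res.status(401)
      .set('WWW-Authenticate', 'Basic realm="webdav"')
      .send('Unauthorized');
  }
}

app.delete('/webdav/*', (req, res) => {
  if (!hasWritePermission(req)) {
    denyWrite(req, res);
    return;
  }
  // ... perform the actual DELETE here, then answer 204 No Content ...
  res.sendStatus(204);
});

app.listen(8080);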

Related

Can't login to wp-admin after switching to SSL

I recently installed an SSL certificate for our organization's intranet WordPress site (HTTP to HTTPS), and now I'm unable to access the WordPress admin.
It gave me this error:
HTTP/1.1 400 Bad Request
Date: Thu, 18 Aug 2022 02:41:55 GMT
Server: Apache/2.4.46 (Win64) OpenSSL/1.1.1k PHP/8.0.3
Content-Length: 226
Connection: close
Content-Type: text/html; charset=iso-8859-1
Bad Request
Your browser sent a request that this server could not understand.
I've tried every solution from "Cant login to my wp-admin after switching to SSL", but none of them works.
Is there anything I'm missing? By the way, we are using an F5 to handle the SSL.
Try going to the DB and checking the wp_options table to see what the site URL is. Change it to HTTPS and try again; a sketch of that check is below.
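This is only a sketch (TypeScript with the mysql2 driver; the connection details and the default wp_ table prefix are assumptions, so adjust them, and back up the database first):

import mysql from 'mysql2/promise';

async function fixSiteUrl(): Promise<void> {
  const conn = await mysql.createConnection({
    host: 'localhost',      // assumption: your DB host
    user: 'wpuser',         // assumption
    password: 'secret',     // assumption
    database: 'wordpress',  // assumption
  });

  // Inspect the current values first.
  const [rows] = await conn.query(
    "SELECT option_name, option_value FROM wp_options WHERE option_name IN ('siteurl', 'home')"
  );
  console.log(rows);

  // Switch both URLs from http:// to https://.
  await conn.execute(
    "UPDATE wp_options SET option_value = REPLACE(option_value, 'http://', 'https://') WHERE option_name IN ('siteurl', 'home')"
  );

  await conn.end();
}

fixSiteUrl().catch(console.error);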

SignalR server closes connection to Unity client right after SSE connection is established

I'm working on a prototype project connecting a self-hosted SignalR server running on Mono with C# clients (for testing) and Unity clients (representing the actual use-case scenario). The Unity client is using BestHTTP Pro as its SignalR library.
As the WebSocket transport method is not supported on Mono, I'm focusing on server-sent events, and observing very odd behavior there. Communication between server and C# clients is working just fine out of the box. With the Unity client though, the (supposedly) persistent connection is closed immediately after the initial response to the /signalr/connect request. No errors are reported anywhere; the response code is 200 in both cases.
Further investigation with Fiddler reveals that the Unity client is sending a Connection: Keep-Alive header that the C# client doesn't send, to which the server responds with a Connection: close header and, well, closing the connection (in other words, exactly the opposite of what the client asks it to do).
Manually removing the keep-alive request header actually makes everything work with the Unity client. Since this feels more like an odd workaround than a correct solution, my question is: Is this strange server-side behavior a bug in the SignalR libraries? Or could Mono be to blame here (I suspect this might be the case)? How can I dig deeper into this, and ideally make the SSE transport work without client-side hacks?
Library versions used:
Microsoft ASP.NET SignalR 2.2.1
BestHTTP Pro 1.9.17
For reference, here are the full request/response headers; Unity/BestHTTP client:
GET /signalr/connect?tid=1&_=XXX&transport=serverSentEvents&clientProtocol=1.5&connectionToken=XXX&connectionData=XXX HTTP/1.1
Accept: text/event-stream
Cache-Control: no-cache
Accept-Encoding: identity
Host: XXX
Connection: Keep-Alive
Connection: Keep-Alive, TE
TE: identity
User-Agent: BestHTTP
Response:
HTTP/1.1 200 OK
X-Content-Type-Options: nosniff
Content-Type: text/event-stream
Server: Mono-HTTPAPI/1.0
Date: Wed, 08 Mar 2017 10:34:05 GMT
Connection: close
Content-Length: 73
C# client:
GET /signalr/connect?clientProtocol=1.4&transport=serverSentEvents&connectionData=XXX&connectionToken=XXX HTTP/1.1
User-Agent: SignalR.Client.NET45/2.2.1.0 (Microsoft Windows NT 6.2.9200.0)
Accept: text/event-stream
Host: XXX
Response:
HTTP/1.1 200 OK
X-Content-Type-Options: nosniff
Content-Type: text/event-stream
Server: Mono-HTTPAPI/1.0
Date: Wed, 08 Mar 2017 13:11:16 GMT
Transfer-Encoding: chunked
Keep-Alive: timeout=15,max=99
BestHTTP developer here.
First of all, the plugin can use WebSocket as a SignalR transport on every supported platform: while the Mono framework that Unity uses has no WebSocket implementation, the plugin ships its own.
The Server-Sent Events protocol gives no direct indication of what should be done in a case like this, but I have modified the plugin to behave the same way as the other clients. You can wait for the next release on the Asset Store, or you can contact me for an updated package.
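For digging deeper independently of Unity, here is a minimal sketch (TypeScript, Node's built-in http module; host, port, and the token values are placeholders from the capture) that replays the connect request with and without the Keep-Alive header, so the two server reactions can be compared directly:

import * as http from 'node:http';

function probe(withKeepAlive: boolean): void {
  const req = http.request(
    {
      host: 'example.com', // assumption: your SignalR host
      port: 8080,          // assumption
      path:
        '/signalr/connect?transport=serverSentEvents&clientProtocol=1.5' +
        '&connectionToken=XXX&connectionData=XXX',
      headers: {
        Accept: 'text/event-stream',
        // Only one of the two probes sends the suspect header.
        ...(withKeepAlive ? { Connection: 'Keep-Alive' } : {}),
      },
    },
    (res) => {
      console.log(
        withKeepAlive ? 'with Keep-Alive:' : 'without Keep-Alive:',
        res.statusCode,
        'Connection:',
        res.headers.connection
      );
      res.on('data', (chunk) => process.stdout.write(chunk.toString()));
      res.on('end', () => console.log('\n(server closed the stream)'));
    }
  );
  req.end();
}

probe(true);
probe(false);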

Accessing localhost shows 404 on IIS 7.5 webserver. Where are the logs?

I wanted to access a local website via
http://127.0.0.1/
http://localhost/
http://192.x.x.x/
but the only reply from any browser was:
HTTP/1.1 404 Not Found
Content-Type: text/html; charset=us-ascii
Server: Microsoft-HTTPAPI/2.0
Date: Thu, 03 Oct 2013 09:38:48 GMT
Connection: close
Content-Length: 315
There is nothing in the logs about what was wrong with the request:
%SystemDrive%\inetpub\logs\LogFiles\W3SVC3
%SystemDrive%\inetpub\logs\FailedReqLogFiles
To solve the problem and access the website, I added 'localhost', 127.0.0.1, etc. to the IIS bindings for this website. Pages are now served using these local URLs.
Requests do arrive at IIS, since as soon as I defined the bindings the requests were served.
My question: is there a log or a way to trace what was happening with the request before I defined the localhost binding in IIS? What is returning the 404?
You need to enable failed request tracing on the server and on the site, and add a tracing rule to capture 404 requests. That will tell you exactly what the issue is. Note also that Server: Microsoft-HTTPAPI/2.0 in your response means it was generated by HTTP.SYS rather than by an IIS worker process; HTTP.SYS logs its errors separately under %SystemRoot%\System32\LogFiles\HTTPERR.

ParseError on Node.js http request

Hello StackOverflow community!
I started to learn Node.js recently and decided to implement a reverse HTTP proxy as an exercise. There were a couple of rough places which I managed to get through on my own, but now I'm a bit stuck and need your help. I managed to handle redirects and relative URLs, and while implementing relative URL support I ran into the problem I'm going to describe.
You can find my code at http://pastebin.com/vZfEfk8r. It's not very big, but it still doesn't fit nicely on this page.
So, to the problems (there are two of them). I'm using http.request to forward the client's request to the target server, then waiting for the response and sending it back to the client. It works fine for some of the requests, but not for others. This is the first problem: on the website I'm using to test the proxy (http://ixbt.com, a cool Russian tech site), I can always get the main page /index.html, but when the browser starts to fetch other files referenced from that page (CSS, images, etc.), most of the requests end with ParseError ({"bytesParsed":0}).
While debugging (using Wireshark) I noticed that some of the requests (if not all) fail with this error when the following HTTP exchange between the proxy and the target server occurs:
Request:
GET articles/pics2/201206/coolermaster-computex2012_70x70.jpg HTTP/1.1
Host: www.ixbt.com
Connection: keep-alive
Response:
<html>
<head><title>400 Bad Request</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx</center>
</body>
</html>
It looks like the server sends no status line and no headers at all. So the question is: can this be the reason for the failure (ParseError)?
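A quick way to test that in isolation is the sketch below (TypeScript; the raw TCP server just mimics the headerless reply from the capture, and the error code in the comment is what Node typically reports):

import * as net from 'node:net';
import * as http from 'node:http';
import type { AddressInfo } from 'node:net';

// A raw TCP "server" that answers with HTML only - no status line, no headers.
const server = net.createServer((socket) => {
  socket.end('<html><body><h1>400 Bad Request</h1></body></html>');
});

server.listen(0, () => {
  const { port } = server.address() as AddressInfo;
  http.get({ port }, () => { /* never reached */ })
    .on('error', (err) => {
      console.error(err); // parse error, e.g. HPE_INVALID_CONSTANT with bytesParsed: 0
      server.close();
    });
});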
My other concern is that when I request the same file as a standalone request, I have no problems. Just look:
Request:
GET /articles/pics2/201206/coolermaster-computex2012_70x70.jpg HTTP/1.1
Host: www.ixbt.com
Connection: keep-alive
Response:
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 25 Jun 2012 17:09:51 GMT
Content-Type: image/jpeg
Content-Length: 3046
Last-Modified: Fri, 22 Jun 2012 00:06:27 GMT
Connection: keep-alive
Expires: Wed, 25 Jul 2012 17:09:51 GMT
Cache-Control: max-age=2592000
Accept-Ranges: bytes
... and here goes the body ...
So at the end of the day there may be some mistake in how I make the proxy requests. Maybe it's because I actually make lots of them when the main page is loaded - it has many images, etc.?
I hope I was clear enough, but please ask about details if I missed something. The full source code is available (again, at http://pastebin.com/vZfEfk8r), so if somebody would try it, that would be great. :)
Much thanks in advance!
P.S. As I said, I'm just learning, so if you see some bad practices in my code (even unrelated to the question), it would be nice to know about them.
UPDATE: As was mentioned in a comment, I didn't proxy the original request's headers, which in theory could lead to problems with subsequent requests. I changed that, but unfortunately the behavior remained the same. Here's an example of a new request and response:
Request:
GET css/main_fixed.css HTTP/1.1
Host: www.ixbt.com
connection: keep-alive
cache-control: no-cache
pragma: no-cache
user-agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.56 Safari/536.5
accept: text/css,*/*;q=0.1
accept-encoding: gzip,deflate,sdch
accept-language: ru-RU,ru;q=0.8,en-US;q=0.6,en;q=0.4
accept-charset: windows-1251,utf-8;q=0.7,*;q=0.3
referer: http://www.ixbt.com/
Response:
<html>
<head><title>400 Bad Request</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx</center>
</body>
</html>
I had to craft the 'referer' header by hand, since the browser sends it with the reverse proxy URL. Still, the behavior is the same, as you can see. Any other ideas?
Thanks to the valuable comments, I was able to find the answer to this problem. It was nothing related to Node or the target web servers, just a coding error.
The answer is that the path component of the URL was wrong for relative URLs. It is already visible in the logs in the question body. I'll repeat them here to reiterate:
Wrong request:
GET articles/pics2/201206/coolermaster-computex2012_70x70.jpg HTTP/1.1
Right request:
GET /articles/pics2/201206/coolermaster-computex2012_70x70.jpg HTTP/1.1
See the difference? The leading slash. It turns out I dropped it on requests for relative URLs, due to my own awkward handling of the client's URL. With a quick-and-dirty fix it's working now, well enough until I implement proper URL handling.
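The fix boils down to resolving every client path against the target origin, so that the path handed to http.request is always absolute. A sketch of the idea (TypeScript; the base URL is just the site from my logs):

import * as http from 'node:http';

function forward(clientPath: string): void {
  // new URL() resolves 'articles/x.jpg' and '/articles/x.jpg' alike,
  // so url.pathname always starts with '/'.
  const url = new URL(clientPath, 'http://www.ixbt.com/');

  const req = http.request(
    {
      host: url.hostname,
      path: url.pathname + url.search, // guaranteed leading slash
    },
    (res) => {
      console.log(res.statusCode, res.headers['content-type']);
    }
  );
  req.end();
}

forward('articles/pics2/201206/coolermaster-computex2012_70x70.jpg');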
Much thanks for comments, they were insightful!
If the above solutions do not work, try removing the Content-Length header: a Content-Length mismatch can cause body parsers to throw this error.
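For instance, in a Node proxy that rewrites the response body (a hypothetical helper; drop the stale length and let Node fall back to chunked transfer encoding, or recompute it from the new body):

import type { ServerResponse } from 'node:http';

function sendModified(
  res: ServerResponse,
  status: number,
  upstreamHeaders: Record<string, string | string[] | undefined>,
  modifiedBody: string
): void {
  const headers = { ...upstreamHeaders };
  // The upstream Content-Length no longer matches the rewritten body.
  delete headers['content-length'];
  res.writeHead(status, headers);
  res.end(modifiedBody);
}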

Getting error #2032 when calling WCF Service even though call succeeds

We are developing an application using Adobe AIR for the client GUI with a mix of WCF and REST on the backend. One of the requirements of this application is that it must work offline. So, when the user clicks save, the application stores the data in a local SQLite database. Every 15 seconds, the application checks whether it is online and, if so, sends any pending requests out. If the call succeeds, it updates the local database so it won't try to send that case out again.
For this particular operation, OpenMedicalCase, the app sends out the request and can't decode the response. I have verified that the WCF side of things is working correctly and that the response message is well formed. The network monitor in Flash Builder says I am receiving 100 Continue:
POST /services/medicalcase.svc HTTP/1.1
Referer: app:/AWC_MRDS.swf
Accept: text/xml, application/xml, application/xhtml+xml, text/html;q=0.9, text/plain;q=0.8, text/css, image/png, image/jpeg, image/gif;q=0.8, application/x-shockwave-flash, video/mp4;q=0.9, flv-application/octet-stream;q=0.8, video/x-flv;q=0.7, audio/mp4, application/futuresplash, /;q=0.5
x-flash-version: 10,1,53,64
Content-Type: text/xml; charset=utf-8
SOAPAction: "http://tempuri.org/IMedicalCaseService/OpenMedicalCase"
Content-Length: 8534
User-Agent: Mozilla/5.0 (Windows; U; en-US) AppleWebKit/531.9 (KHTML, like Gecko) AdobeAIR/2.0.2
Host: localhost:11934
Cookie: RememberMe=1147670691^1#3435272784175716681
HTTP/1.1 100 Continue
Server: ASP.NET Development Server/10.0.0.0
Date: Thu, 23 Dec 2010 21:23:49 GMT
Content-Length: 0
Calls to other operations at the same endpoint return 200 OK as expected. So what ends up happening is that Flex thinks the call did not succeed and sends it over and over again. AFAIK Flex is not sending Expect: 100-Continue in the request headers either.
Update: I attached debuggers to the WCF service AND the GUI, setting a breakpoint right before the server sends a response. Flex receives 100 Continue before the service code returns anything. Please note that I am only testing this using the ASP.NET development server. Is there some property or configuration option I need to change on the Flex side? In WCF?
