I'm trying to make a very simple CFHTTP GET call to a local website running on IIS7, but it throws a 408 Connection Failure.
I've done all the obvious things:
The site is listed in the hosts file locally
I've added the CFHTTPPARAM tags for IIS compression issues (deflate;q=0) - see the sketch after this list
Surfing to the URL in the browser works fine
Doing a CFHTTP call to google.com works fine, but no local sites work at all.
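For reference, the compression workaround I mentioned above looks roughly like this (the URL and result variable are placeholders for my actual call):

<!--- Minimal sketch of the GET call; URL and result name are placeholders --->
<cfhttp url="http://mylocalsite.example.com/index.cfm" method="get" result="httpResult">
    <!--- Decline IIS compression, which is known to trip up CFHTTP --->
    <cfhttpparam type="header" name="Accept-Encoding" value="deflate;q=0">
    <cfhttpparam type="header" name="TE" value="deflate;q=0">
</cfhttp>
<cfdump var="#httpResult.statusCode#">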
Searching on Google turns up others who have had this problem, but no solutions.
Anyone successfully got through this issue?
If you are using a private or not-well-known certificate provider, you may need to add the certificate provider's public key to the JRun keystore.
Here's more info on how to do that:
http://cfmasterblog.com/2008/11/09/adding-a-certificate-to-the-coldfusion-keystore/
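On a JRun-based ColdFusion install this comes down to the JRE's keytool utility, roughly like the following (the alias, certificate file, and keystore path are placeholders and vary by ColdFusion version; changeit is the default keystore password):

keytool -import -v -trustcacerts -alias myCAProvider -file myCAProvider.cer -keystore C:\ColdFusion8\runtime\jre\lib\security\cacerts -storepass changeit

Restart ColdFusion afterwards so the JVM picks up the new keystore entry.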
You may just need to restart CF if you changed your HOSTS file after CF was started. It caches DNS entries pretty greedily.
It's a bad implementation. Use cfx_http.
I have recently replaced the SSL certificates issued by RapidSSL because Symantec is no longer trusted by Google. But after replacing the certificates I'm seeing a privacy error in web browsers. The browser shows the following error when I try to access a domain name that was affected by the new SSL certificates.
NET::ERR_SSL_PINNED_KEY_NOT_IN_CERT_CHAIN
So I researched this error and found that it occurs because of the public key pins cache. Researching further, I found that the public key pin has not actually changed with the newly issued certificates: comparing the public key pins of the certificates before and after the replacement, both have the same Base64-encoded hash value. I also saw that the issue is resolved once the browser cache is cleared, but our clients do not like clearing their browser cache, so we cannot rely on that. Is there any other way we can overcome this issue?
Thanks for the help. I also found a solution.
For Google Chrome:
Follow instructions from this image:
For Firefox:
Close the browser.
Open the file "SiteSecurityServiceState.txt" in the profile folder and remove the lines containing the site's domain.
Open the browser.
I hope this will be useful if someone is confronted with the same situation.
I have a Windows 2008 Server running the default IIS 7.0. I have an http handler where GET, POST, and PUT are working fine, but whenever I issue a DELETE request, there is nothing coming back from the server at all.
I'm using Fiddler to issue the request (which works great on my development Windows 7 machine). I have disabled WebDAV, etc., which, by the way, should have given me some kind of error response anyway; in this case there's nothing. I've tried enabling tracing on the server and don't see anything there either.
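For what it's worth, "disabled WebDAV, etc." amounts to roughly this in web.config (the handler name, path, and type are placeholders for my actual handler):

<system.webServer>
  <modules>
    <remove name="WebDAVModule" />
  </modules>
  <handlers>
    <remove name="WebDAV" />
    <!-- placeholder handler mapping; DELETE is explicitly listed -->
    <add name="MyHttpHandler" path="api.ashx" verb="GET,POST,PUT,DELETE" type="MyApp.MyHttpHandler" resourceType="Unspecified" />
  </handlers>
  <security>
    <requestFiltering>
      <verbs>
        <add verb="DELETE" allowed="true" />
      </verbs>
    </requestFiltering>
  </security>
</system.webServer>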
I have tried issuing DELETE requests against other pages on the server as well, including non-existent ones, but there's never any response. Maybe something on the server is "eating" the request before it gets to IIS?
To test this last question, I installed Fiddler on the server itself and posted the DELETE request from within the server. This actually worked!!! So, what's stopping the external request then?
I know that there are many questions like this, but most seem to be older.
I know all about setting up my web app on the above services, and have seen that most require a valid domain for a callback [www.abc.com/oauthReturn.aspx].
What do I need to do in order to test locally, where the return URL would be: http://localhost:0000/oauthReturn.aspx ?
You can't use http://127.0.0.1 with Yahoo - it isn't accepted as a valid URL.
I'm running asp.net/VB.net, IIS7. Localhost runs Windows 7 & Prod server runs Windows 2008 server.
When I try accessing the Microsoft site, I get an error message after I sign in about localhost not being allowed.
Any help is appreciated.
Thanx
Jerry
Try following the instructions here. Make sure to read all the way to the bottom where they suggest registering the localhost call back URL in bit.ly (or some other URL shortener) and using the bit.ly link as the call back URL.
All of a sudden, all of the websites on my server return a 400 Bad Request error. I don't have a clue what happened. The app pools are running in Classic pipeline mode (4.0, 2.0); it doesn't matter which.
Every URL that I type comes back as 400 Bad Request. Real URLs, and even fake URLs that don't exist (which should come back as 404), all return 400.
http://mywebsite.com/AFile.aspx
http://mywebsite.com/AFolder/AnotherFile.aspx
http://mywebsite.com/Bfolder/YetAnotherSillyPage.aspx
http://mywebsite.com/A_stupid_URL_that_does_not_even_exist_fjfjffjfj.aspx
Everything returns 400 Bad Request. Something has totally broken my ASP.NET. Where should I begin to look? Machine.config? Web.config?
UPDATE:
After trying a million different settings, I finally set the app pool to Integrated pipeline mode and its identity to LocalSystem, and all of a sudden it works.
A 400 Bad Request usually means HTTP.sys is stopping the request before it reaches IIS because of something clearly invalid (such as a malformed URL).
You probably should look at HTTP.sys logs (Not IIS) at:
C:\Windows\System32\LogFiles\HTTPERR
Also, maybe something got broken in the http.sys configuration so try running:
netsh http show servicestate
Check whether your web site has the correct bindings. For example, the bindings might be listening only on specific IP addresses while the request is arriving on a different one, or there might be a similar mismatch with the host name.
Finally you might want to run:
C:\Windows\System32\inetsrv\appcmd list sites
And see if the bindings and status make sense.
Have you tried re-installing (or uninstalling and re-installing) ASP.NET using the aspnet_regiis.exe utility? That has fixed strange IIS/ASP.NET server issues for me in the past.
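For example, for 64-bit .NET 4.0 that would be something along these lines, run from an elevated command prompt (the framework folder depends on your version and bitness):

%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -i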
Have you looked in the event log for any error messages or further clues?
I am working on an update to one of our sites. This version will have unique behaviors based on the host name in the request. In order to test this behavior, I modified my computer's hosts file by adding entries that point back to my machine:
127.0.0.1 newhostname.sample.com
127.0.0.1 oldhostname.sample.com
Everything seemed to be working fine, until I started working with the Session object. I discovered that after each request all my session variables were lost. Further investigation revealed that each response from the server contained a new SessionID.
Why is that?
I was able to hard code some flags to complete my testing using 'localhost' for requests without any problems.
I think this has to do with the domain of the site and the session cookie passed - the browser won't pass a cookie sent to it from oldhostname.sample.com to newhostname.sample.com.
To fix this, you'll need to set the domain of the session cookie that is sent. This question should show how to do this - ASP.NET Session Cookies - specifying the base domain.
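In web.config that boils down to something like the following; the domain value here is just an example derived from the host names above:

<system.web>
  <!-- assumed base domain: lets both host names see the same session cookie -->
  <httpCookies domain=".sample.com" />
</system.web>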
Alternatively, you could look into using cookie-less sessions. http://msdn.microsoft.com/en-us/library/aa479314.aspx
I can't explain it, but I have an acceptable work around to my own problem.
Rather than using 127.0.0.1 in the hosts file, I am using my machine's local IP. Requests to the names in my hosts file are still handled locally, and I keep the same SessionID throughout the site.
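So the hosts file now looks something like this (192.168.1.50 standing in for my machine's actual LAN IP):

192.168.1.50 newhostname.sample.com
192.168.1.50 oldhostname.sample.com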
If anyone can explain it, I'd be happy to know what IIS (or ASP.NET) does differently when 127.0.0.1 is used.