There's another question that answers this, though it doesn't specify anything regarding proxy authentication.
Its solution is:
(setq url-proxy-services '(("no_proxy" . "work\\.com")
                           ("http" . "proxy.work.com:911")))
Nowadays, my approach to the "authenticated proxy problem" is to use CNTLM. It is portable, quite easy to configure, and can be run as a daemon.
I get authorization working without user interaction by:
(setq url-proxy-services
      '(("no_proxy" . "^\\(localhost\\|10.*\\)")
        ("http" . "proxy.com:8080")
        ("https" . "proxy.com:8080")))

(setq url-http-proxy-basic-auth-storage
      (list (list "proxy.com:8080"
                  (cons "Input your LDAP UID !"
                        (base64-encode-string "LOGIN:PASSWORD")))))
This works for Emacs 24.3. It is based on non-public API tricks, so it might not work in other Emacs versions...
Replace LOGIN and PASSWORD with your auth info...
Well, if you really want to do this and do not mind using another program, then... socat is the answer. Use socat to forward a local port through a connection that passes through the HTTP proxy. You are not bypassing it, just "bolting on" the functionality to an application that does not have it (in case anyone asks). This might be difficult.
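For example (an untested sketch; the host names, ports and credentials are made up), something along these lines listens locally on port 8080 and tunnels each connection to target.example.com:80 via the authenticating proxy:

socat TCP4-LISTEN:8080,fork,reuseaddr PROXY:proxy.example.com:target.example.com:80,proxyport=3128,proxyauth=user:password

Emacs (or any other client) can then talk to localhost:8080 as if it were the target, with no proxy awareness needed.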
Another solution that would work great if you are on a unixy OS is to install your own non-authenticating HTTP proxy (like Squid) that chains to the authenticating proxy. This might look like circumvention to some people. Be careful.
For example, take a look at Proxytunnel.
UPDATE: Mike Hoss seems to be correct in the comment he added to the question linked above. The url package will ask for an id and password. At least that is what I see in the defun for url-http-create-request in url-http.el.
In case anyone else hits what I've just struggled with:
If you use cntlm or some other local authenticating proxy, you may need to specify a loopback IP address rather than "localhost". I found "localhost" silently failed, but "127.0.0.1" worked a treat.
ELPA uses the "url" package. As far as I know, there is no way to do proxy authentication with it.
Can you set up your proxy auth outside of Emacs?
I saw this question in another post but the solution did not work correctly.
I use:
System.Net.Dns.GetHostEntry(HttpContext.Current.Request.ServerVariables.Item("REMOTE_HOST")).HostName
This worked correctly on localhost, but on the server it has problems and returns an empty string.
Any idea?
Two things to consider:
Take into account that HttpContext.Current.Request.ServerVariables.Item("REMOTE_HOST") will return the name of the host making the request, not the address.
Try doing System.Net.Dns.GetHostByAddress(Request.ServerVariables.Item("REMOTE_HOST")).HostName instead.
You can find a list of the Server Variables at Microsoft's MSDN website.
Hope that helps,
To get the client IP address, use
HttpContext.Current.Request.UserHostAddress.ToString
or
HttpContext.Current.Request.UserHostName
To get the client browser, use
HttpContext.Current.Request.Browser.Browser
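Put together, a minimal (hypothetical) code-behind sketch using those three properties might look like this:

using System;
using System.Web.UI;

// Hypothetical page sketch: echo the client address, host name and browser.
public partial class WhoAreYou : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        string clientIp = Request.UserHostAddress;   // client IP as a string
        string clientHost = Request.UserHostName;    // DNS name if IIS resolves it, otherwise the IP again
        string browser = Request.Browser.Browser;    // e.g. "Chrome" or "IE"

        Response.Write(clientIp + " / " + clientHost + " / " + browser);
    }
}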
I found out that "GetHostByAddress" is already deprecated in VS2010.
Instead, use: System.Net.Dns.GetHostEntry(HttpContext.Current.Request.ServerVariables["REMOTE_HOST"])
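Since a reverse DNS lookup can simply fail on the server (which would explain the empty result), it may be worth wrapping the call and falling back to the raw address. A rough sketch, with the helper name made up; it looks up REMOTE_ADDR (the client's IP) rather than REMOTE_HOST, since REMOTE_HOST is usually just that IP anyway:

using System;
using System.Net;
using System.Net.Sockets;
using System.Web;

public static class CallerInfo
{
    // Try a reverse DNS lookup of the caller; fall back to the raw address
    // when no PTR record exists or the lookup fails.
    public static string GetCallerHostName()
    {
        string remoteAddr = HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"];
        try
        {
            IPHostEntry entry = Dns.GetHostEntry(remoteAddr);
            return string.IsNullOrEmpty(entry.HostName) ? remoteAddr : entry.HostName;
        }
        catch (SocketException)
        {
            return remoteAddr;
        }
    }
}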
My web app makes requests to third-party servers, and we sometimes route them through proxies. I'd like to be able to "see what they see" -- see what the request looks like once it's been routed through the proxy.
Specifically, I'm interested in how much identifying information about the source (my web app) is left in the request once it reaches the destination, having been routed through the proxy.
Does anyone know an easy way to do this? Maybe a web service that will just echo back all the information about the incoming request in the outgoing response?
Not a full answer, but maybe you can try:
http://www.cantoni.org/2012/01/08/simple-webservice-echo-test
And the other two sites mentioned there:
http://respondto.it/
http://requestb.in/
To set up a URL to send your requests to and see if the info provided helps you.
I'm just stating this as an idea that came to me. You could try sending requests to your own URL, which you control (i.e. a resource in your own web application). That way, you can use your debugging infrastructure or other facilities (basically anything you want) to inspect the request that's coming into your application. It seems to me this might be the most powerful / easiest way to do this. It won't let you test the URL you were trying to test, but in terms of proxy visibility, it might be what you need.
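If it helps, here is a rough sketch of what such an endpoint could look like as an ASP.NET generic handler (the handler name is made up); it simply writes back the request line and headers it actually received, which is exactly the "what they see" part:

using System.Web;

// Hypothetical EchoRequest.ashx handler: echo the incoming request method,
// URL and headers back as plain text.
public class EchoRequest : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        HttpRequest request = context.Request;
        HttpResponse response = context.Response;
        response.ContentType = "text/plain";

        response.Write(request.HttpMethod + " " + request.RawUrl + "\r\n");
        foreach (string name in request.Headers.AllKeys)
        {
            response.Write(name + ": " + request.Headers[name] + "\r\n");
        }

        // The remote address as this server sees it -- i.e. the proxy, not your app.
        response.Write("\r\nREMOTE_ADDR: " + request.ServerVariables["REMOTE_ADDR"] + "\r\n");
    }

    public bool IsReusable { get { return true; } }
}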
Good luck!
If the proxy supports the TRACE method and the Max-Forwards header, you can use that. Not all do, however.
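As a rough, untested sketch (the proxy address and target URL are made up): TRACE asks the receiving server to echo the request back in the response body, and Max-Forwards controls how many hops may forward it, so you can aim it either at the proxy itself or at the origin.

using System;
using System.Net;
using System.Net.Http;

class TraceProbe
{
    static void Main()
    {
        var handler = new HttpClientHandler
        {
            Proxy = new WebProxy("http://proxy.example.com:8080"),  // assumed proxy address
            UseProxy = true
        };

        using (var client = new HttpClient(handler))
        {
            var request = new HttpRequestMessage(new HttpMethod("TRACE"), "http://example.org/");
            // 0 makes the first hop (the proxy) answer; raise it to let the request
            // travel on to the origin, which then echoes what it received.
            request.Headers.MaxForwards = 0;

            var response = client.SendAsync(request).Result;
            Console.WriteLine(response.StatusCode);
            Console.WriteLine(response.Content.ReadAsStringAsync().Result);
        }
    }
}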
I have my production site's app pool set to recycle every 2 hours or so. I noticed that when the first call to the site is made, the app pool caches the base URL (e.g. www.mysite.com). This makes sense, as this is used to resolve relative paths in ASP.NET, e.g. ~/MyFolder/MyPage.aspx, which is resolved to:
http://www.mysite.com/MyFolder/MyPage.aspx
However, since the site can also be reached via our host name, e.g.
http://masdfg.my.provider.net
IIS thinks the URL is
http://masdfg.my.provider.net/MyFolder/MyPage.aspx
As you can imagine, this is causing an issue with SSL, as well as others. How can I prevent this from happening?
UPDATE: The workaround was to create a URL redirect. If anyone knows how to prevent this, let me know.
I hope I've understood your question correctly, but please do let me know if I haven't.
It sounds like the sole issue you have is that when you write the links to the response they sometimes reference the wrong root URL.
I notice that you use ~/. This would resolve and write the entire URL to the response, I think. It is better to use only / when writing links to the response.
So in your example you would write /myfolder/mypage.aspx. The browser would then resolve the / to mean that it's from the root address of the site, whichever that may be.
Like I said, I hope I've understood your question correctly and apologies if I haven't.
I know it's a long shot, but I've had a similar problem with my IIS setup. I solved it by going to the already mentioned "bindings" window through "Edit Bindings".
Then I removed all the unwanted bindings and added the hostname www.mydomain.com that the server should answer to.
Finally I edited the windows hosts file at
%windir%\System32\drivers\etc\hosts
Adding the line
127.0.0.1 www.mydomain.com
This ensures that www.mydomain.com always resolves to the local computer.
After executing iisreset.exe as administrator my problem was over.
HttpContext.Current.Request.Url is not a cacheable item. That value comes from the Host value of the HTTP headers, which means it is passed in to the application with each request.
The only time it should take that second URL is if the request's Host value was masdfg.my.provider.net.
There are three possible fixes here. The first is to set your bindings and have any requests to masdfg.my.provider.net forwarded over to www.mysite.com (a rough code-level sketch of this follows below).
The second, because your primary issue appears to be about SSL, is to get a unified communications (UC) SSL certificate and install that on your server. This would cover both the mysite.com and masdfg.my.provider.net domain names.
The third is to simply create a separate IIS site which points to the exact same production directory as the first one. Each site would have only 1 domain name it's responsible for.
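If you would rather do the forwarding from the first option in code than in IIS, a hedged Global.asax sketch could look like this (the host names are the ones from the question; everything else is assumed):

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        HttpRequest request = HttpContext.Current.Request;

        // Anything arriving under the provider host name gets permanently
        // redirected to the canonical host, keeping the original path and query.
        if (request.Url.Host.Equals("masdfg.my.provider.net", StringComparison.OrdinalIgnoreCase))
        {
            HttpContext.Current.Response.RedirectPermanent(
                "http://www.mysite.com" + request.RawUrl, true);
        }
    }
}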
Hi,
I have an issue using the Windows Live API.
I am using ASP.NET and am not able to use the callback URL locally.
The sign-in link works only if I provide a live URL; I can't use localhost.
Please help
It may help someone else also -
Please add the following entry to the hosts file (located at %SystemDrive%\Windows\System32\drivers\etc):
127.0.0.1 www.example.com
#[Please replace the example domain with your actual one]
The Windows Live server expects your return URL to have http:// in it; Chrome does not add it, but IE does. I realized this after wasting some time.
This should get you through testing the API on your local machine.
Go and set up dynamic DNS and a name for your computer, and make your tests this way.
For example, you can set up a name for your dynamic IP on DynDns.com, then configure your router with that name so it is assigned automatically (or do it manually from the router's pages), and then you can use this name instead of localhost. Do not forget to open the port on your router so the other side can make requests.
You can also map the same name in \Windows\System32\drivers\etc\hosts so it points to your local host, and make your tests and callbacks that way.
Your problem is that the callback address needs to be the same as the address you used to sign up with.
In relation to your callback, from the documentation:
The domain name portion of the URL (for example, www.contoso.com) must
be the same as the one that you specified when you created your
application with Live Connect. The URL must use URL escape codes, such
as %20 for spaces, %3A for colons, and %2F for forward slashes.
So, based on what you have said, you are using localhost (which you can't). As @Aristos suggested, add an entry to \Windows\System32\drivers\etc\hosts for the domain you have registered (e.g. www.contoso.com).
Use www.contoso.com instead of localhost to test.
I have some software which makes a request to a specific URL on the internet, and I want it to receive my custom response. Is there any software tool for that on Windows? It would also be nice if I could map a regexp instead of a specific URL.
Found the solution myself:
Set the domain of the URL to point to 127.0.0.1 in the Windows hosts file
Install nginx and set it up to serve your file for the request whose response you want to modify, and to proxy all other requests to the original server
You could consider writing a test and mocking out the http response with your custom response.
I could give an example using C# and rhino mocks but it's not clear which platform you are working with.
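For what it's worth, here is a hedged C# / Rhino Mocks sketch of the idea: hide the outbound call behind an interface, then stub it in a test so "the URL" returns whatever canned response you want. IHttpFetcher, Consumer and the URL are all made up for illustration.

using NUnit.Framework;
using Rhino.Mocks;

public interface IHttpFetcher
{
    string Get(string url);
}

public class Consumer
{
    private readonly IHttpFetcher _fetcher;
    public Consumer(IHttpFetcher fetcher) { _fetcher = fetcher; }

    public string Describe()
    {
        // In production this would hit the real third-party URL.
        return _fetcher.Get("http://thirdparty.example.com/status");
    }
}

[TestFixture]
public class ConsumerTests
{
    [Test]
    public void Returns_the_canned_response()
    {
        var fetcher = MockRepository.GenerateStub<IHttpFetcher>();
        fetcher.Stub(f => f.Get("http://thirdparty.example.com/status"))
               .Return("my custom response");

        var consumer = new Consumer(fetcher);

        Assert.AreEqual("my custom response", consumer.Describe());
    }
}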
You can:
Try to inject your DLL into the process and replace functions like HttpSendRequest, HttpQueryInfo, ... with your own versions.
Try to use something like WinPcap (http://www.winpcap.org/).
Fiddler (www.fiddler2.com) has an AutoResponder feature which does exactly that, and its rules can match regular expressions as well as literal URLs.