I have an LCDS server sitting behind a corporate proxy/firewall.
I need to use a configured HTTPProxyService on the LCDS server to make requests out beyond the firewall (I can't go directly from the client because of crossdomain.xml issues).
How do I configure LCDS to use the corporate proxy for its outbound requests?
Docs ftw:
http://livedocs.adobe.com/livecycle/es/sdkHelp/programmer/lcds/wwhelp/wwhimpl/common/html/wwhelp.htm?context=LiveDocs_Parts&file=rpc_config_4.html
Use the <external-proxy /> tag.
LCDS uses Apache HttpClient to establish the connection through an external proxy. The parameters from that tag are used to initialize an org.apache.commons.httpclient.UsernamePasswordCredentials instance (or NTCredentials for an NTLM proxy).
I think it's easier to first build a standalone Java application that uses HttpClient to go through your corporate proxy (easier from a testing point of view); once you've found the settings that work, you can add them to proxy-service.xml (I can help you with that).
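If it helps, here's roughly what that standalone test could look like with Commons HttpClient 3.x (the same org.apache.commons.httpclient classes mentioned above). The proxy host, port, credentials, and target URL are placeholders for your own values; swap UsernamePasswordCredentials for NTCredentials if your proxy wants NTLM:

    import org.apache.commons.httpclient.HttpClient;
    import org.apache.commons.httpclient.UsernamePasswordCredentials;
    import org.apache.commons.httpclient.auth.AuthScope;
    import org.apache.commons.httpclient.methods.GetMethod;

    public class CorporateProxyTest {
        public static void main(String[] args) throws Exception {
            // Placeholder proxy settings -- use your corporate proxy's host/port.
            String proxyHost = "proxy.mycorp.example";
            int proxyPort = 8080;

            HttpClient client = new HttpClient();
            // Route every request through the corporate proxy.
            client.getHostConfiguration().setProxy(proxyHost, proxyPort);
            // Proxy credentials; use NTCredentials("user", "pass", hostname, "DOMAIN") for NTLM.
            client.getState().setProxyCredentials(
                    new AuthScope(proxyHost, proxyPort),
                    new UsernamePasswordCredentials("proxyUser", "proxyPassword"));

            GetMethod get = new GetMethod("http://www.example.com/");
            try {
                int status = client.executeMethod(get);
                System.out.println("HTTP status: " + status);
            } finally {
                get.releaseConnection();
            }
        }
    }

Once this returns a 200 through the proxy, those same host/port/credential values are what you carry over into the <external-proxy /> configuration.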
I'm looking to build an application similar to https://www.proxysite.com/ but am not sure about the best architecture.
I'm looking to have a data flow like this:
User Web Browser -> myproxysite.com -> Nginx Proxy Server (somehow rotating the IP for each client session) -> Targetsite.com
Then the user would need to maintain a full session on Targetsite.com as a logged in user.
In this example, targetsite.com is always the same site and is pre-determined. The challenge we are facing is that targetsite.com is blocking our users based on IP, many of whom are accessing it from the same office network.
So my questions are:
Does this seem correct?
Is there any way for me to configure nginx with a rotating proxy service like Luminati? Or do I need to add an API/software layer to handle the actual IP changes?
Any guidance on this one would be greatly appreciated!
While I can't help you with your application itself, I do want to suggest an alternative. You mentioned an office, so it sounds like the users who will be using the proxy are employees.
Luminati (now BrightData) has a Proxy Manager which you can host on any server. The Proxy Manager lets you create ports (e.g. port 24000) and configure each one with whatever proxy you want (it doesn't have to be BrightData's proxy). There are a ton of different parameters you can set for each proxy (including IP rotation), and each port can be given its own unique setup.
Then you simply go to the user's PC, open the browser proxy settings, enter the IP address of the server the Proxy Manager is running on and the specific port you configured, and voila: you have central control of the proxies, and your users' browsers are proxied.
A big benefit of this is that the logs in the Proxy Manager show all activity on each port you set up, so you can monitor traffic and success rates right there.
Proxy manager: https://prnt.sc/13uyjgj
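As a sanity check before touching any browser settings, you can also hit one of those Proxy Manager ports from a few lines of code. A rough sketch (the Proxy Manager host name and target URL are placeholders; point it at an IP-echo service if you want to watch the exit IP rotate):

    import java.net.HttpURLConnection;
    import java.net.InetSocketAddress;
    import java.net.Proxy;
    import java.net.URL;

    public class ProxyPortCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder: the machine running the Proxy Manager and one of the ports you created (e.g. 24000).
            Proxy viaManager = new Proxy(Proxy.Type.HTTP,
                    new InetSocketAddress("proxy-manager.internal.example", 24000));

            // The request goes out under whatever rotation rules that port is configured with.
            HttpURLConnection conn = (HttpURLConnection) new URL("http://example.com/")
                    .openConnection(viaManager);
            System.out.println("HTTP " + conn.getResponseCode());
        }
    }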
Recently I came across a 0-day in the most popular piece of software in, let's just say, the "Entertainment" industry, where remote code execution can be achieved via MITM.
Usually, I use Burp to accomplish the MITM. But this one is a client-side program that spawns random local ports to send HTTP requests to its server. Since the ports are randomized, Burp couldn't channel the traffic to its listener, because Burp expects a predefined proxy port to be configured in Firefox/Chrome.
(The software I mentioned above is not a browser, though it exhibits some browser-like behavior, so configuring it to use a proxy is basically out of the question.)
So, is there any alternative program that could serve as a proxy while providing real-time capabilities similar to Burp's?
Firstly, you could still use Burp. You have three options; one of them might work:
Look for a proxy setting in the client. Lots of clients allow you to use proxies; check for a config parameter, a command-line switch, etc.
Set the system proxy to point at Burp. That way all HTTP traffic will be sent to Burp. On Linux you can use the http_proxy / https_proxy environment variables; on Windows, the Internet Settings.
If the client connects to a hostname rather than an IP, you can point that hostname at 127.0.0.1 in the OS's hosts file and configure Burp to listen on the port the client tries to connect to. Of course this won't work if the server port is also randomized, but that would be really weird. In Burp you also have to configure the listener to forward all traffic to the target server and to operate as a transparent proxy.
If none of these work, you can try bettercap, which is a MITM tool.
I'm creating a web server using Jetty (v9) and I need any traffic between browsers and the server to be encrypted. I'll be uploading files to the server, plus the client/server will maintain a session carrying sensitive access tokens.
I don't have much experience with web servers, but it seems like the solution is to have the web server serve on port 443 so that communication will use the HTTPS protocol.
I was going to start running through this tutorial for configuring Jetty with SSL, but before I start messing around with certificates and signing etc. I just wanted to ask if this is the right approach or if there is something else more suitable that I don't know about.
In answer to your question, using https is indeed the right approach.
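To give you an idea of where the tutorial is heading: if you're running Jetty 9 embedded, HTTPS comes down to adding an SSL connector backed by a keystore. A minimal sketch, where the keystore path, password, and port are placeholders you'd replace with your own values:

    import org.eclipse.jetty.server.HttpConfiguration;
    import org.eclipse.jetty.server.HttpConnectionFactory;
    import org.eclipse.jetty.server.SecureRequestCustomizer;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.ServerConnector;
    import org.eclipse.jetty.server.SslConnectionFactory;
    import org.eclipse.jetty.util.ssl.SslContextFactory;

    public class SecureServer {
        public static void main(String[] args) throws Exception {
            Server server = new Server();

            // Placeholder keystore path/password -- replace with your own keystore.
            SslContextFactory ssl = new SslContextFactory();
            ssl.setKeyStorePath("/path/to/keystore.jks");
            ssl.setKeyStorePassword("changeit");

            HttpConfiguration httpsConfig = new HttpConfiguration();
            httpsConfig.setSecureScheme("https");
            httpsConfig.setSecurePort(8443);
            httpsConfig.addCustomizer(new SecureRequestCustomizer());

            // TLS is handled first, then plain HTTP/1.1 flows over the decrypted connection.
            ServerConnector httpsConnector = new ServerConnector(server,
                    new SslConnectionFactory(ssl, "http/1.1"),
                    new HttpConnectionFactory(httpsConfig));
            httpsConnector.setPort(8443); // use 443 in production (ports below 1024 need elevated privileges)
            server.addConnector(httpsConnector);

            // server.setHandler(yourUploadAndSessionHandler);
            server.start();
            server.join();
        }
    }

If you also expose a plain HTTP connector, make sure it only redirects to HTTPS so the access tokens never travel in the clear.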
I am coding an application which needs to do some web automation against websites on our intranet. Some are simple web services, while others will be HTTPS websites. My application needs to connect to them via SOCKS proxies.
The HttpWebRequest class does not support SOCKS, so I am looking to code a complete HTTP wrapper using sockets. I'd like suggestions from my fellow coders on what would be a good component to use, as I am not looking to reinvent the wheel here; I'd rather use an existing socket-based solution, either open source or a paid component.
Any suggestions? I only need socket-based components that support SOCKS proxies.
SecureBlackbox includes HTTP/HTTPS components that support HTTP proxies, HTTPS proxies, and SOCKS. The components use their own socket class, which can also be used separately.
Here is our current infrastructure:
2 web servers behind a shared load balancer
dns is pointing to the load balancer
web app is done in asp.net, with wcf services
My question is how to set up the SSL certificate to support HTTPS connections.
Here are 2 ideas that I have:
SSL certificate terminates at the load balancer; secure/insecure traffic behind the load balancer is forwarded to 2 different ports.
pro: only need 1 certificate as I scale horizontally
cons: I have to tell secure from insecure by checking which port the request came in on, which doesn't quite feel right to me
cons: WCF by design will not work when IIS is bound to 2 different ports (according to this)
SSL certificate terminates on each of the servers?
cons: need to add more certificates to scale horizontally
thanks
Definitely terminate SSL at the load balancer!!! Anything behind that should NOT be visible outside. Why wouldn't two ports for secure/insecure work just fine?
You don't actually need more certificates at all. Because the externally visible FQDN is the same, you can use the same certificate on each machine.
This means that WCF (if you're using it) will work. WCF with SSL terminating on the external load balancer is painful if you're signing/encrypting at the message level rather than the transport level.
You don't need two ports, most likely. Just have the SSL virtual server on the load balancer add an HTTP header to the request and check for that. It's what we do with our Zeus ZXTM 5.1.
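To make the header approach concrete: the check is a line or two in whatever handles the request (in ASP.NET you'd read the header off the incoming request). Here's a rough sketch as a generic Java servlet filter, purely to show the shape; the X-Forwarded-Proto header name is an assumption, so use whatever header your load balancer is actually configured to insert:

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class RequireHttpsFilter implements Filter {
        public void init(FilterConfig config) {}

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;

            // Assumes the load balancer adds this header only when it terminated SSL for the request.
            boolean secure = "https".equalsIgnoreCase(request.getHeader("X-Forwarded-Proto"));
            if (secure) {
                chain.doFilter(req, res);
            } else {
                // Bounce plain-HTTP requests back out over HTTPS.
                response.sendRedirect("https://" + request.getServerName() + request.getRequestURI());
            }
        }

        public void destroy() {}
    }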
You don't have to get a cert for every site; there are such things as wildcard certs, though the cert would still have to be installed on every server. (That's assuming you are using subdomains; if not, you can reuse the same cert across machines.)
But I would probably put the cert on the load balancer, if only for the sake of easy configuration.