Connect to a web application protected by GlobalProtect with custom HTTP requests

Our company CRM web application does not have an API, which makes it extremely cumbersome to upload data: everything has to be done manually through the GUI. To access the application you need to be authenticated through GlobalProtect. My idea was to send HTTP requests directly to the server to work around the lack of an API.
However, I have been very unsuccessful: copying HTTP requests out of the Chrome DevTools and changing their payloads in Postman results in an ECONNRESET error. I am no networking expert, but I believe this error has to do with the fact that the application is protected by GlobalProtect, and that more communication with the server is needed before it will accept my modified HTTP requests.
Is there anyone with knowledge of networking and GlobalProtect who could point me in the right direction towards setting up direct communication with the server? Thanks so much!
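
If the GlobalProtect tunnel is up on the machine sending the requests and the browser's session cookies are carried over, a replayed request is, in principle, just another HTTP request. Below is a minimal C# sketch of that replay; it is not specific to GlobalProtect, and the cookie name, domain, headers, URL and payload are all placeholder assumptions:

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class CrmUpload
{
    static async Task Main()
    {
        // Copy the session cookie(s) shown in DevTools verbatim; name and domain are hypothetical.
        var cookies = new CookieContainer();
        cookies.Add(new Cookie("SESSIONID", "value-from-devtools", "/", "crm.internal.example.com"));

        var handler = new HttpClientHandler { CookieContainer = cookies };
        using (var client = new HttpClient(handler))
        {
            // Replicate the headers the browser sent; some apps reject requests without them.
            client.DefaultRequestHeaders.Add("User-Agent", "Mozilla/5.0");
            client.DefaultRequestHeaders.Add("X-Requested-With", "XMLHttpRequest");

            var payload = new StringContent("{\"name\":\"test\"}",
                System.Text.Encoding.UTF8, "application/json");
            var response = await client.PostAsync(
                "https://crm.internal.example.com/api/upload", payload);
            Console.WriteLine(response.StatusCode);
        }
    }
}
```

If a request identical to the browser's still gets ECONNRESET, comparing a working browser request and the Postman one byte for byte (e.g. in a local proxy such as Fiddler) is usually the quickest way to find the difference.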

Related

Docusign Connect / webhook error: The underlying connection was closed: An unexpected error occurred on a send

Not sure if this is a Docusign or nginx question. I'm working on integrating an application with Docusign and I keep seeing the somewhat vague error in the title in the Docusign Connect logs. In our nginx logs I see that a POST to our application's /webhook endpoint was attempted but doesn't go through. I've specified TLS 1.2 and have tried increasing our nginx timeout, but that doesn't seem to fix it.
One theory I have is that our server's certificate isn't chained to a Microsoft-trusted CA, but I would expect a different error if that were the case.
Any help or guidance would be greatly appreciated.
This will most likely require you to get the IT folks managing this server and the networking involved. And yes, they may need to install a certificate, but such errors can also be caused by a firewall blocking certain requests, an anti-virus blocking requests, or even DNS problems preventing the HTTP request from being sent and received by the server.
We highly recommend using a public cloud for Connect, and we have plenty of examples of how this can work while still having your code run on your own IT server.
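
One way to probe the certificate-chain theory from the outside is to run a TLS 1.2 handshake against the webhook endpoint and print any chain errors, which roughly approximates what Connect's client would see. A hedged C# sketch; the hostname is a placeholder:

```csharp
using System;
using System.Net.Security;
using System.Net.Sockets;
using System.Security.Authentication;

class TlsProbe
{
    static void Main()
    {
        const string host = "webhooks.example.com"; // hypothetical webhook host
        using (var tcp = new TcpClient(host, 443))
        using (var ssl = new SslStream(tcp.GetStream(), false,
            (sender, cert, chain, errors) =>
            {
                // SslPolicyErrors.None means the chain validated on this machine.
                Console.WriteLine("Certificate errors: {0}", errors);
                return true; // accept anyway so the handshake completes and we can inspect it
            }))
        {
            ssl.AuthenticateAsClient(host, null, SslProtocols.Tls12,
                checkCertificateRevocation: true);
            Console.WriteLine("Negotiated {0}, cipher {1}", ssl.SslProtocol, ssl.CipherAlgorithm);
        }
    }
}
```

Note that validation results depend on the trust store of the machine running the probe, so a clean result here doesn't guarantee that Microsoft's servers trust the chain.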

Wrap external http url into https

I have the URL of an external service I need to integrate our legacy system with.
Our legacy system uses bridges (pre-defined connectors) to talk to the external world.
Currently there is only a web connector for HTTPS, but that external service is available only over HTTP, i.e. there is no SSL on that end and we cannot do anything about it.
So I'm wondering whether there is some online service that could wrap an HTTP URL into HTTPS, some sort of public proxy or whatever, so that I could get an HTTPS URL in a few clicks.
For now this is just a proof-of-concept project, so I'm trying to avoid installing any internal proxy in our network etc. I just need the simplest and quickest solution that gives me an HTTPS URL.
Thanks in advance for your help, guys.
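
For context, such a wrapper is essentially an HTTPS-terminating reverse proxy, and routing internal traffic through a third-party one has obvious security implications. If a tiny self-hosted shim ever becomes acceptable, this is roughly all it has to do. A minimal ASP.NET Core sketch (GET-only; the upstream URL is a placeholder; HTTPS relies on the local development certificate and the web SDK's implicit usings):

```csharp
// Program.cs: forwards https://localhost:5001/<path> to http://legacy.example.com/<path>.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Upstream service that only speaks plain HTTP (placeholder URL).
var upstream = new HttpClient { BaseAddress = new Uri("http://legacy.example.com/") };

app.Map("/{**path}", async (HttpContext ctx, string? path) =>
{
    // Forward the GET and stream the upstream response back to the HTTPS caller.
    var response = await upstream.GetAsync(path ?? "");
    ctx.Response.StatusCode = (int)response.StatusCode;
    await response.Content.CopyToAsync(ctx.Response.Body);
});

app.Run("https://localhost:5001");
```

A production version would also forward methods, headers, query strings and bodies, which is why an off-the-shelf reverse proxy (nginx, YARP, etc.) is normally the better answer.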

Where to host SignalR when long-running service via WCF is backend

I'm sure that was a confusing enough title.
I have a long-running Windows service dealing with things happening in the world. This service is my canonical source of truth for the rest of my system. Now I want to slap a web interface onto this so the clients can see what is actually going on. At first this would simply be an MVC5 application with some Web API stuff. Then I plan to use SignalR 2.0 and Ember.js to make this application more interactive and "realtime".
The client communicates with the Windows service over named pipes using WCF. A client (such as a web app) could request an instance of, for example, IEventService, would be given a WCF proxy client, and could read about events through this interface. Simple enough.
However, a web application basically just exists in the sense that it responds to requests from the user. The way I understand it, this is not the optimal environment for a long-lived WCF client proxy to raise events in, and thus I wonder how to host my SignalR stuff. Keep in mind that a user would log in to the MVC5 site, but through the magic of SignalR, they will keep interacting with the service without necessarily making further requests to the website.
The way I see it, there are two options:
1) Host SignalR stuff as part of the web app. Find a way to keep it "long-running" while it has active clients, so that it can react to events on the WCF client proxy by passing information out to the connected web users.
2) Host SignalR stuff as part of my Windows service. This is already long-running, but I know nada about OWIN and what this would mean for my project. Also the SignalR client will have to connect to a different port than where the web app was served from, I assume.
Any advice on which is the right direction to go in? Keep in mind that in extreme cases, a web user would log in when they get to work in the morning and have only SignalR traffic going back and forth (i.e. no web requests) for a full work day before logging out. I need them to keep up with real-time events all that time.
Any takers? :)
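
For reference, the consuming side of the named-pipe proxy described above might look roughly like this; IEventService, its members and the endpoint address are assumptions, and a proxy that raises events would need a duplex contract rather than this plain request/reply shape:

```csharp
using System;
using System.ServiceModel;

// Assumed contract; the real IEventService lives in the Windows service's codebase.
[ServiceContract]
public interface IEventService
{
    [OperationContract]
    string[] GetRecentEvents();
}

class Client
{
    static void Main()
    {
        var factory = new ChannelFactory<IEventService>(
            new NetNamedPipeBinding(),
            new EndpointAddress("net.pipe://localhost/events")); // placeholder address
        IEventService proxy = factory.CreateChannel();

        foreach (var e in proxy.GetRecentEvents())
            Console.WriteLine(e);

        factory.Close();
    }
}
```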
The benefit of self-hosting as part of your Windows service is that you can integrate the calls to clients directly with your existing code and events. If you host the SignalR server separately, you'd have another layer of communication between your service and the SignalR server.
If you've already decided on using WCF named pipes for that, then it probably won't make a difference whether you self-host or host in IIS (as long as it's on the same machine). The SignalR server itself is always "long-running" in the sense that as long as a client is connected, it will receive updates. It doesn't require manual requests from the user.
In any case, you'll probably need a web server to serve the HTML, scripts and images.
Having clients connected for a day shouldn't be a problem either way, as far as I can see.
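
To make the self-hosting option more concrete, here is a minimal sketch of SignalR 2.x self-hosted via OWIN inside a process like that Windows service. The hub name, URL and event wiring are placeholder assumptions; the packages involved are Microsoft.AspNet.SignalR.SelfHost and Microsoft.Owin.Cors:

```csharp
using System;
using Microsoft.AspNet.SignalR;
using Microsoft.Owin.Cors;
using Microsoft.Owin.Hosting;
using Owin;

public class EventsHub : Hub { }

class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // The MVC5 site is served from a different origin/port, so allow CORS.
        app.UseCors(CorsOptions.AllowAll);
        app.MapSignalR();
    }
}

class Program
{
    static void Main()
    {
        // In the real service this would run from OnStart(); console app here for brevity.
        using (WebApp.Start<Startup>("http://localhost:8080"))
        {
            // Wherever the WCF IEventService raises an event, push it to connected clients.
            var hub = GlobalHost.ConnectionManager.GetHubContext<EventsHub>();
            hub.Clients.All.eventOccurred("service started");

            Console.WriteLine("SignalR self-host running on :8080");
            Console.ReadLine();
        }
    }
}
```

The browser then connects to http://localhost:8080/signalr while the pages themselves are still served by the MVC5 site, which is the extra port the question anticipated.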

Flex: Is SecureSocket's <policy-file-request/> encrypted with the target server's public key?

I have an Apache SSL connection available, along with a C# server that listens on port 843 (I wrote a basic C# server since I don't know how to make Apache respond properly to a specific request).
When using the Socket object, all seems to be fine and the connection gets approved, thus allowing the cross-domain communication. However, when using a SecureSocket object, instead of getting <policy-file-request/>, I get lots of gibberish.
I've been trying to figure out what's going on and assumed it's either:
A. encrypting the request with the connection target's public key, or
B. trying to authenticate itself via SSL prior to sending the request.
I've spent the entire week trying to figure out what's going on, with no luck, so if someone can shed some light on the way the SecureSocket object deals with cross-domain requests, it would be greatly appreciated.
Also, is there a way to use a normal Socket and somehow get the information?
With regards,
Mike
You need to set up an SSL-enabled policy server. Using Apache to terminate the SSL and proxy to the policy server didn't work for me, as it complains about a bad method.
I wrote a simple SSL-enabled Java program, using the same certificate as the target server (I haven't tested with a different one, so I don't know if that's mandatory), to listen on port 843 and reply with the policy. Now I can communicate with SecureSockets.
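
The same approach can be sketched in C#, which fits the basic server the question already had on port 843. Everything here is an assumption for illustration: the .pfx path and password are placeholders, and the policy below allows everything, which you would tighten in practice:

```csharp
using System;
using System.Net;
using System.Net.Security;
using System.Net.Sockets;
using System.Security.Cryptography.X509Certificates;
using System.Text;

class SslPolicyServer
{
    // Flash expects the policy terminated by a null byte.
    const string Policy =
        "<?xml version=\"1.0\"?>" +
        "<cross-domain-policy>" +
        "<allow-access-from domain=\"*\" to-ports=\"*\" />" +
        "</cross-domain-policy>\0";

    static void Main()
    {
        // Same certificate as the target server, per the answer above (placeholder path).
        var cert = new X509Certificate2("server.pfx", "password");
        var listener = new TcpListener(IPAddress.Any, 843);
        listener.Start();

        while (true)
        {
            using (var client = listener.AcceptTcpClient())
            using (var ssl = new SslStream(client.GetStream()))
            {
                // SecureSocket performs a TLS handshake before sending anything,
                // which is exactly the "gibberish" a plain-text server sees.
                ssl.AuthenticateAsServer(cert);

                var buf = new byte[1024];
                ssl.Read(buf, 0, buf.Length); // expect "<policy-file-request/>\0"

                var bytes = Encoding.UTF8.GetBytes(Policy);
                ssl.Write(bytes, 0, bytes.Length);
            }
        }
    }
}
```

This also speaks to hypothesis B in the question: the client sets up the TLS session first, so a plain Socket cannot read the request, and the policy server itself must speak SSL.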

Workaround if the Application is Down

We have deployed an application on the server.
The problem is that the application sometimes goes down due to some issue (e.g. while downloading a huge volume of data into Excel).
The application comes back up after manually restarting IIS.
We are creating a new application, so we are not working on fixing this issue.
As a workaround, we are trying to build an exe with the following requirement:
Ping the application deployed on the server and find out whether it is up or down; if the application is down, restart IIS.
Is it possible to ping a local website on IIS? Is there any other way to apply a temporary fix?
Hmmm, that kind of stability isn't good. However, you're interested in monitoring a URL and determining whether it is active...
TBH, I'm sure there are a few monitoring applications knocking around, some even free if that's your thing, that will recognise specific ports and utilise appropriate protocols such as HTTP. But if you fancy having a go yourself, you could always utilise HttpWebRequest to mock up a request to the server and hopefully it will respond in a timely manner. Typically, if you're just touching the server, you can utilise a 'HEAD' request, which receives just the header data rather than all the data. Check out this example.
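
Along those lines, here is a minimal sketch of a HEAD-based health check that restarts IIS when the site doesn't answer. The URL and timeout are placeholder assumptions, and the process needs sufficient privileges to run iisreset:

```csharp
using System;
using System.Diagnostics;
using System.Net;

class SiteMonitor
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://localhost/myapp/"); // placeholder URL
        request.Method = "HEAD";   // headers only, no response body
        request.Timeout = 10000;   // treat a site that hangs for 10 s as down

        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine("Site is up: {0}", response.StatusCode);
            }
        }
        catch (WebException ex)
        {
            Console.WriteLine("Site appears down ({0}); restarting IIS...", ex.Status);
            Process.Start("iisreset", "/restart").WaitForExit();
        }
    }
}
```

Run it from Task Scheduler every few minutes and it covers the stated requirement; just be aware that iisreset recycles every site on the box, not only the failing one.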
