We have two servers, each running a local application that connects to a local web service; the applications and services are identical on both servers.
One of the servers works just fine,
while the other one is just dead. I have the impression that the security configuration is different on the two servers.
What could prevent an application X from connecting to a web service, given that another application Y on the same server can connect to it? X is a Windows service.
What should I check, and what are the likely causes?
Thanks
Check whether there is a firewall that might need some ports opened up.
Could there be any kind of antivirus or similar software set up on one of the servers?
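If it helps, here is a minimal sketch of a raw connectivity test (the host name and port are placeholders); running it under the same account as the failing Windows service can show whether a firewall is blocking the port:

    using System;
    using System.Net.Sockets;

    class PortCheck
    {
        static void Main()
        {
            // Placeholder host/port -- replace with the web service's actual endpoint.
            const string host = "localhost";
            const int port = 80;

            try
            {
                using (var client = new TcpClient())
                {
                    // Fails quickly if a firewall or the service itself refuses the connection.
                    client.Connect(host, port);
                    Console.WriteLine("TCP connection to {0}:{1} succeeded.", host, port);
                }
            }
            catch (SocketException ex)
            {
                Console.WriteLine("TCP connection failed: {0}", ex.Message);
            }
        }
    }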
Basic troubleshooting of loosely coupled applications means independently testing and verifying each of those services.
Can you access the web service locally through a different application, e.g. a web browser? If you can't reach the service through the browser, then the server configurations (at some level) are not identical.
Only after you're certain the service is reachable should you look into issues with the Windows service.
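If there's no browser handy on the server, a small console check along these lines does the same thing (a sketch only; the URL is a placeholder for the actual service endpoint):

    using System;
    using System.Net;

    class ServiceReachabilityCheck
    {
        static void Main()
        {
            // Placeholder URL -- point this at the web service endpoint (.asmx or .svc).
            const string serviceUrl = "http://localhost/MyService/Service.svc";

            try
            {
                var request = (HttpWebRequest)WebRequest.Create(serviceUrl);
                request.Method = "GET";
                request.Timeout = 5000; // milliseconds

                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine("Service responded with HTTP {0}.", (int)response.StatusCode);
                }
            }
            catch (WebException ex)
            {
                Console.WriteLine("Could not reach the service: {0}", ex.Message);
            }
        }
    }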
I am creating a web app. I want to create a listening service (TCP) that listens continuously and updates the web page accordingly.
A Windows service or a WCF service?
In the end I just want a background service that listens on a socket continuously and updates data in a database; when the database is updated, I will use SignalR to show that on my page.
Right now I am trying WCF, but I am wondering if it can also be done with a Windows service. For now this application will run on a LAN, but in the future it could also be in the cloud.
First of all, it is important to understand that a Windows service and a WCF service are not the same.
A Windows service is a specialized executable that runs in the background on Windows.
A WCF service is a specialized piece of code that exposes some functionality through a well-defined endpoint. It does not run on its own, but instead must be hosted by some parent process, like IIS, a desktop application, or even a Windows service.
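To make the distinction concrete, here is a minimal sketch (the contract, implementation, and address are made up purely for illustration) of a Windows service acting as the parent process that hosts a WCF service:

    using System;
    using System.ServiceModel;
    using System.ServiceProcess;

    // Hypothetical WCF contract and implementation, just for illustration.
    [ServiceContract]
    public interface IStatusService
    {
        [OperationContract]
        string GetStatus();
    }

    public class StatusService : IStatusService
    {
        public string GetStatus() { return "OK"; }
    }

    // The Windows service is the host process; the WCF service lives inside it.
    public class HostWindowsService : ServiceBase
    {
        private ServiceHost _host;

        protected override void OnStart(string[] args)
        {
            _host = new ServiceHost(typeof(StatusService),
                new Uri("net.tcp://localhost:9000/status")); // placeholder address
            _host.AddServiceEndpoint(typeof(IStatusService), new NetTcpBinding(), "");
            _host.Open();
        }

        protected override void OnStop()
        {
            if (_host != null) _host.Close();
        }

        static void Main()
        {
            ServiceBase.Run(new HostWindowsService());
        }
    }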
In thinking about the problem you've described, I suppose the most fundamental question to ask is whether or not you have control over the data that will be received via the TCP connection. WCF is built on the notion of the ABCs (Address, Binding, and Contract), all of which have to match in order to facilitate data exchange between WCF endpoints. For example, if you wish to expose a WCF endpoint via IIS that accepts TCP connections from some remote WCF endpoint, the remote WCF endpoint needs to send data to your IIS-hosted WCF endpoint using the agreed-upon data contract. Absent that, WCF will not work. So, if you cannot define the data contract to be used between WCF endpoints, then you'll need to find another option. An option that will work is to open a TCP listener within a Windows service, process the data as it is received, update your database, and listen for more data.
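For that last option, a rough sketch of the listener loop inside the Windows service might look like the following (the port, message format, and UpdateDatabase step are placeholders; a real service would also need error handling and a clean shutdown path):

    using System.Net;
    using System.Net.Sockets;
    using System.Text;
    using System.Threading;

    public class TcpListenerWorker
    {
        private readonly TcpListener _listener = new TcpListener(IPAddress.Any, 9500); // placeholder port
        private volatile bool _running;

        public void Start()
        {
            _running = true;
            _listener.Start();
            new Thread(Listen) { IsBackground = true }.Start();
        }

        public void Stop()
        {
            _running = false;
            _listener.Stop();
        }

        private void Listen()
        {
            var buffer = new byte[4096];
            while (_running)
            {
                using (TcpClient client = _listener.AcceptTcpClient())
                using (NetworkStream stream = client.GetStream())
                {
                    int read = stream.Read(buffer, 0, buffer.Length);
                    string message = Encoding.UTF8.GetString(buffer, 0, read);

                    // Placeholder: parse the message and persist the result.
                    UpdateDatabase(message);
                }
            }
        }

        private void UpdateDatabase(string message)
        {
            // Hypothetical persistence step -- once the database changes,
            // SignalR can pick up the update and push it to the web page.
        }
    }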
================================================
By way of example, I work on a project that has a front-end desktop application that communicates with a back-end Windows service. We build both the application and the Windows service, so we have full control over the data exchange between the two processes. At one point in time, we used WCF as the mechanism for data exchange. The Windows service would host a WCF service that exposed a NetNamedPipeBinding, which we later changed to NetTcpBinding to get around some system administration issues. The application would then create its own endpoint to communicate with the WCF service being hosted within the Windows service.
This worked fine.
As our system matured, we needed to send more and more information from the Windows service to the application. If I recall correctly, we experimented with streaming within WCF and concluded that the overhead was not something we could tolerate. So we used WCF to exchange commands and status information between the application and the Windows service, but simultaneously used a TCP socket connection to stream the data from the Windows service to the application.
This worked fine.
When we got a chance to update the Windows service software, we decided that it would be better to have a single communication mechanism between the Windows service and the application. So, we replaced WCF altogether with a TCP socket connection that uses a homegrown messaging protocol to exchange information in both directions - application to Windows service and Windows service to application.
This works fine and is the approach we've used for a couple of years now.
HTH
I am at the stage of using SignalR in my project, and I don't understand when to use the self-hosted option and when not to. For example, if I want to host my web application in a server farm,
there will be separate hosting servers
and separate SignalR hubs in each IIS server.
If we want to broadcast a message to every client, how does this work in SignalR?
The problem with SignalR running on multiple instances is that clients connected to instance A cannot get messages from clients connected to instance B.
From the SignalR scaleout documentation:
However, when you scale out, clients can get routed to different servers. A client that is connected to one server will not receive messages sent from another server.
The solution to this is to use a backplane: every time a server receives a message, it forwards it to all the other servers. You can do this using Azure Service Bus, Redis, or SQL Server.
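For instance, wiring up the SQL Server backplane in SignalR 2 looks roughly like this (a sketch; the connection string is a placeholder, and it assumes the Microsoft.AspNet.SignalR.SqlServer package; Redis and Azure Service Bus have analogous UseRedis/UseServiceBus calls):

    using Microsoft.AspNet.SignalR;
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Placeholder connection string -- every server in the farm points at the same
            // database, which acts as the backplane relaying messages between servers.
            string connectionString = "Server=.;Database=SignalRBackplane;Integrated Security=True";
            GlobalHost.DependencyResolver.UseSqlServer(connectionString);

            app.MapSignalR();
        }
    }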
The way I see it, you use the self-host option when you either don't want full IIS running (because you have some lightweight operations that don't require all of IIS's overhead) or you don't want a web server at all (for example, you want to add real-time functionality to an existing Windows Forms application, or to any other process).
Be sure to read the documentation for self-hosting SignalR and decide whether you actually need to self host SignalR.
If you are developing a web application under IIS, I don't see any reason why you would want to self-host SignalR.
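If you do decide to self-host, the overall shape is roughly the following (a sketch; the URL and hub are placeholders, and it assumes the Microsoft.AspNet.SignalR.SelfHost and Microsoft.Owin.Hosting packages):

    using System;
    using Microsoft.AspNet.SignalR;
    using Microsoft.Owin.Hosting;
    using Owin;

    public class ChatHub : Hub
    {
        public void Send(string message)
        {
            // Broadcast to every connected client.
            Clients.All.broadcastMessage(message);
        }
    }

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            app.MapSignalR();
        }
    }

    class Program
    {
        static void Main()
        {
            // Placeholder URL -- this process hosts SignalR itself, with no IIS involved.
            using (WebApp.Start<Startup>("http://localhost:8080"))
            {
                Console.WriteLine("SignalR self-host running at http://localhost:8080");
                Console.ReadLine();
            }
        }
    }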
Hope this helps. Best of luck!
There is an intranet-based ASP.NET application that is deployed to a server (IIS) and to a group of clients (about ten). The end user can then decide to connect either to the local application (deployed to their local machine) or to the server version. I do not understand the reasoning for doing this. My question is: is this common practice?
Yes, it is a common practice, often used to verify the performance of the application. Each client will have its own settings and, as a matter of process, the application should not break in any environment. It can be beneficial to provide both a server version and a local version.
If the clients are laptops, and the application supports disconnected data sets and synchronization, it would make sense. Typically you'd see something like this when the client machines are taken off-network to be used at a remote work site.
I have a weird case here at work.
The customer (a telecommunications firm) has a server to which we publish ASP.NET web service code that we designed for them. We use that server and web service to get data from the customer's own web service and pass it on to the client (a telephone) to use.
The customer does not allow us to code on the remote server, so we have to work on our local computers.
The customer has two IPs for its own web services. One of them, an internal IP, can be reached only from the remote server. The second IP is public, and I can reach it from my local computer. They expose the same methods; the IPs are separated for security reasons.
Everything is fine while developing locally. But when I need to publish the web service to the server, I need to change the web service URLs to the remote server's internal IP. The local Visual Studio web reference won't update to the internal URL because it can't reach the service, which is only reachable from the server. So I cannot build and publish my code.
Somehow I need to change my Visual Studio reference URLs to the internal IP (which cannot be reached from my local machine) in order to build and publish.
Hope I am clear.
Thanks
It can be changed in the web.config of your local project.
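If the web reference's URL behavior is set to Dynamic, Visual Studio keeps the service URL in configuration and the generated proxy reads it from there, so the published build can point at the internal IP without a rebuild. You can also override it in code; a rough sketch (the setting key is a made-up example):

    using System.Configuration;
    using System.Web.Services.Protocols;

    static class ServiceUrlHelper
    {
        // Works for any ASMX-style proxy generated by "Add Web Reference",
        // since those proxies all derive from SoapHttpClientProtocol.
        public static void ApplyConfiguredUrl(SoapHttpClientProtocol proxy, string settingKey)
        {
            // "settingKey" names an appSettings entry in web.config that holds
            // the internal IP's URL on the server and the public URL locally.
            string url = ConfigurationManager.AppSettings[settingKey];
            if (!string.IsNullOrEmpty(url))
            {
                proxy.Url = url;
            }
        }
    }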
I have a situation very similar to the one in this question:
Selective Cache clearing across load balanced servers (ASP.Net)
The difference is that, due to our hosting configuration, I am unable to address individual servers by IP address. Assuming I cannot access specific servers via web requests, is it possible to access the HttpContext of a web application running on the same machine? I'm thinking I could accomplish this with a Windows service that I could address by machine name, or alternatively a console application; I just don't know if I can gain access to the web application cache either way.
You can expose the contents of an app's web cache through some Remoting/WCF code built into the web app. Hopefully you can use localhost to access it from an app on the same box.
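As a sketch of that idea (the contract and operations are illustrative; it assumes a WCF service hosted inside the ASP.NET application so that it shares the same HttpRuntime.Cache):

    using System.ServiceModel;
    using System.ServiceModel.Activation;
    using System.Web;

    [ServiceContract]
    public interface ICacheAdminService
    {
        [OperationContract]
        void RemoveCacheItem(string key);

        [OperationContract]
        bool CacheItemExists(string key);
    }

    // Allows the service to run in ASP.NET compatibility mode when hosted inside the web application.
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class CacheAdminService : ICacheAdminService
    {
        public void RemoveCacheItem(string key)
        {
            // Same HttpRuntime.Cache instance the web application itself uses.
            HttpRuntime.Cache.Remove(key);
        }

        public bool CacheItemExists(string key)
        {
            return HttpRuntime.Cache.Get(key) != null;
        }
    }

A Windows service or console app on the same machine could then call this over a localhost endpoint (for example, an .svc exposed only on the loopback address) to clear entries on that specific server.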