Why use web services over remote connections? - odbc

Can we just use remote connections when we need to connect to a remote database on a remote server, or should we use web services? What is the web service architecture? Does it differ when we use a LAN or the internet?

To put it simply: web services are based on remote connections (TCP connections) between server and client. However, web services use standard formats to encode and transport requests and responses, and there are standard libraries for every platform that take care of the communication.
The benefit of using web services over raw remote connections is that you do not have to bother with sockets, encoding messages into streams, and all the other plumbing. Instead, you concentrate on your business logic.
In the case of the internet, you will often need to go through a proxy server. Almost every organisation has a proxy server for HTTP and HTTPS, and these can be used for web services directly. If you use your own remote connections, you may not be able to configure the proxy server to let them through.
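For illustration only, a rough Python sketch of that difference; the endpoint URL is a made-up placeholder, not a real service:

```python
import json
import socket
import urllib.request

# Web-service style: the standard library handles the connection, the message
# format (HTTP) and any proxy configured in the environment (http_proxy/https_proxy).
with urllib.request.urlopen("http://example.com/api/customers/42") as resp:
    customer = json.loads(resp.read().decode("utf-8"))

# Raw-socket style: you build and parse the byte stream yourself, and a
# corporate proxy will usually not let this custom traffic through.
with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(b"GET /api/customers/42 HTTP/1.1\r\n"
                 b"Host: example.com\r\nConnection: close\r\n\r\n")
    raw_reply = sock.recv(65536)  # parsing headers and body is now your problem
```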

Related

What is more suitable: a Windows service or a WCF service?

I am creating a web app. I want to create a listening service (TCP) that listens continuously and updates a web page accordingly.
A Windows service or a WCF service?
In the end, I just want a background service that listens on a socket continuously and updates data in a database, and when the database is updated I will use SignalR to show that on my page.
Right now I am trying WCF, but I am wondering whether it can be done with a Windows service as well. For now this application will run on a LAN, but in the future it may also run in the cloud.
First of all, it is important to understand that a Windows service and a WCF service are not the same.
A Windows service is a specialized executable that runs in the background on Windows.
A WCF service is a specialized piece of code that exposes some functionality through a well-defined endpoint. It does not run on its own, but instead must be hosted by some parent process, like IIS, a desktop application, or even a Windows service.
In thinking about the problem you've described, I suppose the most fundamental question to ask is whether or not you have control over the data that will be received via the TCP connection. WCF is built on the notion of the ABCs (Address, Binding, and Contract), all of which have to match in order to facilitate data exchange between WCF endpoints. For example, if you wish to expose a WCF endpoint via IIS that accepts TCP connections from some remote WCF endpoint, the remote WCF endpoint needs to send data to your IIS-hosted WCF endpoint using the agreed-upon data contract. Absent that, WCF will not work. So, if you cannot define the data contract to be used between WCF endpoints, then you'll need to find another option. An option that will work is to open a TCP listener within a Windows service, process the data as it is received, update your database, and listen for more data.
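To make that last option concrete, here is a minimal sketch of the listener-plus-database loop; Python and SQLite are used purely for illustration (a real deployment would run the equivalent .NET code inside the Windows service), and the port, table, and message format are made up:

```python
import socketserver
import sqlite3

DB_PATH = "readings.db"   # placeholder database file

class ReadingHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One connection per sender; each newline-terminated line is one message.
        db = sqlite3.connect(DB_PATH)
        with db:
            db.execute("CREATE TABLE IF NOT EXISTS readings (payload TEXT)")
            for line in self.rfile:
                payload = line.decode("utf-8").strip()
                db.execute("INSERT INTO readings (payload) VALUES (?)", (payload,))
        db.close()

if __name__ == "__main__":
    # Listen continuously; each incoming connection is handled on its own thread.
    with socketserver.ThreadingTCPServer(("0.0.0.0", 9000), ReadingHandler) as server:
        server.serve_forever()
```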
================================================
By way of example, I work on a project that has a front-end desktop application that communicates with a back-end Windows service. We build both the application and the Windows service, so we have full control over the data exchange between the two processes. At one point in time, we used WCF as the mechanism for data exchange. The Windows service would host a WCF service that exposed a NetNamedPipeBinding, which we later on changed to NetTcpBinding to get around some system administration issues. The application would then create its own endpoint to communicate with the WCF service being hosted within the Windows service.
This worked fine.
As our system got more mature, we needed to start sending more and more information from the Windows service to the application. If I recall correctly, I believe we experimented with streaming within WCF and concluded that the overhead was not something we could tolerate. So, we used WCF to exchange commands and status information between the application and the Windows service, but we simultaneously used a TCP socket connection to stream the data from the Windows service to the application.
This worked fine.
When we got a chance to update the Windows service software, we decided that it would be better to have a single communication mechanism between the Windows service and the application. So, we replaced WCF altogether with a TCP socket connection that uses a homegrown messaging protocol to exchange information in both directions - application to Windows service and Windows service to application.
This works fine and is the approach we've used for a couple of years now.
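For anyone wondering what a "homegrown messaging protocol" over a TCP socket can look like, here is a hypothetical length-prefixed framing sketch in Python; it is not the protocol described above, just the general shape of one:

```python
import json
import socket
import struct

# Each frame: 4-byte big-endian length header, then a UTF-8 JSON body.
# Both ends can call send_message/recv_message, so traffic flows both ways.

def send_message(sock: socket.socket, message: dict) -> None:
    body = json.dumps(message).encode("utf-8")
    sock.sendall(struct.pack(">I", len(body)) + body)

def recv_message(sock: socket.socket) -> dict:
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return json.loads(_recv_exact(sock, length).decode("utf-8"))

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        data += chunk
    return data
```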
HTH

Implementing an MQTT server capable of serving a website too

Short question: how can I host an MQTT server on my remote Ubuntu 16 server while at the same time hosting an HTTP server that will use the MQTT data?
Real question: I want to build an IoT system that will be MONITORED and CONTROLLED via an ESP32, which will SEND FEEDBACK to and ACCEPT COMMANDS from a remote server (maybe LAMP?). I also want the user to log in to a website hosted on this remote server, where they can monitor any sensor value or send commands (e.g. turning an LED on or off).
So what's the way to go here?
I was advised to go with MQTT, but then the above problem arose.
What I've found: using Mosquitto MQTT, I may be able to serve a website using WebSockets. But I would prefer a more scalable HTTPS approach; that is, I intend to have a database linked to my site and to run my PHP scripts.
I'm not that experienced, so please don't take anything for granted :)
MQTT uses a TCP connection and follows a publish/subscribe model, whereas the web (HTTP) follows a RESTful model (create, read, update, delete). If you want to stick with MQTT, you could use a SaaS offering such as HiveMQ's enterprise MQTT, which provides this integration; it charges a fee and in return you get an account and a dashboard for all your devices. Otherwise, you can try to build your own middleware to integrate MQTT with web services.
Another thing I would recommend is CoAP, which is also an M2M protocol but follows a RESTful model over UDP. It has a direct forward proxy to convert CoAP packets to HTTP(S) packets and vice versa.
In MQTT you have a central server (broker) to which the nodes publish their data and from which they fetch the data they need through topic filters.
In CoAP, each device that has data to share becomes a server, and any device interested in that data becomes a client that sends a GET request to the respective server. Similarly, a PUT request with a payload from a client updates the value on the server.
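As a sketch of that publish/subscribe model (assuming the paho-mqtt Python client with its 1.x API and a Mosquitto broker on localhost; the topic names are made up):

```python
import time
import paho.mqtt.client as mqtt      # pip install paho-mqtt
from paho.mqtt import publish

BROKER = "localhost"                 # assumed Mosquitto broker

def on_message(client, userdata, msg):
    # Every message published to a matching topic is delivered here by the broker.
    print(f"{msg.topic}: {msg.payload.decode()}")

# Subscriber side (e.g. the web back end): register a topic filter with the broker.
subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect(BROKER, 1883)
subscriber.subscribe("home/+/temperature")   # '+' is a single-level wildcard
subscriber.loop_start()
time.sleep(0.5)                              # crude: let the subscription register

# Publisher side (what the ESP32 would do): send one reading to the broker.
publish.single("home/livingroom/temperature", "21.5", hostname=BROKER)

time.sleep(0.5)
subscriber.loop_stop()
```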
You really should not be looking to combine the MQTT broker with an HTTP server, especially if you intend the HTTP server to actually be an application server (running back-end logic, e.g. PHP). These are two totally separate systems. There is nothing to stop your application logic connecting to the broker as a client.
If you intend to use MQTT over WebSockets, you can use something like nginx to proxy the WebSocket connection to the broker so it can sit behind the same logical HTTP/HTTPS address.
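As an aside, most MQTT client libraries can speak MQTT over WebSockets themselves; here is a small sketch with the paho-mqtt Python client, where the hostname, port, path, and topic are assumptions for a typical broker-behind-nginx setup:

```python
import time
import paho.mqtt.client as mqtt      # pip install paho-mqtt

# MQTT over WebSockets: the connection targets the normal HTTP(S) port, so a
# reverse proxy such as nginx can route it to the broker next to the website.
client = mqtt.Client(transport="websockets")
client.ws_set_options(path="/mqtt")          # path the proxy forwards to the broker
client.connect("example.com", 80)            # placeholder host and port
client.loop_start()
client.publish("home/livingroom/led", "on")  # e.g. a command for the ESP32
time.sleep(0.5)
client.loop_stop()
```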

When should we use SignalR self-hosted and when should we not?

I am at the stage of using SignalR in my project and I don't understand when to use the self-hosted option and when not to. For example, if I am going to host my web application in a server farm:
There will be separate hosting servers
Separate SignalR hubs in each IIS server
If we want to broadcast a message to every client, how does this work in SignalR?
The issue with SignalR running in multiple instances is that clients connected to instance A cannot get messages from clients connected to instance B.
From the SignalR scaleout documentation:
"However, when you scale out, clients can get routed to different servers. A client that is connected to one server will not receive messages sent from another server."
The solution to this is using a backplane: every time a server receives a message, it forwards it to all other servers. You can do this using Azure Service Bus, Redis, or SQL Server.
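The backplane pattern itself is not SignalR-specific. As a rough illustration of the idea only (Python with Redis pub/sub standing in for the real backplane, which in SignalR is configured in .NET rather than hand-rolled like this; channel and message fields are made up):

```python
import json
import threading
import time
import redis                     # pip install redis; assumes a local Redis instance

r = redis.Redis()
CHANNEL = "chat-backplane"       # made-up channel name

def broadcast_to_local_clients(message: dict) -> None:
    # Placeholder for pushing to the clients connected to *this* web server.
    print("to local clients:", message)

def on_client_message(message: dict) -> None:
    # Every server publishes incoming client messages to the shared channel...
    r.publish(CHANNEL, json.dumps(message))

def listen_to_backplane() -> None:
    # ...and every server also subscribes, so messages that arrived on other
    # servers still reach the clients connected here.
    pubsub = r.pubsub()
    pubsub.subscribe(CHANNEL)
    for item in pubsub.listen():
        if item["type"] == "message":
            broadcast_to_local_clients(json.loads(item["data"]))

threading.Thread(target=listen_to_backplane, daemon=True).start()
on_client_message({"user": "alice", "text": "hello from server A"})
time.sleep(1)   # keep the demo alive long enough to see the delivery
```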
The way I see it, you use the self-host option when you either don't want full IIS running (because you have some lightweight operations that don't require all of IIS's heaviness) or you don't want a web server at all (for example, you want to add real-time functionality to an already existing application, say a Forms application, or to any other process).
Be sure to read the documentation for self-hosting SignalR and decide whether you actually need to self host SignalR.
If you are developing a web application under IIS, I don't see any reason why you would want to self-host SignalR.
Hope this helps. Best of luck!

How can I use SignalR Hubs / Proxies without a SignalR connection?

Here is my situation:
I have a 4-tier web application consisting of browser, web server, application servers and database.
Browser and application server should communicate in a RPC-style way.
The backend will run on Windows machines, so I will use IIS as the web server. The application needs real-time communication between the application server and the browser.
I want to use a SignalR connection for the communication between the browser and the web server. For the communication between the web server and the application servers I want to use a plain TCP connection.
I think this approach will enable me to send JSON messages between the browser and the application servers. But how can I realize RPC-style communication?
Can I write a SignalR Hub, generate a JS proxy and bind the Hub to a TCP socket?
Here is a picture: https://www.dropbox.com/s/xeaja4dos4bgvbz/SignalR_Hubs_Stackoverflow.png
Nope. SignalR is based on HTTP, not TCP directly. WebSockets are the closest thing to a raw TCP socket, and they have the added benefit of working over port 80.
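In that setup the web tier has to do the translating itself: keep the TCP connection to the application server on one side and push updates to browsers through SignalR or raw WebSockets on the other. A hypothetical sketch of the TCP side of such a bridge in Python, with the browser-facing push left as a placeholder and newline-delimited JSON assumed as the wire format:

```python
import asyncio
import json

async def push_to_browsers(message: dict) -> None:
    # Placeholder: in the real system this would hand the message to the
    # SignalR hub / WebSocket layer that the browsers are connected to.
    print("push to browsers:", message)

async def bridge(host: str = "app-server.local", port: int = 9000) -> None:
    # Plain TCP connection from the web tier to the application server.
    reader, _writer = await asyncio.open_connection(host, port)
    async for line in reader:              # one JSON message per line (assumed)
        await push_to_browsers(json.loads(line))

if __name__ == "__main__":
    asyncio.run(bridge())
```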

Utilize WCF service in ASP.NET

I have hosted some WCF services on my client machine, and this machine is connected to the internet through a DSL line, so there is no live or static IP associated with it. Now I want to use these WCF services from my web pages through ASP.NET.
I need to ask: is it possible to access WCF services hosted on a machine that is connected through a plain internet connection?
A few other things to keep in mind: the client (where the WCF services are hosted) and the server (where the ASP.NET pages are hosted) are in totally different domains. But I know the client machine's IP and MAC address.
You can use services like www.dyndns.com to set up something like that.
