I have the system below:
one server
20 clients (could increase)
The clients display a webpage that embeds the Flex SWF file. The page shows a list and polls the server every second for any changes in the data; if there are changes, it refreshes the data.
The polling is handled via a URL that returns a JSON object.
Now I want a web application that I can use to see the current status of all the monitors on the network.
Is there a smart solution for this?
You could potentially have all the monitor apps connect to a NetConnectionGroup along with your status check app. They could then post pings and health checks into the group, and if your status check app is also connected it could report those statuses (this of course doesn't help if one of the monitors has crashed or not connected, but you'll have that problem with any solution!)
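If a Flash-specific group isn't an option, the same idea also works over plain HTTP, since the monitors already poll a JSON URL anyway: have each monitor POST a heartbeat and let the status web app read an aggregated JSON. A minimal Node.js sketch of that idea (the /heartbeat and /status routes and the 5-second staleness threshold are assumptions, not part of your existing system):

    // Minimal heartbeat registry: monitors POST /heartbeat/<id>, the status page GETs /status.
    // Route names and the 5-second "stale" threshold are illustrative assumptions.
    const http = require('http');

    const lastSeen = new Map(); // monitorId -> timestamp (ms) of last heartbeat

    http.createServer((req, res) => {
      if (req.method === 'POST' && req.url.startsWith('/heartbeat/')) {
        const monitorId = req.url.split('/')[2];
        lastSeen.set(monitorId, Date.now());
        res.writeHead(204);
        res.end();
      } else if (req.method === 'GET' && req.url === '/status') {
        // A monitor counts as "up" if it has pinged within the last 5 seconds.
        const now = Date.now();
        const status = [...lastSeen].map(([id, ts]) => ({
          id,
          up: now - ts < 5000,
          lastSeen: new Date(ts).toISOString(),
        }));
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify(status));
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(8080);

The same "crashed monitor" caveat applies here too: a monitor that never checks in simply shows up as stale.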
Related
We have an MQTT setup already running for communication between smart home devices and a remote server, used for controlling the devices remotely. Now we want to integrate our devices with Google Home and Alexa. These two use HTTP for communication with third-party device clouds.
I have implemented this for Google Home: after the device cloud receives a request, it converts the request to MQTT and sends it to the smart home device. The device cloud then waits a few seconds for a reply from the device. If no reply is received within the predefined time, it sends a failure HTTP response to Google Home; otherwise it forwards the received reply.
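For reference, here is a stripped-down sketch of that request/reply pattern using Node.js and the mqtt package (the broker URL, topic names, route, and the 5-second timeout are placeholders, not the actual implementation):

    // Hypothetical HTTP -> MQTT -> HTTP bridge: publish a command, wait briefly for the
    // device's reply, and fall back to a failure response on timeout.
    const express = require('express');
    const mqtt = require('mqtt');

    const app = express();
    app.use(express.json());
    const client = mqtt.connect('mqtt://broker.example.com'); // placeholder broker URL

    // Resolve with the first message on replyTopic, or reject after timeoutMs.
    function waitForReply(replyTopic, timeoutMs) {
      return new Promise((resolve, reject) => {
        const onMessage = (topic, payload) => {
          if (topic === replyTopic) {
            cleanup();
            resolve(JSON.parse(payload.toString()));
          }
        };
        const timer = setTimeout(() => { cleanup(); reject(new Error('device timeout')); }, timeoutMs);
        const cleanup = () => {
          clearTimeout(timer);
          client.removeListener('message', onMessage);
          client.unsubscribe(replyTopic);
        };
        client.subscribe(replyTopic);
        client.on('message', onMessage);
      });
    }

    app.post('/fulfillment/:deviceId', async (req, res) => {
      const { deviceId } = req.params;
      client.publish(`devices/${deviceId}/command`, JSON.stringify(req.body));
      try {
        const reply = await waitForReply(`devices/${deviceId}/reply`, 5000);
        res.json(reply);
      } catch (err) {
        res.status(504).json({ error: 'no reply from device' });
      }
    });

    app.listen(3000);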
Is there a better way to handle this? Since this is a commercial project I want to get this implemented in the correct way.
Any help will be appreciated.
Thanks
We're using AWS IoT and I think it's a good way to handle this kind of IoT problem. Some of its relevant features:
Certificates: each device is a "thing" with its own certificate and policy attached, which takes care of security.
Shadow: a JSON document holding the device's current state. The Device Shadow service acts as an intermediary, allowing devices and applications to retrieve and update a device's shadow.
Serverless: we use Lambda to build the skill and the backend, which keeps things flexible.
Rules: we use them to intercept MQTT messages so we can report device state changes to Google and Alexa. By the way, for Google, implementing Report State has become mandatory for all partners to launch and certify.
You can choose either MQTT or HTTP.
It’s time-consuming but totally worth it! We've sold 8k+ products, so far so good.
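As a rough illustration of the Shadow part (not production code; the endpoint, thing name, certificate paths, and reported fields are placeholders), a device can report its state by publishing to the classic shadow topic over MQTT/TLS:

    // Report a device's state to its AWS IoT Device Shadow.
    // Endpoint, thing name, certificate paths, and the reported fields are placeholders.
    const fs = require('fs');
    const mqtt = require('mqtt');

    const thingName = 'my-lamp-01';
    const client = mqtt.connect('mqtts://your-endpoint-ats.iot.us-east-1.amazonaws.com:8883', {
      key: fs.readFileSync('private.pem.key'),
      cert: fs.readFileSync('certificate.pem.crt'),
      ca: fs.readFileSync('AmazonRootCA1.pem'),
    });

    client.on('connect', () => {
      // "reported" is the device's own view of its state in the shadow document.
      const shadowUpdate = { state: { reported: { power: 'on', brightness: 80 } } };
      client.publish(`$aws/things/${thingName}/shadow/update`, JSON.stringify(shadowUpdate));
    });

An IoT Rule can then match the relevant topics and trigger a Lambda that pushes the state change on to Google (Report State) or Alexa, as described above.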
At least Google Home doesn't really require synchronous operation there. Once you get the EXECUTE-intent via their API, you just need to send it to your device (it doesn't necessarily have to report its state back immediately).
Once its state changes, you either store it for further QUERY-intents or provide this data to the Google Homegraph server using the "Report State" interface.
I'm developing gBridge.io as a project that provides quite similar functionality (but for another target group). There, it is strictly split as described: an HTTP endpoint listener reacts to commands from Google Home and puts them into a cache, from where they are eventually sent to the matching MQTT topic. Another worker listens to the users' MQTT topics and stores their information in the cache, so it can be sent back to Google when required.
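A very reduced sketch of that split, with Node.js and made-up topic/route names (the "cache" here is just an in-memory map standing in for whatever store is actually used):

    // Decoupled variant: the HTTP handler only publishes commands and answers QUERY
    // from a state cache; a separate MQTT listener keeps that cache up to date.
    const express = require('express');
    const mqtt = require('mqtt');

    const app = express();
    app.use(express.json());
    const client = mqtt.connect('mqtt://broker.example.com'); // placeholder
    const stateCache = new Map(); // deviceId -> last reported state

    // Worker side: listen to all device state topics and update the cache.
    client.subscribe('devices/+/state');
    client.on('message', (topic, payload) => {
      const deviceId = topic.split('/')[1];
      stateCache.set(deviceId, JSON.parse(payload.toString()));
      // This is also the natural point to call Report State on the Homegraph API.
    });

    // EXECUTE: fire and forget; the device reports back on its state topic later.
    app.post('/execute/:deviceId', (req, res) => {
      client.publish(`devices/${req.params.deviceId}/command`, JSON.stringify(req.body));
      res.json({ status: 'PENDING' });
    });

    // QUERY: answer immediately from the cache instead of waiting for the device.
    app.get('/query/:deviceId', (req, res) => {
      res.json(stateCache.get(req.params.deviceId) || { online: false });
    });

    app.listen(3000);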
I want to know about SignalR's per-server connection limits. Let's say my mobile app starts a connection to the server and then sits idle for, say, 5 minutes (no data is sent from that client to the server or from the server to that client). Can SignalR use that connection to serve other users, or does SignalR create a separate connection for each user?
I also want to know whether I should use SignalR or just call the server every few seconds. My mobile app will be running in the background on the user's phone and might be active all day long.
SignalR maintains one connection per user, and the number of connections you can have open at a given time depends entirely on the server implementation, hardware, etc.
If your app does not rely on real-time data, then polling is an appropriate approach. However, if you do want nearly real-time data, I'd argue that polling every 2-3 seconds can be just as taxing as maintaining an open connection.
As a final note, SignalR can be configured to poll via its Long Polling transport, but it will still maintain a connection object on the server; the request just won't be held onto. That way SignalR can keep track of all the users and ensure that each user gets the messages that were sent to them.
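For example, with the JavaScript client (the hub URL and the "statusChanged" method name below are made up), forcing the Long Polling transport looks roughly like this:

    // Connect to a SignalR hub using the Long Polling transport.
    // The hub URL and "statusChanged" method name are assumptions for illustration.
    const signalR = require('@microsoft/signalr');

    const connection = new signalR.HubConnectionBuilder()
      .withUrl('https://example.com/statusHub', {
        transport: signalR.HttpTransportType.LongPolling, // poll instead of WebSockets
      })
      .withAutomaticReconnect()
      .build();

    connection.on('statusChanged', (status) => {
      console.log('server pushed:', status);
    });

    connection.start().catch((err) => console.error('connection failed:', err));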
I have developed a server-client model based on UDP. Clients connect to the server on a random basis; the number of clients alive at any given time is not fixed.
Any new client can communicate at any time, which means there could be 1 live client, 100 clients, or any other number.
Now, in this model, I need to add HTTP requests: a browser sends a request to the server, and the server forwards it to one of the clients based on some identification.
Is there any method or ready-made server (like nginx or lighttpd) that I can use for this requirement?
My big worry is that the destination clients are not fixed; they keep changing. Most servers (nginx or lighttpd) have static entries for the destination address.
I visualize your scenario as multiple sensors that connect to the server when they have something to say, then send a request and wait for the answer.
I gather you also want to administer these modules somehow, which is why you want to access them via HTTP.
You could leave new configuration items on the regular server, so that when a node next connects with an update, the response piggy-backs the changes intended for that node.
Or the server could somehow record your interest in accessing a certain node and, when that node connects, notify the interested client. The sensor would then need to watch for clients wanting to connect to it during a time window.
Certainly, more information would help us help you.
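To make the second idea concrete, here is a bare-bones Node.js sketch (the ports, the "<clientId>:" datagram prefix, and the routes are illustrative only): the server remembers each client's last-seen UDP address, and an HTTP request addressed to that ID is forwarded there.

    // Track UDP clients by ID and forward HTTP requests to their last-known address.
    const dgram = require('dgram');
    const http = require('http');

    const clients = new Map(); // clientId -> { address, port, lastSeen }
    const udp = dgram.createSocket('udp4');

    udp.on('message', (msg, rinfo) => {
      // Assume every datagram starts with "<clientId>:".
      const clientId = msg.toString().split(':')[0];
      clients.set(clientId, { address: rinfo.address, port: rinfo.port, lastSeen: Date.now() });
    });
    udp.bind(9000);

    // POST /send/<clientId> forwards the request body to that client's last address.
    http.createServer((req, res) => {
      const match = req.url.match(/^\/send\/([^/]+)$/);
      if (!match || req.method !== 'POST') {
        res.writeHead(404);
        return res.end();
      }
      const target = clients.get(match[1]);
      if (!target) {
        res.writeHead(404);
        return res.end('unknown or never-seen client');
      }
      let body = '';
      req.on('data', (chunk) => (body += chunk));
      req.on('end', () => {
        udp.send(Buffer.from(body), target.port, target.address);
        res.writeHead(202); // accepted; correlating the client's UDP reply is left out here
        res.end();
      });
    }).listen(8080);

Note that if the clients sit behind NAT, the recorded address/port is only reachable while the NAT mapping from the client's last datagram is still alive, so periodic keep-alive datagrams from the clients help.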
I want to actively send a message from the server, for example over UDP/TCP-IP, to a client running on an Arduino. I know this is possible if the user has port-forwarded the specific port to the device on the local network. However, I don't want the user to have to set up port forwarding manually. Would this be possible, perhaps using another protocol?
1 Arduino Side
I think the closest you can get to this is opening a connection to the server from the Arduino, then using available() to wait for the server to stream some data to the Arduino. Your code will be polling the open connection, but you are avoiding all the back-and-forth communication needed to open and close the connection, pass headers back and forth, etc.
2 Server Side
This means the bulk of the work will be on the server side, where you will need to manage open connections so you can instantly write to them when a user triggers some event which requires a message to be pushed to the arduino. How to do this varies a bit depending on what type of server application you are running.
2.1 Node.js "walk-through" of main issues
In Node.js, for example, you can res.write() on a connection without closing it - this should give a similar effect to having an open serial connection to the Arduino. That leaves you with the issue of managing the connection - should the server periodically check a database for messages for the Arduino? That would merely remove one hop from the arduino -> server -> database polling chain, so we should be able to do better.
We can attach a function that is triggered by the event of a message being added to the database. Node-orm2 is an Object Relational Mapping (ORM) library for node.js, and it offers hooks such as afterSave and afterCreate which you can use for this kind of thing. Depending on your application, you may be better off not using a database at all and simply using JavaScript objects.
The only remaining issue, then, is: once the hook is activated, how do we get the correct connection into scope so we can write to it? You can save the relevant data you have on the request in some global data structure, maybe a dictionary indexed by Arduino ID, and in the triggered function you fetch that data - i.e. the request/response context - and write to it.
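A hedged sketch of that last step (route names and the 10-minute cutoff are arbitrary choices, not requirements): keep the response objects in a global map keyed by Arduino ID, and write to them from whatever hook fires when a new message appears.

    // Long-poll registry: each Arduino opens GET /poll/<id> and the connection is kept
    // open; pushToArduino(id, msg) writes to it whenever a message needs to go out.
    const http = require('http');

    const openConnections = new Map(); // arduinoId -> http.ServerResponse

    http.createServer((req, res) => {
      const match = req.url.match(/^\/poll\/([^/]+)$/);
      if (!match) {
        res.writeHead(404);
        return res.end();
      }
      const arduinoId = match[1];
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      openConnections.set(arduinoId, res);

      // Re-establish the connection on a long interval rather than after every message.
      const timer = setTimeout(() => res.end(), 10 * 60 * 1000);
      req.on('close', () => {
        clearTimeout(timer);
        openConnections.delete(arduinoId);
      });
    }).listen(8080);

    // Call this from a database hook (e.g. an afterSave) or any other server-side event.
    function pushToArduino(arduinoId, message) {
      const res = openConnections.get(arduinoId);
      if (res) res.write(message + '\n'); // the connection stays open for further messages
    }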
See this blog post for a great example, including node.js code which manages open connections, closing them properly and clearing from memory on timeout etc.
3 Conclusion
I haven't tested this myself - but I plan to since I already have an existing application using arduino and node.js which is currently implemented using normal polling. Hopefully I will get around to it soon and return here with results.
Typically in long-polling (from what I've read) the connection is closed once data is sent back to the client (arduino), although I don't see why this would be necessary. I plan to try keeping the same connection open for multiple messages, only closing after a fixed time interval to re-establish the connection - and I hope to set this interval fairly high, 5-15 minutes maybe.
We use Pubnub to send notifications to a client web browser so a user can know immediately when they have received a "message" and stuff like that. It works great.
This seems to fit the same constraints you are looking at: no static IP, no port forwarding. The user can theoretically just plug the thing in...
It looks like Pubnub has an Arduino library:
https://github.com/pubnub/arduino
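On the server side, publishing to the channel the Arduino subscribes to looks roughly like this with the PubNub Node SDK (keys, the userId, and the channel name are placeholders, and SDK details may differ by version):

    // Publish a message to the channel the Arduino is subscribed to.
    const PubNub = require('pubnub');

    const pubnub = new PubNub({
      publishKey: 'pub-c-xxxxxxxx',   // placeholder keys
      subscribeKey: 'sub-c-xxxxxxxx',
      userId: 'backend-server',
    });

    async function notifyArduino(command) {
      await pubnub.publish({
        channel: 'arduino-1',          // placeholder channel name
        message: { command },
      });
    }

    notifyArduino('led_on').catch(console.error);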
I'm using Websphere Application Server (WAS) 6.1's default messaging provider for JMS. My remote client application creates a connection, then does a setExceptionListener to register the callback.
When I simply stop the messaging engine using the WAS Integrated Solutions Console, my app behaves as expected, i.e., onException is called immediately and my app reacts accordingly. However, when I pull the network cable, the onException callback does not get called back for somewhere between 30 and 60 seconds.
The ugly result is that my app just tries to keep sending messages to WAS during this 30 to 60 second time frame and those messages just get lost. I've done several searches trying to find out more about the ExceptionListener (e.g., is there some configuration parameter used to specify a callback timeout), but have not had any success.
Hopefully, this makes sense to someone out there. Any suggestions how I might be able to detect the cable "cut" scenario more quickly? Thanks for your help.
-Kris
You don't happen to have a 30 second TCP timeout defined?
If so, then MQ has temporarily handed its responsibility over to the JVM/OS and is waiting for it to ACK whatever network-related operation it has requested. Perhaps try lowering the TCP timeout value...