Hello, I'm new to Android and I'm trying to make a simple RSS application.
I've put together all the basic parts of the application: the parser, fetching the RSS over an HTTP connection in an AsyncTask, and displaying the data in a ListView.
How can I refresh the RSS feed (Google News) without starting the application? What is the best method for it (push or pull), and how would I implement it, in simple terms? Thanks.
Option 1:
Implement AlarmManager, which will start a background service at a fixed interval; the service does its work and goes back to sleep until the next alarm.
https://developer.android.com/training/scheduling/alarms.html
Option 2:
Use Google Cloud Messaging: the server sends data to your phone, which triggers an app/service to start and do the work. However, I don't think this is required unless you want new data as soon as it's available rather than at a fixed interval.
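A rough sketch of Option 1 (the class name RssRefreshReceiver, the 15-minute interval, and the helper method are my assumptions, not from the question). The interval arithmetic is kept framework-free so it can be unit-tested off-device; the actual alarm registration only works inside an Android Context, so it is shown as a comment:

```java
// Sketch only: schedules a repeating background refresh with AlarmManager.
// RssRefreshReceiver and the 15-minute interval are assumed names/values.
public class RssAlarmScheduler {

    // First fire time, kept as plain arithmetic so it is testable off-device.
    public static long firstTriggerAt(long nowMillis, long intervalMillis) {
        return nowMillis + intervalMillis;
    }

    /* Inside an Android Context you would register the alarm roughly like so
       (inexact repeating alarms are kinder to the battery than exact ones):

       AlarmManager am = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
       Intent intent = new Intent(context, RssRefreshReceiver.class);
       PendingIntent pi = PendingIntent.getBroadcast(context, 0, intent,
               PendingIntent.FLAG_IMMUTABLE);
       long interval = 15 * 60 * 1000L;
       am.setInexactRepeating(AlarmManager.RTC_WAKEUP,
               firstTriggerAt(System.currentTimeMillis(), interval), interval, pi);

       RssRefreshReceiver would then kick off your existing fetch/parse code.
    */
}
```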
Context:
There are four fields in my data stream (text): id, userName, loginDateTime, loginGeoLocation. A user shouldn't be allowed to log in from different machines (geolocations), but users are logging in from multiple machines. The data stream comes from many external systems we don't control; it lands on a Kafka topic, and a Logstash pipeline picks it up and pushes it to an OpenSearch index.
Problem Statement:
I want to identify (alert) when a user logs in from a totally different location than their usual one. For example, user-a normally logs in from Redwood, CA, but suddenly logs in from Boston, MA. On this event an alert should trigger and send an email/push notification. How can we achieve this using a pipeline, Logstash, or any other method available with OpenSearch, without custom development or an interceptor on the stream?
After spending a lot of time on this, I figured out a few ways to do it:
Create an anomaly detector with custom rules (https://www.elastic.co/guide/en/machine-learning/7.17/ml-configuring-detector-custom-rules.html)
Use the fingerprint filter in Logstash while ingesting data (https://www.elastic.co/blog/logstash-lessons-handling-duplicates)
Customise a plugin to use custom-written rules (https://github.com/logstash-plugins/logstash-input-java_input_example)
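For the fingerprint route, a minimal Logstash sketch along the lines of the linked blog post might look like this. The field names come from the question; the hosts and index values are placeholders I've assumed:

```
filter {
  fingerprint {
    # Hash the fields that define "same login" for your use case.
    source => ["userName", "loginGeoLocation"]
    target => "[@metadata][fingerprint]"
    method => "MURMUR3"
    concatenate_sources => true
  }
}
output {
  opensearch {
    hosts => ["https://localhost:9200"]          # placeholder
    index => "user-logins"                       # placeholder
    document_id => "%{[@metadata][fingerprint]}" # dedupe on the fingerprint
  }
}
```

Using the fingerprint as the document id means a repeat of the same user/location pair overwrites the existing document rather than creating a new one, so a new document for a known user signals a new location.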
Hope this helps someone with a similar problem.
I'm trying to figure out how to see where a flowfile is in a NiFi flow, over HTTP. For example, say I have a web page where a user can upload files. I want to show the user that a file is currently being ingested/processed, and possibly what stage it's at. What does NiFi offer that I could leverage to get this information? Is there a way to see which processors a flowfile has gone through, or the processor/queue it's currently in?
Thanks
One option would be to use NiFi's provenance events: issue a provenance query for the flow file's UUID to retrieve all of its current events, which should give you a graph of the processors it has passed through:
https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#data_provenance
You can open Chrome Dev Tools while using provenance features in the UI and see what calls are being made to the REST API.
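As a rough sketch (the exact request shape differs across NiFi versions, so confirm it against the calls you see in Dev Tools), the provenance query is submitted as a POST to /nifi-api/provenance with a body along these lines, after which you poll the returned query resource for results:

```json
{
  "provenance": {
    "request": {
      "maxResults": 1000,
      "searchTerms": {
        "FlowFileUUID": "<flowfile-uuid>"
      }
    }
  }
}
```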
Another option is to build some kind of status updates into your flow. You could stand up your own HTTP service that receives simple events (an id, a timestamp, and a processor name), then put InvokeHttp processors in your flow wherever you want to report status to your service. Your UI would then use the status events from its own DB, or wherever you store them.
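A minimal sketch of such a status service, using only the JDK's built-in HTTP server. The key=value POST body format (id and processor fields) is my own invention for illustration; a real flow would POST this from InvokeHttp at each reporting point:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: collects "flowfile X is at processor Y" events posted by the flow.
public class StatusService {
    private final Map<String, String> latestStatus = new ConcurrentHashMap<>();
    private HttpServer server;

    // Starts on an ephemeral port; returns the port for clients to use.
    public int start() throws IOException {
        server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/status", exchange -> {
            if ("POST".equals(exchange.getRequestMethod())) {
                // Assumed body format: id=<flowfile-id>&processor=<name>
                String body = new String(exchange.getRequestBody().readAllBytes());
                Map<String, String> kv = new HashMap<>();
                for (String pair : body.split("&")) {
                    String[] p = pair.split("=", 2);
                    kv.put(p[0], p.length > 1 ? p[1] : "");
                }
                latestStatus.put(kv.get("id"), kv.get("processor"));
                exchange.sendResponseHeaders(204, -1); // no response body
            } else {
                exchange.sendResponseHeaders(405, -1);
            }
            exchange.close();
        });
        server.start();
        return server.getAddress().getPort();
    }

    // What the UI would poll to render "your file is at stage X".
    public String statusOf(String flowFileId) {
        return latestStatus.get(flowFileId);
    }

    public void stop() {
        server.stop(0);
    }
}
```

The UI then polls statusOf (or an equivalent GET endpoint) to render the current stage for each uploaded file.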
I am working on a web application with some charts, and I want to keep fetching data from the database so the user always has the latest picture of the data. I used SignalR: on the Report page's load I call a server method in the hub class, which gets the chart data from the database and passes it to the client; a handler on the client then processes the data and draws the chart. What I want is to keep getting the data continually. What are my options?
What is the best way to keep calling the hub's method at a fixed interval of, say, one minute?
I would not poll on the client or the server. Instead, use a service bus and publish your changes on that bus when they occur.
You can then forward these events to the client using SignalR. I have made a library for just that, called SignalR.EventAggregatorProxy:
https://github.com/AndersMalmgren/SignalR.EventAggregatorProxy/wiki
I wrote a blog post about it here
http://andersmalmgren.com/2014/05/27/client-server-event-aggregation-with-signalr/
You can also look at the demo
https://github.com/AndersMalmgren/SignalR.EventAggregatorProxy/tree/master/SignalR.EventAggregatorProxy.Demo.MVC4
Or look at the live demo here
http://malmgrens.org/signalr/
SignalR makes polling the server obsolete. What you want to do is schedule your database access. If you want to update your charts every minute, create a timer job in the server app that collects the data added to the database within the last minute, then send this data to your clients. This way your connected clients are always up to date without polling the server.
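The timer-job idea above can be sketched as a plain scheduler. The SignalR hub broadcast is represented by a Consumer and the database query by a Supplier, both my stand-ins, so the shape of the logic stays visible and testable outside any web framework:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
import java.util.function.Supplier;

public class ChartPushJob {
    private final ScheduledExecutorService pool =
            Executors.newSingleThreadScheduledExecutor();

    // One tick: query the latest data and push it to the connected clients.
    public static void tick(Supplier<String> queryLatest, Consumer<String> push) {
        push.accept(queryLatest.get());
    }

    // Run a tick every periodMillis (e.g. 60_000 for one minute), starting now.
    public ScheduledFuture<?> start(Supplier<String> queryLatest,
                                    Consumer<String> push, long periodMillis) {
        return pool.scheduleAtFixedRate(() -> tick(queryLatest, push),
                0, periodMillis, TimeUnit.MILLISECONDS);
    }

    public void stop() {
        pool.shutdownNow();
    }
}
```

In the real app, queryLatest would be the "rows added in the last minute" query and push would be the hub's broadcast to all connected clients.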
I am looking for a working example of a chat application using GlassFish and Comet.
I expect that when one client updates data on the server, the other clients will also see the data without refreshing the page, or get some push notification that the data on the server has changed.
It's very urgent. Thanks in advance.
Instead of Comet, just use a socket. You need to feel the pulse of the server continually to get updates from other users' responses. If the response token sent from the server indicates a new update, pull the updates in a new thread. Each message can have a header containing the sender's device identity, plus the message body; so at any point in time you can pull all the updates for your device, and the user interface can show them in the appropriate views.
In the user list view, show the last message plus the number of new unread messages. In the chat window, show all the messages that belong to that user.
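To illustrate the server-push idea with a plain socket, here is a bare-bones sketch, not a full chat server: a real one would track many connected clients and fan each message out to all of them.

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class PushSocketDemo {
    // Accepts one client and pushes a message to it without being asked,
    // which is the core difference from HTTP request/response polling.
    public static ServerSocket startServer(String message) throws IOException {
        ServerSocket server = new ServerSocket(0); // ephemeral port
        Thread t = new Thread(() -> {
            try (Socket client = server.accept();
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                out.println(message); // server-initiated push
            } catch (IOException ignored) {
                // demo only: a real server would log and recover
            }
        });
        t.setDaemon(true);
        t.start();
        return server;
    }
}
```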
Hope it helps
The easiest way to build a web-based chat room is to use a comet cloud service rather than building the long-polling machinery yourself. You can see an example from EZComet.
Though it is in PHP, I think it would be easy to write a Java version.
I need to invoke a long-running task from an ASP.NET page and allow the user to view the task's progress as it executes.
In my current case I want to import data from a series of data files into a database, which involves a fair amount of processing. I would like the user to see how far through the files the task is, and any problems encountered along the way.
Due to limited processing resources I would like to queue the requests for this service.
I have recently looked at Windows Workflow and wondered if it might offer a solution.
I am thinking of a solution that might look like:
ASP.NET AJAX page -> WCF Service -> MSMQ -> Workflow Service *or* Windows Service
Does anyone have any ideas, experience or have done this sort of thing before?
I've got a book that covers explicitly how to integrate WF (Workflow Foundation) and WCF. It's too much to post here, obviously. I think your question deserves a longer answer than can readily be given on this forum, but Microsoft offers some guidance.
And a Google search for "WCF and WF" turns up plenty of results.
I did have an app under development where we used a similar process with MSMQ. The idea was to deliver emergency messages to all of our stores in case of product recalls, or known issues that affect a large number of stores. It was developed and tested OK.
We ended up not using MSMQ because of a business requirement: we needed to know immediately if a message was not received so that we could call the store, rather than letting the store get it whenever its PC was able to pick the message up from the queue. However, it did work very well.
The article I linked to above is a good place to start.
Our current design, the one we went live with, does exactly what you asked about, using a Windows service.
We have a web page to enter messages and pick distribution lists; these are saved in a database.
We have a separate Windows service (we call it the AlertSender) that polls the database and checks for new messages.
The store-level PCs run a Windows service hosting a WCF client that listens for messages (the AlertListener).
When the AlertSender finds messages that need to go out, it sends them to the AlertListener, which is responsible for displaying the message to the stores and playing an alert sound.
As the messages are sent, the AlertSender updates the status of the message in the database.
As stores receive the message, a co-worker enters their employee number and clicks a button to acknowledge that they've received it. (This is a critical business requirement for us, because if all stores don't get the message we may need to physically call them to have them remove tainted product from shelves, etc.)
Finally, our administrative piece has a report (ASP.NET) tied to an AlertId that shows all of the pending messages, and their status.
You could have the back-end import process write status records to the database as it completes sections of the task, and the web app could simply poll the database at regular intervals and update a progress bar, or otherwise tick off tasks as they're completed, whatever is appropriate in the UI.
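That polling approach can be sketched as follows. A ConcurrentHashMap stands in for the status table, and the class and method names are my own; the web page would call the equivalent of poll on a timer to move the progress bar:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ImportProgress {
    // jobId -> percent complete; stands in for a status table in the database.
    private final ConcurrentMap<String, Integer> percentDone = new ConcurrentHashMap<>();

    // Called by the back-end import as it finishes each file.
    public void report(String jobId, int filesDone, int filesTotal) {
        percentDone.put(jobId, (int) Math.round(100.0 * filesDone / filesTotal));
    }

    // Called by the web app on its polling interval; 0 means "not started yet".
    public int poll(String jobId) {
        return percentDone.getOrDefault(jobId, 0);
    }
}
```

Problems encountered along the way could be recorded the same way, as rows keyed by jobId that the page lists beneath the progress bar.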