I'm trying to figure out how to see where a flowfile is in a NiFi flow over HTTP. For example, say I have a webpage where a user can upload files. I want to indicate to the user that the file is currently being ingested/processed, and possibly what stage it's at. What does NiFi offer that I could leverage to get this information? For example, is there a way to see which processors a flowfile has gone through, or the processor/queue it's currently in?
Thanks
One option would be to use NiFi's provenance events and issue a provenance query for the flowfile UUID to see all the current events, which should give you a graph of all the processors it has passed through:
https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#data_provenance
You can open Chrome Dev Tools while using the provenance features in the UI and see what calls are being made to the REST API.
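For reference, the query the UI issues can be reproduced from your own code. Below is a minimal C# sketch; the endpoint path and payload shape are assumptions based on what the UI sends, and the host, port, and UUID are placeholders, so verify the exact JSON in Dev Tools against your NiFi version:

    // Minimal sketch of a provenance lookup against the NiFi REST API.
    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class ProvenanceLookup
    {
        static async Task Main()
        {
            var http = new HttpClient { BaseAddress = new Uri("http://nifi-host:8080/nifi-api/") };

            // Submit an asynchronous provenance query filtered by flowfile UUID
            // ("YOUR-FLOWFILE-UUID" is a placeholder).
            var query = "{\"provenance\":{\"request\":{\"maxResults\":100," +
                        "\"searchTerms\":{\"FlowFileUUID\":\"YOUR-FLOWFILE-UUID\"}}}}";
            var submit = await http.PostAsync("provenance",
                new StringContent(query, Encoding.UTF8, "application/json"));

            // The response carries a query id; poll GET provenance/{id} until it
            // reports finished, read the events (processor, event type, time),
            // then DELETE the query to clean up. JSON parsing omitted here.
            Console.WriteLine(await submit.Content.ReadAsStringAsync());
        }
    }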
Another option is to build some kind of status updates into your flow. You could stand up your own HTTP service that receives simple events like an id, timestamp, and processor name; then in your flow you could put InvokeHTTP processors wherever you want to report status to your service. Your UI would then read the status events from your own DB or wherever you store them.
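A minimal sketch of such a status service in C# (the URL, port, and event fields are whatever you choose; these names are invented for illustration):

    // Bare-bones receiver for status events POSTed by InvokeHTTP processors
    // placed at reporting points in the flow.
    using System;
    using System.IO;
    using System.Net;

    class StatusService
    {
        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:9090/status/");
            listener.Start();

            while (true)
            {
                var ctx = listener.GetContext();
                using (var reader = new StreamReader(ctx.Request.InputStream))
                {
                    // Body might be e.g. {"id":"...","timestamp":"...","processor":"..."}.
                    // Persist it to your own DB here so the UI can query it later.
                    Console.WriteLine(reader.ReadToEnd());
                }
                ctx.Response.StatusCode = 200;
                ctx.Response.Close();
            }
        }
    }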
I have developed a BizTalk application. It receives an XML file and, after applying the business logic, sends the file to another location using the FILE adapter. I need to track the start and end times for both the Receive Port and the Send Port. I have created BAM activities and a view, and have created a tracking profile using the Tracking Profile Editor. I have used InterchangeID as the continuation ID token.
The problem is that in the BAM tracking I am getting two rows, one for the receive port and one for the send port. The continuation between the receive and send ports is not working.
The Continuation is not working most likely because InterchangeID is not naturally Promoted.
The small issue you have is that there is no naturally Promoted Property that can be used out of the box for this.
The simplest solution would be to create a custom Pipeline Component that Promotes InterchangeID (same property, just Promoted). Then your Tracking Profile should start working.
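A sketch of the relevant part of such a component (just the Execute method; the rest of the IComponent plumbing is omitted, and you should double-check the system-properties namespace against your BizTalk version):

    using Microsoft.BizTalk.Component.Interop;
    using Microsoft.BizTalk.Message.Interop;

    public class PromoteInterchangeId // IComponent/IPersistPropertyBag plumbing omitted
    {
        public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
        {
            const string sysProps = "http://schemas.microsoft.com/BizTalk/2003/system-properties";

            // Read the value already Written to the context...
            object interchangeId = pInMsg.Context.Read("InterchangeID", sysProps);

            // ...and Promote the same property so the Tracking Profile
            // continuation can pick it up on both ports.
            pInMsg.Context.Promote("InterchangeID", sysProps, interchangeId);

            return pInMsg;
        }
    }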
FYI, at this point you don't really need BAM, as it's pretty easy to query tracking directly using the same Promoted Property (which is essentially what BAM is doing via a slightly different path).
The InterchangeID will be present in the message context. Can you please confirm that you mapped the receive and send ports to the continuations in the Tracking Profile Editor? Refer to the article https://www.biztalk-server-tutorial.com/2013/02/08/how-to-enable-bam-continuation-between-receive-send-ports-using-tracking-profile-editor/ which shows the steps to add the continuation correctly.
Hello, I'm new to Android and I'm trying to make a simple RSS application.
I've put together all the basic aspects of the application, such as the parser, fetching the RSS over an HTTP connection with AsyncTask, and displaying the data in a ListView.
How can I refresh the RSS feed (Google News) without starting the application? What is the best method for it (push or pull), and could you give a simple explanation of how to implement it? Thanks.
Option 1:
Implement AlarmManager, which will start a background service at a specific interval; the service completes its action and goes back to sleep until the next call.
https://developer.android.com/training/scheduling/alarms.html
Option 2:
Use Google Cloud Messaging (the server sends your phone data, which triggers the app/service to start) and perform the action. However, I don't think this is required unless you want to get new data as soon as it's available rather than at a specific interval.
As far as I understand, both web feed formats, RSS and Atom, have the client request content from the server at periodic intervals. Whether or not there is new content, the client checks for updates.
Wouldn't it be more efficient the other way round, letting the server announce new updates? In this scenario, the server would have to keep track of the clients and of which update each one received, and it would have to send a message to each of them. But it still looks more efficient if client and server don't communicate when there is no news.
Is there a reason why web-feeds are the way they are?
This model is not inherent to feeds (RSS or Atom) but to HTTP itself, where a client queries a server to get data. At this point, it is the only way in a pure client -> server model to determine whether there is any new or updated data available.
Now, in the context of servers querying other servers, PubSubHubbub solves this with webhooks. Basically, when polling a given resource, a server can also "subscribe" by providing a webhook that will be called upon a change or update to the feed. This way the subscriber does not have to poll the feed over and over again.
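For illustration, a subscription request is just a form POST to the hub; here is a C# sketch (the hub, topic, and callback URLs are hypothetical, while the hub.* field names come from the PubSubHubbub spec):

    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;

    class PushSubscriber
    {
        static async Task Main()
        {
            var http = new HttpClient();
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["hub.mode"] = "subscribe",
                ["hub.topic"] = "http://example.com/feed.xml",           // the feed to watch
                ["hub.callback"] = "http://subscriber.example.com/hook"  // your webhook
            });

            // A 202 Accepted means the hub will verify the callback and then
            // push updates to it instead of the subscriber polling the feed.
            var response = await http.PostAsync("http://hub.example.com/", form);
        }
    }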
I am writing a web application using ASP.NET (not MVC), with .NET v4 (not v4.5).
I fetch some of the data which I must display from a 3rd-party web service, one of whose methods takes a long time (several seconds) to complete. The information to be fetched/prefetched varies depending on the users' initial requests (because different users ask for details about different objects).
In a single-user desktop application, I might:
Display my UI as quickly as possible
Have a non-UI background task to fetch the information in advance
Therefore hope to have an already-fetched/cached version of the data by the time the user drills down into the UI to request it
To do something similar using ASP.NET, I guessed I can:
Use a BackgroundWorker, passing the Session instance as a parameter to the worker
On completion of the worker's task, write fetched data to the Session
If the user's request for data arrives before the task is complete, then block until it has completed (a rough sketch follows)
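Roughly, I imagine something like this (only a sketch; .NET 4, so no async/await, and all names are mine):

    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using System.Web.UI;

    public partial class DetailsPage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            if (Session["fetchDone"] == null)
            {
                var done = new ManualResetEventSlim(false);
                Session["fetchDone"] = done;
                var session = Session; // capture: HttpContext isn't available on the worker thread

                Task.Factory.StartNew(() =>
                {
                    session["prefetched"] = CallSlowWebService(); // the several-seconds call
                    done.Set();
                });
            }
        }

        protected object GetPrefetchedData()
        {
            ((ManualResetEventSlim)Session["fetchDone"]).Wait(); // block if not ready yet
            return Session["prefetched"];
        }

        private object CallSlowWebService() { /* 3rd-party web service call */ return null; }
    }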
Do you foresee problems? Can you suggest improvements?
[There are other questions on StackOverflow about ASP.NET and background tasks, but these all seem to be about fetching and updating global application data, not session-specific data.]
Why not use the same discipline as in a desktop application:
Load the page without the data from the service (= Display my UI as quickly as possible)
Fetch the service data using an ajax call (= Have a non-UI background task to fetch the information in advance)
This is actually the same, although you can show an animated GIF indicating you are still in progress... (= Therefore hope to have an already-fetched/cached version of the data by the time the user drills down into the UI to request it)
In order to post example code, it would be helpful to know: are you using jQuery? Plain JavaScript? Something else? No JavaScript?
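Whichever client library you use, the server side in plain ASP.NET (non-MVC, .NET 4) could expose a page method for the ajax call to hit; a sketch, with placeholder names:

    using System.Web.Services;
    using System.Web.UI;

    public partial class MyPage : Page
    {
        // Callable from client script via a POST to MyPage.aspx/GetServiceData
        // with Content-Type: application/json.
        [WebMethod]
        public static string GetServiceData(string objectId)
        {
            // The slow 3rd-party call runs here, after the page has already
            // rendered, so the user sees the UI (and the animated GIF) meanwhile.
            return SlowThirdPartyService.FetchDetails(objectId);
        }
    }

    static class SlowThirdPartyService
    {
        public static string FetchDetails(string id) { return "..."; }
    }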
Edit
I am not sure if this was your plan, but another idea is to fetch the data on the server side as well, and cache it for future requests.
In this case the stages will be (a sketch follows the list):
1. Get a request.
2. Is the service data cached?
2.a. Yes? Serve the page with the full data.
2.b. No? Serve the page without the service data.
2.b.i. On the server side, fetch the service data and cache it for future requests.
2.b.ii. On the client side, fetch the service data and cache it for the current session.
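A sketch of the server side of steps 2 and 2.b.i, using HttpRuntime.Cache as the shared cache (the names and the 10-minute expiry are arbitrary choices for illustration):

    using System;
    using System.Threading.Tasks;
    using System.Web;
    using System.Web.Caching;

    public static class ServiceDataCache
    {
        // Step 2: null means the data is not cached yet, so serve the page
        // without it (2.b) and let the client fetch it via ajax (2.b.ii).
        public static string TryGet(string key)
        {
            return HttpRuntime.Cache[key] as string;
        }

        // Step 2.b.i: fill the cache in the background for future requests.
        public static void FetchAndCacheInBackground(string key)
        {
            Task.Factory.StartNew(() =>
            {
                string data = CallSlowService(key); // the several-seconds call
                HttpRuntime.Cache.Insert(key, data, null,
                    DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration);
            });
        }

        private static string CallSlowService(string key) { return "..."; }
    }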
Edit 2:
Bear in mind that the downside of this discipline is that if the method you use to fetch the data changes, you will have to remember to modify it on both the server and the client side.
I need to invoke a long-running task from an ASP.NET page and allow the user to view the task's progress as it executes.
In my current case I want to import data from a series of data files into a database, but this involves a fair amount of processing. I would like the user to see how far through the files the task is, and any problems encountered along the way.
Due to limited processing resources I would like to queue the requests for this service.
I have recently looked at Windows Workflow and wondered if it might offer a solution?
I am thinking of a solution that might look like:
ASP.NET AJAX page -> WCF Service -> MSMQ -> Workflow Service *or* Windows Service
Does anyone have any ideas, experience or have done this sort of thing before?
I've got a book that covers explicitly how to integrate WF (Workflow Foundation) and WCF. It's too much to post here, obviously. I think your question deserves a longer answer than can readily be given in full on this forum, but Microsoft offers some guidance.
And a Google search for "WCF and WF" turns up plenty of results.
I did have an app under development where we used a similar process using MSMQ. The idea was to deliver emergency messages to all of our stores in case of product recalls or known issues that affect a large number of stores. It was developed and tested OK.
We ended up not using MSMQ because of a business requirement - we needed to know if a message was not received immediately so that we could call the store, rather than just letting the store get it when their PC was able to pick up the message from the queue. However, it did work very well.
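For what it's worth, the queuing piece with System.Messaging looks roughly like this (a sketch; the queue path and message body are illustrative):

    using System.Messaging;

    class ImportQueue
    {
        const string QueuePath = @".\Private$\ImportRequests";

        public static void Enqueue(string dataFilePath)
        {
            if (!MessageQueue.Exists(QueuePath))
                MessageQueue.Create(QueuePath);

            using (var queue = new MessageQueue(QueuePath))
            {
                // The Windows/Workflow service on the other end picks this up
                // and runs the long import at its own pace.
                queue.Send(dataFilePath, "import-request");
            }
        }
    }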
The article I linked to above is a good place to start.
Our current design, the one that we went live with, does exactly what you asked about with a Windows service.
We have a web page to enter messages and pick distribution lists; these are saved in a database.
We have a separate Windows service (we call it the AlertSender) that polls the database and checks for new messages.
The store level PCs have a Windows service that hosts a WCF client that listens for messages (the AlertListener)
When the AlertSender finds messages that need to go out, it sends them to the AlertListener, which is responsible for displaying the message to the stores and playing an alert sound.
As the messages are sent, the AlertSender updates the status of the message in the database.
As stores receive the message, a co-worker enters their employee # and clicks a button to acknowledge that they've received the message. (Critical business requirement for us because if all stores don't get the message we may need to physically call them to have them remove tainted product from shelves, etc.)
Finally, our administrative piece has a report (ASP.NET) tied to an AlertId that shows all of the pending messages, and their status.
You could have the back-end import process write status records to the database as it completes sections of the task, and the web-app could simply poll the database at arbitrary intervals, and update a progress-bar or otherwise tick off tasks as they're completed, whatever is appropriate in the UI.
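A sketch of how that could be wired up (table and column names are invented): the import process updates a progress row as it works through the files, and the page's polling code reads it back.

    using System.Data.SqlClient;

    static class ImportProgress
    {
        const string ConnStr = "Server=.;Database=Imports;Integrated Security=true";

        // Called by the back-end import as it completes each file.
        public static void Report(int jobId, int filesDone, int filesTotal, string note)
        {
            using (var conn = new SqlConnection(ConnStr))
            using (var cmd = new SqlCommand(
                "UPDATE ImportJobs SET FilesDone = @done, FilesTotal = @total, " +
                "LastNote = @note WHERE JobId = @id", conn))
            {
                cmd.Parameters.AddWithValue("@done", filesDone);
                cmd.Parameters.AddWithValue("@total", filesTotal);
                cmd.Parameters.AddWithValue("@note", note);
                cmd.Parameters.AddWithValue("@id", jobId);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }

        // Called by the web app's polling code to drive the progress bar.
        public static int PercentDone(int jobId)
        {
            using (var conn = new SqlConnection(ConnStr))
            using (var cmd = new SqlCommand(
                "SELECT FilesDone * 100 / NULLIF(FilesTotal, 0) " +
                "FROM ImportJobs WHERE JobId = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", jobId);
                conn.Open();
                object result = cmd.ExecuteScalar();
                return (result == null || result is System.DBNull) ? 0 : (int)result;
            }
        }
    }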