Designing a visualisation for success/failure of processes - Kibana

Can someone suggest a way to achieve the following:
I'm getting a constant stream of events from a source. These events include data specifying whether a particular process has completed successfully; a successful run produces the following sequence of events:
Process started successfully
Process in progress
Process completed successfully
If the process does not complete successfully, the following events would be sent:
Process started successfully
Process in progress
Process failed
My question is - how can I set up a visualisation(?) in Kibana to display:
Process number
Process completed successfully (perhaps with a green button/tick) OR
Process failed (perhaps with a red button/cross)
Not sure how to achieve this.
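For illustration, the indexed events might look something like this (the field names process_id, status and @timestamp are hypothetical). The visualisation would then need to show, per process_id, whether the latest status is completed or failed:
{ "process_id": 42, "status": "started",     "@timestamp": "2018-03-01T10:00:00Z" }
{ "process_id": 42, "status": "in_progress", "@timestamp": "2018-03-01T10:00:05Z" }
{ "process_id": 42, "status": "failed",      "@timestamp": "2018-03-01T10:00:30Z" }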

Related

Why is the task status in DolphinScheduler always in the successfully submitted status?

When I click the Start button to run the workflow, the task status always stays in the "successfully submitted" state. How can I solve this problem?
1. First, check whether the WorkerServer service exists via jps, or check directly from the service monitoring whether a worker service is registered in zk.
2. If the WorkerServer service is normal, check whether the MasterServer puts the task into the zk queue; look in the MasterServer log and the zk queue to see whether the task is blocked.
3. If there is no problem with the above, check whether a Worker group is specified but none of the machines in that group is online.
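For step 1, a quick check from the shell might look like this (a sketch; the zk path varies by installation and version):
# list running Java processes and look for the worker
jps | grep WorkerServer
# or inspect ZooKeeper directly (the path below is illustrative)
zkCli.sh -server localhost:2181 ls /dolphinscheduler/nodes/worker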

Running process not shown in active process instances, and difference between ASYNC & SYNC tasks

My workflow is quite simple: I have two script tasks; the first is ASYNC and the second is SYNC. In each script I have a loop from 0 to Integer.MAX_VALUE, as follows:
for (int i = 0; i < Integer.MAX_VALUE; i++) {
    System.out.println("value is " + i);
}
When I run my process, it starts working and I can see my log file being filled. But when I want to stop it, I find nothing in my active process instances, nor in the completed or aborted ones. Even if I check my database, there is nothing related to this process in ProcessInstanceInfo or ProcessInstanceLog. So weird, isn't it? What could be the reason?
The goal of creating this workflow is to see the difference between ASYNC and SYNC tasks. As I understand it, when an ASYNC task starts running, the workflow doesn't have to wait until the task finishes; but what I see is that my ASYNC task is still running and the flow didn't move on to the next task. So my second question is: can anyone explain the difference between ASYNC and SYNC with a good example to learn from? I would appreciate an answer to at least one of my two questions. Thanks.
What do you stop? Do you abort the process instance?
In the scripts you can populate process variables with kcontext.setVariable("variable_name", "variable_value"). This will be reflected in the database if you have defined the process variable as persistent in the process model.
As for the tasks: a sync task returns flow control to the process only when it completes. In contrast, with an async task the process flow continues immediately after the task is dispatched for execution.
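For example, a script-task body might look like the sketch below. kcontext is the ProcessContext that jBPM predefines inside script tasks; "status" is a hypothetical process variable that would need to be declared (and marked persistent) in the process model:
// inside a jBPM script task: record progress in a process variable
kcontext.setVariable("status", "loop started");
for (int i = 0; i < 1000; i++) {
    System.out.println("value is " + i);
}
kcontext.setVariable("status", "loop finished");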

Task with no status leads to DAG failure

I have a DAG that fetches data from Elasticsearch and ingests it into the data lake. The first task, BeginIngestion, fans out into several tasks (one for each resource), and these fan out into more tasks (one for each shard). After the shards are fetched, the data is uploaded to S3, and the flow then converges into a task EndIngestion, followed by a task AuditIngestion.
It used to execute correctly, but now all tasks execute successfully while the "closing task" EndIngestion remains with no status. When I refresh the webserver's page, the DAG is marked as Failed.
(Screenshot: successful upstream tasks, with end_ingestion having no status and the DAG marked as Failed.)
I also dug into the task instance details and found:
Dagrun Running: Task instance's dagrun was not in the 'running' state but in the state 'failed'.
Trigger Rule: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'failed': 0, 'upstream_failed': 0, 'skipped': 0, 'done': 49, 'successes': 49}, upstream_task_ids=['s3_finish_upload_ingestion_raichucrud_complain', 's3_finish_upload_ingestion_raichucrud_interaction', 's3_finish_upload_ingestion_raichucrud_company', 's3_finish_upload_ingestion_raichucrud_user', 's3_finish_upload_ingestion_raichucrud_privatecontactinteraction', 's3_finish_upload_ingestion_raichucrud_location', 's3_finish_upload_ingestion_raichucrud_companytoken', 's3_finish_upload_ingestion_raichucrud_indexevolution', 's3_finish_upload_ingestion_raichucrud_companyindex', 's3_finish_upload_ingestion_raichucrud_producttype', 's3_finish_upload_ingestion_raichucrud_categorycomplainsto', 's3_finish_upload_ingestion_raichucrud_companyresponsible', 's3_finish_upload_ingestion_raichucrud_category', 's3_finish_upload_ingestion_raichucrud_additionalfieldoption', 's3_finish_upload_ingestion_raichucrud_privatecontactconfiguration', 's3_finish_upload_ingestion_raichucrud_phone', 's3_finish_upload_ingestion_raichucrud_presence', 's3_finish_upload_ingestion_raichucrud_responsible', 's3_finish_upload_ingestion_raichucrud_store', 's3_finish_upload_ingestion_raichucrud_socialprofile', 's3_finish_upload_ingestion_raichucrud_product', 's3_finish_upload_ingestion_raichucrud_macrorankingpresenceto', 's3_finish_upload_ingestion_raichucrud_macroinfoto', 's3_finish_upload_ingestion_raichucrud_raphoneproblem', 's3_finish_upload_ingestion_raichucrud_macrocomplainsto', 's3_finish_upload_ingestion_raichucrud_testimony', 's3_finish_upload_ingestion_raichucrud_additionalfield', 's3_finish_upload_ingestion_raichucrud_companypageblockitem', 's3_finish_upload_ingestion_raichucrud_rachatconfiguration', 's3_finish_upload_ingestion_raichucrud_macrorankingitemto', 's3_finish_upload_ingestion_raichucrud_purchaseproduct', 's3_finish_upload_ingestion_raichucrud_rachatproblem', 's3_finish_upload_ingestion_raichucrud_role', 's3_finish_upload_ingestion_raichucrud_requestmoderation', 's3_finish_upload_ingestion_raichucrud_categoryproblemto', 's3_finish_upload_ingestion_raichucrud_companypageblock', 's3_finish_upload_ingestion_raichucrud_problemtype', 's3_finish_upload_ingestion_raichucrud_key', 's3_finish_upload_ingestion_raichucrud_macro', 's3_finish_upload_ingestion_raichucrud_url', 's3_finish_upload_ingestion_raichucrud_document', 's3_finish_upload_ingestion_raichucrud_transactionkey', 's3_finish_upload_ingestion_raichucrud_catprobitemcompany', 's3_finish_upload_ingestion_raichucrud_privatecontactinteraction', 's3_finish_upload_ingestion_raichucrud_categoryinfoto', 's3_finish_upload_ingestion_raichucrud_marketplace', 's3_finish_upload_ingestion_raichucrud_macroproblemto', 's3_finish_upload_ingestion_raichucrud_categoryrankingto', 's3_finish_upload_ingestion_raichucrud_macrorankingto', 's3_finish_upload_ingestion_raichucrud_categorypageto']
As you can see, the "Trigger Rule" field says that one of the tasks is in a non-successful state, but at the same time the stats show that all upstreams are marked as successful.
If I reset the database, it doesn't happen, but I can't reset it for every execution (hourly). I also don't want to reset it.
Can anyone shed some light on this?
PS: I am running on an EC2 instance (c4.xlarge) with LocalExecutor.
[EDIT]
I found in the scheduler log that the DAG is in deadlock:
[2017-08-25 19:25:25,821] {models.py:4076} DagFileProcessor157 INFO - Deadlock; marking run failed
I guess this may be due to some exception handling.
I have had this exact issue before; in my case, my code was generating duplicate task IDs. It looks like there is also a duplicate ID in your case:
s3_finish_upload_ingestion_raichucrud_privatecontactinteraction
This is probably a year late for you, but hopefully it will save others lots of debugging time :)
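For anyone hitting this: a minimal sketch of how a duplicate task_id can slip in when tasks are generated from a list (the resource names and DAG below are hypothetical; note the dedupe with set() before the loop):
from datetime import datetime
from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator

dag = DAG('ingestion', start_date=datetime(2017, 8, 1), schedule_interval='@hourly')
end = DummyOperator(task_id='end_ingestion', dag=dag)

# 'privatecontactinteraction' appears twice: without deduplication, two
# tasks would be created with the same task_id and confuse the scheduler
resources = ['complain', 'privatecontactinteraction', 'user',
             'privatecontactinteraction']

for resource in set(resources):  # dedupe before creating tasks
    upload = DummyOperator(
        task_id='s3_finish_upload_ingestion_raichucrud_%s' % resource,
        dag=dag,
    )
    upload >> end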

NSTextView freezing my app when adding a lot of data asynchronously

I'm building a simple talker/listener app that receives OSC data through UDP. I'm using the OSCKit pod, which itself uses the CocoaAsyncSocket library for the internal UDP communication.

When I'm listening on a particular port to receive data from another OSC-capable software, I log the received commands to an NSTextView. The problem is that sometimes I receive thousands of messages in a very short period of time (EDIT: I just added a counter to see how many messages I'm receiving; I got over 14,000 in just a few seconds, and that is only a single moving object in my software). There is no way to predict when this is going to happen, so I cannot lock the textStorage object of the NSTextView to keep it from sending all its notifications to update the UI. The data is processed through a delegate callback function.

So how would you go around that limitation?
/// Handle incoming OSC messages
func handle(_ message: OSCMessage!) {
    print("OSC Message: \(message)")
    let targetPath = message.address
    let args = message.arguments
    let msgAsString = "Path: \"\(targetPath)\"\nArguments: \n\(args)\n\n"
    print(msgAsString)
    oscLogView.string?.append(msgAsString)
    oscLogView.scrollToEndOfDocument(self)
}
As you can see here (this is the callback function), I'm updating the text view directly from the callback (both appending data and scrolling to the end) every time a message is received. This is where Instruments tells me the slowdown happens, and the append is the slowest part. I didn't go further in the analysis, but it is certainly because each call triggers a visual update, which takes far longer than parsing 32 bits of data, and as soon as it finishes, another update arrives right away from the server's buffer.
Could I send that call to a background thread? Filling up a background thread with visual updates doesn't seem like a great idea. Maybe I should grow my own string buffer and flush it to the TextView every now and then with a timer?
I want to give this a console feel, but a console that freezes is not a console.
Here is a link to the project on GitHub. The pods are all there and configured with CocoaPods, so just open the workspace. You might not have anything that generates that much OSC traffic, but if you really feel like digging in, you can get IanniX, an open-source sequencer/trajectory automator that can generate OSC, and a lot of it. I've just downloaded it, and I'll build a quick project that should send enough data to freeze the app and add it to the repo if anybody wants to give it a shot.
I append the incoming data to a buffer variable and use a timer that flushes that buffer to the text view every 0.2 seconds. The update cycle of the text view is way too slow to handle the amount of incoming data, so offloading the work from the network callback to a timer lets the server keep processing data instead of being stopped every 32 bits.
If anybody comes up with a more elegant method, I'm open to it.
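A sketch of that approach, based on the handler above (the 0.2 s interval and the names logBuffer/startLogTimer/flushLogBuffer are just what I used):
var logBuffer = ""

// set up once, e.g. in viewDidLoad
func startLogTimer() {
    Timer.scheduledTimer(timeInterval: 0.2, target: self,
                         selector: #selector(flushLogBuffer),
                         userInfo: nil, repeats: true)
}

/// The network callback now only appends to the buffer
func handle(_ message: OSCMessage!) {
    logBuffer += "Path: \"\(message.address)\"\nArguments: \n\(message.arguments)\n\n"
}

/// The text view is now touched at most 5 times per second
@objc func flushLogBuffer() {
    guard !logBuffer.isEmpty else { return }
    oscLogView.string?.append(logBuffer)
    logBuffer = ""
    oscLogView.scrollToEndOfDocument(self)
}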

Build queue issues in CC.net

I have a question about how the build queue is configured in CC.net.
I believe we have an issue: when trying to "force" build a scheduled project, the server tries to run several builds at the same time, and all of them fail except the one that started first.
We need to get to a state where, regardless of how many builds are scheduled or how many we "force" start at about the same time, all build requests are placed into a build queue and executed one after another, in the order they were placed, with no extra requests generated.
A "Build Failed" email is sent even though the build was actually successful.
In short, the erroneous email is likely due to an error in the build server's scheduler/queue: it tries to run two builds instead of one when asked for a "forced" build, and as a result the first one succeeds and the second one fails.
How can I correct/resolve this issue?
Thanks
Nilesh
To specify your projects' queue, you need to set the queue property like this:
<project name="MyFirstProject" queue="Q1" queuePriority="1">
The default is one queue per project. If you manually set the same queue (for example, Q1) for all your projects, you will have a single shared queue.
As for queuePriority: projects waiting in the queue (not yet started) are ordered by queuePriority, and projects with a lower queuePriority start first.
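Putting it together, a sketch with two projects sharing one queue (project names are placeholders; the rest of each project block is elided):
<project name="MyFirstProject" queue="Q1" queuePriority="1">
  <!-- triggers, tasks, publishers ... -->
</project>
<project name="MySecondProject" queue="Q1" queuePriority="2">
  <!-- forced and scheduled builds of both projects now wait their turn in Q1 -->
</project>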
It's all described in the CC.net documentation, which is currently offline due to a problem at SourceForge.
