In the ETL there are 2 parallel jobs that start and finish at almost the same time. After they finish, each updates a few details in the same lookup table. Both parallel jobs take roughly the same amount of time to complete. If both jobs start at the same time, they create an MLOAD lock on the lookup table and both jobs fail. I tried releasing the lock and running them again, but they ran into the MLOAD lock again. So I added a wait time of 20 secs before the 1st parallel job; after that, both parallel jobs finished without any issue and updated the required details in the lookup table.
Adding a wait time is not an ideal solution in this use case, because more parallel jobs (somewhere around 20-30) need to be added later on. If a wait step were added before each parallel job, the total time would increase drastically: 20 sec for the 2nd job, 40 sec for the 3rd, and so on, which becomes a large number by the time the 20th job is reached.
So I am looking for an alternative to adding a wait step before each job to avoid the MLOAD lock issue.
Be sure each job has its own distinct log, work, and error table names. If multiple jobs that potentially try to run in parallel use the same names (e.g. the default generated names), you are more likely to run into issues.
But if the update is only for a "few details" then MLOAD is the wrong tool.
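For example, if each job really only updates a handful of rows, a plain SQL UPDATE (submitted via BTEQ or the jobs' own sessions, for instance) would replace the MLOAD utility lock with an ordinary row-level write lock; the table and column names below are placeholders:

    UPDATE mydb.lookup_table
    SET    load_status  = 'COMPLETE',
           last_load_ts = CURRENT_TIMESTAMP
    WHERE  job_name = 'PARALLEL_JOB_1';

Each parallel job would run its own UPDATE against its own rows, so the jobs no longer contend for a table-level utility lock.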
I have a DAG that inserts data into a SQL Server database. Some of the tasks take 24+ hours to run because the database it's inserting into is not high performing.
I need to mark the tasks as complete automatically if they take more than 24 hours to run, as I need to move on from them so I can start inserting the next day's worth of data (the DAG runs daily and the data source has new data coming in every day). How can I do this programmatically, so that I don't have to go into the UI to mark it as 'Success' or 'Failed'?
You could follow an approach similar to the one shown in this StackOverflow post: kill or terminate subprocess when timeout. Then, once the timeout occurs, you just need to make sure you don't raise any exception.
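For example, a rough sketch of that idea wrapped in a PythonOperator could look like this (the script name, DAG id, and the 24-hour limit are placeholders; on Airflow 1.10.x the import path is airflow.operators.python_operator instead):

    import subprocess
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def run_with_timeout(**_):
        # Hypothetical command that performs the slow SQL Server insert.
        proc = subprocess.Popen(["python", "insert_daily_data.py"])
        try:
            proc.wait(timeout=24 * 60 * 60)  # allow at most 24 hours
        except subprocess.TimeoutExpired:
            proc.kill()   # stop the slow insert
            proc.wait()   # reap the killed process
            # Deliberately do not re-raise: returning normally lets Airflow
            # record the task as a success, so the next day's run can proceed.


    with DAG(
        dag_id="daily_sql_server_load",
        start_date=datetime(2021, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        PythonOperator(
            task_id="insert_daily_data",
            python_callable=run_with_timeout,
        )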
We have a Composer environment with the configuration details below.
Composer version: composer-1.10.0-airflow-1.10.6
Machine type: n1-standard-4
Disk size (GB): 100
Worker nodes: 6
Python version: 3
worker_concurrency: 32
parallelism: 128
We have a problem with a DAG whose tasks take a long time to initialize. For example, the DAG has 3 tasks: Task1 -> Task2 -> Task3. Task1 takes at least 5 minutes to initialize, but once initialized it completes within seconds. Task2 again takes about 5 minutes to initialize and then executes within seconds. In other words, each task's initialization is slow while the task itself finishes quickly. The DAG is scheduled every 5 minutes, but completing it takes at least around 10 minutes, which affects the functionality and execution of the process.
Here is what each of the three tasks does. Task1 gathers basic information, such as the storage location, from configuration files/variables. Task2 checks the storage for any new files and, based on the file, triggers the relevant DAGs. Task3 sends a success email.
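Roughly, the DAG shape looks like the following (DAG id, task ids and callables are simplified placeholders, using the Airflow 1.10.x import path from the environment above):

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python_operator import PythonOperator


    def gather_config():       # Task1: read storage location from config files/variables
        pass

    def check_new_files():     # Task2: look for new files and trigger the relevant DAGs
        pass

    def send_success_email():  # Task3: send the success email
        pass


    with DAG(
        dag_id="file_watcher",
        start_date=datetime(2020, 1, 1),
        schedule_interval="*/5 * * * *",  # every 5 minutes
        catchup=False,
    ) as dag:
        t1 = PythonOperator(task_id="task1_gather_config", python_callable=gather_config)
        t2 = PythonOperator(task_id="task2_check_new_files", python_callable=check_new_files)
        t3 = PythonOperator(task_id="task3_send_success_email", python_callable=send_success_email)

        t1 >> t2 >> t3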
Also, I noticed that the worker nodes did not split the work among themselves; one worker node's CPU utilization is always high compared to the other worker nodes, and I do not know what the reason for it could be. Another interesting point is that even when the other DAGs are not running at that time, this DAG still takes 10 minutes to execute.
I would appreciate your help in solving this case.
This should be a comment but I don't have the reputation required.
My initial advice is to upgrade your Composer version: 1.10.0 has a few known bugs that are fixed in later versions. Right now the latest version is 1.10.4. This should correct the CPU staying at 100% (it did in our case). Are there many other DAGs running on your instance?
As I mentioned in the comment, the reason behind the high CPU pressure on that particular GKE node might become more evident after troubleshooting on the Airflow workflow/GKE side.
It regularly happens that on some Airflow runtime node the compute resources (memory/CPU) exceed the node's capacity, causing Airflow workloads (Pods) to be evicted and restarted, losing all process state data. Meanwhile the Celery executor, which is responsible for assigning tasks to the Airflow workers, may not even be aware of the worker's bad state or time-out, and takes no action to re-assign the task to another worker.
According to the GCP Composer release notes, the vendor has provided some essential fixes in the latest composer-1.10.* patches, improving Composer runtime performance and reliability, as #parakeet said in his answer.
You can also refer to the GCP Composer known issues knowledge base to keep track of the current problems and workarounds that the vendor shares with the community.
I am trying to understand AutoSys jobs. Suppose I have Job A that runs every 15 minutes. If for some reason Job A takes more than 15 minutes, will another instance of it run, or will it wait for the job to finish before running another instance?
In my experience, if the previous run is still running when the next scheduled time comes, another instance will not start. The job runs again once the previous run has finished and the next scheduled time arrives.
Another user also experienced this according to this answer.
I did not find any AutoSys documentation that officially confirms what happens in this situation, but I guess the best way to find out is to test it on your AutoSys instance.
I have experienced this first hand and can confirm that there won't be two instances in the mentioned scenario. The job will wait for the previous run to complete and will immediately kick off the next instance if the time condition was met before the previous run completed.
But this is the case only when the job is in the RUNNING state; if the job is in any other state, it will kick off based on the given start_time condition.
I am scheduling an Oozie MapReduce job to run every 15 minutes. I wonder what would happen if a job takes longer than that set time. Will it result in a job backlog? Or will Oozie create a new task/thread/fork for the new job while the previous one is still running?
Oozie won't run the next job before the previous one is over. If the first job takes more than 15 minutes to execute, the next one will run after its scheduled time. So the scheduled time and the actual run time may differ in Oozie.
EDIT:
Anyway, the described behaviour is only the default and can be changed. You can set the concurrency property in the controls block to more than 1, and the next job will run even if the first one is still running. Check my answer on a similar question.
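For instance, in the coordinator definition it could look roughly like this (app name, times, and workflow path are placeholders):

    <coordinator-app name="mr-every-15-min" frequency="${coord:minutes(15)}"
                     start="${startTime}" end="${endTime}" timezone="UTC"
                     xmlns="uri:oozie:coordinator:0.4">
        <controls>
            <!-- allow up to 2 coordinator actions to run at the same time -->
            <concurrency>2</concurrency>
        </controls>
        <action>
            <workflow>
                <app-path>${workflowAppPath}</app-path>
            </workflow>
        </action>
    </coordinator-app>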
My requirement is to create a job in Informatica that will run every 15 minutes and look at a status column in the abc table. If it is "Approved", it will exit and kick off the rest of the jobs.
If the status is not approved, it will do nothing and run again after 15 minutes. This process will continue until we have an approved status.
So, no matter which of the above two scenarios occurs, this process will run every 15 minutes.
I have implemented the same requirement in Unix using loops and conditional statements, but I am not sure how this can be achieved using Informatica. Could you please help me with this?
Regards,
Karthik
I would try adding a scheduler that runs every 15 minutes. The best way that I've found to "loop" sessions in Informatica is:
run the session once, check if it failed using conditional links
if it did fail, run a timer task for an amount of time (a minute, an hour, whatever)
then try to run the same session again by copying and pasting the session up ahead of the timer task, and repeat a few times as necessary.
So if you added a scheduler into the mix, you could set the scheduler to have the workflow run every 15 minutes, and have the timer tasks halt the workflow for 4 or 5 minutes each. Then you could use the SESSSTARTTIME function in some pre/post-session task to determine when the scheduler will fire off again, and simply abort the workflow before that time.