I have a use case where multiple jobs can run at the same time. The output of all the jobs will have to be merged with a common master file in HDFS (containing key-value pairs) that has no duplicates. I'm not sure how to avoid the race condition that could crop up in this case. For example, Job 1 and Job 2 could simultaneously write the same value to the master file, resulting in duplicates. Appreciate your help on this.
Apache Hadoop doesn't support parallel writing to the same file. Here is the reference.
Files in HDFS are write-once and have strictly one writer at any time.
So, multiple maps/jobs can't write to the same file simultaneously. A separate job, shell script, or other program has to be written to merge the outputs of the multiple jobs.
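For example, a minimal single-writer merge step could look like the Python sketch below. It assumes the job outputs and the master file are plain text with tab-separated key/value lines, that the hdfs dfs CLI is available on the machine running the merge, and that the merged data fits in memory; all paths are illustrative.

import subprocess

def read_kv(hdfs_glob):
    # Stream the matching files out of HDFS and collect them into a dict, which drops duplicate keys.
    out = subprocess.run(["hdfs", "dfs", "-cat", hdfs_glob],
                         capture_output=True, text=True, check=True).stdout
    pairs = {}
    for line in out.splitlines():
        if line.strip():
            key, _, value = line.partition("\t")
            pairs[key] = value  # the last value seen for a key wins
    return pairs

merged = read_kv("/data/master/part-*")               # existing master file(s)
merged.update(read_kv("/data/job_outputs/*/part-*"))  # outputs of the finished jobs

with open("master.tmp", "w") as f:
    f.writelines(f"{k}\t{v}\n" for k, v in merged.items())

# Overwrite the master from this single merging process, so there is only ever one writer.
subprocess.run(["hdfs", "dfs", "-put", "-f", "master.tmp", "/data/master/part-00000"],
               check=True)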
I am trying to solve the following problem with airflow:
I have a data pipeline where I want to run several processes on a number of Excel documents (e.g. 5,000 Excel files a day). My idea for a DAG is below:
Task 1 = Take an Excel file and add a new sheet to it.
Task 2 = Convert the returned Excel file to a PDF.
Tasks 1 and 2 in the DAG would call a processing tool running outside Airflow via an API call (so the actual data processing isn't happening inside Airflow).
I seem to be going around in circles trying to figure out the best approach to this workflow. Some questions I keep having are:
Should each DagRun be one Excel file, or should the DagRun take in a batch of Excel files?
If taking in a batch (which I presume is the correct approach), what is the recommended batch size?
How would I pass the returned values from Task 1 to Task 2? Would it be an XCom dictionary with a reference to each newly saved Excel file? I read somewhere that the max size of an XCom should be 48 KB, so an XCom of 5,000 Excel file paths will probably be larger than 48 KB.
The last, most tricky question I have is this: I would obviously want to start processing Task 2 as soon as even one Excel file from Task 1 has completed, because I wouldn't want to wait for the entire batch of Task 1 to finish before starting Task 2. How can I run Task 2 multiple times within the same DagRun, once for each new result that Task 1 produces? Or should Task 2 be its own DAG?
Am I approaching this problem the right way? How should I be tackling this problem?
Assumptions
I made some assumptions since I don't know all the details of the Excel file processing:
You cannot merge the Excel files since you need them separate.
Excel files are accessible from Airflow DAG (same filesystem or similar).
If any of that is not true, please clarify accordingly.
Answers
That being said, I'll first answer your questions and then share some thoughts:
I think you can process in batches, since using one run per file will be very slow (mostly because of scheduler overhead, which adds time between the processing of each Excel file). You'd also not be using all the available resources, so it's better to keep Airflow busier.
The batch size will depend on the processing load and the task design. From your question I assume you're thinking about having the batch inside the task, but if the service that processes the Excel files can handle good parallelism, I'd rather recommend one task per Excel file. Having 5,000 tasks (one for each file) would be a bad idea (because it would be difficult to see in the UI), but the exact number of files per batch depends mostly on your resources and the service's SLA.
From my experience I recommend using one task for everything, since you can call the service in parallel and, right after the service completes, directly transform the Excel file into a PDF.
This is solved by the answer to question #3.
Solution overview
The solution I imagine is something like:
First, a task that checks for the existence of pending files. You can do a fork using a BranchPythonOperator (example here).
Then you have X parallel tasks to process the Excel files (call the service) and transform them to PDF. This could be one PythonOperator task. If you use Airflow 2, you can simply use the @task decorator to simplify the code. X could be anywhere from 10 to 100, for example, depending on the resources and the service throughput.
Finally, have a task that triggers the DAG again to process more files. This could be implemented using a TriggerDagRunOperator (example here).
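Putting those pieces together, a rough sketch of such a DAG (recent Airflow 2.x) might look like the code below. The helpers list_pending_files, call_add_sheet_service and convert_to_pdf are hypothetical stand-ins for your own file listing and API calls, and the DAG name, task ids and batch size are only illustrative.

from datetime import datetime

from airflow import DAG
from airflow.decorators import task
from airflow.operators.empty import EmptyOperator  # use DummyOperator before Airflow 2.3
from airflow.operators.python import BranchPythonOperator
from airflow.operators.trigger_dagrun import TriggerDagRunOperator

BATCH_SIZE = 50  # files per run; tune to your resources and the service throughput

def choose_branch():
    # list_pending_files() is a hypothetical helper that returns the Excel files still to process.
    return "process_batch" if list_pending_files() else "no_files"

with DAG(
    dag_id="excel_to_pdf",
    start_date=datetime(2023, 1, 1),
    schedule=None,
    catchup=False,
) as dag:
    check = BranchPythonOperator(task_id="check_pending", python_callable=choose_branch)
    no_files = EmptyOperator(task_id="no_files")

    @task(task_id="process_batch")
    def process_batch():
        # Call the external service for each file in the batch, then convert the result to PDF.
        for path in list_pending_files()[:BATCH_SIZE]:
            with_sheet = call_add_sheet_service(path)  # hypothetical API call (your Task 1)
            convert_to_pdf(with_sheet)                 # hypothetical API call (your Task 2)

    batch = process_batch()
    rerun = TriggerDagRunOperator(task_id="trigger_next_batch", trigger_dag_id="excel_to_pdf")

    check >> [batch, no_files]
    batch >> rerun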
I am new to Airflow; I took some courses on it but did not come across any example for my use case. I would like to:
Fetch data from Postgres (the data is usually 1M+ rows, which I assume is too large for XComs)
(In some cases, process the data if needed, but this can usually be done inside the query itself)
Insert the data into Oracle
I tend to see workflows that first export the data to a CSV (from Postgres) and then load it into the destination database. However, I feel it would be best to do all three of these tasks in a single Python operator (for example, looping with a cursor and bulk inserting), but I'm not sure whether this is suitable for Airflow.
Any ideas on possible solutions to this situation? What is the general approach?
As you mentioned, there are several options.
To name a few:
Doing everything in one Python task.
Creating a custom operator.
Creating a pipeline that works with files.
All approaches are valid; each one has advantages and disadvantages.
First approach:
You can write a Python function that uses PostgresHook to create a dataframe and then load it into Oracle.
from airflow.operators.python import PythonOperator
from airflow.providers.postgres.hooks.postgres import PostgresHook

def all_in_one(**context):
    pg_hook = PostgresHook('postgres_conn')
    df = pg_hook.get_pandas_df('SELECT * FROM table')
    # do some transformation on df as needed and load it to Oracle
    # (for example with OracleHook.insert_rows or an equivalent bulk insert)

op = PythonOperator(
    task_id='all_in_one_task',
    python_callable=all_in_one,
    dag=dag,
)
Advantages:
Easy coding (for people who are used to writing Python scripts).
Disadvantages:
Not suitable for large transfers, as everything is held in memory.
If you need to backfill or rerun, the entire function is executed. So if there is an issue with loading into Oracle, you will still rerun the code that fetches the records from PostgreSQL.
Second approach:
You can implement your own MyPostgresqlToOracleTransfer operator with any logic you wish. This is useful if you want to reuse the functionality in different DAGs (see the sketch below).
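A minimal sketch of such a custom operator could look like the code below, assuming Airflow 2 with the postgres and oracle providers installed. The class name, constructor arguments and the simple fetch-then-insert logic are just one possible design; for very large tables you would fetch and insert in chunks instead of calling get_records once.

from airflow.models import BaseOperator
from airflow.providers.oracle.hooks.oracle import OracleHook
from airflow.providers.postgres.hooks.postgres import PostgresHook

class MyPostgresqlToOracleTransfer(BaseOperator):
    def __init__(self, sql, target_table, postgres_conn_id, oracle_conn_id, **kwargs):
        super().__init__(**kwargs)
        self.sql = sql
        self.target_table = target_table
        self.postgres_conn_id = postgres_conn_id
        self.oracle_conn_id = oracle_conn_id

    def execute(self, context):
        src = PostgresHook(postgres_conn_id=self.postgres_conn_id)
        dst = OracleHook(oracle_conn_id=self.oracle_conn_id)
        rows = src.get_records(self.sql)  # pulls everything into memory; chunk this for big tables
        dst.insert_rows(table=self.target_table, rows=rows)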
Third approach:
Work with files (data-lake style).
The file can be on the local machine if you have only one worker; if not, the file must be uploaded to shared storage (S3, Google Cloud Storage, or any other storage that all workers can access).
A possible pipeline can be:
PostgreSQLToGcs -> GcsToOracle
Depending on which services you are using, some of the required operators may already be implemented by Airflow.
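As a rough illustration (recent Airflow 2.x, assuming the Google provider is installed and GCS is the shared storage), the first leg can use the existing PostgresToGCSOperator; the GCS-to-Oracle leg has no ready-made operator as far as I know, so it would be a custom one, for example a GCS-reading variant of the transfer operator sketched above. The connection IDs, bucket, file and table names below are made up.

from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.postgres_to_gcs import PostgresToGCSOperator

with DAG(dag_id="pg_to_oracle", start_date=datetime(2023, 1, 1),
         schedule=None, catchup=False) as dag:
    export = PostgresToGCSOperator(
        task_id="postgres_to_gcs",
        postgres_conn_id="postgres_conn",
        sql="SELECT * FROM source_table",
        bucket="my-transfer-bucket",          # illustrative bucket name
        filename="exports/source_table.csv",  # object path inside the bucket
        export_format="csv",
    )
    # export >> gcs_to_oracle  (custom operator for the second leg)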
Advantages:
Each task stands on its own, so once you have successfully exported the data to disk, in the event of a backfill or failure you can re-run just the failed operators rather than the whole pipeline. You can also keep the exported files in cold storage in case you need to rebuild from history.
Suitable for large transfers.
Disadvantages:
It adds another service which is "not needed" (the shared storage resource).
Summary
I prefer the 2nd and 3rd approaches. I think they are more suited to what Airflow provides and allow more flexibility.
I have multiple txt files, each with 1 million records (say 10 files), and the names of these files are saved in LIST_OF_FILES.txt.
I have created a sequence and a parallel job to extract data from those files and load it into the tables (DB2).
Just imagine I am done with the first 2 files. While loading the 3rd file (consider that 10,000 records have been loaded into the table so far), the parallel job got aborted due to some environmental issue.
Now I want to load records starting from 10,001, where the job got aborted.
JOB DESIGN
Execute command activity_1: wc -l LIST_OF_FILES.txt.
Starting loop: Start:1 , Step: 1 , To: output of Execute command activity_1.
Execute command activity_2: head -output_loop_counter LIST_OF_FILES.txt | tail -1.
parallel job: extract job to load records from file to table.
Execute command activity_3: Moving the extracted file to another folder.
End loop: above steps will continue till last file.
This is not an out-of-the-box capability. You need a job design that keeps track of the number of records processed, so that you can restart from a known good point. Keep in mind, too, that any pending transactions are rolled back if a job aborts, so your design probably needs to check how many rows were actually loaded into the target table.
I would keep the above sequence design and alter the extract job to perform a lookup against the target table on your unique/primary key for the table, assuming that you have one.
Set the failed lookup to reject, and connect your load stage to the reject link. You can dump the valid lookups into a Copy stage to dead-end them and get rid of them. This roughly mimics the Change Capture stage, but with no sorting requirement and no warnings about values.
I would like to know how to transfer data between tasks without storing it in between.
In the attached image you can see the flow of tasks. As of now I am storing the output CSV file of each task on my local machine and fetching this CSV file again as the input to the next task. I wanted to know if there is any other way to pass data between tasks without storing it after each task. I researched a bit and came across XComs. I wanted to make sure whether XComs are the right way to achieve this or whether I am wrong. I could not find any practical examples. Any help is appreciated, as I am just a newbie to Airflow who started a couple of days ago.
The short answer is no; tasks require data to be at rest before moving to the next task. XComs are best suited to short strings that can be shared between tasks (file directories, object names, etc.). Your current flow of storing the data in CSV files between tasks is the optimal way of running your flow.
XCom is intended for sharing little pieces of information, like the length of a SQL table, specific values, or things like that. It is not made for sharing dataframes (which can be huge), because the shared information is written to the metadata database.
So either you keep exporting the CSVs to your computer (or uploading them somewhere) and reading them in the next operator, or you combine the operators into one.
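To make the first pattern concrete, here is a minimal sketch (assuming Airflow 2's TaskFlow API) in which only the CSV path travels through XCom while the data itself stays on disk; the DAG name, path and task bodies are illustrative.

from datetime import datetime

from airflow.decorators import dag, task

@dag(start_date=datetime(2023, 1, 1), schedule=None, catchup=False)
def csv_pipeline():
    @task
    def extract() -> str:
        path = "/tmp/daily_extract.csv"
        # ... write the extracted data to path ...
        return path  # only this small string is stored in XCom

    @task
    def transform(path: str):
        # ... read the CSV from path and continue processing ...
        pass

    transform(extract())

csv_pipeline()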
I'm using Hadoop and working with a map task that creates files that I want to keep. Currently I am passing these files through the collector to the reduce task. The reduce task then passes these files on to its collector, which allows me to retain the files.
My question is how do I reliably and efficiently keep the files created by map?
I know I can turn off the automatic deletion of the map's output, but that is frowned upon. Are there any better approaches?
You could split it up into two jobs.
First, create a map-only job outputting the sequence files you want.
Then take your existing job (the map now does essentially nothing, though you could still do some crunching depending on your implementation and use cases) and reduce as you do now, using the output of the previous map-only job as the input to the second job.
You can wrap this all up in one jar, running the two jobs back to back and passing the first job's output path as an argument for the second job's input path.
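For illustration, a sketch of that two-job chain, using Hadoop Streaming driven from a small Python script rather than a Java driver, might look like the code below; the streaming jar location, mapper/reducer scripts and HDFS paths are placeholders for your own setup.

import subprocess

STREAMING_JAR = "/usr/lib/hadoop/hadoop-streaming.jar"  # location varies by distribution

def run(cmd):
    subprocess.run(cmd, check=True)

# Job 1: map-only (zero reducers); its output files are the ones you want to keep.
run(["hadoop", "jar", STREAMING_JAR,
     "-D", "mapreduce.job.reduces=0",
     "-input", "/data/input",
     "-output", "/data/map_output",
     "-mapper", "my_mapper.py",
     "-file", "my_mapper.py"])

# Job 2: identity map plus your existing reduce, reading job 1's output as its input.
run(["hadoop", "jar", STREAMING_JAR,
     "-input", "/data/map_output",
     "-output", "/data/reduce_output",
     "-mapper", "cat",
     "-reducer", "my_reducer.py",
     "-file", "my_reducer.py"])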