Extracting policy data into a CSV file - Gosu

I want to take data from the GX model and save it into a CSV file. These are the challenges I am facing:
How do I store the GX model file in an object?
After storing the object, what are the ways to write it to a CSV file?

Question 1:
If my understanding is correct, you have an XML input and you need to store the data in a POJO class (object).
If that is the requirement, you need to parse the XML, pick each value, and map it to the POJO.
If it is a GX model, you might also have the schema (XSD); in that case Guidewire auto-generates the classes (in version 9) and you can access the XML content directly through the instance created by Guidewire for that particular GX model. The rest is mapping, which won't be a great deal of work.
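A minimal sketch of that parse-and-map step, written in C# with XDocument since the exact Gosu API depends on your GX model; the element names and the Policy class are invented for illustration, and in Gosu the generated GX model classes would give you this mapping for free:

using System;
using System.Xml.Linq;

class XmlToObjectSketch
{
    // Hypothetical POJO-style class holding the mapped data.
    class Policy
    {
        public string PolicyNumber;
        public string Insured;
    }

    static Policy Map(string xml)
    {
        var doc = XDocument.Parse(xml);
        // Pick each value out of the XML and map it onto the object.
        return new Policy
        {
            PolicyNumber = (string)doc.Root.Element("PolicyNumber"),
            Insured = (string)doc.Root.Element("Insured"),
        };
    }

    static void Main()
    {
        var p = Map("<Policy><PolicyNumber>POL-001</PolicyNumber><Insured>Jane Doe</Insured></Policy>");
        Console.WriteLine($"{p.PolicyNumber} / {p.Insured}");
    }
}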
Question 2:
With a ListIterator you can directly export the content to CSV (or any other) format. You can refer to the "Batch History Export" functionality for the same.
Many JARs are also available for this.
CSV just means comma-separated values, so you can build the file content with plain string logic and save it with a .csv extension.
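As an illustration of that last point, here is a minimal sketch in C# of writing objects out as CSV by hand; the PolicyRow type, field names, and output path are all made up, and the same string-building logic translates directly to Gosu:

using System.Collections.Generic;
using System.IO;
using System.Linq;

class CsvExportSketch
{
    // Hypothetical record type standing in for the object mapped from the GX model.
    record PolicyRow(string PolicyNumber, string Insured, decimal Premium);

    // Quote a value only when it contains a delimiter, a quote, or a newline.
    static string Escape(string value) =>
        value.IndexOfAny(new[] { ',', '"', '\n' }) >= 0
            ? "\"" + value.Replace("\"", "\"\"") + "\""
            : value;

    static void Main()
    {
        var rows = new List<PolicyRow>
        {
            new PolicyRow("POL-001", "Smith, John", 1200.50m),
            new PolicyRow("POL-002", "Jane Doe", 980.00m),
        };

        // Header line first, then one comma-separated line per object.
        var lines = new[] { "PolicyNumber,Insured,Premium" }
            .Concat(rows.Select(r =>
                string.Join(",", Escape(r.PolicyNumber), Escape(r.Insured), r.Premium.ToString())));

        File.WriteAllLines("policies.csv", lines);
    }
}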
Hope this makes sense. :)

Related

Debatching a big input flat file into multiple smaller output files with a specific record count

I have a positional input flat file schema of the following kind.
<Employees>
  <Employee>
    <Data>
In the mapping, I need to extract the strings on a positional basis to pass on to the target schema.
I have the following conditions -
If Data has 500 records, there should be 5 files of 100 records at the output location.
If Data has 522 records, there should be 6 files (5*100, 1*22 records) at the output location.
I have tried a few suggestions from the internet, like:
Setting "Allow Message Breakup At Infix Root" to "Yes" and setting maxOccurs to "100", but this doesn't seem to be working. See How to Debatch (Split) a Flat File using Flat File Schema?
I'm also working on the custom receive pipeline component suggested at Split Flat Files into smaller files (on row count) using Custom Pipeline, but I'm quite new to this, so it's taking some time.
Please let me know if there is any simpler way of doing this, without implementing the custom pipeline component.
The approach I'm following is to divide the input flat file into multiple smaller files per the conditions above, write them to the receive location, and then process the files with the native flat file disassembler. Please correct me if there is a better approach.
You have two options:
Import the flat file to a SQL table using SSIS.
Parse the input file as one Message, then map to a Composite Operation to insert the records into a SQL table. You could also use an Insert Updategram.
After either 1 or 2, call a Stored Procedure to retrieve the Count and Order of messages you need.
A simple way to handle a flat file structure without writing custom C# code is to use a database table: insert the whole file as records into the table, then have a Receive Location that polls for records in the batch size you want.
Another approach is the Scatter-Gather pattern. In this case you do set Max Occurs to 1, which debatches the file into individual records, and you then have an Orchestration that reassembles them into the batch size you want. You will have to read up on Correlation Sets to do this.
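If you do pre-split the file outside BizTalk, as the question's last paragraph proposes, the core logic is small. A minimal C# sketch, assuming one Data record per line; the file names are made up, and the 100-record batch size comes from the question:

using System.IO;
using System.Linq;

class FileSplitterSketch
{
    static void Main()
    {
        const int batchSize = 100;                    // 100 records per output file
        var records = File.ReadAllLines("input.txt"); // assumes one record per line

        for (int i = 0; i * batchSize < records.Length; i++)
        {
            // 522 records -> output_part0..output_part4 with 100 each, output_part5 with 22.
            var chunk = records.Skip(i * batchSize).Take(batchSize);
            File.WriteAllLines($"output_part{i}.txt", chunk);
        }
    }
}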

Add file name as column in data factory pipeline destination

I am new to Data Factory. I am loading a bunch of CSV files into a table, and I would like to capture the name of each CSV file as a new column in the destination table.
Can someone please help me with how I can achieve this? Thanks in advance.
If you use a Mapping data flow, there is an option under source settings to hold the file name being used, and later on it can be mapped to a column in the sink.
If your destination is Azure Table storage, you could put your file name into the partition key column. Otherwise, I don't think there is a native way to do this with ADF; you may need a custom activity or a stored procedure.
One post says you could use Databricks to handle this:
Data Factory - append fields to JSON sink
Another post says they are using U-SQL to handle this:
use adf pipeline parameters as source to sink columns while mapping
For the stored procedure option, please see this post: Azure Data Factory mapping 2 columns in one column
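If you go the custom activity route, the heart of it is just rewriting each CSV with an extra column before loading; a minimal C# sketch of that idea, assuming simple comma-separated files with a header row (the folder names and the SourceFileName column are made up):

using System.IO;
using System.Linq;

class AddFileNameColumnSketch
{
    static void Main()
    {
        Directory.CreateDirectory("staged");

        foreach (var path in Directory.GetFiles("incoming", "*.csv"))
        {
            var name = Path.GetFileName(path);
            var lines = File.ReadAllLines(path);

            var output = lines.Select((line, i) =>
                i == 0 ? line + ",SourceFileName" // extend the header row
                       : line + "," + name);      // tag every data row with the file name

            File.WriteAllLines(Path.Combine("staged", name), output);
        }
    }
}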

Consuming multiple CSV files describing a nested structure in BizTalk

I have a requirement to consume a CSV "dataset" consisting of three flat files - a control file, a headers file, and a lines file - which together define a nested data structure.
The control file items have a field called ControlID, which can be used in the headers file to identify those header records which "belong" to that control item.
The header records have a field called HeaderID, which can be used in the lines file to identify those line records which "belong" to a given header record.
I'd like to consume all three files and then map them into some kind of nested schema structure. My question is how would I do this? Can I do it in a pipeline component?
I would look at two options. Both involve correlating all three files to an Orchestration using a Parallel Convoy.
Use a Multi-input Map to join the files. You should be able to use the HeaderID as a filter, using the Equal functoid to match the lines to their header.
Use a SQL Stored Procedure to group the data as described here: BizTalk: Sorting and Grouping Flat File Data In SQL Instead of XSL
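Whichever option you pick, the underlying join is an ordinary parent/child grouping. A C# sketch of that shape, assuming the three files have already been parsed into simple records; all type and field names here are invented for illustration:

using System;
using System.Collections.Generic;
using System.Linq;

class NestedJoinSketch
{
    record Control(string ControlId, string Description);
    record Header(string HeaderId, string ControlId, string Name);
    record Line(string LineId, string HeaderId, decimal Amount);

    static void Main()
    {
        var controls = new List<Control> { new Control("C1", "Batch one") };
        var headers = new List<Header>
        {
            new Header("H1", "C1", "Order A"),
            new Header("H2", "C1", "Order B"),
        };
        var lines = new List<Line>
        {
            new Line("L1", "H1", 10m),
            new Line("L2", "H1", 5m),
            new Line("L3", "H2", 7m),
        };

        // Nest lines under their header and headers under their control record,
        // mirroring the equality filter a multi-input map would apply.
        var nested = controls.Select(c => new
        {
            Control = c,
            Headers = headers
                .Where(h => h.ControlId == c.ControlId)
                .Select(h => new
                {
                    Header = h,
                    Lines = lines.Where(l => l.HeaderId == h.HeaderId).ToList(),
                })
                .ToList(),
        });

        foreach (var c in nested)
            Console.WriteLine($"{c.Control.ControlId}: {c.Headers.Count} header(s), " +
                              $"{c.Headers.Sum(h => h.Lines.Count)} line(s)");
    }
}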

Pentaho Kettle: Mailing the result of a transformation

I have a Kettle job and a transformation.
The transformation writes the result set of a SELECT SQL statement to a CSV file.
The job picks up the result file and mails it to the user.
I need to mail the file only if it contains any data; if it is empty, the result should not be mailed to the user.
Alternatively, how can I find out whether the result of a transformation is empty (is there any file-size validator job entry available)?
I am not able to find any job entries for this kind of condition.
Thanks in advance.
You can use the Evaluate files metrics job entry in the Conditions branch. Set your condition on the Advanced tab.
You can set your transformation to generate the file only if there is data, and then use the File exists job entry in your main job.

Storing and displaying files from a SQL database

How do I store files (PDF and Word files) in a SQL database, and how do I display those files with "Save" and "Open" options when the user clicks one? I am doing the project as a C# + ASP.NET web application.
You need to do several things here:
1) Create a UI page that allows users to upload files.
This page will have a FileUpload control to check for the desired extensions.
2) Write code to save these files to a database
The files can be stored as binary blobs. It will be up to you and your application to decide the schema of your database. You may also choose one of many ORM tools to provide access to the database from your code; see:
Linq2SQL
EntityFramework
ADO.net
Or see Creating A Data Access Layer
You have many choices, choose whatever seems most natural / easy for you.
3) Create a UI for the user to select existing files
This will use your ORM data layer to read back the list of files and display buttons / links for the user to select and download.
4) Retrieve the files from the database once the user selects one and return the file
Read this MSDN article about returning binary files
Furthermore, Google around a bit; you'll probably find lots of existing solutions with frameworks like DNN etc.
To store files, you should check out FILESTREAM in SQL Server 2008: http://msdn.microsoft.com/en-us/library/cc716724.aspx
Other traditional platforms have similar support (with binary or image data types).
An alternative is you can store the file in a shared filesystem, and only store the path to the file in the SQL table.
The most common way is to have two columns in the database so the file can be stored properly: one column holds the file name with its extension (e.g. file1.txt), and the second column is of a binary datatype.
At the application level, a method takes the uploaded file and converts it to an array of bytes; this array is stored in the binary field in SQL, and the file name is stored in the first field.
To read the file back, another method reads the binary field from SQL, converts it back to a FileStream, and saves it with the original file name and extension.
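A minimal ADO.NET sketch of that two-column approach; the Files table, its columns, and the connection string are assumptions, and the real schema is up to you:

using System.Data.SqlClient;
using System.IO;

class FileToSqlSketch
{
    // Assumed table: CREATE TABLE Files (FileName nvarchar(260), Content varbinary(max))
    static void SaveFile(string connectionString, string path)
    {
        byte[] content = File.ReadAllBytes(path);

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO Files (FileName, Content) VALUES (@name, @content)", conn))
        {
            cmd.Parameters.AddWithValue("@name", Path.GetFileName(path));
            cmd.Parameters.AddWithValue("@content", content);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}

Reading it back is the reverse: SELECT the Content column, cast the result to byte[], and write it out under the stored file name.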
Use a FileUpload control to upload the file.
Read the file into a byte array:
byte[] data = System.IO.File.ReadAllBytes(filePath);
Convert the byte[] to hex and store it in an nvarchar field in SQL:
using System.Text; // for StringBuilder

StringBuilder sb = new StringBuilder();
foreach (byte b in data)
{
    // "X2" pads each byte to two hex digits so the string can be decoded again.
    sb.Append(b.ToString("X2"));
}
When you need to display it, convert the hex string back to a byte[], create a file out of it, and let the user open or save it from there.
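A minimal sketch of that reverse step, under the same "X2" encoding assumption used above:

using System;

class HexDecodeSketch
{
    static byte[] FromHex(string hex)
    {
        // Each byte was written as exactly two hex digits ("X2"), so walk the string in pairs.
        var data = new byte[hex.Length / 2];
        for (int i = 0; i < data.Length; i++)
            data[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
        return data;
    }

    static void Main()
    {
        var bytes = FromHex("48656C6C6F");
        Console.WriteLine(System.Text.Encoding.ASCII.GetString(bytes)); // prints "Hello"
    }
}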
