Load multiple files through SQL*Loader - Unix

I have a requirement to load multiple files, received from two source systems, into one table using SQL*Loader.
To make this possible, I want to understand the following:
What are the pros and cons of combining multiple files like this? I need this to compare merging the files at the source against merging them via SQL*Loader.
Is there any other way of loading the data from .CSV files into Oracle besides SQL*Loader when there are multiple files? I don't think so, but I'd still like an expert's confirmation.
What are the things I need to be mindful of?
For example, the file format and the header/column sequence should be the same for all the files.
Thanks in Advance.
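
For reference, SQL*Loader itself can read several input files in one run: a control file may contain multiple INFILE clauses, all loading into the same table. A minimal sketch, using hypothetical file names (feed_sys1.csv, feed_sys2.csv), a hypothetical target table (target_tab), and assuming both files share the same column order:

    -- load_multi.ctl (all names hypothetical)
    LOAD DATA
    INFILE 'feed_sys1.csv'
    INFILE 'feed_sys2.csv'
    APPEND
    INTO TABLE target_tab
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (col_a, col_b, col_c)

Invoked from the Unix shell with something like:

    sqlldr userid=usr/pwd control=load_multi.ctl log=load_multi.log

If the two source systems ever diverge in layout, this single-control-file approach breaks down; you would then need a control file (or at least a separate load) per format, which is one concrete input to the merge-at-source-versus-SQL*Loader comparison.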

Related

Performing DML on a table by taking an Excel file as input

I am writing a PL/SQL procedure that takes an Excel file as input through the front end and, using that input, inserts, updates, or deletes records in an existing table. Can anyone show me an approach for this?
If that "Excel" file has to be really in native XLS(X) format, a simple option - if you want to stay within Oracle boundaries - is an Apex application which offers a data loading wizard. Takes 4 pages to create it (don't worry, Apex Wizard creates almost everything for you). Once the loading is over, a (stored) procedure can do the rest of processing (you'd call it by pushing a button).
Alternatively, if you save the contents of that file as a CSV file, you can load it with SQL*Loader, a utility run at the operating system command prompt. You'd have to create a control file (no wizard for that, I'm afraid). This approach probably isn't convenient for end users (who's going to type anything at the command prompt?), so you'd have to create some kind of application around it.
Or, CSV again, but this time used as an external table. This approach requires the file to be located in a directory accessible by the database server (most frequently the directory sits on that machine, and you usually don't want to grant just anyone access to it). Its advantage is that you can access the CSV file directly from (PL/)SQL, fetch data from it, perform various adjustments, etc.
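To illustrate the external-table route, here is a minimal sketch, assuming a directory object named csv_dir has already been created and points at the folder holding a hypothetical emp.csv with a header row:

    CREATE TABLE emp_ext (
      empno  NUMBER,
      ename  VARCHAR2(50),
      sal    NUMBER
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY csv_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        SKIP 1
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        MISSING FIELD VALUES ARE NULL
      )
      LOCATION ('emp.csv')
    )
    REJECT LIMIT UNLIMITED;

    -- then query it like any other table:
    SELECT * FROM emp_ext WHERE sal > 1000;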
If you're capable of writing programs that aren't part of the Oracle niche (I'm not), go for it (but I can't suggest anything; someone else might).

Flyway specific migration with csv files

We are using Flyway to keep many databases in our test environments up to date with SQL scripts, and it works fine.
But we have a special need to also update databases with CSV files.
I know Flyway offers Java-based migrations to handle more complicated updates.
But the problem is that these Java classes carry the target version in their names, which would oblige us to recompile a class each time we want to use it.
It would be simpler if we could drop our CSV files into the migration directories exactly as we do with SQL files.
Then some specific Java code would handle these CSV files and perform the right update.
So how can we extend Flyway with specific code that would handle our CSV files?
Thanks
There is currently no support for this. It sounds like the same issue as https://github.com/flyway/flyway/issues/469
I am still not sure how to resolve this without exposing too much of Flyway's internals.
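That said, one possible workaround (a sketch only, not an official Flyway feature) is a single Java migration that scans a known directory for CSV files and loads whatever it finds, so new data files never require a new class. Assuming a recent Flyway version (5+, with BaseJavaMigration) and a hypothetical target table my_table(id, name):

    import org.flywaydb.core.api.migration.BaseJavaMigration;
    import org.flywaydb.core.api.migration.Context;

    import java.io.BufferedReader;
    import java.nio.file.*;
    import java.sql.PreparedStatement;

    // Flyway derives the version from the class name: V2__Load_csv_data
    public class V2__Load_csv_data extends BaseJavaMigration {
        @Override
        public void migrate(Context context) throws Exception {
            // Hypothetical drop directory; new CSVs go here instead of new classes
            Path dir = Paths.get("db/migration/csv");
            try (DirectoryStream<Path> csvs = Files.newDirectoryStream(dir, "*.csv");
                 PreparedStatement ps = context.getConnection()
                         .prepareStatement("INSERT INTO my_table (id, name) VALUES (?, ?)")) {
                for (Path csv : csvs) {
                    try (BufferedReader reader = Files.newBufferedReader(csv)) {
                        String line;
                        while ((line = reader.readLine()) != null) {
                            String[] cols = line.split(",");  // naive split; no quoted fields
                            ps.setLong(1, Long.parseLong(cols[0].trim()));
                            ps.setString(2, cols[1].trim());
                            ps.executeUpdate();
                        }
                    }
                }
            }
        }
    }

A repeatable migration (class name starting with R__) whose getChecksum() hashes the CSV contents would even rerun automatically whenever the files change, with no recompilation for new data files.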

Should we store data in the database?

I'm an ASP.NET beginner, currently working on an "upload/download file" project with ASP.NET and VB.NET as the code-behind language (like SkyDrive's web interface).
What I want to ask is about uploading files to the server: must we store the file path, size, and accessed or created dates in the database? As we know, we could use directory listing in System.IO instead.
Thanks for your help.
You definitely want to store the path of the file. You want a way to find the file ;) Maybe later you will have multiple servers, replication, or other fancy things.
For the rest, it depends a bit on the type of website. If it's going to get high traffic, then store the metadata in the database; this limits the number of I/O calls (which are very slow). It will also be a lot easier to handle sorting and queries (sort by date, pull only the read-only files, ...).
The database will also help if you want to show history or statistics.
You can save the file in some directory and save the path of that file in the database, along with its size and created date. But storing the file itself in the DB is a bit difficult, so save the file in a directory and keep only its path in the DB.
You could store the file information in a database to build some extra features, like avoiding duplicate files, because searching in the database is much faster; a search of the filesystem always kicks off a recursive traversal.
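To make the metadata idea concrete, a minimal sketch of such a table (hypothetical names; SQL Server flavour, since the project is ASP.NET):

    CREATE TABLE uploaded_file (
        id           INT IDENTITY PRIMARY KEY,
        file_path    NVARCHAR(400) NOT NULL,  -- where the file lives on disk
        size_bytes   BIGINT        NOT NULL,
        created_at   DATETIME      NOT NULL,
        content_hash CHAR(64)      NULL       -- e.g. SHA-256, for duplicate detection
    );

    -- Sorting and history queries then stay in SQL, with no filesystem I/O:
    SELECT file_path, size_bytes FROM uploaded_file ORDER BY created_at DESC;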

How to open InfoPath templates in code and change data connection URLs

I need to iterate through InfoPath templates (XSN files), change the URL of their data connections, and then save the changes back to the templates.
The data connections I want to change point to lists in a SharePoint environment.
So, how could I accomplish this task?
I was thinking of doing this with a console application.
InfoPath definitely doesn't make it easy to deploy to different servers. I have used a PowerShell script, but you can use any console app or scripting language.
Steps to follow:
1. Extract the files from the XSN (either use extrac32 util from MS or rename to zip and use any zip library)
2. Change the data connection (string replace) in manifest.xsf, template.xml, and sampledata.xml
3. Repackage the files into the XSN (either use cabarc util from MS or zip and rename)
It is a pain to have to do all that, but the entire script is less than a page long and runs pretty fast. One caveat I ran into was that I needed a delay between steps 1 and 2 - the files weren't actually finished extracting, and my script was trying to change them.
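A rough PowerShell sketch of those three steps (paths and URLs are hypothetical; extrac32 and cabarc are the Microsoft utilities mentioned above):

    # step 1: unpack the template
    $work = "C:\temp\xsn_work"
    New-Item -ItemType Directory -Force -Path $work | Out-Null
    extrac32 /Y /E /L $work "C:\templates\MyForm.xsn"
    Start-Sleep -Seconds 2   # the delay mentioned above; extraction may lag

    # step 2: rewrite the data connection URLs
    foreach ($name in "manifest.xsf", "template.xml", "sampledata.xml") {
        $file = Join-Path $work $name
        (Get-Content $file) -replace "http://oldserver/sites/a", "http://newserver/sites/b" |
            Set-Content $file
    }

    # step 3: repackage the files into a new XSN (cabarc's N command builds a cabinet)
    cabarc n "C:\templates\MyForm_new.xsn" "$work\*"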

Need help in choosing the right tool

I have a client who has set up a testing environment in some AI language. It basically runs some predefined test cases and stores the results as log files (comma-separated .txt files). My job is to identify and suggest a reporting system, and I have two options in mind: either
1. import the logs into MSSQL and use its reporting services (SSRS), or
2. import the logs into MySQL and use PHP to develop custom reporting.
I am thinking that going with option 2 is better. The reason is that the logs are inconsistent and contain unexpected stray characters that databases normally don't accept, so I can write some PHP scripts to clean them up before loading them into the database.
If this were your problem, what would you suggest doing?
It depends how fancy you need to be. If the data is in CSV files, you could go as simple as loading it into Excel (or their favorite spreadsheet tool) and using spreadsheet macros to analyze it.
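If the PHP route wins out, the cleanup step described in the question could be as small as this sketch (DSN, credentials, and the table log_entry(run_id, result) are all hypothetical; it strips non-printable bytes before a parameterized insert):

    <?php
    // Hypothetical connection details and schema; adjust to your environment.
    $pdo = new PDO('mysql:host=localhost;dbname=reports', 'user', 'pass');
    $stmt = $pdo->prepare('INSERT INTO log_entry (run_id, result) VALUES (?, ?)');

    $fh = fopen('testlog.txt', 'r');
    while (($row = fgetcsv($fh)) !== false) {
        // Drop the "wild" characters: keep printable ASCII only.
        $clean = array_map(function ($field) {
            return preg_replace('/[^\x20-\x7E]/', '', $field);
        }, $row);
        $stmt->execute([$clean[0], $clean[1]]);
    }
    fclose($fh);

Using prepared statements here does double duty: it sidesteps quoting problems from whatever stray characters survive the filter.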
