Dynamic query generation in U-SQL

My requirement is to dynamically schematize the EXTRACT statements in U-SQL for separate files, according to the metadata of each file held in an external data source. Can anyone help here?

You have the following options:
Write a U-SQL script that reads the metadata from the external data source and then generates the U-SQL script you want to run.
Write some custom code in your favorite scripting language (PowerShell, Node.js, Python, etc.) to generate the script and submit it; a sketch of this option follows below.
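For the second option, a minimal Python sketch might look like this. The metadata shape, file paths, column types and output name are all assumptions; in practice you would read them from your external data source:

    # Sketch: build one U-SQL EXTRACT statement per file from metadata rows.
    # The metadata below is hypothetical; substitute a real lookup against
    # your external data source.
    metadata = [
        {"file": "/input/file1.csv", "columns": [("id", "int"), ("name", "string")]},
        {"file": "/input/file2.csv", "columns": [("ts", "DateTime"), ("value", "double")]},
    ]

    statements = []
    for i, meta in enumerate(metadata):
        schema = ", ".join(f"{col} {utype}" for col, utype in meta["columns"])
        path = meta["file"]
        statements.append(f'@rows{i} = EXTRACT {schema} FROM "{path}" USING Extractors.Csv();')

    with open("generated.usql", "w") as f:
        f.write("\n".join(statements) + "\n")
    # Submit generated.usql with whatever tool you already use
    # (Azure CLI, the Data Lake Analytics SDK, ...).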

Related

Load multiple files through SQL*Loader

I have a requirement to load multiple files received from two source systems into one table using SQL*Loader.
To make this possible, I want to understand the following:
1. Pros and cons of integrating multiple files like this? I need this to compare merging at the source against merging via SQL*Loader.
2. Is there any other way of interfacing the data from .CSV files in Oracle for multiple files, apart from SQL*Loader? I don't think so, but I'd still like an expert's confirmation.
3. What are the things I need to be mindful about? For example, the file format and the header sequence should be the same for all the files (see the pre-flight check sketched below).
Thanks in Advance.
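On that third point, a small pre-flight check is cheap to write. For illustration, a minimal Python sketch, assuming the incoming files sit in a single directory (the directory name and file pattern are assumptions):

    import csv
    import glob

    # Sketch: confirm every incoming CSV shares the same header row before
    # loading them into one table.
    files = sorted(glob.glob("incoming/*.csv"))
    if not files:
        raise SystemExit("no files found")

    headers = {}
    for path in files:
        with open(path, newline="") as f:
            headers[path] = next(csv.reader(f))

    reference = headers[files[0]]
    mismatched = [p for p, h in headers.items() if h != reference]
    if mismatched:
        raise SystemExit(f"Header mismatch in: {mismatched}")
    print(f"{len(files)} files share the header {reference}; safe to load.")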

Performing DML on a table by taking an Excel file as input

I am writing a PL/SQL procedure that takes an Excel file as input through the front end, and using that input the procedure inserts, updates or deletes records in an existing table. Can anyone show me an approach for this?
If that "Excel" file has to be really in native XLS(X) format, a simple option - if you want to stay within Oracle boundaries - is an Apex application which offers a data loading wizard. Takes 4 pages to create it (don't worry, Apex Wizard creates almost everything for you). Once the loading is over, a (stored) procedure can do the rest of processing (you'd call it by pushing a button).
Alternatively, if you save contents of that file as a CSV file, you can load it with SQL*Loader, utility ran at the operating system command prompt. You'd have to create a control file (no wizard to do that, I'm afraid). This approach probably isn't convenient for end users (who's going to type anything at the command prompt?) so you'd have to create some kind of an application to do that.
Or, CSV again, but this time used as an external table. This approach requires the file to be located in a directory accessible by the database server (most frequently, the directory is located on that computer, and you most frequently don't want to allow access to anyone to it). Its advantage is that you can access the CSV file directly from (PL/)SQL, fetch data from it, perform various adjustments etc.
If you're capable of writing programs that aren't part of the Oracle niche (I'm not), go for it (but I can't suggest anything; someone else might).
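For illustration, here is a minimal sketch of that last route in Python with the python-oracledb driver, assuming the spreadsheet has been saved as a CSV. The table and column names (employees, emp_id, emp_name) and the connection details are placeholders; adapt the MERGE to your real schema:

    import csv
    import oracledb  # pip install oracledb

    # Sketch: upsert rows from a CSV export of the spreadsheet into an
    # existing table. Deletes could be handled similarly with a flag column.
    with open("employees.csv", newline="") as f:
        rows = [(r["emp_id"], r["emp_name"]) for r in csv.DictReader(f)]

    conn = oracledb.connect(user="scott", password="tiger", dsn="localhost/orclpdb")
    with conn.cursor() as cur:
        cur.executemany(
            """MERGE INTO employees t
               USING (SELECT :1 AS emp_id, :2 AS emp_name FROM dual) s
               ON (t.emp_id = s.emp_id)
               WHEN MATCHED THEN UPDATE SET t.emp_name = s.emp_name
               WHEN NOT MATCHED THEN INSERT (emp_id, emp_name)
               VALUES (s.emp_id, s.emp_name)""",
            rows,
        )
    conn.commit()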

Loading reference data tables using Flyway

Our application has a number of tables containing reference data. We have been using the traditional Flyway approach of creating delta files for each change in data, but with frequent changes it's a bit hard to manage this way. It would be easier to have a script with a truncate followed by inserts, so the table is reloaded from scratch, and when the data changes the developer would edit this file as needed.
Is there a clean way to accomplish this in Flyway without generating checksum errors? Hopefully without creating a new version of the load script each time a change is needed.
Have you tried adding your reference data as a beforeMigrate callback script?
"Using the default settings, Flyway looks in its default locations (/sql) for the Command-line tool) for SQL files like beforeMigrate.sql, beforeEachMigrate.sql, afterEachMigrate.sql"

Should we store file metadata in the database?

I'm an ASP.NET beginner, currently working on an "upload/download file" project with ASP.NET and VB.NET as the code-behind language (like SkyDrive's web app).
What I want to ask is about uploading files to the server: must we store the file path, size, and accessed or created dates in the database? As we know, we can get those through directory listing in System.IO.
Thanks for your help.
You definitely want to store the path of the file. You want a way to find the file ;) Maybe later you will have multiple servers, replication or other fancy things.
For the rest, it depends a bit on the type of website. If it's going to get high traffic, then store the metadata in the database; this limits the number of I/O calls (which are very slow). It will also be a lot easier to handle sorting and queries (sort by date, pull only the read-only files, ...).
The database will also help if you want to show history or statistics.
You can save the file in some directory and save the path of that file in the database; you can also store the size and created date of the file in the DB. Storing the file itself in the DB is a bit difficult, so rather than that, save the file in a directory and save its path in the DB.
You could store the file information in a database to build some extra features like avoiding duplicate files, because searching in the database is faster: if you search the filesystem, a recursive function call always gets started.
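To make the trade-off concrete, here is a minimal Python sketch of the metadata-table idea (sqlite3 is used only to keep the sketch self-contained; the real application would use its own database, table names and upload folder):

    import os
    import sqlite3
    from datetime import datetime, timezone

    # Sketch: record path, size and upload time so listings, sorting and
    # duplicate checks hit the database instead of scanning the filesystem.
    conn = sqlite3.connect("uploads.db")
    conn.execute(
        """CREATE TABLE IF NOT EXISTS uploaded_files (
               id INTEGER PRIMARY KEY,
               path TEXT NOT NULL,
               size_bytes INTEGER NOT NULL,
               uploaded_at TEXT NOT NULL
           )"""
    )

    def register_upload(path):
        """Call this right after saving an uploaded file to disk."""
        conn.execute(
            "INSERT INTO uploaded_files (path, size_bytes, uploaded_at) VALUES (?, ?, ?)",
            (path, os.path.getsize(path), datetime.now(timezone.utc).isoformat()),
        )
        conn.commit()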

Need help in choosing the right tool

I have a client who has set up a testing environment in some AI language. It basically runs some predefined test cases and stores the results as log files (comma-separated .txt files). My job is to identify and suggest a reporting system, and I have these options in mind: either
1. import the logs into MSSQL and use its reporting services (SSRS), or
2. import the logs into MySQL and use PHP to develop custom reporting.
I am thinking that option 2 is better. The reason is that the logs are inconsistent and contain unexpected wild characters that DBs normally don't accept, so I can write some PHP scripts to clean them before loading them into the database.
If this were your problem, what would you suggest?
It depends how fancy you need to be. If the data is in CSV files, you could go as simple as loading it into Excel (or their favorite spreadsheet tool) and using spreadsheet macros to analyze it.
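Whichever option is chosen, the pre-cleaning step the asker mentions is easy to prototype. A minimal sketch in Python rather than PHP, where the cleaning rules (strip control characters, pad or trim to a fixed column count) and the file names are assumptions to adjust to the real log layout:

    import csv
    import re

    EXPECTED_COLUMNS = 5  # hypothetical; match the real log layout

    # Sketch: scrub one comma-separated log so it loads cleanly into a DB.
    with open("run.log", errors="replace") as src, \
            open("run_clean.csv", "w", newline="") as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            row = [re.sub(r"[\x00-\x1f\x7f]", "", field).strip() for field in row]
            row = (row + [""] * EXPECTED_COLUMNS)[:EXPECTED_COLUMNS]
            writer.writerow(row)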
