I have some tab-delimited files that I am importing from a folder using SQL Assistant. Each file has more than 200 thousand records. When I try to insert them, I get the error message 'Wrong number of data values in record'. If I import 10,000 records at a time, they insert without error. Is there a way I can insert a whole file with over 200 thousand records?
The use case is this: there is an Informatica Cloud mapping that loads data from SQL Server into a Teradata database. If any failures occur while the mapping runs, the mapping writes all of the failed rows to an error table in the Teradata database. The key column in this error table is, I assume, HOSTDATA. I am trying to decode the HOSTDATA column so that, if a similar ETL failure happens in production, it will help identify the root cause much more quickly. By default HOSTDATA is a column of type VARBYTE.
To decode the HOSTDATA column, I converted it to ASCII and to base-16 (hex) format. Neither was of any use.
I then tried the approach below, from the Teradata forum: extract the data from the error table with a BTEQ script, exporting it into a .err file and loading that file back into the Teradata database with a FastLoad script. FastLoad is unable to load the data because it has no specific delimiter, and the data in the .err file looks like gibberish. A snapshot of the data from the .err file is shown in the attached screenshot.
My end goal is to interpret the HOSTDATA column in a more human-readable way. Any suggestions in this direction are also welcome.
The Error Table Extractor command twbertbl, which is part of the "Teradata Parallel Transporter Base" software, is designed to extract and format HOSTDATA from the error table's VARBYTE column.
Based on the screenshot in your question, I suspect you will need to specify FORMATTED as the record format option for twbertbl (default is DELIMITED).
I'm creating a report in BI Publisher using the BI Publisher Desktop tool for Word.
What I need is a table with a dynamic number of columns.
Let's imagine I'm listing stock by store: each row is an item, and I need a column for each store in the database. That has to be dynamic, because a store can be created or deleted at any moment.
The number of stores, i.e., the number of columns that need to exist, is obtained from an SQL query that goes into the report through a data set.
The query will be something like SELECT COUNT(*) AS STORE_COUNT FROM STORE; in a data set named G_1, so the number of columns is the variable G_1::STORE_COUNT.
Is there any way that can be achieved?
I'm developing the report using an .rtf file, so any related help would be appreciated.
Thank you very much.
Create an .rtf file with the column names mapped to an .xdo or .xdm file. The mapped columns in the .xdo or .xdm file should be in the cursor or the SELECT statement of your stored procedure or function.
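For illustration, a minimal sketch of such a function, assuming a hypothetical stock table and placeholder column names; the columns returned by the cursor are the ones you would map in the .xdo or .xdm file:

    -- Hypothetical ref-cursor function; the cursor's columns are what get mapped in the .xdo/.xdm file.
    CREATE OR REPLACE FUNCTION get_stock_by_store
    RETURN SYS_REFCURSOR
    AS
      rc SYS_REFCURSOR;
    BEGIN
      OPEN rc FOR
        SELECT item_id, store_id, quantity
        FROM   stock;
      RETURN rc;
    END;
    /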
I am using JDBC to upload data to Teradata. I previously used batches of 100,000 rows and it always worked fine for me; no dataset ever failed to upload.
Now I am trying to upload a one-column table (all integers), and I get the error Too many data records packed in one USING row. When I changed the batch size to 16,383 it worked.
I found out that I am still able to use 100,000-row batches for tables with multiple columns; however, when I try to upload a table with a single column, it throws Too many data records packed in one USING row. I just can't understand why. Intuitively, a single-column table should be easier to upload, right? What is going on here?
16,383 is the limit for a PreparedStatement batch using a non-FastLoad INSERT with the Teradata JDBC driver.
Have you considered adding TYPE=FASTLOAD to your connection parameters and allowing Teradata to invoke the FastLoad API to bulk load your data for INSERT statements that are supported by FastLoad? The JDBC FastLoad mechanism is suggested for inserts of 100K records or more. The big factor here is that your target table in Teradata must be empty.
If it isn't empty, you may be able to load an empty staging table instead and then use the ANSI MERGE operator to perform an UPSERT of the staged data into the target table.
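As a rough sketch of that staging pattern, assuming hypothetical tables stage_tbl and target_tbl joined on a key column id:

    -- stage_tbl is loaded while empty via FastLoad, then merged into the non-empty target.
    MERGE INTO target_tbl AS tgt
    USING stage_tbl AS stg
      ON (tgt.id = stg.id)
    WHEN MATCHED THEN
      UPDATE SET col1 = stg.col1
    WHEN NOT MATCHED THEN
      INSERT (id, col1)
      VALUES (stg.id, stg.col1);

The staging table has to be emptied before each FastLoad, and the MERGE then applies the inserts and updates in one pass.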
I have uploaded a file to the server and I want to read the data from that file and insert it into Oracle. I am using a list to hold the data from the file, and the data is read from this list. There is no problem with my code: it reads the file completely and the data is inserted into the Oracle table when I run it locally. But after hosting, the data is not completely inserted into the table; after inserting some rows it gets stuck.
We get new data for our database from an online form that outputs its results as an Excel sheet. To normalize the data for the database, I want to split multiple columns out into separate rows.
For example, I want data like this:
ID | Home Phone | Cell Phone | Work Phone
1  | 555-1234   | 555-3737   | 555-3837
To become this:
PhoneID | ID | Phone Number | Phone Type
1       | 1  | 555-1234     | Home
2       | 1  | 555-3737     | Cell
3       | 1  | 555-3837     | Work
To import the data, I have a button that finds the spreadsheet and then runs a bunch of queries to add the data.
How can I write a query to append this data to the end of an existing table without ending up with duplicate records? The data pulled from the website is all stored and archived in an Excel sheet that will be updated without removing the old data (we don't want to lose this extra backup), so with each import, I need it to disregard all of the previously entered data.
I was able to make a query that lists everything in the correct form from the original spreadsheet (I loaded the external spreadsheet into an unnormalized table in Access to test it), but when I try to append it to the phone number table, it adds all of the data again each time. I can remove the duplicates afterwards with another query, but I'd rather not leave it like that.
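For reference, the normalizing query described above might look roughly like this in Access SQL, assuming the unnormalized import table is named tblImport (the table and column names are hypothetical):

    SELECT tblImport.ID, tblImport.[Home Phone] AS PhoneNumber, "Home" AS PhoneType
    FROM tblImport
    UNION ALL
    SELECT tblImport.ID, tblImport.[Cell Phone], "Cell"
    FROM tblImport
    UNION ALL
    SELECT tblImport.ID, tblImport.[Work Phone], "Work"
    FROM tblImport;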
There are several possible approaches to this problem; which one you choose may depend on the size of the dataset relative to the number of updates being processed. Basically, the choices are:
1) Add a unique index to the destination table, so that Access will refuse to add a duplicate record. You'll need to handle the possible warning ("Access was unable to add xxx records due to index violations" or similar).
2) Import the incoming data to a staging table, then outer join the staging table to the destination table and append only records where the key field(s) in the destination table are null (i.e., there's no matching record in the destination table).
I have used both approaches in the past - I like the index approach for its simplicity, and I like the staging approach for its flexibility, because you can do a lot with the incoming data before you append it if you need to.
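A minimal sketch of option 2 in Access SQL, assuming a staging table named tblStaging and a destination phone table named tblPhone keyed on ID plus PhoneType (all names hypothetical):

    INSERT INTO tblPhone (ID, PhoneNumber, PhoneType)
    SELECT s.ID, s.PhoneNumber, s.PhoneType
    FROM tblStaging AS s
    LEFT JOIN tblPhone AS p
      ON (s.ID = p.ID AND s.PhoneType = p.PhoneType)
    WHERE p.ID IS NULL;

Only staging rows with no match in the destination survive the LEFT JOIN / IS NULL filter, so re-running the import doesn't create duplicates. For option 1, the equivalent would be a unique multi-field index on ID and PhoneType in the destination table.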
You could run a delete query on the table where you store the imported data and then run your imports, assuming that the data is only being updated.
The delete query removes all existing records, and the import then repopulates the table, so there are no duplicates.
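Sketched with the hypothetical destination table name tblPhone, the delete query is simply:

    DELETE FROM tblPhone;

followed by re-running the import/append query to repopulate the table from the spreadsheet.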