Export in Parquet file format in Teradata

I am trying to export data using the TDload utility of Teradata and I need the exported file to be in parquet format.
The command I have used is:
tdload --SourceTdpid xxx.xxx.xxx.xxx --SourceUserName dbc
--SourceUserPassword dbc --SourceTable DimAccount
--TargetFilename DimAccount.parquet
But this does not export the data as Parquet.
How can I achieve this?
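As far as I know, tdload writes delimited or formatted text and does not have a Parquet target format, so one workaround is to export a delimited file first and convert it afterwards. A minimal sketch, assuming the job above is rerun with a .csv target name, that the export is comma-delimited (adjust if not), and that Python with pandas and pyarrow is available on the machine:
# Re-run the export writing delimited text (the .csv filename is an assumption):
tdload --SourceTdpid xxx.xxx.xxx.xxx --SourceUserName dbc \
  --SourceUserPassword dbc --SourceTable DimAccount \
  --TargetFilename DimAccount.csv
# Convert the delimited file to Parquet (assumes pandas + pyarrow are installed;
# change sep= in read_csv if the export is not comma-delimited):
python3 -c "import pandas as pd; pd.read_csv('DimAccount.csv').to_parquet('DimAccount.parquet')"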

Related

Can't export special characters from DynamoDB with AWS cli

I am trying to export all my data from DynamoDB using AWS cli.
I use this command:
aws dynamodb scan --table-name TABLENAME > output.json
When I open the file, the Norwegian special characters (æøå) are replaced with this symbol �.
Is there any way to export the data with special characters?
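The AWS CLI is Python-based, and the � characters usually point to the encoding of the console or of the redirect rather than to DynamoDB itself. A minimal sketch, assuming the mangling happens at that layer, that forces UTF-8 output before redirecting to the file:
# Force the Python-based AWS CLI to emit UTF-8 regardless of the console code page.
export PYTHONIOENCODING=utf-8
aws dynamodb scan --table-name TABLENAME --output json > output.json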

SQLite import CSV error: CREATE TABLE data;(...) failed: near ";": syntax error

Brand new to SQLite, running on a Mac. I'm trying to import a CSV file from the SQLite tutorial:
http://www.sqlitetutorial.net/sqlite-import-csv/
The 'cities' data I'm trying to import for the tutorial is here:
http://www.sqlitetutorial.net/wp-content/uploads/2016/05/city.csv
I try to run the following code from Terminal to import the data into a database named 'data' and get the following error:
sqlite3
.mode csv
.import cities.csv data;
CREATE TABLE data;(...) failed: near ";": syntax error
A possible explanation may be the way I'm downloading the data - I copied the data from the webpage into TextWrangler, saved it as a .txt file, and then manually changed the extension to .csv. This doesn't seem very elegant, but it was the advice I found online for creating the .csv file: https://discussions.apple.com/thread/7857007
If this is the issue then how can I resolve it? If not then where am I going wrong?
Another potentially useful point - when I executed the code yesterday there was no problem; it created a database with the data. However, running the same code today produces the error.
sqlite3 dot commands such as .import are not SQL and don't need a semicolon at the end. Replace
.import cities.csv data;
with
.import cities.csv data
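For completeness, a minimal sketch of the whole session, assuming a database file named data.db and keeping the table name data from the question (SQL statements such as SELECT still end with a semicolon; only the dot commands do not):
# Open (or create) data.db and run the dot commands without trailing semicolons.
sqlite3 data.db <<'EOF'
.mode csv
.import cities.csv data
SELECT COUNT(*) FROM data;
EOF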

Cannot import csv to PostgreSQL using dbWriteTable

I tried to use the R package RPostgreSQL to import CSV files directly into PostgreSQL (administered through pgAdmin 4). My code is as below:
dbWriteTable(localdb$con,'test1',choose.files(), row.names=FALSE)
I got the error message:
Warning message:
In postgresqlImportFile(conn, name, value, ...) :
could not load data into table
I checked pgAdmin 4, and an imported table named test1 does exist, but it has no rows. I then imported this CSV into R first and used dbWriteTable to write it to PostgreSQL, which worked well. I am not sure which part is wrong.
The reason I am not using psql or pgAdmin 4 to import the CSV file directly is that I keep getting the error message "relation does not exist" every time I use the COPY FROM command. I am now using the R package RPostgreSQL to bypass this issue, but sometimes my data file is too big to import into R. I need to find a way to use the dbWriteTable function to import the file directly into PostgreSQL without consuming R's memory.
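One way to load the file without going through R's memory at all is psql's client-side \copy, which streams the CSV from the client; the "relation does not exist" error from COPY usually just means the target table has to be created before copying into it. A rough sketch, assuming a database named mydb, the postgres user, and two placeholder columns (replace with the real schema):
# Create the target table first (column names and types here are placeholders).
psql -h localhost -U postgres -d mydb -c "CREATE TABLE IF NOT EXISTS test1 (col1 text, col2 numeric);"
# \copy reads the CSV on the client side, so nothing is loaded through R
# and the server does not need direct access to the file.
psql -h localhost -U postgres -d mydb -c "\copy test1 FROM 'test1.csv' WITH (FORMAT csv, HEADER true)"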

Error loading csv data into Hive table

I have a CSV file in Hadoop and a Hive table; now I want to load that CSV file into the Hive table.
I have used:
LOAD DATA local 'path/to/csv/file' overwrite INTO TABLE tablename;
and ended up with this error:
Error in .verify.JDBC.result(r, "Unable to retrieve JDBC result set for ", :
Unable to retrieve JDBC result set for LOAD DATA local
'path/to/csv/file' overwrite INTO TABLE tablename
(Error while processing statement: FAILED:
ParseException line 1:16 missing INPATH at ''path/tp csv/file'' near '<EOF>'
)
Note: I am trying this using an RJDBC connection in R.
I think the command to load a CSV into a Hive table (when the CSV is in HDFS) is:
LOAD DATA INPATH '/user/test/my.csv' INTO TABLE my_test;
Since your file is already present in HDFS, remove the LOCAL keyword:
LOAD DATA INPATH 'path/to/csv/file' OVERWRITE INTO TABLE tablename;
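To rule out the R layer, the same corrected statement can also be run through beeline, which uses the same HiveServer2 JDBC route as RJDBC; the host, port, and database below are assumptions:
# Run the corrected LOAD DATA statement directly against HiveServer2.
beeline -u "jdbc:hive2://hiveserver-host:10000/default" \
  -e "LOAD DATA INPATH 'path/to/csv/file' OVERWRITE INTO TABLE tablename;"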
I have developed a tool to generate Hive scripts from a CSV file. Below are a few examples of how the files are generated.
Tool -- https://sourceforge.net/projects/csvtohive/?source=directory
Select a CSV file using Browse and set the Hadoop root directory, e.g. /user/bigdataproject/
The tool generates a Hadoop script for all CSV files; the following is a sample of the generated Hadoop script that loads the CSVs into Hadoop:
#!/bin/bash -v
hadoop fs -put ./AllstarFull.csv /user/bigdataproject/AllstarFull.csv
hive -f ./AllstarFull.hive
hadoop fs -put ./Appearances.csv /user/bigdataproject/Appearances.csv
hive -f ./Appearances.hive
hadoop fs -put ./AwardsManagers.csv /user/bigdataproject/AwardsManagers.csv
hive -f ./AwardsManagers.hive
Sample of the generated Hive scripts:
CREATE DATABASE IF NOT EXISTS lahman;
USE lahman;
CREATE TABLE AllstarFull (playerID string,yearID string,gameNum string,gameID string,teamID string,lgID string,GP string,startingPos string) row format delimited fields terminated by ',' stored as textfile;
LOAD DATA INPATH '/user/bigdataproject/AllstarFull.csv' OVERWRITE INTO TABLE AllstarFull;
SELECT * FROM AllstarFull;

Unable to import oracle dump in oracle 11g

I am trying to import an Oracle dump into Oracle 11g XE by using the below command:
imp system/manager#localhost file=/home/madhu/test_data/oracle/schema_only.sql full=y
I get the following errors:
IMP-00037: Character set marker unknown
IMP-00000: Import terminated unsuccessfully
Can anyone please help me?
You received the IMP-00037 error because imp could not read the export file. I'd suspect either your dump file is corrupted or it was not created by the exp utility.
If the issue is caused by a corrupted dump file, then there is no choice other than obtaining an uncorrupted one. If the dump file was prepared with the expdp utility, use impdp to import it.
The following links may be helpful for trying other options:
https://community.oracle.com/thread/870104?start=0&tstart=0
https://community.oracle.com/message/734478
If you are not sure which command (exp or expdp) was used, you can check the log file that was created during the export. It contains the exact command that was executed to prepare the dump file.
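For reference, if the log shows the dump was created with expdp, a Data Pump import would look roughly like this; the service name, dump file name, and directory object are assumptions (DATA_PUMP_DIR is the usual default directory object):
# Data Pump import; the .dmp file name and directory object are placeholders.
impdp system/manager@XE directory=DATA_PUMP_DIR dumpfile=schema_only.dmp \
  logfile=schema_only_imp.log full=y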
