No remaining space in Teradata Database

I created a database with 18 GB of storage using:
CREATE DATABASE demo FROM DBC
AS
PERM = 18000000000,  -- 18 GB
SPOOL = 15000000000, -- 15 GB
NO FALLBACK,
NO BEFORE JOURNAL,
NO AFTER JOURNAL;
I need to load data from a 10 GB CSV file into this database, and I'm using the tdload utility.
The command I'm using is:
tdload -h hostid -u user -p password -t demo_table -f demofile.csv \
  --TargetWorkingDatabase demo --TargetErrorLimit 100 jobid
After running this command for a while, I get a "No space remaining in the database" message.
How can I resolve this?
Note: The database contains only one table, which is empty, and that is the table I'm trying to load the data into.
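For diagnosis rather than as a fix, a sketch of a query against the DBC.DiskSpaceV data dictionary view (standard in Teradata, assuming your user can SELECT from it) shows how much of the database's permanent space is already consumed and how evenly it is spread across AMPs; badly skewed data can trigger "no more room" errors long before the summed 18 GB is used up.
SELECT DatabaseName,
       SUM(CurrentPerm) AS CurrentPerm,
       SUM(MaxPerm) AS MaxPerm,
       MAX(CurrentPerm) * COUNT(*) AS ImpactPerm  -- rough skew indicator: space needed if every AMP held the largest per-AMP share
FROM DBC.DiskSpaceV
WHERE DatabaseName = 'demo'
GROUP BY DatabaseName;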

Related

How to export and import mysql database to ignore Duplicate entries for key 'PRIMARY'?

I'm attempting to write a bash script that will dump a database and then import it to a staging database. I would like the staging database to match the 'master' database.
I have the following code; however, I receive:
ERROR 1062 (23000) at line 23: Duplicate entry '1' for key 'PRIMARY'
# Dump production master database, excluding school_hosts table
mysqldump -h $MYSQL_HOST -u $MYSQL_USERNAME -p$MYSQL_PASSWORD --no-create-info --ignore-table=hcl_master.school_hosts hcl_master > hcl_master.sql
# Dump hcl staging database, for backup.
mysqldump -h $MYSQL_HOST -u $MYSQL_USERNAME -p$MYSQL_PASSWORD hclstaging_master > hclstaging_master_backup.sql
# Import dump file into staging master database
mysql -h $MYSQL_HOST -u $MYSQL_USERNAME -p$MYSQL_PASSWORD hclstaging_master < hcl_master.sql
After searching, I found that I could add --replace to the mysql command that does the import; however, I receive an error stating:
mysql: unknown option '--replace'
Can anybody help me get this script to work correctly? I'm unsure how I can drop the staging database before I import, or how to get the import to overwrite the primary key records.
Any help would be much appreciated. I am using MariaDB.
--replace is a mysqldump option that you specify when creating the dump, not something you can tell mysql when importing the dump.
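A minimal sketch of that change against the script above (credentials and the excluded table unchanged): the only addition is the --replace flag on the mysqldump that produces hcl_master.sql, so the dump contains REPLACE statements instead of INSERTs and re-imported rows overwrite existing rows with the same primary key.
# Dump the production master database with REPLACE statements, excluding school_hosts table
mysqldump -h $MYSQL_HOST -u $MYSQL_USERNAME -p$MYSQL_PASSWORD --no-create-info --replace --ignore-table=hcl_master.school_hosts hcl_master > hcl_master.sql
# Import into the staging master database exactly as before
mysql -h $MYSQL_HOST -u $MYSQL_USERNAME -p$MYSQL_PASSWORD hclstaging_master < hcl_master.sql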

MariaDB create database via CLI

How do I issue commands to MariaDB via the CLI without actually jumping into the interactive use mode?
I know I can type mysql, which drops me into the interactive mode where I can write SQL commands like CREATE DATABASE dbname; and then exit to go back to the regular terminal.
However, I'd like to skip that and do something like mysql 'CREATE DATABASE dbname;' all in one line.
mysql --help | grep "\-execute"
Output:
-e, --execute=name Execute command and quit
So to create a database with the command-line client, you just need to execute
mysql -uuser -p -e"CREATE DATABASE dbname"
You can also concatenate several SQL statements, e.g.
mysql -uuser -p -e"CREATE DATABASE dbname;SHOW DATABASES"
Put the commands that you want executed into a text file (optionally with a .sql extension) and then, from the command line, run mysql -uuser -p < yourtextfile.sql to have all of the commands in the file executed.
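A small sketch of that approach, with an illustrative file name and contents:
-- yourtextfile.sql: statements run in order, each terminated by a semicolon
CREATE DATABASE dbname;
SHOW DATABASES;
Then run it non-interactively with:
mysql -uuser -p < yourtextfile.sql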

Can not get flyway-docker to recognize local files in volumes

I am trying to use Flyway to set up a DB2 test/demo environment in a Docker container. I have an image of DB2 running in a Docker container and am now trying to get Flyway to create the database environment. I can connect to the DB2 container and create DB2 objects and load them with data, but I am looking for a way for non-technical users to do this (i.e. clone a GitHub repo and issue a single docker run command).
The Flyway Docker site (https://github.com/flyway/flyway-docker) indicates that it supports the following volumes:
| Volume | Description |
|-------------------|--------------------------------------------------------|
| `/flyway/conf` | Directory containing a flyway.conf |
| `/flyway/drivers` | Directory containing the JDBC driver for your database |
| `/flyway/sql` | The SQL files that you want Flyway to use |
I created the conf, drivers, and sql directories. In the conf directory, I placed the file flyway.conf that contained my flyway Url, user name, and password:
flyway.url=jdbc:db2://localhost:50000/apidemo
flyway.user=DB2INST1
flyway.password=mY%tEst%pAsSwOrD
In the drivers directory, I added the DB2 JDBC Type 4 drivers (e.g. db2jcc4.jar, db2jcc_license_cisuz.jar), and in the sql directory I put a simple table creation statement (file name: V1__make_temp_table.sql):
CREATE TABLE EDS.REFT_TEMP_DIM (
    TEMP_ID INTEGER NOT NULL
  , TEMP_CD CHAR (8)
  , TEMP_NM VARCHAR (255)
  )
  DATA CAPTURE NONE
  COMPRESS NO;
When I attempt the docker run with the flyway/flyway image as described in the GitHub Readme.md, Flyway does not pick up the flyway.conf file, so it does not know the url, user, and password:
docker run --rm -v sql:/flyway/sql -v conf:/flyway/conf -v drivers:/flyway/drivers flyway/flyway migrate
Flyway Community Edition 6.5.5 by Redgate
ERROR: Unable to connect to the database. Configure the url, user and password!
I then put the url, user, and password inline, and it could not find the JDBC driver:
docker run --rm -v sql:/flyway/sql -v drivers:/flyway/drivers flyway/flyway -url=jdbc:db2://localhost:50000/apidemo -user=DB2INST1 -password=mY%tEst%pAsSwOrD migrate
ERROR: Unable to instantiate JDBC driver: com.ibm.db2.jcc.DB2Driver => Check whether the jar file is present
Caused by: Unable to instantiate class com.ibm.db2.jcc.DB2Driver : com.ibm.db2.jcc.DB2Driver
Caused by: java.lang.ClassNotFoundException: com.ibm.db2.jcc.DB2Driver
Therefore, I believe the issue lies in how I am setting up the local file system or associating the local directories with the Flyway volumes. Does anyone have an idea of what I am doing wrong?
You need to supply absolute paths to your volumes for docker to mount them.
Changing the relative paths to absolute paths fixed the volume mount issue.
docker run --rm \
-v /Users/steve/github-ibm/flyway-db-migration/sql:/flyway/sql \
-v /Users/steve/github-ibm/flyway-db-migration/conf:/flyway/conf \
-v /Users/steve/github-ibm/flyway-db-migration/drivers:/flyway/drivers \
flyway/flyway migrate
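If you always run the command from the root of the cloned repository, a variant that avoids hard-coding a user-specific path is to build the absolute paths with $(pwd). This is a sketch, assuming the sql, conf, and drivers directories sit directly under the current directory:
docker run --rm \
  -v "$(pwd)/sql":/flyway/sql \
  -v "$(pwd)/conf":/flyway/conf \
  -v "$(pwd)/drivers":/flyway/drivers \
  flyway/flyway migrate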

Copy a heroku postgres db to a local sqlite db

I want to copy my heroku production db (postgres) to my development (sqlite).
Copying a postgres db into another postgres db is easy using heroku pg:pull. Does anyone know how to use this command to copy postgres into sqlite?
Heroku docs on pg:pull do not say how to use different types of dbs. This old article implied that it used to be possible. Setting up a local postgres db is something I'd like to avoid.
You will need to do a pg_restore locally, then dump the data using the -a option to dump data only.
It should look something like this:
Download a data dump.
heroku addons:add pgbackups
heroku pgbackups:capture
curl -o latest.dump `heroku pgbackups:url`
Create a temporary database.
sudo -u postgres createdb tempdb
Restore the dump to your temporary database.
sudo -u postgres pg_restore --verbose --clean --no-acl --no-owner -h localhost -d tempdb latest.dump
Dump the data in the correct format.
sudo -u postgres pg_dump --inserts -a -b tempdb > data.sql
Read dump in sqlite3.
sqlite3
sqlite> .read data.sql
This is an approximate solution. You will most likely need to make some small adjustments.
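One adjustment that is commonly needed (an assumption about your dump, not verified here): pg_dump prefixes the file with Postgres-specific SET and SELECT pg_catalog.set_config(...) lines that sqlite3 will reject, so it can help to strip them before reading the file in:
grep -v -e '^SET ' -e '^SELECT pg_catalog' data.sql > data_sqlite.sql
sqlite3
sqlite> .read data_sqlite.sql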
I agree with Craig Ringer that it might be worth getting postgres running locally. Hopefully this process will do the trick though!

Unable to copy postgresql table in another database

I'm trying to copy a PostgreSQL table into another database. In pgAdmin 3 I run this query:
$pg_dump -t pl_biz_enhanced business_catalog | psql business_catalog_enhanced
Here pl_biz_enhanced is the table I want to copy and business_catalog is the database that contains it.
But I receive a syntax error near "$".
That's not an SQL query.
$pg_dump -t pl_biz_enhanced business_catalog | psql business_catalog_enhanced
The $ is a reference to the UNIX shell prompt, which usually ends in $.
This is a shell command. You can't run it in PgAdmin-III.
As far as I know there's no equivalent feature in PgAdmin-III. Either run the pg_dump | psql pipeline from a command prompt, or manually do the equivalent in PgAdmin-III, which would be to dump just the pl_biz_enhanced table of business_catalog and then restore it into the separate database business_catalog_enhanced.
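A minimal sketch of running this from a shell rather than pgAdmin's SQL window; the createdb step is only needed if the target database does not already exist:
createdb business_catalog_enhanced   # skip if the target database already exists
pg_dump -t pl_biz_enhanced business_catalog | psql business_catalog_enhanced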
