I want to copy my heroku production db (postgres) to my development (sqlite).
Copying a postgres db into another postgres db is easy using heroku pg:pull. Does anyone know how to use this command to copy postgres into sqlite?
Heroku docs on pg:pull do not say how to use different types of dbs. This old article implied that it used to be possible. Setting up a local postgres db is something I'd like to avoid.
You will need to do a pg_restore locally, then dump the data using the -a option to dump data only.
It should look something like this:
Download a data dump.
heroku addons:add pgbackups
heroku pgbackups:capture
curl -o latest.dump `heroku pgbackups:url`
Create a temporary database.
sudo -u postgres createdb tempdb
Restore the dump to your temporary database.
sudo -u postgres pg_restore --verbose --clean --no-acl --no-owner -h localhost -d tempdb latest.dump
Dump the data in the correct format.
sudo -u postgres pg_dump --inserts -a -b tempdb > data.sql
Read the dump into sqlite3.
sqlite3
> .read data.sql
This is an approximate solution. You will most likely need to make some small adjustments.
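For example, a minimal cleanup pass might look like this (a sketch only; it assumes data.sql contains just INSERT statements plus Postgres session settings, and db/development.sqlite3 is a placeholder for your development database):
grep -v -e '^SET ' -e '^SELECT pg_catalog' data.sql > data_sqlite.sql
sqlite3 db/development.sqlite3 < data_sqlite.sql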
I agree with Craig Ringer that it might be worth getting postgres running locally. Hopefully this process will do the trick though!
Related
I'm attempting to write a bash script that will dump a database and then import it to a staging database. I would like the staging database to match the 'master' database.
I have the following code; however, I receive:
ERROR 1062 (23000) at line 23: Duplicate entry '1' for key 'PRIMARY'
# Dump production master database, excluding school_hosts table
mysqldump -h $MYSQL_HOST -u $MYSQL_USERNAME -p$MYSQL_PASSWORD --no-create-info --ignore-table=hcl_master.school_hosts hcl_master > hcl_master.sql
# Dump hcl staging database, for backup.
mysqldump -h $MYSQL_HOST -u $MYSQL_USERNAME -p$MYSQL_PASSWORD hclstaging_master > hclstaging_master_backup.sql
# Import dump file into staging master database
mysql -h $MYSQL_HOST -u $MYSQL_USERNAME -p$MYSQL_PASSWORD hclstaging_master < hcl_master.sql
After searching, I found that I could add --replace to the mysql command that does the import; however, I receive an error stating:
mysql: unknown option '--replace'
Can anybody help with getting this script to work correctly? I'm unsure how I can drop the staging database before I import, or how to get it to overwrite the primary key records.
Any help would be much appreciated. I am using MariaDB.
--replace is a mysqldump option that you specify when creating the dump, not something you can tell mysql when importing the dump.
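For example (a sketch only, reusing the variables from your script), creating the dump with --replace makes the import overwrite rows that already exist with the same primary key:
mysqldump -h $MYSQL_HOST -u $MYSQL_USERNAME -p$MYSQL_PASSWORD --replace --no-create-info --ignore-table=hcl_master.school_hosts hcl_master > hcl_master.sql
Alternatively, you could drop and recreate the staging database (or truncate its tables) before running the import.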
All I want to do is essentially take the exact DynamoDB tables, with their data, that exist in a remote instance (e.g. the Amplify staging environment/API) and import them locally.
I looked at datasync but that seemed to be FE only. I want to take the exact data from staging and sync it to my local Amplify instance - is this even possible? I can't find any information that helps right now.
I'm very used to using Mongo/Postgres etc. and literally being able to take a DB dump and just import it... Am I missing something here?
How about using dynamodump?
Download the data from AWS to your local machine:
python dynamodump.py -m backup -r REGION_NAME -s TABLE_NAME
Then import to Local DynamoDB:
dynamodump -m restore -r local -s SOURCE_TABLE_NAME -d LOCAL_TABLE_NAME --host localhost --port 8000
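If dynamodump isn't installed yet, it is typically available from PyPI (assuming you want the published package rather than running dynamodump.py from a checkout of the project):
pip install dynamodump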
You have to build a custom script that reads from the online DynamoDB and then populates the local DynamoDB. I found the Docker image works well for running a local instance; make sure to override the default jar arguments so the database isn't kept only in memory and the data persists.
Rough step-by-step instructions:
Download Docker Desktop (if you want)
Start Docker Desktop and, in a terminal, pull the official DynamoDB Local image:
https://hub.docker.com/r/amazon/dynamodb-local/
docker pull amazon/dynamodb-local
And then run the docker container:
docker run --name dynamodb -p 8000:8000 -d amazon/dynamodb-local -jar DynamoDBLocal.jar -sharedDb
Now you can write a Python script that gets the data from the online DB and copies it into the local DynamoDB, as in the official docs:
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-amazon-dynamodb-tables-across-accounts-using-a-custom-implementation.html
Once you work out the connection to the local container (localhost:8000), you should be able to copy all the data.
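As a rough sketch (the table name and key schema below are placeholders), you can point the AWS CLI at the container with --endpoint-url to create the destination table and confirm it exists before running your copy script:
aws dynamodb create-table --table-name MyTable \
  --attribute-definitions AttributeName=id,AttributeType=S \
  --key-schema AttributeName=id,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --endpoint-url http://localhost:8000
aws dynamodb list-tables --endpoint-url http://localhost:8000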
I'm not too well versed on local Amplify instances, but for DynamoDB there is a product which you can use locally called DynamoDB Local.
The full list of download instructions for DynamoDB Local is available here: Downloading And Running DynamoDB Local
If you have Docker installed on your local machine, you can easily download and start the DynamoDB Local service with a few commands:
Download
docker pull amazon/dynamodb-local
Run
docker run --name dynamodb -p 8000:8000 -d amazon/dynamodb-local -jar DynamoDBLocal.jar -sharedDb
This will allow you to avail of 90% of DynamoDB features locally. However, migrating data from the DynamoDB web service to DynamoDB Local is not something that is provided out of the box. For that, you would need to create a small script which you run locally that reads data from your existing table and writes it to your local instance.
An example of reading from one table and writing to a second can be found in the docs here: Copy Amazon DynamoDB Tables Across Accounts
One change you will have to make is manually setting the endpoint_url for DynamoDB Local:
dynamodb_client = boto3.Session(
    aws_access_key_id=args['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key=args['AWS_SECRET_ACCESS_KEY'],
    aws_session_token=args['TEMPORARY_SESSION_TOKEN'],
).client(
    'dynamodb',
    endpoint_url='YOUR_DDB_LOCAL_ENDPOINT_URL'  # e.g. http://localhost:8000 for DynamoDB Local; note endpoint_url is passed to client(), not Session()
)
How do I issue commands to MariaDB via the CLI without actually jumping into the interactive use mode?
I know I can type mysql which will then jump me into the interactive mode where I can write SQL commands like CREATE DATABASE dbname; and then exit to go back to the regular terminal.
However, I'd like to skip that and do something like mysql 'CREATE DATABASE dbname;' all in one line.
mysql --help | grep "\-execute"
Output:
-e, --execute=name Execute command and quit
So to create a database with the command-line client, you just need to execute:
mysql -uuser -p -e"CREATE DATABASE dbname"
You can also concatenate several SQL statements, e.g.
mysql -uuser -p -e"CREATE DATABASE dbname;SHOW DATABASES"
Put the commands that you want executed into a text file (optionally with a file extension of .sql) then, from the command line, do mysql -uuser -p < yourtextfile.sql to have all of the commands in the file executed.
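For example, with a file like this (contents purely illustrative):
-- yourtextfile.sql
CREATE DATABASE IF NOT EXISTS dbname;
USE dbname;
CREATE TABLE example (id INT PRIMARY KEY);
running mysql -uuser -p < yourtextfile.sql executes all of the statements in order.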
I have a project in Google Cloud with a Postgres database; can someone help me export this data to my PC?
Well, you can SSH to your Google Cloud instance and run the command pg_dump db_name > db_name.sql. The pg_dump command exports the given database to SQL format. You can then download the dump to your local computer.
See this link: https://www.postgresql.org/docs/9.4/static/app-pgdump.html
From the terminal, I had to specify the database IP address, user, and password:
pg_dump -h [IP-ADDRESS] -U [DB-USER] [DB-NAME] > dump.sql
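If you then want the data in a local Postgres instance, loading the dump back in could look like this (the local database name is just an example):
createdb mylocaldb
psql -d mylocaldb -f dump.sql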
I have a MySQL DB that is the backend for my Drupal web site. I am going through the Drupal upgrade, and before I do the upgrade I need to make a copy of the database.
mysqladmin create ts_prod_bak -u root --password=XXXXXX && \
mysqldump -u root --password=XXXXXX ts_prod | mysql -u root --password=XXXXXX -h localhost ts_prod_bak
This does create a new DB called ts_prod_bak and fills it with data from ts_prod, but it isn't copying all of the data. I see some tables (cache_*) that are created in the new DB but have a different size than the source. Because of this I am not confident in my backup/copy.
How can I make an exact duplicate of my source database and verify that by restoring it to another DB?
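For reference, a variant that takes a consistent snapshot and restores it into the backup database might look like this (a sketch only; it assumes InnoDB tables and reuses the credentials and names from the commands above):
mysqldump -u root --password=XXXXXX --single-transaction ts_prod > ts_prod.sql
mysqladmin -u root --password=XXXXXX create ts_prod_bak
mysql -u root --password=XXXXXX ts_prod_bak < ts_prod.sql
You could then compare the two databases, for example with CHECKSUM TABLE on each side, to verify the copy.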