How do I issue commands to MariaDB via the CLI without actually jumping into the interactive mode?
I know I can type mysql, which drops me into the interactive mode where I can write SQL statements like CREATE DATABASE dbname; and then exit to go back to the regular terminal.
However, I'd like to skip that and do something like mysql 'CREATE DATABASE dbname;' all in one line.
mysql --help | grep "\-execute"
Output:
-e, --execute=name Execute command and quit
So to create a database with the command-line client, you just need to execute
mysql -uuser -p -e"CREATE DATABASE dbname"
You can also concatenate several SQL statements, e.g.
mysql -uuser -p -e"CREATE DATABASE dbname;SHOW DATABASES"
Put the commands that you want executed into a text file (optionally with a file extension of .sql) then, from the command line, do mysql -uuser -p < yourtextfile.sql to have all of the commands in the file executed.
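For example, a minimal sketch of that approach (the file name and the statements in it are just placeholders):
-- yourtextfile.sql
CREATE DATABASE dbname;
SHOW DATABASES;
and then, from the shell:
mysql -uuser -p < yourtextfile.sql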
Related
I am trying to run a Unix script from SSIS which populates our Aged Debt table for our finance department, but I cannot get my head around it. The script has to be run as user "username", and the command to run is:
P1='0*99999999' P2='2015_03_25*%%YY*Y' P3='Y*0.0' P4='Y*0.0' P5='Y*0.0' P6='Y*0.0' P7='Y*0.0' P8='Y*0.0' /cer_cerprod1/exe/par50219r
I believe that I need to have ssh configured on both sides to do this, and that I can do it from the "Execute Process Task", but I don't think that I am populating the parameters correctly.
Can anyone help?
I currently do this using PuTTY/plink. Like sorrell says above, you use an Execute Process Task to call a batch file. That batch file calls plink, and I pass plink the shell script on the Unix server that I want it to execute.
example of batch file:
echo y | "d:\program files\putty\plink.exe" [username#yourserver.com] -pw [password] -v sh /myremotescriptname.sh
The echo y at the beginning tells plink to accept the server's security credentials (its host key) on first connection.
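Applied to your command, a rough sketch would be to put the parameter assignments into a small wrapper script on the Unix server (the path below is hypothetical) and point the batch file at that script:
#!/bin/sh
# /home/username/run_aged_debt.sh -- hypothetical wrapper on the Unix server
P1='0*99999999' P2='2015_03_25*%%YY*Y' P3='Y*0.0' P4='Y*0.0' \
P5='Y*0.0' P6='Y*0.0' P7='Y*0.0' P8='Y*0.0' /cer_cerprod1/exe/par50219r
and the batch file then becomes:
echo y | "d:\program files\putty\plink.exe" [username@yourserver.com] -pw [password] -v sh /home/username/run_aged_debt.sh
Keeping the parameter assignments on the Unix side means you don't have to fight Windows batch escaping (the %% in particular) inside the Execute Process Task.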
I want to copy my heroku production db (postgres) to my development (sqlite).
Copying a postgres db into another postgres db is easy using heroku pg:pull. Does anyone know how to use this command to copy postgres into sqlite?
Heroku docs on pg:pull do not say how to use different types of dbs. This old article implied that it used to be possible. Setting up a local postgres db is something I'd like to avoid.
You will need to do a pg_restore locally, then dump the data with pg_dump using the -a option to dump data only.
It should look something like this:
Download a data dump.
heroku addons:add pgbackups
heroku pgbackups:capture
curl -o latest.dump `heroku pgbackups:url`
Create a temporary database.
sudo -u postgres createdb tempdb
Restore the dump to your temporary database.
sudo -u postgres pg_restore --verbose --clean --no-acl --no-owner -h localhost -d tempdb latest.dump
Dump the data in the correct format.
sudo -u postgres pg_dump --inserts -a -b tempdb > data.sql
Read the dump into sqlite3 (replace your_development.sqlite3 with your actual sqlite development database file).
sqlite3 your_development.sqlite3
sqlite> .read data.sql
This is an approximate solution. You will most likely need to make some small adjustments.
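For reference, the adjustments are usually at the top of data.sql: pg_dump writes a few Postgres-specific SET lines that sqlite will reject, so delete them (or just ignore the errors) before running .read. A hypothetical excerpt (the table and values are made up):
SET statement_timeout = 0;
SET client_encoding = 'UTF8';
-- sqlite errors on the SET lines above; the INSERTs below are what you actually want
INSERT INTO users VALUES (1, 'alice@example.com', '2015-03-25');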
I agree with Craig Ringer that it might be worth getting postgres running locally. Hopefully this process will do the trick though!
I'm trying to copy a PostgreSQL table into another database, so in pgAdmin 3 I write this query:
$pg_dump -t pl_biz_enhanced business_catalog | psql business_catalog_enhanced
Here pl_biz_enhanced is the table I want to copy and business_catalog is the database that contains it.
But I get a syntax error near $.
That's not an SQL query.
$pg_dump -t pl_biz_enhanced business_catalog | psql business_catalog_enhanced
The $ is a reference to the UNIX shell prompt, which usually ends in $.
This is a shell command. You can't run it in PgAdmin-III.
As far as I know there's no equivalent feature in PgAdmin-III. Either run the pg_dump | psql pipeline from a command prompt, or manually do the equivalent in PgAdmin-III, which would be to dump just the pl_biz_enhanced table of business_catalog and then restore it into the separate database business_catalog_enhanced.
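A rough sketch of the command-line route, with connection options that are only assumptions about your setup:
createdb -h localhost -U postgres business_catalog_enhanced
pg_dump -h localhost -U postgres -t pl_biz_enhanced business_catalog | psql -h localhost -U postgres business_catalog_enhanced
The createdb step is only needed if the target database doesn't exist yet.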
I'm running the following command (where variables have valid values for ssh command and $file - is a .sql file).
nohup ssh -qn ${ssh_user}@${dbs} "sqlplus $dbuser/${dbpswd}@${dbname} <<ENDSQL | tee "${sql_run_output_file}".ssh.log
set echo off
set echo on
set timing on
set time on
set serveroutput on size 1000000
@${file}
ENDSQL
"
When I was using the above command without "nohup" before the ssh command, after an hour or so my connection from the source server (where I'm running ssh) was getting a "Connection reset...." error and hanging my bash shell script (which contains this ssh command). When I use nohup, I don't see the connection issue.
Here's what I'm trying to get, and I need your help:
Change the command shown above so that it will NOT create a nohup.out file.
(Did I read that I can use > instead of | tee ... and add 2>&1?)
I DO NOT want to run the command with a trailing "&" (in the background).
I DO want a LOG file for the sqlplus session that's running on the target DB server via the ssh connection (initiated from the source server).
Thanks.
You can still lose the connection when running ssh under nohup, so it's not really a good solution. If possible, I would recommend that you copy the SQL file to the target server via scp, then ssh into the server, open a screen session and run the command from there (or run it under nohup). Is that an option?
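Something along these lines, reusing the variable names from your command; the /tmp staging path, the remote file name and the log name are assumptions:
scp "${file}" "${ssh_user}@${dbs}:/tmp/"
ssh "${ssh_user}@${dbs}"
and then, on the target server (inside screen, or under nohup):
nohup sqlplus ${dbuser}/${dbpswd}@${dbname} @/tmp/yourscript.sql > /tmp/sqlplus_run.log 2>&1
Redirecting with > ... 2>&1 is also what stops nohup from writing a nohup.out (it only creates that file when stdout is a terminal), and the redirect gives you the log file you asked for. The set timing on / set serveroutput on lines can simply go at the top of the .sql file.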
I would like the Unix bash history to show * for the mysql password.
E.g. if I issue
mysql -uroot -psecuritydemon -h192.168.90.888
then at the Unix prompt, if I use history | grep -i mysql, I get the password entry too. Instead, I would like the history grep result to look like this:
mysql -uroot -p*** -h192.168.90.888
Any way to achieve this?
I don't think it is possible to filter the command that is written to your history in bash. However, I would suggest you use a ~/.my.cnf configuration file as described here: http://support.modwest.com/content/6/242/en/how-do-i-create-a-mycnf-mysql-preference-file.html. And make sure you set the permissions to go-rwx so that no one else can read the file.
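A minimal sketch of such a file, using the values from your example (swap in your real credentials), plus the permission step:
# ~/.my.cnf
[client]
user=root
password=securitydemon
host=192.168.90.888
chmod go-rwx ~/.my.cnf
With that in place, a plain mysql (no -u/-p/-h on the command line) picks up those settings, so the password never appears in your history.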
Your bash history is not the biggest (or at least not the only) concern: if you run mysql this way, anyone can see your password with a simple ps ax while your session is open! Instead use mysql -uroot -p without a password: then the mysql client will present a password prompt that nobody can sniff (unless they're standing over your shoulder, or have root on your computer, or something equally unpreventable).
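In other words, with the host from your example, just drop the password and let the client prompt for it:
mysql -uroot -p -h192.168.90.888
Enter password:
Nothing secret then shows up in history or in ps output.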