command line error: DataSource not set - flyway

This is running on Windows 2008 R2 with Cygwin bash.
I am stuck on what is wrong here; hopefully I have given the pertinent information.
$ sh flyway.sh migrate
Flyway (Command-line Tool) v.3.1
ERROR: DataSource not set! Check your configuration!
----------- file layout
flyway.sh
conf/flyway.properties
drivers/sqljdbc4.jar
sql/V2014.10.20.06.30__rgx_test_live.sql
----------- flyway.sh
#!/usr/bin/bash
'/cygdrive/c/Program Files/Java/jre1.8.0_25/bin/java.exe' \
-cp 'c:\gcs\apps\rgx_flyway\flyway\lib\flyway-commandline-3.1.jar;c:\gcs\apps\rgx_flyway\flyway\lib\flyway-core-3.1.jar' \
org.flywaydb.commandline.Main \
"$@"
----------- conf/flyway.properties
flyway.url=jdbc:sqlserver://127.0.0.1;databaseName=rgx_mars;
flyway.user=auser
flyway.password=apassword
flyway.locations=filesystem:./sql
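A minimal sketch of an alternative, assuming the drivers directory sits next to lib: the java invocation above only puts the two Flyway jars on the classpath, so neither conf/flyway.properties nor drivers/sqljdbc4.jar is necessarily visible to the tool. Flyway's command-line Main also accepts its settings as -key=value arguments, so flyway.sh could add the driver jar and pass the settings explicitly:
# note: the drivers jar path below assumes drivers/ sits next to lib/
'/cygdrive/c/Program Files/Java/jre1.8.0_25/bin/java.exe' \
-cp 'c:\gcs\apps\rgx_flyway\flyway\lib\flyway-commandline-3.1.jar;c:\gcs\apps\rgx_flyway\flyway\lib\flyway-core-3.1.jar;c:\gcs\apps\rgx_flyway\flyway\drivers\sqljdbc4.jar' \
org.flywaydb.commandline.Main \
-url='jdbc:sqlserver://127.0.0.1;databaseName=rgx_mars' \
-user=auser -password=apassword \
-locations=filesystem:./sql \
"$@"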

QEMU serial std output diverges on archlinux guest

I'm trying to bootstrap some installation automation of a freshly downloaded ISO in QEMU. I create a clean img to install to and kick off QEMU like this:
$ qemu-img create -f qcow2 out/main.img 15G
$ qemu-system-x86_64 \
-m 8G \
-serial stdio \
-cdrom out/linux.iso \
-drive file=out/main.img,if=virtio \
-netdev user,id=net0 \
-device e1000,netdev=net0
and I can see Arch boot up. At first both the display and the terminal are in sync, but they soon diverge after the GRUB boot screen.
I'm not sure what piece I'm missing to get this to work. I've seen some people suggest adding -append "root=/dev/sda console=ttyS0" to the QEMU arguments, but (from what I can tell) that requires you to extract the kernel and the initramfs from the ISO (easy enough by mounting it and copying the right files), and it also expects you to already have an installed system on /dev/sda (which is exactly what I'm trying to bootstrap).
At this point I don't know what to search for next, how do I get the full terminal session in my current terminal and not just in my display?
In this case, it was as @Peter Maydell commented: this is not a QEMU question. QEMU was doing exactly what it was supposed to do, but Arch had to be told to use the serial console as its primary means of communication.
Two samples of how this can be done:
bash via console
pipe_dir="$(mktemp -d)"
# QEMU's pipe chardev will read what we write into pipe.in and write the guest's serial output to pipe.out
mkfifo "${pipe_dir}/pipe.in" "${pipe_dir}/pipe.out"
function cleanup {
rm -rfv "${pipe_dir}"
}
trap cleanup EXIT
qemu-system-x86_64 \
-m 8G \
-display none \
-serial pipe:"${pipe_dir}/pipe" \
-drive file=./out/linux.iso,index=0,media=cdrom \
-drive file=./out/main.img,if=virtio &
sleep 2s
# Tab at the ISO boot menu to edit the highlighted entry's kernel command line
printf "\t" > "${pipe_dir}/pipe.in"
sleep 2s
# append a serial console to the kernel command line
printf " console=ttyS0,115200" > "${pipe_dir}/pipe.in"
sleep 2s
# Enter to boot
echo > "${pipe_dir}/pipe.in"
# Whatever other interactions you want go here...
wait
expect via console
set timeout -1
spawn qemu-system-x86_64 \
-m 8G \
-display none \
-serial stdio \
-drive file=./out/linux.iso,index=0,media=cdrom \
-drive file=./out/main.img,if=virtio
sleep 1
send \t
sleep 1
send " console=ttyS0,115200"
sleep 1
send \n
In theory this should be fine, but in practice I still had difficulty interacting with the console and sending the characters needed to log in correctly. I'm sure that is more user error on my part than anything else.
A better solution (again contextual to Arch and not QEMU specifically) was to use a cloud-init script that included my SSH public key. Interactions with the VM were stable, reliable, and easily reproducible.
bash with cloud-init/ssh
$ touch ./out/meta-data
$ cat > ./out/user-data <<EOF
#cloud-config
users:
  - name: root
    ssh_authorized_keys:
      - $(cat ${HOME}/.ssh/id_ed25519.pub)
EOF
$ xorriso -as genisoimage -output ./out/cloud-init.iso \
-volid CIDATA -joliet -rock ./out/meta-data ./out/user-data
$ qemu-system-x86_64 \
-m 8G \
-drive file=./out/linux.iso,index=0,media=cdrom \
-drive file=./out/cloud-init.iso,index=1,media=cdrom \
-drive file=./out/main.img,if=virtio \
-net user,hostfwd=tcp::10022-:22 \
-net nic &
$ function qemu-ssh {
ssh -q -o ConnectTimeout=5 -o StrictHostKeyChecking=no -o "UserKnownHostsFile /dev/null" -p 10022 root@localhost "${@}"
}
$ printf 'Waiting for SSH to go live (this will take a while)...'
$ until qemu-ssh exit; do
printf '.'
done
# This convenience function starts an interactive
# session when supplied with no additional arguments
# but your automation can go here
$ qemu-ssh
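For example, a couple of throwaway commands as stand-ins for the real provisioning steps:
$ qemu-ssh uname -a            # stand-in: confirm we are talking to the live environment
$ qemu-ssh lsblk /dev/vda      # the virtio disk that will receive the installation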

rsync to local USB disk gives "rsync error 2 (Protocol incompatibility)"

I am using backintime for backups, which in turn uses rsync to make snapshots.
Most filesystems on the computer are XFS, including the rsync target;
the system is Ubuntu 20.04 with rsync version 3.1.3, protocol version 31.
I get exit code 2 from rsync, which is "Protocol incompatibility",
and some digging shows this happens if you run rsync across an (ssh) connection
between two computers with different rsync versions, or if login scripts inject unexpected output into the ssh connection. None of that is the case here; this is all local,
see below for the command line.
=> Any more insights into this rsync error? How can a local protocol incompatibility happen
if there is just one /usr/bin/rsync?
Yours,
Steffen
The local USB disk is mounted as
type xfs (rw,nosuid,nodev,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota,uhelper=udisks2)
INFO: Call rsync to take the snapshot
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-root'
WARNING: Command "rsync --recursive --times --devices --specials --hard-links --human-readable \
--links --acls --xattrs --perms --executability --group --owner --info=progress2 \
--no-inc-recursive --delete --delete-excluded -v -i \
--out-format=BACKINTIME: %i %n%L --link-dest=../../20210301-082432-781/backup \
--chmod=Du+wx --exclude=/media/sneumann/LinuxBackup/msbi-corei \
--exclude=/root/.local/share/backintime --exclude=.local/share/backintime/mnt \
--exclude=.gvfs --exclude=.cache* --exclude=[Cc]ache* --exclude=.thumbnails* \
--exclude=[Tt]rash* --exclude=*.backup* --exclude=*~ \
--exclude=/home/sneumann/Ubuntu One --exclude=.dropbox* --exclude=/proc/* \
--exclude=/sys/* --exclude=/dev/* --exclude=/run/* --exclude=/media \
--exclude=/root/.local/share/backintime/takesnapshot_.log \
--exclude=/root/.local/share/backintime --include=/ --include=/** \
--exclude=* / /media/sneumann/LinuxBackup/msbi-corei/backintime/msbi-corei/root/1/new_snapshot/backup"
returns 2
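One way to narrow this down (a sketch, using a hypothetical throwaway test tree rather than the full backup source): re-run rsync by hand against the same XFS target with a minimal option set, then add the preservation flags from the logged command back one at a time until exit code 2 reappears; whichever flag triggers it (for example --acls or --xattrs) is the one the target or the rsync build cannot handle.
# hypothetical small test tree, just to isolate the failing option
$ mkdir -p /tmp/rsync-test/src && echo hello > /tmp/rsync-test/src/file
$ rsync --recursive --times -v /tmp/rsync-test/src/ \
    /media/sneumann/LinuxBackup/msbi-corei/rsync-test/ ; echo "exit code: $?"
# now repeat, adding one flag at a time from the failing command
$ rsync --recursive --times --hard-links --acls --xattrs -v /tmp/rsync-test/src/ \
    /media/sneumann/LinuxBackup/msbi-corei/rsync-test/ ; echo "exit code: $?"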

How are zsh autocompletions for commands with subcommands defined?

I am trying to write a tab-completion script for borg.
So far, I have managed to define completions for borg itself, as well as borg key with its subcommands and borg benchmark with its singular subcommand. However, I am now trying to define completion for borg init and I am having trouble.
The issue presents itself only when I define two arguments under the borg init command to use the same description text; i.e. both -e and --encryption should use the same description, as they are practically the same argument. This has worked fine for borg's arguments, but now it breaks.
This is my code, slightly redacted to spare you the redundancy:
compdef _borg borg
function _borg {
local line ret=1
local -a argus
local logs="--critical --error --warning --debug --info -v --verbose"
argus+=(
"(*)"{-h,--help}"[Show help and exit]"
"(*)-V[Show Borg version and exit]"
"($logs)--critical[Work on log level CRITICAL]"
"($logs)--error[Work on log level ERROR]"
"($logs)--warning[Work on log level WARNING (default)]"
"($logs)"{--info,-v,--verbose}"[Work on log level INFO]"
"($logs)--debug[Enable debug output; log level DEBUG]"
{-p,--progress}"[Show progress]"
"--log-json[Output one JSON object per log line instead of formatted text]"
"--show-version[Show/log borg version]"
"--show-rc[Show/log returncode]"
"--consider-part-files[treat part files like normal files (e.g. to list/extract them)]"
"--lock-wait[Wait at most SECONDS for acquiring a repository/cache lock (default 1)]:SECONDS:()"
"--umask[Set umask to M (local and remote; default 0077)]:M (umask value, e.g. 0077):()"
"--remote-path[Use PATH as borg executable on the remote (default: \"borg\")]:PATH:()"
"--remote-ratelimit[Set remote network upload rate limit in kiByte/s (default: 0=unlimited)]:RATE:()"
"--debug-profile[Write execution profile in Borg format into FILE.]:FILE:_files"
"--rsh[Use this command to connect to the \"borg serve\" process (default: \"ssh\")]:RSH:()"
"1: :((init\:\"Initialize a new repository\" \
create\:\"Create a new archive\" \
extract\:\"Extract the contents of an archive\" \
check\:\"Verifies consistency of a repository and its archives\" \
rename\:\"Renames an archive in a repository\" \
list\:\"Lists contents of a repository or archive\" \
diff\:\"Finds differences between archives\" \
delete\:\"Deletes an archive or an entire repository (and its cache)\" \
prune\:\"Prunes a repository\" \
info\:\"Shows info about a repository or archive\" \
mount\:\"Mounts an archive as a FUSE filesystem\" \
unmount\:\"Unmounts a FUSE filesystem mounted with \\\"borg mount\\\"\" \
key\:\"Keyword for key-related functions\" \
upgrade\:\"Upgrade a local Borg repository\" \
recreate\:\"EXPERIMENTAL: Recreates contents of existing archives\" \
export-tar\:\"Creates a tarball from an archive\" \
serve\:\"Starts repository server process. Not usually used manually.\" \
config\:\"Gets and sets options in local repository and cache config files\" \
with-lock\:\"Executes another command with the repository lock held\" \
break-lock\:\"Breaks the repository and cache locks\" \
benchmark\:\"Keyword for the benchmark function\"))" \
"*::arg:->args"
)
_arguments -w -s -S -C $argus[@] && ret=0
case $line[1] in
benchmark)
_borg_benchmark
;;
init)
_borg_init
;;
key)
_borg_key
;;
esac
return ret
}
function _borg_benchmark {
# stuff
}
function _borg_benchmark_crud {
# stuff again
}
function _borg_init {
local line ret=1
local -a argus
argus+=(
"-t[This is a test]"
"--test[This is a test]"
"(--append-only)--append-only[Create an append-only mode repository]"
"*::arg:->args"
)
_arguments -w -s -S -C $argus[@] && ret=0
return ret
}
function _borg_key {
# key stuff
}
function _borg_key_changepassphrase {
# stuff
}
function _borg_key_export {
# more stuff
}
function _borg_key_import {
# other stuff
}
If I try to tab-complete borg init - using this setup, I get the following output:
$ borg init -
Completing option
--append-only
--test
-t
-- Create an append-only mode repository
-- This is a test
--append-only
--test
-t
-- Create an append-only mode repository
-- This is a test
--append-only
--test
-t
-- Create an append-only mode repository
-- This is a test
--append-only
--test
-t
-- Create an append-only mode repository
-- This is a test
The completion appears to forget what tabs are and repeats itself four times. If I change --test[This is a test] to --test[This is another test] in _borg_init, I instead get the following completion:
$ borg init -
Completing option
--append-only -- Create an append-only mode repository
--test -- This is another test
-t -- This is a test
The above is "correct", in the sense that it's not broken, but I cannot seem to define arguments that share a description in a subcommand. How should I do that? And, more generally, how are you supposed to define completions for commands with subcommands (which may, in turn, have more arguments)?
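For what it's worth, a sketch of one way around the shared-description problem, reusing the brace-expansion idiom the top-level _borg specs already use for --info/-v/--verbose: give both spellings a common exclusion list and let the braces expand them into two specs with the same description, so _arguments treats them as one option rather than two options that merely happen to share a description:
function _borg_init {
    local line ret=1
    local -a argus
    argus+=(
        # -t and --test exclude each other and share one description
        "(-t --test)"{-t,--test}"[This is a test]"
        "(--append-only)--append-only[Create an append-only mode repository]"
        "*::arg:->args"
    )
    _arguments -w -s -S -C $argus[@] && ret=0
    return ret
}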

How to use placeholders prefix in flyway command line

Originally posted in https://github.com/flyway/flyway/issues/2429
I have an issue (probably a wrong configuration) using Flyway placeholders: I can use placeholders for my own variables, but it fails because one value in an SQL query has a similar syntax to the Flyway placeholder syntax.
Which version and edition of Flyway are you using?
5.2.4 using official docker image
If this is not the latest version, can you reproduce the issue with the latest one as well? (Many bugs are fixed in newer releases and upgrading will often resolve the issue)
5.2.4 tag is the latest version in docker hub (https://hub.docker.com/r/boxfuse/flyway/)
Which client are you using? (Command-line, Java API, Maven plugin, Gradle plugin)
Command line thru the docker image
Which database are you using (type & version)?
MySQL Server version: 5.7.26 - MySQL Community Server (GPL) - This is a legacy project
Which operating system are you using?
Linux CentOS 7 x64 (uname -r = 3.10.0-957.5.1.el7.x86_64)
What did you do?
(Please include the content causing the issue, any relevant configuration settings, the SQL statement that failed (if relevant) and the command you ran.)
I apply flyway to initialize/update a MySQL database; here are a couple of SQL commands.
Here I use placeholders with xxx prefixes:
CREATE USER IF NOT EXISTS '${xxxdbuser}'@'${xxxdbclip}' IDENTIFIED WITH mysql_native_password BY '${xxxdbpass}';
GRANT ALL PRIVILEGES ON ${xxxdbbase}.* TO '${xxxdbuser}'@'${xxxdbclip}';
FLUSH PRIVILEGES;
... then in another SQL script, from a third-party app, I insert content containing ${row}. I don't want Flyway to interpret ${row} as a placeholder, only my own vars starting with ${xxx, such as ${xxxdbuser}:
INSERT INTO `xxx_xxx` (`name`, `template`, `lang`, `group`, `version`, `data`, `size`, `style`, `modified`) VALUES
... ('addressbook.email.rows', '', '', 0, '1.3.001', 'a:1:{i:0;a:6:{ ... \"label\";s:21:\"$row_cont[type_label]\";s:4:\"name\";s:12:\"${row}[type]\";s:5:\"align\";... :{i:0;s:4:\"100%\";}}}', '100%', '', 1150326789), ...
I guess the placeholderPrefix parameter described in https://flywaydb.org/documentation/commandline/info or the FLYWAY_PLACEHOLDER_PREFIX env var described in https://flywaydb.org/documentation/envvars#FLYWAY_PLACEHOLDER_PREFIX is for that purpose, but I didn't succeed in using them!
Here is my command using docker:
docker run --rm --network="$(docker network ls --filter name=app_mysql_dev --filter "label=type=app" --format '{{.ID}}')" \
-v `pwd`/code/Admin/install:/flyway/sql \
-e FLYWAY_URL=jdbc:mysql://${host}:${port}?useSSL=false \
-e FLYWAY_SCHEMAS=${base} \
-e FLYWAY_USER=root \
-e FLYWAY_PASSWORD=${root_pwd} \
-e FLYWAY_PLACEHOLDERS_PREFIX="\${xxx" \
-e FLYWAY_PLACEHOLDERS_XXXDBBASE=${base} \
-e FLYWAY_PLACEHOLDERS_XXXDBUSER=${user} \
-e FLYWAY_PLACEHOLDERS_XXXDBPASS=${pass} \
-e FLYWAY_PLACEHOLDERS_XXXDBCLIP=${clip} \
-e FLYWAY_PLACEHOLDERS_XXXVHOST=${vhost} \
-e FLYWAY_PLACEHOLDERS_XXXSCHEME=${scheme} \
-e FLYWAY_CONNECT_RETRIES=5 \
boxfuse/flyway:5.2.4 -locations=filesystem:/flyway/sql/custom/ \
migrate
What did you expect to see?
All ${xxx placeholders should be replaced by their corresponding ENV values, and the ${row} string in the SQL code should stay unchanged.
What did you see instead?
Flyway error:
Flyway Community Edition 5.2.4 by Boxfuse
Database: jdbc:mysql://tasks.atlas-mysql:3306 (MySQL 5.7)
ERROR: No value provided for placeholder expressions: ${row}. Check your configuration!
I guess I did not configure my command correctly... any help, advice and/or command-line example would be appreciated.
Regards,
Chris
I think there are a couple of problems in your command:
-e FLYWAY_PLACEHOLDERS_PREFIX="\${xxx"
should be FLYWAY_PLACEHOLDER_PREFIX (no S), and
-e FLYWAY_PLACEHOLDERS_XXXDBBASE=${base}
should be FLYWAY_PLACEHOLDERS_DBBASE (as XXX is part of the prefix, it is not included in the placeholder name; and analogously for the following lines).
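Put together, a sketch of the corrected invocation (only the placeholder-related -e lines change; everything else stays as in the original command):
docker run --rm --network="$(docker network ls --filter name=app_mysql_dev --filter "label=type=app" --format '{{.ID}}')" \
-v `pwd`/code/Admin/install:/flyway/sql \
-e FLYWAY_URL=jdbc:mysql://${host}:${port}?useSSL=false \
-e FLYWAY_SCHEMAS=${base} \
-e FLYWAY_USER=root \
-e FLYWAY_PASSWORD=${root_pwd} \
-e FLYWAY_PLACEHOLDER_PREFIX='${xxx' \
-e FLYWAY_PLACEHOLDERS_DBBASE=${base} \
-e FLYWAY_PLACEHOLDERS_DBUSER=${user} \
-e FLYWAY_PLACEHOLDERS_DBPASS=${pass} \
-e FLYWAY_PLACEHOLDERS_DBCLIP=${clip} \
-e FLYWAY_PLACEHOLDERS_VHOST=${vhost} \
-e FLYWAY_PLACEHOLDERS_SCHEME=${scheme} \
-e FLYWAY_CONNECT_RETRIES=5 \
boxfuse/flyway:5.2.4 -locations=filesystem:/flyway/sql/custom/ \
migrate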

How to use Qt's 'windeployqt' in Linux / Fedora

I'm currently trying to cross-compile my Qt apps on a Fedora 21 machine to Windows (32 bit, for now). Compilation works without problems, but deployment doesn't. Of course, I could copy all the necessary files out of the directories, but I think that's a waste of time, so I want to use Qt's 'windeployqt' tool.
But whenever I invoke it, e.g. in Qt Creator as a build step, it just puts out this message (my test application is called day_404 :D):
Unable to find dependent libraries of /home/marius/Entwicklung/build-day_404-Windows_32bit-Release/release/day_404.exe :Not implemented.
Do any of you know how to fix this and use windeployqt without using Windows?
Thanks in advance,
Marius
The windeployqt tool doesn't seem to be usable on Fedora 23. It relies on accessing qmake and thus doesn't work in the mingw cross-compile environment, where you build with mingw32-qmake-qt5 (or mingw64-qmake-qt5). Even if this issue were patched, it wouldn't work with Qt5 projects that use mingw64-cmake.
A relatively simple way to get a list of all DLLs that need to be copied for deployment is to run the application under wine and trace all dll loads.
For example like this:
$ WINEDEBUG=+loaddll wine ./myapp 2> dll.log
The dll paths can then be extracted like this:
$ grep Loaded dll.log | grep -v 'system32\|:load_builtin_dll' \
| awk -F'"' '{print $2}' \
| sed -e 's#\\\\#/#g' -e 's/^[A-Z]://' \
| sort > dll.lst
The file dll.lst looks like this for a typical Qt5 project cross-compiled with mingw64:
/path/to/cwd/myapp.exe
/path/to/cwd/project.dll
[..]
/usr/x86_64-w64-mingw32/sys-root/mingw/bin/libpng16-16.dll
/usr/x86_64-w64-mingw32/sys-root/mingw/bin/libstdc++-6.dll
/usr/x86_64-w64-mingw32/sys-root/mingw/bin/libwinpthread-1.dll
/usr/x86_64-w64-mingw32/sys-root/mingw/bin/libxml2-2.dll
/usr/x86_64-w64-mingw32/sys-root/mingw/bin/Qt5Core.dll
/usr/x86_64-w64-mingw32/sys-root/mingw/bin/Qt5Gui.dll
/usr/x86_64-w64-mingw32/sys-root/mingw/bin/Qt5Widgets.dll
/usr/x86_64-w64-mingw32/sys-root/mingw/bin/zlib1.dll
/usr/x86_64-w64-mingw32/sys-root/mingw/lib/qt5/plugins/imageformats/qgif.dll
/usr/x86_64-w64-mingw32/sys-root/mingw/lib/qt5/plugins/imageformats/qico.dll
/usr/x86_64-w64-mingw32/sys-root/mingw/lib/qt5/plugins/imageformats/qjpeg.dll
/usr/x86_64-w64-mingw32/sys-root/mingw/lib/qt5/plugins/platforms/qwindows.dll
You can then deploy those files like this:
$ mkdir -p "$deploy_dir"/{imageformats,platforms}
$ for i in imageformats platforms ; do
grep "/plugins/$i" dll.lst | xargs -r cp -t "$deploy_dir"/$i
done
$ grep -v '/plugins/' dll.lst | xargs -r cp -t "$deploy_dir"
Wine Config
For running a cross-compiled binary under wine, the mingw dll directory has to be added to the wine path, e.g. via:
sed 's/^\("PATH".*\)"$/\1;Z:\\\\usr\\\\x86_64-w64-mingw32\\\\sys-root\\\\mingw\\\\bin"/' \
-i $HOME/.wine/system.reg
The file ~/.wine/system.reg is automatically created by wine if it doesn't exist yet.
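To verify that the change took effect, one option (assuming wine's built-in cmd is available) is:
$ wine cmd /c 'echo %PATH%'
# the mingw bin directory should now appear at the end of the output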
PELDD
You can also use the tool peldd to get a list of all DLLs that a windows binary depends on. The tool runs on Linux, e.g.:
$ peldd myapp.exe -a -p . \
| sed -e 's#^\./#'"$PWD"'/#' -e 's#^\([^/]\)#'"$PWD"'/\1#' \
| sort > dll2.lst
The tool transitively walks all dependencies as recorded in the binaries, but DLLs that are conditionally loaded at runtime (think dlopen(), think Qt plugins) leave no traces in the binary headers. In contrast, when running under wine, those DLLs are recorded as well. For our example, the difference could be:
/usr/x86_64-w64-mingw32/sys-root/mingw/bin/libjpeg-62.dll
/usr/x86_64-w64-mingw32/sys-root/mingw/lib/qt5/plugins/imageformats/qgif.dll
/usr/x86_64-w64-mingw32/sys-root/mingw/lib/qt5/plugins/imageformats/qico.dll
/usr/x86_64-w64-mingw32/sys-root/mingw/lib/qt5/plugins/imageformats/qjpeg.dll
/usr/x86_64-w64-mingw32/sys-root/mingw/lib/qt5/plugins/platforms/qwindows.dll
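Since both lists are sorted, a quick way to see exactly which DLLs only the wine trace catches is to diff them, e.g.:
$ comm -13 dll2.lst dll.lst    # lines only in dll.lst, i.e. DLLs seen only at runtime under wine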
