Cannot get flyway-docker to recognize local files in volumes

I am trying to use Flyway to set up a DB2 test/demo environment in a Docker container. I have an image of DB2 running in a Docker container and am now trying to get Flyway to create the database environment. I can connect to the DB2 container, create DB2 objects, and load them with data, but I am looking for a way for non-technical users to do this (i.e. clone a GitHub repo and issue a single docker run command).
The Flyway Docker site (https://github.com/flyway/flyway-docker) indicates that it supports the following volumes:
| Volume | Description |
|-------------------|--------------------------------------------------------|
| `/flyway/conf` | Directory containing a flyway.conf |
| `/flyway/drivers` | Directory containing the JDBC driver for your database |
| `/flyway/sql` | The SQL files that you want Flyway to use |
I created the conf, drivers, and sql directories. In the conf directory, I placed a flyway.conf file containing my Flyway URL, user name, and password:
flyway.url=jdbc:db2://localhost:50000/apidemo
flyway.user=DB2INST1
flyway.password=mY%tEst%pAsSwOrD
In the drivers directory, I added the DB2 JDBC Type 4 drivers (e.g. db2jcc4.jar, db2jcc_license_cisuz.jar).
In the sql directory, I put a simple table creation statement (file name: V1__make_temp_table.sql):
CREATE TABLE EDS.REFT_TEMP_DIM (
    TEMP_ID INTEGER NOT NULL
  , TEMP_CD CHAR (8)
  , TEMP_NM VARCHAR (255)
)
DATA CAPTURE NONE
COMPRESS NO;
When I attempt the docker run with the flyway/flyway image as described in the GitHub Readme.md, Flyway does not pick up the flyway.conf file: it reports that it does not know the url, user, and password.
docker run --rm -v sql:/flyway/sql -v conf:/flyway/conf -v drivers:/flyway/drivers flyway/flyway migrate
Flyway Community Edition 6.5.5 by Redgate
ERROR: Unable to connect to the database. Configure the url, user and password!
I then put the url, user, and password inline, and it could not find the JDBC driver.
docker run --rm -v sql:/flyway/sql -v drivers:/flyway/drivers flyway/flyway -url=jdbc:db2://localhost:50000/apidemo -user=DB2INST1 -password=mY%tEst%pAsSwOrD migrate
ERROR: Unable to instantiate JDBC driver: com.ibm.db2.jcc.DB2Driver => Check whether the jar file is present
Caused by: Unable to instantiate class com.ibm.db2.jcc.DB2Driver : com.ibm.db2.jcc.DB2Driver
Caused by: java.lang.ClassNotFoundException: com.ibm.db2.jcc.DB2Driver
Therefore, I believe it is the way I am setting up the local file system, or the way I am associating local files with the Flyway volumes, that is causing the issue. Does anyone have an idea of what I am doing wrong?

You need to supply absolute paths to your volumes for Docker to mount them.
Changing the relative paths to absolute paths fixed the volume mount issue.
docker run --rm \
-v /Users/steve/github-ibm/flyway-db-migration/sql:/flyway/sql \
-v /Users/steve/github-ibm/flyway-db-migration/conf:/flyway/conf \
-v /Users/steve/github-ibm/flyway-db-migration/drivers:/flyway/drivers \
flyway/flyway migrate
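If you would rather not type the absolute paths by hand, here is a small sketch that prints the equivalent command. It assumes you run it from the cloned repo root and that the sql, conf, and drivers directories from the question live there:
import subprocess  # not used; the script only prints the command
from pathlib import Path

# Resolve the repo-relative directories to absolute paths, since Docker only
# bind-mounts host paths that are absolute.
repo = Path.cwd()
mounts = " ".join(
    f"-v {repo / name}:/flyway/{name}" for name in ("sql", "conf", "drivers")
)
print(f"docker run --rm {mounts} flyway/flyway migrate")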

Related

AWS amplify/dynamo/appsync - how to sync data locally

All I want to do is essentially take the exact DynamoDB tables, with their data, that exist in a remote instance (e.g. the Amplify staging environment/API) and import them locally.
I looked at datasync, but that seemed to be front-end only. I want to take the exact data from staging and sync it to my local Amplify instance - is this even possible? I can't find any information that is helping right now.
I'm very used to using Mongo/Postgres etc., where I can literally take a DB dump and just import it... I may be missing something here?
How about using dynamodump?
Download the data from AWS to your local machine:
python dynamodump.py -m backup -r REGION_NAME -s TABLE_NAME
Then import to Local DynamoDB:
dynamodump -m restore -r local -s SOURCE_TABLE_NAME -d LOCAL_TABLE_NAME --host localhost --port 8000
Alternatively, you can build a custom script that reads from the online DynamoDB and then populates the local DynamoDB. I found the Docker image perfect for running a local instance; just make sure to pass the jar file name explicitly so the default in-memory command is overridden and your data persists.
Rough macro-level steps:
1. Download Docker Desktop (if you want).
2. Start Docker Desktop and, in a terminal, pull the official DynamoDB Local image (https://hub.docker.com/r/amazon/dynamodb-local/):
docker pull amazon/dynamodb-local
3. Run the container, overriding the default command so the -inMemory flag is dropped and data is written to disk (the jar shipped in the image is DynamoDBLocal.jar; -sharedDb keeps a single database file regardless of the credentials used):
docker run --name dynamodb -p 8000:8000 -d amazon/dynamodb-local -jar DynamoDBLocal.jar -sharedDb
4. Now you can write a Python script that reads the data from the online DB and copies it into the local DynamoDB, as in the official docs:
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-amazon-dynamodb-tables-across-accounts-using-a-custom-implementation.html
Once you have worked out the connection to the local container (localhost:8000), you should be able to copy all the data; a sketch of such a script follows.
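A minimal sketch of such a copy script, assuming the table already exists on both sides, default AWS credentials are configured, and DynamoDB Local is listening on localhost:8000 (the table name is a placeholder):
import boto3

# Source: the real table in AWS (uses the default credentials/region).
source = boto3.resource("dynamodb").Table("MyTable")
# Destination: the same table name in DynamoDB Local.
local = boto3.resource("dynamodb", endpoint_url="http://localhost:8000").Table("MyTable")

scan_kwargs = {}
while True:
    # Scan one page of items from the online table...
    page = source.scan(**scan_kwargs)
    # ...and batch-write them into the local table.
    with local.batch_writer() as batch:
        for item in page["Items"]:
            batch.put_item(Item=item)
    if "LastEvaluatedKey" not in page:
        break
    scan_kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]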
I'm not too well versed in local Amplify instances, but for DynamoDB there is a product you can use locally called DynamoDB Local.
The full set of download instructions for DynamoDB Local is available here: Downloading And Running DynamoDB Local
If you have Docker installed on your local machine, you can easily download and start the DynamoDB Local service with a few commands:
Download
docker pull amazon/dynamodb-local
Run
docker run --name dynamodb -p 8000:8000 -d amazon/dynamodb-local -jar DynamoDBLocal.jar -sharedDb
This gives you roughly 90% of DynamoDB's features locally. However, migrating data from the DynamoDB web service to DynamoDB Local is not provided out of the box. For that, you would need to create a small script which runs locally, reads data from your existing table, and writes it to your local instance.
An example of reading from one table and writing to a second can be found in the docs here: Copy Amazon DynamoDB Tables Across Accounts
One change you will have to make is manually setting the endpoint_url for DynamoDB Local (note that it belongs on the client call, not on the Session):
import boto3

dynamodb_client = boto3.Session(
    aws_access_key_id=args['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key=args['AWS_SECRET_ACCESS_KEY'],
    aws_session_token=args['TEMPORARY_SESSION_TOKEN'],
).client(
    'dynamodb',
    endpoint_url='YOUR_DDB_LOCAL_ENDPOINT_URL'  # e.g. http://localhost:8000
)
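Before running the copy, it can help to confirm the local endpoint is reachable at all. A quick check might look like this (the endpoint, region name, and dummy credentials are assumptions for a default local setup):
import boto3

# List the tables on DynamoDB Local; an empty list still proves the endpoint answers.
local = boto3.client(
    "dynamodb",
    endpoint_url="http://localhost:8000",
    region_name="local",
    aws_access_key_id="dummy",
    aws_secret_access_key="dummy",
)
print(local.list_tables()["TableNames"])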

Alfresco server startup gets hung

I am trying to start an Alfresco server, but it gets hung partway through startup (please see the screenshot below). I copied the Alfresco instance from one server to another and also made the necessary changes in alfresco-global.properties.
Please help with this.
To back up your database and alf_data, you can download and run the following script:
http://www.contcentric.com/alfresco-backup/
Note: you will have to manually back up the indexes from the solr4 folder and any other customizations (such as deployed AMPs and JARs).
Follow these Alfresco restore steps:
1. Install a new Alfresco instance. Do not start the server.
2. Start PostgreSQL using the following command:
./alfresco.sh start postgresql
3. Go to <ALF-HOME>/postgresql/bin.
4. Run the following command:
psql -U alfresco -h <hostname> -p <port>
e.g. psql -U alfresco -h localhost -p 5422
5. It will prompt you for the password; enter it and remember it.
6. Run the following command:
psql -U alfresco -h <host> -p <port> <dbname> < dumpFile
e.g. psql -U alfresco -h localhost -p 5422 alfresco < /opt/migration-backup/01-10-2018-15-54-47/database/alfresco_db_dump
7. You will notice that multiple tables and indexes are created (a quick check is sketched after these steps).
8. Start Tomcat using the following command:
./alfresco.sh start tomcat
9. Test your migration.
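If you prefer to verify the restore from a script rather than psql, a small psycopg2 sketch could look like the following. The host, port, user, and database name are the example values from the steps above, and the password is a placeholder; adjust all of them to your installation:
import psycopg2

# Count the tables in the restored Alfresco database as a rough post-restore check.
conn = psycopg2.connect(host="localhost", port=5422, user="alfresco",
                        password="your-password", dbname="alfresco")
with conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM information_schema.tables "
                "WHERE table_schema = 'public'")
    print("tables restored:", cur.fetchone()[0])
conn.close()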

OpenStack multi-node setup doesn't show VM images on Dashboard

I'm new to OpenStack and I used DevStack to configure a multi-node dev environment, currently composed of a controller and two nodes.
I followed the official documentation and used the development version of DevStack from the official git repo. The controller was set up in a fresh Ubuntu Server 16.04.
I automated all the steps described in the docs using some scripts I made available here.
The issue is that my registered VM images don't appear on the Dashboard. The image page is just empty. When I install a single-node setup, everything works fine.
When I run openstack image list or glance image-list, the image registered during the installation process is listed as below, but it doesn't appear on the Dashboard.
+--------------------+--------------------------+--------+
| ID                 | Name                     | Status |
+--------------------+--------------------------+--------+
| f1db310f-56d6-4f38 | cirros-0.3.5-x86_64-disk | active |
+--------------------+--------------------------+--------+
openstack --version reports openstack 3.16.1, and glance --version reports glance 2.12.1.
I've googled a lot but got no clue.
Is there any special configuration to make images available in multi-node setup?
Thanks.
UPDATE 1
I tried to set the image as shared using
glance image-update --visibility shared f1db310f-56d6-4f38-b5da-11a714203478, then to add it to all listed projects (openstack project list) using the command openstack image add project image_name project_name but it doesn't work either.
UPDATE 2
I've included the command source /opt/stack/devstack/openrc admin admin inside my ~/.profile file so that all environment variables are set. It defines the username and project name as admin, but I've already tried to use the default demo project and demo username.
All env variables defined by the script are shown below.
declare -x OS_AUTH_TYPE="password"
declare -x OS_AUTH_URL="http://10.105.0.40/identity"
declare -x OS_AUTH_VERSION="3"
declare -x OS_CACERT=""
declare -x OS_DOMAIN_NAME="Default"
declare -x OS_IDENTITY_API_VERSION="3"
declare -x OS_PASSWORD="stack"
declare -x OS_PROJECT_DOMAIN_ID="default"
declare -x OS_PROJECT_NAME="admin"
declare -x OS_REGION_NAME="RegionOne"
declare -x OS_TENANT_NAME="admin"
declare -x OS_USERNAME="admin"
declare -x OS_USER_DOMAIN_ID="default"
declare -x OS_USER_DOMAIN_NAME="Default"
declare -x OS_VOLUME_API_VERSION="3"
When I type openstack domain list I get the domain list below.
+---------+---------+---------+--------------------+
| ID      | Name    | Enabled | Description        |
+---------+---------+---------+--------------------+
| default | Default | True    | The default domain |
+---------+---------+---------+--------------------+
As the env variables show, the domain is set as the default one.
After reviewing the whole installation process, I found that the issue was due to an incorrect floating IP range defined inside the local.conf file.
The FLOATING_RANGE variable in such a file must be defined as a subnet of the node network. For instance, my controller IP is 10.105.0.40/24 while the floating IP range is 10.105.0.128/25.
I just forgot to change the FLOATING_RANGE variable (I was using the default value as shown here).
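A quick way to sanity-check the two values before stacking is Python's ipaddress module. The networks below are the example values from this answer (a controller at 10.105.0.40/24 implies the node network 10.105.0.0/24):
import ipaddress

# FLOATING_RANGE must be a subnet of the node network for the setup to work.
node_network = ipaddress.ip_network("10.105.0.0/24")
floating_range = ipaddress.ip_network("10.105.0.128/25")

print(floating_range.subnet_of(node_network))  # True means FLOATING_RANGE is valid here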

odbcinst: SQLGetPrivateProfileString failed with Unable to find component name

I am able to use unixODBC without any problem with my default user, but when I switch to another user, I get an error.
[centos# ~]$ odbcinst -q -s
[ODBC]
[Amazon Redshift DSN 32]
[centos# ~]$ su ruser
Password:
[ruser# centos]$ odbcinst -q -s
odbcinst: SQLGetPrivateProfileString failed with Unable to find component name.
The following environment variables are set for both users:
AMAZONREDSHIFTODBCINI=/etc/amazon.redshiftodbc.ini
ODBCSYSINI=/usr/local/odbc
ODBCINI=/etc/odbc.ini
LD_LIBRARY_PATH=/usr/local/lib
LD_PRELOAD=/usr/local/lib/libodbcinst.so
The ODBC configuration is as follows:
[ruser# centos]$ odbcinst -j
unixODBC 2.3.4
DRIVERS............: /usr/local/odbc /odbcinst.ini
SYSTEM DATA SOURCES: /usr/local/odbc /odbc.ini
FILE DATA SOURCES..: /usr/local/odbc /ODBCDataSources
USER DATA SOURCES..: /etc/odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8
SQLSETPOSIROW Size.: 8
By the way, I don't understand why there are spaces in the above paths, and I don't know if there is a way to change them. Any ideas on how to solve this issue? Overall, the ODBC configuration seems the same for both users.
I found the exact same issue on CentOS, where I could use the default centos user to connect, but not any other user.
I was able to resolve the issue by copying the working user's (or system's) odbc.ini and odbcinst.ini files into the home directory of the other user, like so (where I have renamed them to .odbc.ini and .odbcinst.ini respectively):
~/.odbc.ini
[MSSQLTest]
Driver = ODBC Driver 17 for SQL Server
Server = tcp:<ip of server>
~/.odbcinst.ini
[ODBC Driver 17 for SQL Server]
Description=Microsoft ODBC Driver 17 for SQL Server
Driver=/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.2.so.0.1
UsageCount=1
Finally, I just had to set the following environment variables inside my ~/.bashrc file, and I was able to connect.
export ODBCSYSINI="<full path to user folder, which would be the evaluated path of {echo ~}>"
export ODBCINSTINI=".odbcinst.ini"
export ODBCINI="<full path to user folder, which would be the evaluated path of {echo ~}>/.odbc.ini"
For some reason, I wasn't able to use ~ to reference the user folders, so I had to manually specify the full user path in the environment variables, and thus my complete .bashrc file is simply:
export ODBCSYSINI="/home/mitch"
export ODBCINSTINI=".odbcinst.ini"
export ODBCINI="/home/mitch/.odbc.ini"
With this setup, I could now run the following and connect successfully:
$ isql -v MSSQLTest <sql server username> <sql server password>
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL>
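The same DSN can also be exercised from code. A short pyodbc sketch, assuming the environment variables above are set for the user running it (the credentials are placeholders and the query just proves the round trip works):
import pyodbc

# Connect through the MSSQLTest DSN defined in ~/.odbc.ini.
conn = pyodbc.connect("DSN=MSSQLTest;UID=sql_user;PWD=sql_password")
cursor = conn.cursor()
cursor.execute("SELECT @@VERSION")
print(cursor.fetchone()[0])
conn.close()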

Docker: How can I have sqlite db changes persist to the db file?

Here is my Dockerfile:
FROM golang:1.8
ADD . /go/src/beginnerapp
RUN go get -u github.com/gorilla/mux
RUN go get github.com/mattn/go-sqlite3
RUN go install beginnerapp/
VOLUME /go/src/beginnerapp/local-db
WORKDIR /go/src/beginnerapp
ENTRYPOINT /go/bin/beginnerapp
EXPOSE 8080
The sqlite db file is in the local-db directory, but I don't seem to be using the VOLUME command correctly. Any ideas how I can have changes to the sqlite db file persisted?
I don't mind if the volume is mounted before or after the build.
I also tried running the following command
user#cardboardlaptop:~/go/src/beginnerapp$ docker run -p 8080:8080 -v ./local-db:/go/src/beginnerapp/local-db beginnerapp
docker: Error response from daemon: create ./local-db: "./local-db" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
EDIT: It works when using /absolutepath/local-db instead of the relative path ./local-db.
You don't mount volumes in a Dockerfile.
VOLUME tells Docker that content in those directories can be mounted via docker run --volumes-from.
You're right: Docker doesn't allow relative paths for volumes on the command line.
Run your docker using absolute path:
docker run -v /host/db/local-db:/go/src/beginnerapp/local-db
Your db will be persisted in the host directory /host/db/local-db.
If you want to use relative paths, you can make it work with docker-compose using the "volumes" tag:
volumes:
- ./local-db:/go/src/beginnerapp/local-db
You can try this configuration:
Put the Dockerfile in a directory (e.g. /opt/docker/myproject), then
create a docker-compose.yml file in the same path like this:
version: "2.0"
services:
myproject:
build: .
volumes:
- "./local-db:/go/src/beginnerapp/local-db"
Execute docker-compose up -d myproject in the same path.
Your db should be stored in /opt/docker/myproject/local-db
Just a comment: the content of local-db in the image (if any) will be hidden by the content of the host ./local-db path (initially empty). If the image already contains any data (an initialized database), it is a good idea to copy it out with docker cp, or to include the init logic in an entrypoint or command shell script.
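To confirm from the host that the container's writes really land in the mounted file, a small check like the following can help. The path and database file name are assumptions; use whatever file your app actually creates inside local-db:
import sqlite3

# Open the host-side copy of the database through the bind-mounted directory
# and list its tables (assumes the file already exists).
conn = sqlite3.connect("/host/db/local-db/app.db")
for (name,) in conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'"):
    print(name)
conn.close()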
