gnu make target variable not expanded properly - gnu-make

I'm confused about why stack_name is not being populated. I've defined the variables, but they are not passed on to other make targets; in this case, deploy.
deploy-blog: distribution_id = blah
deploy-blog: AWS_PROFILE = blog
deploy-blog: domain_name = blog.example.com
deploy-blog: stack_name = $(subst .,-,${domain_name})
deploy-blog: deploy
If I instead call make recursively, the same thing happens...
deploy-blog: distribution_id = blah
deploy-blog: AWS_PROFILE = blog
deploy-blog: domain_name = blog.example.com
deploy-blog: stack_name = $(subst .,-,${domain_name})
deploy-blog:
	$(MAKE) deploy
I must explicitly pass them to the deploy target...
deploy-blog: distribution_id = blah
deploy-blog: AWS_PROFILE = blog
deploy-blog: domain_name = blog.example.com
deploy-blog: stack_name = $(subst .,-,${domain_name})
deploy-blog:
	$(MAKE) deploy distribution_id=${distribution_id} AWS_PROFILE=${AWS_PROFILE} domain_name=${domain_name} stack_name=${stack_name}
Why is this happening?
How can I make this work without calling make again?
How can I title this question to be more in line with what people might search for?
Thanks.

Maybe it is related to the way you use variables in your deploy target.
deploy-blog: distribution_id = blah
deploy-blog: AWS_PROFILE = blog
deploy-blog: domain_name = blog.example.com
deploy-blog: stack_name = $(subst .,-,${domain_name})
deploy-blog: deploy
deploy:
	@echo "distribution_id=$(distribution_id) AWS_PROFILE=$(AWS_PROFILE) domain_name=$(domain_name) stack_name=$(stack_name)"
This example produces the following result, with the variables correctly expanded:
make deploy-blog
distribution_id=blah AWS_PROFILE=blog domain_name=blog.example.com stack_name=blog-example-com
However, if $(stack_name) is expanded more than once, the $(subst ...) call is evaluated each time. To avoid that, you can use a simply expanded variable with := so the substitution is performed only once.
stack_name := $(subst .,-,${domain_name})
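In target-specific form, the whole thing might look like this minimal sketch (the @echo just stands in for the real deploy commands):
# target-specific assignments are evaluated in the target's context,
# so $(domain_name) below already has its deploy-blog value
deploy-blog: distribution_id = blah
deploy-blog: AWS_PROFILE = blog
deploy-blog: domain_name = blog.example.com
deploy-blog: stack_name := $(subst .,-,$(domain_name))
deploy-blog: deploy

deploy:
	@echo "distribution_id=$(distribution_id) AWS_PROFILE=$(AWS_PROFILE) domain_name=$(domain_name) stack_name=$(stack_name)"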

Related

Change Airflow Services Logs Path

I am looking for resources on changing the log paths for Airflow services such as the webserver and scheduler. I am running out of space every now and then, so I want to move the logs onto a larger mount.
airflow-scheduler.log
airflow-webserver.log
airflow-scheduler.out
airflow-webserver.out
airflow-scheduler.err
airflow-webserver.err
I am starting the services using the commands given below:
airflow webserver -D
airflow scheduler -D
Thanks in advance!
From https://airflow.apache.org/howto/write-logs.html#writing-logs-locally
Users can specify a logs folder in airflow.cfg using the base_log_folder setting. By default, it is in the AIRFLOW_HOME directory.
You need to change the log-related parameters in airflow.cfg as below:
[core]
...
# The folder where airflow should store its log files
# This path must be absolute
base_log_folder = /YOUR_MOUNTED_PATH/logs
...
[webserver]
...
# Log files for the gunicorn webserver. '-' means log to stderr.
access_logfile = /YOUR_MOUNTED_PATH/webserver-access.log
error_logfile = /YOUR_MOUNTED_PATH/webserver-error.log
...
The log location can be specified in airflow.cfg as follows. By default, it is under AIRFLOW_HOME:
[core]
...
# The folder where airflow should store its log files
# This path must be absolute
base_log_folder = /airflow/logs
...
Please refer to https://airflow.apache.org/howto/write-logs.html?highlight=logs for additional information.
In both master and the 1.10 branch, the locations of the following files are hardcoded unless you pass an argument to the CLI:
airflow-webserver.err
airflow-webserver.out
airflow-webserver.log
airflow-scheduler.err
airflow-scheduler.out
airflow-scheduler.log
The rest of the log locations can be modified through the following variables (a combined airflow.cfg sketch follows this list):
In the [core] section:
base_log_folder
log_filename_template
log_processor_filename_template
dag_processor_manager_log_location
And in the [webserver] section:
access_logfile
error_logfile
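A hedged sketch of how these keys might be grouped in airflow.cfg, with /YOUR_MOUNTED_PATH as a placeholder (the two *_template values shown are just the stock Jinja templates and are only illustrative; newer Airflow versions read these settings from a [logging] section instead of [core]):
[core]
...
# absolute folder for task logs; files rendered from the *_template keys end up under it
base_log_folder = /YOUR_MOUNTED_PATH/logs
log_filename_template = {{ ti.dag_id }}/{{ ti.task_id }}/{{ ts }}/{{ try_number }}.log
log_processor_filename_template = {{ filename }}.log
# this one is an absolute path
dag_processor_manager_log_location = /YOUR_MOUNTED_PATH/logs/dag_processor_manager/dag_processor_manager.log
...
[webserver]
...
access_logfile = /YOUR_MOUNTED_PATH/webserver-access.log
error_logfile = /YOUR_MOUNTED_PATH/webserver-error.log
...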
You can supply flags to the airflow webserver -D and airflow scheduler -D commands to put all of the generated webserver and scheduler log files where you want them. Here's an example:
airflow webserver -D \
--port 8080 \
-A $AIRFLOW_HOME/logs/webserver/airflow-webserver.out \
-E $AIRFLOW_HOME/logs/webserver/airflow-webserver.err \
-l $AIRFLOW_HOME/logs/webserver/airflow-webserver.log \
--pid $AIRFLOW_HOME/logs/webserver/airflow-webserver.pid \
--stderr $AIRFLOW_HOME/logs/webserver/airflow-webserver.stderr \
--stdout $AIRFLOW_HOME/logs/webserver/airflow-webserver.stdout
and
airflow scheduler -D \
-l $AIRFLOW_HOME/logs/scheduler/airflow-scheduler.log \
--pid $AIRFLOW_HOME/logs/scheduler/airflow-scheduler.pid \
--stderr $AIRFLOW_HOME/logs/scheduler/airflow-scheduler.stderr \
--stdout $AIRFLOW_HOME/logs/scheduler/airflow-scheduler.stdout
Note: if you use these, you'll need to create the logs/webserver and logs/scheduler subfolders first. This has only been tested on Airflow 2.1.2.
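For example, a one-liner to create them before starting the daemons (assuming the same paths used in the flags above):
mkdir -p $AIRFLOW_HOME/logs/webserver $AIRFLOW_HOME/logs/scheduler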

Passing AWS S3 environment variables to docker with an R script

I have a dataframe I would like to write to an S3 bucket. I am using aws.s3 for this task. My script looks like the following:
library(aws.s3)
# set up AWS credentials
Sys.setenv("AWS_ACCESS_KEY_ID" = "ASUPERSECRETSTRING",
"AWS_SECRET_ACCESS_KEY" = "ASUPERSECRETSTRING",
"AWS_DEFAULT_REGION" = "us-east-somwhere")
s3write_using(my_data, FUN = write.csv,
bucket = "www.My_bucket",
object = unique_name)
I don't have any problems with the above script, but I don't like hard-coding my AWS credentials. What can I do to avoid this?
Pass your environment variables to the docker run command:
sudo docker run -dit -e AWS_ACCESS_KEY_ID='your_key' -e AWS_SECRET_ACCESS_KEY='your_secret' -e AWS_DEFAULT_REGION='bucket_region' busybox sh
Then modify your script a bit:
test_env=Sys.getenv(c("R_HOME"))
AWS_ACCESS_KEY_ID=Sys.getenv("AWS_ACCESS_KEY_ID");
AWS_SECRET_ACCESS_KEY=Sys.getenv("AWS_SECRET_ACCESS_KEY")
AWS_SECRET_ACCESS_KEY=Sys.getenv("AWS_DEFAULT_REGION")
message("test env is:",test_env)
The code above reads the environment variables (plus one test variable). If you still need to call Sys.setenv, you can pass them like this once you have read them:
Sys.setenv("AWS_ACCESS_KEY_ID" = AWS_ACCESS_KEY_ID,
"AWS_SECRET_ACCESS_KEY" = AWS_SECRET_ACCESS_KEY,
"AWS_DEFAULT_REGION" = AWS_SECRET_ACCESS_KEY)

OpenStack multi-node setup doesn't show VM images on Dashboard

I'm new to OpenStack and I used DevStack to configure a multi-node dev environment, currently composed of a controller and two nodes.
I followed the official documentation and used the development version of DevStack from the official git repo. The controller was set up in a fresh Ubuntu Server 16.04.
I automated all the steps described in the docs using some scripts I made available here.
The issue is that my registered VM images don't appear on the Dashboard. The image page is just empty. When I install a single-node setup, everything works fine.
When I run openstack image list or glance image-list, the image registered during the installation process is listed as below, but it doesn't appear at the Dashboard.
+--------------------+--------------------------+--------+
| ID                 | Name                     | Status |
+--------------------+--------------------------+--------+
| f1db310f-56d6-4f38 | cirros-0.3.5-x86_64-disk | active |
+--------------------+--------------------------+--------+
openstack --version
openstack 3.16.1
glance --version
glance 2.12.1
I've googled a lot but got no clue.
Is there any special configuration to make images available in multi-node setup?
Thanks.
UPDATE 1
I tried to set the image as shared using
glance image-update --visibility shared f1db310f-56d6-4f38-b5da-11a714203478, then to add it to all listed projects (openstack project list) using the command openstack image add project image_name project_name but it doesn't work either.
UPDATE 2
I've included the command source /opt/stack/devstack/openrc admin admin inside my ~/.profile file so that all environment variables are set. It defines the username and project name as admin, but I've already tried to use the default demo project and demo username.
All env variables defined by the script are shown below.
declare -x OS_AUTH_TYPE="password"
declare -x OS_AUTH_URL="http://10.105.0.40/identity"
declare -x OS_AUTH_VERSION="3"
declare -x OS_CACERT=""
declare -x OS_DOMAIN_NAME="Default"
declare -x OS_IDENTITY_API_VERSION="3"
declare -x OS_PASSWORD="stack"
declare -x OS_PROJECT_DOMAIN_ID="default"
declare -x OS_PROJECT_NAME="admin"
declare -x OS_REGION_NAME="RegionOne"
declare -x OS_TENANT_NAME="admin"
declare -x OS_USERNAME="admin"
declare -x OS_USER_DOMAIN_ID="default"
declare -x OS_USER_DOMAIN_NAME="Default"
declare -x OS_VOLUME_API_VERSION="3"
When I type openstack domain list I get the domain list below.
+---------+---------+---------+--------------------+
| ID      | Name    | Enabled | Description        |
+---------+---------+---------+--------------------+
| default | Default | True    | The default domain |
+---------+---------+---------+--------------------+
As the env variables show, the domain is set as the default one.
After reviewing the whole installation process, I found the issue was due to an incorrect floating IP range defined in the local.conf file.
The FLOATING_RANGE variable in such a file must be defined as a subnet of the node network. For instance, my controller IP is 10.105.0.40/24 while the floating IP range is 10.105.0.128/25.
I just forgot to change the FLOATING_RANGE variable (I was using the default value as shown here).
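For reference, a minimal sketch of the relevant local.conf lines for this setup (addresses taken from the example above; everything else omitted):
[[local|localrc]]
HOST_IP=10.105.0.40
# FLOATING_RANGE must be a subnet of the node network (10.105.0.0/24 here)
FLOATING_RANGE=10.105.0.128/25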

Programmatically determine if git-flow is initialized

Is there any way to do this? Is the repo considered initialized if it simply has the git-flow directives in .git/config, like the following?
....
[gitflow "branch"]
master = master
develop = develop
[gitflow "prefix"]
feature = feature/
release = release/
hotfix = hotfix/
support = support/
versiontag = v
Answered here. Basically (a shell sketch of these checks follows the list):
Check the config for gitflow.branch.master and that the branch actually exists in the repo.
Check the config for gitflow.branch.develop and that the branch actually exists in the repo.
The master branch cannot be the same as the develop branch.
Make sure all the prefixes are configured.
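A rough shell sketch of those checks (a hypothetical helper, not the exact code from the linked answer):
gitflow_is_initialized() {
    local master develop prefix
    master=$(git config --get gitflow.branch.master)   || return 1
    develop=$(git config --get gitflow.branch.develop) || return 1
    # master and develop must differ and must both exist in the repo
    [ "$master" != "$develop" ] || return 1
    git show-ref --verify --quiet "refs/heads/$master"  || return 1
    git show-ref --verify --quiet "refs/heads/$develop" || return 1
    # all prefixes must be configured (versiontag may legitimately be empty)
    for prefix in feature release hotfix support versiontag; do
        git config --get "gitflow.prefix.$prefix" >/dev/null || return 1
    done
}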
What I do to check is run the following command:
git flow config >/dev/null 2>&1. If git flow is initialized, it exits with 0, otherwise with 1.
I usually do something like this:
if git flow config >/dev/null 2>&1
then
    echo initialized
else
    echo not initialized
    git flow init -d
fi
I sometimes shorten it to:
git flow config >/dev/null 2>&1 || git flow init -d

How to replace +, / and = in a URL on NGINX

I am sending an encoded URL (after replacing +, / and =) to NGINX, but I am not able to replace them back on the NGINX side before decoding.
I don't want to use something like PHP, which would handle the request and send the response back from a PHP server.
I want something in NGINX itself to manipulate the string.
So far I have tried replace-filter-nginx-module and the let module, but without success.
Any help will be appreciated!
Download LuaJIT from http://luajit.org/download.html and extract the archive. Then run the following commands (remember to replace [FOLDER_PATH] with the actual place where you extracted it):
cd [FOLDER_PATH]/
make
make install
Lua NGINX module: download it from https://github.com/openresty/lua-nginx-module/tags and add it as a module in ./configure.
Assuming NGINX is installed at /opt/nginx, run:
./configure --prefix=/opt/nginx --with-http_ssl_module \
--with-http_secure_link_module \
--add-module=/opt/nginxdependencies/ngx_devel_kit-master \
--add-module=[FOLDER_PATH]/set-misc-nginx-module-0.23 \
--add-module=[FOLDER_PATH]/lua-nginx-module-0.9.10 \
--with-ld-opt='-Wl,-rpath,/usr/local/lib'
Now download https://github.com/openresty/lua-resty-string and add an entry to nginx.conf above the server { ... } block:
lua_package_path "[FOLDER_PATH]/lua-resty-string-master/lib/?.lua;;";
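With those modules built in, here is a hedged sketch of a location that undoes the substitution before decoding, assuming + was replaced with -, / with _, and the = padding stripped (the usual URL-safe alphabet), and that the encoded value arrives as a token query argument. Note that set_by_lua_block needs a newer lua-nginx-module than the 0.9.10 shown above; older versions can use set_by_lua with a quoted string instead.
location /decode {
    set_by_lua_block $decoded {
        local v = ngx.var.arg_token or ""
        -- restore the standard base64 alphabet
        v = v:gsub("-", "+"):gsub("_", "/")
        -- restore the stripped padding
        v = v .. string.rep("=", (4 - #v % 4) % 4)
        -- decode_base64 returns nil for invalid input
        return ngx.decode_base64(v) or ""
    }
    return 200 $decoded;
}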
