How to Handle Runtime Configuration of Symfony2 Using Consul Service Discovery

Our team is presently exploring the idea of service discovery for a Symfony2 application using Consul. This is relatively uncharted territory, so there's very little discussion out there. So far we've discovered:
Runtime configuration has previously been shot down.
A bundle exists to handle such a use case, but it also hasn't seen much activity as of late.
One of the contributors of said bundle suggested that External Parameters might be the answer to the problem.
Sensio has created its own Consul SDK. However, there seems to be little in the way of documentation or official blog articles regarding Symfony2 integration.
Consul provides watches, which can be triggered on various changes.
Our current thought is to explore using Consul watches together with External Parameters to re-trigger a cache build. That said, there is some concern about the overhead of such an operation if services change semi-frequently.
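For example, a key watch can be registered from the Consul CLI and will invoke a handler whenever the watched key changes; a minimal sketch (the key name and handler path are placeholders):
consul watch -type=key -key=app/config/parameters.yml /opt/scripts/rebuild-symfony-cache.sh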
Based on the above, and knowledge of Consul/Symfony internals, would that be an advisable approach? If not, why, and what alternatives are available?

In the company I work, we took a different route.
Instead of fighting against Symfony to accept runtime configuration (something it arguably should support, as Spring Data Consul does, for example), we decided to make Consul update Symfony's configuration: an approach similar in concept, but different in implementation, to what Frank did.
We installed Consul and Consul Template. We created a K/V entry that contains the entire parameters.yml file. Example:
Key: eblock/config/parameters.yml
parameters:
    router.request_context.host: dev.eblock.ca
    router.request_context.scheme: http
    router.request_context.base_url: /
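For reference, that value can be loaded into the K/V store with the Consul CLI; a sketch, assuming a local agent and that the YAML above is saved as parameters.yml:
consul kv put eblock/config/parameters.yml @parameters.yml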
Then a consul template configuration file was added at location /opt/consul-template/config/eblock.cfg:
template {
    source      = "/opt/consul-template/templates/eblock-parameters.yml.ctmpl"
    destination = "/var/www/eblock/app/config/parameters.yml"
    command     = "/opt/eblock/scripts/parameters_updated.sh"
}
The contents of the .ctmpl file are:
{{key "eblock/config/parameters.yml"}}
Finally, our parameters_updated.sh script does:
#!/bin/bash
readonly PROGNAME=$(basename "$0")
readonly LOCKFILE_DIR=/tmp
readonly LOCK_FD=201

lock() {
    local prefix=$1
    local fd=${2:-$LOCK_FD}
    local lock_file=$LOCKFILE_DIR/$prefix.lock

    # create lock file
    eval "exec $fd>$lock_file"

    # acquire the lock
    flock -n $fd \
        && return 0 \
        || return 1
}

lock $PROGNAME || exit 0

export HOME=/root

logger "Starting composer install" && \
/usr/local/bin/composer install -d=/var/www/eblock/ --no-interaction && \
logger "Running composer dump-autoload" && \
/usr/local/bin/composer dump-autoload -d=/var/www/eblock/ --optimize && \
logger "Running app/console c:c/c:w" && \
/usr/bin/php /var/www/eblock/app/console c:c -e=prod --no-warmup && \
/usr/bin/php /var/www/eblock/app/console c:w -e=prod && \
logger "Running doctrine commands" && \
/usr/bin/php /var/www/eblock/app/console doctrine:database:create --env=prod --if-not-exists && \
/usr/bin/php /var/www/eblock/app/console doctrine:migrations:migrate -n --env=prod && \
logger "Restarting php-fpm" && \
/bin/systemctl restart php-fpm &
As long as both the consul and consul-template services are up, then as soon as the value changes under the key consul-template watches, it will dump the rendered file into the configured destination and run the parameters-updated command.
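For completeness, a sketch of how consul-template might be pointed at that configuration (run in the foreground here; in practice it would be wrapped in a systemd unit or similar):
consul-template -config=/opt/consul-template/config/eblock.cfg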
It works like a charm. =)

A simple KV watcher that puts the value into parameters.yml and triggers a cache:clear is the simplest option in my opinion. It also keeps the benefit of container compilation, so the application doesn't have to go to Consul on each request to check whether values have changed. Like you said, there is some overhead, but it seems acceptable if you don't change your parameters every 5 minutes.
We're exploring that option now but if you made any progress on this, an update would be appreciated.
[Update 2016-02-23] We've implemented the idea I mentioned above and it works as expected: well. Mind you, we change our parameters only when deploying a new version (because we also use Consul for service discovery, so there is no need to keep service lists in parameters). We did it mostly because it saves us the boring job of changing parameters on several servers. As usual, this might not work for you, but I think you'd be safe if, as I said before, you don't change your parameters every 5 minutes :)
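For illustration, a minimal sketch of such a KV watch handler (key name and paths are hypothetical; a key watch pipes the key's JSON, with a base64-encoded Value field, to the handler's stdin):
#!/bin/bash
# registered with: consul watch -type=key -key=app/config/parameters.yml /opt/scripts/on-parameters-change.sh
# decode the new value, write it out as parameters.yml, then rebuild the Symfony cache
jq -r '.Value' | base64 -d > /var/www/app/app/config/parameters.yml
/usr/bin/php /var/www/app/app/console cache:clear -e=prod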

Related

404 error when using Google Cloud Scheduler to run Docker container on Cloud Run

I am posting a follow-on question to this one that I posted recently: Docker container failed to start when deploying to Google Cloud Run. I am new to GCP and am trying to teach myself by deploying a simple R script in a Docker container that connects to BigQuery and writes the system time. I've been able to successfully deploy the Docker container, but I cannot invoke it. I believe I'm misunderstanding something fundamental about APIs, and I'd greatly appreciate any input!
So far, I have:
1.- Used the plumber R package to expose the R code as a service by "decorating" it with special annotations
# script called big-query-tutorial.R
library(bigrquery)
library(tidyverse)
project = "xxxx-project"
dataset = "xxxx-dataset"
table = "xxxx-table"
bq_auth("/home/rstudio/xxxx-xxxx.json", email="xxxx@xxxx.com")
#* @get /time
systime <- function(){
# upload Sys.time() to Big Query
insert_upload_job(project=project, data=dataset, table=table, write_disposition="WRITE_APPEND", values=Sys.time() %>% as_tibble(), billing=project)
}
2.- Translated the R code from (1) to a plumber API with this R script
# script called main.R
library(plumber)
r <- plumb("/home/rstudio/big-query-tutorial.R")
r$run(host="0.0.0.0", port=8080)
3.- Made the Dockerfile
FROM rocker/tidyverse:latest
# BEGIN rstudio/plumber layers
RUN apt-get update -qq && apt-get install -y --no-install-recommends \
git-core \
libssl-dev \
libcurl4-gnutls-dev \
curl \
libsodium-dev \
libxml2-dev
RUN R -e "install.packages('plumber', repos='http://cran.us.r-project.org')"
RUN R -e "install.packages('bigrquery', repos='http://cran.us.r-project.org')"
# add json file for authentication with BigQuery and necessary R scripts
ADD xxxx-xxxx.json /home/rstudio
ADD big-query-tutorial.R /home/rstudio
ADD main.R /home/rstudio
# open port 8080 to traffic
EXPOSE 8080
# when the container starts, start the main.R script
ENTRYPOINT ["Rscript", "/home/rstudio/main.R", "--host", "0.0.0.0"]
4.- Successfully ran the container locally on my machine, with the system time being written to BigQuery when I visit http://0.0.0.0:8080/time and then refresh the browser.
5.- Pushed the container to my container registry in Google Cloud
6.- Successfully deployed the container to Cloud Run.
7.- Created a service account (i.e., xxxx@xxxx.iam.gserviceaccount.com) that has roles "Cloud Run Invoker" and "Cloud Scheduler Service Agent".
8.- Set up a Cloud Scheduler job by filling out the fields in the console as follows
Frequency: * * * * * (i.e., once per minute)
Timezone: Pacific Standard Time (PST)
Target: HTTP
URL: xxxx-xxxx.run.app
HTTP method: GET
Auth header: Add OIDC token
Service account: xxxx@xxxx.iam.gserviceaccount.com (i.e., account from (7))
Audience: xxxx-xxxx.run.app (I leave this field blank, it is automatically added)
When I click on "RUN NOW" in Cloud Scheduler, I get the error
httpRequest: {
status: 404
}
When I check the log for Cloud Run, every minute there is the 404 error. The request count under the "METRICS" tab averages out to 0.02/s.
Thank you!
-H.
A couple of recommendations:
Make sure your service account has roles/iam.serviceAccountTokenCreator and roles/cloudscheduler.serviceAgent, which enable impersonation, and roles/run.invoker so it can call Cloud Run.
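For example, the invoker role can be granted at the project level with gcloud (a sketch; the project ID and service account email are placeholders):
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:scheduler-sa@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/run.invoker"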
Also, double-check the OIDC audience you have chosen. A bit about the audience field in OIDC tokens: you must set this field for the invoking service and specify the fully qualified URL of the receiving service. For example, if you are invoking Cloud Run or Cloud Functions, the id_token must include the URL/path of the service.
Example declaration:
gcloud beta scheduler jobs create http oidctest --schedule "5 * * * *" --http-method=GET \
--uri=https://hello-6w42z6vi3q-uc.a.run.app \
--oidc-service-account-email=schedulerunner@$PROJECT_ID.iam.gserviceaccount.com \
--oidc-token-audience=https://hello-6w42z6vi3q-uc.a.run.app

How to avoid duplicating workers with Symfony local server?

I found out that I have quite a lot of Symfony local web server workers registered (around ~35), and the number keeps growing. I usually just start the server with symfony serve and then kill it (Ctrl + \) when it is no longer needed. Apparently killing it leaves a worker behind, as seen in symfony server:status. Running symfony serve again just creates a new worker.
symfony server:status output:
Local Web Server
Not Running
Workers
PID 6327: /usr/bin/php7.2 -S 127.0.0.1:43653 -d variables_order=EGPCS /home/mindaugas/.symfony/php/83247c3521c3ac3990bf3f823ef473db0a9445e1-router.php
PID 24596: /usr/bin/php7.2 -S 127.0.0.1:37789 -d variables_order=EGPCS /home/mindaugas/.symfony/php/83247c3521c3ac3990bf3f823ef473db0a9445e1-router.php
PID 6575: /usr/bin/php7.4 -S 127.0.0.1:42505 -d variables_order=EGPCS /home/mindaugas/.symfony/php/83247c3521c3ac3990bf3f823ef473db0a9445e1-router.php
PID 41550: /usr/bin/php7.4 -S 127.0.0.1:36313 -d variables_order=EGPCS /home/mindaugas/.symfony/php/83247c3521c3ac3990bf3f823ef473db0a9445e1-router.php
...
Environment Variables
None
So my questions regarding this:
#1: Is it possible to quickly kill the server? I assume symfony server:stop is the more correct way, but that requires an additional console window and typing the command.
#2: How do I kill those workers registered from previous sessions? Trying e.g. kill 6327 says there's no such process, and they're not gone after a system restart.
Those extra workers bother me because the server log output in the console is duplicated for each one of them. Right now each request to the server produces around 3k lines of log output in the console, which makes it pretty useless.
I have the same problem after upgrading to Symfony CLI version v4.19.0...
My (very) bad workaround:
rm /home/myusername/.symfony/var/83247c3521c3ac3990bf3f823ef473db0a9445e1/*
Edit: this answer is not accurate, as hinted at by @CrSrr's answer above.
The symfony command adds data to both the ./log and ./var directories. Deleting entries in only one of those does not remove the appearance of non-existent workers in the project directory. I was fooled by checking the status in a directory where server:start had never been run.
A bug report is on file with symfony here.
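A combined cleanup, for reference: a sketch assuming the default ~/.symfony CLI home; it wipes all local server bookkeeping and logs, so use it with care:
# remove stale worker entries and their logs from the Symfony CLI home directory
rm -rf ~/.symfony/var/* ~/.symfony/log/*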
Just faced with a similar issue. The PIDs were not to be found.
PS G:\workspace\joined> symfony server:status
Local Web Server
Not Running
Workers
PID 7732: C:\php\php-cgi.exe -b 63801 -d error_log=C:\Users\George\.symfony\log\e79ad2f4b30a2f0a35c3b5ab08772770b382a3d6.log
PID 19324: C:\php\php-cgi.exe -b 62927 -d error_log=C:\Users\George\.symfony\log\e79ad2f4b30a2f0a35c3b5ab08772770b382a3d6.log
PID 17968: C:\php\php-cgi.exe -b 50197 -d error_log=C:\Users\George\.symfony\log\e79ad2f4b30a2f0a35c3b5ab08772770b382a3d6.log
PID 14040: C:\php\php-cgi.exe -b 55075 -d error_log=C:\Users\George\.symfony\log\e79ad2f4b30a2f0a35c3b5ab08772770b382a3d6.log
Environment Variables
None
In the Windows OS, the log files are kept in %USERPROFILE%\.symfony. There's most likely a similar location in your home directory. Deleting all the contents of that directory allowed a new Windows Terminal app to show:
PS G:\workspace> symfony server:status
Local Web Server
Not Running
Workers
No Workers
Environment Variables
None
Run symfony server:stop to stop the server, then run symfony serve again to start it. Good luck.

Firestore authorization for Google Compute engine for app on a docker container

I have deployed a Node.js app on a Google compute instance via a Docker container. Is there a recommended way to pass the GOOGLE_APPLICATION_CREDENTIALS to the docker container?
I see the documentation states that GCE has Application Default Credentials (ADC), but these are not available in the docker container. (https://cloud.google.com/docs/authentication/production)
I am a bit new to docker & GCP, so any help would be appreciated.
Thank you!
I found documentation on how to inject your GOOGLE_APPLICATION_CREDENTIALS into a Docker container in order to test Cloud Run locally. I know this is not Cloud Run, but I believe the same command can be used to inject your credentials into the container.
Since links and their contents can change over time, I will copy the steps needed to inject the credentials here.
Refer to Getting Started with Authentication for instructions on generating, retrieving, and configuring your Service Account credentials.
The following Docker run flags inject the credentials and configuration from your local system into the local container:
Use the --volume (-v) flag to inject the credential file into the container (this assumes you have already set your GOOGLE_APPLICATION_CREDENTIALS environment variable on your machine):
-v $GOOGLE_APPLICATION_CREDENTIALS:/tmp/keys/FILE_NAME.json:ro
Use the --env (-e) flag to set the GOOGLE_APPLICATION_CREDENTIALS variable inside the container:
-e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json
Optionally, use this fully configured Docker run command:
PORT=8080 && docker run \
    -p 9090:${PORT} \
    -e PORT=${PORT} \
    -e K_SERVICE=dev \
    -e K_CONFIGURATION=dev \
    -e K_REVISION=dev-00001 \
    -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json \
    -v $GOOGLE_APPLICATION_CREDENTIALS:/tmp/keys/FILE_NAME.json:ro \
    gcr.io/PROJECT_ID/IMAGE
Note that the path /tmp/keys/FILE_NAME.json shown in the example above is a reasonable location to place your credentials inside the container. However, other directory locations will also work. The crucial requirement is that the GOOGLE_APPLICATION_CREDENTIALS environment variable must match the bind mount location inside the container.
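To confirm that the variable and the mounted key file made it into the running container, something like this can be used (the container name is a placeholder):
docker exec CONTAINER_NAME sh -c 'echo $GOOGLE_APPLICATION_CREDENTIALS && ls -l $GOOGLE_APPLICATION_CREDENTIALS'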
Hope this works for you.

Cronjob in symfony running on docker

I am trying to run a Symfony command through cron, but it never executes. The application is running in Docker, and I can't find information on whether I need to specify roles or something else. Other standard Linux commands execute successfully, but it looks like cron doesn't want to start app/console. Here is my cronjob:
* * * * * /usr/local/bin/php /usr/lib/myApp/app/console myCommand --env=prod >> /usr/lib/myApp/testLog.txt 2>&1
Does anyone have any suggestions how to run symfony command in docker using cron?
The philosophy of Docker is to have one process per container. That means you usually have no init system and thus no services running inside the container, e.g. dbus or cron.
There are ways to create your own Docker Image with such an init-system/background service. Images based on Alpine often use S6.
Another solution is to use the cron service on your host and rewrite the command to something like docker exec <container_name> /usr/local/bin/php ..., as sketched below.
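A sketch of what that host-side crontab entry could look like (the container name and log path are placeholders; the console path is taken from the question and assumed to exist inside the container):
* * * * * docker exec myapp_container /usr/local/bin/php /usr/lib/myApp/app/console myCommand --env=prod >> /var/log/myapp-cron.log 2>&1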

How to deploy home-grown applications with rpm?

Here is my scenario:
our team develops on AIX
dozens of applications, mostly Perl, shell scripts, batch Java, C
I would like to simplify deployment/rollback procedures - currently we use plain old tarballs with backups
I looked into installp vs. rpm for packaging (see my SO question) and decided to go with rpm - better docs, plus IBM ships it even though they have their own packaging tool, which is a valid enough reason for me
I want to use a separate rpm db, not the main one - I don't have root access, and I also feel it would be beneficial to separate OS packages from our stuff.
the workflow would look like:
each app has a corresponding rpm.spec - checked into source control
build an rpm in a home dir
install/upgrade while using our own packages.rpm
NOTE: I will use this question as notes to myself as I proceed.
Building rpms in my home directory:
1.
I need a .rpmmacros file in the root of my user's home directory, which overrides some system-wide rpm settings:
%_signature gpg
%_gpg_name {yourname}
%_gpg_path ~/.gnupg
%distribution AIX 5.3
%vendor {Northwind? :)}
%make make
2.
This will create the directory structure needed for rpm builds; it will also update .rpmmacros:
#!/bin/sh

[ "x$1" = "x-d" ] && {
    DEBUG="y"
    export DEBUG
    shift 1
}

IAM=`id -un`
PASSWDDIR=`grep ^$IAM: /etc/passwd | awk -F":" '{print $6}'`
HOMEDIR=${HOME:=$PASSWDDIR}

[ ! -d $HOMEDIR ] && {
    echo "ERROR: Home directory for user $IAM not found in /etc/passwd."
    exit 1
}

RHDIR="$HOMEDIR/rpmbuild"
RPMMACROS="$HOMEDIR/.rpmmacros"
touch $RPMMACROS

TOPDIR="%_topdir"
ISTOP=`grep -c ^$TOPDIR $RPMMACROS`
[ $ISTOP -lt 1 ] && {
    echo "%_topdir $HOMEDIR/rpmbuild" >> $RPMMACROS
}

TMPPATH="%_tmppath"
ISTMP=`grep -c ^$TMPPATH $RPMMACROS`
[ $ISTMP -lt 1 ] && {
    echo "%_tmppath $HOMEDIR/rpmbuild/tmp" >> $RPMMACROS
}

[ "x$DEBUG" != "x" ] && {
    echo "$IAM $HOMEDIR $RPMMACROS"
    echo "$RHDIR $TOPDIR $ISTOP"
}

[ ! -d $RHDIR ] && mkdir -p $RHDIR
cd $RHDIR

for i in RPMS SOURCES SPECS SRPMS BUILD tmp ; do
    [ ! -d ./$i ] && mkdir ./$i
done

exit 0
You can check whether rpm picked up your changes with:
rpm --showrc | grep topdir
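With the build tree in place, a package can then be built from its spec file; a sketch with a hypothetical spec name:
rpmbuild -bb ~/rpmbuild/SPECS/myapp.spec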
3.
Specify a non-default location of the RPM database, such as the following:
rpm --dbpath /location/of/your/rpm/database --initdb
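Installs, upgrades, and queries then run against that same database; a sketch with a placeholder package path and name:
rpm --dbpath /location/of/your/rpm/database -Uvh ~/rpmbuild/RPMS/ppc/myapp-1.0-1.ppc.rpm
rpm --dbpath /location/of/your/rpm/database -qa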
I usually check in my spec files to the same place that my code is.
I run a build server (I use Hudson) to kick off a build every night (could be continuous but I chose nightly). The build server checks out the code, builds it, and runs rpmbuild. Hudson sets up a workspace folder that can be deleted after each build so if you set %_topdir to point to that area then you can guarantee there won't be build artifacts left over from a previous build. At the end of the build I check my rpms into version control with a comment containing the build number.
Rolling back is a matter of pulling out the last good rpm from version control, erasing the current rpm, and installing the old rpm.
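A sketch of that rollback, assuming the separate database from above and hypothetical package names:
# erase the current package, then install the previous known-good build
rpm --dbpath /location/of/your/rpm/database -e myapp
rpm --dbpath /location/of/your/rpm/database -ivh myapp-0.9-2.ppc.rpm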
Sounds like you already have a good handle on using your own package db.
