Let's assume I have the following simple Ansible playbook:
---
tasks:
  - name: Upgrade installed packages
    become: true
    apt:
      upgrade: safe

  - name: Install NGINX web server
    become: true
    apt:
      name: nginx
      state: latest
    notify:
      - Restart NGINX

handlers:
  - name: Restart NGINX
    become: true
    service:
      name: nginx
      state: restarted
As you can see, I upgrade the installed APT packages first and only then make sure I have the latest Nginx version. The problem is that if there's an update for Nginx, it will be installed in the first task, and if so, the second task won't be marked as changed and the handler won't be fired. Is that true? Or is Ansible clever enough to fire this handler only when Nginx was actually upgraded in the first task?
I wonder about the best practice for this case. Is there a better way than moving all the separate installation tasks (which should fire handlers on their change) before the task that upgrades all the installed packages?
Thanks!
This is not "The Ansible way", but it is an option.
One way you can do it is by using lsof to find all the PIDs that need a restart, passing that information to systemd to get the service name for each PID, and then going over the list of services and restarting each one of them.
Someone already wrote a Perl script like that
- see example here: https://rwmj.wordpress.com/2014/07/10/which-services-need-restarting-after-an-upgrade/
Another option, much the same, is the restart-services script from the debian-goodies package.
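For the record, a minimal shell sketch of that idea (assuming Linux with systemd, lsof, and procps; the DEL marker and column layout vary between lsof versions, so treat it as a starting point rather than a drop-in solution):

#!/bin/sh
# Find PIDs that still map deleted (i.e. replaced-by-upgrade) shared libraries.
pids=$(lsof 2>/dev/null | grep -w DEL | grep '\.so' | awk '{print $2}' | sort -u)

# Map each PID to its systemd unit, then restart every affected service once.
for pid in $pids; do
    ps -o unit= -p "$pid"
done | sort -u | grep '\.service$' | while read -r unit; do
    systemctl restart "$unit"
done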
I'm trying to run a Docker image, which is essentially the default ASP.NET 6 API:
docker run my-image-name --restart=aways -d -p 80:5111
I've tried a few different ways of running this, but it appears to always do the same thing. First of all, whether I launch in detached mode or interactive, I get the following display:
{"EventId":14,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Now listening on: http://[::]:80","State":{"Message":"Now listening on: http://[::]:80","address":"http://[::]:80","{OriginalFormat}":"Now listening on: {address}"}}
{"EventId":0,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Application started. Press Ctrl\u002BC to shut down.","State":{"Message":"Application started. Press Ctrl\u002BC to shut down.","{OriginalFormat}":"Application started. Press Ctrl\u002BC to shut down."}}
{"EventId":0,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Hosting environment: Production","State":{"Message":"Hosting environment: Production","envName":"Production","{OriginalFormat}":"Hosting environment: {envName}"}}
{"EventId":0,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Content root path: /app","State":{"Message":"Content root path: /app","contentRoot":"/app","{OriginalFormat}":"Content root path: {contentRoot}"}}
It then seems to stall - I can't ctrl-c, nor can I attach to the process (it does show under docker ps). I also can't connect to the API:
http://localhost:5111/swagger
Just returns ERR_CONNECTION_REFUSED
My guess here is that there's something wrong with the dockerfile itself - however, it builds fine. The question I have is: how can I debug this in order to determine the error?
I can't add this as a comment because I don't have the reputation, so it will have to be an answer instead.
I am new to Docker myself, so I am not sure why it is stalling, but when you specify the port with -p 80:5111 you are mapping port 5111 in the container to port 80 on the Docker host.
So you should connect with http://localhost:80/swagger instead.
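As a side note, my own assumption from the docker run syntax (docker run [OPTIONS] IMAGE [COMMAND]): anything placed after the image name is passed to the container as its command rather than parsed by Docker, so the options should come before the image. A minimal sketch of the invocation under that assumption:

# all options before the image name; -p takes HOST_PORT:CONTAINER_PORT,
# so 80:5111 exposes container port 5111 on host port 80
docker run -d --restart=always -p 80:5111 my-image-name

# then test the mapping from the host
curl http://localhost:80/swagger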
I'm trying to run Dagster using celery-k8s, with examples/celery-k8s as a starting point. Upon running the pipeline from the playground I get:
Initialization of resources [s3, io_manager] failed.
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I have configured AWS credentials in env variables as mentioned in the documentation:
deployments:
  - name: "user-code-deployment-test"
    image:
      repository: "somasays/dagster-usercode-example"
      tag: "0.5"
      pullPolicy: Always
    dagsterApiGrpcArgs:
      - "-f"
      - "/workspace/repo.py"
    port: 3030
    env:
      AWS_ACCESS_KEY_ID: AAAAAAAAAAAAAAAAAAAAAAAAA
      AWS_SECRET_ACCESS_KEY: qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
      AWS_DEFAULT_REGION: eu-central-1
I can also see these values set in the env variables of the pod, and I can access the S3 location after pip install awscli by running aws s3 ls (see the screenshot below). The job pod, however, throws Unable to locate credentials.
Please help
The deployment configuration applies to the user code servers. Meanwhile, the Celery executor runs your pipeline code in separate Kubernetes jobs. To provide your secrets there, you will want to configure the env_secrets field of the celery-k8s executor in your pipeline run config.
See https://github.com/dagster-io/dagster/blob/master/python_modules/libraries/dagster-k8s/dagster_k8s/job.py#L321-L327 for details on the config.
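As a rough sketch of that run config (the executor section name comes from the celery-k8s example; dagster-aws-secret is my placeholder and would have to exist as a Kubernetes Secret carrying AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY):

execution:
  celery-k8s:
    config:
      env_secrets:
        - dagster-aws-secret

Each name listed under env_secrets is exposed to the job pods as environment variables, which is exactly where the boto credential lookup happens.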
I am testing Salt as a management system, having used Ansible so far.
How can I trigger an action (specifically, a service reload) when a state has changed?
In Ansible this is done via notify but browsing salt documentation I cannot find anything similar.
I found watch, which works the other way around: "watch something, and if it changed, do this and that".
There is also listen, which seems to be closer to my needs (the documentation mentions a service reload), but I cannot put the pieces together.
To set an example: how would the following scenario work in Salt? Check out a git repo (create it if it does not exist, pull from it otherwise) and, if it has changed, reload a service. The Ansible equivalent is:
- name: clone my service
  git:
    clone: yes
    dest: /opt/myservice
    repo: http://git.example.com/myservice.git
    version: master
    force: yes
  notify:
    - restart my service if needed

- name: restart my service if needed
  systemd:
    name: myservice
    state: restarted
    enabled: True
    daemon_reload: yes
Your example:
ensure my service:
  git.latest:
    - name: http://git.example.com/myservice.git
    - target: /opt/myservice
  service.running:
    - name: myservice
    - watch:
      - git: http://git.example.com/myservice.git
When there is a change in the repo (the initial clone, an update, etc.), the state will be marked as having changes, and the watching states (service.running in this case) will react to them; for a service that means a restart.
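Since the question asks about a reload rather than a restart: service.running also accepts a reload flag, which makes a triggered watch reload the service instead of restarting it. A small variant of the state above (assuming the unit really is named myservice):

myservice:
  service.running:
    - reload: True
    - watch:
      - git: http://git.example.com/myservice.git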
What you are asking is covered in the Salt quickstart.
I am trying to install Elasticsearch, Nginx, Kibana and Sense.
I am following this guide:
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-ubuntu-14-04
I successfully installed Elastic search.
However I am stuck at Kibana.
I successfully followed all the steps, however:
root@dev:~# service kibana start
kibana started
root@dev:~# service kibana status
kibana is not running
When I start the Kibana service it says it started, and after that, when I check whether Kibana is running, it says it is not running.
If more details are needed for this question to be answered, comment and I will provide it.
Use curl localhost:5601 to test whether Kibana is really working.
If it is not, go to /etc/kibana/ and check the config to make sure the host is 0.0.0.0 and the port is 5601.
Another possible problem is that your server does not have enough memory for Kibana to start.
Hope you can provide the Kibana log.
Try using:
journalctl -u kibana.service
to show the log.
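For the config check above, the relevant lines in /etc/kibana/kibana.yml would look like this:

# bind address and port Kibana listens on
server.host: "0.0.0.0"
server.port: 5601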
I was facing the same issue.
I installed the latest version of Kibana (Version: 5.0.2), downloaded from this site:
https://www.elastic.co/downloads/kibana
And I installed Elasticsearch of the same version (5.0.2).
I used the site below:
https://www.elastic.co/guide/en/elasticsearch/reference/current/zip-targz.html
It worked for me.
Thanks.
If you have installed an older version of Kibana, or are using nginx as a wrapper in front of Kibana, the following command will not work:
service kibana start
To start or stop the service, use the following command instead:
systemctl start kibana.service
It seems like the Kibana process failed to start or execute. (I would reckon it is an issue with the kibana.yml file.) You can see why in the Kibana log output. However, Kibana doesn't have a log file by default. You can set it up using the log_file Kibana server property.
Have a look here: https://www.elastic.co/guide/en/kibana/current/kibana-server-properties.html
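For example, a minimal sketch (property name as on the linked page; the path is my placeholder and its directory must be writable by the Kibana user):

# kibana.yml
log_file: /var/log/kibana/kibana.log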
My vhost configuration: http://pastebin.com/ZyXUmQtx (only one domain on this installation)
I've been racking my brain and Google for a solution for the last two days and can't quite seem to come up with one that works.
My setup (from the above configuration):
IP.Board 3.4 installation in %root_domain%/forums/
IP.Content 2.3 installation in %root_domain%/forums/ (with external access index.php on the top level)
Redmine 2.2.2 installed at /usr/share/redmine (this is working, because Thin is running and there are no errors in either log file)
Stale phpMyAdmin configuration at /usr/share/phpmyadmin/ that also doesn't quite load HTML/CSS properly
Symlink from /usr/share/redmine/public to /srv/www/tiberian-genesis.net/public_html/redmine
I'm trying to get Redmine set up to run under %root_domain%/redmine/, but I keep getting a 404 page from my IP.Content installation.
Accessing it takes me to the URL /redmine/login?back_url=http://redmine_thin_servers/redmine/ (which, now that I notice it, seems to not like my upstream...).
In case someone requests the Thin configuration file:
---
pid: /var/run/thin/redmine.pid
group: tgmod
prefix: /redmine
timeout: 30
log: /var/log/thin/redmine.log
max_conns: 1024
require: []
max_persistent_conns: 512
environment: production
user: tgmod
servers: 1
daemonize: true
chdir: /usr/share/redmine
socket: /var/run/thin/redmine.sock
I'm out of ideas here.
Thanks in advance!
I just ended up setting it up on a sub-domain. I wanted to try to proxy it on a sub-directory, but my main website kept interfering with the rules.
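For anyone who lands here later, the sub-domain vhost can stay fairly small. A sketch under my assumptions (the server name is made up, the socket path is taken from the Thin config above, and the prefix: /redmine line is dropped from that config since the app now lives at the root of the sub-domain):

upstream redmine_thin_servers {
    server unix:/var/run/thin/redmine.sock;
}

server {
    listen 80;
    server_name redmine.example.com;
    root /usr/share/redmine/public;

    location / {
        # serve Redmine's static files directly, hand the rest to Thin
        try_files $uri @redmine;
    }

    location @redmine {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://redmine_thin_servers;
    }
}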