How to register an existing ClearCase view located on another server - unix

I have two views of the same name on two different servers, but they are not synchronized.
They do not have the same config spec, as shown below.
The environment is AIX.
t123456#server1:/dwp_root/d/streams/rcl/bin:d> ct catcs -tag deliver_pml_ux
element * A46_1.4.2
t123456#server2:/u/t123456:t> ct catcs -tag deliver_pml_ux
element * DEM_7.7.52
Information from server 1
t123456#server1:/u/t123456:d> ct lsview -l -pro -full deliver_pml_ux
Tag: deliver_pml_ux
  Global path: /clearcase/views/deliver_pml_ux.vws
  Server host: server1
  Region: dwh
  Active: YES
  View tag uuid:58a2fc3c.761011e8.8052.00:02:c7:d6:16:4c
View on host: server1
View server access path: /clearcase/views/deliver_pml_ux.vws
View uuid: 58a2fc3c.761011e8.8052.00:02:c7:d6:16:4c
View owner: ccadmin
t123456#server1:/u/t123456:d> ct lsstgloc -view -long
Name: viewstg
Type: View
Region: dwh
Storage Location uuid: 3a407c2c.ca8b11e1.805a.00:02:f6:0b:ad:4c
Global path: /clearcase/stg/views
Server host: d1dw753
Server host path: /clearcase/stg/views
Information from server 2
t123456#server2:/u/t123456:t> ct lsview -l -pro -full deliver_pml_ux
Tag: deliver_pml_ux
  Global path: /clearcase/views/deliver_pml_ux.vws
  Server host: server2
  Region: dwh
  Active: YES
  View tag uuid:9c721b34.ba1811e1.8026.00:02:f6:0b:ad:4c
View on host: server2
View server access path: /clearcase/views/deliver_pml_ux.vws
View uuid: 9c721b34.ba1811e1.8026.00:02:f6:0b:ad:4c
View owner: loaddsfr
t123456#server2:/u/t123456:t> ct lsstgloc -view -long
t123456#server2:/u/t123456:t>

Check whether you need to use cleartool register in your case:
ct register -view -replace -host server2 -hpath /clearcase/views/deliver_pml_ux.vws /clearcase/views/deliver_pml_ux.vws
It needs to be run on a ClearCase client within the correct target region.
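If the registry still holds a stale entry for the tag, the usual sequence is to unregister the old entry, remove the old tag, then register and tag the storage you want to keep. A minimal sketch, assuming the server1 storage should become the registered one (the uuid comes from the server2 listing above; adjust region, host and paths to your setup):
ct unregister -view -uuid 9c721b34.ba1811e1.8026.00:02:f6:0b:ad:4c
ct rmtag -view -region dwh deliver_pml_ux
ct register -view -host server1 -hpath /clearcase/views/deliver_pml_ux.vws /clearcase/views/deliver_pml_ux.vws
ct mktag -view -tag deliver_pml_ux -region dwh -host server1 -gpath /clearcase/views/deliver_pml_ux.vws /clearcase/views/deliver_pml_ux.vws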

Related

AWS CodeDeploy hooks: proceed to the next event on failure

Is there a way to instruct CodeDeploy to move on to the next deployment lifecycle event if a script execution fails in one of the steps defined in appspec.yml?
For example, if the script stop_service.sh called by the ApplicationStop event fails, I would like the error to be ignored and start_service.sh in ApplicationStart to be executed instead. Please ignore the formatting issues in the snippet below.
version: 0.0
os: linux
files:
  - source: /
    destination: /opt/custody_register/platform/3streamer/dev
    overwrite: true
file_exists_behavior: OVERWRITE
permissions:
  - object: /deployment/
    pattern: "**"
    owner: root
    group: root
    mode: 777
    type:
      - file
hooks:
  ApplicationStop:
    - location: ../../../../../custody_register/platform/3streamer/dev/deployment/stop_service.sh
      timeout: 40
      runas: root
      allow_failure: true
  ApplicationStart:
    - location: ../../../../../custody_register/platform/3streamer/dev/deployment/start_service.sh
      timeout: 40
      runas: root
      allow_failure: true
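As far as I know, the hooks section of appspec.yml only understands the location, timeout and runas keys, so the allow_failure entries above are not evaluated by the agent. A common workaround is to make the hook script itself swallow errors, so CodeDeploy sees a zero exit code and continues to ApplicationStart. A minimal sketch of such a wrapper (the service name is a placeholder):
#!/bin/bash
# stop_service.sh - try to stop the service, but always exit 0 so
# CodeDeploy treats ApplicationStop as successful and the deployment
# continues to the ApplicationStart event.
if ! systemctl stop my-service; then
    echo "WARN: stop failed, ignoring so the deployment can continue" >&2
fi
exit 0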

Ejabberd APIs not working with Python requests

I'm using ejabberd from the Docker container; I followed this link to install the ejabberd Docker container.
I tried the Administration APIs in the docs. For example, I tried to register users with the API in Postman. It worked and created the user on the server. But when I send a POST request with the Python requests library, I get a 401 error.
My ejabberd.yml file:
###
### ejabberd configuration file
###
### The parameters used in this configuration file are explained at
###
### https://docs.ejabberd.im/admin/configuration
###
### The configuration file is written in YAML.
### *******************************************************
### *******           !!! WARNING !!!               *******
### *******     YAML IS INDENTATION SENSITIVE       *******
### ******* MAKE SURE YOU INDENT SECTIONS CORRECTLY *******
### *******************************************************
### Refer to http://en.wikipedia.org/wiki/YAML for the brief description.
###

hosts:
  - localhost

loglevel: 4
log_rotate_size: 10485760
log_rotate_date: ""
log_rotate_count: 1
log_rate_limit: 100

certfiles:
  - /home/ejabberd/conf/server.pem
ca_file: "/home/ejabberd/conf/cacert.pem"

## When using let's encrypt to generate certificates
##certfiles:
##  - /etc/letsencrypt/live/localhost/fullchain.pem
##  - /etc/letsencrypt/live/localhost/privkey.pem
##
##ca_file: "/etc/letsencrypt/live/localhost/fullchain.pem"

listen:
  -
    port: 5222
    ip: "::"
    module: ejabberd_c2s
    max_stanza_size: 262144
    shaper: c2s_shaper
    access: c2s
    starttls_required: true
  -
    port: 5269
    ip: "::"
    module: ejabberd_s2s_in
    max_stanza_size: 524288
  -
    port: 5443
    ip: "::"
    module: ejabberd_http
    tls: true
    request_handlers:
      "/admin": ejabberd_web_admin
      "/api": mod_http_api
      "/bosh": mod_bosh
      "/captcha": ejabberd_captcha
      "/upload": mod_http_upload
      "/ws": ejabberd_http_ws
      "/oauth": ejabberd_oauth
  -
    port: 5280
    ip: "::"
    module: ejabberd_http
    request_handlers:
      "/admin": ejabberd_web_admin
  -
    port: 5281
    module: ejabberd_http
    ip: 127.0.0.1
    request_handlers:
      /api: mod_http_api
  -
    port: 1883
    ip: "::"
    module: mod_mqtt
    backlog: 1000
  ##
  ## https://docs.ejabberd.im/admin/configuration/#stun-and-turn
  ## ejabberd_stun: Handles STUN Binding requests
  ##
  ##-
  ##  port: 3478
  ##  ip: "0.0.0.0"
  ##  transport: udp
  ##  module: ejabberd_stun
  ##  use_turn: true
  ##  turn_ip: "{{ IP }}"
  ##  auth_type: user
  ##  auth_realm: "example.com"
  ##-
  ##  port: 3478
  ##  ip: "0.0.0.0"
  ##  module: ejabberd_stun
  ##  use_turn: true
  ##  turn_ip: "{{ IP }}"
  ##  auth_type: user
  ##  auth_realm: "example.com"
  ##-
  ##  port: 5349
  ##  ip: "0.0.0.0"
  ##  module: ejabberd_stun
  ##  certfile: "/home/ejabberd/conf/server.pem"
  ##  tls: true
  ##  use_turn: true
  ##  turn_ip: "{{ IP }}"
  ##  auth_type: user
  ##  auth_realm: "example.com"
  ##
  ## https://docs.ejabberd.im/admin/configuration/#sip
  ## To handle SIP (VOIP) requests:
  ##
  ##-
  ##  port: 5060
  ##  ip: "0.0.0.0"
  ##  transport: udp
  ##  module: ejabberd_sip
  ##-
  ##  port: 5060
  ##  ip: "0.0.0.0"
  ##  module: ejabberd_sip
  ##-
  ##  port: 5061
  ##  ip: "0.0.0.0"
  ##  module: ejabberd_sip
  ##  tls: true

s2s_use_starttls: optional

acl:
  local:
    user_regexp: ""
  loopback:
    ip:
      - 127.0.0.0/8
      - ::1/128
      - ::FFFF:127.0.0.1/128
  admin:
    user:
      - "admin@localhost"
  apicommands:
    user:
      - "admin@localhost"

access_rules:
  local:
    allow: local
  c2s:
    deny: blocked
    allow: all
  announce:
    allow: admin
  configure:
    allow: admin
  muc_create:
    allow: local
  pubsub_createnode:
    allow: local
  trusted_network:
    allow: loopback

api_permissions:
  "API used from localhost allows all calls":
    who:
      ip: 127.0.0.1/8
    what:
      - "*"
      - "!stop"
      - "!start"
  "console commands":
    from:
      - ejabberd_ctl
    who: all
    what: "*"
  "admin access":
    who:
      access:
        allow:
          - acl: loopback
          - acl: admin
      oauth:
        scope: "ejabberd:admin"
        access:
          allow:
            - acl: loopback
            - acl: admin
    what:
      - "*"
      - "!stop"
      - "!start"
  "public commands":
    who:
      ip: 127.0.0.1/8
    what:
      - status
      - connected_users_number
  "some playing":
    from:
      - ejabberd_ctl
      - mod_http_api
    who:
      acl: apicommands
    what: "*"

shaper:
  normal: 1000
  fast: 50000

shaper_rules:
  max_user_sessions: 10
  max_user_offline_messages:
    5000: admin
    100: all
  c2s_shaper:
    none: admin
    normal: all
  s2s_shaper: fast

max_fsm_queue: 10000

acme:
  contact: "mailto:example-admin@example.com"
  ca_url: "https://acme-staging-v02.api.letsencrypt.org/directory"

modules:
  mod_adhoc: {}
  mod_admin_extra: {}
  mod_announce:
    access: announce
  mod_avatar: {}
  mod_blocking: {}
  mod_bosh: {}
  mod_caps: {}
  mod_carboncopy: {}
  mod_client_state: {}
  mod_configure: {}
  mod_disco: {}
  mod_fail2ban: {}
  mod_http_api: {}
  mod_http_upload:
    put_url: https://@HOST@:5443/upload
  mod_last: {}
  mod_mam:
    ## Mnesia is limited to 2GB, better to use an SQL backend
    ## For small servers SQLite is a good fit and is very easy
    ## to configure. Uncomment this when you have SQL configured:
    ## db_type: sql
    assume_mam_usage: true
    default: never
  mod_mqtt: {}
  mod_muc:
    access:
      - allow
    access_admin:
      - allow: admin
    access_create: muc_create
    access_persistent: muc_create
    access_mam:
      - allow
    default_room_options:
      allow_subscription: true  # enable MucSub
      mam: false
  mod_muc_admin: {}
  mod_offline:
    access_max_user_messages: max_user_offline_messages
  mod_ping: {}
  mod_privacy: {}
  mod_private: {}
  mod_proxy65:
    access: local
    max_connections: 5
  mod_pubsub:
    access_createnode: pubsub_createnode
    plugins:
      - flat
      - pep
    force_node_config:
      ## Avoid buggy clients to make their bookmarks public
      storage:bookmarks:
        access_model: whitelist
  mod_push: {}
  mod_push_keepalive: {}
  mod_register:
    ## Only accept registration requests from the "trusted"
    ## network (see access_rules section above).
    ## Think twice before enabling registration from any
    ## address. See the Jabber SPAM Manifesto for details:
    ## https://github.com/ge0rg/jabber-spam-fighting-manifesto
    ip_access: trusted_network
  mod_roster:
    versioning: true
  mod_sip: {}
  mod_s2s_dialback: {}
  mod_shared_roster: {}
  mod_stream_mgmt:
    resend_on_timeout: if_offline
  mod_vcard: {}
  mod_vcard_xupdate: {}
  mod_version:
    show_os: false

### Local Variables:
### mode: yaml
### End:
### vim: set filetype=yaml tabstop=8
I tried to create a user with Postman and it's working.
But when I try to create one with the requests library, it's not working.
api.py
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
import requests
from requests.auth import HTTPBasicAuth
url = "http://localhost:5443/api/register"
data = {
    "user": "testuser2",
    "host": "localhost",
    "password": "password"
}
# res = requests.post(url, json=data, auth=("admin", "admin_password"))
res = requests.post(url, json=data, auth=HTTPBasicAuth("admin", "root"))
print(res)
The response when I run the above script:
<Response [401]>
I have an admin user on the server with the same credentials that I passed to the auth argument of the post method.
I'm new to XMPP and I'm not sure what I'm missing here.
I'm using the latest version of the ejabberd Docker container. I had disabled SSL verification while using Postman.
The full process, with all the steps that you must do, or ensure are correctly done:
Register the account "admin@localhost":
ejabberdctl register admin localhost somepass
Add the "admin#localhost" account to the "admin" ACL:
acl:
admin:
user:
- "admin#localhost"
Allow the "admin" ACL to perform "register" API calls:
api_permissions:
  "admin access":
    who:
      access:
        allow:
          - acl: admin
    what:
      - "register"
In your query, set the account JID and password, not only the account username:
res = requests.post(url, json=data, auth=HTTPBasicAuth("admin@localhost", "somepass"))
Now run your program:
$ python3 api.py
<Response [200]>
Check the account was registered:
$ ejabberdctl registered_users localhost
admin
testuser2
If you try to register an account that already exists, the response will be 409:
$ python3 api.py
<Response [409]>

Get a YAML file with HTTP and use it as a variable in an Ansible playbook

Background
I have a YAML file like this on a web server. I am trying to read it and create the user accounts listed in it with an Ansible playbook.
users:
  - number: 20210001
    name: Aoki Alice
    id: alice
  - number: 20210002
    name: Bob Bryant
    id: bob
  - number: 20210003
    name: Charlie Cox
    id: charlie
What I tried
To confirm how to read a downloaded YAML file dynamically with include_vars, I wrote a playbook like this:
- name: Add users from list
  hosts: workstation
  tasks:
    - name: Download yaml
      get_url:
        url: http://fqdn.of.webserver/path/to/yaml.yml
        dest: "/tmp/tmp.yml"
      notify:
        - Read yaml
        - List usernames
  handlers:
    - name: Read yaml
      include_vars:
        file: /tmp/tmp.yml
        name: userlist
    - name: List usernames
      debug:
        var: "{{ item }}"
      loop: "{{ userlist.users }}"
Problem
In the handler Read yaml, I got the following error message. On the target machine (workstation.example.com), /tmp/tmp.yml is downloaded correctly.
RUNNING HANDLER [Read yaml] *****
fatal: [workstation.example.com]: FAILED! => {"ansible_facts": {"userlist": []},
"ansible_included_var_files": [], "changed": false, "message": "Could not find
or access '/tmp/tmp.yml' on the Ansible Controller.\nIf you are using a module
and expect the file to exist on the remote, see the remote src option"}
Question
How can I get a YAML file with HTTP and use it as a variable with include_vars?
Another option would be to use the uri module to retrieve the value into an Ansible variable, then the from_yaml filter to parse it.
Something like:
- name: Add users from list
  hosts: workstation
  tasks:
    - name: Download YAML userlist
      uri:
        url: http://fqdn.of.webserver/path/to/yaml.yml
        return_content: yes
      register: downloaded_yaml

    - name: Decode YAML userlist
      set_fact:
        userlist: "{{ downloaded_yaml.content | from_yaml }}"
Note that uri works on the Ansible Controller, while get_url works on the target host (or on the host specified in delegate_to); depending on your network configuration, you may need to use different proxy settings or firewall rules to permit the download.
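With the userlist fact decoded this way, later tasks can consume it like any other variable. For example, a sketch that mirrors the List usernames handler above, using the id field from the users file:
- name: List usernames
  debug:
    msg: "{{ item.id }}"
  loop: "{{ userlist.users }}"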
The include_vars task looks for files on the local (control) host, but you've downloaded the file to /tmp/tmp.yml on the remote host. There are a number of ways of getting this to work.
Perhaps the easiest is just running the download task on the control machine instead (note the use of delegate_to):
tasks:
  - name: Download yaml
    delegate_to: localhost
    get_url:
      url: http://fqdn.of.webserver/path/to/yaml.yml
      dest: "/tmp/tmp.yml"
    notify:
      - Read yaml
      - List usernames
This will download the file to /tmp/tmp.yml on the local system, where it will be available to include_vars. For example, if I run this playbook (which grabs YAML content from an example gist I just created)...
- hosts: target
  gather_facts: false
  tasks:
    - name: Download yaml
      delegate_to: localhost
      get_url:
        url: https://gist.githubusercontent.com/larsks/70d8ac27399cb51fde150902482acf2e/raw/676a1d17bcfc01b1a947f7f87e807125df5910c1/example.yaml
        dest: "/tmp/tmp.yml"
      notify:
        - Read yaml
        - List usernames
  handlers:
    - name: Read yaml
      include_vars:
        file: /tmp/tmp.yml
        name: userlist
    - name: List usernames
      debug:
        var: item
      loop: "{{ userlist.users }}"
...it produces the following output:
RUNNING HANDLER [Read yaml] ******************************************************************
ok: [target]
RUNNING HANDLER [List usernames] *************************************************************
ok: [target] => (item=bob) => {
    "ansible_loop_var": "item",
    "item": "bob"
}
ok: [target] => (item=alice) => {
    "ansible_loop_var": "item",
    "item": "alice"
}
ok: [target] => (item=mallory) => {
    "ansible_loop_var": "item",
    "item": "mallory"
}
Side note: based on what I see in your playbook, I'm not sure you want to be using notify and handlers here. If you run your playbook a second time, nothing will happen, because the file /tmp/tmp.yml already exists, so the handlers won't get called.
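If the read-and-list steps should run on every play regardless of whether the file changed, a plain task sequence avoids that pitfall entirely; a sketch of the same logic without notify/handlers:
tasks:
  - name: Download yaml
    delegate_to: localhost
    get_url:
      url: http://fqdn.of.webserver/path/to/yaml.yml
      dest: "/tmp/tmp.yml"

  - name: Read yaml
    include_vars:
      file: /tmp/tmp.yml
      name: userlist

  - name: List usernames
    debug:
      var: item
    loop: "{{ userlist.users }}"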
With @Larsks's answer, I made this playbook, which works correctly in my environment:
- name: Download users list
  hosts: 127.0.0.1
  connection: local
  become: no
  tasks:
    - name: Download yaml
      get_url:
        url: http://fqdn.of.webserver/path/to/yaml/users.yml
        dest: ./users.yml

- name: Add users from list
  hosts: workstation
  tasks:
    - name: Read yaml
      include_vars:
        file: users.yml
    - name: List usernames
      debug:
        msg: "{{ item.id }}"
      loop: "{{ users }}"
Point
Run get_url on the control host
As @Larsks said, you have to run the get_url module on the control host rather than on the target host.
Add become: no to the task run on the control host
Without "become: no", you will get the following error message:
TASK [Gathering Facts] ******************************************************
fatal: [127.0.0.1]: FAILED! => {"ansible_facts": {}, "changed": false, "msg":
"The following modules failed to execute: setup\n  setup: MODULE FAILURE\nSee
stdout/stderr for the exact error\n"}
Use connection: local rather than local_action
If you use local_action rather than connection: local like this:
- name: test get_url
  hosts: workstation
  tasks:
    - name: Download yaml
      local_action:
        module: get_url
        url: http://fqdn.of.webserver/path/to/yaml/users.yml
        dest: ./users.yml
    - name: Read yaml
      include_vars:
        file: users.yml
    - name: output remote yaml
      debug:
        msg: "{{ item.id }}"
      loop: "{{ users }}"
You will get the following error message:
TASK [Download yaml] ********************************************************
fatal: [workstation.example.com]: FAILED! => {"changed": false, "module_stderr":
"sudo: a password is required\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee
stdout/stderr for the exact error", "rc": 1}
get_url stores a file on the control host
In this situation, the get_url module stores users.yml on the control host (in the current directory), so you have to delete users.yml afterwards if you don't want to leave it behind.
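A sketch of such a cleanup step, added as the last task of the second play and delegated back to the control host so it removes the local copy:
- name: Remove downloaded users list
  delegate_to: localhost
  file:
    path: ./users.yml
    state: absent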

Argo artifacts gives error "http: server gave HTTP response to HTTPS client"

I was setting up Argo in my k8s cluster, in the argo namespace.
I also installed MinIO as an artifact repository (https://github.com/argoproj/argo-workflows/blob/master/docs/configure-artifact-repository.md).
I am configuring a workflow which tries to access that non-default artifact repository as:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifact-passing-
spec:
  entrypoint: artifact-example
  templates:
  - name: artifact-example
    steps:
    - - name: generate-artifact
        template: whalesay
    - - name: consume-artifact
        template: print-message
        arguments:
          artifacts:
          # bind message to the hello-art artifact
          # generated by the generate-artifact step
          - name: message
            from: "{{steps.generate-artifact.outputs.artifacts.hello-art}}"
  - name: whalesay
    container:
      image: docker/whalesay:latest
      command: [sh, -c]
      args: ["cowsay hello world | tee /tmp/hello_world.txt"]
    outputs:
      artifacts:
      # generate hello-art artifact from /tmp/hello_world.txt
      # artifacts can be directories as well as files
      - name: hello-art
        path: /tmp/hello_world.txt
        s3:
          endpoint: argo-artifacts-minio.argo:9000
          bucket: my-bucket
          key: /my-output-artifact.tgz
          accessKeySecret:
            name: argo-artifacts-minio
            key: accesskey
          secretKeySecret:
            name: argo-artifacts-minio
            key: secretkey
  - name: print-message
    inputs:
      artifacts:
      # unpack the message input artifact
      # and put it at /tmp/message
      - name: message
        path: /tmp/message
        s3:
          endpoint: argo-artifacts-minio.argo:9000
          bucket: my-bucket
          accessKeySecret:
            name: argo-artifacts-minio
            key: accesskey
          secretKeySecret:
            name: argo-artifacts-minio
            key: secretkey
    container:
      image: alpine:latest
      command: [sh, -c]
      args: ["cat /tmp/message"]
I created the workflow in the argo namespace by:
argo submit --watch artifact-passing-nondefault-new.yaml -n argo
But the workflow fails with an error:
STEP PODNAME DURATION MESSAGE
✖ artifact-passing-z9g64 child 'artifact-passing-z9g64-150231068' failed
└---⚠ generate-artifact artifact-passing-z9g64-150231068 12s failed to save outputs: Get https://argo-artifacts-minio.argo:9000/my-bucket/?location=: http: server gave HTTP response to HTTPS client
Can someone help me to solve this error?
Since the MinIO setup runs without TLS configured, the workflow should specify that it connects to an insecure artifact repository.
Adding the field insecure: true to the s3 definition section of the workflow solves the issue:
s3:
  endpoint: argo-artifacts-minio.argo:9000
  insecure: true
  bucket: my-bucket
  key: /my-output-artifact.tgz
  accessKeySecret:
    name: argo-artifacts-minio
    key: accesskey
  secretKeySecret:
    name: argo-artifacts-minio
    key: secretkey
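The same flag exists for the default artifact repository: if you configure MinIO in the workflow-controller ConfigMap instead of inline in every workflow, insecure: true goes in the same place. A sketch, assuming the endpoint, bucket and secret names used above:
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  config: |
    artifactRepository:
      s3:
        endpoint: argo-artifacts-minio.argo:9000
        insecure: true
        bucket: my-bucket
        accessKeySecret:
          name: argo-artifacts-minio
          key: accesskey
        secretKeySecret:
          name: argo-artifacts-minio
          key: secretkey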

Salt pillar setup correct but apache-formula keeps generating default configuration

I'm playing around with SaltStack and wanted to use the apache-formula from github/saltstack-formulas.
My pillar looks like the following:
top.sls
base:
  'ubuntu-xenial-salt':
    - systems.ubuntu-xenial-salt
systems/ubuntu-xenial-salt.sls
include:
  - setups.apache.prod

apache:
  sites:
    ubuntu-salt-xenial:
      enabled: True
      template_file: salt://apache/vhosts/standard.tmpl
      template_engine: jinja
      interface: '*'
      port: '80'
      exclude_listen_directive: True  # Do not add a Listen directive in httpd.conf
      ServerName: ubuntu-salt-xenial
      ServerAlias: ubuntu-salt-xenial
      ServerAdmin: minion@ubuntu-salt-xenial.com
      LogLevel: debug
      ErrorLog: /var/log/apache2/example.com-error.log
      CustomLog: /var/log/apache2/example.com-access.log
      DocumentRoot: /var/www/ubuntu-salt-xenial/
      Directory:
        default:
          Options: -Indexes +FollowSymLinks
          Require: all granted
          AllowOverride: None
setups/apache/prod.sls
include:
  - applications.apache

# ``apache`` formula configuration:
apache:
  register-site:
    # any name as an array index, and you can duplicate this section
    UNIQUE_VALUE_HERE:
      name: 'PROD'
      path: 'salt://path/to/sites-available/conf/file'
      state: 'enabled'
      # Optional - use managed file as Jinja Template
      #template: true
      #defaults:
      #  custom_var: "default value"

  modules:
    enabled:   # List modules to enable
      - rewrite
      - ssl
    disabled:  # List modules to disable
      - ldap

  # KeepAlive: Whether or not to allow persistent connections (more than
  # one request per connection). Set to "Off" to deactivate.
  keepalive: 'On'

  security:
    # can be Full | OS | Minimal | Minor | Major | Prod
    # where Full conveys the most information, and Prod the least.
    ServerTokens: Prod

  # ``apache.mod_remoteip`` formula additional configuration:
  mod_remoteip:
    RemoteIPHeader: X-Forwarded-For
    RemoteIPTrustedProxy:
      - 10.0.8.0/24
      - 127.0.0.1

  # ``apache.mod_security`` formula additional configuration:
  mod_security:
    crs_install: True
    # If not set, default distro's configuration is installed as is
    manage_config: True
    sec_rule_engine: 'On'
    sec_request_body_access: 'On'
    sec_request_body_limit: '14000000'
    sec_request_body_no_files_limit: '114002'
    sec_request_body_in_memory_limit: '114002'
    sec_request_body_limit_action: 'Reject'
    sec_pcre_match_limit: '15000'
    sec_pcre_match_limit_recursion: '15000'
    sec_debug_log_level: '3'
    rules:
      enabled:
        modsecurity_crs_10_setup.conf:
          rule_set: ''
          enabled: True
        modsecurity_crs_20_protocol_violations.conf:
          rule_set: 'base_rules'
          enabled: False
    custom_rule_files:
      # any name as an array index, and you can duplicate this section
      UNIQUE_VALUE_HERE:
        file: 'PROD'
        path: 'salt://path/to/modsecurity/custom/file'
        enabled: True
applications/apache.sls
apache:
  lookup:
    version: '2.4'
    default_charset: 'UTF-8'
    global:
      AllowEncodedSlashes: 'On'
    name_virtual_hosts:
      - interface: '*'
        port: 80
      - interface: '*'
        port: 443
Using this pillar configuration, calling highstate for my minion ubuntu-xenial-salt runs without any error; however, the resulting setup is not what the pillar declares. For example:
the enabled rewrite module is not there.
the virtual host config is not set up as in the pillar.
everything seems to be pretty much the standard configuration from example.pillar.
When I call
salt 'ubuntu-xenial-salt' pillar.data
I get the pillar data just as I modified it... I can't understand what is happening...
ubuntu-xenial-salt:
    ----------
    apache:
        ----------
        keepalive:
            On
        lookup:
            ----------
            default_charset:
                UTF-8
            global:
                ----------
                AllowEncodedSlashes:
                    On
            name_virtual_hosts:
                |_
                  ----------
                  interface:
                      *
                  port:
                      80
                |_
                  ----------
                  interface:
                      *
                  port:
                      443
            version:
                2.4
        mod_remoteip:
            ----------
            RemoteIPHeader:
                X-Forwarded-For
            RemoteIPTrustedProxy:
                - 10.0.8.0/24
                - 127.0.0.1
        mod_security:
            ----------
            crs_install:
                True
            custom_rule_files:
                ----------
                UNIQUE_VALUE_HERE:
                    ----------
                    enabled:
                        True
                    file:
                        PROD
                    path:
                        salt://path/to/modsecurity/custom/file
            manage_config:
                True
            rules:
                ----------
                enabled:
                    None
                modsecurity_crs_10_setup.conf:
                    ----------
                    enabled:
                        True
                    rule_set:
                modsecurity_crs_20_protocol_violations.conf:
                    ----------
                    enabled:
                        False
                    rule_set:
                        base_rules
            sec_debug_log_level:
                3
            sec_pcre_match_limit:
                15000
            sec_pcre_match_limit_recursion:
                15000
            sec_request_body_access:
                On
            sec_request_body_in_memory_limit:
                114002
            sec_request_body_limit:
                14000000
            sec_request_body_limit_action:
                Reject
            sec_request_body_no_files_limit:
                114002
            sec_rule_engine:
                On
        modules:
            ----------
            disabled:
                - ldap
            enabled:
                - ssl
                - rewrite
        register-site:
            ----------
            UNIQUE_VALUE_HERE:
                ----------
                name:
                    PROD
                path:
                    salt://path/to/sites-available/conf/file
                state:
                    enabled
        security:
            ----------
            ServerTokens:
                Prod
        sites:
            ----------
            ubuntu-salt-xenial:
                ----------
                CustomLog:
                    /var/log/apache2/example.com-access.log
                Directory:
                    ----------
                    default:
                        ----------
                        AllowOverride:
                            None
                        Options:
                            -Indexes +FollowSymLinks
                        Require:
                            all granted
                DocumentRoot:
                    /var/www/ubuntu-salt-xenial/
                ErrorLog:
                    /var/log/apache2/example.com-error.log
                LogLevel:
                    debug
                ServerAdmin:
                    minion@ubuntu-salt-xenial.com
                ServerAlias:
                    ubuntu-salt-xenial
                ServerName:
                    ubuntu-salt-xenial
                enabled:
                    True
                exclude_listen_directive:
                    True
                interface:
                    *
                port:
                    80
                template_engine:
                    jinja
                template_file:
                    salt://apache/vhosts/standard.tmpl
Does someone know what's happening here and can help me get it running?
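One way to narrow this down is to compare the pillar the minion sees with what the formula actually renders from it; state.show_sls prints the fully rendered states, so you can check whether your sites and modules values reach the formula at all. A debugging sketch (the apache.modules SLS name assumes the saltstack-formulas layout):
salt 'ubuntu-xenial-salt' pillar.get apache:modules:enabled
salt 'ubuntu-xenial-salt' state.show_sls apache
If the rendered states still show the defaults, the master may be serving a different copy of the formula than the one you edited; check your file_roots or gitfs configuration.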
