Enabling account deletion on nats server - nats.io

I was trying to prune some users from my nats server by doing:
nsc push --system-account SYS -u nats://localhost:4222 -P
but I got the following error:
server nats-comm-2 responded with error: delete accounts request by SOME_KEY_VALUE failed - delete must be enabled in server config
The meaning of the error is pretty obvious when I examine the help documentation for nsc push -P:
Only works with nats-resolver enabled nats-server. Mutually exclusive of account-removal/diff
But I'm not sure how to enable this in my nats server config. How do I allow for account pruning?

I found documentation in the resolver section showing that I could add allow_delete: true to the config, but as the YAML format uses camelCase, I had to write it as allowDelete: true instead.
nats:
  auth:
    enabled: true
    resolver:
      type: full
      allowDelete: true
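If you run nats-server directly rather than through the Helm chart, the same switch goes in the resolver block of the server config in its snake_case form; a minimal sketch, assuming a full resolver with a local JWT directory (the dir path is only illustrative):

# nats-server.conf (sketch)
resolver {
    type: full
    # directory where the full resolver stores account JWTs (illustrative path)
    dir: './jwt'
    # allows nsc push -P to prune/delete accounts
    allow_delete: true
}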

Related

Ansible Hetzner Cloud - Create a server in private network

I am using Ansible to create a server in the Hetzner Cloud; the playbook reads:
- name: create the server at Hetzner
  hetzner.hcloud.hcloud_server:
    name: "{{server_hostname}}"
    enable_ipv4: false
    enable_ipv6: false
    server_type: cx11
    location: "{{server_location}}"
    image: ubuntu-22.04
    ssh_keys:
      - "mykey"
    state: present
    api_token: "{{hetzner_secret}}"
    private_networks: ipfire
  register: server
My aim is to integrate the new server into the private network named 'ipfire' that I have previously created. The server should not be accessible via the internet, so I have disabled ipv4 and ipv6. Rather, I'd like to access the server by connecting via OpenVPN to the private network 'ipfire' and then connecting via ssh from there.
Unfortunately, I get an error message as follows:
PLAY [Order servers] ********************************************************************************************************
TASK [hetznerserver : create the server at Hetzner] *************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (hetzner.hcloud.hcloud_server) module: private_networks. Supported parameters include: rebuild_protection, api_token, location, enable_ipv6, upgrade_disk, ipv4, endpoint, ipv6, firewalls, server_type, state, force, labels, ssh_keys, delete_protection, image, id, name, enable_ipv4, placement_group, force_upgrade, user_data, datacenter, rescue_mode, allow_deprecated_image, volumes, backups."}
PLAY RECAP ******************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
The private_networks parameter of the module does not seem to work like this?
Error messages like Unsupported parameters for (<moduleName>) module: <givenParameter>. Supported parameters include: <supportedParametersList> usually mean the module was called with a parameter that the installed version of the module does not know about.
Therefore one may need to look up the respective documentation, in the example case hcloud_server module – Create and manage cloud servers on the Hetzner Cloud.
If the documentation shows the parameter in question is available, this indicates
either a version mismatch, meaning the installed version of the module is too old and an update is necessary,
or a bug within the module code, which requires further debugging and investigation in the module code.
Code and Documentation Links
ansible-collections / hetzner.hcloud
After further investigation it might turn out that the parameter in question was introduced only recently, for example:
Github hetzner.hcloud Issue #150 "Unable to create cloud server without public ipv4 and ipv6"
Github hetzner.hcloud Pull #160 "Add possibility to specify private network when creating or updating servers"
which indicates that in your case you'll need to update the Ansible collection, since the parameter was not available in the version of the module you are using but was only introduced as of v1.9.0.
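Under that assumption, a minimal sketch of the fix - upgrade the collection and pass the network as a list entry (check the collection changelog for the exact release that added private_networks, and whether it expects network names or IDs):

# upgrade the collection, then verify the installed version
ansible-galaxy collection install hetzner.hcloud --force
ansible-galaxy collection list hetzner.hcloud

# in the task, private_networks is given as a list
    private_networks:
      - ipfire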

Not able to generate a Let's Encrypt Certificate

I am using Laravel Forge to manage my servers and websites. So generating SSL certificates via Let's Encrypt is also done via Forge. Somehow one of my domains throws me an error (see attached).
This domain is running on a server which holds several other domains. The nginx configuration is exactly the same.
The application is a Laravel app running on Laravel Octane.
Error:
2022-06-13 10:41:26 URL:https://forge-certificates.laravel.com/le/1441847/1663342/ecdsa?
env=production [4653] -> "letsencrypt_script1655109686" [1] Cloning
into 'letsencrypt1655109686'... Note: switching to
'91cccc0c234e4decf0a19595fa19a6f306788032'.
You are in 'detached HEAD' state. You can look around, make
experimental changes and commit them, and you can discard any commits
you make in this state without impacting any branches by switching
back to a branch.
If you want to create a new branch to retain commits you create, you
may do so (now or later) by using -c with the switch command. Example:
git switch -c
Or undo this operation with:
git switch -
Turn off this advice by setting config variable advice.detachedHead to
false
HEAD is now at 91cccc0 ensure newline before new section in
openssl.cnf ERROR: Challenge is invalid! (returned: invalid) (result:
["type"] "http-01" ["status"] "invalid"
["error","type"] "urn:ietf:params:acme:error:connection"
["error","detail"] "111.222.333.444: Fetching
http://my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw:
Timeout during connect (likely firewall problem)"
["error","status"] 400
["error"] {"type":"urn:ietf:params:acme:error:connection","detail":"111.222.333.444:
Fetching
http://my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw:
Timeout during connect (likely firewall problem)","status":400}
["url"] "https://acme-v02.api.letsencrypt.org/acme/chall-v3/119151352296/awZDUg"
["token"] "_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw"
["validationRecord",0,"url"] "http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw"
["validationRecord",0,"hostname"] "www.my-domain.de"
["validationRecord",0,"port"] "80"
["validationRecord",0,"addressesResolved",0] "111.222.333.444"
["validationRecord",0,"addressesResolved",1] "2a01:4f8:141:333::84"
["validationRecord",0,"addressesResolved"] ["111.222.333.444","2a01:4f8:141:333::84"]
["validationRecord",0,"addressUsed"] "2a01:4f8:141:333::84"
["validationRecord",0] {"url":"http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"www.my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"2a01:4f8:141:333::84"} ["validationRecord",1,"url"] "http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw"
["validationRecord",1,"hostname"] "www.my-domain.de"
["validationRecord",1,"port"] "80"
["validationRecord",1,"addressesResolved",0] "111.222.333.444"
["validationRecord",1,"addressesResolved",1] "2a01:4f8:141:333::84"
["validationRecord",1,"addressesResolved"] ["111.222.333.444","2a01:4f8:141:333::84"]
["validationRecord",1,"addressUsed"] "111.222.333.444"
["validationRecord",1] {"url":"http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"www.my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"111.222.333.444"}
["validationRecord",2,"url"] "http://my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw"
["validationRecord",2,"hostname"] "my-domain.de"
["validationRecord",2,"port"] "80"
["validationRecord",2,"addressesResolved",0] "111.222.333.444"
["validationRecord",2,"addressesResolved",1] "2a01:4f8:141:333::84"
["validationRecord",2,"addressesResolved"] ["111.222.333.444","2a01:4f8:141:333::84"]
["validationRecord",2,"addressUsed"] "2a01:4f8:141:333::84"
["validationRecord",2] {"url":"http://my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"2a01:4f8:141:333::84"} ["validationRecord"] [{"url":"http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"www.my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"2a01:4f8:141:333::84"},{"url":"http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"www.my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"111.222.333.444"},{"url":"http://my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"2a01:4f8:141:333::84"}] ["validated"] "2022-06-13T08:41:47Z")
I've finally found the solution. Laravel Forge does not support IPv6 out of the box. So you either have to configure Forge to use IPv6 as well or remove all AAAA records pointing to the server managed by Laravel Forge.
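A quick way to see which path is failing (or to confirm the fix) is to compare the IPv4 and IPv6 routes to the challenge URL from an external machine; a sketch, assuming curl and dig are available and using the placeholder domain from the log above:

# does the domain publish an AAAA record?
dig +short AAAA my-domain.de

# can the ACME challenge path be reached over IPv4 and over IPv6?
# (a 404 response is fine here - what matters is whether the connection times out)
curl -4 -v http://my-domain.de/.well-known/acme-challenge/test
curl -6 -v http://my-domain.de/.well-known/acme-challenge/test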

How to write airflow logs to Elasticsearch?

I am using Airflow 1.10.5. I can't seem to find complete documentation or a sample on how to set up remote logging using Elasticsearch. I saw the airflow documentation about logging, but it wasn't helpful. I am trying to write the airflow (not task) logs to ES.
As far as I understand the docs, the ES log handler can only read from ES. You would have to set up your logging to print into a file, then use something like filebeat to post the file content to ES, and Airflow can then read them back...
https://airflow.readthedocs.io/en/stable/howto/write-logs.html#writing-logs-to-elasticsearch
Writing Logs to Elasticsearch
Airflow can be configured to read task
logs from Elasticsearch and optionally write logs to stdout in
standard or json format. These logs can later be collected and
forwarded to the Elasticsearch cluster using tools like fluentd,
logstash or others.
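For reference, that read-from-ES / write-to-stdout behaviour is controlled by a handful of settings in airflow.cfg; the following is only a sketch, since the option names shifted within the 1.10.x line (the elasticsearch_ prefix was dropped at some point), so verify them against the docs for your exact version:

[core]
# hand task logging over to the Elasticsearch handler
remote_logging = True

[elasticsearch]
# where Airflow reads task logs back from
host = localhost:9200
# emit task logs as JSON on stdout so a shipper (filebeat, fluentd, ...) can collect them
write_stdout = True
json_format = True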
I was able to achieve this using the filebeat shipper.
Input config section in filebeat.yml
<snip>
# ============================== Filebeat inputs ===============================
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.

# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /path/to/logs/*.log
</snip>
Output config section in filebeat.yml
<snip>
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "changeme"
</snip>
A good doc to read, especially about the airflow --> ES setup.
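With both sections in place, filebeat can be run against that config and the result checked on the Elasticsearch side; a small usage sketch (the index pattern is the filebeat default and may differ in your setup):

# run filebeat in the foreground with the config above
filebeat -e -c filebeat.yml

# confirm that filebeat indices are being created
curl 'http://localhost:9200/_cat/indices/filebeat*?v'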

Why does Meteor Up (MUP) fail on authentication?

I am currently trying to deploy a Meteor project to an external server for the first time. The server is hosted by DigitalOcean, running ubuntu 16.04, and has an SSH key set up for password-free access.
The error I am getting from MUP is:
[159.203.165.13] - Setup Docker
events.js:165
throw er; // Unhandled 'error' event
^
Error: All configured authentication methods failed
at tryNextAuth (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:290:17)
at SSH2Stream.onUSERAUTH_FAILURE (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:469:5)
at SSH2Stream.emit (events.js:180:13)
at parsePacket (/usr/lib/node_modules/mup/node_modules/ssh2-streams/lib/ssh.js:3647:10)
at SSH2Stream._transform (/usr/lib/node_modules/mup/node_modules/ssh2-streams/lib/ssh.js:551:13)
at SSH2Stream.Transform._read (_stream_transform.js:185:10)
at SSH2Stream._read (/usr/lib/node_modules/mup/node_modules/ssh2-streams/lib/ssh.js:212:15)
at SSH2Stream.Transform._write (_stream_transform.js:173:12)
at doWrite (_stream_writable.js:410:12)
at writeOrBuffer (_stream_writable.js:396:5)
at SSH2Stream.Writable.write (_stream_writable.js:294:11)
at Socket.ondata (_stream_readable.js:651:20)
at Socket.emit (events.js:180:13)
at addChunk (_stream_readable.js:274:12)
at readableAddChunk (_stream_readable.js:261:11)
at Socket.Readable.push (_stream_readable.js:218:10)
Emitted 'error' event at:
at tryNextAuth (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:292:12)
at SSH2Stream.onUSERAUTH_FAILURE (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:469:5)
[... lines matching original stack trace ...]
at Socket.Readable.push (_stream_readable.js:218:10)
At this point I have tried several solutions involving the mup file as per other recommendations such as:
1) Adding in a password - Gives the exact same error as though the change didn't occur.
2) Adding in the same SSH key that I use for authentication to the server as per digital ocean - Says 'privateKey value does not contain a (valid) private key'. I have tried both the key that is used for authentication to the server and every other key I could find short of generating a new one just for Meteor's use.
3) Leaving both blank and allowing it to 'try' ssh-agent - pretends it doesn't know what ssh-agent is and throws an error saying the same thing as when I use a password.
I have looked through and followed the same instructions in the following article: http://meteortips.com/deployment-tutorial/digitalocean-part-1/
This article assumes that there are only two possible states. One being that an ssh key has NOT been used or set up, so it needs to be generated. The second being that an ssh key exists and is set up exactly where they expect it. Unfortunately I seem to be in a different situation. I generated a key using putty prior to setting up the D.O server and created the droplet using that. After creation, the file did not exist. The only thing in the ~/.ssh/ directory was a single file named "authorized_keys" that held the key I would use to connect to the server. This file cannot be used, nor can any file on the server in the other ssh key locations. I also tried copying the file directly onto the server, to no avail.
In some vain hope of finding a solution I also tried running these same commands in both the Meteor build bundle and the source code folder. Neither worked. I should mention that although this is the only article I still have open to try for a solution, I have tried every one I could find using MUP.
If anyone can point me in the right direction with this so I can stop flailing wildly in the dark I would be incredibly grateful.
Edit: As requested, below is the current mup.js file with removed credentials
module.exports = {
  servers: {
    one: {
      // TODO: set host address, username, and authentication method
      host: '111.111.111.11',
      username: 'root',
      // ssh-agent: '/home/Meteor/MeteorKey.pem'
      pem: '~/.ssh/id_rsa.pub'
      // password: 'password1'
      // or neither for authenticate from ssh-agent
    }
  },
  app: {
    // TODO: change app name and path
    name: 'app-name',
    path: '../',
    servers: {
      one: {},
    },
    buildOptions: {
      serverOnly: true,
    },
    env: {
      // TODO: Change to your app's url
      // If you are using ssl, it needs to start with https://
      ROOT_URL: 'http://www.app-name.com',
      MONGO_URL: 'mongodb://mongodb/meteor',
      MONGO_OPLOG_URL: 'mongodb://mongodb/local',
    },
    docker: {
      // change to 'abernix/meteord:base' if your app is using Meteor 1.4 - 1.5
      image: 'abernix/meteord:node-8.4.0-base',
    },
    // Show progress bar while uploading bundle to server
    // You might need to disable it on CI servers
    enableUploadProgressBar: true
  },
  mongo: {
    version: '3.4.1',
    servers: {
      one: {}
    }
  },
  // (Optional)
  // Use the proxy to setup ssl or to route requests to the correct
  // app when there are several apps
  // proxy: {
  //   domains: 'mywebsite.com,www.mywebsite.com',
  // },
};
The error message you are receiving:
Error: All configured authentication methods failed
means that the SSH connection is failing. So the credentials you are using (pity you removed them from the config) are not working. Try a command-line ssh using these same credentials and troubleshoot that - once you can ssh into the server, mup should be able to do its work.
You can get more information out of ssh by specifying one or more -v parameters, eg:
ssh -v -v my_user@remote.com
and it will give you information about the authentication methods it is trying as it goes through them. This will help you narrow down the problem.
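If the command-line test points at the key itself, note that mup's pem setting is generally expected to reference the private key file rather than the .pub public key that the config above uses; a sketch of the servers block under that assumption (host, user, and path are placeholders):

servers: {
  one: {
    host: '111.111.111.11',
    username: 'root',
    // point pem at the private key, not at id_rsa.pub
    pem: '~/.ssh/id_rsa'
    // alternatively omit pem and password entirely and let mup fall back to ssh-agent
  }
},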

Gremlin remote command fails on Azure Cosmos DB: Host did not respond in a timely fashion

I was following the guide create-graph-gremlin-console for setting up the gremlin console with CosmosDB.
My remote-secure.yaml file is:
hosts: [***.documents.azure.com]
port: 433
username: /dbs/graphdb/colls/person
password: <PRIMARY KEY>
connectionPool: {enableSsl: false}
serializer: { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0, config: { serializeResultToString: true }}
and when I ran the command :> g.V().count(), I get
Host did not respond in a timely fashion - check the server status and
submit again.
You can see my terminal for the stack trace.
I'm using gremlin console 3.3.0
I already visited this, but didn't find the answer very useful.
I checked the status of my Cosmos DB server, and it is working fine.
I used the same configuration in a Node.js app, and it works fine there too.
Host needs to be ****.graphs.azure.com, instead of ***.documents.azure.com.
Port needs to be 443.
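Applying both corrections, a sketch of the adjusted remote-secure.yaml (the enableSsl: true change is an assumption on my part, since the Cosmos DB endpoint on port 443 is TLS-only; the other values are unchanged from the question):

hosts: [***.graphs.azure.com]
port: 443
username: /dbs/graphdb/colls/person
password: <PRIMARY KEY>
connectionPool: {enableSsl: true}
serializer: { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0, config: { serializeResultToString: true }}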
