Kibana Server not allowing remote access

I've edited my kibana.yml config file to allow remote access, using the DHCP address my router assigned to the bridged network adapter.
Kibana does not seem to accept connections on the configured IP and port.
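For reference, the remote-access settings in kibana.yml look roughly like this (a sketch with example values matching the log below; adjust to your network):
server.port: 5601
server.host: "10.0.0.137"    # or "0.0.0.0" to listen on all interfaces
elasticsearch.hosts: ["http://localhost:9200"]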
[root@localhost bin]# ./kibana --allow-root
^C^C log [14:36:15.000] [info][plugins-service] Plugin "visTypeXy" is disabled.
log [14:36:15.025] [info][plugins-service] Plugin "auditTrail" is disabled.
log [14:36:15.084] [warning][config][deprecation] Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0.
^C[root@localhost bin]# ./kibana --allow-root &
[1] 2499
[root@localhost bin]# log [14:36:23.872] [info][plugins-service] Plugin "visTypeXy" is disabled.
log [14:36:23.878] [info][plugins-service] Plugin "auditTrail" is disabled.
log [14:36:23.960] [warning][config][deprecation] Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0.
log [14:36:24.133] [info][plugins-system] Setting up [96] plugins: [taskManager,licensing,globalSearch,globalSearchProviders,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetry,telemetryCollectionXpack,kibanaUsageCollection,securityOss,newsfeed,mapsLegacy,kibanaLegacy,translations,share,legacyExport,embeddable,uiActionsEnhanced,expressions,data,home,observability,cloud,console,consoleExtensions,apmOss,searchprofiler,painlessLab,grokdebugger,management,indexPatternManagement,advancedSettings,fileUpload,savedObjects,dashboard,visualizations,visTypeVega,visTypeTimelion,timelion,features,upgradeAssistant,security,snapshotRestore,enterpriseSearch,encryptedSavedObjects,ingestManager,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,dashboardMode,beatsManagement,transform,ingestPipelines,maps,licenseManagement,graph,dataEnhanced,visTypeTable,visTypeMarkdown,tileMap,regionMap,inputControlVis,visualize,esUiShared,charts,lens,visTypeVislib,visTypeTimeseries,rollup,visTypeTagcloud,visTypeMetric,watcher,discover,discoverEnhanced,savedObjectsManagement,spaces,reporting,lists,eventLog,actions,case,alerts,stackAlerts,triggersActionsUi,ml,securitySolution,infra,monitoring,logstash,apm,uptime,bfetch,canvas]
log [14:36:24.394] [warning][config][plugins][security] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml
log [14:36:24.395] [warning][config][plugins][security] Session cookies will be transmitted over insecure connections. This is not recommended.
log [14:36:24.433] [warning][config][encryptedSavedObjects][plugins] Generating a random key for xpack.encryptedSavedObjects.encryptionKey. To be able to decrypt encrypted saved objects attributes after restart, please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml
log [14:36:24.439] [warning][ingestManager][plugins] Fleet APIs are disabled due to the Encrypted Saved Objects plugin using an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml.
log [14:36:24.561] [warning][config][plugins][reporting] Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in kibana.yml
log [14:36:24.563] [warning][config][plugins][reporting] Chromium sandbox provides an additional layer of protection, but is not supported for Linux CentOS 8.3.2011 OS. Automatically setting 'xpack.reporting.capture.browser.chromium.disableSandbox: true'.
log [14:36:24.575] [warning][actions][actions][plugins] APIs are disabled due to the Encrypted Saved Objects plugin using an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml.
log [14:36:24.596] [warning][alerting][alerts][plugins][plugins] APIs are disabled due to the Encrypted Saved Objects plugin using an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml.
log [14:36:24.785] [info][monitoring][monitoring][plugins] config sourced from: production cluster
log [14:36:25.067] [info][savedobjects-service] Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations...
log [14:36:25.409] [info][savedobjects-service] Starting saved objects migrations
log [14:36:25.976] [info][plugins-system] Starting [96] plugins: [taskManager,licensing,globalSearch,globalSearchProviders,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetry,telemetryCollectionXpack,kibanaUsageCollection,securityOss,newsfeed,mapsLegacy,kibanaLegacy,translations,share,legacyExport,embeddable,uiActionsEnhanced,expressions,data,home,observability,cloud,console,consoleExtensions,apmOss,searchprofiler,painlessLab,grokdebugger,management,indexPatternManagement,advancedSettings,fileUpload,savedObjects,dashboard,visualizations,visTypeVega,visTypeTimelion,timelion,features,upgradeAssistant,security,snapshotRestore,enterpriseSearch,encryptedSavedObjects,ingestManager,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,dashboardMode,beatsManagement,transform,ingestPipelines,maps,licenseManagement,graph,dataEnhanced,visTypeTable,visTypeMarkdown,tileMap,regionMap,inputControlVis,visualize,esUiShared,charts,lens,visTypeVislib,visTypeTimeseries,rollup,visTypeTagcloud,visTypeMetric,watcher,discover,discoverEnhanced,savedObjectsManagement,spaces,reporting,lists,eventLog,actions,case,alerts,stackAlerts,triggersActionsUi,ml,securitySolution,infra,monitoring,logstash,apm,uptime,bfetch,canvas]
log [14:36:25.978] [info][plugins][taskManager][taskManager] TaskManager is identified by the Kibana UUID: dbda794a-41a8-4223-b66f-b4fed95353db
log [14:36:26.302] [info][crossClusterReplication][plugins] Your basic license does not support crossClusterReplication. Please upgrade your license.
log [14:36:26.339] [info][plugins][watcher] Your basic license does not support watcher. Please upgrade your license.
log [14:36:26.340] [info][kibana-monitoring][monitoring][monitoring][plugins] Starting monitoring stats collection
[2021-01-16T09:36:26,422][INFO ][o.e.c.m.MetadataIndexTemplateService] [localhost.localdomain] adding template [.management-beats] for index patterns [.management-beats]
log [14:36:27.290] [info][listening] Server running at http://10.0.0.137:5601
log [14:36:28.153] [info][server][Kibana][http] http server running at http://10.0.0.137:5601
log [14:36:28.157] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:Actions-actions_telemetry]: version conflict, document already exists (current version [4])
log [14:36:28.181] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:Lens-lens_telemetry]: version conflict, document already exists (current version [4])
log [14:36:28.182] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:Alerting-alerting_telemetry]: version conflict, document already exists (current version [4])
log [14:36:28.183] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:endpoint:user-artifact-packager:1.0.0]: version conflict, document already exists (current version [64])
log [14:36:28.184] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:apm-telemetry-task]: version conflict, document already exists (current version [4])
log [14:36:28.973] [warning][plugins][reporting] Enabling the Chromium sandbox provides an additional layer of protection.
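The log shows Kibana listening on http://10.0.0.137:5601, so if remote clients still can't connect, the usual suspect is the host firewall. A diagnostic sketch, assuming CentOS 8 with firewalld (standard firewall-cmd usage, not taken from the original post):
# On the Kibana host: check which ports are currently open
sudo firewall-cmd --list-ports
# Open Kibana's port and reload the rules
sudo firewall-cmd --permanent --add-port=5601/tcp
sudo firewall-cmd --reload
# From a remote machine: confirm the port now answers
curl -v http://10.0.0.137:5601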

Related

Not able to generate a Let's Encrypt Certificate

I am using Laravel Forge to manage my servers and websites, so SSL certificates via Let's Encrypt are also generated through Forge. Somehow one of my domains throws an error (see below).
This domain is running on a server which holds several other domains. The nginx configuration is exactly the same.
The application is a Laravel app running on Laravel Octane.
Error:
2022-06-13 10:41:26 URL:https://forge-certificates.laravel.com/le/1441847/1663342/ecdsa?
env=production [4653] -> "letsencrypt_script1655109686" [1] Cloning
into 'letsencrypt1655109686'... Note: switching to
'91cccc0c234e4decf0a19595fa19a6f306788032'.
You are in 'detached HEAD' state. You can look around, make
experimental changes and commit them, and you can discard any commits
you make in this state without impacting any branches by switching
back to a branch.
If you want to create a new branch to retain commits you create, you
may do so (now or later) by using -c with the switch command. Example:
git switch -c <new-branch-name>
Or undo this operation with:
git switch -
Turn off this advice by setting config variable advice.detachedHead to
false
HEAD is now at 91cccc0 ensure newline before new section in openssl.cnf
ERROR: Challenge is invalid! (returned: invalid) (result:
["type"] "http-01" ["status"] "invalid"
["error","type"] "urn:ietf:params:acme:error:connection"
["error","detail"] "111.222.333.444: Fetching
http://my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw:
Timeout during connect (likely firewall problem)"
["error","status"] 400
["error"] {"type":"urn:ietf:params:acme:error:connection","detail":"111.222.333.444:
Fetching
http://my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw:
Timeout during connect (likely firewall problem)","status":400}
["url"] "https://acme-v02.api.letsencrypt.org/acme/chall-v3/119151352296/awZDUg"
["token"] "_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw"
["validationRecord",0,"url"] "http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw"
["validationRecord",0,"hostname"] "www.my-domain.de"
["validationRecord",0,"port"] "80"
["validationRecord",0,"addressesResolved",0] "111.222.333.444"
["validationRecord",0,"addressesResolved",1] "2a01:4f8:141:333::84"
["validationRecord",0,"addressesResolved"] ["111.222.333.444","2a01:4f8:141:333::84"]
["validationRecord",0,"addressUsed"] "2a01:4f8:141:333::84"
["validationRecord",0] {"url":"http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"www.my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"2a01:4f8:141:333::84"} ["validationRecord",1,"url"] "http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw"
["validationRecord",1,"hostname"] "www.my-domain.de"
["validationRecord",1,"port"] "80"
["validationRecord",1,"addressesResolved",0] "111.222.333.444"
["validationRecord",1,"addressesResolved",1] "2a01:4f8:141:333::84"
["validationRecord",1,"addressesResolved"] ["111.222.333.444","2a01:4f8:141:333::84"]
["validationRecord",1,"addressUsed"] "111.222.333.444"
["validationRecord",1] {"url":"http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"www.my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"111.222.333.444"}
["validationRecord",2,"url"] "http://my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw"
["validationRecord",2,"hostname"] "my-domain.de"
["validationRecord",2,"port"] "80"
["validationRecord",2,"addressesResolved",0] "111.222.333.444"
["validationRecord",2,"addressesResolved",1] "2a01:4f8:141:333::84"
["validationRecord",2,"addressesResolved"] ["111.222.333.444","2a01:4f8:141:333::84"]
["validationRecord",2,"addressUsed"] "2a01:4f8:141:333::84"
["validationRecord",2] {"url":"http://my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"2a01:4f8:141:333::84"} ["validationRecord"] [{"url":"http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"www.my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"2a01:4f8:141:333::84"},{"url":"http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"www.my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"111.222.333.444"},{"url":"http://my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"2a01:4f8:141:333::84"}] ["validated"] "2022-06-13T08:41:47Z")
I've finally found the solution. Laravel Forge does not support IPv6 out of the box. So you either have to configure Forge to use IPv6 as well or remove all AAAA records pointing to the server managed by Laravel Forge.
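To check whether this applies to your domain, you can list its AAAA records directly; a quick sketch using dig (my-domain.de is the placeholder domain from the post):
dig +short AAAA my-domain.de
dig +short AAAA www.my-domain.de
# If these return IPv6 addresses but the server doesn't serve HTTP over IPv6,
# Let's Encrypt prefers the IPv6 address and the http-01 challenge times out.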

Issue with connecting Golang application on Cloud Run with Firestore

I'm trying to get all documents from Firestore using the function below.
The credentials are stored in an encrypted file in a GCP Cloud Source repository.
I decrypt the configuration in the Cloud Build trigger and set an ENV in the Dockerfile pointing to the file; I can confirm the file is present with RUN ls /app/credentials.json.
The error I get in the application log:
rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority"
This error is the result of an HTTPS failure where the certificate cannot be verified. The Alpine base image is missing a package that provides root certificates. Currently the Cloud Run quickstart is missing this for at least the Go language.
Assuming this is your problem, add the following to the final stage of your Dockerfile:
RUN apk add --no-cache ca-certificates
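For illustration, a minimal multi-stage Dockerfile with that line in place (a sketch; the image tags, paths, and binary name are assumptions, not from the original post):
# Build stage
FROM golang:alpine AS build
WORKDIR /app
COPY . .
RUN go build -o /app/server .

# Final stage
FROM alpine
# Root CA certificates so TLS handshakes (e.g., to Firestore) can be verified
RUN apk add --no-cache ca-certificates
COPY --from=build /app/server /app/server
CMD ["/app/server"]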

MPI A process or daemon was unable to complete a TCP connection

Open MPI: 4.0.1a
HostFile:
34bb0519eAAA
a2935f150BBB
I am on machine 34bb0519eAAA, and I can run ssh a2935f150BBB to connect to a2935f150BBB successfully. Likewise, from machine a2935f150BBB, ssh 34bb0519eAAA connects back to 34bb0519eAAA successfully.
But when I run the mpiexec command, I get this error message:
Warning: Permanently added '[XX.XX.XX.XX]:XX' (a2935f150BBB's IP address) to the list of known hosts.
--------------------------------------------------------------------------
A process or daemon was unable to complete a TCP connection
to another process:
Local host: a2935f150BBB
Remote host: 34bb0519eAAA
This is usually caused by a firewall on the remote host. Please
check that any firewall (e.g., iptables) has been disabled and try again.
--------------------------------------------------------------------------
ORTE was unable to reliably start one or more daemons.
This usually is caused by:
* not finding the required libraries and/or binaries on
one or more nodes. Please check your PATH and LD_LIBRARY_PATH
settings, or configure OMPI with --enable-orterun-prefix-by-default
* lack of authority to execute on one or more specified nodes.
Please verify your allocation and authorities.
* the inability to write startup files into /tmp (--tmpdir/orte_tmpdir_base).
Please check with your sys admin to determine the correct location to use.
* compilation of the orted with dynamic libraries when static are required
(e.g., on Cray). Please check your configure cmd line and consider using
one of the contrib/platform definitions for your system type.
* an inability to create a connection back to mpirun due to a
lack of common network interfaces and/or no route found between
them. Please check network connectivity (including firewalls
and network routing requirements).
I am very confused, because ssh works in both directions. How can the MPI connection fail?
Here is the ssh connection:
ssh a2935f150BBB
Warning: Permanently added '[XX.XX.XX.XX]:XX' to the list of known hosts.
Welcome to Ubuntu 18.04.1 LTS (XXXXXXXXXXXXXXXXXX)
Documentation: https://help.ubuntu.com
Management: https://landscape.canonical.com
Support: https://ubuntu.com/advantage
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.
To restore this content, you can run the 'unminimize' command.
Last login:XXXXXXXXXXXXX from XXXXXXXXXX
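One pattern worth noting when ssh works but MPI's TCP wiring fails: nodes with several network interfaces can pick one that isn't routable from the peer. A sketch of pinning Open MPI to a single known-good interface (the interface name eth0 and the program name are assumptions):
# Restrict both the out-of-band and BTL TCP layers to eth0
mpiexec --hostfile hostfile \
        --mca oob_tcp_if_include eth0 \
        --mca btl_tcp_if_include eth0 \
        -np 2 ./my_mpi_program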

OpenVAS: OSPD scanner can't be used as scanner in new task

After working out how to add an OSPD scanner, verify it, and so on,
I thought I could finally use it, but I get an error in the UI when adding it to a task.
In my case I run OpenVAS 9 on Debian 9 and I'm trying to include a w3af scanner, but I get the same issue with every OSP scanner I add.
My pip freeze output:
ospd==1.2.0
ospd-debsecan==1.2b1
ospd-nmap==1.0b1
ospd-w3af==1.0.0
Note that here is an example of w3af but the issue is the same for debsecan scanner and nmap scanner.
My openvas-check-setup output:
Step 1: Checking OpenVAS Scanner ...
OK: OpenVAS Scanner is present in version 5.1.1.
OK: redis-server is present in version v=3.2.6.
OK: scanner (kb_location setting) is configured properly using the redis-server socket: /tmp/redis.sock
OK: redis-server is running and listening on socket: /tmp/redis.sock.
OK: redis-server configuration is OK and redis-server is running.
OK: NVT collection in /usr/local/var/lib/openvas/plugins contains 47727 NVTs.
WARNING: Signature checking of NVTs is not enabled in OpenVAS Scanner.
SUGGEST: Enable signature checking (see http://www.openvas.org/trusted-nvts.html).
OK: The NVT cache in /usr/local/var/cache/openvas contains 47727 files for 47727 NVTs.
Step 2: Checking OpenVAS Manager ...
OK: OpenVAS Manager is present in version 7.0.2.
OK: OpenVAS Manager database found in /usr/local/var/lib/openvas/mgr/tasks.db.
OK: Access rights for the OpenVAS Manager database are correct.
OK: sqlite3 found, extended checks of the OpenVAS Manager installation enabled.
OK: OpenVAS Manager database is at revision 184.
OK: OpenVAS Manager expects database at revision 184.
OK: Database schema is up to date.
OK: OpenVAS Manager database contains information about 47727 NVTs.
OK: At least one user exists.
OK: OpenVAS SCAP database found in /usr/local/var/lib/openvas/scap-data/scap.db.
OK: OpenVAS CERT database found in /usr/local/var/lib/openvas/cert-data/cert.db.
OK: xsltproc found.
Step 3: Checking user configuration ...
WARNING: Your password policy is empty.
SUGGEST: Edit the /usr/local/etc/openvas/pwpolicy.conf file to set a password policy.
Step 4: Checking Greenbone Security Assistant (GSA) ...
OK: Greenbone Security Assistant is present in version 7.0.2.
OK: Your OpenVAS certificate infrastructure passed validation.
Step 5: Checking OpenVAS CLI ...
OK: OpenVAS CLI version 1.4.5.
Step 6: Checking Greenbone Security Desktop (GSD) ...
SKIP: Skipping check for Greenbone Security Desktop.
Step 7: Checking if OpenVAS services are up and running ...
OK: netstat found, extended checks of the OpenVAS services enabled.
OK: OpenVAS Scanner is running and listening on a Unix domain socket.
OK: OpenVAS Manager is running and listening on a Unix domain socket.
OK: Greenbone Security Assistant is listening on port 443, which is the default port.
Step 8: Checking nmap installation ...
WARNING: Your version of nmap is not fully supported: 7.40
SUGGEST: You should install nmap 5.51 if you plan to use the nmap NSE NVTs.
Step 10: Checking presence of optional tools ...
OK: pdflatex found.
WARNING: PDF generation failed, most likely due to missing LaTeX packages. The PDF report format will not work.
SUGGEST: Install required LaTeX packages.
OK: ssh-keygen found, LSC credential generation for GNU/Linux targets is likely to work.
OK: rpm found, LSC credential package generation for RPM based targets is likely to work.
OK: alien found, LSC credential package generation for DEB based targets is likely to work.
OK: nsis found, LSC credential package generation for Microsoft Windows targets is likely to work.
To create the scanner in openvas, I use:
openvasmd --create-scanner="w3af" --scanner-host=127.0.0.1 --scanner-port=1235 --scanner-type="OSP" \
--scanner-ca-pub=/usr/local/var/lib/openvas/CA/cacert.pem \
--scanner-key-pub=/usr/local/var/lib/openvas/CA/clientcert.pem \
--scanner-key-priv=/usr/local/var/lib/openvas/private/CA/clientkey.pem
To run ospd-w3af scanner, I use:
~# ospd-w3af -b 127.0.0.1 -p 1235 -k \
/usr/local/var/lib/openvas/private/CA/clientkey.pem -c \
/usr/local/var/lib/openvas/CA/clientcert.pem --ca-file \
/usr/local/var/lib/openvas/CA/cacert.pem -L DEBUG
When I verify the scanner with openvasmd --verify-scanner xxxxx, I get:
Scanner version: 2018.8.22.
Note: the scanner logs show the following for every verify I run. I don't know whether it's related, and I haven't found a way to fix it:
2018-10-15 14:27:47,413 ospd.ospd: DEBUG: New connection from 127.0.0.1:60078
2018-10-15 14:27:49,430 ospd.ospd: DEBUG: Error: ('The read operation timed out',)
2018-10-15 14:27:49,433 ospd.ospd: DEBUG: 127.0.0.1:60078: Connection closed
With the scanner verified, I want to create a task that uses it, but I can't save the task due to the error "Given scanner_type was invalid":
https://i.stack.imgur.com/fvIJd.png
At that moment there are zero connections to the chosen scanner, and I can't find anything useful in the logs (maybe I'm searching in the wrong place). I suspect the gsad UI is responsible, but I can't pin it down.
I don't know what to do, and if someone more expert than me (not very hard) could help, that would be great :)
Thanks in advance.
I solved this issue by creating a scan configuration for the OSPD scanner (I thought it didn't need one, since it imports them).
I then faced another issue with the ospd-w3af configuration: I couldn't create one, because it needs ospd 1.0.0 installed; I modified the dependencies a few days ago and it doesn't work with ospd 1.2.0.
Now I'm facing an issue where scans don't start properly. They stop at 1%.
Getting OpenVAS 9 running on a new install of Ubuntu 18 was a pain. Once I got past all my errors by creating files and ln -s symlinks for the redis-server socket connections, my tasks crapped out at 1%. My fix was to install sudo apt install libopenvas-dev; after that, scans work and check-setup worked. check-setup reported no scanner, but openvassd was running and openvasmd --verify-scanner (uuid) showed the scanner.

Debian Stretch MariaDB cannot authenticate from PHP application

I'm using a fresh installation of Debian Stretch, and installed PHP7 and MariaDB as recommended:
sudo apt-get install nginx mariadb-server mariadb-client php-mysqli php7.0-fpm php7.0-curl
Then using sudo mysql_secure_installation I followed the prompts to remove test users etc.
MariaDB seems to use unix_socket authentication (which is a new concept to me). I like how it restricts root access to sudoers and allows me to grant DB permissions to specific OS users.
However, I'd prefer to assign an individual username and password to each web application running on the server. They all run as the www-data user, and I see no reason to let them share databases.
So I created a user for my first PHP script and granted access to a new database:
CREATE USER 'telemetry'@'localhost' IDENTIFIED BY 'yeah_toast';
UPDATE mysql.user SET plugin='mysql_native_password' WHERE user='telemetry';
GRANT ALL PRIVILEGES ON telemetry TO 'telemetry'@'localhost';
FLUSH PRIVILEGES;
But it refuses to let me connect from the application:
[error] 19336#19336: *20 FastCGI sent in stderr: "PHP message: PHP Warning: mysqli::real_connect(): (HY000/1045): Access denied for user 'telemetry'@'localhost' (using password: YES) in /path/to/database.inc.php on line 30
The credentials I'm using from the application are as follows:
Host: localhost (also tried 127.0.0.1)
Username: telemetry
Password: yeah_toast
Database: telemetry
I tried deleting and re-creating the user in case it was a password problem, and creating the user as @'localhost' and @'%', but none of it seems to work. In fact, when I log in with the same credentials from the command line without sudo, it works fine (mysql -utelemetry -p).
Am I missing a MariaDB configuration step here?
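One detail worth double-checking: GRANT ... ON telemetry (without a qualifier) targets a table named telemetry in the current database, not the telemetry database as a whole. A sketch of the database-scoped grant, using the same placeholder names as above (note the .*):
CREATE USER 'telemetry'@'localhost' IDENTIFIED BY 'yeah_toast';
-- Grant on every table in the telemetry database
GRANT ALL PRIVILEGES ON telemetry.* TO 'telemetry'@'localhost';
FLUSH PRIVILEGES;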
