I have an Ubuntu server running OpenLDAP to connect to our phones.
A while back I set this up to use LDAPS with Let's Encrypt, which worked fine with most things until they recently made a change regarding the X3 cert. I am unable to install a recent enough version of certbot to run --preferred-chain "ISRG Root X1", and I can't use the snap version because the box is on LXC and won't run it.
The company has now bought a DigiCert wildcard certificate and would like this to be on the LDAP server, but I can't get it to load the config.
The original LDIF file I created to import is below, with the domain name changed.
dn: cn=config
add: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/letsencrypt/live/directory.mydomain.co.uk/fullchain.pem
-
add: olcTLSCertificateFile
olcTLSCertificateFile: /etc/letsencrypt/live/test-directory.mydomain.co.uk/cert.pem
-
add: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/letsencrypt/live/test-directory.mydomain.co.uk/privkey.pem
I have tried to change the values with modify commands, but it just won't have it, and I keep getting the following:
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry "cn=config"
ldap_modify: Inappropriate matching (18)
additional info: modify/add: olcTLSCACertificateFile: no equality matching rule
Any advice here would be great, thanks.
I propose checking the contents of the LDIF for unintended special characters, e.g.:
sudo cat -tve *.ldif
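If the file itself is clean, the error text gives another hint: the olcTLS* attributes have no equality matching rule, so a modify/add (or modify/delete of a specific value) can be refused once a value is already present, while modify/replace normally goes through. A minimal sketch of the replace variant; the DigiCert file paths here are placeholders to adjust:
dn: cn=config
changetype: modify
replace: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/ssl/certs/digicert-chain.pem
-
replace: olcTLSCertificateFile
olcTLSCertificateFile: /etc/ssl/certs/wildcard.mydomain.co.uk.pem
-
replace: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/ssl/private/wildcard.mydomain.co.uk.key
Applied over the local socket, matching the SASL/EXTERNAL output in the question:
sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f replace-tls.ldif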
I'm trying to use the grafana-loki output plugin in fluent-bit, but it seems impossible to configure with TLS.
I had a working configuration running with the loki plugin like this:
[OUTPUT]
Name loki
Match *
Host my-collector-url-for-loki
Port 443
Http_User m-user
Http_Passwd some-token-value
Labels job=fluentbit
auto_kubernetes_labels on
Tls On
Tls.verify On
The problem with this output plugin was that the logs were not showing correctly in Grafana. I think a filter or parser needs to be configured for it, or maybe the plugin is just meant for Loki rather than Grafana/Loki; I just don't know, and I got tired of trying to figure out why. So I switched to the grafana-loki plugin, and the logs looked perfect in Grafana, but I only had it working without authentication.
This is my setup with the grafana-loki output plugin:
[Output]
Name grafana-loki
Match *
Url https://url-to-my-logs-collector
TenantID ""
BatchWait 1
BatchSize 1048576
Labels {job="test-fluent-bit"}
RemoveKeys kubernetes,stream
AutoKubernetesLabels false
LabelMapPath /fluent-bit/etc/labelmap.json
LineFormat json
LogLevel warn
# everything prior to this line is working successfully
# trying to set authentication here "this part doesn't work"
Tls On
Tls.verify On
Http_User m-user
Http_Passwd some-token-value
The problem with this setup is that I always get a 403 Forbidden HTTP status. I'm having trouble figuring out how to set authentication on this plugin. Does anyone have a working configuration for this type of setup?
Authentication worked for me with this plugin, using a Url like the one below:
[Output]
Name grafana-loki
Match *
Url https://${user_loki}:${pass_loki}@url-to-my-logs-collector
BatchWait 1s
BatchSize 102400
TenantID ""
(...)
The TLS, http.user and http.passwd options are, as far as I could tell, not supported by this plugin.
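As a side note, to rule out credential problems independently of fluent-bit, the Loki push endpoint can be tested directly with curl. This is only a sketch reusing the placeholder URL, user and token from the configs above (the timestamp trick needs GNU date, i.e. Linux):
curl -s -o /dev/null -w "%{http_code}\n" \
  -u "m-user:some-token-value" \
  -H "Content-Type: application/json" \
  -X POST "https://url-to-my-logs-collector/loki/api/v1/push" \
  -d '{"streams":[{"stream":{"job":"curl-test"},"values":[["'"$(date +%s%N)"'","hello from curl"]]}]}'
A 204 here means the credentials are fine; a 401/403 means the problem is the authentication itself rather than the fluent-bit config.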
System:
MaxScale 2.5.9
Ubuntu 20.04
In order to access the Web AdminGUI, my maxscale.cnf file looks like this:
[maxscale]
threads=auto
admin_host=0.0.0.0
admin_secure_gui=1
admin_auth=1
admin_enabled=1
admin_gui=1
admin_ssl_key=/etc/ssl/certs/maxscale-key.pem
admin_ssl_cert=/etc/ssl/certs/maxscale-cert.pem
admin_ssl_ca_cert=/etc/ssl/certs/ca-certificates.crt
[...all other configuration..]
With this configuration I can access the Web AdminGUI on port 8989 from the internal IP address (not 127.0.0.1) in a browser.
The SSL key/certs are self-signed.
BUT
When using the command line like:
maxctrl list servers
I get the following error:
Error: Error: socket hang up
When I remove or comment out the lines with the admin_ssl_XXX parameters and restart maxscale, command line works again, but of course the Web-AdminGUI does not.
I tried various ways of creating the SSL certificates (including the one listed on the mariadb.com website:
https://mariadb.com/docs/security/encryption/in-transit/create-self-signed-certificates-keys-openssl/#create-self-signed-certificates-keys-openssl),
but the issue remains.
No errors in the maxscale.log whatsoever.
What is the best way to debug this issue?
Or do you have by any chance the right answer at hand?
YOUR help is greatly appreciated!
BR. Martin
You should use maxctrl --secure to encrypt the connections used by it.
Since you are using self-signed certificates, you have to also specify the CA certificate with --tls-ca-cert=/etc/ssl/certs/ca-certificates.crt if it's not installed in the system certificate store.
In addition, you probably need to use --tls-verify-server-cert=false to disable any warnings about self-signed certificates.
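Putting those together, with the CA path taken from the maxscale.cnf above, the call would look something like this:
maxctrl --secure --tls-ca-cert=/etc/ssl/certs/ca-certificates.crt --tls-verify-server-cert=false list servers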
My institute recently installed a new proxy server on our network. I am trying to configure my Cygwin environment to be able to run wget and download data from a remote repository.
Browsing the internet I have found two different solutions to my problem, but neither of them seems to work in my case.
The first one I tried was to follow these instructions, so in Cygwin:
cd /cygdrive/c/cygwin64/etc/
nano wgetrc
at the end of the file, I added:
use_proxy = on
http_proxy=http://username:password@my.proxy.ip:my.port/
https_proxy=https://username:password@my.proxy.ip:my.port/
ftp_proxy=http://username:password@my.proxy.ip:my.port/
(of course, using my user and password)
The second approach was what was suggested by this SO post, so in my Cygwin environment:
export http_proxy=http://username:password@my.proxy.ip:my.port/
export https_proxy=https://username:password@my.proxy.ip:my.port/
export ftp_proxy=http://username:password@my.proxy.ip:my.port/
In both cases, when I test wget, I get the following:
$ wget http://www.google.com
--2020-01-30 12:12:22-- http://www.google.com/
Resolving my.proxy.ip (my.proxy.ip)... 10.1XX.XXX.XX
Connecting to my.proxy.ip (my.proxy.ip)|10.1XX.XXX.XX|:8XXX... connected.
Proxy request sent, awaiting response... 407 Proxy Authentication Required
2020-01-30 12:12:22 ERROR 407: Proxy Authentication Required.
It looks as if my username and password are not OK, but I checked them in my browser and my credentials work just fine.
Any idea what this could be due to?
This problem was solved thanks to the suggestion of a user of the AskUbuntu community.
Basically, instead of editing the global configuration file wgetrc, I should have created a new .wgetrc file with my proxy configuration in my Cygwin home directory.
In summary:
Step 1 - Create a .wgetrc file;
nano ~/.wgetrc
Step 2 - Add the proxy info to this file:
use_proxy=on
http_proxy=http://my.proxy.ip:my.port
https_proxy=https://my.proxy.ip:my.port
ftp_proxy=http://my.proxy.ip:my.port
proxy_user=username
proxy_password=password
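Step 3 - (optional) re-run the test from the question; with the credentials now read from ~/.wgetrc, the 407 should be gone. A quick check:
wget -q --spider http://www.google.com && echo "proxy auth OK"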
I'm using Vagrant (VVV actually) to run local WordPress installs. I want to test different behaviors for different GEOs on my local machine instead of uploading it to the server every time, which is annoying.
So, I've tried to install the GeoIP nginx module on the local machine following this guide: https://piwik.org/faq/how-to/faq_166/ (and a bit more googling, but that doesn't matter at the moment).
When I run ./configure, the following shows up:
checking for GeoIP library ... found
checking for GeoIP IPv6 support ... found
I've also set the .dat files in my conf file and set the $_SERVER (fastcgi_param) parameters, so they show up when I print the $_SERVER var.
But those GeoIP vars are empty. I'm not sure about the reason, but two things are bothering me. First, when I run nginx -V in the terminal, the --with-http_geoip_module argument is missing. Second, could it actually work if the REMOTE_ADDR (IP) is not my real IP (192.168.1.50, for example)?
nginx is a bit strange to me, so sorry if something isn't exact.
--
Operating system - macOS, nginx version - 1.3.15, running with VVV (vagrant box)
If there is a reverse proxy in front of your nginx, use geoip_proxy to set the IPs whose X-Forwarded-For header can be trusted.
You can also use this without actually having a reverse proxy while developing: add your local IP to the geoip_proxy list and set the X-Forwarded-For header to your public IP in your browser (use a plugin like Modify Headers).
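For what it's worth, here is a minimal sketch of how those pieces fit together in nginx.conf; the database paths and the 192.168.1.0/24 range are assumptions based on the question, so adjust them:
# http block: load the databases and trust the dev machine's LAN range
geoip_country /usr/share/GeoIP/GeoIP.dat;
geoip_city    /usr/share/GeoIP/GeoLiteCity.dat;
geoip_proxy   192.168.1.0/24;  # X-Forwarded-For from these IPs is trusted
# server/location block: expose the values to PHP
fastcgi_param GEOIP_COUNTRY_CODE $geoip_country_code;
fastcgi_param GEOIP_CITY         $geoip_city;
Keep in mind this only takes effect if nginx -V actually lists --with-http_geoip_module; without the module compiled in, nginx will reject these directives at startup.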
I am working on Facebook Messenger.
A Facebook app only accepts one URL for the webhook, but ngrok generates a new URL every time. Now I am unable to test my app because the webhook URL has changed.
UPDATE May 2020
Serveo is up and running again! No installation, no signup!
All you need to do is to run this:
ssh -R <unique subdomain>:80:<your local host>:<your local port> serveo.net
like
ssh -R youruniquesubdomain:80:localhost:8000 serveo.net
UPDATE January 2020
Since there are some issues with Serveo and localtunnel, I want to share with you another free SSH-based self-hosting service: Localhost.run.
Unfortunately, it does not provide unique subdomains, but it is SSH-based, so you do not have to install additional applications. Still waiting for Serveo to come back.
UPDATE April 2018
I've found Serveo just now! And it is totally incredible!
UPDATE November 2017
Probably it is not the best option for you, but I started using localtunnel instead of ngrok.
The installation and run flow is very simple:
npm install -g localtunnel
lt --port <your localhost port> --subdomain youruniquesubdomain
Then I can go to my http://youruniquesubdomain.localtunnel.me
That's it!
No more free subdomain support from ngrok... I get an error as below:
Tunnel session failed: Only paid plans may bind custom subdomains.
Failed to bind the custom subdomain 'arvindpattartestfb.ngrok.io' for the account 'arvccccc'.
This account is on the 'Free' plan.
Upgrade to a paid plan at: https://dashboard.ngrok.com/billing/plan
ERR_NGROK_313
You need to set up an auth token. You can find it here: https://dashboard.ngrok.com/auth. (This used to work with the free version with no need to pay; it is now a paid feature, see ngrok pricing.)
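Depending on the installed ngrok version, the token from that page is registered with one of these commands (the token value is a placeholder):
ngrok authtoken <your_auth_token>            # ngrok v2
ngrok config add-authtoken <your_auth_token> # ngrok v3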
Then you can use it like this:
ngrok http 80 --subdomain yoursubdomain
Neither localtunnel.me nor Serveo is working for me right now, so I created a temporary solution that works for some use cases, including mine (react-native local development): using the ngrok npm package, one can save the generated ngrok URL into a JSON file, and that file can be read by any other app.
First make sure to install ngrok using npm install ngrok, then use this node script:
const ngrok = require('ngrok');
const fs = require('fs').promises;

(async function() {
  // connect() resolves to the tunnel's public URL, e.g. https://abcd1234.ngrok.io,
  // so there is no need to query the local api/tunnels endpoint afterwards
  const url = await ngrok.connect(3000);
  // persist it so other apps can pick it up
  await fs.writeFile("config.json", JSON.stringify({ domain: url }));
  console.log("saved " + url);
})();
Then from your app you can read it using code like this:
const backend = require('./config.json').domain;
For a DHIS2 local installation, I did this in the terminal on an Ubuntu server.
Make sure your web app is running on the specified port. Mine was on 8080.
ssh -R dani.serveo.net:80:localhost:8080 serveo.net
The beauty of serveo.net is that you can reuse the same hostname prefix before serveo.net as many times as you want, even after a power cut or an internet disconnection.
Staqlab Tunnel provides a domain for free. It works great but needs a binary to be downloaded from their website. I have been using this service for months without any hassle.
In 2022 (almost 2023), pagekite.me works for me.
It is very similar to ngrok and requires the installation of pagekite.py (and, obviously, Python).
After installation, clicking on pagekite.py opens the pagekite shell.
Run the command: 8080 subdomain.pagekite.me
I noticed that no one mentioned how to get static ngrok URLs, which is what the question was mainly about.
A way to do it is to edit the ngrok.yml file, which is located at
Linux: "~/.config/ngrok/ngrok.yml"
MacOS (Darwin): "~/Library/Application Support/ngrok/ngrok.yml"
Windows: "%HOMEPATH%\AppData\Local\ngrok\ngrok.yml"
You can have content such as:
version: "2"
authtoken: valid_auth_token
tunnels:
first-app:
addr: 3000
proto: http
hostname: yourfixedngrok_id1.ngrok.io
second-app:
addr: 8000
proto: http
hostname: yourfixedngrok_id2.ngrok.io
This will let you expose multiple ports and have a persistent URL for each of them, based on the value you set for hostname.
After that, you run your ngrok using this command:
ngrok start --all
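If you only need one of the tunnels, you can pass its name from the config file instead of --all; with the sample above that would be:
ngrok start first-app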
Though it's not a solution, take it as a workaround: I had the same problem while testing. What I did was keep ngrok running with my HTTP port so that my ngrok URL doesn't change; I just frequently change and restart my server for testing and debugging.