Issues getting Kerberos/Windows AD login to work for a web service - nginx

I have been struggling with this for quite a while now, and I can't get it to work.
Here is the setup:
I have an nginx webserver serving a Django app at mywebapp.k8s.dal1.mycompany.io.
It has the SPNEGO plugin compiled in, and I have the following endpoint in my config:
location /ad-login {
    uwsgi_pass django;
    include /usr/lib/mycompany/lib/wsgi/uwsgi_params;
    auth_gss on;
    auth_gss_realm BURNERDEV1.DAL1.MYCOMPANY.IO;
    auth_gss_service_name HTTP/mywebapp.k8s.dal1.mycompany.io;
    auth_gss_allow_basic_fallback off;
}
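(For reference, the SPNEGO module also depends on the container's Kerberos client configuration. A minimal /etc/krb5.conf sketch for this realm, assuming the DC doubles as the KDC; adjust to your environment:)
[libdefaults]
    default_realm = BURNERDEV1.DAL1.MYCOMPANY.IO
[realms]
    BURNERDEV1.DAL1.MYCOMPANY.IO = {
        kdc = burnerdev1.dal1.mycompany.io
    }
[domain_realm]
    .dal1.mycompany.io = BURNERDEV1.DAL1.MYCOMPANY.IO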
My AD Domain controller is at burnerdev1.dal1.mycompany.io and I have the following users configured:
rep_movsd
portal
I run the following commands on the DC server in an Admin prompt:
ktpass -out krb5.keytab -mapUser portal@BURNERDEV1.DAL1.MYCOMPANY.IO +rndPass -mapOp set +DumpSalt -crypto AES256-SHA1 -ptype KRB5_NT_PRINCIPAL -princ HTTP/mywebapp.k8s.dal1.mycompany.io@BURNERDEV1.DAL1.MYCOMPANY.IO
setspn -A HTTP/mywebapp.k8s.dal1.mycompany.io@BURNERDEV1.DAL1.MYCOMPANY.IO portal
C:\Users\myself\Documents\keytab>ktpass -out krb5.keytab -mapUser portal@BURNERDEV1.DAL1.MYCOMPANY.IO +rndPass -mapOp set +DumpSalt -crypto AES256-SHA1 -ptype KRB5_NT_PRINCIPAL -princ HTTP/mywebapp.k8s.dal1.mycompany.io@BURNERDEV1.DAL1.MYCOMPANY.IO
Targeting domain controller: dal1devdc1.burnerdev1.dal1.mycompany.io
Using legacy password setting method
Failed to set property 'servicePrincipalName' to 'HTTP/mywebapp.k8s.dal1.mycompany.io' on Dn 'CN=portal,CN=Users,DC=burnerdev1,DC=dal1,DC=mycompany,DC=io': 0x13.
WARNING: Unable to set SPN mapping data.
If portal already has an SPN mapping installed for HTTP/mywebapp.k8s.dal1.mycompany.io, this is no cause for concern.
Building salt with principalname HTTP/mywebapp.k8s.dal1.mycompany.io and domain BURNERDEV1.DAL1.MYCOMPANY.IO (encryption type 18)...
Hashing password with salt "BURNERDEV1.DAL1.MYCOMPANY.IOHTTPmywebapp.k8s.dal1.mycompany.io".
Key created.
Output keytab to krb5.keytab:
Keytab version: 0x502
keysize 110 HTTP/mywebapp.k8s.dal1.mycompany.io@BURNERDEV1.DAL1.MYCOMPANY.IO ptype 1 (KRB5_NT_PRINCIPAL) vno 3 etype 0x12 (AES256-SHA1) keylength 32 (0x632d9ca3356374e9de490ec2f7718f9fb652b20da40bd212a808db4c46a72bc5)
C:\Users\myself\Documents\keytab>setspn -A HTTP/mywebapp.k8s.dal1.mycompany.io@BURNERDEV1.DAL1.MYCOMPANY.IO portal
Checking domain DC=burnerdev1,DC=dal1,DC=mycompany,DC=io
Registering ServicePrincipalNames for CN=portal,CN=Users,DC=burnerdev1,DC=dal1,DC=mycompany,DC=io
HTTP/mywebapp.k8s.dal1.mycompany.io@BURNERDEV1.DAL1.MYCOMPANY.IO
Updated object
C:\Users\myself\Documents\keytab>
Now in the "Active Directory Users and Computers" section, I right-clicked the user and selected "Properties".
Then on the "Delegation" tab I set "Trust this user for delegation to any service (Kerberos only)".
Next I copy the krb5.keytab file to my webserver and restart the nginx container
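(For reference, a sketch of that deploy step, with placeholder host and container names:)
scp krb5.keytab webhost:/etc/krb5.keytab   # placeholder host
ssh webhost 'chmod 600 /etc/krb5.keytab'   # ensure the nginx worker user can read it in your setup
docker restart nginx                       # placeholder container name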
On a Windows workstation that is part of the domain, I log on as rep_movsd. When I run klist:
C:\Users\rep_movsd>klist
Current LogonId is 0:0x208d7
Cached Tickets: (2)
#0> Client: rep_movsd @ BURNERDEV1.DAL1.MYCOMPANY.IO
Server: krbtgt/BURNERDEV1.DAL1.MYCOMPANY.IO @ BURNERDEV1.DAL1.MYCOMPANY.IO
KerbTicket Encryption Type: AES-256-CTS-HMAC-SHA1-96
Ticket Flags 0x40e10000 -> forwardable renewable initial pre_authent name_canonicalize
Start Time: 7/16/2020 2:05:51 (local)
End Time: 7/16/2020 12:05:51 (local)
Renew Time: 7/23/2020 2:05:51 (local)
Session Key Type: AES-256-CTS-HMAC-SHA1-96
#1> Client: rep_movsd @ BURNERDEV1.DAL1.MYCOMPANY.IO
Server: HTTP/mywebapp.k8s.dal1.mycompany.io @ BURNERDEV1.DAL1.MYCOMPANY.IO
KerbTicket Encryption Type: AES-256-CTS-HMAC-SHA1-96
Ticket Flags 0x40a10000 -> forwardable renewable pre_authent name_canonicalize
Start Time: 7/16/2020 2:06:01 (local)
End Time: 7/16/2020 12:05:51 (local)
Renew Time: 7/23/2020 2:05:51 (local)
Session Key Type: AES-256-CTS-HMAC-SHA1-96
I set up Firefox to do SPNEGO authentication.
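(For reference, the relevant about:config preference; the value is the domain suffix to trust for Negotiate auth:)
network.negotiate-auth.trusted-uris = .dal1.mycompany.io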
Then I hit mywebapp.k8s.dal1.mycompany.io/ad-login and I get a 403 Forbidden error
The nginx server debug log shows:
[debug] 16#16: *195 Client sent a reasonable Negotiate header
[debug] 16#16: *195 GSSAPI authorizing
[debug] 16#16: *195 Use keytab /etc/krb5.keytab
[debug] 16#16: *195 Using service principal: HTTP/mywebapp.k8s.dal1.mycompany.io@BURNERDEV1.DAL1.MYCOMPANY.IO
[debug] 16#16: *195 my_gss_name HTTP/mywebapp.k8s.dal1.mycompany.io@BURNERDEV1.DAL1.MYCOMPANY.IO
[debug] 16#16: *195 gss_accept_sec_context() failed: Cannot decrypt ticket for HTTP/mywebapp.k8s.dal1.mycompany.io@BURNERDEV1.DAL1.MYCOMPANY.IO using keytab key for HTTP/mywebapp.k8s.dal1.mycompany.io@BURNERDEV1.DAL1.MYCOMPANY.IO:
[debug] 16#16: *195 GSSAPI failed
[debug] 16#16: *195 http finalize request: 403, "/ad-login?" a:1, c:1
[debug] 16#16: *195 http special response: 403, "/ad-login?"
[debug] 16#16: *195 http set discard body
[debug] 16#16: *195 charset: "" > "utf-8"
[debug] 16#16: *195 HTTP/1.1 403 Forbidden
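(When gss_accept_sec_context() reports "Cannot decrypt ticket", the usual cause is a key version (kvno) or password mismatch between the keytab and AD. A quick sanity check on the web server, as a sketch:)
klist -kte /etc/krb5.keytab   # list keytab entries with kvno and enctype
kinit -kt /etc/krb5.keytab HTTP/mywebapp.k8s.dal1.mycompany.io@BURNERDEV1.DAL1.MYCOMPANY.IO   # succeeds only if the keytab key matches what AD has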
BTW, while messing around earlier I found that if I set a fixed password for the "portal" user with ktpass and logged in as that account on the workstation, the login would succeed.
I was under the mistaken impression that I'd need to create a new keytab for every user and combine all of them.
Any help is greatly appreciated - I've read so many conflicting docs that they've only confused me further, and I've been losing sleep over this.
Thanks in advance!

I've read your problem statement carefully, and I think the issue will be solved if you follow the steps below.
1. On the DC server where you are creating the keytab, UAC must be temporarily disabled, and the user creating the keytab must be a member of the Domain Admins group.
2. Ensure the SPN is not a duplicate, then remove the SPN from the Active Directory user account portal. This must be done before creating a new keytab using the same SPN against the same account. The below command is a one-liner; word-wrapping makes it look like two lines.
setspn -d HTTP/mywebapp.k8s.dal1.mycompany.io@BURNERDEV1.DAL1.MYCOMPANY.IO portal
3. Re-create the keytab exactly as you did before.
4. You do not need to run setspn -A HTTP/mywebapp.k8s.dal1.mycompany.io@BURNERDEV1.DAL1.MYCOMPANY.IO portal, because the SPN was already set on the Active Directory user account by the ktpass command in step 3.
5. Replace the old keytab with the new keytab.
6. Restart the nginx webserver service.
7. Clear the browser cache AND the Kerberos ticket cache (klist purge).
Then try it again. You must do all these steps, including the final step 7. Do not skip any.
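(For step 7, on the Windows workstation that is, for reference:)
C:\Users\rep_movsd>klist purge
C:\Users\rep_movsd>klist get HTTP/mywebapp.k8s.dal1.mycompany.io
(klist get is optional; it just requests a fresh service ticket so you can confirm one is issued against the new key.)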
Your service account is named portal. A hash of its password is stored in both Active Directory and the keytab; the same hash is in both locations. The keytab on the nginx server is used to decrypt inbound Kerberos service tickets to determine which user is attempting to access the web app. More specifically, the GSS authentication does all the work; it uses the keytab to unscramble the encrypted service tickets. The user rep_movsd does not have the service account credentials. It is part of the Active Directory domain, and when accessing the nginx web server it gets its own Kerberos service ticket; its identity is proven to the web server simply by being in possession of a service ticket that can be decrypted with the keytab. If it weren't part of the BURNERDEV1.DAL1.MYCOMPANY.IO domain, or had an expired password, or was a disabled account, it would not be able to get a service ticket, and thus could not prove its identity and would fail authentication.
If you have time, please see my TechNet Wiki article on keytab creation and the logic behind it to help you better understand this complex subject.

Related

Airflow SambaHook authentication issue with SpnegoError and Kerberos?

I am trying to connect to a Samba server in Airflow using the SambaHook class. The Samba server requires Kerberos authentication.
I have already defined a Samba connection in Airflow using the following parameters:
Host, Schema, and Extra {"auth": "kerberos"}
airflow connections add "samba_repo" --conn-type "samba" --conn-host "myhost.mywork.com" --conn-schema "fld" --conn-extra '{"auth": "kerberos"}'
I'm trying to use the SambaHook class in Airflow to connect to a Samba server. When I run my code, I get the following error:
Failed to authenticate with server: SpnegoError (1): SpnegoError (16): Operation not supported or available, Context: Retrieving NTLM store without NTLM_USER_FILE set to a filepath, Context: Unable to negotiate common mechanism
However, when I use smbclient to connect to the same server using Kerberos authentication from the Docker terminal, it works fine with the command: smbclient //'myhost'/'fld' -c 'ls "\workpath\*" ' -k
What I tried: I set up a connection to the Samba server in Airflow using the SambaHook class and tried to use the listdir method to retrieve a list of files in a specific directory.
What I expected to happen: I expected the listdir method to successfully retrieve a list of files in the specified directory from the Samba server.
What actually resulted: Instead, I encountered the following error message:
Failed to authenticate with server: SpnegoError (1): SpnegoError (16): Operation not supported or available, Context: Retrieving NTLM store without NTLM_USER_FILE set to a filepath, Context: Unable to negotiate common mechanism
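(The "Retrieving NTLM store without NTLM_USER_FILE set" context suggests the library fell back to NTLM because no Kerberos credential was visible to the worker process. As a sketch, with placeholder keytab path, principal, and cache location, one way to rule that out is to give the worker a valid ticket before the task runs:)
export KRB5CCNAME=/tmp/krb5cc_airflow                  # optional: pin the cache location (placeholder)
kinit -kt /path/to/airflow.keytab airflow@MYWORK.COM   # keytab and principal are placeholders
klist                                                  # confirm a valid TGT is present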

How to log TLS version and cipher with mail proxy

Is there any way for the nginx mail module to log the TLS version and cipher used, in any kind of log file? Currently I see a log line like this:
2022/10/03 07:34:13 [info] 14#14: *272 client logged in, client: X.X.X.X using starttls, server: 0.0.0.0:587, login: "jan@kowalski.pl", upstream: 192.168.203.5:10025
According to the documentation, only the error log is available for the mail module and there is no option to configure its format, but maybe I'm missing something.
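(For reference, the line above is emitted at info severity, so the closest stock nginx gets is a dedicated error_log at info level inside the mail context; the format itself is not configurable. A sketch, with a placeholder path:)
mail {
    error_log /var/log/nginx/mail.log info;   # captures the "client logged in" lines shown above
}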

Kibana Server not allowing remote access

I've edited my kibana.yml config file to allow remote access, using the DHCP IP address on my router from a bridged connection on my adapter.
It does not seem to establish a connection on the assigned IP and port.
[root@localhost bin]# ./kibana --allow-root
^C^C log [14:36:15.000] [info][plugins-service] Plugin "visTypeXy" is disabled.
log [14:36:15.025] [info][plugins-service] Plugin "auditTrail" is disabled.
log [14:36:15.084] [warning][config][deprecation] Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0."
^C[root@localhost bin]# ./kibana --allow-root &
[1] 2499
[root@localhost bin]# log [14:36:23.872] [info][plugins-service] Plugin "visTypeXy" is disabled.
log [14:36:23.878] [info][plugins-service] Plugin "auditTrail" is disabled.
log [14:36:23.960] [warning][config][deprecation] Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0."
log [14:36:24.133] [info][plugins-system] Setting up [96] plugins: [taskManager,licensing,globalSearch,globalSearchProviders,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetry,telemetryCollectionXpack,kibanaUsageCollection,securityOss,newsfeed,mapsLegacy,kibanaLegacy,translations,share,legacyExport,embeddable,uiActionsEnhanced,expressions,data,home,observability,cloud,console,consoleExtensions,apmOss,searchprofiler,painlessLab,grokdebugger,management,indexPatternManagement,advancedSettings,fileUpload,savedObjects,dashboard,visualizations,visTypeVega,visTypeTimelion,timelion,features,upgradeAssistant,security,snapshotRestore,enterpriseSearch,encryptedSavedObjects,ingestManager,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,dashboardMode,beatsManagement,transform,ingestPipelines,maps,licenseManagement,graph,dataEnhanced,visTypeTable,visTypeMarkdown,tileMap,regionMap,inputControlVis,visualize,esUiShared,charts,lens,visTypeVislib,visTypeTimeseries,rollup,visTypeTagcloud,visTypeMetric,watcher,discover,discoverEnhanced,savedObjectsManagement,spaces,reporting,lists,eventLog,actions,case,alerts,stackAlerts,triggersActionsUi,ml,securitySolution,infra,monitoring,logstash,apm,uptime,bfetch,canvas]
log [14:36:24.394] [warning][config][plugins][security] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml
log [14:36:24.395] [warning][config][plugins][security] Session cookies will be transmitted over insecure connections. This is not recommended.
log [14:36:24.433] [warning][config][encryptedSavedObjects][plugins] Generating a random key for xpack.encryptedSavedObjects.encryptionKey. To be able to decrypt encrypted saved objects attributes after restart, please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml
log [14:36:24.439] [warning][ingestManager][plugins] Fleet APIs are disabled due to the Encrypted Saved Objects plugin using an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml.
log [14:36:24.561] [warning][config][plugins][reporting] Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in kibana.yml
log [14:36:24.563] [warning][config][plugins][reporting] Chromium sandbox provides an additional layer of protection, but is not supported for Linux CentOS 8.3.2011 OS. Automatically setting 'xpack.reporting.capture.browser.chromium.disableSandbox: true'.
log [14:36:24.575] [warning][actions][actions][plugins] APIs are disabled due to the Encrypted Saved Objects plugin using an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml.
log [14:36:24.596] [warning][alerting][alerts][plugins][plugins] APIs are disabled due to the Encrypted Saved Objects plugin using an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml.
log [14:36:24.785] [info][monitoring][monitoring][plugins] config sourced from: production cluster
log [14:36:25.067] [info][savedobjects-service] Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations...
log [14:36:25.409] [info][savedobjects-service] Starting saved objects migrations
log [14:36:25.976] [info][plugins-system] Starting [96] plugins: [taskManager,licensing,globalSearch,globalSearchProviders,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetry,telemetryCollectionXpack,kibanaUsageCollection,securityOss,newsfeed,mapsLegacy,kibanaLegacy,translations,share,legacyExport,embeddable,uiActionsEnhanced,expressions,data,home,observability,cloud,console,consoleExtensions,apmOss,searchprofiler,painlessLab,grokdebugger,management,indexPatternManagement,advancedSettings,fileUpload,savedObjects,dashboard,visualizations,visTypeVega,visTypeTimelion,timelion,features,upgradeAssistant,security,snapshotRestore,enterpriseSearch,encryptedSavedObjects,ingestManager,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,dashboardMode,beatsManagement,transform,ingestPipelines,maps,licenseManagement,graph,dataEnhanced,visTypeTable,visTypeMarkdown,tileMap,regionMap,inputControlVis,visualize,esUiShared,charts,lens,visTypeVislib,visTypeTimeseries,rollup,visTypeTagcloud,visTypeMetric,watcher,discover,discoverEnhanced,savedObjectsManagement,spaces,reporting,lists,eventLog,actions,case,alerts,stackAlerts,triggersActionsUi,ml,securitySolution,infra,monitoring,logstash,apm,uptime,bfetch,canvas]
log [14:36:25.978] [info][plugins][taskManager][taskManager] TaskManager is identified by the Kibana UUID: dbda794a-41a8-4223-b66f-b4fed95353db
log [14:36:26.302] [info][crossClusterReplication][plugins] Your basic license does not support crossClusterReplication. Please upgrade your license.
log [14:36:26.339] [info][plugins][watcher] Your basic license does not support watcher. Please upgrade your license.
log [14:36:26.340] [info][kibana-monitoring][monitoring][monitoring][plugins] Starting monitoring stats collection
[2021-01-16T09:36:26,422][INFO ][o.e.c.m.MetadataIndexTemplateService] [localhost.localdomain] adding template [.management-beats] for index patterns [.management-beats]
log [14:36:27.290] [info][listening] Server running at http://10.0.0.137:5601
log [14:36:28.153] [info][server][Kibana][http] http server running at http://10.0.0.137:5601
log [14:36:28.157] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:Actions-actions_telemetry]: version conflict, document already exists (current version [4])
log [14:36:28.181] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:Lens-lens_telemetry]: version conflict, document already exists (current version [4])
log [14:36:28.182] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:Alerting-alerting_telemetry]: version conflict, document already exists (current version [4])
log [14:36:28.183] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:endpoint:user-artifact-packager:1.0.0]: version conflict, document already exists (current version [64])
log [14:36:28.184] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:apm-telemetry-task]: version conflict, document already exists (current version [4])
log [14:36:28.973] [warning][plugins][reporting] Enabling the Chromium sandbox provides an additional layer of protection.
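(For reference, remote access is normally governed by these kibana.yml settings; values here are placeholders:)
server.host: "0.0.0.0"   # or a specific interface IP
server.port: 5601
(The log above shows Kibana did bind to http://10.0.0.137:5601, so if that address is unreachable from other machines, it is also worth checking the host firewall, e.g. on CentOS: firewall-cmd --permanent --add-port=5601/tcp && firewall-cmd --reload.)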

installed puppet on a utility node

I'm running Puppet 6 on a utility node, and when I try to connect to the puppet master from the puppet agent I get this error.
[root@utility ~]# puppet agent --test
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: SSL_connect returned=1 errno=0 state=error: certificate verify failed: [unable to get certificate CRL for /CN=utility.example.com]
Info: Retrieving pluginfacts
Error: /File[/opt/puppetlabs/puppet/cache/facts.d]: Failed to generate additional resources using 'eval_generate': SSL_connect returned=1 errno=0 state=error: certificate verify failed: [unable to get certificate CRL for /CN=utility.example.com]
Error: /File[/opt/puppetlabs/puppet/cache/facts.d]: Could not evaluate: Could not retrieve file metadata for puppet:///pluginfacts: SSL_connect returned=1 errno=0 state=error: certificate verify failed: [unable to get certificate CRL for /CN=utility.example.com]
Info: Retrieving plugin
Error: /File[/opt/puppetlabs/puppet/cache/lib]: Failed to generate additional resources using 'eval_generate': SSL_connect returned=1 errno=0 state=error: certificate verify failed: [unable to get certificate CRL for /CN=utility.example.com]
Error: /File[/opt/puppetlabs/puppet/cache/lib]: Could not evaluate: Could not retrieve file metadata for puppet:///plugins: SSL_connect returned=1 errno=0 state=error: certificate verify failed: [unable to get certificate CRL for /CN=utility.example.com]
Info: Loading facts
Error: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=error: certificate verify failed: [unable to get certificate CRL for /CN=utility.example.com]
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Also, the certificate on the puppet agent does not show on the puppet master when I run puppet cert list --all
Warning: `puppet cert` is deprecated and will be removed in a future release.
(location: /opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/application.rb:370:in `run')
Since the agent is not issuing a certificate-signing request, it must already have a signed certificate. But it seems not to be a certificate that the master recognizes, so the master will not accept it. Possibly the agent does not accept the master's cert, either.
The master refusing service to an unrecognized agent is exactly what one would expect and want if an unauthorized node attempted to retrieve a catalog. The agent refusing to complete a connection to the master is exactly what one would expect and want if an agent's catalog request were delivered to an imposter posing as the master.
But if an authorized agent is having such a problem requesting a catalog from a genuine master that it should recognize, then you have a trust failure. This might happen, for example, if the agent's original master were replaced with a new one, or if Puppet were removed from the master and then re-installed.
If indeed that master has no cert for the agent in question, then you should be able to resolve the issue by shutting down the agent (if it is running as a daemon), then clearing out its certificates so that it generates a new one on its next run. The Puppet docs describe how this can be done (you should need only step 3, "Clear and regenerate certs for Puppet agents", and only for the affected agent).
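(A sketch of that reset on the agent, assuming the default Puppet 6 ssldir; check puppet config print ssldir first if yours differs:)
puppet resource service puppet ensure=stopped   # stop the agent daemon if it is running
rm -rf "$(puppet config print ssldir)"          # clear the agent's certs; a new key and CSR are generated on the next run
puppet agent --test                             # re-run; the master should now see a pending CSR
(Then sign it on the master with puppetserver ca sign --certname utility.example.com.)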

How to set up a secure connection between Filebeat and Elasticsearch using SSL

I'm unable to set up an SSL connection between Filebeat and Elasticsearch.
My knowledge is lacking when it comes to SSL. I'm using X-Pack to generate a certificate using the certutil command. bin/x-pack/certutil ca generates a certificate authority under the name elastic-stack-ca.p12.
Then
$ bin/x-pack/certutil cert --ca elastic-stack-ca.p12
Which I believe creates a certificate signed by that CA. This results in the file elastic-certificates.p12. From here I'm clueless.
I tried testing to see if the certificates work by setting up an HTTPS connection to ES.
I put
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: /path/to/elastic-certificates.p12
xpack.security.http.ssl.certificate: /path/to/elastic-certificates.p12
xpack.security.http.ssl.certificate_authorities: [ "/path/to/elastic-stack-ca.p12" ]
However, this brings up quite a few errors, one of them being:
caught exception while handling client http traffic, closing connection
When I add the https IP and the CA in Kibana, it fails to connect with ES.
I would like to know how to successfully set up HTTPS. Also, how can an SSL connection be established between two servers: one containing Filebeat but no X-Pack, and the receiving server with ES on it alongside X-Pack installed?
After adding those SSL settings in your elasticsearch.yml, you also need to add the keystore and truststore passwords to the Elasticsearch keystore. You should've set a password when you ran the certutil command. You can do that with:
$ echo password | /usr/share/elasticsearch/bin/elasticsearch-keystore add --stdin xpack.security.transport.ssl.keystore.secure_password
$ echo password | /usr/share/elasticsearch/bin/elasticsearch-keystore add --stdin xpack.security.transport.ssl.truststore.secure_password
Make sure you restart Elasticsearch after making these changes.
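(For the Filebeat side, which needs no X-Pack, a minimal sketch: first export the CA certificate from the .p12 to PEM, then point Filebeat at it. Paths and hostname are placeholders:)
openssl pkcs12 -in elastic-stack-ca.p12 -nokeys -out ca.pem   # prompts for the .p12 password
Then in filebeat.yml:
output.elasticsearch:
  hosts: ["https://es-host.example.com:9200"]        # placeholder hostname
  ssl.certificate_authorities: ["/path/to/ca.pem"]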
