ScaleFT uses certificate-based authentication for SSH, and according to the documentation it issues ephemeral credentials which expire after a few minutes: https://www.scaleft.com/docs/sshkeys/
I am investigating the possibility of using the ephemeral credentials that ScaleFT generates, and I am wondering whether anyone has been able to use Python Paramiko to build SSH connections with ScaleFT enabled.
I am not sure whether the ephemeral credentials are even available to the user. Moreover, even if they can be found and used, will the connections become unstable once the certificates expire?
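For concreteness, the kind of thing I would like to be able to do, assuming the ephemeral key and certificate end up somewhere readable (which is exactly what I am unsure about); the paths and host below are made up, and Paramiko's OpenSSH-certificate support (PKey.load_certificate, in newer releases) is more limited than the openssh client's:

# Rough sketch: connect with an ephemeral key plus its signed certificate,
# assuming ScaleFT has written them to disk at these hypothetical paths.
import paramiko

key = paramiko.RSAKey.from_private_key_file("/tmp/sft/ephemeral_key")
key.load_certificate("/tmp/sft/ephemeral_key-cert.pub")   # attach the signed certificate to the key

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect("target.example.com", username="alice", pkey=key)

stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode())
client.close()

# Note: expiry only matters at authentication time. A session that is already established
# stays up after the certificate expires; only new connections need freshly issued credentials.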
For the past few days, I've been trying to configure FreeRADIUS to authenticate Wi-Fi clients against OpenLDAP (without TLS, a plain bind on port 389).
I tried several guides and did not get the result I was looking for.
On localhost, radtest works and I receive an Access-Accept.
The user is found in LDAP and authentication is accepted.
When I try to authenticate over Wi-Fi (Windows 10), I can't connect.
The configuration I am currently using is this:
https://gitlab.com/ae-dir/client-examples/-/blob/master/freeradius/radiusd.conf
Does anyone have experience integrating FreeRADIUS with OpenLDAP?
I need Wi-Fi clients to connect with their LDAP credentials.
You have not explained the details of the authentication method you are trying to use, and this is important. However, a simple recipe for making FreeRADIUS + LDAP authentication work with Windows 10, Ubuntu and Android in EAP-TTLS mode is as follows:
Make sure the RADIUS server has access to the LDAP server. Also make sure that the clients (access points) have access to the RADIUS server. Check for firewall issues and the FreeRADIUS client configuration (on Debian 10 the file is /etc/freeradius/3.0/clients.conf).
For the authentication test (taking into account that the previous step has already been verified by you), there are two interesting tools: radtest (part of the freeradius-utils package), which does not support EAP-TTLS authentication, and a tool called eapol_test, which is part of the wpa_supplicant package and does support EAP-TTLS. A scripted alternative to radtest is sketched after this list.
Follow the EAP-TTLS configuration steps, and how to use the eapol_test tool, at this link.
Make sure you generate new certificates (don't use the snakeoil certificates at all) and don't forget to change the certificate settings in /etc/freeradius/3.0/mods-enabled/eap. The link from the previous step does not cover this.
Run FreeRADIUS in full debug mode to find any errors (i.e. freeradius -X).
Don't forget to check the password and protocol compatibility list.
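As the scripted alternative mentioned above: the localhost PAP check that radtest does (which does not exercise EAP-TTLS) can also be done from Python with the third-party pyrad package. This is only a sketch; the shared secret, user name, password and dictionary path are placeholders:

# Minimal RADIUS Access-Request test, roughly what "radtest bob password 127.0.0.1 0 testing123" does.
# Assumes pyrad is installed and a local RADIUS "dictionary" file is available.
from pyrad.client import Client
from pyrad.dictionary import Dictionary
import pyrad.packet

client = Client(server="127.0.0.1", secret=b"testing123", dict=Dictionary("dictionary"))
req = client.CreateAuthPacket(code=pyrad.packet.AccessRequest, User_Name="bob")
req["User-Password"] = req.PwCrypt("password")   # PAP only; EAP-TTLS still needs eapol_test

reply = client.SendPacket(req)
print("Access-Accept" if reply.code == pyrad.packet.AccessAccept else "Access-Reject")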
Let's say I want to set up a poor man's authentication scheme for a simple network service.
I don't want to bother with username/password authentication; for simplicity I just want to have a list of public keys in my application, and anyone who can prove they are the owner of one of those keys can use my service.
For the purposes of my application this would greatly simplify authentication, since all my users are on the local network and they all use Unix. Any time I onboard a new user I can just ask them for their SSH public key.
Is there a simple way to reuse the mechanism involved in SSH public key authentication in a non-SSH application? This question is intended to be language-agnostic.
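Roughly, the check I have in mind would look something like this (sketched here with Paramiko; the key paths are placeholders and the nonce transport between client and server is left out entirely):

# Sketch of a challenge-response check built on SSH keys; assumes paramiko is installed.
import base64
import os
import paramiko

# Client side: sign a server-provided nonce with the private key.
priv = paramiko.RSAKey.from_private_key_file(os.path.expanduser("~/.ssh/id_rsa"))
nonce = os.urandom(32)                              # in reality this comes from the server
signature = priv.sign_ssh_data(nonce).asbytes()     # bytes sent back to the server

# Server side: verify the signature against the stored public key line
# (one entry of the application's key list, in authorized_keys format).
pub_line = open(os.path.expanduser("~/.ssh/id_rsa.pub")).read().split()
pub = paramiko.RSAKey(data=base64.b64decode(pub_line[1]))

ok = pub.verify_ssh_sig(nonce, paramiko.Message(signature))
print("authenticated" if ok else "rejected")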
If you just have a list of users that can use your application and you have no need to see who did what, you can set up your server so that it listens only on localhost (127.0.0.1) rather than 0.0.0.0, and provide a restricted sshd that forwards the port required to connect to the application.
~/.ssh/authorized_keys will provide the list of authorized keys that can be used.
ssh -i private_key_file -L 3000:localhost:3000 <hostname>
For a basic setup and help with configuring your sshd, check out this answer:
https://askubuntu.com/questions/48129/how-to-create-a-restricted-ssh-user-for-port-forwarding
Note: be warned that if you don't lock it down, any of these users will have full shell access on the box where the application is hosted.
A dirty hack off the top of my head: could you wrap the application so that it creates an actual SSH tunnel from localhost to your server, and use that for the connection?
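Something along these lines, using the third-party sshtunnel package (the host name, key path and port 3000 are placeholders):

# Sketch of wrapping the app behind an SSH tunnel; assumes the sshtunnel package is installed.
from sshtunnel import SSHTunnelForwarder

tunnel = SSHTunnelForwarder(
    ("appserver.example.com", 22),
    ssh_username="alice",
    ssh_pkey="/home/alice/.ssh/id_rsa",
    remote_bind_address=("127.0.0.1", 3000),   # the service bound to localhost on the server
    local_bind_address=("127.0.0.1", 3000),    # expose it locally on the same port
)
tunnel.start()
# ... the client now talks to 127.0.0.1:3000 as if the service were local ...
tunnel.stop()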
Assuming you are talking about a web-based application, what you are really looking for is X.509 client certificates (extended key usage 1.3.6.1.5.5.7.3.2, i.e. TLS client authentication). These allow you to identify each user individually to your application.
They face the same issues that always come up around key distribution, which is generally considered a hard problem.
If you wanted to head down this road, here is what you would need to do:
Generate a root certificate (once)
Set up the web server with the appropriate modules to process the certificate (nginx/apache)
Generate a certificate for each user (openssl)
Download the certificate from a centralized server (maybe use their SSH public key here)
Install the X.509 cert locally (OS-dependent)
On the server side, you would need to process the cert as part of the web server (nginx and apache both have modules to do this) and then pass the name on to your application as a header field, which you can then process internally.
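If you wanted to prototype the server side without nginx/apache, a minimal sketch using Python's standard library might look like this (the file names server.crt, server.key and ca.crt are assumptions):

# Tiny HTTPS server that requires a client certificate and reads the subject CN from it.
import http.server
import ssl

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        cert = self.connection.getpeercert()                       # verified client certificate
        cn = dict(x[0] for x in cert["subject"]).get("commonName", "unknown")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(f"hello {cn}\n".encode())

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")   # the server's own cert/key
ctx.load_verify_locations("ca.crt")               # the root certificate from step 1
ctx.verify_mode = ssl.CERT_REQUIRED               # refuse clients without a valid certificate

httpd = http.server.HTTPServer(("0.0.0.0", 8443), Handler)
httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()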
This is a much better security solution than usernames and passwords; however, it is complex because of the key-distribution issue. Most people wouldn't bother, since in most applications it is easy enough to integrate logins with LDAP or RADIUS.
Recently all our testing machines were moved to a secured network. As a result, the IP addresses of all these machines have changed, and from now on we have to access them using the SSH protocol.
However, I am not able to use "remsh" against any target machine (which is also SSH-enabled) to perform tasks.
I have checked that the ".rhosts" file exists and that the target machine's IP has an entry in the "/etc/hosts" file.
Kindly let me know if I need to change or look anywhere else to make remsh work.
remsh, rlogin, rsh and rcp are not secure, because information is sent as plain text between the machines and because host verification is not done with secret keys but is host-based and can be forged. I would think that you have changed to SSH precisely for these reasons.
Luckily you can do all the same things using SSH. For example, after configuring the machines to use public/private key pairs, you can run commands on a remote machine automatically (by supplying a password or by using passwordless keys):
ssh user@remotehost command-to-be-run
If you haven't used SSH much before, there are a lot of things to learn, but isn't that fun? As a result you will also know how to make state-of-the-art secure connections. You will especially want to learn about public key authentication.
There are lots of tutorials on the Internet about creating and using keys and using SSH. http://www.olearycomputers.com/ll/ssh_guide.html seems like a good starting point. https://engineering.purdue.edu/ECN/Support/KB/Docs/SSHReplacingRhosts discusses specifically replacing .rhosts authentication with a key pair.
I've a quick question:
I have 2 websites; one has some links to file downloads. Those files are hosted on another server.
I need to encrypt the request data between the 2 servers. Can I do it using just an SSL certificate?
Any other/better ideas?
Those files are private docs, so I don't want anyone besides the 2 servers to be able to track the file requests between them.
Thanks
Yes, use SSL (or actually TLS) if you want to achieve transport-level security. If these are two servers that you control, you can configure your own self-signed certificates. If you want to make sure that only the two servers can communicate with each other, then require client authentication, where both the server and the client use a certificate/private-key pair.
Most of the time the trick is to implement a sensible key management procedure. Setting up a web server to handle TLS using certificates should not be too hard.
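For the site that pulls the files, a minimal sketch with Python's requests library could look like this (the URL, certificate and key paths are placeholders):

# Fetch a private file over mutual TLS; assumes the requests package and placeholder paths.
import requests

resp = requests.get(
    "https://files.internal.example/docs/report.pdf",
    cert=("/etc/ssl/site1/client.crt", "/etc/ssl/site1/client.key"),  # this server's client cert/key
    verify="/etc/ssl/site1/files-server-ca.pem",                      # trust only the file server's (self-signed) CA
)
resp.raise_for_status()
with open("report.pdf", "wb") as f:
    f.write(resp.content)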
An SSL certificate will work fine for ensuring the transfer is encrypted. Even a self-signed certificate will be fine for this purpose (provided you can tell the client you're going to use to accept the self-signed cert).
Alternatively, if it's two Linux machines then scp (secure copy) is a great tool: it connects via SSH and grabs the files. (There are Windows scp tools as well, such as WinSCP and PuTTY's pscp.)
rsync also supports running over SSH.
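If the transfer has to be scripted rather than run by hand, roughly the same thing can be done from Python with paramiko (the host name, user and paths here are placeholders):

# Pull a file over SFTP, which runs on top of SSH; assumes paramiko is installed.
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()                          # trust the hosts already in known_hosts
client.connect("files.internal.example", username="site1",
               key_filename="/home/site1/.ssh/id_rsa")

sftp = client.open_sftp()
sftp.get("/srv/private-docs/report.pdf", "report.pdf")  # remote path -> local path
sftp.close()
client.close()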
As for tracking the request: there's nothing you can do to prevent a device between your computer and the destination computer from logging the fact that a connection was made, but the encryption should prevent anyone from getting at the actual data you're sending.
Questions have been asked many times about how to handle self-signed certificates with Java, and implementations are often provided. However, I'm not sure that those implementations will give me the security/trust I am looking for.
My circumstances are as follows: I have a client program connecting to our server application, and we have complete control over both. Our client POSTs a stream over HTTPS to a URL on our server, and the server responds. Currently (and this is what I'm trying to fix) the server has a self-signed certificate. Java doesn't like this, and FOR TESTING ONLY we are pretty much ignoring the certificate altogether by trusting any certificate.
I have little knowledge of SSL. My boss says we can use our self-signed certificate and it will be secure as long as we don't make our encryption key public. This sounds correct to me, but a lot of posts say self-signed certs are automatically vulnerable to man-in-the-middle attacks. Does this mean SSL sends the encryption key along with the certificate?
Since we have control over both ends, should we just encrypt our data ourselves with a secret key and decrypt it at the other end using our key? Or is there a reason to use SSL?
Instead of trusting any certificate blindly (which would make the connection vulnerable to MITM attacks), configure your Java client to trust that particular certificate. Self-signed certificates do not inherently make SSL/TLS connections vulnerable to MITM attacks; they just make their distribution and the evaluation of trust more specific to this particular deployment (i.e. you have to configure it manually).
You can do this in at least 3 ways (pick the easiest one for you; I'd suggest bullet point #2):
Import the server certificate into your client's global trust store (lib/security/cacerts in your JRE directory). This will make every application run with this JRE trust this certificate.
Import the server certificate into another truststore (possibly a local copy of lib/security/cacerts) and make this particular application use that truststore. This can be done using the javax.net.ssl.trustStore system property.
Make your client application use an SSLContext initialised with an X509TrustManager configured to trust that certificate: either something written manually or a trust manager coming from TrustManagerFactory initialised by loading a local keystore that contains that particular certificate (as in the previous method).
You'll find more details about all this in the JSSE Reference Guide.
(This answer to a similar question should give you the details for doing all this properly, in particular keytool -import ....)
The arguments against self-signed certificates mainly apply to web applications, since with the current public CA infrastructure a browser won't be able to validate your self-signed certificate.
Since you have control over the client, you can simply hardcode the certificate you expect into the client. For example, you might calculate the SHA-1 hash of the certificate and check whether it matches the expected value. That way you don't even need to trust hundreds of CAs.
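The check itself is language-agnostic; roughly, in Python it would look like this (the host, port and expected value are placeholders):

# Sketch of pinning a server certificate by its fingerprint.
import hashlib
import ssl

EXPECTED = "0123456789abcdef"   # fingerprint recorded out-of-band from the known-good cert (placeholder)

pem = ssl.get_server_certificate(("server.example.com", 8443))
der = ssl.PEM_cert_to_DER_cert(pem)
fingerprint = hashlib.sha1(der).hexdigest()   # SHA-1 as suggested above; SHA-256 works the same way

if fingerprint != EXPECTED:
    raise RuntimeError("server certificate does not match the pinned fingerprint")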
To achieve secure communication you first need to ensure you're talking to the right computer. When the client first attempts to establish a secure connection, it contacts the server and the server responds with its cert. At this point you MUST validate the server's cert before continuing. The cert includes a public key and a signature that can be used to check that the cert is valid. For example, in web browsers this means checking that it has been signed by an authority listed as trusted in your browser settings; if that check fails you'll see red warnings in your browser. In your case this will mean you have manually (or in code) added the server's cert to a trust store so that it is trusted.