Is SSL appropriate for sending secure contents?

I am using mailR to send emails through R. This is my code
send.mail(from = [from],
          to = [to],
          subject = "msg",
          body = "contents",
          html = FALSE,
          inline = FALSE,
          authenticate = TRUE,
          smtp = list(host.name = "smtp.gmail.com",
                      port = 465,
                      user.name = [username],
                      passwd = [password],
                      ssl = TRUE),
          attach.files = "/home/User/outputlog.txt",
          send = TRUE)
I am sending sensitive info in the attachment. I am sending it through SSL.
I read this post about how secure SSL is and it looks pretty secure.
Does this message get encrypted in transit?

In theory, yes (for some definition of "transit"), but in practice, for "Does this message get encrypted in transit?", the answer is: maybe. In short, just putting ssl = TRUE (or equivalent) somewhere guarantees almost nothing by itself, for all the reasons explained below.
Hence you are probably not going to like the following detailed answer, as it shows that nothing is simple, and that you have no 100% guarantee even if you do everything right, and there are A LOT of things to do right.
Also, TLS is the real, true name of the feature you are using; SSL has been dead for some 20 years now. Yes, everyone still uses the old name, but that does not make the usage correct nevertheless.
First, and very important: TLS provides various guarantees, among which confidentiality (the content is encrypted while in transit), but also authentication, which in your case is far more important, for the following reasons.
You need to make sure that smtp.gmail.com is resolved correctly: if your server uses lying resolvers, or sits inside a hostile network that rewrites DNS queries or responses, then you may send encrypted content... to another party than the real "smtp.gmail.com", which makes the content not confidential anymore, because you are sending it to a stranger or an active attacker.
To solve that, you need basically DNSSEC, if you are serious.
And no, contrary to what a lot of people seem to believe and convey, TLS alone, or even DoH (DNS over HTTPS), does not solve that point.
Why? Because of the following scenario, which is not purely theoretical since it happened recently (https://www.bleepingcomputer.com/news/security/hacker-hijacks-dns-server-of-myetherwallet-to-steal-160-000/); even though it was in the WWW world and not email, the scenario can be the same:
- you manage to grab the IP addresses tied to the name contacted (this can be done by a BGP hijack, and it happens all the time, through misconfigurations, "policy" reasons, or active attacks);
- now that you control all communications, you put whatever server you need at the end of them;
- you contact any CA delivering DV certificates, including the purely automated ones;
- since the name now basically resolves to an IP you control, the web (or even DNS) validation that a CA does will succeed, and the CA will give you a certificate for this name (which may continue to work even after the end of the BGP hijack, because CAs may not be quick to revoke certificates, and clients may not properly check for that);
- hence any TLS stack accepting this CA will happily accept this certificate, and your client will "securely" send content with TLS... to another target than the intended one. Zero real security.
In fact, as the link above shows, attackers do not even need to be that smart: even a self-signed certificate or a hostname mismatch may go through, because users do not care, and/or the library has improper default behavior, and/or the programmer using the library does not use it properly (see this fascinating, albeit now a bit old, paper showing the very sad state of many "SSL" toolkits, with incorrect default behaviors, confusing APIs and various errors making invalid use far more probable than proper, sane TLS operation: https://www.cs.utexas.edu/~shmat/shmat_ccs12.pdf).
Proper TLS use does not make DNSSEC irrelevant: the two target and protect against different attacks. You need both to be more secure than with just one, and neither of the two (even properly used) replaces the other. Never has, never will.
Now, even if the resolution is correct, someone may have hijacked (thanks to BGP) the IP address. Then, again, you are sending encrypted content to some host, except that you do not really authenticate who that host is, so it can be anyone if an attacker managed to hijack the IP addresses of smtp.gmail.com (the attacker does not need to do it globally, just locally, "around" where your code executes).
This is where the very important TLS property of authentication kicks in.
This is typically done through X.509 certificates (which everywhere get called, incorrectly, "SSL certificates"). Each end of the communication authenticates the other by looking at the certificate it presented: either it recognizes this certificate as special, or it recognizes the issuing authority of the certificate as trusted.
So you do not just need to connect with TLS to smtp.gmail.com; you also need to double-check that the certificate then presented:
- is for smtp.gmail.com (and not any other name), taking wildcards into account;
- is issued by a certificate authority you trust.
All of this is normally handled by the TLS library you use, except that in many cases you need at least to explicitly enable this behaviour (verification), and, if you want to be extra sure, to decide clearly which CAs you trust. Otherwise too many attacks are possible, as has been seen in the past with rogue, incompetent, or otherwise broken CAs that issued certificates where they should not have (and no, no one is safe from that: even Google and Microsoft have had mis-issued certificates in the past, with potentially devastating consequences).
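To make this concrete, here is a minimal Python sketch (the same idea applies to any TLS library) contrasting a context with verification enabled against the dangerous configuration many toolkits make the silent default; the CA bundle path in the comment is purely hypothetical:

```python
import ssl

# A client context with proper verification: the peer must present a
# certificate chaining to a trusted CA, and its name must match the host.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# To restrict trust to an explicit CA set instead of the system store,
# load only the CAs you have decided to trust (path is hypothetical):
# ctx.load_verify_locations(cafile="/etc/ssl/my-trusted-cas.pem")

# What you must NOT do, yet what some libraries quietly default to:
bad = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
bad.check_hostname = False
bad.verify_mode = ssl.CERT_NONE  # accepts ANY certificate: no authentication
```

The `bad` context will complete a TLS handshake with any server at all, including an attacker's, which is exactly the "encrypted but not authenticated" failure described above.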
Now you have another problem, more specific to SMTP and SMTP over TLS: the server typically advertises that it does TLS, and the client, seeing this, can then start the TLS exchange. Then all is fine (barring all of the above).
But on the path between the SMTP server and you, someone can rewrite the first part of the exchange (which is in clear text) to remove the information that this SMTP server speaks TLS. The client then does not see TLS and continues in clear (depending on how it is written; to be secure in such a case, the client should abort the connection). This is called a downgrade attack. See this detailed explanation for example: https://elie.net/blog/understanding-how-tls-downgrade-attacks-prevent-email-encryption/
As Steffen points out, given the port you are using (465, TLS from the start), this SMTP STARTTLS issue, and hence the possible downgrade, does not exist; it concerns port 25, which you are not using. However, I prefer to still warn readers about this case, because it may not be well known, and downgrade attacks are often both hard to detect and hard to defend against (all of this because the protocols in use today were designed at a time when there was no need to even think about defending against a malicious actor on the path).
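For clients that do use STARTTLS (e.g. on port 25 or 587), the defensive behavior is to treat a missing STARTTLS capability as fatal rather than silently continuing in clear. A hedged sketch using Python's smtplib, where the helper below is my own illustration (it inspects the capabilities smtplib records from the EHLO reply):

```python
import smtplib

def require_starttls(server: smtplib.SMTP) -> None:
    """Raise unless the server advertised STARTTLS in its EHLO reply.

    A downgrade attacker can strip the STARTTLS capability from the
    clear-text EHLO response; a secure client must treat its absence
    as fatal instead of falling back to a plain-text session.
    """
    if not server.has_extn("starttls"):
        raise smtplib.SMTPException(
            "server did not offer STARTTLS; refusing to continue in clear")

# Hypothetical usage (host is a placeholder):
# server = smtplib.SMTP("smtp.example.com", 587)
# server.ehlo()
# require_starttls(server)          # abort here on a stripped EHLO
# server.starttls()                 # only now upgrade to TLS

# Simulate a downgraded EHLO response (capability stripped by an attacker):
stripped = smtplib.SMTP()           # constructed without connecting
stripped.esmtp_features = {}        # no "starttls" advertised
try:
    require_starttls(stripped)
    raise AssertionError("should have refused to continue")
except smtplib.SMTPException:
    pass                            # correct: the client aborts
```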
Then, of course, there is the problem of the TLS version you use, and its parameters. The standard is now TLS version 1.3, but it is still being slowly deployed, and you will find many TLS servers that only know about 1.2.
That can be good enough, if some precautions are taken. But you will also find old stuff speaking TLS 1.1, 1.0 or even worse (that is, SSL 3). Secure client code should refuse to continue exchanging packets if it cannot negotiate at least a TLS 1.2 connection.
Again, this is normally all handled by your "SSL" library, but again you have to check for it, enable the proper settings, etc.
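In Python's ssl module, for instance, enforcing a floor on the protocol version is a one-line setting (a sketch; recent versions of create_default_context already default to a TLS 1.2 minimum, but being explicit documents the intent):

```python
import ssl

ctx = ssl.create_default_context()
# Refuse anything older than TLS 1.2, regardless of what the peer offers.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
# Handshakes proposing TLS 1.1, 1.0 or SSL 3 now fail outright
# instead of silently negotiating a weaker version.
```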
You also have a similar downgrade-attack problem here: without care, a server first advertises what it offers, in clear, and an attacker could strip out the "highest" secure versions to force the client down to a lower version that has more known attacks (there are various attacks against TLS 1.0 and 1.1).
There are solutions, especially in TLS 1.3 and 1.2 (https://www.rfc-editor.org/rfc/rfc7633: "The purpose of the TLS feature extension is to prevent downgrade attacks that are not otherwise prevented by the TLS protocol.")
As an aside, and contrary to Steffen's opinion, I do not think that TLS downgrade attacks are purely theoretical. Some examples:
(from 2014): https://p16.praetorian.com/blog/man-in-the-middle-tls-ssl-protocol-downgrade-attack (mostly because web browsers are eager to connect no matter what, so if an attempt with the highest settings fails they typically fall back to lower versions until the connection succeeds)
https://www.rfc-editor.org/rfc/rfc7507 specifically offers a protection, stating that: "All unnecessary protocol downgrades are undesirable (e.g., from TLS 1.2 to TLS 1.1, if both the client and the server actually do support TLS 1.2); they can be particularly harmful when the result is loss of the TLS extension feature by downgrading to SSL 3.0. This document defines an SCSV that can be employed to prevent unintended protocol downgrades between clients and servers that comply with this document by having the client indicate that the current connection attempt is merely a fallback and by having the server return a fatal alert if it detects an inappropriate fallback."
https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2019/february/downgrade-attack-on-tls-1.3-and-vulnerabilities-in-major-tls-libraries/ discusses no fewer than 5 CVEs from 2018 that allow TLS attacks: "Two ways exist to attack TLS 1.3. In each attack, the server needs to support an older version of the protocol as well. [..] The second one relies on the fact that both peers support an older version of TLS with a cipher suite supporting an RSA key exchange." and "This prowess is achieved because of the only known downgrade attack on TLS 1.3." and "Besides protocol downgrades, other techniques exist to force browser clients to fallback onto older TLS versions: network glitches, a spoofed TCP RST packet, a lack of response, etc. (see POODLE)".
Even if you are using a correct version, you need to make sure correct algorithms, key sizes, etc. are used. Some servers/libraries enable a "NULL" encryption algorithm, which in fact means no encryption at all. Silly of course, but it exists, and that is the simple case; there are far more complicated ones.
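With OpenSSL-style cipher strings you can exclude such suites explicitly. A Python sketch (the particular cipher string is one reasonable choice among several, not the one true configuration):

```python
import ssl

ctx = ssl.create_default_context()
# Restrict the TLS <= 1.2 cipher suites: forward-secret AEAD suites only,
# and explicitly exclude NULL-encryption (eNULL) and NULL-auth (aNULL).
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20:!eNULL:!aNULL")

enabled = [c["name"] for c in ctx.get_ciphers()]
# No enabled suite should contain NULL anything.
assert not any("NULL" in name for name in enabled)
# (TLS 1.3 suites are configured separately and never include NULL encryption.)
```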
This other post from Steffen: https://serverfault.com/a/696502/396475 summarizes and touches on the various points above, and gives another view on what is most important (we disagree on this, but he answered here as well, so anyone is free to take both views into account and form their own opinion).
Hence MTA-STS, instead of plain SMTP STARTTLS: https://www.rfc-editor.org/rfc/rfc8461, with this clear abstract:
SMTP MTA Strict Transport Security (MTA-STS) is a mechanism
enabling mail service providers (SPs) to declare their ability to
receive Transport Layer Security (TLS) secure SMTP connections and
to specify whether sending SMTP servers should refuse to deliver to
MX hosts that do not offer TLS with a trusted server certificate.
Hence you will need to make sure that the host you send your email to does use that feature, and that your client is correctly programmed to handle it.
Again, this is probably done inside your "SSL library", but it clearly shows you need specific SMTP support in it: you need to contact a webserver to retrieve the remote end's SMTP policy, and you also need to do DNS requests, which brings us back to an earlier point: whether you trust your resolver, and whether the records are protected with DNSSEC.
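As an illustration, the MTA-STS policy is a simple key/value text file served at https://mta-sts.&lt;domain&gt;/.well-known/mta-sts.txt. A hypothetical minimal parser for such a policy body (the example policy text below is made up):

```python
def parse_mta_sts_policy(text: str) -> dict:
    """Parse an RFC 8461 policy body into a dict; the 'mx' key may repeat."""
    policy = {"mx": []}
    for line in text.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "mx":
            policy["mx"].append(value)
        else:
            policy[key] = value
    return policy

example = """\
version: STSv1
mode: enforce
mx: *.mail.example.com
max_age: 604800
"""
p = parse_mta_sts_policy(example)
# In "enforce" mode, a sending MTA must refuse to deliver to MX hosts
# that do not offer TLS with a trusted server certificate.
```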
And with all the above, which already covers many areas and is really hard to do correctly, there are still many other points to cover...
Let us assume the transit is safe. But then, how does the content get retrieved? You may say it is not your problem anymore. Maybe. Maybe not. Do you want to be liable for that? This means you should maybe also encrypt the attachment itself, in addition to (not in replacement of) securing the transport.
The standard mechanisms to secure email content are OpenPGP (with a more geek touch to it) and S/MIME (with a more corporate touch to it). These work for any content. Then there are solutions specific to the document type (which do not solve the problem of securing the body of the email itself): PDF documents, for example, can be protected by a password (warning: this has been cracked in the past).
I am sending sensitive info
This is then probably covered by some contract or norms, depending on your area of business. You may want to dig deeper into those to see exactly which requirements apply to you, so that you are not liable for problems even after having secured everything else correctly.

First, even if SSL/TLS is properly used when delivering the mail from the client, it only protects the first step of delivery, i.e. the delivery to the first MTA (mail transfer agent). But mail gets delivered in multiple steps over multiple MTAs, and is finally retrieved by the client from the last mail server.
Each of these hops (MTAs) has access to the plain mail, i.e. TLS is only hop-by-hop, not end-to-end between sender and recipient. Additionally, the initial client has no control over how one hop delivers the mail to the next hop: that might also be done with TLS, or it might be done in plain, or with TLS where no certificates get properly checked, which leaves it open to MITM attacks. And apart from that, each MTA in the delivery chain has access to the mail in plain text.
In addition, the delivery to the initial MTA might already have problems. While you use port 465 with smtps (TLS from the start, instead of an upgrade from plain using a STARTTLS command), the certificate of the server needs to be properly checked. I've had a look at the source code of mailR to check how this is done: mailR essentially uses Email from Apache Commons. And while mailR uses setSSL to enable TLS from the start, it does not use setSSLCheckServerIdentity to enable proper checking of the certificate. Since the default is to not properly check the certificate, already the connection to the initial MTA is vulnerable to man-in-the-middle attacks.
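For contrast, here is a hedged sketch in Python of "TLS from the start with proper certificate checking" using only the standard library (host, account and addresses in the usage comment are placeholders; smtplib with a default context performs both the chain check and the hostname check that mailR skips):

```python
import smtplib
import ssl

def make_verified_context() -> ssl.SSLContext:
    """A context that checks both the server certificate chain AND its
    hostname, i.e. the two checks a client must not skip (the equivalent
    of what setSSLCheckServerIdentity would enable in Apache Commons Email)."""
    ctx = ssl.create_default_context()
    assert ctx.verify_mode == ssl.CERT_REQUIRED and ctx.check_hostname
    return ctx

# Hypothetical usage: implicit TLS on port 465 (smtps).
# with smtplib.SMTP_SSL("smtp.gmail.com", 465,
#                       context=make_verified_context()) as s:
#     s.login("user@example.com", "app-password")
#     s.sendmail("user@example.com", ["rcpt@example.com"],
#                "Subject: msg\r\n\r\ncontents")
```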
In summary: the delivery is not secure, both due to how mail delivery works (hop-by-hop, not end-to-end) and due to how mailR uses TLS. To have proper end-to-end security you'll have to encrypt the mail itself and not just the delivery. PGP and S/MIME are the established methods for this.
For more see also How SSL works in SMTP? and How secure is e-mail landscape right now?.

Related

How to tell users to upgrade to browser supporting TLS 1.2 in ASP.NET

Apparently we are turning off support of TLS < 1.2 in the near future. So, we would like to inform users that access our site, prior to the turn off, to upgrade their browsers.
Initially I looked at HowsMySSL.com, which has an API that can be accessed via Javascript, but ultimately we don't want to access a 3rd party API.
Is there not a server variable in ASP.NET, which indicates which cipher version has been handshaken between the client and server?
To reiterate, we haven't turned off TLS < 1.2 YET, but want to be proactive to inform those users that will be affected. So, the users will successfully negotiate the handshake, I'm just looking to get the value of the cipher used...
See this thread (oh the futility!): Check ssl protocol, cipher & other properties in an asp.net mvc 4 application
We haven't come up with a solution yet either. Though the SCHANNEL event-log parsing is looking like a promising way to at least get a feel for how many people are connecting with which protocol.

Simple password authentication in TCP client server architecture

Good morning everyone.
I've been reading (most of it here in stack overflow) about how to make a secure password authentication (hashing n times, using salt, etc) but I'm in doubt of how I'll actually implement it in my TCP client-server architecture.
I have already implemented and tested the methods I need (using jasypt digester), but my doubt is where to do the hashing and its verification.
From what I read, good practice is to avoid transmitting the password. In that scheme, the server would send the hashed password and the client would test it against the one entered by the user, then tell the server whether the authentication was successful. Ok, this won't work, because anyone who connects to the socket the server is reading and sends an "authentication ok" will be logged in.
The other option is to send the password's hash to the server. In this case I don't see any actual benefit from hashing, since the attacker would just have to send the same hash to authenticate.
Probably I'm not getting some details, so, can anyone give me a light on this?
The short answer to your question: do the verification on the side that permanently stores the hashes of the passwords.
The long answer: hashing passwords only prevents an attacker with read-only access to your password storage (e.g. the database) from escalating to higher privilege levels, and prevents you from knowing the actual secret password, because lots of users reuse the same password across multiple services (good descriptions here and here). That is why you need to do the validation on the storage side (because otherwise, as you've mentioned, the attacker would just send a "validation ok" message and that's it).
However, if you want to implement a truly secure connection, simply hashing passwords is not enough (as you've also mentioned, an attacker could sniff the TCP traffic and capture the hash). For this purpose you need to establish a secure connection, which is much harder than just hashing a password (in the web world, a page where you enter your password should always be served over HTTPS). SSL/TLS should be used for this; note that these protocols lie on top of TCP, so you might need another solution for your architecture (in essence, you need a trusted certificate source, you need to validate the server certificate, negotiate a shared symmetric encryption key, and then encrypt all the data you send). Once you've established a secure encrypted connection, sniffing is useless: the attacker never learns even the hash of the password.
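To make the storage-side validation concrete, a minimal sketch using only the Python standard library: PBKDF2 with a per-user random salt, with verification done on the server (the iteration count is one reasonable choice, not an authoritative recommendation):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # reasonable as of today; tune for your hardware

def hash_password(password: str):
    """Return (salt, digest) to store server-side; never store the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Run on the server: recompute and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("s3cret")
assert verify_password("s3cret", salt, stored)
assert not verify_password("wrong", salt, stored)
```

Note the password itself still crosses the wire in this scheme, which is exactly why the connection carrying it must be TLS-protected as the answer describes.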

What Necessitates a Different Protocol for Email?

In what way is HTTP inappropriate for E-mail? How (for example) does the statefulness of IMAP benefit client development?
What actually are the arguments for keeping them separate, other than historical and backwards-compatibility reasons?
SMTP, IMAP, and HTTP are specialized application-level protocols. If there was a generic application-level protocol which all of these could inherit from, you could usefully refactor things, but since that is not the case, wedging the other protocols into one of the existing protocols is hardly worth the effort, and would hardly simplify things.
As things are now, the history and backwards compatibility is not just a cultural heritage, it is also a long and complex process of defining application-specific features for each protocol. SMTP is store-and-forward, which introduces the need for audit headers (Received: et al.). IMAP was designed for concurrent access to a data store, which is what made it necessary to introduce state (who are you, where are you authorized to connect, which folder are you connected to, what have you already seen, read, or deleted). HTTP is fundamentally a pull protocol (pull down a web page) and the POST facility carries with it a lot of functionality specific to the CGI protocol and the overall content model of HTTP.
SMTP is a protocol that identifies the sender and the recipients to send individual mail messages, each mail server accepts (or not) mail to forward, eventually reaching the destination. HTTP is meant for anybody to connect to the server and look at (mostly the same) contents. They are quite fundamentally different, and so it makes a lot of sense to use different protocols.

OPENSSL vs IPSEC

just a very general question, but can somebody tell me when I use openSSL and
when IPSEC to secure data transfer over the internet? It seems both of them
are doing the same, only at different levels of the network protocol. So
I am not absolutely sure why we need both of them.
Cheers for your help
Yes, different levels of the network protocol. One is implemented in the OS and the other in an application.
So the reason that both are needed:
IPSEC can secure all traffic including that from applications that don't use encryption. But, both sides must use an OS that supports IPSEC and must be configured by the system administrator.
SSL can secure the traffic for one application. It does not need to use a particular OS and it does not need administrator access permissions to configure it.
You are getting it a bit wrong: IPSEC is for securing the communication between two machines. Say you want to send a packet to another machine, but you want that no one can even determine which protocol you are using (TCP/UDP, etc.): then you use IPSEC. And that is not all; there is much more to explore about IPSEC.
openssl, on the other hand, is essentially a library of encryption/authentication functions.
A little example makes the difference clear. Suppose you want to secure traffic between two machines: you create a secure encrypted packet and send it to the other machine, where it is decrypted based on security associations. All of this is part of the IPSEC protocol. But when encrypting the packet on the sending machine, you may have used some C/Linux functions to do the actual encryption: this is where openssl comes into place. Similarly, on the other end, when you capture the packet and extract the required part, you can decrypt it using the openssl functions on your machine.
I tried explaining it as best I could; hope it helped! If you still have any doubt, do ask!
IPSec is based on a configuration file that runs in the background and encrypts all the data between two machines. This encryption is based on IP pairs, an initiator and a responder (at least that's the configuration they use at my workplace, which more or less conforms to the standards). ALL the IP traffic between the two machines is then encrypted. Neither the type nor the content of the traffic is shown. It has its own encapsulation that encapsulates the WHOLE packet (including all the headers that the packet previously had). The packet is then decapsulated (if that's a word) at the other end to get a fully formed packet (not just the payload). The encryption might be using the encryption provided by SSL (e.g. OpenSSL).
SSL, on the other hand, encrypts the data, and then you can do whatever you want with it. You can put it on a USB stick and give it to someone, or keep it encrypted locally to prevent data theft, or send it over the internet or a network (in which case the packet itself won't be encrypted, only the payload, which is what SSL encrypts).

How to tell if a Request is coming from a Proxy?

Is it possible to detect if an incoming request is being made through a proxy server? If a web application "bans" users via IP address, they could bypass this by using a proxy server. That is just one reason to block these requests. How can this be achieved?
IMHO there's no 100% reliable way to achieve this but the presence of any of the following headers is a strong indication that the request was routed from a proxy server:
via:
forwarded:
x-forwarded-for:
client-ip:
You could also look for "proxy" or "pxy" in the client's domain name.
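A hedged sketch of the header check in Python (the header names come from the list above; the plain dict stands in for whatever request-headers mapping your web framework exposes):

```python
PROXY_HEADERS = ("via", "forwarded", "x-forwarded-for", "client-ip")

def looks_proxied(headers: dict) -> bool:
    """Heuristic only: the absence of these headers proves nothing,
    since anonymizing proxies deliberately omit them."""
    lowered = {k.lower() for k in headers}
    return any(h in lowered for h in PROXY_HEADERS)

assert looks_proxied({"Via": "1.1 proxy.example.net"})
assert not looks_proxied({"Host": "example.com"})
```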
If a proxy server is setup properly to avoid the detection of proxy servers, you won't be able to tell.
Most proxy servers supply headers as others mention, but those are not present on proxies meant to completely hide the user.
You will need to employ several detection methods, such as cookies, proxy header detection, and perhaps IP heuristics to detect such situations. Check out http://www.osix.net/modules/article/?id=765 for some information on this situation. Also consider using a proxy blacklist - they are published by many organizations.
However, nothing is 100% certain. You can employ the above tactics to avoid most simple situations, but at the end of the day it's merely a series of packets forming a TCP/IP transaction, and the TCP/IP protocol was not developed with today's ideas on security, authentication, etc.
Keep in mind that many corporations deploy company wide proxies for various reasons, and if you simply block proxies as a general rule you necessarily limit your audience, and that may not always be desirable. However, these proxies usually announce themselves with the appropriate headers - you may end up blocking legitimate users, rather than users who are good at hiding themselves.
-Adam
Did a bit of digging on this after my domain got hosted up on Google's AppSpot.com with nice hardcore porn ads injected into it (thanks Google).
Taking a leaf from this htaccess idea I'm doing the following, which seems to be working. I added a specific rule for AppSpot which injects a HTTP_X_APPENGINE_COUNTRY ServerVariable.
Dim varys As New List(Of String)
varys.Add("VIA")
varys.Add("FORWARDED")
varys.Add("USERAGENT_VIA")
varys.Add("X_FORWARDED_FOR")
varys.Add("PROXY_CONNECTION")
varys.Add("XPROXY_CONNECTION")
varys.Add("HTTP_PC_REMOTE_ADDR")
varys.Add("HTTP_CLIENT_IP")
varys.Add("HTTP_X_APPENGINE_COUNTRY")

For Each vary As String In varys
    If Not String.IsNullOrEmpty(HttpContext.Current.Request.Headers(vary)) Then
        HttpContext.Current.Response.Redirect("http://www.your-real-domain.com")
    End If
Next
You can look for these headers in the Request object and accordingly decide whether the request came via a proxy:
1) Via
2) X-Forwarded-For
Note that this is not a 100% sure-shot trick; it depends on whether these proxy servers choose to add the above headers.
