beginner here.
I am currently looking at some C&C traffic from an infected machine and have come across some interesting TCP segments within the PCAP file. Most of the traffic is just regular HTTPS traffic where I could see the TLS handshake occur - however, there is one connection between this machine and a C2 host where there was only a three-way TCP handshake followed by an encrypted connection. No exchange of cipher suites, etc. like with TLS. It was just a handshake and then immediately encrypted traffic with nothing else. How does this occur? I feel like this is a stupid question to ask but can't seem to find any information on Google relating to this.
One of my assumptions is that the machine and the server already know which keys to use for encrypting/decrypting the data, since multiple key pairs are dropped onto the machine when the infection occurs.
Handshake followed by encrypted traffic:
Payload:
I would appreciate any help! Thank you.
The traffic is not necessarily encrypted; all we see is an exchange of opaque data shown as hexadecimal - it could just as well be merely encoded.
There is no hard requirement to use a higher-layer application protocol such as HTTPS with TLS. TCP stands for Transmission Control Protocol, so you can transmit arbitrary data with it directly.
And yes, malware can ship its own keys, but then you have a good chance of retrieving them and decrypting the communication after reverse engineering the sample.
You only need to open a listening TCP socket on one side and a connecting TCP socket on the other, and you can send any data of your choice. Maybe read a bit about TCP sockets and play with them from here.
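To make that concrete, here is a minimal sketch (Python is used just for brevity; the port and the hex payload are made-up placeholders): one listening socket, one connecting socket, and opaque bytes sent immediately after the three-way handshake, with no TLS negotiation anywhere.

import socket, threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 4444))                  # placeholder port
srv.listen(1)

def handle():
    conn, _ = srv.accept()                     # the TCP three-way handshake completes here
    print("received:", conn.recv(1024))        # opaque bytes: no cipher suites, no certificates
    conn.close()

t = threading.Thread(target=handle)
t.start()

cli = socket.create_connection(("127.0.0.1", 4444))
cli.sendall(bytes.fromhex("9f3a1c44d2e07b"))   # stand-in for data already encrypted with shipped keys
cli.close()
t.join()
srv.close()

In a capture this looks exactly like your PCAP: SYN, SYN-ACK, ACK, then payload bytes that no dissector can classify.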
I am using mailR to send emails through R. This is my code
library(mailR)

send.mail(from = [from],
to = [to],
subject = "msg",
body = "contents",
html = FALSE,
inline = FALSE,
authenticate = TRUE,
smtp = list(host.name = "smtp.gmail.com",
port = 465,
user.name = [username],
passwd = [password],
ssl = TRUE),
attach.files = "/home/User/outputlog.txt",
send = TRUE)
I am sending sensitive info in the attachment. I am sending it through SSL.
I read this post about how secure SSL is and it looks pretty secure.
Does this message get encrypted in transit?
In theory, yes (for some definition of "transit"), but in practice the answer to "Does this message get encrypted in transit?" is: maybe. In short, putting ssl = TRUE or an equivalent flag somewhere guarantees almost nothing by itself, for all the reasons explained below.
Hence you are probably not going to like the following detailed response, as it basically shows that nothing is simple, that you have A LOT of things to do right, and that you have no 100% guarantee even if you do everything right.
Also, TLS is the real name of the feature you are using; SSL has been dead for some 20 years now. Yes, everyone still uses the old name, but that does not make the usage right.
First, and very importantly, TLS provides various guarantees, among which confidentiality (the content is encrypted while in transit), but also authentication, which in your case is far more important, for the following reasons.
You need to make sure that smtp.gmail.com is resolved correctly. If your server uses lying resolvers, or sits inside a hostile network that rewrites DNS queries or responses, then you can end up sending your encrypted content... to a party other than the real "smtp.gmail.com", which makes the content not confidential anymore, because you are sending it to a stranger or an active attacker.
To solve that, you basically need DNSSEC, if you are serious about it.
And no, contrary to what a lot of people seem to believe and convey, TLS alone, or even DoH (DNS over HTTPS), does not solve that point.
Why? Because of the following attack, which is not purely theoretical since it happened recently (https://www.bleepingcomputer.com/news/security/hacker-hijacks-dns-server-of-myetherwallet-to-steal-160-000/). Even though that case was in the web world and not email, the scenario can be the same:
you manage to grab the IP addresses tied to the name being contacted (this can be done by a BGP hijack, and that happens all the time, through misconfiguration, "policy" reasons, or active attacks)
now that you control all communications, you put whatever server you need at the end of it
you contact any CA delivering DV certificates, including those purely automated
since the name now basically resolves to an IP you control, the web (or even DNS) validation that a CA does will succeed, and the CA will give you a certificate for this name (which may continue to work even after the BGP hijack ends, because CAs may not be quick to revoke certificates, and clients may not properly check for revocation).
hence any TLS stack accepting this CA will happily accept this certificate, and your client will send content "securely" with TLS... to a target other than the intended one, hence zero real security.
In fact, as the link above shows, attackers do not even need to be that smart: even a self-signed certificate or a hostname mismatch may go through, because users will not care and/or the library will have improper default behavior and/or the programmer using the library will not use it properly (see this fascinating, albeit a tad old now, paper showing the very sad state of many "SSL" toolkits, with incorrect default behavior, confusing APIs and various errors making invalid use far more probable than proper, sane TLS operation: https://www.cs.utexas.edu/~shmat/shmat_ccs12.pdf)
Proper TLS use does not make DNSSEC irrelevant. Each targets and protects against different attacks. You need both to be more secure than with just one, and neither of the two (even properly used) replaces the other. It never has and never will.
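As a small illustration (not a full solution), you can at least ask a validating resolver whether it could authenticate the answer. This is a Python sketch using dnspython, with Quad9 picked purely as an example of a validating resolver; keep in mind it only reports what that resolver claims, and a missing AD flag can also simply mean the zone is not signed at all.

import dns.resolver, dns.flags        # dnspython >= 2.0 assumed

res = dns.resolver.Resolver()
res.nameservers = ["9.9.9.9"]         # a validating resolver, chosen as an example
res.use_edns(0, dns.flags.DO, 1232)   # set the DNSSEC-OK bit on the query
answer = res.resolve("smtp.gmail.com", "A")
# AD ("authenticated data") means the resolver validated the signature chain;
# you are still trusting that resolver and the path to it.
print("validated by resolver:", bool(answer.response.flags & dns.flags.AD))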
Now, even if the resolution is correct, someone may have hijacked (thanks to BGP) the IP address. Then, again, you are sending some encrypted content to a host without really authenticating who that host is, so it can be anyone, if an attacker managed to hijack the IP addresses of smtp.gmail.com (they do not need to do it globally, just locally, "around" where your code executes).
This is where the very important TLS property of authentication kicks in.
This is typically done through X.509 certificates (which will be called - incorrectly - SSL certificates everywhere). Each end of the communication authenticates the other by looking at the certificate it presented: either it recognizes this particular certificate as special, or it recognizes the issuing authority of this certificate as trusted.
So you do not just need to connect with TLS to smtp.gmail.com, you also need to double-check that the certificate then presented:
is for smtp.gmail.com (and not any other name), taking into account wildcards
is issued by a certificate authority you trust
All of this is normally handled by the TLS library you use, except that in many cases you need at least to explicitly enable this behaviour (verification), and, if you want to be extra sure, you need to decide clearly which CAs you trust. Otherwise, too many attacks are possible, as can be seen in the past with rogue, incompetent or otherwise broken CAs that issued certificates where they should not have (and no one is safe against that: even Google and Microsoft have had certificates mis-issued for their names in the past, with potentially devastating consequences).
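To make those two checks concrete, here is a minimal Python sketch, purely as an illustration (mailR/Apache Commons Email is a different stack). create_default_context() already enables both checks; they are spelled out for clarity.

import socket, ssl

ctx = ssl.create_default_context()    # loads the system's trusted CA store
ctx.check_hostname = True             # the certificate must match the name we dial
ctx.verify_mode = ssl.CERT_REQUIRED   # and must chain to a CA we trust

with socket.create_connection(("smtp.gmail.com", 465)) as raw:
    with ctx.wrap_socket(raw, server_hostname="smtp.gmail.com") as tls:
        print(tls.version())                    # negotiated protocol version
        print(tls.getpeercert()["subject"])     # who the server actually claims to be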
Now you have another problem, more specific to SMTP and SMTP over TLS: the server typically advertises that it does TLS, and the client, seeing this, can then start the TLS exchange. Then all is fine (barring all the above).
But on the path between the SMTP server and you, someone can rewrite the first part of the exchange (which is in clear) in order to remove the information that this SMTP server speaks TLS. The client then does not see TLS and continues in clear (depending on how it is developed; to be secure in such cases the client should of course abort the communication). This is called a downgrade attack. See this detailed explanation for example: https://elie.net/blog/understanding-how-tls-downgrade-attacks-prevent-email-encryption/
As Steffen points out, based on the port you are using, this particular SMTP STARTTLS issue and hence the possible downgrade does not exist, because it applies to port 25, which you are not using. However, I prefer to still warn users about this case because it may not be well known, and downgrade attacks are often both hard to detect and hard to defend against (all of this because the protocols used nowadays were designed at a time when there was no need to even think about defending against a malicious actor on the path).
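For completeness, defending against STARTTLS stripping on a port that does use the upgrade mechanism looks roughly like this (a Python sketch; port 587 is used only as an example of a STARTTLS submission port, not of what your mailR call does):

import smtplib, ssl

ctx = ssl.create_default_context()
with smtplib.SMTP("smtp.gmail.com", 587, timeout=10) as server:
    server.ehlo()
    if not server.has_extn("starttls"):
        # a middlebox may have stripped the capability: abort instead of continuing in clear
        raise RuntimeError("server did not offer STARTTLS, refusing to send")
    server.starttls(context=ctx)      # upgrade the connection and verify the certificate
    server.ehlo()
    # authenticate and send only after this point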
Then of course you have the problem of the TLS version you use, and its parameters. The standard is now TLS version 1.3, but it is still being deployed only slowly. You will find many TLS servers that only know about 1.2.
That can be good enough if some precautions are taken. But you will also find old stuff speaking TLS 1.1, 1.0 or even worse (that is, SSL 3). A secure client should refuse to continue exchanging packets if it was not able to negotiate at least a TLS 1.2 connection.
Again, this is normally all handled by your "SSL" library, but again you have to check for that, enable the proper settings, etc.
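With Python's ssl module, for instance, pinning the floor to TLS 1.2 and throwing out NULL/anonymous ciphers takes a couple of lines (a sketch assuming Python 3.7+; the cipher string is standard OpenSSL syntax):

import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2    # refuse TLS 1.1, 1.0 and SSL 3
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20:!aNULL:!eNULL:!MD5")   # no NULL "encryption"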
You also have a similar downgrade attack problem here: without care, a server first advertises what it offers, in clear, and an attacker could modify this to remove the "highest" secure versions, forcing the client to use a lower version that has more known attacks (there are various attacks against TLS 1.0 and 1.1).
There are solutions, especially in TLS 1.3 and 1.2 (https://www.rfc-editor.org/rfc/rfc7633 : "The purpose of the TLS feature extension is to prevent downgrade attacks that are not otherwise prevented by the TLS protocol.")
As an aside, and contrary to Steffen's opinion, I do not think that TLS downgrade attacks are purely theoretical. Some examples:
(from 2014): https://p16.praetorian.com/blog/man-in-the-middle-tls-ssl-protocol-downgrade-attack (mostly because web browsers are eager to connect no matter what, so typically if an attempt with the highest settings fails, they will fall back to lower versions until they find a case where the connection succeeds)
https://www.rfc-editor.org/rfc/rfc7507 specifically offers a protection, stating that: "All unnecessary protocol downgrades are undesirable (e.g., from TLS 1.2 to TLS 1.1, if both the client and the server actually do support TLS 1.2); they can be particularly harmful when the result is loss of the TLS extension feature by downgrading to SSL 3.0. This document defines an SCSV that can be employed to prevent unintended protocol downgrades between clients and servers that comply with this document by having the client indicate that the current connection attempt is merely a fallback and by having the server return a fatal alert if it detects an inappropriate fallback."
https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2019/february/downgrade-attack-on-tls-1.3-and-vulnerabilities-in-major-tls-libraries/ discusses no fewer than 5 CVEs from 2018 that allow TLS attacks: "Two ways exist to attack TLS 1.3. In each attack, the server needs to support an older version of the protocol as well. [..] The second one relies on the fact that both peers support an older version of TLS with a cipher suite supporting an RSA key exchange." and "This prowess is achieved because of the only known downgrade attack on TLS 1.3." and "Besides protocol downgrades, other techniques exist to force browser clients to fallback onto older TLS versions: network glitches, a spoofed TCP RST packet, a lack of response, etc. (see POODLE)".
Even if you are using a correct version, you need to make sure correct algorithms, key sizes, etc. are used. Sometimes a server/library enables a "NULL" encryption algorithm, which in fact means no encryption at all. Silly of course, but it exists, and that is a simple case; there are far more complicated ones.
This other post from Steffen, https://serverfault.com/a/696502/396475, summarizes and touches on the various points above, and gives another view on what is most important (we disagree on this, but he answered here as well, so anyone is free to take both views into account and form their own opinion).
Hence MTA-STS rather than plain opportunistic SMTP STARTTLS: https://www.rfc-editor.org/rfc/rfc8461, with this clear abstract:
SMTP MTA Strict Transport Security (MTA-STS) is a mechanism enabling mail service providers (SPs) to declare their ability to receive Transport Layer Security (TLS) secure SMTP connections and to specify whether sending SMTP servers should refuse to deliver to MX hosts that do not offer TLS with a trusted server certificate.
Hence you will need to make sure that the host you send your email to does use that feature, and that your client is correctly programmed to handle it.
Again, this is probably done inside your "SSL library", but it clearly shows that you need SMTP-specific support in it: you need to contact a webserver to retrieve the remote end's SMTP policy, and you also need to do DNS requests, which brings you back to the earlier point about whether you trust your resolver and whether the records are protected with DNSSEC.
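Both MTA-STS lookups are easy to observe yourself. A small Python sketch (dnspython and a recipient domain that actually publishes a policy are assumptions here):

import urllib.request
import dns.resolver                   # dnspython

domain = "gmail.com"                  # recipient domain, assumed to publish an MTA-STS policy

# RFC 8461: a TXT record announces the current policy id...
for rr in dns.resolver.resolve(f"_mta-sts.{domain}", "TXT"):
    print(rr.to_text())

# ...and the policy itself is fetched over HTTPS from a well-known location.
url = f"https://mta-sts.{domain}/.well-known/mta-sts.txt"
with urllib.request.urlopen(url, timeout=10) as resp:
    print(resp.read().decode())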
And with all the above, which already covers many areas and is really hard to do correctly, there are still many other points to cover...
The transit is safe, let us assume. But then, how does the content get retrieved? You may say it is not your problem anymore. Maybe. Maybe not. Do you want to be liable for that? This means that you should maybe also encrypt the attachment itself, in addition to (not as a replacement for) securing the transport.
The default mechanisms to secure email content either use OpenPGP (which has a more geek touch to it) or S/MIME (which has a more corporate touch to it). These work for everything. Then you have specific solutions depending on the document (but these do not solve the problem of securing the body of the email); for example, PDF documents can be protected by a password (warning: this has been cracked in the past).
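For the OpenPGP route, encrypting the attachment before it ever reaches the mail stack can be a single GnuPG call. A hedged sketch (it assumes gpg is installed and the recipient's public key is already imported; the recipient address is a placeholder, the path is the file from your mailR call):

import subprocess

subprocess.run(
    ["gpg", "--encrypt",
     "--recipient", "recipient@example.com",   # placeholder recipient key
     "--output", "/home/User/outputlog.txt.gpg",
     "/home/User/outputlog.txt"],
    check=True,
)
# then attach /home/User/outputlog.txt.gpg instead of the plain file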
I am sending sensitive info
This is then probably covered by some contract or some norms, depending on your area of business. You may want to dig deeper into those to see exactly what requirements are forced upon you, so that you are not liable for problems if you have secured everything else correctly.
First, even if SSL/TLS is properly used when delivering the mail from the client, it only protects the first step of delivery, i.e. the delivery to the first MTA (mail transfer agent). But mail gets delivered in multiple steps over multiple MTAs, and it finally gets retrieved by the client from the last mail server.
Each of these hops (MTAs) has access to the plain mail, i.e. TLS is only used between hops and not end-to-end between sender and recipient. Additionally, the initial client has no control over how one hop delivers the mail to the next hop. That might be done with TLS, but it might be done in plain, or with TLS where the certificates do not get properly checked, which leaves it open to MITM attacks. And in any case, each MTA in the delivery chain still sees the mail in plain text.
In addition to that, the delivery to the initial MTA might already have problems. While you use port 465 with smtps (TLS from the start instead of an upgrade from plain using a STARTTLS command), the certificate of the server needs to be properly checked. I've had a look at the source code of mailR to check how this is done: mailR essentially uses Email from Apache Commons. And while mailR uses setSSL to enable TLS from the start, it does not use setSSLCheckServerIdentity to enable proper checking of the certificate. Since the default is to not properly check the certificate, already the connection to the initial MTA is vulnerable to man-in-the-middle attacks.
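For comparison, this is roughly what a client that does verify the server looks like, sketched in Python only to illustrate the behaviour mailR is missing (it is not a fix for mailR itself; the credentials are placeholders):

import smtplib, ssl

ctx = ssl.create_default_context()    # verifies the certificate chain and the hostname
with smtplib.SMTP_SSL("smtp.gmail.com", 465, context=ctx, timeout=10) as server:
    server.login("username@gmail.com", "app-password")   # placeholders
    # server.send_message(...) would follow here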
In summary: the delivery is not secure, both due to how mail delivery works (hop-by-hop and not end-to-end) and due to how mailR uses TLS. To have proper end-to-end security you'll have to encrypt the mail itself and not just the delivery. PGP and S/MIME are the established methods for this.
For more see also How SSL works in SMTP? and How secure is e-mail landscape right now?.
I have been writing my own version of the 802.11 protocol along with a network stack. This is mostly a learning experience to see more in depth how networks work.
My question is, is there a standard for replying to client devices that a certain protocol is unsupported?
I have an Android device connecting to my custom wifi device and immediately sending a TON of requests at the DNS port of my UDP implementation. Since I would like to test out other protocols, I would very much like a way for my wifi device to tell the Android device that DNS is not available and get it to quiet down a little.
Thanks in advance!
I don't see a way to send a reply saying that a service is not available.
I can't find anything about this case in the UDP specification.
One part of the DNS specification assumes that there are multiple DNS servers and defines how to handle communication with them. This explains part of the behavior in your network, but does not provide much information on how to handle it.
4.2.1 Messages - format - UDP usage
The optimal UDP retransmission policy will vary with performance of the Internet and the needs of the client, but the following are recommended:
The client should try other servers and server addresses before repeating a query to a specific address of a server.
The retransmission interval should be based on prior statistics if possible. Too aggressive retransmission can easily slow responses for the community at large. Depending on how well connected the client is to its expected servers, the minimum retransmission interval should be 2-5 seconds.
7.2 Resolver Implementation - sending the queries
If a resolver gets a server error or other bizarre response from a name server, it should remove it from SLIST, and may wish to schedule an immediate transmission to the next candidate server address.
According to this, you could try to send an error (or simply garbage) back to the client, but that is rather a hack, and what does a proper error look like? Such a solution also assumes knowledge about the very service you do not support.
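One concrete reading of "a server error" from the quoted RFC text is to answer every query with the REFUSED response code and see whether the client backs off (whether Android's resolver actually does is implementation-dependent). A sketch in Python using dnspython, which is an assumption about your tooling:

import socket
import dns.message, dns.rcode        # dnspython

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 53))

while True:
    data, addr = sock.recvfrom(512)
    try:
        query = dns.message.from_wire(data)
    except Exception:
        continue                             # not parseable as DNS, ignore it
    reply = dns.message.make_response(query)
    reply.set_rcode(dns.rcode.REFUSED)       # "I will not serve this", instead of silence
    sock.sendto(reply.to_wire(), addr)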
I believe that the DNS requests can be avoided by using DHCP. DHCP allows you to specify DNS servers, as listed in the linked page. This is the usual way I know of for a DNS resolver in a LAN to obtain its initial DNS servers, although I don't find anything about it in the DNS specification. You can give the Android device a DNS server via DHCP so that it does not need to query your device. Querying your device could then be a fallback.
In addition to DNS there is mDNS, which uses multicast in the network to send queries. That does not seem to be the protocol you are dealing with, because it uses the dedicated port 5353.
It is not possible to stop DNS in the way you intend. However, just for your tests, you can inspect the UDP messages and find out which names the device is looking for. Then you update the hosts file (google how to do it: http://www.howtogeek.com/140576/how-to-edit-the-hosts-file-on-android-and-block-web-sites/) and map those names to some loopback IP address. That might work for your test.
Another possibility is to change the DNS server to some loopback IP address: http://xslab.com/2013/08/how-to-change-dns-settings-on-android/
Again, this is only to avoid having all the DNS messages go through the wifi connection.
Reading about WebRTC, I get the feeling that it "will drop server bandwidth usage dramatically", except for "a few corner enterprise-firewall cases" where one needs a TURN server that relays the whole traffic between the peers.
For example, although not WebRTC-related, the idea is similar: the Wikipedia article on Chatroulette states: The website uses Adobe Flash to display video and access the user's webcam. Flash's peer-to-peer network capabilities (via RTMFP) allow almost all video and audio streams to travel directly between user computers, without using server bandwidth. However, certain combinations of routers will not allow UDP traffic to flow between them, and then it is necessary to fall back to RTMP.
Similarly, articles on WebRTC tend to say "yeah, there might be problems with firewalls, so you need a TURN server, but ignore this and look at my awesome PeerConnection JavaScript code".
What I don't understand:
A connection between two peers requires a listening ("server") socket that the other side can connect to; even UDP has the concept of a UDP server socket. Yet nearly all non-server, internet-connected peers are behind some kind of router: every smartphone uses a wifi router, desktop PCs sit behind the router of the service provider, and so on.
It shouldn't be possible to connect to a server socket hosted on a smartphone (a browser WebRTC "server socket") or a desktop, because of the router/firewall.
Thus my understanding is that practically no two peers that need to send their traffic across the internet will be able to use a direct P2P connection, right?
So the only useful case for WebRTC is a LAN-like environment, right?
Furthermore, a video chat service like Chatroulette based on WebRTC would need a bunch of TURN servers to relay nearly ALL traffic, which makes WebRTC just as costly in server bandwidth as hosting my own relaying solution.
So my question is: am I right? If not, what is the technical detail that allows a PeerConnection to be used without a TURN server between two nodes separated by the internet? How is the connection established on layer 4, the TCP/UDP transport layer? Is it using UDP, and do all wifi routers allow hosting UDP server sockets or some such? That wouldn't make much sense because of NAT and security.
UPDATE 1:
Digging a bit further, I found what "symmetric NAT" means and what it has to do with enterprises: in most enterprises, the device connected to the internet seems to implement symmetric NAT. This means that the table which maps internal "internal-ip:internal-port" tuples to "internet-ip:internet-port" tuples also stores "destination-ip:destination-port". So such NATs keep a row for every (TCP?) connection with six columns: "internal-ip:internal-port:internet-ip:internet-port:destination-ip:destination-port". This means no one but that exact destination is allowed to communicate with internal-ip:internal-port.
Non-enterprise routers, on the other hand, seem to store only the "internal-ip:internal-port:internet-ip:internet-port" combination. That is also what is meant by "poking a hole in the firewall".
You're not right. All peers have IP addresses in order to communicate, and can be reached on those same addresses, provided a firewall allows it.
NATs tend to be optimized for client-initiated client-server traffic only. That typically means they initially allow outbound traffic only, and only allow inbound traffic on the same line after outbound traffic has happened. Perfect for reaching servers. See this WebRTCHacks article for an intro to the problem.
This is where ICE comes in to attempt to poke holes in the firewall from the inside (client-side), in order to establish a line of communication directly between two peers, without needing any "server" socket, whatever that means.
How ICE works is quite complicated, and is explained in detail in the RFC.
But in broad terms it works in a number of steps:
Each peer (e.g. browser) has an "ICE agent" that collects candidates. Candidates are addresses (IP:port numbers) at which this peer can be reached, including:
Host candidates: e.g. immediate LAN/wifi/VPN IPs of the machine.
Server-reflexive candidates: public (outside-NAT) addresses of the machine, obtained by bouncing requests off mirroring (STUN) servers on the internet.
Relay candidates: addresses to a shared TURN server to forward data if all else fails.
Once discovered, candidates are inserted into the local SDP and trickled over the signaling channel to the other peer, where they are inserted into its remote description, so the other agent sees them.
Once an ICE agent has both local and remote candidates, it starts pairing local and remote candidates, and checks them for connectivity by sending STUN requests on them (effectively attempts at reaching the peer).
Successful pairs are ones both ICE agents have gotten a response back on (a 4-way handshake if you will).
If there's more than one successful pair, they're sorted by some metric, and the best pair becomes selected.
The selected pair is then used to send media over. One pair is needed for each track of (video or audio) media.
If a better pair is found later, the selected pair may change, affecting what address media is sent on.
TURN should only be needed either when both clients are behind symmetric NATs, or when UDP traffic is blocked entirely.
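As an aside, the "server-reflexive" step is easy to demystify with a raw STUN Binding Request (RFC 5389). The sketch below is Python; Google's public STUN server is used only as an example. It prints the public ip:port your NAT mapped the socket to, which is exactly the candidate a browser would gather:

import os, socket, struct

MAGIC = 0x2112A442                                   # STUN magic cookie
request = struct.pack("!HHI12s", 0x0001, 0, MAGIC, os.urandom(12))   # Binding Request, no attributes

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(request, ("stun.l.google.com", 19302))   # example public STUN server
data, _ = sock.recvfrom(2048)

pos = 20                                             # attributes start after the 20-byte header
while pos < len(data):
    attr_type, attr_len = struct.unpack_from("!HH", data, pos)
    if attr_type == 0x0020:                          # XOR-MAPPED-ADDRESS
        port = struct.unpack_from("!H", data, pos + 6)[0] ^ (MAGIC >> 16)
        ip_raw = struct.unpack_from("!I", data, pos + 8)[0] ^ MAGIC
        print("server-reflexive candidate:", socket.inet_ntoa(struct.pack("!I", ip_raw)), port)
        break
    pos += 4 + attr_len + (-attr_len % 4)            # attribute values are padded to 4 bytes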
Would it be possible to establish a TLS connection over TLS with OpenSSL or some other tool?
If possible, would the certificates for each level need to be different?
This should work just fine in theory, though I cannot say for sure whether OpenSSL or another tool would support it out of the box. You can technically use the same certificate for multiple TLS connections, even if one is nested inside another.
However, I want to point out that one common reason to nest TLS connections might be to tunnel data over a multi-layered encrypted connection, making some subset of the data available at each stop in the tunnel (i.e. peeling back a layer of the encryption). Using the same certificate doesn't really support that use case. Perhaps you've got another use case in mind.
Furthermore, it is cryptographically sound to encrypt encrypted data. That is, more encryption cannot make data less secure. Lastly, encrypting encrypted data alone will not make it more secure. That is, AES(AES(x,key1),key2) where key1 != key2 is not more (or less) secure than AES(x, key1). Just in case that was your motivation.
TLS doesn't care what data you're sending and receiving, so it could well be another TLS session (though I've no idea why you'd want to do that).
Since it's another, independent session, there's no reason you wouldn't be able to use the same certificate.
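If you want to try it programmatically, the awkward part is that the inner TLS engine has to read and write through the outer TLS socket rather than through a file descriptor. A hedged Python sketch of the client side (both hostnames are made up, and it assumes the outer server simply relays bytes to the inner TLS endpoint once its own handshake is done):

import socket, ssl

OUTER_HOST, OUTER_PORT = "outer.example.net", 443    # hypothetical outer TLS endpoint
INNER_HOST = "inner.example.net"                     # hypothetical inner TLS endpoint

outer_ctx = ssl.create_default_context()
inner_ctx = ssl.create_default_context()

# Outer layer: an ordinary TLS socket.
outer = outer_ctx.wrap_socket(socket.create_connection((OUTER_HOST, OUTER_PORT)),
                              server_hostname=OUTER_HOST)

# Inner layer: a TLS engine driven through memory BIOs; its records travel
# as application data over the outer TLS socket.
incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
inner = inner_ctx.wrap_bio(incoming, outgoing, server_hostname=INNER_HOST)

while True:                                          # drive the inner handshake by hand
    try:
        inner.do_handshake()
        break
    except ssl.SSLWantReadError:
        if outgoing.pending:
            outer.sendall(outgoing.read())           # ship the inner ClientHello etc.
        data = outer.recv(4096)
        if not data:
            raise ConnectionError("outer connection closed during inner handshake")
        incoming.write(data)                         # feed the inner server's reply back in
if outgoing.pending:
    outer.sendall(outgoing.read())                   # flush the final handshake records

inner.write(b"hello through two TLS layers\r\n")
outer.sendall(outgoing.read())                       # inner ciphertext, re-encrypted by the outer layer

Each handshake is verified independently, so whether the two layers present the same certificate or different ones is entirely up to the servers.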
The TLS RFC covers this situation, and the answer is yes; please refer to https://www.rfc-editor.org/rfc/rfc5246. I couldn't find which part mentions it, but I remember having read it.