You have two applications that need to exchange information with each other on a local area network. The first application uses TCP for communication while the second uses UDP. Can we link both applications directly? If your answer is no, explain how we can link them.
(from a homework assignment)
I think the answer is no, we need to use some translator or middleware between them. But what?
As you figured out, you can't simply combine the two types of connections into one.
TCP is a stateful, connection-oriented protocol: two computers must establish a connection before any data flows.
UDP, by contrast, is stateless and connectionless: a single computer can just transmit, send-and-forget style.
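A minimal sketch of that difference in Python (the addresses and ports are just placeholders):

```python
import socket

# UDP: connectionless "send and forget" -- one side can transmit immediately.
u = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
u.sendto(b"hello", ("127.0.0.1", 9001))       # no handshake needed

# TCP: stateful -- the three-way handshake with a listening peer must succeed first.
t = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
t.connect(("127.0.0.1", 9000))                # fails if nothing is listening there
t.sendall(b"hello")
```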
If you want them to communicate with each other, you need a piece of middleware between them.
Say the TCP application has a TCP client and a TCP server.
The middleware should then run a TCP server that listens for and accepts connections from the TCP application's client, and a TCP client that establishes a connection to the TCP application's server.
Now the middleware can fully communicate with the TCP application.
To do the same with the UDP application, the middleware listens on a known UDP port for incoming data from the UDP application, and sends data back over UDP to the port the UDP application listens on.
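As a rough sketch of such a middleware in Python (the addresses and ports are assumptions; a real bridge would also need to frame messages, since TCP is a byte stream while UDP preserves datagram boundaries):

```python
import socket
import threading

# Hypothetical addresses; adjust to your setup.
TCP_LISTEN = ("0.0.0.0", 9000)      # where the TCP application connects to us
UDP_PEER   = ("127.0.0.1", 9001)    # where the UDP application listens
UDP_LOCAL  = ("0.0.0.0", 9002)      # where we receive the UDP application's datagrams

def bridge():
    # TCP side: act as the server the TCP application connects to.
    tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp_srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    tcp_srv.bind(TCP_LISTEN)
    tcp_srv.listen(1)
    tcp_conn, _ = tcp_srv.accept()

    # UDP side: one socket to both send to and receive from the UDP application.
    udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp_sock.bind(UDP_LOCAL)

    def tcp_to_udp():
        while True:
            data = tcp_conn.recv(4096)
            if not data:
                break
            udp_sock.sendto(data, UDP_PEER)   # forward TCP bytes as a datagram

    def udp_to_tcp():
        while True:
            data, _ = udp_sock.recvfrom(4096)
            tcp_conn.sendall(data)            # forward the datagram over the TCP stream

    threading.Thread(target=tcp_to_udp, daemon=True).start()
    udp_to_tcp()

if __name__ == "__main__":
    bridge()
```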
I am trying to wrap my head around the SSL tunneling process that an HTTP proxy performs after receiving the CONNECT method from a client.
Things I can't seem to find or understand in docs, blogs, and RFCs:
1) When setting up the tunnel, are the client-proxy and proxy-destination connections two separate connections or one and the same? E.g. is there a TCP handshake between client and proxy and another between proxy and destination?
2) When starting the SSL handshake, which node (IP address/hostname) does the client target: the proxy or the destination host? Since SSL requires a point-to-point connection to make the authentication work, my feeling tells me it should be the destination host. But then again that wouldn't make sense, since the destination host isn't (directly) accessible from the client's perspective (hence the proxy).
When setting up the tunnel, are the client-proxy and proxy-destination connections two separate connections or one and the same? E.g. is there a TCP handshake between client and proxy and another between proxy and destination?
Since the client makes its TCP connection to the proxy, the only option is for the proxy to make a second, separate TCP connection to the server. There is no way to re-point an existing TCP connection at a different IP:port.
When starting the SSL handshake, which node (IP address/hostname) does the client target: the proxy or the destination host?
The SSL handshake is done with the destination host, not the proxy.
Since SSL requires a point-to-point connection to make the authentication work
It doesn't need a point-to-point connection. It only requires that all data is exchanged unmodified between client and server, which is the case when the proxy simply forwards the data.
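A minimal Python sketch of both points (the proxy and destination names are placeholders): the client opens one TCP connection to the proxy, issues CONNECT, and only then runs the TLS handshake end-to-end with the destination host while the proxy just copies bytes.

```python
import socket, ssl

# Hypothetical proxy and destination; adjust to your environment.
PROXY = ("proxy.example.com", 3128)
DEST  = ("example.org", 443)

# First TCP connection: client -> proxy.
raw = socket.create_connection(PROXY)

# Ask the proxy to open its own, separate TCP connection to the destination.
raw.sendall(f"CONNECT {DEST[0]}:{DEST[1]} HTTP/1.1\r\nHost: {DEST[0]}:{DEST[1]}\r\n\r\n".encode())
reply = raw.recv(4096)
assert b" 200 " in reply.split(b"\r\n")[0], "proxy refused the tunnel"

# The TLS handshake targets the destination host, not the proxy;
# from here on the proxy only forwards opaque bytes in both directions.
ctx = ssl.create_default_context()
tls = ctx.wrap_socket(raw, server_hostname=DEST[0])
tls.sendall(b"GET / HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n")
print(tls.recv(4096).decode(errors="replace"))
```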
I've built a local multiplayer game (multiplayer over a WLAN). Now I want to add an online multiplayer feature.
Currently, the network communication consists mostly of "signals" (TCP/UDP packets sent from the game-host peer to the game-client peers). I would like to keep this mostly signal-based communication for the online multiplayer too, because of its performance and efficiency. But since the host peer is now replaced by a server, there will be a lot of problems with sending signals (NAT, firewall, ...).
So is there a good solution for implementing these signals?
Regards
there will be a lot of problems with sending signals (NAT, firewall, ...)
What problems exactly?
Normally, the clients establish a TCP connection to the server and the server uses this TCP connection to communicate with the clients.
For UDP-based communication, the clients can use the Internet Gateway Device Protocol (UPnP IGD) to forward ports on the router, so that the server can send UDP datagrams to the clients.
Assuming your server is on the public internet and not behind any NAT, all the clients must initiate the connection; otherwise the server doesn't know the clients' addresses and can't reach them. Since the server is not behind NAT, it will accept connections from clients, and each client must keep its connection alive. Then, whenever the server needs to send some data, there should be no problem.
This will work for both UDP and TCP.
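A minimal sketch of that client side in Python (the server name, port, and message format are made up for illustration): the client dials out, which creates the NAT/firewall state, and the server can then push its "signals" back over the same connection.

```python
import socket

# Hypothetical public server address.
SERVER = ("game.example.com", 7777)

# The client initiates the connection; the NAT mapping created by this outbound
# connection is what lets the server send data back later.
sock = socket.create_connection(SERVER)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)  # keep the mapping alive

sock.sendall(b"JOIN lobby-1\n")        # client-initiated message
while True:
    signal = sock.recv(4096)           # server-pushed signals arrive on the same socket
    if not signal:
        break
    print("signal from server:", signal)
```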
I am looking at the traffic generated by my computer when a SOCKS server is configured.
I read on the internet that it's possible to route UDP through the proxy server as well.
When I try different apps that use UDP and allow SOCKS settings, they use the proxy only for TCP traffic. Why?
I have configured SOCKS5, as I understand that v4 doesn't support UDP (why?).
I tried an example with the Vuze client: its expert mode allows preferring UDP traffic and setting a SOCKS server, yet even then any UDP goes directly to peers.
I want to monitor the traffic and see how it is transmitted: does it go over UDP to the SOCKS server, or does the client actually connect to the SOCKS server over TCP and send the data there, which is then forwarded via UDP to the destination?
When a client wants to relay UDP traffic through a SOCKS5 proxy, it makes a UDP ASSOCIATE request over the TCP control connection. The SOCKS5 server then returns an available UDP port to which the client can send UDP packets.
The client then sends the UDP packets that need relaying to that port on the SOCKS5 server. The SOCKS5 server forwards these packets to the remote server and relays the packets coming back from the remote server to the client.
When the client wants to terminate the connection, it sends a FIN over the TCP connection. The SOCKS5 server then tears down the UDP relay created for that client and closes the TCP connection.
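For illustration, here is a rough Python sketch of that flow per RFC 1928 (the proxy and target addresses are placeholders, no authentication, IPv4 only; some servers return 0.0.0.0 as the bound address, in which case the proxy's own IP is used instead):

```python
import socket, struct

PROXY  = ("127.0.0.1", 1080)          # hypothetical SOCKS5 server
TARGET = ("203.0.113.10", 5300)       # remote UDP peer we want to reach via the proxy

# 1. TCP control connection + greeting (method 0x00 = no authentication).
ctrl = socket.create_connection(PROXY)
ctrl.sendall(b"\x05\x01\x00")
assert ctrl.recv(2) == b"\x05\x00"

# 2. UDP ASSOCIATE request (CMD=0x03); 0.0.0.0:0 means "I'll send from any addr/port".
ctrl.sendall(b"\x05\x03\x00\x01" + socket.inet_aton("0.0.0.0") + struct.pack(">H", 0))
reply = ctrl.recv(10)                 # VER REP RSV ATYP BND.ADDR BND.PORT
relay_addr = socket.inet_ntoa(reply[4:8])
relay_port = struct.unpack(">H", reply[8:10])[0]

# 3. Datagrams go to the relay port, prefixed with the SOCKS5 UDP header:
#    RSV(2) FRAG(1) ATYP(1) DST.ADDR DST.PORT, followed by the payload.
header = b"\x00\x00\x00\x01" + socket.inet_aton(TARGET[0]) + struct.pack(">H", TARGET[1])
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(header + b"hello via socks5", (relay_addr, relay_port))

# Replies from the remote peer come back from the relay with the same header format.
data, _ = udp.recvfrom(4096)
print("payload:", data[10:])          # strip the 10-byte IPv4 UDP header

# Closing the TCP control connection tears down the UDP association.
ctrl.close()
```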
Double SSH Tunnel Manager supports SOCKS5 with UDP.
The 3proxy server supports UDP.
I've read about WebSockets and I wonder why a browser couldn't simply open a trivial TCP connection and communicate with the server like any other desktop application. And why is this communication possible via WebSockets?
It's easier to communicate via TCP sockets when you're working within an intranet boundary, since you likely have control over the machines on that network and can open ports suitable for making the TCP connections.
Over the internet, you're communicating with someone else's server on the other end. They are extremely unlikely to have any old socket open for connections. Usually they will have only a few standard ones such as port 80 for HTTP or 443 for HTTPS. So, to communicate with the server you are obliged to connect using one of those ports.
Given that these are standard ports for web servers that generally speak HTTP, you're therefore obliged to conform to the HTTP protocol, otherwise the server won't talk to you. The purpose of web sockets is to allow you to initiate a connection via HTTP, but then negotiate to use the web sockets protocol (assuming the server is capable of doing so) to allow a more "TCP socket"-like communication stream.
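A rough sketch of that opening handshake in Python over a raw socket (the host and path are placeholders; a real client would also validate the Sec-WebSocket-Accept header and then speak the WebSocket framing protocol):

```python
import base64, os, socket

HOST, PORT = "echo.example.com", 80           # hypothetical WebSocket server
key = base64.b64encode(os.urandom(16)).decode()

sock = socket.create_connection((HOST, PORT))
# The request starts life as ordinary HTTP on port 80...
sock.sendall((
    "GET /chat HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Upgrade: websocket\r\n"
    "Connection: Upgrade\r\n"
    f"Sec-WebSocket-Key: {key}\r\n"
    "Sec-WebSocket-Version: 13\r\n\r\n"
).encode())

# ...and if the server answers "101 Switching Protocols", both sides stop speaking
# HTTP and exchange WebSocket frames over the same TCP connection.
print(sock.recv(4096).decode(errors="replace"))
```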
Web browsers operate at the Application layer, whereas TCP operates at the Transport Layer. As a web application developer, it's easier to send messages over the wire via the Application Layer instead of raw bytes at the Transport Layer.
Underlying WebSockets is TCP, it's just abstracted away for simplicity.
WebSocket is an application-layer protocol, while TCP is a transport-layer protocol. At the transport layer we usually have TCP and UDP. Any message from the application layer needs to go through the transport layer to be transmitted to another machine. Hence WebSocket and TCP are related to each other rather than directly comparable.
To put it simply, WebSocket communication is done over TCP port 80 (or 443 in the case of TLS-encrypted connections), which is a benefit in environments that block non-web Internet connections with a firewall.
Would you rather use an existing TCP port, or open a new TCP port that might be blocked by a firewall?
The socket API is the de facto standard for TCP/IP and UDP/IP communications (that is, networking code as we know it). However, one of its core functions, accept(), is a bit magical.
To borrow a semi-formal definition:
accept() is used on the server side. It accepts a received incoming attempt to create a new TCP connection from the remote client, and creates a new socket associated with the socket address pair of this connection.
In other words, accept returns a new socket through which the server can communicate with the newly connected client. The old socket (on which accept was called) stays open, on the same port, listening for new connections.
How does accept work? How is it implemented? There's a lot of confusion on this topic. Many people claim accept opens a new port and you communicate with the client through it. But this obviously isn't true, as no new port is opened. You actually can communicate through the same port with different clients, but how? When several threads call recv on the same port, how does the data know where to go?
I guess it's something along the lines of the client's address being associated with a socket descriptor, and whenever data comes through recv it's routed to the correct socket, but I'm not sure.
It'd be great to get a thorough explanation of the inner-workings of this mechanism.
Your confusion lies in thinking that a socket is identified by Server IP : Server Port, when in actuality sockets are uniquely identified by a quartet of information:
Client IP : Client Port and Server IP : Server Port
So while the Server IP and Server Port are constant in all accepted connections, the client side information is what allows it to keep track of where everything is going.
Example to clarify things:
Say we have a server at 192.168.1.1:80 and two clients, 10.0.0.1 and 10.0.0.2.
10.0.0.1 opens a connection on local port 1234 and connects to the server. Now the server has one socket identified as follows:
10.0.0.1:1234 - 192.168.1.1:80
Now 10.0.0.2 opens a connection on local port 5678 and connects to the server. Now the server has two sockets identified as follows:
10.0.0.1:1234 - 192.168.1.1:80
10.0.0.2:5678 - 192.168.1.1:80
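A small Python sketch of this (the port is arbitrary): the listening socket never changes, but every accept() returns a fresh socket tied to a distinct client IP:port, which is how data for different clients ends up in the right place.

```python
import socket, threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 8080))
srv.listen()

def handle(conn, peer):
    # peer is the (client IP, client port) half of the 4-tuple; the kernel uses
    # the full 4-tuple to route incoming segments to this particular socket.
    print("talking to", peer, "on local", conn.getsockname())
    conn.sendall(b"hello\n")
    conn.close()

while True:
    conn, peer = srv.accept()   # new socket per client; srv keeps listening on 8080
    threading.Thread(target=handle, args=(conn, peer), daemon=True).start()
```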
Just to add to the answer given by user "17 of 26"
The socket actually consists of a 5-tuple: (source IP, source port, destination IP, destination port, protocol). Here the protocol could be TCP or UDP or any other transport-layer protocol. The protocol is identified by the 'protocol' field in the IP datagram.
Thus it is possible to have two different applications on the server communicating with the same client on exactly the same 4-tuple, differing only in the protocol field. For example:
Apache at server side talking on (server1.com:880-client1:1234 on TCP)
and
World of Warcraft talking on (server1.com:880-client1:1234 on UDP)
Both the client and the server will handle this, because the protocol field in the IP packet differs in the two cases even though all the other four fields are the same.
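A quick Python illustration of that point (the port number is arbitrary): the same local port can be bound by a TCP socket and a UDP socket at the same time, because the protocol keeps the two namespaces apart.

```python
import socket

tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

tcp.bind(("0.0.0.0", 8800))
udp.bind(("0.0.0.0", 8800))   # no conflict: different protocol, separate 5-tuple space
tcp.listen()
print("TCP and UDP are both bound to port 8800")
```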
What confused me when I was learning this, was that the terms socket and port suggest that they are something physical, when in fact they're just data structures the kernel uses to abstract the details of networking.
As such, the data structures are implemented to be able to distinguish connections from different clients. As to how they're implemented, the answer is either a.) it doesn't matter, the purpose of the sockets API is precisely that the implementation shouldn't matter or b.) just have a look. Apart from the highly recommended Stevens books providing a detailed description of one implementation, check out the source in Linux or Solaris or one of the BSD's.
As the other guy said, a socket is uniquely identified by a 4-tuple (Client IP, Client Port, Server IP, Server Port).
The server process running on the Server IP maintains a database (meaning I don't care what kind of table/list/tree/array/magic data structure it uses) of active sockets and listens on the Server Port. When it receives a message (via the server's TCP/IP stack), it checks the Client IP and Port against the database. If the Client IP and Client Port are found in a database entry, the message is handed off to an existing handler, else a new database entry is created and a new handler spawned to handle that socket.
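For TCP this lookup happens inside the kernel's TCP/IP stack, but you can see the same idea in userland with a single UDP socket serving many clients; a toy Python sketch (the port and reply format are made up):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))

handlers = {}   # "database" of active peers: (client IP, client port) -> per-client state

while True:
    data, peer = sock.recvfrom(4096)
    if peer not in handlers:
        handlers[peer] = {"messages": 0}      # new entry the first time we see this peer
    handlers[peer]["messages"] += 1           # existing entry: hand off to its state
    sock.sendto(b"seen %d msgs" % handlers[peer]["messages"], peer)
```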
In the early days of the ARPAnet, certain protocols (FTP for one) would listen to a specified port for connection requests, and reply with a handoff port. Further communications for that connection would go over the handoff port. This was done to improve per-packet performance: computers were several orders of magnitude slower in those days.