Send file to microservice using TCP transport layer?

Is it possible to send a file to a microservice using the TCP transport layer? All I get in my microservice controller is:
{
  filename: 'Screenshot_1.png',
  mimetype: 'image/png',
  encoding: '7bit'
}
I tried different approaches, and all of them failed.
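For what it's worth, a common culprit is that the file's raw Buffer is lost when the payload is JSON-serialized by the TCP transport, leaving only the metadata fields shown above. One workaround is to base64-encode the buffer before sending and decode it on the other side. A minimal sketch of that round trip in plain Node (the NestJS ClientProxy/MessagePattern wiring is assumed and not shown; the helper names are made up):

```javascript
// Sketch: raw file bytes survive JSON serialization only if encoded first.
// The surrounding NestJS wiring (client.send, @MessagePattern) is assumed.
const payloadFor = (file) => ({
  filename: file.originalname,
  mimetype: file.mimetype,
  data: file.buffer.toString('base64'), // encode before sending
});

const fileFrom = (payload) => ({
  filename: payload.filename,
  mimetype: payload.mimetype,
  buffer: Buffer.from(payload.data, 'base64'), // decode in the microservice
});

// Round trip through JSON, as the TCP transport effectively does:
const original = {
  originalname: 'Screenshot_1.png',
  mimetype: 'image/png',
  buffer: Buffer.from([0x89, 0x50, 0x4e, 0x47]),
};
const received = fileFrom(JSON.parse(JSON.stringify(payloadFor(original))));
console.log(received.buffer.equals(original.buffer)); // true
```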

Related

Are there rules for gRPC port number?

I am currently developing an application which will expose both REST and gRPC endpoints.
I need to set a port for the gRPC server.
Are there any rules for the port number? Any special ranges other than the standard for REST services?
To my knowledge, no rules.
You'll see 50051 used as a gRPC default.
If you're multiplexing HTTP 1.x traffic on the same port as gRPC, you'll likely want to default to 80 (insecure) and 443 (secure) ports for the front-end service (often proxy) and 8080 and 8443 respectively for backend (proxied) services.
NOTE: Google defaults to 8080 (for proxied containers) on Google Cloud Platform (e.g. App Engine, Cloud Run), with an often-ignored (but important) requirement that the deployed service bind to the value of the PORT environment variable exported to the container environment (which defaults to 8080 but may not always be). Suffice to say, check your deployment platform's requirements and adhere to them.
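The PORT-binding requirement above amounts to resolving the injected value with a sensible fallback. A minimal sketch in plain Node (the 8080 fallback matches the note above; the helper name is illustrative):

```javascript
// Resolve the port the platform injects via PORT, falling back to 8080.
function resolvePort(env) {
  const p = parseInt(env.PORT ?? '8080', 10);
  if (!Number.isInteger(p) || p < 1 || p > 65535) {
    throw new Error(`invalid PORT: ${env.PORT}`);
  }
  return p;
}

console.log(resolvePort({}));               // 8080 (no PORT exported)
console.log(resolvePort({ PORT: '3000' })); // 3000 (platform override)

// In a real service you would bind exactly this value, e.g.:
//   server.listen(resolvePort(process.env));
```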
No, there are no rules for the port number. Just be careful to use the HTTP/2 protocol and SSL for gRPC (not mandatory, but highly recommended). All you need is to configure the Kestrel endpoint parameters in your appsettings.json. Set up endpoints for the Web API and gRPC under custom names as follows:
"Kestrel": {
"Endpoints": {
"Grpc": {
"Protocols": "Http2",
"Url": "https://localhost:5104"
},
"webApi": {
"Protocols": "Http1",
"Url": "https://localhost:5105"
}
}
}

Nginx ingress : Host based routing on TCP port

Goal: use the same TCP port (5672) for RabbitMQ and forward requests to different namespaces/RabbitMQ services based on host-based routing.
What works:
chart: nginx-git/ingress-nginx
version: 3.32.0
values:
- tcp:
    5672: "cust1namespace/rabbitmq:5672"
Block reflected in nginx.conf:
server {
  preread_by_lua_block {
    ngx.var.proxy_upstream_name="tcp-cust1namespace-services-rabbitmq-5672";
  }
  listen :5672;
  proxy_pass upstream_balancer;
}
Note: this will forward all requests coming in on port 5672 to cust1namespace/rabbitmq:5672, irrespective of the client domain name, whereas we want host-based routing on the domain name.
What is expected:
chart: nginx-git/ingress-nginx
version: 3.32.0
values:
- tcp:
    cust1domainname:5672: "cust1namespace/rabbitmq:5672"
    cust2domainname:5672: "cust2namespace/rabbitmq:5672"
Error:
Failed to render chart: exit status 1: Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(Service.spec.ports[3].port): invalid type for io.k8s.api.core.v1.ServicePort.port: got "string", expected "integer", ValidationError(Service.spec.ports[4].port): invalid type for io.k8s.api.core.v1.ServicePort.port: got "string", expected "integer"]
The final nginx.conf should look like:
server {
  preread_by_lua_block {
    ngx.var.proxy_upstream_name="tcp-cust1namespace-services-rabbitmq-5672";
  }
  listen cust1domainname:5672;
  proxy_pass upstream_balancer;
}
server {
  preread_by_lua_block {
    ngx.var.proxy_upstream_name="tcp-cust2namespace-services-rabbitmq-5672";
  }
  listen cust2domainname:5672;
  proxy_pass upstream_balancer;
}
A bit of theory
The approach you're trying to implement is not possible, due to how these network protocols are designed and the differences between them.
The TCP protocol works on the transport layer: it has source and destination IPs and ports, but carries no host information at all. The HTTP protocol, in turn, works on the application layer, which sits on top of TCP, and it does carry information about the host the request is intended for.
Please get familiar with the OSI model and the protocols that work on each of its layers. This will help avoid any confusion about why things work this way and not another.
There's also a good answer on Quora about the difference between the HTTP and TCP protocols.
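To make the point concrete, here is a small sketch in JavaScript that decodes the fixed fields of a raw TCP header: source and destination ports, sequence numbers, flags — but no field anywhere for a hostname. The bytes below are hand-crafted for illustration:

```javascript
// Decode the fixed part of a TCP header (RFC 793). Note what is absent:
// there is no hostname field at all -- only ports.
function decodeTcpHeader(buf) {
  return {
    srcPort: buf.readUInt16BE(0),
    dstPort: buf.readUInt16BE(2),
    seq: buf.readUInt32BE(4),
    ack: buf.readUInt32BE(8),
    dataOffsetWords: buf.readUInt8(12) >> 4,
  };
}

// A hand-crafted header: client port 49152 -> server port 5672 (AMQP).
const header = Buffer.alloc(20);
header.writeUInt16BE(49152, 0); // source port
header.writeUInt16BE(5672, 2);  // destination port
header.writeUInt32BE(1, 4);     // sequence number
header.writeUInt8(5 << 4, 12);  // data offset: 5 words = 20 bytes

console.log(decodeTcpHeader(header));
// { srcPort: 49152, dstPort: 5672, seq: 1, ack: 0, dataOffsetWords: 5 }
```

Anything host-based (an HTTP Host header, or a TLS SNI value) lives in the payload that application-layer software must parse, which is why a plain TCP listener cannot route on it.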
Answer
At this point you have two options:
Use ingress to work on the application layer and let it direct traffic to services based on the hosts presented in request headers. All traffic should go through the ingress endpoint (usually a load balancer exposed outside the cluster).
Please find examples with
two paths and services behind them
two different hosts and services behind them
Use ingress to work on the transport layer and expose a separate TCP port for each service/customer. In this case traffic will pass through ingress directly to the services.
Based on your example it will look like:
chart: nginx-git/ingress-nginx
version: 3.32.0
values:
- tcp:
    5672: "cust1namespace/rabbitmq:5672" # port 5672 for customer 1
    5673: "cust2namespace/rabbitmq:5672" # port 5673 for customer 2
...

Websocket interceptor

I'm looking for a way to wrap/intercept a websocket with a proxy that enriches the responses coming back from the websocket, as transparently as possible.
For example, let's say I have a websocket endpoint, wss://stream.abc.com:1234, and I want to enrich the incoming JSON messages.
A brute-force approach would be to create a new endpoint that, for every incoming connection, opens a websocket to the remote endpoint and proxies the communication, enriching the responses before sending them back to the client.
However, I wonder if there's a more standard library that can do this, maybe with nginx?
Thanks,
Z
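The brute-force proxy described in the question is in fact the common pattern; the enrichment step itself usually reduces to a small pure function applied to each JSON frame. A sketch of that piece in plain Node (the websocket wiring, e.g. via the `ws` package, is assumed and not shown; the enrichment field is made up):

```javascript
// Enrich a single JSON text frame coming back from the upstream socket.
// Non-JSON frames are passed through untouched.
function enrich(rawMessage) {
  let msg;
  try {
    msg = JSON.parse(rawMessage);
  } catch {
    return rawMessage; // binary/non-JSON frame: forward as-is
  }
  // Hypothetical enrichment: stamp the proxy's receive time.
  msg.proxyReceivedAt = 1234567890; // e.g. Date.now() in real code
  return JSON.stringify(msg);
}

// In the proxy, every upstream message would pass through enrich()
// before being relayed to the client, e.g.:
//   upstream.on('message', (m) => client.send(enrich(m.toString())));

console.log(enrich('{"price":42}'));
// {"price":42,"proxyReceivedAt":1234567890}
```

nginx alone cannot rewrite websocket frame bodies; it can proxy the upgrade (`proxy_pass` with `Upgrade`/`Connection` headers), but payload transformation needs application code like the above (or an OpenResty/Lua layer).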

How to send an HTTP request using a tun/tap interface

I am working on a network proxy project and am a newbie to this field. I want to create a TUN/TAP interface and send an HTTP request through it. Here is my approach:
use tun_tap::Iface;
use tun_tap::Mode;
use std::process::Command;

// Run a shell command and panic if it fails.
fn cmd(cmd: &str, args: &[&str]) {
    let ecode = Command::new(cmd)
        .args(args)
        .spawn()
        .unwrap()
        .wait()
        .unwrap();
    assert!(ecode.success(), "Failed to execute {}", cmd);
}

fn main() {
    let iface = Iface::new("tun1", Mode::Tun).unwrap();
    cmd("ip", &["addr", "add", "dev", "tun1", "192.168.0.54/24"]);
    cmd("ip", &["link", "set", "up", "dev", "tun1"]);
    // 192.168.0.53:8000 is my development server, created by the
    // `python3 -m http.server` command
    let sent = iface.send(b"GET http://192.168.0.53:8000/foo?bar=898 HTTP/1.1").unwrap();
}
But my development server is not receiving any request, and no error is displayed.
A TUN interface sends and receives IP packets. This means that the data you give to iface.send must be an IP packet in order to be delivered. Notice that your code never indicates which server you are connecting to, because at this layer connections "don't even exist". The IP in the HTTP request happens to be there because the HTTP protocol says so, but you must already be connected to the server by the time you send this information.
In order to send and receive data from a tun interface you'll have to build an IP packet.
Once you can send and receive IP packets, you'll have to implement the TCP protocol on top of that to be able to open a connection to an HTTP server. On this layer (TCP) is where the concept of "connection" appears.
Once you can open and send/receive data over a TCP connection, you'll have to implement the HTTP protocol to be able to talk to the HTTP server, i.e. "GET http://192.168.0.53:8000/foo?bar=898 HTTP/1.1".
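The first step above — building a raw IP packet — mostly means filling in the IPv4 header and computing its checksum. A sketch in JavaScript for brevity (the Rust program would do the moral equivalent before calling iface.send; field choices like TTL 64 are conventional, not required):

```javascript
// Build a minimal 20-byte IPv4 header (RFC 791), no options, no payload.
function ipv4Header(srcIp, dstIp, payloadLen, protocol) {
  const h = Buffer.alloc(20);
  h.writeUInt8(0x45, 0);                 // version 4, IHL 5 (20 bytes)
  h.writeUInt16BE(20 + payloadLen, 2);   // total length
  h.writeUInt8(64, 8);                   // TTL
  h.writeUInt8(protocol, 9);             // 6 = TCP
  srcIp.split('.').forEach((o, i) => h.writeUInt8(Number(o), 12 + i));
  dstIp.split('.').forEach((o, i) => h.writeUInt8(Number(o), 16 + i));
  h.writeUInt16BE(checksum(h), 10);      // header checksum, computed last
  return h;
}

// One's-complement sum of 16-bit words, then complemented (RFC 1071).
function checksum(buf) {
  let sum = 0;
  for (let i = 0; i < buf.length; i += 2) sum += buf.readUInt16BE(i);
  while (sum > 0xffff) sum = (sum & 0xffff) + (sum >> 16);
  return (~sum) & 0xffff;
}

const h = ipv4Header('192.168.0.54', '192.168.0.53', 0, 6);
// Sanity check: re-summing a header that includes its checksum gives 0xffff.
let verify = 0;
for (let i = 0; i < 20; i += 2) verify += h.readUInt16BE(i);
while (verify > 0xffff) verify = (verify & 0xffff) + (verify >> 16);
console.log(verify === 0xffff); // true
```

The TCP layer on top adds its own header (with the SYN/ACK handshake and sequence tracking) and its own checksum over a pseudo-header, which is where most of the real work lies.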

Is different port used for every request from same client?

I have a web client which has code such as:
for (i = 0; i < 10; i++) {
  $.ajax({
    url: "url",
    type: "GET/POST",
    data: {
      ...
    }
  }).done(function (data) {
    ...
  });
}
So I'm making 10 requests to the same server URL (a Java servlet with doGet/doPost methods).
In this case, will 10 different ports be used on the server side for the 10 different requests, or will those requests share the same server port?
Assuming that these requests are made successively, this will result in 10 connections from the client to the server.
The client port will be different for each connection, chosen from the ephemeral range (somewhere between 1025 and 65535, depending on the OS). The server port will be the same; port 80 or 443, for example.
The client IP, client port, server IP, and server port make up the four-tuple that keys the connection, allowing the server to distinguish one connection from another. Of course, over TCP, the sequence number is also involved in keying the communication, but the IP/port four-tuple is the primary distinguishing factor for the TCP/IP stack.
