I am trying to implement an attack similar to a normal SYN flood attack in my network created on UnetStack. As UnetSocket() requires parameters such as the API port and host name to create a socket, providing a fake or random API port results in an error. So we have to create a node with that API port just to create the socket. Creating so many nodes simply to exhaust the server is not feasible from the attacker's point of view. Is there any way I can create the sockets more feasibly? I would appreciate guidance from experts in this domain. (If anything mentioned above is wrong, please correct me.)
Thank you
Should information such as metrics generated from an application, which are devoid of any business information, still be subject to encryption/decryption over HTTPS when transmitted within the ecosystem of an organization that sits behind firewalls?
The reason I am asking is this: the metrics data does not give away any business information, it is already behind a firewall, and it is tremendous in size (time-series data on the order of millions of records per second). Does it then make sense to reduce the computational complexity of HTTPS, which forces encryption/decryption at every hop of the metrics' journey from source to destination, by redirecting the metrics data with an ingress policy that routes the packets via another port such as 8080 to skip encryption/decryption, saving us BIG on resource utilization and, of course, time?
Or is it a known compromise that can in some way turn into a vulnerability and lead to breaches in the system?
Context:
The applications being monitored are communicating over HTTPS.
The metrics scraping agents are asked to communicate over HTTP.
An ingress policy applied on the application node recognizes the calls from the known metrics scraping agent and routes the packets via a non-HTTPS port such as 8080, in order to skip certificate validation and, mainly, the decryption of the metrics payload in the incoming request.
I am looking for suggestions and inputs, especially from someone who has had to solve this problem in their own experience. Anybody else with relevant information is more than welcome to add to it.
Any leads appreciated.
Thank you, in advance.
"the metrics data does not give away any business information"
I think this is not true. Metrics can also record traffic patterns in a business context (e.g. what users searched for or bought the most).
Also, metrics can accidentally contain sensitive information (they should not, but accidents do happen). Additionally, they can help attackers gather more data about:
Your infrastructure (what platforms you use)
Your environment (OS, Java version, etc.)
Your app topology (who calls who)
Please check the Fallacies of distributed computing:
#4 The network is secure
Being behind a firewall does not mean attackers can't get in; that is one of the reasons to use HTTPS on the internal network as well.
I'm trying out gRPC as a solution to a problem, and I'm wondering if it is possible to do the following; I have not found anything regarding this.
I'll illustrate the problem.
Let's say I have a distributed server A consisting of 100 nodes.
Those nodes are going to call another server B, which is a single node, using gRPC calls. Each node has a pool of connections open to server B.
The goal is to make sure server B receives all requests in order. Since latency can vary, if two nodes of server A send requests at almost the same time, could those requests be received in the wrong order?
Would appreciate any help. Thanks.
Inter-stream ordering is not guaranteed at the HTTP/2 layer. Your best bet is to have the client generate a sequence number (which is itself a very hard problem), so that you can handle requests in order on the server side (B); see the sketch below.
Note that determining the order of the requests themselves is very hard unless you have an atomic clock.
A requirement like this usually means you are trying to solve the actual problem the wrong way. With some legacy code this can be unavoidable, but it is better to fix the underlying issue than to patch it in a hacky way or attempt something impossible.
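To make the sequence-number idea concrete, here is a minimal sketch of a server-side reordering buffer in Erlang (the technique is language-agnostic and not tied to any gRPC library; the module name, the {seq, N, Payload} message shape, and the assumption of a single gapless counter are all illustrative):

```erlang
%% Sketch only: assumes every request arrives as {seq, N, Payload} with a
%% gapless, client-generated sequence number N. Out-of-order requests are
%% parked in a map until their turn comes.
-module(reorder).
-export([start/0]).

start() ->
    loop(1, #{}).                                 % next expected seq, buffer

loop(Expected, Buffer) ->
    receive
        {seq, Expected, Payload} ->               % the one we were waiting for
            handle(Payload),
            drain(Expected + 1, Buffer);
        {seq, N, Payload} when N > Expected ->    % arrived early: park it
            loop(Expected, Buffer#{N => Payload});
        {seq, N, _Payload} when N < Expected ->   % duplicate: drop it
            loop(Expected, Buffer)
    end.

%% Deliver any parked requests that are now in order.
drain(Expected, Buffer) ->
    case maps:take(Expected, Buffer) of
        {Payload, Rest} -> handle(Payload), drain(Expected + 1, Rest);
        error           -> loop(Expected, Buffer)
    end.

handle(Payload) ->
    io:format("processing ~p~n", [Payload]).
```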
I want to create a P2P application on the internet. What is the best way, or failing that a good enough way, to do auto-discovery of other nodes in a decentralized network?
Grothoff and GauthierDickey from the GNUnet project (an anonymous, censorship-resistant file-sharing network) researched the question of bootstrapping a P2P network without any central hostlist.
They found that for the Gnutella (LimeWire) network, a random IP search needed on average 2,500 connection attempts to find a peer.
In the paper they proposed a method which reduced the required connection attempts to 817 for Gnutella and 51 for the eD2k network.
This was achieved by creating a statistical profile of P2P users for every DNS organization; this small (around 100 KB) discovery database has to be created in advance and shipped with the P2P client.
This is the holy grail of P2P. There isn't really a magic solution: there is no way a node can discover other nodes without a well-known point to act as a reference (well, you can do so on a LAN by using broadcasting, but not on the internet). P2P file sharing tends to work by having known websites distribute 'start points' for discovery, and further discovery (I would expect) can come from asking nodes what other nodes they know about.
A good place to start on research would be Distributed Hash Tables.
As for security, that topic will be covered in the literature somewhere, I should think; again I would recommend Wikipedia. Non-existent nodes are trivially dealt with: if you can't contact an IP/port, don't keep it on your list, and if a node regularly provides pointers to non-existent nodes, consider de-prioritising it or removing it from your list entirely.
For evil nodes, it depends on your use case, but let's say you are doing file sharing. If you request a section of a file, check with several nodes what the hash of that section should be, and then request it by hash. If the evil node gives you a chunk with a different hash, you can again de-prioritise or forget that node.
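As a minimal sketch of that chunk check (in Erlang; the module name is illustrative, and crypto:hash/2 is standard OTP):

```erlang
-module(chunk_check).
-export([verify/2]).

%% ExpectedHash is the SHA-256 that the majority of the peers you asked
%% reported for this chunk. A mismatch means the sending node is suspect.
verify(Chunk, ExpectedHash) ->
    case crypto:hash(sha256, Chunk) of
        ExpectedHash -> {ok, Chunk};
        _Other       -> {error, bad_chunk}   % de-prioritise or drop the peer
    end.
```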
Distributed processing systems work a little differently: they tend to ask several unrelated nodes to perform the same work, and then they use a voting system (probably using hashing again) to determine whether evilness is at hand. If a node provides consistently bad results, the administrator is contacted or the IP is removed from the known nodes list.
OK, for two peers to find each other, they both have to know a common mediator, let's say, to exchange IPs once. You can use anything for this kind of first handshake as long as you are able to WRITE to and READ from that "channel", i.e. DNS (your well-known domains), e-mail, IRC, Twitter, Facebook, Dropbox, etc.
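For the DNS variant, a hedged sketch in Erlang: publish your bootstrap peers as TXT records under a domain you control and read them back with the standard resolver (peers.example.com and the "ip:port" record format are purely illustrative):

```erlang
-module(bootstrap).
-export([peers/0]).

%% Each TXT record under the (illustrative) domain holds one "ip:port"
%% string; inet_res:lookup/3 returns TXT data as lists of strings.
peers() ->
    [Peer || [Peer] <- inet_res:lookup("peers.example.com", in, txt)].
```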
I have a setup in which two nodes are going to communicate a lot. On Node A there will be thousands of processes, which are meant to access services on Node B. There is going to be a massive load of requests and responses across the two nodes. The two nodes will run on two different servers, each on its own hardware.
I have 3 options: HTTP/1.1, rpc:call/4, and directly sending a message to a registered gen_server on Node B. Let me explain each option.
HTTP/1.1 Suppose that on Node A I have an HTTP client like ibrowse, and on Node B a web server like Yaws 1.95, with the web server able to handle unlimited connections and the operating system settings tweaked to allow it. I then make my processes on Node A communicate using HTTP; each method call would mean a single HTTP request and a reply. I believe there is an overhead here, but we are evaluating options. The Erlang built-in mechanism called webtool may be built for this kind of purpose.
rpc:call/4 I could simply make direct rpc calls from Node A to Node B. I am not very sure how the underlying rpc mechanism works, but I think that when two Erlang nodes connect via net_adm:ping/1, the connection created is not closed; instead, all rpc calls use this pipe to transmit requests and pass back responses. Please correct me on this one.
Sending a message from Node A to Node B I could make my processes on Node A simply send messages to a registered process, or a group of processes, on Node B. This too seems a clean option.
Q1. Which of the above options would you recommend, and why, for an application in which two Erlang nodes will have enormous communication between them all the time? Imagine a messaging system in which the two Erlang nodes are the routers :)
Q2. Which of the above methods is cleaner, less problematic and more fault tolerant (meaning the method should have no single point of failure that could leave all processes on Node A blind)?
Q3. The mechanism of your choice: how would you make it even more fault tolerant or redundant?
Assumptions: the nodes are always alive and will never go down, the network connection between the nodes will always be available and uncongested (dedicated to the two nodes only), and the operating system has allocated maximum resources to these two nodes.
Thank you for your evaluations.
HTTP is definitely out. Just the round-trip overhead of creating a new connection is a problem.
As for Erlang connections and using Pids, you have the advantage that you can subscribe to node-down messages and handle the case where a node goes down. A single TCP connection should be able to give you very fast speeds; however, be aware that it works like one long pipe: messages are muxed and demuxed onto the same pipe, which can affect latency on the line. It also means that large messages will block small messages from getting through.
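A minimal sketch of that subscription, assuming a registered process service_b on a node named 'b@serverB' (both names are illustrative):

```erlang
-module(b_client).
-export([start/0]).

%% Watch the peer node, then talk to a registered process on it.
%% {nodedown, Node} is delivered to our mailbox if the node dies.
start() ->
    Node = 'b@serverB',
    erlang:monitor_node(Node, true),
    {service_b, Node} ! {request, self(), do_work},
    await(Node).

await(Node) ->
    receive
        {reply, Result} ->
            io:format("got ~p~n", [Result]);
        {nodedown, Node} ->
            io:format("Node B went down, failing over~n")
    end.
```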
How much bandwidth are you aiming for, and at what latency? What are the 95th and 99th percentiles for answering messages? It is better to put up some rough numbers and then try to target those than to aim for "as fast as possible". Set your success criteria first.
Q1: HTTP will add additional overhead and, in my opinion, give you nothing. HTTP would be useful if you were designing a REST API. Directly sending messages and rpc:call look about the same as far as overhead is concerned.
Q2: Sending messages is much, much cleaner. It is the way Erlang is designed. With RPC calls you must always track which call is executed where and under which circumstances, which can be a huge issue if the two servers have state. Also, RPC calls are synchronous.
Q3: I would use UBF if I could afford the minor overhead; otherwise I would send messages directly between the Erlang nodes. If bandwidth is an issue, other trickery would be needed as well, such as encoding the messages in some way and then using a compression algorithm to reduce their size; alternatively, I might ditch Erlang message passing altogether and use UDP sockets.
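As a sketch of the encoding-plus-compression idea (the module name is illustrative; term_to_binary/2 with the compressed option applies zlib to the external term format, and socket setup/framing is elided):

```erlang
-module(wire).
-export([send_term/2, decode/1]).

%% Encode an arbitrary term, compress it, and ship it over a raw TCP
%% socket, avoiding distribution overhead entirely.
send_term(Socket, Term) ->
    gen_tcp:send(Socket, term_to_binary(Term, [compressed])).

%% Decode on the receiving side (Bin is one complete message).
decode(Bin) ->
    binary_to_term(Bin).
```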
It is not obvious that ! is the best way to go. It is definitely the easiest, and the resulting code will be the most elegant.
In terms of scalability, take into consideration that to use rpc/! you have to maintain an Erlang cluster. I found it painful with just 10-20 nodes, even in a private cloud. I would never recommend bigger deployments on e.g. EC2, where I/O, latency and the network are not deterministic.
I recommend structuring the project in a way that will let you exchange the communication engine in the future. Also, HTTP is pretty heavy, but there are options:
socket-socket (tcp/udp/sctp)
amqp (many benefits connected to load balancing)
zeromq (even nicer than amqp)
Betting on !/rpc and an OTP cluster is risky. You will fight with full-mesh overhead, master election algorithms, and quorum/partition detection.
I am building a service which requires me to dynamically launch and shut down servers at many locations around the world (for example using AWS). When a user visits my domain, they need to be assigned to a local server with the lowest latency.
By assignment, I mean that when, for example, the client makes an AJAX call to example.com/getData, it should go directly to the particular server it has been assigned to. Different servers will be doing different computation, so some kind of general load balancing is not sufficient.
What general mechanisms/technologies would allow me to 1) assess the latency between a particular client and any server under my control, and 2) assign a particular client to a particular server? I cannot just use IP addresses, for example, since JavaScript has domain-name-based restrictions.
Thanks
Note: I do not have enough reputation to link all the technologies in the response, therefore sometimes you will see the links copied in plain text.
1) Assigning users to the local server with the lowest latency is not always possible.
Sometimes the geographically closest server to a user is unexpectedly the one with the highest latency.
Finding the lowest latency between your (running) servers and the users is not an easy task.
There might be many different hops (routers) between the client and the server, and any of them can, at any time, have problems: route updates, packet congestion, and so on.
The quickest way to assess the latency is a ping, but firewalls may block it.
So the best way to achieve this is to use anycast.
All the major CDN providers implement this method. Some use TCP anycast, which seems not to be recommended, and others UDP anycast. It is an open debate.
Anyway, in order to implement anycast you need to be able to peer with the ISP routers, and normally this is not possible. Additionally, there are good peers and bad peers.
Finally, all this requires a deep knowledge of routing protocols and the TCP/IP stack.
A quick and dirty solution could be to use BIND with the GEO-IP patch.
This lets you define specific DNS query responses per country.
What I mean is that if, for instance, you have a server in the UK and one in the US, you can configure BIND to answer users coming from Europe with the UK server and users coming from the US with the US server.
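As a hedged sketch of what that could look like (note that recent BIND releases built with GeoIP support accept geoip ACLs natively, so the old patch may not even be needed; all ACL names, zone files and countries below are illustrative):

```
acl "us-clients" { geoip country US; };
acl "gb-clients" { geoip country GB; };

view "us" {
    match-clients { us-clients; };
    zone "example.com" { type master; file "db.example.com.us"; };
};
view "gb" {
    match-clients { gb-clients; };
    zone "example.com" { type master; file "db.example.com.gb"; };
};
view "default" {
    match-clients { any; };
    zone "example.com" { type master; file "db.example.com.default"; };
};
```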
2) To assign a particular client to a particular server, you can use the technique I described in point 1, or you can use a proxy and sticky sessions.
HAProxy is a good product to achieve this. (The URL: xy.1wt.eu)
3) If you use the approach from point 1, you will not have problems with cross-domain AJAX calls. In fact, it is completely transparent for the client: for the same domain example.com, a user coming from the US will resolve it to 1.1.1.1, whereas a user coming from Germany will resolve example.com to 2.2.2.2 (the IP addresses are fake and used just as an example).
On a side note, a solution for making cross-domain AJAX calls is JSON-P, which has some drawbacks though, such as the lack of support for POST.
If I were you, I would go with BIND and GEO-IP, because it would solve all three problems at once. (Apart from the latency, because it is not always true that the geographically closest server is the one with the lowest latency.)