Is it possible to push something (maybe a text snippet) to a large number (thousands) of Unix hosts over HTTP, using Comet or something like that?
Basically my requirement is to transfer a text file to multiple Unix hosts in one go; currently I am using SSH and it's rather slow :(
I thought about cronning a poll with wget/curl, but that causes a lot of unwanted traffic.
Any insights please?
Take a look at Udpcast - it might or might not be what you are looking for. Here is some guy's blog about using it.
Comet is unrelated to this; each client would still have its own connection. If you have control of the network, you could use multicast to send it in one go. Or, if you have control of the clients, you could have them all forward it to each other to spread the load out from the first PC.
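If multicast is an option, here is a minimal sketch of the sending side in Go; the group address 239.1.1.1:9999 and the file name are placeholders, and this assumes the snippet fits into a single UDP datagram (roughly 64 KB):

    package main

    import (
        "log"
        "net"
        "os"
    )

    func main() {
        // The text to distribute; the file name is a placeholder.
        payload, err := os.ReadFile("snippet.txt")
        if err != nil {
            log.Fatal(err)
        }

        // Placeholder multicast group address.
        group, err := net.ResolveUDPAddr("udp", "239.1.1.1:9999")
        if err != nil {
            log.Fatal(err)
        }

        conn, err := net.DialUDP("udp", nil, group)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // One send reaches every host that has joined the group.
        if _, err := conn.Write(payload); err != nil {
            log.Fatal(err)
        }
    }

Each host would join the group with net.ListenMulticastUDP, read the datagram and write it to disk, so a single send reaches all of them at once.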
So I'm thinking of building an application that acts like a VPN of sorts. The idea is that my application should collect all traffic leaving the device it is running on and handle it itself instead of forwarding it straight to the internet. My application will forward all of this traffic/these packets to an external server that performs whatever the original request intended. The same should apply in reverse as well.
This thread Routing all packets through my program? gave me a few places to start with...
So far my idea is to use a packet-capturing library to capture all packets and pass them on to another section of my program, where another header is added on top of the existing packets before they are sent to my external server. The server parses the header, determines the destination address and the intended action, and gets a response. This response is then wrapped with another header and sent back to my application. With the help of a netfilter PREROUTING hook I can forward the packets to the required application...
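To make the "add another header, then send to my server" step concrete, here is a minimal sketch of one possible framing; the 4-byte length prefix and the relay address are my own assumptions, not an existing protocol:

    package main

    import (
        "encoding/binary"
        "log"
        "net"
    )

    // encapsulate prepends a simple custom header (here just the payload
    // length) to a captured packet before it is relayed.
    func encapsulate(packet []byte) []byte {
        header := make([]byte, 4)
        binary.BigEndian.PutUint32(header, uint32(len(packet)))
        return append(header, packet...)
    }

    func main() {
        // Placeholder address of the external server that unwraps the
        // header and performs the original request.
        relay, err := net.Dial("tcp", "relay.example.com:4433")
        if err != nil {
            log.Fatal(err)
        }
        defer relay.Close()

        // In the real program these bytes would come from the capture library.
        captured := []byte("raw packet bytes")
        if _, err := relay.Write(encapsulate(captured)); err != nil {
            log.Fatal(err)
        }
    }

The server side would read the 4-byte length, pull in exactly that many bytes, and then decide what to do with the inner packet.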
So this is as far as I have thought it through. But you see, I'm relatively new to networking concepts and very much interested in moving forward. So any suggestions - how-tos, or "this might not work, try this instead" - are welcome. Even if my entire idea is faulty, please say so. I'm not expecting you to explain things entirely, just point me to some stuff that could be useful.
And lastly, note that the result I'm intending to get out of this is to demonstrate how content can be unblocked within an organizational network. Most administrators block based on domains and such, so most won't block connections to arbitrary servers. But worry not, I'm seriously not going to use this. This is just to improve my knowledge and out of my own interest...
So any help is appreciated. Thanks in advance...
I am working for a company that provides file-sharing software for all sorts of protocols, such as FTP, SFTP, FTPS and so on. One of our customers is facing an issue with key authentication and sporadic login problems.
Going through the code, I am pretty certain that the server collapses under too many requests at the same time. What I need right now is a simple tool to test exactly that situation: a simple SFTP fuzzer or stresser that sends invalid or broken auth attempts to the SFTP server.
I am not a developer but a technician, and instead of writing something myself (which would take forever) I would love to have a simple script or toolset ready to go... if there is one.
Ok, found one faster than I thought.
Steps:
1) Download Kali Linux (or any distro that contains Metasploit)
2) Fire up Kali Linux and put it in the same subnet as your SFTP server
3) Start Metasploit and use the SSH fuzzer auxiliary/fuzzer/ssh/ssh_version_2
4) Set RHOST and RPORT to the IP and port your server is listening on
5) Exploit (run the module) and see what happens
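For reference, the msfconsole commands for steps 3-5 look roughly like this (the IP below is a placeholder for your SFTP server):

    use auxiliary/fuzzer/ssh/ssh_version_2
    set RHOST 192.0.2.10
    set RPORT 22
    run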
I have a VM from Digital Ocean. It currently has two domains linked to the VM.
I do not use any web server other than Go's built-in net/http package. Performance-wise I like it, and I feel like I have full control over it.
Currently I am using a single Go program that has multiple websites built in.
http.HandleFunc("test.com/", serveTest)
http.HandleFunc("123.com/", serve123)
http.HandleFunc("/", serve123)
As they are websites, the Go program listens on port 80 for all of them.
The problem is that when I want to update only one website, I have to recompile and restart the whole thing, because they are built into the same program.
1) Is there a way to make it hot-swappable using only Go (without Nginx or Apache)?
2) What would be a standard best practice?
Thank you so much!
Well, you can do hot-swapping in Go, but I really wouldn't want to do that unless really necessary, as the added complexity isn't negligible (and I'm not talking about code).
You can get something close with a kind of proxy that sits in front of the program and does a graceful swap whenever your binary changes: the principle is to have the binary on one port and the proxy on another. When a new binary is ready, you run it on yet another port, make the proxy redirect to the new port, then gracefully shut down the old one.
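A minimal sketch of that idea in Go, assuming the website binaries listen on local ports such as 8081/8082 and an admin port 9000 that is only reachable locally (all of these are placeholders):

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
        "sync/atomic"
    )

    func main() {
        // Address of the currently running binary; a freshly deployed
        // build would listen on a different port, e.g. 8082.
        var backend atomic.Value
        backend.Store("http://127.0.0.1:8081")

        // Hypothetical admin endpoint used during a deploy to point the
        // proxy at the new binary:
        //   curl "http://127.0.0.1:9000/switch?to=http://127.0.0.1:8082"
        go func() {
            log.Fatal(http.ListenAndServe("127.0.0.1:9000", http.HandlerFunc(
                func(w http.ResponseWriter, r *http.Request) {
                    if to := r.URL.Query().Get("to"); to != "" {
                        backend.Store(to)
                    }
                })))
        }()

        // Public side: every request is forwarded to whichever backend
        // is current at that moment.
        log.Fatal(http.ListenAndServe(":80", http.HandlerFunc(
            func(w http.ResponseWriter, r *http.Request) {
                target, err := url.Parse(backend.Load().(string))
                if err != nil {
                    http.Error(w, "bad backend", http.StatusBadGateway)
                    return
                }
                httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, r)
            })))
    }

Once the proxy points at the new port, the old binary can drain its in-flight requests and exit (http.Server.Shutdown does that gracefully).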
There was a tool for that in Go that I can't remember the name of…
EDIT: not the one I had in mind, but close call https://github.com/rcrowley/goagain
Personal advice: use a reverse proxy for that; it's much simpler to do. My personal setup is to use h2o to terminate SSL, HTTP/2, etc., and send the requests to the various websites running in the background. Not only Go ones, though, but also PHP ones, a GitLab instance, etc. It's much more flexible, and the performance penalty of the proxy is small…
I am working on providing analytics for our web property based on instrumentation data we collect via a simple image beacon. Our data pipeline starts with Flume, and I need the fastest possible way to parse query string parameters, form a simple text message and shove it into Flume.
For performance reasons, I am leaning towards nginx. Since serving a static image from memory is already supported, my task is reduced to handling the query string and forwarding a message to Flume. Hence the question:
What is the simplest reliable way to integrate nginx with Flume? I am thinking about using syslog (Flume supports syslog listeners), but I struggle with how to configure nginx to forward custom log messages to a syslog (or just TCP) listener running on a remote server and on a custom port. Is it possible with existing 3rd party modules for nginx or would I have to write my own?
Separately, anything existing you can recommend for writing a fast $args parser would be much appreciated.
If you think I am on a completely wrong path and can recommend something better performance-wise, feel free to let me know.
Thanks in advance!
You should parse the nginx log file the way tail -f does and then pass the results to Flume. That will be the simplest and most reliable way. The problem with syslog is that it blocks nginx and may get completely stuck under high load or when something goes wrong (which is why nginx doesn't support it).
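A minimal sketch of that tail-and-forward idea in Go, assuming nginx writes the beacon requests to /var/log/nginx/beacon.log and Flume has a plain TCP listener (e.g. a netcat source) on flume.example.com:5140 - both are placeholders:

    package main

    import (
        "bufio"
        "io"
        "log"
        "net"
        "os"
        "time"
    )

    func main() {
        f, err := os.Open("/var/log/nginx/beacon.log")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        // Start at the end of the file, like tail -f.
        if _, err := f.Seek(0, io.SeekEnd); err != nil {
            log.Fatal(err)
        }

        flume, err := net.Dial("tcp", "flume.example.com:5140")
        if err != nil {
            log.Fatal(err)
        }
        defer flume.Close()

        reader := bufio.NewReader(f)
        var pending string
        for {
            chunk, err := reader.ReadString('\n')
            pending += chunk
            if err == io.EOF {
                // No complete line yet; wait for nginx to write more.
                time.Sleep(200 * time.Millisecond)
                continue
            }
            if err != nil {
                log.Fatal(err)
            }
            // Forward the full log line to Flume.
            if _, err := flume.Write([]byte(pending)); err != nil {
                log.Fatal(err)
            }
            pending = ""
        }
    }

Log rotation and reconnecting to Flume are left out here; a production version would need to handle both.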
I'm playing a game and I'm trying to send some custom requests to the server in order to perform some tasks more easily. While I will gain little to nothing from this, I have become very interested in the educational part of it.
Since the game runs partially on the client via a .jar and/or a .cab file, I think it is run by the JVM - correct me if I'm wrong.
I have captured some traffic sent by the game with Wireshark. The protocol is TCP and it looks like this:
!, 1338,102,264,0.0 ,0.0,32433553,0, 102,264,
Never mind all the numbers - that's for me to figure out.
But when I create and send a similar packet via a couple of different programs, it always fails. This is, of course, because I am sending the wrong sequence number along with the TCP packet.
So, in order not to mess up the sequence number, I figure I will have to inject into the process running the game and then somehow make it send my custom packets.
How do I go about that?
You can't mess with the TCP sequence number in pure Java. Java doesn't even do that itself, the TCP stack does all that.
It is most unlikely that this is your real problem.
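To illustrate: with an ordinary socket, the OS's TCP stack takes care of sequence numbers entirely, and you only craft the application-level line. A tiny sketch in Go (the host, port and trailing newline are my assumptions):

    package main

    import (
        "log"
        "net"
    )

    func main() {
        // The kernel's TCP stack manages sequence numbers, ACKs and
        // retransmission; your code never sees them.
        conn, err := net.Dial("tcp", "game.example.com:7777")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Only the application-level payload is yours to craft.
        msg := "!, 1338,102,264,0.0 ,0.0,32433553,0, 102,264,\n"
        if _, err := conn.Write([]byte(msg)); err != nil {
            log.Fatal(err)
        }
    }

If the server still rejects the message, the cause is more likely some application-level state (for example a handshake or session the real client sets up first) than TCP sequencing.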