HTTP response times GUI

I'm looking for an application, available on CentOS, that lets me check periodic connectivity response times between that server and a specific port on a remote server (in this case, one serving a SOAP API).
Something that preferably lets me send periodic API calls or, failing that, just telnets to that remote port, but shows the results in a graph.
Does anyone know of an application that allows this, without me having to write a script that logs results to a file, which is hard to read from a time perspective?

After digging and testing a bit more, I ended up using Netdata:
https://www.netdata.cloud/
It's an awesome tool, extremely simple to install and use.
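For reference, the kind of check being asked for boils down to timing a TCP connect to the remote port; a minimal sketch in Python (host and port are placeholders):

```python
# time a TCP connect to a remote port (host and port are placeholders)
import socket
import time

host, port = "api.example.com", 8443
start = time.monotonic()
try:
    with socket.create_connection((host, port), timeout=5):
        elapsed_ms = (time.monotonic() - start) * 1000
        print(f"connect to {host}:{port} took {elapsed_ms:.1f} ms")
except OSError as exc:
    print(f"connect to {host}:{port} failed: {exc}")
```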

How to outsource code to other computers to run perpetually?

I've created a web scraper that scrapes info from web pages and uses it to populate and send API POST requests; it runs perpetually (there are some tens of thousands of pages to scrape, and each request takes about 1 second in order to avoid "Too Many Requests", i.e. HTTP 429, errors).
I want to streamline the process by spreading the work across other IP addresses. If I run more requests from my own IP, the site will likely begin to block them. The goal would be to have 4 or 5 instances of this code running perpetually.
The only solution I know of that would work is using VMs to run additional instances of the code, but I imagine there are simpler ways to achieve this goal.
"outsourcing" is the wrong word.
Terminology
You want "remote execution" or some kind of distributed computing, and probably even remote procedure calls.
You could use JSON-RPC, RPC/XDR, XML-RPC, CORBA, SOAP, or REST over HTTP. You'll find (on GitHub, GitLab, SourceForge, in your favorite Linux distribution, etc.) many free software libraries to help you (even libssh). You could even find distributed libraries for web scraping.
More generally, you could do some message passing (consider 0mq) or some MapReduce. You probably want a text-based protocol (they are easier to debug; e.g. a JSON-based one), perhaps on top of Berkeley sockets.
Details are operating system specific.
If on Linux, read ALP, then syscalls(2), socket(7), socket(2) and related, then tcp(7).
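As a minimal sketch of the RPC idea, here is what handing scrape jobs to a remote worker could look like with Python's standard library XML-RPC modules (the host, port, and scrape function are hypothetical):

```python
# worker.py - runs on each remote machine
from xmlrpc.server import SimpleXMLRPCServer

def scrape(url):
    # placeholder for the real scraping + API-post logic
    return {"url": url, "status": "done"}

server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
server.register_function(scrape)
server.serve_forever()
```

```python
# coordinator.py - hands URLs to a remote worker
import xmlrpc.client

worker = xmlrpc.client.ServerProxy("http://worker1.example.com:8000")
print(worker.scrape("https://example.com/page/1"))
```

A real deployment would add authentication and spread the URL list across several such workers.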

Looking for a good method to transfer critical real time data over internet

I am searching for a good method to transfer data over the internet, and I work in a C++/Windows environment. The data is binary: a compressed blob of an extracted image. Input and requirements are as follows:
6 kB/packet @ 10 packets/sec (60 kB per second)
Reliable data transfer
I am new to network programming, and so far I have figured out that one of the following methods might be suitable:
Sockets
MSMQ (MS Message Queuing)
The client runs in a browser (it shows real-time images in the browser), while the server runs native C++ code. Please let me know if there are any other methods for achieving this. Which one should I go for, and why?
If the server determines the pace at which images are sent, which is what it looks like, a server-push style solution would make sense. What most browsers (and even non-browsers) are settling on these days is WebSockets.
The main advantage WebSockets have over most proprietary protocols, apart from becoming a widely adopted standard, is that they run on top of HTTP and can thus permeate (most) proxies and firewalls etc.
On the server side, you could potentially integrate node.js, which allows you to easily implement WebSockets and comes with a lot of other libraries. It's written in C++ and is extensible via C++ and JavaScript, for which node.js hosts a VM. node.js's main feature is being asynchronous at every level, making that style of programming the default.
But of course there are other ways to implement WebSockets on the server side, maybe node.js is more than you need. I have implemented a C++ extension for node.js on Windows and use socket.io to do WebSockets and non-WebSocket transports for older browsers, and that has worked out fine for me.
But that was textual data. In your binary data case, socket.io wouldn't do it, so you could check out other libraries that do binary over WebSockets.
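If the server side were done in Python rather than node.js, a minimal sketch of pushing binary frames over a WebSocket could look like this (using the third-party websockets package, and assuming a recent version where handlers take a single connection argument; frame contents, host, and port are placeholders):

```python
# push binary frames to each connected browser at ~10 frames/second
import asyncio
import websockets

async def push_images(ws):
    while True:
        frame = b"\x00" * 6144      # placeholder for a 6 kB compressed image
        await ws.send(frame)        # bytes objects go out as binary frames
        await asyncio.sleep(0.1)

async def main():
    async with websockets.serve(push_images, "0.0.0.0", 8765):
        await asyncio.Future()      # run forever

asyncio.run(main())
```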
Is there any specific reason why you cannot run a server on your Windows machine? At 60 kB/second, this looks like some kind of embedded device?
Based on your description, you need to show image information in real time in a browser. You could possibly use HTTP, but it's stateless: once the information is transferred, you lose the connection, so your client would need to poll the C++/Windows machine. If you are pretty confident the information is generated periodically, you can use this approach. This requires a server, so it only applies if the answer to my first question is yes.
A chat protocol is another option: something like a Jabber client running on your client, and a Jabber server on your C++/Windows machine. Chat protocols allow almost real-time delivery.
While it may seem to make sense, I wouldn't use MSMQ in this scenario. You may not run into a problem now, but MSMQ messages are limited in size and you may eventually hit a wall because of this.
I would use TCP for this application; TCP is built with reliability in mind, and you can simply feed data through a socket. You may have to figure out a very simple protocol yourself, but it should be the best choice.
Unless you are using an embedded device that understands MSMQ out of the box, your best bet with MSMQ would be to go through a proxy, and you would then still be forced to play with TCP and possibly HTTP anyway.
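A minimal sketch of such a "very simple protocol": length-prefix each image blob so the receiver knows where one frame ends and the next begins (Python here for brevity; the framing idea carries over directly to C++):

```python
# length-prefixed framing over a plain TCP socket
import socket
import struct

def send_frame(sock, payload):
    # 4-byte big-endian length header, then the payload itself
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exactly(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def recv_frame(sock):
    (length,) = struct.unpack(">I", recv_exactly(sock, 4))
    return recv_exactly(sock, length)
```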
I do home automation, including security cameras, in my personal time, and I use the .NET Micro Framework; even if it did have MSMQ capabilities, I still wouldn't use MSMQ.
I recommend that you look into MJPEG (Motion JPEG), which sounds exactly like what you would like to do:
http://www.codeproject.com/Articles/371955/Motion-JPEG-Streaming-Server
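For reference, MJPEG over HTTP is simply a multipart/x-mixed-replace response in which each part is one JPEG frame; a minimal sketch (the frame source and port are placeholders):

```python
# minimal MJPEG-over-HTTP streaming sketch (standard library only)
from http.server import BaseHTTPRequestHandler, HTTPServer
import time

def next_jpeg_frame():
    # placeholder: return the latest compressed JPEG frame as bytes
    return b"..."

class MJPEGHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type",
                         "multipart/x-mixed-replace; boundary=frame")
        self.end_headers()
        while True:
            frame = next_jpeg_frame()
            self.wfile.write(b"--frame\r\n")
            self.wfile.write(b"Content-Type: image/jpeg\r\n")
            self.wfile.write(f"Content-Length: {len(frame)}\r\n\r\n".encode())
            self.wfile.write(frame + b"\r\n")
            time.sleep(0.1)  # ~10 frames per second

HTTPServer(("0.0.0.0", 8080), MJPEGHandler).serve_forever()
```

The browser keeps the single HTTP response open and repaints the image as each new part arrives, which is why this suits a server that determines the pace.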

What is the best method to send data from a device to a server

I am currently developing a website for an energy-monitoring company. We are trying to send high volumes of data from the devices that record it to a server, so the data can be processed into a database. The guy developing the firmware thinks the best way to send the data is to produce CSV files and send them via FTP. A program on the server would monitor the files received via FTP and run a PHP script to process them. I, however, feel that the best way of sending the data is via HTTP POST.
We had HTTP POST working, and then I began trying to work with the CSVs, which became a pain: reliably monitoring the files received via FTP meant editing the ProFTPD configuration file (which I found to be a near-impossible task) and installing a package called mod_exec (which comes with security risks) so that ProFTPD could run a PHP script. These issues, plus the fact that I am unfamiliar with the Linux console I would have to use extensively to set this up, make the CSV method very difficult. HTTP POST seems to me a more direct way of sending the data, without having to worry about files or rely on ProFTPD. It would also let us use identifiers to give the data meaning, as opposed to a string of values whose meaning is not immediately apparent. In addition, the query string could be URL-encoded to pass a multidimensional array, which would work well given the type of data being passed.
Nevertheless, just because the HTTP POST method would be easier doesn't mean that the CSV method doesn't have advantages. Furthermore, the firmware guy has far more experience than me with computers so I trust his opinion.
Can you please help me to understand his point of view on the advantages of the CSV method and explain what the best method is?
You're right. FTP has major issues with firewalls, and especially doesn't work well on mobile (NAT'ted) IPv4. HTTP POST works far, far better under such circumstances, if only because nobody accepts an "internet" connection that breaks HTTP.
Furthermore, HTTP is a lot easier on the device as well. It's just a single-socket protocol, with trivial read/write semantics on that socket.
Some more benefits? HTTP has almost-native support for compression (gzip). HTTP transmission can start before the input is complete. HTTP is easier to secure (HTTPS)...
No, there really is little reason to use FTP.
The 'CSV method' (I'd call it the 'FTP method' though) has the advantage of being known to the embedded developer. The receiving side will have to create some way of checking if there is a file though. That adds complexity.
The 'HTTP method' has several advantages:
HTTP is easy to implement on the sending side
No need to create a file-checker
You can reply to the embedded device if everything went OK
I actually implemented a system just like that (not too much data, but still) and used HTTP POST to send the data. I implemented the HTTP POST myself.
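A minimal sketch of the device-side POST (the URL and payload are placeholders; gzip is shown because, as noted above, HTTP supports compression almost natively):

```python
# device-side HTTP POST of a gzip-compressed reading
import gzip
import json
import urllib.request

reading = {"device_id": "meter-42", "kwh": 1.25, "ts": 1700000000}
body = gzip.compress(json.dumps(reading).encode())

req = urllib.request.Request(
    "https://example.com/ingest",   # hypothetical endpoint
    data=body,
    headers={"Content-Type": "application/json",
             "Content-Encoding": "gzip"},
    method="POST",
)
with urllib.request.urlopen(req, timeout=10) as resp:
    print(resp.status, resp.read().decode())  # the server can ack receipt
```

Note how the named JSON fields carry their own meaning, and the response gives the device an immediate "everything went OK" signal, two of the advantages listed above.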

How do client-side web-based agents work?

I'm not sure if I'm asking the question properly. I'm referring to locally installed software, often called an "Agent" that keeps in regular communication with some host via HTTP. e.g. When you install LogMeIn, the Agent keeps in communication with the logmein.com server so that when you visit logmein.com with your web browser and connect to the agent, the server is able to initiate communication. The Agent, however, isn't a webserver, nor are any ports forwarded to the Agent. So, is the Agent constantly polling the server asking like a broken record, "Can I help you? Can I help you? Can I help you?" Or is the http connection from Agent to server somehow kept open? I know you can keep an http connection open, but A) how, and B) for how long? Does the Agent need to act like a less annoying broken record asking, "Can I help you? Yet? Yet? Yet?" with much more time in between each question? Or can the Agent ask once and wait indefinitely, asking again only once it learns that the connection has been dropped?
Bottom line is, I'd like to create a small little sample program for trying my hand at writing a client/server application that communicates via the Internet using HTTP. Either side needs to be able to initiate commands / requests. The Agent would likely communicate with the Server using some sort of API, perhaps RESTful. When I start the experiment, I'll be using Perl. It'd be fun to create a Hello World project that would have samples in many languages for many platforms how to write the agent and how to communicate with the server. The agent code would do client side things (e.g. determine public IP address) and send the data to the server. The server would act on the data (e.g. store IP address in a database). The server might also initiate a command to the Agent (e.g. Hey, Agent! What's your CPU type?) Proper authentication / authorization between Agent and Server is of course a necessity.
Are there any existing projects to model off of? Any existing documents? Perhaps I'm just missing terminology and if I just knew that everything I was asking can be summarized by the term foo, then the doors would be opened wide for what I could find in searches!
I looked into the code of Ubuntu's Landscape. It uses Python's Twisted, an event-driven networking framework that can serve HTML5 WebSockets. So I'd say what I was looking for in an answer is WebSockets (bi-directional communication). That has now opened up a wealth of options for web servers: node.js, Twisted, Mojolicious, and many many more. It turns out that using Ajax to poll every few seconds is a very bad idea (it overwhelms web servers); keep the connection open instead.
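Before reaching for WebSockets, the "less annoying broken record" pattern from the question also has a name: long polling. The agent asks once, and the server simply holds the request open until it has a command. A minimal agent-side sketch (the endpoint and timings are placeholders):

```python
# long-polling agent: keep one request open at a time, retry on errors
import time
import urllib.request

POLL_URL = "https://example.com/agent/commands"  # hypothetical endpoint

while True:
    try:
        # the server holds this request open until a command is ready
        with urllib.request.urlopen(POLL_URL, timeout=60) as resp:
            command = resp.read().decode()
            print("received command:", command)
    except OSError:
        # timeout (no command yet) or dropped connection: back off, retry
        time.sleep(5)
```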

Secure data transfer over HTTP when HTTPS is not an option

I would like to write an application to manage files, directories and processes on hundreds of remote PCs. There are measurement programs running on these machines, which are currently managed manually using TightVNC / RealVNC. Since the number of machines is large (and increasing) there is a need for automatic management. The plan is that our operators would get a scriptable client application, from which they could send queries and commands to server applications running on each remote PC.
For the communication, I would like to use a TCP-based custom protocol, but that is administratively complicated: it would take very long to open pinholes in every firewall along the way. Fortunately, there is a program with a built-in TinyWeb-based custom web server running on every remote PC, and port 80 is open in every firewall. These web servers serve requests coming from a central server by starting a CGI program, which loads and sends back parts of the log files of the measurement programs.
So the plan is to write a CGI program and communicate with it from the clients through HTTP (using GET and POST). Although (most of) the remote PCs are inside the corporate intranet, they are scattered all over the country, so I would like to secure the communication. It would not be wise to send commands that manipulate files and processes in plain text. Unfortunately, the program that contains the web server cannot be touched, so I cannot simply prepare it for HTTPS. I can only implement the security layer in the client and in the CGI program. What should I do?
I have read all similar questions in SO, but I am still not sure what to do in this specific situation. Thank you for your help.
There are several web shells, but as far as I can see ( http://www-personal.umich.edu/~mressl/webshell/features.html ) they run on top of an existing SSL/TLS layer.
There is also S-HTTP.
There are several ways of authenticating to a server (username/password) in a protected way without SSL, e.g. http://www.switchonthecode.com/tutorials/secure-authentication-without-ssl-using-javascript . But these solutions focus only on sending a username/password to the server.
Would it be possible to implement something like the message-level security of SOAP/WS-Security? I realise this might be a bit heavy-duty and complicated to implement, but at least it is:
standardised
definitely secure
possibly supported by some libraries or frameworks you could use
suitable for HTTP
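As a lighter-weight cousin of WS-Security's message-level protection, each command could carry an HMAC computed with a key shared between the client and the CGI program. A minimal integrity/authentication sketch (the shared key is a placeholder; note this signs but does not encrypt, so confidentiality would still need an encryption library, and a timestamp or nonce should be added to the signed payload to prevent replay):

```python
# message-level authentication for commands sent over plain HTTP
import hashlib
import hmac

SHARED_KEY = b"replace-with-a-long-random-secret"  # placeholder

def sign(command: bytes) -> str:
    return hmac.new(SHARED_KEY, command, hashlib.sha256).hexdigest()

def verify(command: bytes, signature: str) -> bool:
    # constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(sign(command), signature)

# client side: POST the command plus its signature
cmd = b"restart measurement-program-7"
sig = sign(cmd)

# CGI side: recompute and compare before acting on the command
assert verify(cmd, sig)
```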
