Can we install Zabbix server and agents in 2 different networks?

We are planning to install Zabbix in our production environment, as we need to monitor around 10-12 servers. The key point here is that we are planning to install the Zabbix server on an external internet-facing server, while these 10 agents are on the intranet. These agents have restricted access and cannot be reached from outside.
I would like to know whether it is possible to connect these agents to the server using an HTTP proxy, and if so, how.

While you cannot use an HTTP proxy (at least not without tunneling through it), your agents can connect out to the server using active items in Zabbix. Note that this is configured at the item level.
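For example, a minimal zabbix_agentd.conf for an active agent might look like this (the server address and host name are placeholders for your environment):

# The agent initiates the connection, so only outbound access from the
# intranet to the Zabbix server's trapper port (10051 by default) is needed.
ServerActive=zabbix.example.com:10051
# Must match the host name configured for this host on the Zabbix server.
Hostname=intranet-host-01

With this in place, any item created with the type "Zabbix agent (active)" is collected from the agent's side, so no inbound connection into the intranet is required.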

Related

Building Proxy Site with Nginx and Rotating Proxy Service

I'm looking to build an application similar to https://www.proxysite.com/ but am not sure about the best architecture.
I'm looking to have a data flow like this:
User web browser -> myproxysite.com -> Nginx proxy server (somehow rotating the IP for each client session) -> targetsite.com
The user would then need to maintain a full session on targetsite.com as a logged-in user.
In this example, targetsite.com is always the same site and is pre-determined. The challenge we are facing is that targetsite.com is blocking our users based on IP, many of whom are accessing it from the same office network.
So my questions are:
Does this seem correct?
Is there any way for me to configure Nginx with a rotating proxy service like Luminati? Or do I need to add an API software layer to handle the actual IP changes?
Any guidance on this one would be greatly appreciated!
While I can't help you with your application, I do want to suggest an alternative. You mentioned an office, so it sounds like the users who will use the proxy are employees.
Luminati (now Bright Data) has a proxy manager which you can host on any server. The proxy manager allows you to create ports (e.g. port 24000) and configure each one with whatever proxy you want (it doesn't have to be Bright Data's proxy). It has a ton of different parameters that you can set for each proxy (including IP rotation), and each port can be configured with a unique setup.
Then you simply go to your user's PC, open the browser proxy settings, type the IP address of the server the proxy manager is running on and the specific port you configured, and voila: you have central control of the proxies, and your user's browser is proxied.
A big benefit of this is that the logs in the proxy manager show all activity on each port you set up, so you can monitor traffic and success rates right there.
Proxy manager: https://prnt.sc/13uyjgj
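To sanity-check a configured port before pointing browsers at it, you can run something like this from any machine (the host and port are placeholders; ifconfig.me simply echoes back the public IP it sees):

curl -x http://proxy-manager-host:24000 https://ifconfig.me

Running it twice against a port configured for IP rotation should return two different addresses.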

Reliable delivery of information between a browser and a locally run service using port knocking

The goal
Allow a browser to exchange information with a service running locally. Allow the service to figure out the user (the logon session in Windows) who runs the browser. Avoid, if possible, storing a TLS certificate and private key on the machine. A bonus task: provide a solution for setups where anti-virus software like Kaspersky or Sophos proxies all TCP connections.
The story
The underlying OS is Windows, but it can be any modern OS. There is a daemon running in the system; in the case of Windows this is a Windows service. JavaScript loaded by an Internet browser from a remote server sends data to the daemon. The daemon does not have an HTTP/HTTPS server. Instead, the daemon opens N ports and listens for incoming connections. N is a low two-digit number.
The JS initiates TCP connections to a selected group of K ports in the range N. In the current implementation the JS attempts to load scripts from 127.0.0.1:port-number. The daemon accepts each connection and immediately closes it (a kind of port knocking). The daemon recovers the data from the ports "knocked" by the JS.
In the current implementation the backend chooses a unique tuple of ports, for example a 3-port combination. The tuple is a key identifying the browser session. The service collects the "knocks" - the ports accessed by a specific OS process - and queries the backend using the collected ports.
One of the goals of the solution is to avoid implementing an HTTP/HTTPS server in the service and to avoid the maintenance of an SSL certificate.
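As a rough illustration, a knocking session can be simulated by hand from a shell (the three ports below are a hypothetical tuple of the kind the backend would choose):

# Knock three ports; the daemon accepts each connection and closes it
# immediately, so the HTTP requests themselves are expected to fail.
for port in 24007 24013 24021; do
  curl -s --max-time 1 "http://127.0.0.1:${port}/" > /dev/null 2>&1
done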
The problem
The order in which the JS connects to the ports is not defined. Specifically, two browsers can run knocking sessions simultaneously.
The service can fail to open some of the ports in the range N because the ports are busy.
The order is not critical, because the server chooses a unique combination from the range N, but I need the system to tolerate missing ports. I was thinking about choosing more than one tuple and using more than one range N.
The question
How can I apply FEC (forward error correction) to this problem? Does the design make sense?

Listen to voice messages on server A from server B with Asterisk

I have RealTime Asterisk with 3 servers. In the database I hold only SIP peers and voicemail boxes. Voicemail messages are stored on the filesystem (FILE_STORAGE).
Servers A and B are for calls and SIP registrations, and Server C is DUNDi.
Currently everything works fine: I can call from Server A to Server B. The problem is when I leave a message for a number that is busy and registered on Server B; if that number then disconnects and registers on Server A, it can't listen to the messages, because they are stored on Server B.
How can I make any user able to listen to his messages no matter which server he is on?
You have a lot of options, most of them in the clustering area.
Simplest options are:
A GlusterFS setup on both servers, with the voicemail in a GlusterFS directory. This one does failover.
An NFS/Samba share on both servers.
MySQL master-master replication with ODBC_STORAGE, putting all voicemails in the database. This one is recommended if you also want easy access to your voice files from a web interface and simple search/lookup/retrieval of messages. It is highly recommended to use InnoDB tables and an optimized MySQL config.
The easiest way to just be able to listen to the messages no matter which of the two servers the user is registered on is NFS, mounting for example /var/spool/asterisk/. In this case you need to install some additional components.
Here is a great tutorial on how to do this:
How to configure an NFS server and mount NFS shares - Ubuntu
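As a rough sketch on Ubuntu (the subnet and host name are placeholders), the setup boils down to:

# On the server that holds the voicemail files:
sudo apt install nfs-kernel-server
echo "/var/spool/asterisk 10.0.0.0/24(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -ra

# On the other Asterisk server:
sudo apt install nfs-common
sudo mount nfs-server-ip:/var/spool/asterisk /var/spool/asterisk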
Another way, if you can, is to make a master-slave cluster with the two servers and use rsync. Then you can sync the folder to the remote server every X minutes/hours/days to keep the messages in case of failure.
rsync -a local_dir/ user@remote-host-ip:/path/to/dir
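For example, a crontab entry like this (user, host and paths are placeholders, as above) would push the folder every 15 minutes:

*/15 * * * * rsync -a local_dir/ user@remote-host-ip:/path/to/dir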

BMC Control-M - can it manage remote offsite servers?

Our organisation uses Control-M to manage all the scripts within our network. However, we have recently entered a contract to have a new application hosted for us. It is an ecommerce application and has scripts run through cron jobs.
What I'd like to know is whether Control-M has any offsite agent functionality, so that an agent can be installed at this remote site and kept in communication with the Control-M server within our infrastructure, letting us monitor these scripts along with the rest of our applications.
Thank you.
Control-M supports Agents anywhere as long as you have network connectivity. The default Server/Agent ports are 7005-7012 inclusive, so those will need opening on any firewall in between.
If you don't want the remote site to run the full Agent (which needs a local install), then you can use the Agentless option (it works via WMI on Windows or SSH on Unix), which only needs defining on the Control-M Server side.
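For example, on a remote Linux host running firewalld, opening the default range might look like this (adjust for whatever firewall you actually use):

sudo firewall-cmd --permanent --add-port=7005-7012/tcp
sudo firewall-cmd --reload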
Not sure about your question. The Control-M server talks TCP/IP to its agents and does not care whether it is connecting to a server in the same datacenter or on another continent.
Install a VPN between you and your hosted application and install a Control-M agent, or set up agentless scheduling using SSH, and you are good to go.

State server in webfarm scenario?

This is a fairly basic question about the state server, but assume there are 2 servers behind a load balancer. How do I configure the session state server?
So, I have machine1 and machine2. I would assume that I need to install the state server on one machine only and then use its internal IP to refer to that machine. Is this correct? As opposed to installing the state server on both machines.
In your scenario (and most web farm scenarios), a single state server is right.
You could refer to it by the internal IP, or set up a DNS entry for the IP on the internal network and refer to it using that.
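For example, the web.config on both web servers would point at the single state server machine, something like this (the IP is a placeholder; 42424 is the ASP.NET state service's default port):

<configuration>
  <system.web>
    <!-- Same entry on machine1 and machine2 -->
    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=10.0.0.5:42424"
                  timeout="20" />
  </system.web>
</configuration>

Note that in a web farm both machines also need an identical <machineKey> element so that session data written through one server is usable through the other.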
A single state server is mad, as you have no fault tolerance; if it goes down, it's game over. You need a distributed state store spanning both servers.
