connect to specific asterisk instance CLI

On a centos dedicated server,
I'm running two asterisk instances on different bind IPs.
When I do
asterisk -r
it connects to the default Asterisk instance, the one that was started first.
I have tried:
asterisk -r -s NEW_IP_OF_SECOND_INSTANCE:5060
It gives "unable to connect", but the IP and port are both correct; netstat shows the listening ports.
How do I connect to that second Asterisk instance?

The -s option is used to specify the Asterisk control socket file (like /var/run/asterisk/asterisk.ctl). If you have multiple Asterisk instances running on one server, use this option with the appropriate asterisk.ctl file.
5060 is a SIP port number, used to originate and receive VoIP calls rather than for management purposes, so you can't use it with the asterisk -r command.
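For example, if the second instance was started with its own asterisk.conf, point -s at that instance's control socket (the path below is just an assumption; check the astrundir setting in that instance's asterisk.conf):
asterisk -r -s /var/run/asterisk2/asterisk.ctl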


Asterisk sip trunk peer NIC

I'm working with Asterisk 14.7.6 and FreePBX 14.0.13.23 in an EC2 instance on AWS.
At the moment I have a SIP trunk to a 3CX server working; I need to make another trunk to the same server.
My idea was to add another NIC to the Asterisk machine and add an externip parameter in the sip.conf file for another SIP trunk, and I did that. When I run sip show peers in the Asterisk console, it shows "Status OK (100 ms)", but in 3CX the incoming traffic still came from the first trunk.
Is it possible to create this kind of SIP trunk, or do I need to launch another machine to create a kind of bridge between my Asterisk and 3CX?
Thanks,
The only way to do that without starting a second instance of Asterisk is to use chan_pjsip, or a combination of chan_pjsip and chan_sip.
For the first variant you should create multiple endpoint entities (see the sketch below). For the second, just put one channel driver on one IP and the other on the second IP.
You can also start more than one asterisk process on the host by using
asterisk -C asterisk_config.conf
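For the chan_pjsip variant, a minimal pjsip.conf sketch with two transports bound to different addresses might look like this (section names and IPs are made up); each endpoint can then be pinned to one transport with its transport= option:
[transport-first]
type=transport
protocol=udp
bind=203.0.113.10:5060

[transport-second]
type=transport
protocol=udp
bind=203.0.113.20:5060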

Find IP address of collectd client

I've got several PCs, virtual and bare metal, that run clients of the collectd daemon and report their statuses to the monitoring server.
One of those PCs is incorrectly configured and reports localhost as its name.
How can I find its IP address?
The simple answer would be to run tcpdump on the port used for collectd (port 2003, for example) and check the different source IPs.
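For example (assuming the interface is eth0 and collectd traffic arrives on port 2003, as above):
tcpdump -nn -i eth0 port 2003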
Or run ssh against each host and grep the config file directly to see which one has the wrong Host set:
ssh user@IP 'grep Host /etc/collectd/collectd.conf'

How to create ssh tunnel and keep it running

I want to access machine A, which is behind a firewall, through a jump host from machine B.
I want to do this either via SSH keys or via username and password.
What are the steps and commands to achieve this?
The feature is called port forwarding:
ssh -L localport:machine-a-address.domain:remote-port machine-b
Then you can simply use localport on localhost to access the remote service on machine A, for example:
telnet localhost localport
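If the tunnel also needs to stay up unattended, one common approach is to let autossh restart it on failure (this assumes the autossh package is installed; localport and remote-port are the same placeholders as above):
autossh -M 0 -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -L localport:machine-a-address.domain:remote-port machine-b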

Spark SPARK_PUBLIC_DNS and SPARK_LOCAL_IP on stand-alone cluster with docker containers

So far I have run Spark only on Linux machines and VMs (bridged networking), but now I am interested in utilizing more computers as slaves. It would be handy to distribute a Spark slave Docker container onto computers and have them automatically connect to a hard-coded Spark master IP. This sort of works already, but I am having trouble configuring the right SPARK_LOCAL_IP (or the --host parameter for start-slave.sh) on the slave containers.
I think I correctly configured the SPARK_PUBLIC_DNS env variable to match the host machine's network-accessible IP (from the 10.0.x.x address space); at least it is shown on the Spark master web UI and is accessible by all machines.
I have also set SPARK_WORKER_OPTS and Docker port forwards as instructed at http://sometechshit.blogspot.ru/2015/04/running-spark-standalone-cluster-in.html, but in my case the Spark master is running on another machine and not inside Docker. I am launching Spark jobs from another machine within the network, possibly also running a slave itself.
Things that I've tried:
Not configuring SPARK_LOCAL_IP at all: the slave binds to the container's ip (like 172.17.0.45), cannot be connected to from the master or driver; computation still works most of the time, but not always
Binding to 0.0.0.0: the slaves talk to the master and establish some connection, but it dies; another slave shows up and goes away, and they continue looping like this
Binding to the host ip: the start fails, as that ip is not visible within the container, though it would be reachable by others since port-forwarding is configured
I wonder why the configured SPARK_PUBLIC_DNS isn't being used when connecting to slaves? I thought SPARK_LOCAL_IP would only affect local binding and would not be revealed to external computers.
At https://databricks.gitbooks.io/databricks-spark-knowledge-base/content/troubleshooting/connectivity_issues.html they instruct to "set SPARK_LOCAL_IP to a cluster-addressable hostname for the driver, master, and worker processes"; is this the only option? I would rather avoid the extra DNS configuration and just use IPs to configure the traffic between computers. Or is there an easy way to achieve this?
Edit:
To summarize the current set-up:
Master is running on Linux (a VM in VirtualBox on Windows, with bridged networking)
Driver submits jobs from another Windows machine; this works great
The Docker image for starting up slaves is distributed as a "saved" .tar.gz file, loaded (curl xyz | gunzip | docker load) and started on other machines within the network, and has this problem with private/public ip configuration
I am also running Spark in containers on different Docker hosts. Starting the worker container with these arguments worked for me:
docker run \
    -e SPARK_WORKER_PORT=6066 \
    -p 6066:6066 \
    -p 8081:8081 \
    --hostname $PUBLIC_HOSTNAME \
    -e SPARK_LOCAL_HOSTNAME=$PUBLIC_HOSTNAME \
    -e SPARK_IDENT_STRING=$PUBLIC_HOSTNAME \
    -e SPARK_PUBLIC_DNS=$PUBLIC_IP \
    spark ...
where $PUBLIC_HOSTNAME is a hostname reachable from the master.
The missing piece was SPARK_LOCAL_HOSTNAME, an undocumented option AFAICT.
https://github.com/apache/spark/blob/v2.1.0/core/src/main/scala/org/apache/spark/util/Utils.scala#L904
I'm running 3 different types of Docker containers on my machine, with the intention of deploying them into the cloud once all the software we need has been added to them: Master, Worker, and Jupyter notebook (with Scala, R, and Python kernels).
Here are my observations so far:
Master:
I couldn't make it bind to the Docker host IP. Instead, I pass in a made-up domain name: -h "dockerhost-master" -e SPARK_MASTER_IP="dockerhost-master". I couldn't find a way to make Akka bind to the container's IP but accept messages against the host IP. I know it's possible with Akka 2.4, but maybe not with Spark.
I'm passing in -e SPARK_LOCAL_IP="${HOST_IP}" which causes the Web UI to bind against that address instead of the container's IP, but the Web UI works all right either way.
Worker:
I gave the worker container a different hostname and passed it as --host to Spark's org.apache.spark.deploy.worker.Worker class. It can't be the same as the master's, or the Akka cluster will not work: -h "dockerhost-worker"
I'm using Docker's add-host so the container is able to resolve the hostname to the master's IP: --add-host dockerhost-master:${HOST_IP}
The master URL that needs to be passed is spark://dockerhost-master:7077
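Putting the worker settings above together, the docker run might look roughly like this (the spark-worker image name is a placeholder; ${HOST_IP} is the master host's IP as above):
docker run \
    -h dockerhost-worker \
    --add-host dockerhost-master:${HOST_IP} \
    spark-worker \
    spark-class org.apache.spark.deploy.worker.Worker \
    --host dockerhost-worker spark://dockerhost-master:7077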
Jupyter:
This one needs the master URL and add-host to be able to resolve it
The SparkContext lives in the notebook and that's where the web UI of the Spark Application is started, not the master. By default it binds to the internal IP address of the Docker container. To change that I had to pass in: -e SPARK_PUBLIC_DNS="${VM_IP}" -p 4040:4040. Subsequent applications from the notebook would be on 4041, 4042, etc.
With these settings the three components are able to communicate with each other. I'm using custom startup scripts with spark-class to launch the classes in the foreground and keep the Docker containers from quitting, for the moment.
There are a few other ports that could be exposed such as the history server which I haven't encountered yet. Using --net host seems much simpler.
I think I found a solution for my use-case (one Spark container / host OS):
Use --net host with docker run => host's eth0 is visible in the container
Set SPARK_PUBLIC_DNS and SPARK_LOCAL_IP to the host's ip, ignore the docker0 interface's 172.x.x.x address
Spark binds to the host's ip and other machines can communicate with it as well; port forwarding takes care of the rest. No DNS or any complex configs were needed. I haven't thoroughly tested this, but so far so good.
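A sketch of that invocation (the 10.0.0.5 address and the spark image name are placeholders):
docker run --net host \
    -e SPARK_PUBLIC_DNS=10.0.0.5 \
    -e SPARK_LOCAL_IP=10.0.0.5 \
    spark ...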
Edit: Note that these instructions are for Spark 1.x; with Spark 2.x only SPARK_PUBLIC_DNS is required, and I think SPARK_LOCAL_IP is deprecated.

NETSH port forwarding from local port to local port not working

I'm trying to use the NETSH PORTPROXY command to forward packets sent to my XP PC (IP 192.168.0.10) on port 8001 to port 80 (I have a XAMPP Apache server listening on port 80).
I issued the following:
netsh interface portproxy add v4tov4 listenport=8001 listenaddress=192.168.0.10 connectport=80 connectaddress=192.168.0.10
Show all confirms that everything is configured correctly:
netsh interface portproxy show all
Listen on IPv4:             Connect to IPv4:
Address         Port        Address         Port
--------------- ----------  --------------- ----------
192.168.0.10    8001        192.168.0.10    80
However, I'm not able to access the Apache website at http://localhost:8001, while I am able to access it through the direct port at http://localhost.
Additionally, I've also tried the following:
1. Accessing the Apache website from a remote PC using the link http://192.168.0.10:8001, with the firewall turned off.
2. Changing listenaddress and connectaddress to 127.0.0.1.
Without further information, I can't find a way to resolve the problem. Is there a way to debug NETSH PORTPROXY?
Note: By the way, if you're wondering why I am doing this, I actually want to map remote MySQL client connections from a custom port to the default MySQL Server port 3306.
I managed to get it to work by issuing:
netsh interface ipv6 install
Also, for my purpose it is not required to set listenaddress, and it is better to set connectaddress=127.0.0.1, e.g.
netsh interface portproxy add v4tov4 listenport=8001 connectport=80 connectaddress=127.0.0.1
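For the MySQL use case mentioned in the question, the same pattern would be (the listen port 3307 is just an example):
netsh interface portproxy add v4tov4 listenport=3307 connectport=3306 connectaddress=127.0.0.1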
If netsh's port proxying is not working as expected, you should verify the following, preferably in this order:
Make sure the port proxy is properly configured
Start or restart the related Windows service
Ensure support for IPv6 is installed
Make sure the port is not blocked by a firewall
Make sure the port proxy is properly configured
This might seem trivial but, just in case, take the time to review your configuration before you go any further.
From either a command prompt or PowerShell prompt, run the following command:
netsh interface portproxy show all
The result should look something like this:
Listen on ipv4:             Connect to ipv4:
Address         Port        Address         Port
--------------- ----------  --------------- ----------
24.12.12.24     3306        192.168.0.100   3306
24.12.12.24     8080        192.168.0.100   80
Carefully review those settings. Make sure that you can indeed connect, from the local computer, to the addresses on the right side of that list. For example, can you locally open a web browser and reach 192.168.0.100:80? If the protocol is not HTTP, then use telnet: telnet 192.168.0.100 3306 (the Telnet client may first need to be installed on Windows).
Then, are the values on the left side correct? Is the IP address valid for your machine? Is that the port number you are trying to connect to, from the external machine?
Start or restart the related Windows service
On recent versions of Windows, netsh's port proxying is handled by a Windows service named "IP Helper" ("iphlpsvc"). Proxying will obviously not work if that service is stopped. I have also faced situations that turned out to be resolved by restarting that service.
To do that on recent versions of Windows:
Open the Task manager, then go to the Services tab.
In the "Name" column, find the service named either "iphlpsvc" or "IP Helper".
Right click on that service, then select Restart. If restart is not available, then the service is probably stopped, and actually has to be started, so select Start.
On previous versions of Windows, look for Services in Administrative Tools, inside the Control Panel.
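Alternatively, the service can be restarted from an elevated command prompt:
net stop iphlpsvc
net start iphlpsvc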
Ensure support for IPv6 is installed (older releases of Windows only)
On earlier versions of Windows (Windows XP for sure, apparently up to some early releases of Windows 10, though this is not clear), netsh's port proxying feature (including IPv4-to-IPv4 proxies) was actually handled by a DLL (IPV6MON.DLL) that was only loaded if IPv6 protocol support was enabled. Therefore, on these versions, support for the IPv6 protocol is required in order to enable netsh's port proxying (see Microsoft's support article on the subject).
From either a command prompt or PowerShell prompt, run the following command:
netsh interface ipv6 install
If you get an error indicating that the command interface ipv6 install was not found, it means that you are using a recent release of Windows, in which netsh's IPv6 support is implicit and cannot be disabled.
Make sure the port is not blocked by a firewall
A local firewall may block packets even before they reach the IP Helper service. To validate this hypothesis, temporarily disable any local firewall (including Windows' native firewall), then retest. If that works, simply add a port exclusion to your firewall configuration, as shown below.
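For example, with the Windows firewall, a rule allowing the listening port from this question could look like this (the rule name is arbitrary):
netsh advfirewall firewall add rule name="portproxy 8001" dir=in action=allow protocol=TCP localport=8001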
I had the same problem as you, and I have just solved it. There is a Windows service named "IP Helper" that provides the tunnel connection functionality. You should ensure it has been started.
You must run cmd.exe as Administrator first, by right-clicking the Command Prompt icon and choosing Run as Administrator. You will be asked to confirm.
Paste your netsh command into the window and press Enter.
If no error message is shown, the command worked.
In your web browser, go to http://your-ip:8001 to see that it works.
The Windows Event Log might have information to help find the cause of a failure.
