I have just installed Kamailio SIP Server following the instructions on the official site. Later I started the server to listen for SIP messages and added a "test" user. Now the tutorial has ended and I have no idea how to test whether it works correctly or not. I mean, is there some simple "hello world" config to run, or how do I write a simple test and execute it in that environment? What I've found on Google is just module and function descriptions. Thanks for any help, and "real" examples are vital :)
I assume you have chosen a domain for your SIP server (mysipserver.com in the tutorial). I'm also assuming that you have chosen a domain name that you own.
Step 1: check NAPTR & SRV records (optional, but at least SRV is good to have)
In theory, SIP applications will do NAPTR and SRV requests to locate your server for your service. This is described in RFC 3263 and means you are supposed to configure your DNS entries so that SIP applications can find the IP address of your server. Check this page for an example!
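For illustration only (the host names, priorities and IP address below are placeholders, and 5060/5061 are simply the default SIP/SIPS ports), the corresponding zone entries typically look like this:
mysipserver.com.            IN NAPTR 10 0 "s" "SIP+D2U"  "" _sip._udp.mysipserver.com.
mysipserver.com.            IN NAPTR 20 0 "s" "SIP+D2T"  "" _sip._tcp.mysipserver.com.
mysipserver.com.            IN NAPTR 30 0 "s" "SIPS+D2T" "" _sips._tcp.mysipserver.com.
_sip._udp.mysipserver.com.  IN SRV   0 0 5060 sip.mysipserver.com.
_sip._tcp.mysipserver.com.  IN SRV   0 0 5060 sip.mysipserver.com.
_sips._tcp.mysipserver.com. IN SRV   0 0 5061 sip.mysipserver.com.
sip.mysipserver.com.        IN A     203.0.113.10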
Then, you can test the NAPTR records for your service (replace antisip.com with your domain name):
~$ host -t NAPTR antisip.com
antisip.com has NAPTR record 0 0 "s" "SIPS+D2T" "" _sips._tcp.antisip.com.
antisip.com has NAPTR record 2 0 "s" "SIP+D2U" "" _sip._udp.antisip.com.
antisip.com has NAPTR record 1 0 "s" "SIP+D2T" "" _sip._tcp.antisip.com.
Then, use one of the answers to test the SRV queries:
~$ host -t SRV _sips._tcp.antisip.com.
_sips._tcp.antisip.com has SRV record 0 0 5061 sip.antisip.com.
_sips._tcp.antisip.com has SRV record 0 0 5061 sip2.antisip.com.
In the example above, sip.antisip.com and sip2.antisip.com are running the SIP services for antisip.com.
Step 2: Without NAPTR/SRV, at least check DNS
To make it simple, if you have one server, just make sure your domain resolves to your server's IP address:
~$ ping antisip.com
PING antisip.com (91.121.78.130) 56(84) bytes of data.
Note that for me, antisip.com is also the sip.antisip.com server.
Step 3: Testing from Windows
The easiest approach from this point is to test from your favorite desktop OS. This will allow you to start a network capture.
You can download this very simple demo. It's a very basic SIP app, but that makes it easier for testing: VoipByAntisip.exe for Windows
Install Wireshark and start it. Then start a capture and apply the "sip" display filter. You may also later add the "dns" and "rtp" filters.
Test UDP, TCP and then TLS:
To test UDP, in settings, configure:
Proxy: mysipserver.com
username: test
password: yourpassword
protocol: UDP
To test TCP, in settings, modify:
protocol: TCP
To test TLS (without certificate verification), in settings, modify:
protocol: TLS
After applying the changes, the box to the left of the REFRESH button should turn green with "200 OK" displayed. If not, registration over that transport doesn't work, and either the failure code is displayed or a 408 Timeout is shown to indicate that no answer was received.
If you have registered correctly (that is, you have received a 200 OK), then the "location" table of your Kamailio database should contain the newly registered user.
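You can also verify this from the server itself. A quick check (assuming the default kamctl setup and the standard MySQL backend with the database named kamailio; adjust to your installation):
~$ kamctl ul show test
~$ mysql -u kamailio -p kamailio -e "SELECT username, contact, expires FROM location;"
The first command shows the live in-memory registration; the second reads the location table, provided database persistence is enabled.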
Test calls:
Of course, you also need to test calls.
The tutorial doesn't mention that you need an RTP relay! Usually, if you wish to make calls between SIP user agents, an application relaying RTP, like rtpproxy, will need to be installed and configured to work with Kamailio on your server. Without the relay, you should still be able to call (and talk) between two SIP applications running on the same LAN.
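If you do end up needing the relay, a typical setup (the control-socket address and RTP port range below are only examples, not taken from the tutorial) is to run rtpproxy and point Kamailio's rtpproxy module at its control socket; note that the stock kamailio.cfg already contains such a block behind the WITH_NAT define:
~$ rtpproxy -l <public_ip_of_your_server> -s udp:127.0.0.1:7722 -m 10000 -M 20000
and in kamailio.cfg:
loadmodule "rtpproxy.so"
modparam("rtpproxy", "rtpproxy_sock", "udp:127.0.0.1:7722")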
In order to test calls, you will need to create a second user (test2?) and configure another PC to use this account. Then, in Voip By Antisip for Windows, use the start-call box and enter sip:test2@mysipserver.com. The network capture should show an INVITE being sent to your server. This INVITE should be relayed to the second user and received by test2's SIP application.
If your SIP server is up and running, go ahead and use an Android phone to test whether it works fine. You can use the 'CSipSimple' client to connect to a SIP server. For more details, check out this tutorial.
And there are other SIP clients available for various devices: PC, Android, iOS, etc.
I am sending events from Gatling using the Graphite protocol on the default port 2003. The whole setup is local (including InfluxDB and Grafana). Now I want to verify, in the Gatling logs, that events are actually passing through port 2003. How do I verify that? In the Gatling debug logs I am not finding anything related to Graphite or port 2003.
Please help. Also let me know if you want me to add more info.
I wanted to continue in the previous question... but let's continue here. To see what data Gatling sends, you can use the netcat (nc) utility.
It will listen for incoming data on port 2003:
nc -k -l 2003
(don't forget to stop InfluxDB first, or use a different port in both nc and Gatling's configuration)
You can also emulate Gatling's data without a test run and send it directly to InfluxDB:
echo "gatling.example.get_request.all.percentiles99 155 1615558965" | nc localhost 2003
I am looking to use asyncssh with python3.7 (asyncio)
Here is what I want to build:
A remote device would run a client that does a call-home to a centralized server. I want the server to be able to execute commands on the client using reverse SSH tunnels over the incoming connection. I cannot use forward SSH (regular SSH) because the client could be behind NAT and the server might not know the address of the client. I prefer the client doing a call-home and then the server managing the client.
The program for a POC should use python3 + an async implementation of SSH. I see asyncssh as the only viable choice (please suggest if you have an alternative):
Client: Connects to the server and accepts reverse SSH tunnels to be opened on the same outbound connection.
Server: Accepts the connection from the client and keeps the session open. The server then opens reverse SSH tunnels to the client. For example, the server program should open 3 reverse SSH tunnels on the incoming connection. Each of these tunnels would run one command: ['ls', 'sleep 30 && date', 'sleep 5 && cat /proc/cpuinfo']
The server program should print the received response for each of these commands (one should come back almost immediately, another after 5 seconds and another after 30).
I looked at the documentation, and I could not see examples of using multiple reverse ssh tunnels.
Does anyone have experience using this? Can you point me to examples?
The developer of asyncssh has provided an example:
As of now, this is in the develop branch. I have tested it and it does the job perfectly!
https://asyncssh.readthedocs.io/en/develop/#reverse-direction-example
[If you are checking this after a while, you might find it in master documentation.]
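For reference, here is a rough, untested sketch of the server side of that pattern, based on the linked reverse-direction example. It assumes an asyncssh version that provides connect_reverse()/listen_reverse(), that the device calls home to port 8022 using connect_reverse(), and that host-key and authentication options are set up for your environment; the acceptor callback and the port number are assumptions taken from the documented example:
import asyncio
import asyncssh

COMMANDS = ['ls', 'sleep 30 && date', 'sleep 5 && cat /proc/cpuinfo']

async def run_one(conn, cmd):
    # Each conn.run() opens its own session on the reverse connection,
    # so the three commands run concurrently and print as they finish.
    result = await conn.run(cmd)
    print('---', cmd, '(exit status', result.exit_status, ')')
    print(result.stdout, end='')

async def handle_device(conn):
    # Called once a device's call-home connection has completed the handshake.
    async with conn:
        await asyncio.gather(*(run_one(conn, cmd) for cmd in COMMANDS))

async def main():
    # Wait for devices to call home on port 8022 and act as the SSH "client"
    # on each inbound TCP connection (reverse direction).
    await asyncssh.listen_reverse(port=8022, known_hosts=None,
                                  acceptor=handle_device)
    await asyncio.Event().wait()  # keep the listener running

asyncio.run(main())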
I am trying to set up a Google TCP internal load balancer. The instance group behind this LB consists of redis-server processes listening on port 6379. Out of these Redis instances, only one is the master.
Problem: Add a TCP health check to detect the Redis master and make the LB divert all traffic to the Redis master only.
Approach:
Added a TCP health check for port 6379.
In order to send the role command to the redis-server process and parse the response, I am using the optional params provided in the health check. Please check the screenshot here.
Result: The health check fails for all instances. If I remove the optional request/response params, the health check starts passing for all of them.
Debugging:
Connected to the LB using netcat and issued the role command; it sends a response starting with *3 (for master) and *5 (for slave), as expected.
Logged into an instance and stopped the redis-server process. Started listening on port 6379 using nc -l -p 6379 to check what exactly is received on the instance's side during the health check. It does receive role\r\n.
After step 2, restarted redis-server and ran the MONITOR command in redis-cli to watch the log of commands received by the process. There is no log of role here.
This means the instance is receiving the data (role\r\n) over TCP, but it does not reach the redis-server process (as seen via the MONITOR command), or something else is happening. Please help.
Unfortunately, GCP's TCP health check is pretty limited in what can be checked in the response. From https://cloud.google.com/sdk/gcloud/reference/compute/health-checks/create/tcp:
--response=RESPONSE
An optional string of up to 1024 characters that the health checker expects to receive from the instance. If the response is not received exactly, the health check probe fails. If --response is configured, but not --request, the health checker will wait for a response anyway. Unless your system automatically sends out a message in response to a successful handshake, only configure --response to match an explicit --request.
Note the word "exactly" in the help message. The response has to match the provided string in full. One can't specify a partial string to search for in the response.
As you can see on https://redis.io/commands/role, redis's ROLE command returns a bunch of text. Though the substring "master" is present in the response, it also has a bunch of other text that would vary from setup to setup (based on the number of slaves, their addresses, etc.).
You should definitely raise a feature request with GCP for regex matching on the response. A possible workaround until then is to have a little web app on each host that runs "redis-cli role | grep master" locally and returns the result; then the health check can be configured to monitor this web app instead.
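For illustration, a minimal sketch of such a sidecar using only the Python standard library (the port 8080 and the reliance on redis-cli being on the PATH are assumptions, and this is not a hardened implementation):
# redis_master_check.py - reply 200 only when the local Redis reports "master",
# so an HTTP health check against this port passes only on the master instance.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class RoleHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        out = subprocess.run(['redis-cli', 'role'],
                             capture_output=True, text=True).stdout
        lines = out.splitlines()
        # The first line of the ROLE reply names the role ("master" or "slave").
        is_master = bool(lines) and 'master' in lines[0]
        body = b'master\n' if is_master else b'not master\n'
        self.send_response(200 if is_master else 503)
        self.send_header('Content-Type', 'text/plain')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == '__main__':
    HTTPServer(('', 8080), RoleHandler).serve_forever()
The load balancer's health check would then be an HTTP check against port 8080 instead of a TCP check against 6379.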
I have a pressure sensor plugged into my computer, and the only way to collect the data is through a localhost API endpoint, meaning right now only that machine can collect data. Is there any way to receive data from the localhost API on a different machine? I also need to ping the API 20-40 times a second if that matters.
There are a couple of ways I can think of; I am assuming both machines are on the same network.
Use the localhost API to collect the data into a database and create a GET endpoint inside the same application for fetching the data according to your parameters. You can access the GET endpoint from a different machine by hitting the network IP address of your local machine, which you can check using the ifconfig command in your terminal; look at the en0 entry, where you will find something like 192.168.X.X. From the other machine you can hit http://192.168.X.X:<port>/getData, where <port> is the localhost port. (A minimal sketch of this idea is shown after the publish/subscribe explanation below.)
If you don't want to use a database, you can use a publish/subscribe mechanism, which is real-time. See http://autobahn.ws/python/
How does publish/subscribe work?
You will have to make your localhost machine a publisher (server), which will publish events (sensor data in your case) in real time. The other machine will be a subscriber (client), which will listen to the events from your server and do the necessary processing.
It uses WAMP (Web Application Messaging Protocol) for communication. Sample code for a basic publisher and subscriber can be found here.
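For the first approach, skipping the database and simply relaying the latest reading, a minimal sketch could look like the following. It assumes Flask and requests are installed and that the sensor's local endpoint is http://localhost:9000/pressure, which is purely a placeholder; adapt the URL, port and response handling to your actual API:
# relay.py - expose the localhost-only sensor API to other machines on the LAN.
import requests
from flask import Flask, jsonify

app = Flask(__name__)
SENSOR_URL = 'http://localhost:9000/pressure'  # placeholder for your real local endpoint

@app.route('/getData')
def get_data():
    # Fetch the latest reading from the local sensor API and relay it as JSON.
    reading = requests.get(SENSOR_URL, timeout=1).json()
    return jsonify(reading)

if __name__ == '__main__':
    # Bind to 0.0.0.0 so another machine can poll http://192.168.X.X:5000/getData
    app.run(host='0.0.0.0', port=5000, threaded=True)
At 20-40 requests per second the built-in development server may be enough for a test, but a proper WSGI server (gunicorn, waitress) or the publish/subscribe approach above would be a safer long-term choice.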
Follow these steps:
1: Download ngrok.
2: Go to the path where the ngrok.exe file is present and open that path in cmd.
3: Connect your account.
paste: ngrok authtoken 1pA6advIt950uA4y2Rixgc8rdx9_23MSDokKjWhbPUW3NSrZK
4: Replace the port number, including the braces, with your own port.
paste: ngrok http {9003} -host-header="localhost:{9003}"
5: Copy the forwarding line and paste it on the other system to check.
Forwarding http://d1c0bc16ff7b.ngrok.io
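Once ngrok is running, a quick check from the other system (the /getData path is only a placeholder for whatever your local API actually exposes):
curl http://d1c0bc16ff7b.ngrok.io/getData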
For some reason when I try to use get or put from a Solaris box to an IBM mainframe, the ftp client appears to hang.
I've tried all sorts of variations (for example, with and without quotes), and all I ever get is a "200 Port request OK". But I never get the prompt back, and eventually the connection breaks.
ftp> open ibm.some_server
Connected to ibm.some_server
230 USER1 is logged on. Working directory is "USER1.".
Remote system type is MVS.
ftp> cd 'Z.TABS.'
250 "Z.TABS." is the working directory name prefix.
ftp> get 'SAMASCPY' samas.txt
200 Port request OK.
Anyone know what could be going on?
You need to enable passive mode. With Solaris 10's ftp:
ftp> passive
Passive mode on.
The FTP protocol as originally defined makes the server open a connection back to the client when a file transfer is initiated. That's what the PORT command in your question shows -- the client requested that the server connect back to its address on a specific port number. These days, with ubiquitous firewalls and NAT, that rarely works.
Enabling passive mode makes the client open the data connection to the server as well, which fixes this issue. Most FTP clients now use passive mode by default; Solaris' does not.
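Schematically (simplified and illustrative only), the two data-connection setups differ like this:
Active (the default):
  client -> server:  PORT h1,h2,h3,h4,p1,p2
  the server then opens the data connection back to the client's address/port (often blocked by NAT/firewalls)
Passive:
  client -> server:  PASV
  server -> client:  227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)
  the client then opens the data connection out to that server address/port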