I am using PHP 5.3.10 and Asterisk 1.8.22.0.
I register an a2billing customer in a softphone and dial a number.
In the Asterisk console I get the following result:
<SIP/myip-0000004c>AGI Tx >> 200 result=1
<SIP/myip-0000004c>AGI Rx << Connection failed
<SIP/myip-0000004c>AGI Tx >> 510 Invalid or unknown command
[Dec 30 07:59:16] ERROR[28331]: utils.c:1343 ast_carefulwrite: write() returned error: Broken pipe
[Dec 30 07:59:16] ERROR[28331]: utils.c:1343 ast_carefulwrite: write() returned error: Broken pipe
-- <SIP/myip-0000004c>AGI Script a2billing.php completed, returning 0
Does anybody have any idea what the issue is?
The AGI script receives the correct credentials when it tries to connect, and with those credentials I can connect to MySQL manually, but from the Asterisk CLI I get the connection-failed error.
Thanks in advance.
You need to install php5-adodb and libphp-adodb.
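On a Debian/Ubuntu system (an assumption; adjust the package manager for your distribution), that is:

sudo apt-get install php5-adodb libphp-adodb

a2billing's a2billing.php AGI script talks to MySQL through the ADOdb abstraction layer, so a missing ADOdb install would explain getting "Connection failed" even though the credentials themselves are correct.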
We just installed Rancher Desktop 1.4.1 (nerdctl v0.20.0) on Windows 10, and we seem to have a problem pulling images and logging into a registry:
nerdctl pull alpine
docker.io/library/alpine:latest: resolving |--------------------------------------|
elapsed: 9.9 s total: 0.0 B (0.0 B/s)
INFO[0010] trying next host error="failed to do request: Head \"https://registry-1.docker.io/v2/library/alpine/manifests/latest\": dial tcp: lookup registry-1.docker.io on 192.168.167.172:53: read udp 192.168.167.172:47744->192.168.167.172:53: i/o timeout" host=registry-1.docker.io
FATA[0010] failed to resolve reference "docker.io/library/alpine:latest": failed to do request: Head "https://registry-1.docker.io/v2/library/alpine/manifests/latest": dial tcp: lookup registry-1.docker.io on 192.168.167.172:53: read udp 192.168.167.172:47744->192.168.167.172:53: i/o timeout
Trying to log in results in similar errors:
nerdctl --debug-full login registry-1.docker.io
/usr/local/bin/docker-credential-rancher-desktop: source: line 5: can't open '/etc/rancher/desktop/credfwd': No such file or directory
Enter Username: myusername
Enter Password:
DEBU[0030] Ignoring hosts dir "/etc/containerd/certs.d" error="stat /etc/containerd/certs.d: no such file or directory"
DEBU[0030] Ignoring hosts dir "/etc/docker/certs.d" error="stat /etc/docker/certs.d: no such file or directory"
DEBU[0030] len(regHosts)=1
ERRO[0040] failed to call tryLoginWithRegHost error="failed to call rh.Client.Do: Get \"https://registry-1.docker.io/v2/\": dial tcp: lookup registry-1.docker.io on 192.168.167.172:53: read udp 192.168.167.172:36590->192.168.167.172:53: i/o timeout" i=0
FATA[0040] failed to call rh.Client.Do: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on 192.168.167.172:53: read udp 192.168.167.172:36590->192.168.167.172:53: i/o timeout
It looks like nerdctl is having problems resolving hostnames; it always times out after 10 seconds.
Is there a way to explicitly configure hostname resolution in Rancher or nerdctl?
Any help would be appreciated.
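One workaround that is often suggested for Rancher Desktop's WSL2 backend (an assumption about your setup) is to pin a working DNS server inside the rancher-desktop distribution instead of relying on the auto-generated resolv.conf:

wsl -d rancher-desktop
printf '[network]\ngenerateResolvConf = false\n' > /etc/wsl.conf
printf 'nameserver 8.8.8.8\n' > /etc/resolv.conf

Then run wsl --shutdown from Windows and restart Rancher Desktop. 8.8.8.8 is only an example; use whichever DNS server is actually reachable from your network.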
I have an Iroha node running on my local Ubuntu machine with Docker, and I am able to run all commands using the Docker shell.
I want to use the JS implementation of Iroha, so I ran the Dockerfile for gRPC, but it is not able to connect to Iroha.
Error:
WARN[1672] [core] grpc: addrConn.createTransport failed to connect to {dev.localdomain:50051 dev.localdomain:50051 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:50051: connect: connection refused". Reconnecting... system=system
(screenshot: gRPC console)
I resolved the GrpcWebProxy issue by making some changes to the already-provided solution; you can now see it here:
https://github.com/AqeelKazmi/IrohaDockerServices
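For the connection-refused error itself, note that inside a container 127.0.0.1 refers to the container, not to the host where Iroha publishes port 50051. If the gRPC web proxy runs in its own container (an assumption here), a quick test is to put it on the host network:

docker run --network host <your-grpc-proxy-image>

or to point it at the host's address (e.g. host.docker.internal:50051 on Docker Desktop) instead of dev.localdomain:50051. <your-grpc-proxy-image> is a placeholder for whatever image your Dockerfile builds.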
I submitted a job via Slurm. The job ran for 12 hours and was working as expected. Then I got "Data unpack would read past end of buffer in file util/show_help.c at line 501". It is usual for me to get errors like "ORTE has lost communication with a remote daemon", but I usually get those at the beginning of the job; that is annoying but still does not cost as much time as getting an error after 12 hours. Is there a quick fix for this? The Open MPI version is 4.0.1.
--------------------------------------------------------------------------
By default, for Open MPI 4.0 and later, infiniband ports on a device
are not used by default. The intent is to use UCX for these devices.
You can override this policy by setting the btl_openib_allow_ib MCA parameter
to true.
Local host: barbun40
Local adapter: mlx5_0
Local port: 1
--------------------------------------------------------------------------
--------------------------------------------------------------------------
WARNING: There was an error initializing an OpenFabrics device.
Local host: barbun40
Local device: mlx5_0
--------------------------------------------------------------------------
[barbun21.yonetim:48390] [[15284,0],0] ORTE_ERROR_LOG: Data unpack would read past end of buffer in
file util/show_help.c at line 501
[barbun21.yonetim:48390] 127 more processes have sent help message help-mpi-btl-openib.txt / ib port
not selected
[barbun21.yonetim:48390] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error
messages
[barbun21.yonetim:48390] 126 more processes have sent help message help-mpi-btl-openib.txt / error in
device init
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
An MPI communication peer process has unexpectedly disconnected. This
usually indicates a failure in the peer process (e.g., a crash or
otherwise exiting without calling MPI_FINALIZE first).
Although this local MPI process will likely now behave unpredictably
(it may even hang or crash), the root cause of this problem is the
failure of the peer -- that is what you need to investigate. For
example, there may be a core file that you can examine. More
generally: such peer hangups are frequently caused by application bugs
or other external events.
Local host: barbun64
Local PID: 252415
Peer host: barbun39
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[15284,1],35]
Exit code: 9
--------------------------------------------------------------------------
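The two banners in the log already point at a workaround: either explicitly allow the openib BTL to use the InfiniBand ports, or steer Open MPI to UCX so openib is never initialized. Assuming your Open MPI 4.0.1 was built with UCX support (check with ompi_info), either of the following should silence the warnings:

mpirun --mca btl_openib_allow_ib true ./your_app
mpirun --mca pml ucx --mca btl ^openib ./your_app

./your_app stands in for your actual executable. Whether this also prevents the "Data unpack would read past end of buffer" failure after 12 hours is less certain, but since that error occurs in show_help.c (the help-message aggregation code), eliminating the flood of openib help messages is a reasonable first step.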
I am running Airflow tasks using the SSH operator. I am pretty sure that the Python program has no errors and runs successfully when I run it directly. But when it is run from Airflow, I end up with a SIGTERM error towards the end of the program's execution.
I tried to figure it out by looking into various solutions, but nothing worked. I tried increasing killed_task_cleanup_time from 60 to 1200 in the airflow.cfg file. I also tried changing hostname_callable to socket:gethostname in airflow.cfg, as I received the following warning before this error:
Warning: The recorded hostname xxx does not match this instance's hostname
Error:
[2020-10-15 10:45:34,937] {taskinstance.py:954} ERROR - Received SIGTERM. Terminating subprocesses.
[2020-10-15 10:45:34,959] {taskinstance.py:1145} ERROR - SSH operator error: Task received SIGTERM signal
Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.7/site-packages/airflow/contrib/operators/ssh_operator.py", line 137, in execute
readq, _, _ = select([channel], [], [], self.timeout)
File "/opt/anaconda3/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 956, in signal_handler
raise AirflowException("Task received SIGTERM signal")
airflow.exceptions.AirflowException: Task received SIGTERM signal
Any ideas and suggestions would be really helpful. I have been stuck with this for a day now.
This problem is triggered by the fact that the recorded hostname xxx maps to an IP address different from the one the instance's hostname maps to, which causes the SIGTERM error. So you need to specify the IP mapping for the recorded hostname xxx explicitly.
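For example, in /etc/hosts on the affected machine (10.0.0.5 and xxx are placeholders for the real IP address and the recorded hostname):

10.0.0.5    xxx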
Possibly this thread might help? https://issues.apache.org/jira/browse/AIRFLOW-966.
Which version of Airflow are you using, and did you check your Celery broker settings?
The solution seems to be setting the visibility timeout higher than the Celery default, which is 1 hour, to prevent Celery from re-submitting the job. I believe this only affects tasks created via a manual run / the CLI (not normally scheduled tasks).
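In airflow.cfg that would look something like this (86400 seconds, i.e. 24 hours, is only an example; pick a value longer than your longest-running task):

[celery_broker_transport_options]
visibility_timeout = 86400

On older Airflow releases the option may live in a different section, so check the reference configuration for your version.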
When I restart Asterisk and connect to the Asterisk console with
asterisk -rv
Asterisk spits out thousands of
WARNING[4695]: chan_dahdi.c:12320 do_monitor: Read failed with -1: Invalid argument
After some minutes of this flood of messages, it quits with:
*CLI>
Disconnected from Asterisk server
Asterisk cleanly ending (0).
Executing last minute cleanups
dmesg shows:
[ 3561.591539] asterisk[4695]: segfault at b3150fec ip b73a4b8d sp b3150ff0 error 6 in libc-2.19.so[b7334000+1a8000]
I have no idea how to deal with this, and I can find no similar error reported anywhere.
I am using a Digium TDM410P telephony card with two FXO and two FXS interfaces.
DAHDI Linux drivers older than 2.10.0.1 are incompatible with Linux kernel 3.16+.
See the complete problem description and fix in the Asterisk Jira:
https://issues.asterisk.org/jira/browse/DAHLIN-340
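To confirm you are on an affected combination before upgrading (assuming DAHDI is loaded as a kernel module):

uname -r                          # affected if 3.16 or newer
modinfo dahdi | grep -i version   # affected if older than 2.10.0.1

If both match, upgrading dahdi-linux to 2.10.0.1 or later should resolve the segfault, per the Jira issue above.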