Good day. The call file details are:
Channel: SIP/voipswitch/971556710034
MaxRetries: 6
RetryTime: 20
WaitTime: 30
Context: default
Extension: 971556710034
Priority: 1
With these settings I can connect a call to my phone. Once the call is answered, Asterisk deletes the file and stops calling. But I want it to call my phone every 30 minutes, and I will answer every time.
Can someone please help me do this?
Put this line in /etc/crontab
*/30 * * * * asterisk cp /path/to/call/file /var/spool/asterisk/outgoing
This will copy call file to asterisk outgoing directory every 30 min.
Don't forget to restart the cron service (it is typically named cron on Debian/Ubuntu or crond on RHEL/CentOS, not crontab):
service cron restart
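One caveat: if Asterisk scans the outgoing directory while cp is still writing, it can pick up a half-written call file. A safer pattern is to copy the file somewhere on the same filesystem first and then mv it into place, since the rename is atomic. Here is a minimal sketch of such a wrapper script; the source path and file names are placeholders, and /var/spool/asterisk/tmp is assumed to exist (it does on a default install):
#!/bin/sh
# Hypothetical wrapper to run from cron instead of a bare cp.
SRC=/path/to/call/file
TMP=/var/spool/asterisk/tmp/recurring.call
DEST=/var/spool/asterisk/outgoing/recurring-$(date +%s).call
cp "$SRC" "$TMP"                   # stage the copy outside the outgoing dir
chown asterisk:asterisk "$TMP"     # make sure Asterisk can read and delete it
mv "$TMP" "$DEST"                  # atomic rename: Asterisk only ever sees a complete file
The crontab entry would then run this script every 30 minutes instead of the plain cp command.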
I have set up my outbound emails on Phabricator by following this guide.
However, my emails don't arrive; they all stay queued. When I look at the daemons in the Phabricator UI, I see that several tasks are failing. They all look like this:
Task 448: PhabricatorMetaMTAWorker
Task Status: Queued
Task Class: PhabricatorMetaMTAWorker
Lease Status: Leased
Lease Owner: 13195:1624502950:mail.icicbcoin.com:11
Lease Expires: 1 h, 59 m
Duration: Not Completed
Data: phabricator/ $ ./bin/mail show-outbound --id 154
Retries
Failure Count: 5
Maximum Retries: 250
Retries After: 1 m, 2 m, 4 m, 6 m, 8 m, 11 m, 14 m, 17 m, 20 m, 23 m, 27 m, ...
I'm curious about this Data part. To me it sounds like Phabricator fails when running this command, which is weird, because if I run ./bin/mail show-outbound --id 154 manually I get this:
ID: 154
Status: queued
Related PHID:
Message: fputs(): send of 28 bytes failed with errno=32 Broken pipe
PARAMETERS
sensitive: 1
mustEncrypt:
subject: [Phabricator] Welcome to Phabricator
to: ["PHID-USER-qezqlvc7rxton2lshjue"]
force: 1
HEADERS
TEXT BODY
Welcome to Phabricator!
admin (John Doe) has created an account for you.
Username: some.person
To log in to Phabricator, follow this link and set a password:
http://phabricator.innolabsolutions.rs/login/once/welcome/9/b2jf7j6mg5xomwjhmcfcxbigs7474jyq/10/
After you have set a password, you can log in to Phabricator in the future by going here:
http://phabricator.innolabsolutions.rs/
Love,
Phabricator
HTML BODY
(This message has no HTML body.)
Actually, the problem was the SMTP server configuration, even though this error didn't point to that. I changed the SMTP port from 465 to 587, restarted the daemons, and it worked.
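For reference, the change lives in the mailer configuration. This is only a sketch, assuming the newer cluster.mailers setup with a plain SMTP mailer; the host, user, and password are placeholders, and the exact option names should be checked against the Phabricator outbound-email documentation:
phabricator/ $ ./bin/config set cluster.mailers '[
  {
    "key": "smtp-mailer",
    "type": "smtp",
    "options": {
      "host": "smtp.example.com",
      "port": 587,
      "user": "phabricator@example.com",
      "password": "secret",
      "protocol": "tls"
    }
  }
]'
phabricator/ $ ./bin/phd restart
Restarting the daemons with ./bin/phd restart is what lets the queued PhabricatorMetaMTAWorker tasks pick up the new settings.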
I had the same problem twice.
The second time, it was because I could not resolve the SMTP server name:
$ ping gandi.net
ping: gandi.net: Temporary failure in name resolution
Then I added a DNS server in /etc/resolv.conf:
nameserver 127.0.0.1
nameserver 8.8.8.8 # <--- added
search home
and restarted the service
sudo service systemd-resolved restart
Right after, Phabricator automatically sent all the queued emails.
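If you hit the same thing, a quick sanity check (using the same host as above) is to confirm that name resolution works again and then watch the queue drain; bin/mail list-outbound shows the status of recent messages:
getent hosts gandi.net                    # should now print an address instead of failing
phabricator/ $ ./bin/mail list-outbound   # queued messages should move to sent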
I'm running a DAG that runs once per day. It starts with 9 concurrently running tasks that all do the same thing - each basically polls S3 to see whether its one designated file exists. Each task uses the same code and is wired into the DAG in the same way. I have one of these tasks which, on random days, fails to "begin" - it never enters the running state and just sits as queued. When it does this, here's what its log says:
*** Log file isn't local.
*** Fetching here: http://:8793/log/my.dag.name./my_airflow_task/2020-03-14T07:00:00
*** Failed to fetch log file from worker.
*** Reading remote logs...
Could not read logs from s3://mybucket/airflow/logs/my.dag.name./my_airflow_task/2020-03-14T07:00:00
Why does this only happen on random days? All similar questions I've seen point to this error happening consistently and, once overcome, not recurring. To "trick" this task into running, I manually touch whatever the log file is supposed to be named, and then it changes to running.
So the issue appears to have been the ownership of the folder that this particular task's logs are written to. I used a CI tool to ship the new task_3 when I updated my Airflow Python code to the production environment, so the directory was created that way. When I peeked at the log directory ownership, I noticed this for the tasks:
# inside/airflow/log/dir:
drwxrwxr-x 2 root root 4096 Mar 25 14:53 task_3 # is the offending task
drwxrwxr-x 2 airflow airflow 20480 Mar 25 00:00 task_2
drwxrwxr-x 2 airflow airflow 20480 Mar 25 15:54 task_1
So I think what was going on was that, randomly, Airflow couldn't get permission to write the log file, and thus wouldn't start the rest of the task. I fixed it by applying the appropriate chown, something like sudo chown -R airflow:airflow task_3. Ever since I changed this, the issue has disappeared.
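If more than one task directory is affected, something like the following will find and fix every per-task log folder that is not owned by the airflow user (the log root path is a placeholder for wherever your installation writes per-task logs):
# Hypothetical log root; adjust to your installation.
cd /path/to/airflow/logs/my.dag.name.
# Find directories not owned by airflow and hand them over in one pass.
sudo find . -maxdepth 1 -type d ! -user airflow -exec chown -R airflow:airflow {} +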
To monitor a log file, I have to connect over ssh and redirect the output of the log file (let's call it RemoteLog.txt) to a local machine so it can be read by a Java program and shown in a GUI.
Right now I redirect the output out of the ssh connection and onto the local machine with the command:
ssh remote#ip.address tail logs/RemoteLog.txt -f > ~/Log/LocalLog.txt
and technically everything works fine, with one exception: for some reason LocalLog.txt only gets updated with the changes to RemoteLog.txt every 35 seconds, to the millisecond.
It doesn't matter how many changes are made to RemoteLog, how many lines are specified with the tail command, or whether the >> or > operator is used; there is always a 35-second delay between updates of LocalLog.txt while RemoteLog is constantly updating.
Does anyone have any clue why this might be?
I'm using rsync on Solaris and couldn't find an exit code for the case where no file or folder was modified, added, or deleted on the destination. How can I get that status if rsync doesn't have an exit code for it? These are the documented exit codes:
0 Success
1 Syntax or usage error
2 Protocol incompatibility
3 Errors selecting input/output files, dirs
4 Requested action not supported: an attempt was made to manipulate 64-bit
files on a platform that cannot support them; or an option was specified
that is supported by the client and not by the server.
5 Error starting client-server protocol
6 Daemon unable to append to log-file
10 Error in socket I/O
11 Error in file I/O
12 Error in rsync protocol data stream
13 Errors with program diagnostics
14 Error in IPC code
20 Received SIGUSR1 or SIGINT
21 Some error returned by waitpid()
22 Error allocating core memory buffers
23 Partial transfer due to error
24 Partial transfer due to vanished source files
25 The --max-delete limit stopped deletions
30 Timeout in data send/receive
35 Timeout waiting for daemon connection
Thank you
There is a workaround:
rsync --log-format=%f ...
Note that rsync lists a file any time any of its attributes change, not only when the content of the file is updated.
There is also a -i option (or --log-format=%i) that itemizes all of the changes. See the rsync man page for details of the output format.
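Building on that, here is a sketch of how the itemized output can stand in for the missing "nothing changed" exit code. The paths are placeholders; -a preserves times and permissions, so unchanged files produce no output, and with --delete any removals show up as "*deleting" lines:
# Capture the per-file change list; if it is empty, the destination was already up to date.
changes=$(rsync -ai --delete /src/dir/ /dest/dir/)
if [ -z "$changes" ]; then
    echo "no modifications, additions or deletions on the destination"
else
    echo "rsync made changes:"
    printf '%s\n' "$changes"
fi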
I am trying to make an outgoing call from an Asterisk PBX using a .call file, but every time the .call file is moved into the outgoing folder my CLI shows:
[Jun 16 15:38:12] NOTICE[30435]: pbx_spool.c:372 attempt_thread: Call failed to go through, reason (1) Hangup
[Jun 16 15:38:12] NOTICE[30435]: pbx_spool.c:375 attempt_thread: Queued call to DAHDI/g0/09716927126 expired without completion after 0 attempts
-- Span 1: Channel 0/1 got hangup request, cause 16
-- Hungup 'DAHDI/i1/09711590094-103a'
[Jun 16 15:38:17] NOTICE[30434]: pbx_spool.c:372 attempt_thread: Call failed to go through, reason (1) Hangup
[Jun 16 15:38:17] NOTICE[30434]: pbx_spool.c:375 attempt_thread: Queued call to DAHDI/g0/09711590094 expired without completion after 0 attempts
-- Attempting call on DAHDI/g0/09711590094 for 4759509#outgoing1:1 (Retry 1)
-- Attempting call on DAHDI/g0/09716927126 for 4759509#outgoing1:1 (Retry 1)
-- Requested transfer capability: 0x00 - SPEECH
-- Requested transfer capability: 0x00 - SPEECH
-- Span 1: Channel 0/2 got hangup request, cause 31
-- Hungup 'DAHDI/i1/09716927126-103d'
My .call file:
Channel: DAHDI/g0/09711590094
MaxRetries: 1
RetryTime: 600
WaitTime: 30
Context: outgoing1
Extension: 10
Priority: 1
The call could not be connected. Does anybody know what the possible reason for that could be?
Thanks in advance
This error means the call could not be placed via DAHDI/g0 as requested. Very likely your DAHDI card is not configured correctly.
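If that is the case, a few standard DAHDI/Asterisk diagnostics (nothing specific to this setup) will show whether the span and channels actually came up:
dahdi_cfg -vv                       # re-apply /etc/dahdi/system.conf and print the channel map
asterisk -rx "dahdi show status"    # span alarms should read OK, not RED or YELLOW
asterisk -rx "pri show spans"       # for a PRI card, the span should show Up, Active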