# NOTE: $FTP, archive_file, and log_stop are presumably defined elsewhere in the script.
WORK_FILE=RetriesExceeded.csv
MAIL="test@test.org"
HOST=lawsonfax

# Fetch the CSV from the fax server in a scripted FTP session.
$FTP -v "$HOST" << EOF
get RetriesExceeded.csv
quit
EOF

archive_file $WORK_FILE

# Mail the file as a MIME attachment.
/law/bin/mpack -s 'Fax Retries Exceeded' "$WORK_FILE" "$MAIL"
log_stop
exit 0
The newest error is at the bottom ("No such file or directory"):
[dgftp@lawapp2]/lawif/bin$ get_lawson_fax.ksh
Connected to lawsonfax.phsi.promedica.org.
220 Microsoft FTP Service
331 Password required for dgftp.
230 User logged in.
200 PORT command successful.
125 Data connection already open; Transfer starting.
226 Transfer complete.
352 bytes received in 0.04171 seconds (8.242 Kbytes/s)
local: RetriesExceeded.csv remote: RetriesExceeded.csv
221 Goodbye.
RetriesExceeded.csv: No such file or directory
[dgftp@lawapp2]/lawif/bin$
The last command is currently:
CMD="/law/bin/mpack -s 'Fax Retries Exceeded' $WORK_FILE $MAIL"
Suggested change:
/law/bin/mpack -s 'Fax Retries Exceeded' "$WORK_FILE" "$MAIL"
Of course, this only applies if you actually have the /law/bin/mpack program.
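The quotes matter because the shell word-splits unquoted expansions; a quick illustration (the space in the filename is invented for the example):

WORK_FILE="Retries Exceeded.csv"      # hypothetical filename containing a space
/law/bin/mpack -s 'Fax Retries Exceeded' $WORK_FILE "$MAIL"    # unquoted: word-splits into two arguments, "Retries" and "Exceeded.csv"
/law/bin/mpack -s 'Fax Retries Exceeded' "$WORK_FILE" "$MAIL"  # quoted: the filename is passed as a single argument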
We just installed Rancher Desktop 1.4.1 (nerdctl v0.20.0) on Windows 10 and we seem to have a problem pulling images and logging into a registry:
nerdctl pull alpine
docker.io/library/alpine:latest: resolving |--------------------------------------|
elapsed: 9.9 s total: 0.0 B (0.0 B/s)
INFO[0010] trying next host error="failed to do request: Head \"https://registry-1.docker.io/v2/library/alpine/manifests/latest\": dial tcp: lookup registry-1.docker.io on 192.168.167.172:53: read udp 192.168.167.172:47744->192.168.167.172:53: i/o timeout" host=registry-1.docker.io
FATA[0010] failed to resolve reference "docker.io/library/alpine:latest": failed to do request: Head "https://registry-1.docker.io/v2/library/alpine/manifests/latest": dial tcp: lookup registry-1.docker.io on 192.168.167.172:53: read udp 192.168.167.172:47744->192.168.167.172:53: i/o timeout
Trying to log in results in similar errors:
nerdctl --debug-full login registry-1.docker.io
/usr/local/bin/docker-credential-rancher-desktop: source: line 5: can't open '/etc/rancher/desktop/credfwd': No such file or directory
Enter Username: myusername
Enter Password:
DEBU[0030] Ignoring hosts dir "/etc/containerd/certs.d" error="stat /etc/containerd/certs.d: no such file or directory"
DEBU[0030] Ignoring hosts dir "/etc/docker/certs.d" error="stat /etc/docker/certs.d: no such file or directory"
DEBU[0030] len(regHosts)=1
ERRO[0040] failed to call tryLoginWithRegHost error="failed to call rh.Client.Do: Get \"https://registry-1.docker.io/v2/\": dial tcp: lookup registry-1.docker.io on 192.168.167.172:53: read udp 192.168.167.172:36590->192.168.167.172:53: i/o timeout" i=0
FATA[0040] failed to call rh.Client.Do: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on 192.168.167.172:53: read udp 192.168.167.172:36590->192.168.167.172:53: i/o timeout
It looks like nerdctl is having problems resolving hostnames; it always times out after 10 seconds.
Is there a way to explicitly configure hostname resolution in Rancher or nerdctl?
Any help would be appreciated.
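One place to start (a sketch, assuming the usual Windows setup where Rancher Desktop runs inside a WSL distro named rancher-desktop) is to check which resolver the VM is actually using, since the failing lookups above all go to 192.168.167.172:53:

# From the Windows host: inspect the resolver config inside the Rancher Desktop distro.
wsl -d rancher-desktop cat /etc/resolv.conf
# If nslookup is available in the distro, test the failing lookup directly.
wsl -d rancher-desktop nslookup registry-1.docker.io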
I used FileZilla to connect to one of my Linux servers via the SFTP protocol, but got the error trace below.
Status: Connecting to <server_ip>...
Response: fzSftp started, protocol_version=5
Command: keyfile "C:\ruifeng_ibm.ppk"
Command: open "root@<server_ip>" 22
Status: Connected to <server_ip>
Error: Connection timed out after 20 seconds of inactivity
Error: Could not connect to server
On the server when I ran lsof -i, I was able to see the established sshd connection.
sshd 12333 root 3u IPv4 109406 0t0 TCP <server_hostname>:ssh-><workstation_ip>:54315 (ESTABLISHED)
How can the connection be established and yet the directories not be listed? I have no idea how to debug this either.
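A quick test for this class of problem, assuming an OpenSSH client on the workstation (the .ppk key would need converting to OpenSSH format first): SFTP requires a clean byte stream, so anything printed by the command below comes from shell startup files and will break the session.

# Run a no-op command over SSH; any output here is stray startup-file output.
ssh root@<server_ip> /bin/true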
It turned out to be a silly problem.
I had put the welcome message below in my .bashrc file:
echo -e "\n\nHello Ruifeng...Welcome to the Arena! \n#>>------>---->>"
Either it contained some characters FileZilla does not handle, or banner output is simply not supported by FileZilla; I was too lazy to dig further. After removing this message, the connection worked and the directories were listed.
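The underlying cause is that SFTP shares the SSH byte stream, and anything .bashrc prints during a non-interactive login corrupts it. A common fix that keeps the banner is to print it only in interactive shells, e.g.:

# In ~/.bashrc: only emit the banner for interactive shells, so
# non-interactive sessions (sftp, scp, rsync) get a clean stream.
case $- in
  *i*) echo -e "\n\nHello Ruifeng...Welcome to the Arena! \n#>>------>---->>" ;;
esac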
Nexus version 3.1.0-04
During a build, I receive the following error downloading an artifact from Nexus.
Download http://10.148.254.17:8081/nexus/content/repositories/central/org/assertj/assertj-core/2.4.1/assertj-core-2.4.1.jar
:collection:extractIncludeTestProto FAILED
FAILURE: Build failed with an exception.
What went wrong:
Could not resolve all dependencies for configuration ':collection:testCompile'.
Could not download assertj-core.jar (org.assertj:assertj-core:2.4.1)
Could not get resource 'http://xxx.xxx.xxx.xxx:8081/nexus/content/repositories/central/org/assertj/assertj-core/2.4.1/assertj-core-2.4.1.jar'.
Premature end of Content-Length delimited message body (expected: 900718; received: 6862)
This appears to be a problem with large files stored in Nexus.
If I try to download the file via wget or curl, it also fails.
c:>wget http://xxx.xxx.xxx.xxx:8081/nexus/content/repositories/central/org/assertj/assertj-core/2.5.0/assertj-core-2.5.0.jar
--13:57:06--  http://xxx.xxx.xxx.xxx:8081/nexus/content/repositories/central/org/assertj/assertj-core/2.5.0/assertj-core-2.5.0.jar
=> `assertj-core-2.5.0.jar'
Resolving proxy.xxxx.com... done.
Connecting to proxy.xxxx.com[xxx.xxx.xxx.xxx]:xxx... connected.
Proxy request sent, awaiting response... 200 OK
Length: 934,446 [application/java-archive]
0% [ ] 6,856 1.44K/s ETA 10:27
13:57:21 (1.44 KB/s) - Connection closed at byte 6856. Retrying.
c:>curl -O http://xxx.xxx.xxx.xxx:8081/nexus/content/repositories/central/org/assertj/assertj-core/2.5.0/assertj-core-2.5.0.jar
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0  912k    0  6862    0     0    613      0  0:25:24  0:00:11  0:25:13   613
curl: (18) transfer closed with 927584 bytes remaining to read
Any ideas why?
In my case my Docker layer was blocked. I solved this problem by increasing the timeout under System > HTTP > Connection/Socket timeout.
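If a proxy sits between the client and Nexus (as in the wget output above), it is also worth ruling the proxy out, since the transfer dies at roughly the same byte count each time. A quick check, assuming direct network access to the Nexus host:

# Bypass any configured proxy for this one request.
curl -O --noproxy '*' http://xxx.xxx.xxx.xxx:8081/nexus/content/repositories/central/org/assertj/assertj-core/2.5.0/assertj-core-2.5.0.jar
wget --no-proxy http://xxx.xxx.xxx.xxx:8081/nexus/content/repositories/central/org/assertj/assertj-core/2.5.0/assertj-core-2.5.0.jar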
I hope you can help me. I cannot stand having to keep restarting my EC2 instance on Amazon.
I have two WordPress sites hosted there. My sites had always worked well until two months ago, when one of them started having this problem. I tried everything I could to patch it up, and the only solution was to reconfigure it.
Just when both were working again, the second site developed the same problem. I think Amazon is toying with me.
I am using a free micro instance. If anyone knows what the problem is, please help me!
Your issue will be the limited memory allocated to t1.micro instances in EC2. I'm assuming you are using Amazon Linux in this case; if an alternate distribution is used, your log and config files may be in different locations.
Make sure you are the root user.
Have a look at your MySQL logs in the following location:
/var/log/mysqld.log
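For example, to check the most recent entries for the errors shown below (adjust the line count to taste):

tail -n 50 /var/log/mysqld.log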
If you see repeated instances of the following, it's pretty certain that the 0.6 GB of memory allocated to the micro instance is not cutting it.
150714 22:13:33 InnoDB: Initializing buffer pool, size = 12.0M
InnoDB: mmap(12877824 bytes) failed; errno 12
150714 22:13:33 InnoDB: Completed initialization of buffer pool
150714 22:13:33 InnoDB: Fatal error: cannot allocate memory for the buffer pool
150714 22:13:33 [ERROR] Plugin 'InnoDB' init function returned error.
150714 22:13:33 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
150714 22:13:33 [ERROR] Unknown/unsupported storage engine: InnoDB
150714 22:13:33 [ERROR] Aborting
You will notice in the log excerpt above that my buffer pool size is set to 12MB. This can be configured by adding the line innodb_buffer_pool_size = 12M to your MySQL config file /etc/my.cnf.
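For reference, the relevant section of /etc/my.cnf would look something like this (12M simply mirrors the log excerpt above; tune the value to your workload):

[mysqld]
# Keep the InnoDB buffer pool small on a memory-constrained micro instance.
innodb_buffer_pool_size = 12M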
A pretty good way to deal with InnoDB chewing up your memory is to create a swap file.
Start by checking the status of your memory:
free -m
You will most probably see that your swap is not doing much:
             total       used       free     shared    buffers     cached
Mem:           592        574         17          0         15        235
-/+ buffers/cache:        323        268
Swap:            0          0          0
To start, ensure you are logged in as the root user and run the following command:
dd if=/dev/zero of=/swapfile bs=1M count=1024
Wait a bit, as the command is not verbose; when the process completes (about 30 seconds in this example) you should see the following response:
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 31.505 s, 34.1 MB/s
Next, set up the swap space with:
mkswap /swapfile
Now enable the swap:
swapon /swapfile
If you get a permissions warning, you can either ignore it or address it by changing the swap file's permissions to 600 with the chmod command:
chmod 600 /swapfile
Now add the following line to /etc/fstab to create the swap spaces on server start:
/swapfile swap swap defaults 0 0
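To confirm the swap area is active without rebooting, list the registered swap devices (swapon -s is the traditional form; newer util-linux versions also offer swapon --show):

swapon -s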
Restart your MySQL instance:
service mysqld restart
Finally check to see if your swap file is working correctly with the free -m command.
You should see something like:
             total       used       free     shared    buffers     cached
Mem:           592        575         16          0         16        235
-/+ buffers/cache:        323        269
Swap:         1023          0       1023
Hope this helps.
Well, I currently use lbackup to back up files on my remote server, so I logged in with my account, which is NOT root.
And I got the errors below; obviously, my account is NOT www-data.
Any suggestions?
$ ls -l /var/www/bbs | grep cache
drwx------ 13 www-data www-data 4096 Jul 28 06:27 cache
Sun Jul 28 23:53:17 CST 2013
Hard Links Enabled
Synchronizing...
Creating Links
rsync: opendir "/var/www/bbs/cache" failed: Permission denied (13)
IO error encountered -- skipping file deletion
rsync: opendir "/var/www/bbs/files" failed: Permission denied (13)
rsync: opendir "/var/www/bbs/store" failed: Permission denied (13)
rsync: send_files failed to open "/var/www/bbs/config.php": Permission denied (13)
Number of files: 10048
Number of files transferred: 1919
Total file size: 202516431 bytes
Total transferred file size: 16200288 bytes
Literal data: 16200288 bytes
Matched data: 0 bytes
File list size: 242097
File list generation time: 0.002 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 39231
Total bytes received: 5617302
sent 39231 bytes received 5617302 bytes 50731.24 bytes/sec
total size is 202516431 speedup is 35.80
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1536) [generator=3.0.9]
WARNING! : Data Transfer Interrupted
WARNING! : No mail configuration partner specified.
To specify a mail partner configuration file add the
following line into your backup configuration file :
mailconfigpartner=nameofyourmailpartner.conf
You have two possibilities:
a) ignore the files you cannot read (--exclude=PATTERN); a sketch follows below
b) get read permissions for these files, either by logging in as another user or by chmod-ing the files, whichever is appropriate
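A minimal sketch of option (a) with plain rsync (the excluded paths come from the error output above; the destination user@backuphost:/backups/www/ is invented, and lbackup would need the equivalent exclude setting in its own configuration):

# Skip the www-data-only paths that the backup account cannot read.
rsync -a \
  --exclude='bbs/cache/' \
  --exclude='bbs/files/' \
  --exclude='bbs/store/' \
  --exclude='bbs/config.php' \
  /var/www/ user@backuphost:/backups/www/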