On my Unix server I execute this command in the shell to copy all the content from folderc:
wget -r -nH --accept=ismv,ismc,ism,jpg --cut-dirs=5 --level=0 --directory-prefix="/root/sstest" -o /root/sstest2.log http://site.com/foldera/folderb/folderc/
All the content from folderc is indeed copied to /root/sstest.
However, wget does not exit after copying and return me to the command prompt.
What could be causing this behaviour?
I had the same problem, and I just added single quotes to the front and end of the URL. This step resolved the issue for me.
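Applied to the command from the question, that looks like this (otherwise unchanged):
wget -r -nH --accept=ismv,ismc,ism,jpg --cut-dirs=5 --level=0 --directory-prefix="/root/sstest" -o /root/sstest2.log 'http://site.com/foldera/folderb/folderc/'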
It's possible that the HTTP server miscommunicates the length of a response, so that Wget keeps waiting for more data. It could be due to a bug in Wget or in the server (or a software component running on the server) which you don't notice in an interactive web browser.
To debug this, make sure you are running the latest version of Wget. If the problem persists, use the -d flag to collect the debug output, and send a report about the misbehavior to the Wget developers at bug-wget@gnu.org. Be sure to strip sensitive data, such as passwords or internal host names, from the report before sending it.
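For instance, a minimal sketch of collecting that debug output into a file (the log file path is just an example):
# -d enables debug output, -o writes the log to a file instead of the terminal
wget -d -o /root/wget-debug.log 'http://site.com/foldera/folderb/folderc/'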
I observe a similar problem when downloading files from Dropbox with wget:
the download finishes (the file is complete)
wget (or curl, depending on which I use for the download) no longer shows up in the list of running processes once the file is complete
wget (or curl) does not return to the command prompt
returning to the command prompt can be "forced" by simply hitting Enter; I do not have to actually kill any process, the shell is just stuck until I press Enter one more time.
The problem is not wget-specific: it also occurs when I try to download the same file from the same location with curl. The problem does not occur at all when I download the same file from several Unix web servers, neither with wget nor with curl.
I have tried using timeout (with a sufficiently long duration) to force wget/curl to return to the command prompt, but they do not return to the command prompt even after timeout kills them.
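For reference, the timeout invocation I mean looks roughly like this ($DROPBOX_URL, the output file name, and the duration are placeholders):
# $DROPBOX_URL stands in for the actual share link; kill wget after 10 minutes
timeout 600 wget -O file.bin "$DROPBOX_URL"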
Related
When I created a new .conf file inside /etc/supervisor/conf.d/ and tried to start the program, it showed some errors (fatal error) and kept restarting by itself. Then I ran the command sudo service supervisor restart, but now supervisor itself has stopped and cannot be restarted. While I was working on the error, the nginx server got stuck as well.
After spending a lot of time I recovered it, Alhamdulillah, and I am writing the solution in the answer section.
Don't trust this solution blindly for your problem; your problem may stem from other issues as well.
Sometimes Supervisor can show the horrible error below when you restart the
service (with the command sudo service supervisor restart):
unix:///var/run/supervisor.sock refused connection
Try to diagnose the problem with the command supervisord. You can also run journalctl -xe.
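A sketch of that diagnosis step (the -n flag keeps supervisord in the foreground, so the real startup error is printed to the terminal):
# run supervisord in the foreground to see the actual error
sudo supervisord -n
# inspect recent system log entries for more context
journalctl -xe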
Problems and Solutions:
You may have written a new .conf file inside the /etc/supervisor/conf.d directory that contains statements which generate errors.
For example, the file runs a script, and that script runs Gunicorn to deploy a Python web app. In the script you bind Gunicorn to a Unix socket, but the directory where the Unix socket is to be created doesn't grant permission to create the .sock file there. This leads to a permission error.
The demo gunicorn command is below:
SOCKFILE=/home/shamim/python_project/another_directroy/gunicorn.sock
gunicorn ${DJANGO_WSGI_MODULE}:application \
  --name $NAME \
  --bind=unix:$SOCKFILE
If that directory (another_directroy in the example) doesn't grant permission to create a .sock file inside it, an error can occur. So give it enough permission for the socket to be created there from outside. Alternatively, bind to an IP and port instead of a Unix socket (like 127.0.0.1:ANY_PORT), as sketched below. Be sure first that the port is not used by another application.
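A minimal sketch of that TCP-bind variant, reusing the variables from the demo command above (port 8000 is an arbitrary choice; any free port works):
# bind Gunicorn to a local TCP port instead of a Unix socket
gunicorn ${DJANGO_WSGI_MODULE}:application \
  --name $NAME \
  --bind=127.0.0.1:8000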
The error can also occur if a .conf file references a directory path that doesn't actually exist at all.
Now run the command supervisord.
If the error persists after fixing the above issues, and supervisord now shows an error like
another program is already listening on a port that one of our HTTP servers is configured to use
then run the below command to fix this issue:
sudo unlink /var/run/supervisor.sock
If the command above does not work, unlink the socket file at /tmp/supervisor.sock instead (sudo unlink /tmp/supervisor.sock).
Keep in mind that the nginx server can also show errors and fail to restart (or start) if a .conf file references a socket whose file doesn't exist or doesn't have enough permissions.
Example: suppose you have the below block in an nginx config file:
upstream surveyapp_payment_stripe {
server unix:/home/shamim/python_project/another_directroy/gunicorn.sock fail_timeout=0 weight=5 max_fails=3;
}
If that socket doesn't exist or doesn't have enough permissions, an error may occur.
Nginx can also show an error if a directory path referenced here doesn't exist at all. To get nginx running again quickly in that situation, just delete the offending .conf file or change its extension to something other than .conf so nginx no longer loads it.
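A sketch of that quick workaround (the file name myapp.conf is hypothetical):
# disable the broken config by renaming it, then verify and restart
sudo mv /etc/nginx/conf.d/myapp.conf /etc/nginx/conf.d/myapp.conf.disabled
sudo nginx -t
sudo service nginx restart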
Hopefully this explanation will help someone in the future.
Because I need to direct shiny-server logs to stdout so that "docker logs" (and monitoring utilities relying on it) can see them, I'm trying to do something like:
tail -f <logs_directory>/*
This works as needed as long as no new files are added to the directory. The problem is that shiny-server dynamically creates new files in this directory, and those need to be picked up automatically as well.
I found that other users have solved this via the xtail package; the problem is that I'm using CentOS, and xtail is not available for it.
The question is: is there any "clean" way of doing this via the standard tail command, without needing xtail? Or is there an equivalent package to xtail for CentOS?
You will probably find it easier to use the docker run -v option to mount a host directory into the container and collect logs there. Then you can use any tool you want that collects log files out of a directory (logstash is popular but far from the only option) to collect those log files.
This also avoids the problem of having to both run your program and a log collector inside your container; you can just run the service as the main container process, and not have to do gymnastics with tail and supervisord and whatever else to try to keep everything running.
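For example, a minimal sketch of that mount (the image name my-shiny-image and the host path are hypothetical; /var/log/shiny-server is shiny-server's usual log directory):
# expose the container's log directory on the host for any collector to read
docker run -d -v /srv/shiny-logs:/var/log/shiny-server my-shiny-image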
I have been fighting with udev all afternoon. Basically I have created a rule that detects when a mass storage device is plugged into the system. This rule works, and I can get it to execute a script without any issues; here it is for review purposes:
ACTION=="add", KERNEL=="sd?*", SUBSYSTEM=="block", RUN+="/usr/local/bin/udevhelper.sh"
The problem I am running into is that the script is executed as some sort of strange user that has read-only permissions to the entire system. The script I am executing is quite simple:
#!/bin/sh
# Record that a drive was detected by writing a flag file
cd /usr/local/bin
touch .drivedetect
echo "1" > .drivedetect
exit
Basically I would like udev to run this script and simply output a 1 to a file named .drivedetect within the /usr/local/bin folder. But as I mentioned before, udev sees the rule and executes it when I plug in a drive; however, when it tries to run the script, it comes back with "file system is read-only" and the script quits with error code 1.
I am currently running this on a Raspberry Pi Zero with the latest Debian image. udev is still being run from init.d as far as I can tell, because there is no systemd service registered for it. Any help would be great, and if you need any more information just ask.
Things I've tried:
MODE="0660"
GROUP="plugdev"
Various combinations of RUN+="/bin/sh -c '/path/to/script'" and /bin/bash
OPTIONS="last_rule"
And last but not least, I tried running the script under the main username as well:
#!/bin/sh
su pi drivedetect
I had the same issue when I just used
udevadm control --reload-rules
after editing a udev rule. But if I do:
sudo /etc/init.d/udev restart
The script can edit a file.
It's not enough to reboot; I have to do the restart after booting. It then works as expected until the next reboot.
This is on a Raspberry Pi running Raspbian Stretch Lite.
We have created an HTTP URL where the Windows setup files are stored. Our requirement is to execute these setup files over HTTP.
We tried the command curl http://192.168.2.20/win2k12/setup.exe, but it only dumps the file's raw bytes as characters to the terminal.
Also, is it possible to execute the .exe files through wget?
Can anyone tell us how to execute the setup.exe file from the command prompt?
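(For what it's worth: curl writes the response body to stdout by default, which is why the terminal fills with raw characters. A hedged sketch of saving the installer to disk instead; the .exe would still have to be executed on a Windows machine afterwards:)
# save the response to setup.exe instead of printing it to the terminal
curl -O http://192.168.2.20/win2k12/setup.exe
# wget saves to a file by default
wget http://192.168.2.20/win2k12/setup.exe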
I can't seem to find out whether you can do this with wget or not.
What I am trying to do is download a whole bunch of images from one folder on a web server where all the images are stored. I am wondering if I can have multiple instances of wget running to download them quicker.
e.g.
instance 1 of wget is downloading file 1
instance 2 of wget is downloading file 2
instance 3 of wget is downloading file 3 ...
and so on.
Once an instance has finished, I want it to move on to the next file that hasn't started downloading yet.
Is this even possible with wget?
It's really a question for your operating system rather than wget itself.
For example:
If you are trying to run wget from the command line and want to launch it in the background and get your prompt back:
on Linux you may add an ampersand (&) at the end of your command.
"your wget command here" &
on windows you can type
start "your wget command here"
and there are a lot of other ways, actually.
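One common approach, as a minimal sketch: assuming the image URLs are listed one per line in a file urls.txt (the file name is hypothetical), GNU xargs can keep a fixed number of wget processes running and hand each one the next URL as soon as an instance finishes, which matches the "move to the next file" behavior you describe:
# run up to 3 wget processes at once, one URL per process
xargs -P 3 -n 1 wget -q < urls.txt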