My friends and I have tried many times and failed. We can access the site https://www.meteor.com
but when we run the command
sudo curl http://install.meteor.com/ | sh
we get this failure:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 6674 0 6674 0 0 2473 0 --:--:-- 0:00:02 --:--:-- 2473
Downloading Meteor distribution
curl: (35) Server aborted the SSL handshake
Installation failed.
I've heard of some people having issues with certain versions of curl that come bundled with VPN and other packages. Have you confirmed that you're using the curl in /usr/bin?
sudo /usr/bin/curl https://install.meteor.com/ | sh
If that still fails, what is the output of 'curl -v https://install.meteor.com/'?
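A quick way to check which curl binary is actually being picked up, and which SSL library it was built against (just a diagnostic sketch):
# List every curl on the PATH and show which one plain "curl" resolves to
which -a curl
command -v curl
# The version line names the SSL backend it was built with (OpenSSL, NSS, SecureTransport, ...)
curl --version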
I'm installing OpenStack using the all-in-one single-machine setup and running the stack.sh script for the DevStack setup. When starting the Glance service, I get the following error on my console:
++:: curl -g -k --noproxy '*' -s -o /dev/null -w '%{http_code}' http://10.10.20.10/image
+:: [[ 503 == 503 ]]
+:: sleep 1
+functions:wait_for_service:485 rval=124
+functions:wait_for_service:490 time_stop wait_for_service
+functions-common:time_stop:2310 local name
+functions-common:time_stop:2311 local end_time
+functions-common:time_stop:2312 local elapsed_time
+functions-common:time_stop:2313 local total
+functions-common:time_stop:2314 local start_time
+functions-common:time_stop:2316 name=wait_for_service
+functions-common:time_stop:2317 start_time=1602763779096
+functions-common:time_stop:2319 [[ -z 1602763779096 ]]
++functions-common:time_stop:2322 date +%s%3N
+functions-common:time_stop:2322 end_time=1602763839214
+functions-common:time_stop:2323 elapsed_time=60118
+functions-common:time_stop:2324 total=569
+functions-common:time_stop:2326 _TIME_START[$name]=
+functions-common:time_stop:2327 _TIME_TOTAL[$name]=60687
+functions:wait_for_service:491 return 124
+lib/glance:start_glance:480 die 480 'g-api did not start'
+functions-common:die:198 local exitcode=0
+functions-common:die:199 set +o xtrace
[Call Trace]
./stack.sh:1306:start_glance
/opt/stack/devstack/lib/glance:480:die
[ERROR] /opt/stack/devstack/lib/glance:480 g-api did not start
Error on exit
World dumping... see /opt/stack/logs/worlddump-2020-10-15-121040.txt for details
neutron-dhcp-agent: no process found
neutron-l3-agent: no process found
neutron-metadata-agent: no process found
neutron-openvswitch-agent: no process found
I also tried increasing the timeout duration, but it still failed, and I also verified that devstack@g-api.service is in the active state. Can someone let me know the exact reason behind this issue and how to resolve it?
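For reference, this is roughly how I verified the unit state and pulled its recent logs (a sketch assuming the systemd-based DevStack unit naming):
# Confirm the Glance API unit is active and inspect its most recent log output
sudo systemctl status devstack@g-api.service
sudo journalctl -u devstack@g-api.service -n 100 --no-pager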
The only solution I found was to rebuild the entire system, including the OS.
I'm leveraging Zabbix with a custom low-level discovery that discovers a REST API endpoint using Python. When polling is on, the CPU utilization goes through the roof. All the CPU usage is caused by setroubleshootd, as shown in top:
top - 13:51:56 up 15:33, 1 user, load average: 1.52, 1.43, 1.37
Tasks: 127 total, 3 running, 124 sleeping, 0 stopped, 0 zombie
%Cpu(s): 35.8 us, 6.7 sy, 0.0 ni, 57.3 id, 0.1 wa, 0.0 hi, 0.2 si, 0.0 st
KiB Mem : 8010508 total, 6211020 free, 397104 used, 1402384 buff/cache
KiB Swap: 1679356 total, 1679356 free, 0 used. 6852016 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7986 setroub+ 20 0 424072 130856 11548 R 77.4 1.6 7:12.16
Zabbix calls the agent and asks it to execute a "UserParameter", which is shorthand for a script. That script is a bash file that calls my Python script, and the call looks like this:
#!/usr/bin/env bash
/usr/bin/python /etc/zabbix/externalscripts/discovery.py $1 $2 $3 $4 $5
When Zabbix calls the script, it passes the unique filters, like a server ID or network card ID, as arguments. The Python script opens an HTTPS session using requests, leveraging a bearer token if the token file exists; if the token file doesn't exist, the script creates it.
The script works fine and does everything it is supposed to, but setroubleshoot is reporting a slew of issues, specifically around file/folder access. The huge number of setroubleshootd responses is causing the CPU to go nuts. Here is an example of the error:
python: SELinux is preventing /usr/bin/python2.7 from create access on the file 7WMXFl.
The file name is random and changes with every execution. I've tried adding an exception using the SELinux tools, such as:
ausearch -c 'python' --raw | audit2allow -M my-python
But since the file name is random, the errors persist. I've tried uninstalling setroubleshootd, but SELinux just reinstalls it. Unfortunately, I need to run in enforcing mode, so dropping to permissive or disabling SELinux are not options.
I've also tried removing the bash wrapper so that Zabbix calls the Python script directly (declaring the /usr/bin/python shebang), but passing arguments doesn't seem to work properly: I get an error stating that $1, $2, ... are unknown arguments.
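For completeness, the agent-side configuration for a direct call would look roughly like this; custom.discovery is just a placeholder key, and the [*] "flexible" form is what maps the bracketed arguments to $1..$5:
# /etc/zabbix/zabbix_agentd.d/discovery.conf (illustrative)
UserParameter=custom.discovery[*],/usr/bin/python /etc/zabbix/externalscripts/discovery.py $1 $2 $3 $4 $5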
I'm at a loss at this point. Everything is running, but I'd really like to get the CPU usage down, as 60% of 4 cores is unreasonable for 30-40 HTTPS calls.
I ended up writing an SELinux module for this that allows the zabbix user write access to the /tmp folder where these files are being created and managed. CPU usage dropped from 75% to 2%. #NailedIt
$ sudo ausearch -m avc | grep zabbix | grep denied | audit2allow -m zabbixallow > my_script.te
$ checkmodule -M -m -o zabbixallow.mod my_script.te
$ semodule_package -o zabbixallow.pp -m zabbixallow.mod
$ sudo semodule -i zabbixallow.pp
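To double-check that the module is loaded and that the denials have actually stopped, something like this works:
# Verify the module is installed, then look for any fresh AVC denials
sudo semodule -l | grep zabbixallow
sudo ausearch -m avc -ts recent | grep denied | grep zabbix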
Hopefully this helps someone else if they run across this issue.
External scripts have to complete within your timeout value, and this workload sounds too big for that. You could convert it to zabbix_sender and schedule it via cron; then it's just a script with performance problems.
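A minimal sketch of that approach, assuming a wrapper run from cron (the server address, host name, and item key are placeholders):
#!/usr/bin/env bash
# Run the discovery out of band and push the result with zabbix_sender.
# zabbix.example.com, my-host, and custom.discovery are made-up names;
# the filter arguments (server ID, card ID, ...) come from the cron line.
value="$(/usr/bin/python /etc/zabbix/externalscripts/discovery.py "$1" "$2")"
/usr/bin/zabbix_sender -z zabbix.example.com -s my-host -k custom.discovery -o "$value"
Scheduled from cron every few minutes, the Zabbix item then just reads the pushed value and never runs into the agent timeout.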
I keep getting issues with Solr crashing on my server. It's hardly a busy site, so I'm baffled as to why it keeps doing it.
Anyway, as an interim measure, I've written a shell script that runs from cron as root:
#!/bin/bash
declare -a arr=(tomcat7 nginx mysql);
for i in "${arr[@]}"
do
echo "Checking $i"
if (( $(ps -ef | grep -v grep | grep $i | wc -l) > 0 ))
then
echo "$i is running!!!"
else
echo "service $i start\n"
service $i start
fi
done
# re-run, but this time do a restart if its still not going!
for i in "${arr[@]}"
do
echo "Checking $i"
if (( $(ps -ef | grep -v grep | grep $i | wc -l) > 0 ))
then
echo "$i is running!!!"
else
service $i restart
fi
done
...then this cron entry (as root):
*/5 * * * * bash /root/script-checks.sh
The cron itself seems to run just fine:
Checking tomcat7
service tomcat7 start\n
Checking nginx
nginx is running!!!
Checking mysql
mysql is running!!!
Checking tomcat7
Checking nginx
nginx is running!!!
Checking mysql
mysql is running!!!
...and Tomcat's status seems OK:
root@domain:~# service tomcat7 status
● tomcat7.service - LSB: Start Tomcat.
Loaded: loaded (/etc/init.d/tomcat7)
Active: active (exited) since Mon 2016-03-21 06:33:28 GMT; 4 days ago
Process: 2695 ExecStart=/etc/init.d/tomcat7 start (code=exited, status=0/SUCCESS)
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
...yet my script can't connect to Solr:
Could not parse JSON response: malformed JSON string, neither tag, array, object, number, string or atom, at character offset 0 (before "Can't connect to loc...") at /srv/www/domain.net/www/cgi-bin/admin/WebService/Solr/Response.pm line 42. Can't connect to localhost:8080 Connection refused at /usr/share/perl5/LWP/Protocol/http.pm line 49.
If I manually run a "restart":
service tomcat7 restart
...it then starts working again. It's almost like the second part of my shell script isn't working.
Any suggestions?
My Solr versions are as follows:
Solr Specification Version: 3.6.2.2014.10.31.18.33.47
Solr Implementation Version: 3.6.2 debian - pbuilder - 2014-10-31 18:33:47
Lucene Specification Version: 3.6.2
UPDATE: I've read that increasing maxThreads can sometimes help with crashes, so I've changed it to 10,000:
<Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol"
maxThreads="10000" SSLEnabled="true" scheme="https" secure="true"
clientAuth="false" sslProtocol="TLS" />
I guess time will tell whether this fixes the issue.
OK, well, I never got to the bottom of why it wouldn't restart... but I have worked out why it was crashing. Before, we had it on a 2048 MB RAM Linode server, but when we moved over to Apache2 I set up a 1024 MB server and was going to upgrade it to 2048 MB once we had it all working. However, when we put it live I forgot to upgrade to the 2048 MB server, so Nginx/Apache2/Tomcat/MySQL etc. were all trying to run on a pretty underpowered server.
We found that Solr was dying with an OOM (out of memory) error, which is what gave us the clue.
Hopefully this helps someone else who comes across this.
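If anyone else wants to confirm the same thing, checking the Tomcat log for the OOM and bumping the heap is quick; the paths and values below assume a Debian-style tomcat7 install and may need adjusting:
# Look for out-of-memory errors in the Tomcat/Solr log
grep -i "OutOfMemoryError" /var/log/tomcat7/catalina.out
# On Debian/Ubuntu the heap is set via JAVA_OPTS in /etc/default/tomcat7
echo 'JAVA_OPTS="${JAVA_OPTS} -Xmx1024m"' | sudo tee -a /etc/default/tomcat7
sudo service tomcat7 restart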
I'm trying to install ffmpeg on Travis with this command:
curl http://johnvansickle.com/ffmpeg/releases/ffmpeg-release-64bit-static.tar.xz | tar -C /usr/local/bin/ -xvz
I get this error:
$ curl http://johnvansickle.com/ffmpeg/releases/ffmpeg-release-64bit-static.tar.xz | tar -C /usr/local/bin/ -xvz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
gzip: stdin: not in gzip format
tar: Child died with signal 13
tar: Error is not recoverable: exiting now
The command "curl http://johnvansickle.com/ffmpeg/releases/ffmpeg-release-64bit-static.tar.xz | tar -C /usr/local/bin/ -xvz" failed and exited with 2 during .
However, it works locally on OS X. What's going on?
.xz is not .gz. GNU tar apparently recognises the XZ format, but OS X does not use GNU tools. I found this quote:
Without installing anything, a TAR archive can be created with XZ compression using the tar program with the undocumented --xz argument.
Seems like I just used the wrong command! It should have been:
curl http://johnvansickle.com/ffmpeg/releases/ffmpeg-release-64bit-static.tar.xz | sudo tar -C /usr/local/bin/ -xJ --strip-components=1
Your tar might not be able to handle .xz files.
According to this link, you can try installing xz-utils and using the -J flag:
tar -C /path/to/output -xJv
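For example, on a Debian/Ubuntu-based build image, something along these lines should work:
# Install xz support, then extract with -J (xz) instead of -z (gzip)
sudo apt-get install -y xz-utils
curl -L http://johnvansickle.com/ffmpeg/releases/ffmpeg-release-64bit-static.tar.xz | sudo tar -C /usr/local/bin/ -xJ --strip-components=1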
Evening,
I am running a lot of wget commands using xargs:
cat urls.txt | xargs -n 1 -P 10 wget -q -t 2 --timeout 10 --dns-timeout 10 --connect-timeout 10 --read-timeout 20
However, once the file has been parsed, some of the wget instances 'hang.' I can still see them in system monitor, and it can take about 2 minutes for them all to complete.
Is there any way I can specify that the instance should be killed after 10 seconds? I can re-download all the URLs that failed later.
In the system monitor, the wget instances show as sk_wait_data when they hang, and xargs shows as do_wait, but wget seems to be the issue: once I kill the wget processes, my script continues.
I believe this should do it:
wget -v -t 2 --timeout 10
According to the docs:
--timeout: Set the network timeout to seconds seconds. This is equivalent to specifying
--dns-timeout, --connect-timeout, and --read-timeout, all at the same time.
Check the verbose output too and see more of what it's doing.
Also, you can try:
timeout 10 wget -v -t 2
Or you can do what timeout does internally:
( cmdpid=$BASHPID; (sleep 10; kill $cmdpid) & exec wget -v -t 2 )
(As seen in: BASH FAQ entry #68: "How do I run a command, and have it abort (timeout) after N seconds?")
GNU Parallel can download in parallel, and retry the process after a timeout:
cat urls.txt | parallel -j10 --timeout 10 --retries 3 wget -q -t 2
If the time to fetch a URL varies (e.g. due to a faster internet connection), you can let GNU Parallel figure out the timeout:
cat urls.txt | parallel -j10 --timeout 1000% --retries 3 wget -q -t 2
This will make GNU Parallel record the median time for a successful job and set the timeout dynamically to 10 times that.