I'm trying to get the hang of hydra.
When I run the following against my own FTP site (e.g. www.mysite.com) with the correct username and password (e.g. username1 and password1), it works:
./hydra -l username1 -p password1 -vV -f www.mysite.com ftp
Hydra v7.4.1 (c)2012 by van Hauser/THC & David Maciejak - for legal purposes only
Hydra (http://www.thc.org/thc-hydra) starting at 2012-12-29 21:06:20
[VERBOSE] More tasks defined than login/pass pairs exist. Tasks reduced to 1.
[DATA] 1 task, 1 server, 1 login try (l:1/p:1), ~1 try per task
[DATA] attacking service ftp on port 21
[VERBOSE] Resolving addresses ... done
[ATTEMPT] target www.mysite.com - login "username1" - pass "password1" - 1 of 1 [child 0]
[21][ftp] host: 200.200.240.240 login: username1 password: password1
[STATUS] attack finished for www.mysite.com (valid pair found)
1 of 1 target successfully completed, 1 valid password found
Hydra (http://www.thc.org/thc-hydra) finished at 2012-12-29 21:06:21
However, when I do this to test a public basic authentication test page (http://browserspy.dk/password-ok.php) with the correct username and password (test and test), hydra just stops after the 'Resolving addresses ... done' message.
./hydra -l test -p test -vV -f browserspy.dk http-get /password-ok.php
Hydra v7.4.1 (c)2012 by van Hauser/THC & David Maciejak - for legal purposes only
Hydra (http://www.thc.org/thc-hydra) starting at 2012-12-29 21:02:58
[VERBOSE] More tasks defined than login/pass pairs exist. Tasks reduced to 1.
[DATA] 1 task, 1 server, 1 login try (l:1/p:1), ~1 try per task
[DATA] attacking service http-get on port 80
[VERBOSE] Resolving addresses ... done
The hydra process just seems to die here and I'm returned to the command prompt.
What am I doing wrong?
You are not doing anything wrong; it's a bug in hydra which affects the http-get, http-head and irc modes. Downgrade to v7.3 or wait for v7.5, which will fix this issue.
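As a sanity check while you wait for a fixed release, you can confirm that the test page accepts the credentials independently of hydra, for example with curl (this only verifies the target, it is not a hydra workaround):
curl -u test:test -I http://browserspy.dk/password-ok.php
If the server answers 200 rather than 401, the username and password are accepted, so any remaining failure is on hydra's side.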
I have set up outbound email on Phabricator by following this guide.
However, my emails don't arrive; they all stay queued. When I go to the daemons page in the Phabricator UI, I see that several tasks are failing. They all look like this:
Task 448: PhabricatorMetaMTAWorker
Task 448
Task Status: Queued
Task Class: PhabricatorMetaMTAWorker
Lease Status: Leased
Lease Owner: 13195:1624502950:mail.icicbcoin.com:11
Lease Expires: 1 h, 59 m
Duration: Not Completed
Data: phabricator/ $ ./bin/mail show-outbound --id 154
Retries
Failure Count: 5
Maximum Retries: 250
Retries After: 1 m, 2 m, 4 m, 6 m, 8 m, 11 m, 14 m, 17 m, 20 m, 23 m, 27 m, ...
I'm curious about this Data part. To me it sounds like Phabricator fails when running this command, which is weird, because if I run ./bin/mail show-outbound --id 154 manually I get this:
ID: 154
Status: queued
Related PHID:
Message: fputs(): send of 28 bytes failed with errno=32 Broken pipe
PARAMETERS
sensitive: 1
mustEncrypt:
subject: [Phabricator] Welcome to Phabricator
to: ["PHID-USER-qezqlvc7rxton2lshjue"]
force: 1
HEADERS
TEXT BODY
Welcome to Phabricator!
admin (John Doe) has created an account for you.
Username: some.person
To log in to Phabricator, follow this link and set a password:
http://phabricator.innolabsolutions.rs/login/once/welcome/9/b2jf7j6mg5xomwjhmcfcxbigs7474jyq/10/
After you have set a password, you can log in to Phabricator in the future by going here:
http://phabricator.innolabsolutions.rs/
Love,
Phabricator
HTML BODY
(This message has no HTML body.)
Actually, the problem was the SMTP server configuration, even though this error didn't tell me that. I changed the SMTP port from 465 to 587, restarted the daemons and it worked.
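If you want to verify which submission port your SMTP server actually accepts before changing the Phabricator configuration, a quick manual check with openssl can help (smtp.example.com is a placeholder for your mail server):
openssl s_client -starttls smtp -connect smtp.example.com:587 -quiet
openssl s_client -connect smtp.example.com:465 -quiet
If the STARTTLS handshake on 587 succeeds while 465 hangs or resets, a broken-pipe error like the one above is consistent with Phabricator talking to the wrong port.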
I had the same problem twice.
The second time, it was because I could not resolve the SMTP server name:
$ ping gandi.net
ping: gandi.net: Temporary failure in name resolution
Then I added a DNS server in /etc/resolv.conf:
nameserver 127.0.0.1
nameserver 8.8.8.8 # <--- added
search home
and restarted the service
sudo service systemd-resolved restart
Right after, Phabricator automatically sent all the queued emails.
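If the queue does not drain on its own after fixing the DNS configuration, restarting the Phabricator daemons (a standard Phabricator maintenance command) makes the workers pick up the queued mail tasks again:
phabricator/ $ ./bin/phd restart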
I am installing ICP 2.1.0.1 and I received an error at the TASK
[master: Waiting for MariaDB service to start] msg: The MariaDB component failed to start.
After this message the installation completed with a failed status.
We are installing ICP with 3 masters, 3 proxies and 2 workers. We have 1 IP for the master VIP and 1 for the proxy VIP.
I tried to install multiple times and all installations got the same error.
In prior cases of this error, the correct DB admin password was not being used, so check the DB user and password first.
Could you also validate whether each master host is able to reach port 3306 on the other hosts?
If you run with .. install -vv | tee -a install-log.txt, do you get additional details as well?
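For the port-3306 question above, a simple connectivity test from each master to its peers could look like this (the master hostnames are placeholders, and nc may need to be installed first):
nc -zv master2.example.com 3306
nc -zv master3.example.com 3306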
The error was solved by following the steps below.
Check whether kubelet is running:
Log in to your master node.
Run the following command to check kubelet status:
systemctl status kubelet
If kubelet is not running, run the following command to get the logs:
journalctl -u kubelet &> kubelet.log
We found the error in kubelet.log:
Error: failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false.
We found this troubleshooting guide at the link below, and the solution in ICP issue 4651:
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0/troubleshoot/etcd_fails.html
https://github.ibm.com/IBMPrivateCloud/roadmap/issues/4651
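If those links are not accessible, the kubelet error message itself points at the fix: disable swap on the node (standard Linux commands, assuming the swap device is listed in /etc/fstab), then restart kubelet:
sudo swapoff -a
# comment out the swap entry in /etc/fstab so swap stays off after a reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab
sudo systemctl restart kubelet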
I have a server running Ubuntu 16.04 with Docker 17.03.0-ce, running an Nginx container. That server also has ConfigServer Security & Firewall installed. Shortly after starting the Nginx container I start receiving emails about "Excessive resource usage" with the following details:
Time: Fri Mar 24 00:06:02 2017 -0400
Account: systemd-timesync
Resource: Process Time
Exceeded: 1820 > 1800 (seconds)
Executable: /usr/sbin/nginx
Command Line: nginx: worker process
PID: 2302 (Parent PID:2077)
Killed: No
I fully understand that I can add exe:/usr/sbin/nginx to csf.pignore to stop these email alerts but I would like to understand a few things first.
Why is the "systemd-timesync" account being reported? That does not seem to have anything to do with Docker.
Why does the host machine seem to be reporting the excessive resource usage (the extended process time) when that is something running in the container?
Why don't my other Docker containers (the ones not running Nginx) result in excessive resource usage emails?
I'm sure there are other questions but basically, why is this being reported the way it is being reported?
I can at least answer the first two questions:
Unlike real VMs, Docker containers are simply a collection of processes running under the host system's kernel. They just have a different view of certain system resources, including their own file hierarchy, their own PID namespace and their own /etc/passwd file. As a result, they will still show up if you run ps aux on the host machine.
The nginx container's /etc/passwd includes a user 'nginx' with UID 104 that runs the nginx worker process. However, in the host's /etc/passwd, UID 104 might belong to a completely different user, such as systemd-timesync.
As a result, if you run ps aux | grep nginx in the container, you might see
nginx 7 0.0 0.0 32152 2816 ? S 11:20 0:00 nginx: worker process
while on the host, you see
systemd-timesync 22004 0.0 0.0 32152 2816 ? S 13:20 0:00 nginx: worker process
even though both are the same process (also note the different PID namespaces; in containers, PIDs are counted from 1 again).
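You can confirm the UID mapping directly by resolving UID 104 on both sides (the container name mynginx and the exact passwd entries are only illustrative; the point is that the same UID resolves to different names):
docker exec mynginx getent passwd 104   # looked up in the container's /etc/passwd
getent passwd 104                       # looked up in the host's /etc/passwd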
As a result, container processes will still be subject to ConfigServer's resource monitoring, but they might show up with random, or even non-existent user accounts.
As to why nginx triggers the emails and other containers don't, I can only assume that nginx is the only one of your containers that crosses ConfigServer's resource thresholds.
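If you do decide to silence the alerts, the workaround already mentioned in the question is to add the binary to csf.pignore and restart lfd so it re-reads the file (the path and service name below are the CSF defaults; adjust if your installation differs):
echo 'exe:/usr/sbin/nginx' >> /etc/csf/csf.pignore
service lfd restart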
I want to SSH to a machine (call it A). Its sshd config file looks like below:
AllowGroups wheel
AllowTcpForwarding no
AuthorizedKeysFile .ssh/authorized_keys
Banner /etc/issue
ChallengeResponseAuthentication no
Ciphers aes256-ctr,aes128-ctr
Compression delayed
GatewayPorts no
MACs hmac-sha1
MaxSessions 1
PasswordAuthentication yes
PermitRootLogin yes
PermitTunnel no
PermitUserEnvironment no
Protocol 2
RhostsRSAAuthentication no
StrictModes yes
Subsystem sftp /usr/lib64/ssh/sftp-server
UsePAM yes
UsePrivilegeSeparation yes
X11Forwarding no
I have another machine (call it B, the SSH client).
Now, if I try to SSH to machine A as below, it works perfectly:
ssh root@machineA
I then provide the password interactively. Works perfectly!
Now, I try passing the password via a script using the sshpass utility, as below:
sshpass -p PASSWORD ssh -vvv -o=StrictHostKeyChecking=no root@machineA
This fails.
The last few lines of the -vvv debug output are below:
debug3: remaining preferred: ,password
debug3: authmethod_is_enabled password
debug1: Next authentication method: password
debug3: packet_send2: adding 64 (len 51 padlen 13 extra_pad 64)
debug2: we sent a password packet, wait for reply
debug1: Authentications that can continue: publickey,password
Permission denied, please try again.
/var/log/auth.log on machine A says the following:
Connection from machineB port 44408
Failed password for root from machineB port 44408 ssh2
Connection closed by machineB port 44408 [preauth]
The lastb command gives the following:
root ssh:notty machineB Sun Jul 3 04:20 - 04:20 (00:00)
root ssh:notty machineB Sun Jul 3 04:19 - 04:19 (00:00)
I have been reading around about this, but haven't found anything conclusive. Do you have any pointers? What could be the problem?
Is it something related to sshpass?
I would try to narrow down the problem:
Try with a simpler password, without special characters.
Try with a user other than root.
Try using SSH keys instead of a password; a minimal setup is sketched after this list.
That is explained here: https://git-scm.com/book/en/v2/Git-on-the-Server-Generating-Your-SSH-Public-Key , https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys--2 etc.
Have a look at the server logs as well; check /var/log/secure or similar.
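A minimal key-based setup, run from machine B and assuming you test with a non-root user first (someuser is a placeholder), could look like this:
ssh-keygen -t rsa -b 4096          # generate a key pair on machine B, accept the defaults
ssh-copy-id someuser@machineA      # install the public key on machine A (asks for the password once)
ssh someuser@machineA              # should now log in without a password prompt
If key-based login works but sshpass still fails, the problem is likely in how the password is being passed rather than in the server configuration.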
Is it possible to check which DNS server is used for resolving a domain name (in an intranet network)? We have many components in the path: proxy, BigIP, domain controllers, etc.
I have a complicated network with many DNS servers. Sometimes when I use this in a browser:
http://mysitedomainalias.mydomain.com
I receive the web page, but sometimes, after about 15 minutes, I receive a timeout error.
However, when I use the IP address instead of the domain alias, I always reach my web page.
So I concluded that it could be a problem with a DNS server. I would like to know a common way to troubleshoot similar problems.
On *NIX systems, dig is a standard tool to test and debug DNS servers:
deceze$ dig google.com
...
;; QUESTION SECTION:
;google.com. IN A
;; ANSWER SECTION:
google.com. 5 IN A 173.194.35.168
google.com. 5 IN A 173.194.35.161
google.com. 5 IN A 173.194.35.169
...
;; Query time: 84 msec
;; SERVER: 192.168.10.1#53(192.168.10.1)
;; WHEN: Mon Jul 14 15:59:05 2014
;; MSG SIZE rcvd: 204
In the last part, SERVER signifies which DNS server answered our request.
Some more things you can then do with dig:
query a specific DNS server instead of the system's default:
$ dig @mydns.example.com google.com
trace each step of the resolution chain to see any problems in the canonical name servers:
$ dig google.com +trace
query specific record types:
$ dig google.com NS
$ dig google.com MX
$ dig google.com ANY
See the manual: http://linux.die.net/man/1/dig
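For the intermittent failures described in the question, a small loop makes it easy to compare what each internal resolver returns for the alias (the server names and the record below are placeholders for your environment):
for ns in dns1.mydomain.com dns2.mydomain.com; do
  echo "== $ns =="
  dig @"$ns" +short mysitedomainalias.mydomain.com
done
If one resolver returns nothing or a stale address while the others answer correctly, that resolver is the likely cause of the sporadic timeouts.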