I need to use nmap to check whether port 443 is open for a list of websites, so I saved them into a file. I need the output to tell me if the port is open or not. I used this command:
nmap -PN -p443 -oG logs/output.gnmap -iL myfolder/input.txt
The problem: the output file is giving me different domain names. Nmap did a reverse DNS (rDNS) lookup, and I found that the IP points to a different domain name. Please explain: does this mean both domains are hosted on the same server? However, I checked their certificates and found that each domain has a different certificate. I am checking port 443 for the domains in my list so that I can inspect their certificates later, so I don't want to end up checking some other domain's certificate instead of the one I entered in the file.
To solve the issue, I used the -n option, but then the output file contains IPs only. How can I produce an output file that contains the results for my domains without rDNS?
The "Grepable" output format (-oG) is deprecated because it cannot show the full output of an Nmap scan. There is no way to get the output you want with the -oG option unless you modify Nmap and recompile it.
Luckily, the XML output format (-oX) contains the information you want and more:
<hostnames>
<hostname name="bonsaiviking.com" type="user"/>
<hostname name="li34-105.members.linode.com" type="PTR"/>
</hostnames>
In this example, from scanning my domain, the hostname provided on the command line has the attribute type="user", and the hostname that was a result of the reverse lookup has type="PTR".
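For your scan, something along these lines should work (the output path here is just an example, and xmllint is only one way to pull the user-supplied names back out of the XML):
nmap -Pn -p443 -oX logs/output.xml -iL myfolder/input.txt
# list only the names you supplied yourself (type="user")
xmllint --xpath '//hostname[@type="user"]/@name' logs/output.xml
You can also keep -n if you want to skip the reverse lookups entirely; the type="user" names should still be recorded.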
I'm not sure why executing the command below on an Ubuntu terminal throws an error. The tcpprep syntax and options are as given in the help doc, but it still throws an error.
root@test-vm:~# /usr/bin/tcpprep --cachefile='cachefile1' --pcap='/pcaps/http.pcap'
tcpprep: Command line arguments not allowed
tcpprep (tcpprep) - Create a tcpreplay cache cache file from a pcap file
root@test-vm:~# /usr/bin/tcpprep -V
tcpprep version: 3.4.4 (build 2450) (debug)
There are two problems with your command (and it doesn't help that tcpprep errors are vague or wrong).
Problem #1: Commands out of order
tcpprep requires that -i/--pcap come before -o/--cachefile. You can fix this as below, but then you get a different error:
bash$ /usr/bin/tcpprep --pcap='/pcaps/http.pcap' --cachefile='cachefile1'
Fatal Error in tcpprep_api.c:tcpprep_post_args() line 387:
Must specify a processing mode: -a, -c, -r, -p
Note that the error above is not even accurate! -e/--mac can also be used!
Problem #2: Processing mode must be specified
tcpprep is used to preprocess a capture file into client/server using a heuristic that you provide. Looking through the tcpprep manpage, there are 5 valid options (-acerp). Given this capture file as input.pcapng with server 192.168.122.201 and next hop mac 52:54:00:12:35:02,
-a/--auto
Let tcpprep determine based on one of 5 heuristics: bridge, router, client, server, first. Ex:
tcpprep --auto=first --pcap=input.pcapng --cachefile=input.cache
-c/--cidr
Specify server by cidr range. We see servers at 192.168.122.201, 192.168.122.202, and 192.168.3.40, so summarize with 192.168.0.0/16:
tcpprep --cidr=192.168.0.0/16 --pcap=input.pcapng --cachefile=input.cache
-e/--mac
This is not as useful here, since ALL traffic in this capture has a destination MAC of the next hop (52:54:00:12:35:02), ff:ff:ff:ff:ff:ff (broadcast), or 33:33:00:01:00:02 (multicast). Nonetheless, traffic from the next hop won't be client traffic, so this would look like:
tcpprep --mac=52:54:00:12:35:02 --pcap=input.pcapng --cachefile=input.cache
-r/--regex
This is for IP ranges, and is an alternative to summarizing subnets with --cidr. This would be more useful if you have several IPs like 10.0.20.1, 10.1.20.1, 10.2.20.1, ... where summarization won't work and regex will. This is one regex we could use to summarize the servers:
tcpprep --regex="192\.168\.(122|3).*" --pcap=input.pcapng --cachefile=input.cache
-p/--port
Looking at Wireshark > Statistics > Endpoints, we see that ports [135,139,445,1024]/tcp and [137,138]/udp are associated with the server IPs. 1024/tcp, used with dcerpc, is the only one that falls outside the range 0-1023, so we'd have to specify it manually. Per services syntax, we'd represent this as 'dcerpc 1024/tcp'. In order to specify a port, we also need to specify a --services file. We can provide one inline as a temporary file descriptor with process substitution. Altogether:
tcpprep --port --services=<(echo "dcerpc 1024/tcp") --pcap=input.pcapng --cachefile=input.cache
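Whichever mode you choose, the resulting cache file is what you then hand to tcpreplay to split the traffic across two interfaces, e.g. (interface names are just placeholders):
tcpreplay --intf1=eth0 --intf2=eth1 --cachefile=input.cache input.pcapng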
Further Reading
For more examples and information, check out the online docs.
I have a process using https. I found its PID using ps and used the command lsof -Pan -p PID -i to get the port number it is running on.
I need iftop to see the data transfer. The filter I am using now is
iftop -f "port http 57787".
I don't think this is giving me the right output.
Can someone help me with the right filter to use with iftop, so that I see only the traffic going through this port?
I can see 2 problems here:
1/ Is that a typo? The correct option for filtering is -f (lowercase "f"); the -F (capital "F") option is for net/mask.
2/ Though not explicitly stated in the iftop documentation, the filter syntax appears to be pcap syntax, judging from the few examples given (and using ldd I can see that the iftop binary is indeed linked with libpcap). So a filter containing http is simply not valid. For the pcap filter syntax, have a look at the pcap-filter(7) man page. In your example, a filter such as "tcp port 57787" would be OK. pcap does not do layer 5 and above protocol dissection such as HTTP (pcap filters are handled by BPF in the kernel, so above layer 4 you're on your own, because that's none of the kernel's business).
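So, for the port you found with lsof, an invocation along these lines should do (the interface name is just an example):
sudo iftop -i eth0 -f "tcp port 57787"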
All in all, these look like iftop bugs: it should reject your "-F" option, and even with "-f" it should exit with an error code because pcap will refuse the filter expression. No big deal, iftop is a modest program. See the edit below.
EDIT:
I just checked the iftop version 1.0pre4 source code, and there is no such obvious bug, judging from a look at set_filter_code() and its caller packet_init() in iftop.c. It correctly exits with an error, but...
Error 2: using the "-f" option, but with your incorrect filter syntax:
jbm@sumo:~$ sudo iftop -f "port http 57787"
interface: eth0
IP address is: 192.168.1.67
MAC address is: 8c:89:a5:57:10:3c
set_filter_code: syntax error
That's OK.
Error 1: using "-F" instead of "-f"; here there is a problem:
jbm@sumo:~$ sudo iftop -F "port http 57787"
(everything seems more or less OK; I then quit the program)
Could not parse net/mask: port http 57787
interface: eth0
IP address is: 192.168.1.67
MAC address is: 8c:89:a5:57:10:3c
Oops! "Could not parse net/mask: port http 57787"! That's a bug: it should exit right away.
I am trying to change the hostname from "localhost" to "systemhost" (a user-defined name).
I was actually running a servlet program when a question occurred to me: is it possible to change the hostname from localhost (default) to systemhost (user-defined)?
What I have done so far:
Searched on Google, but I got irrelevant answers.
Changed the server.xml content from localhost to systemhost; since I was running a servlet program, I thought this change might be enough.
Navigated to C:\Windows\System32\drivers\etc and changed the hosts file, replacing the word localhost with systemhost.
After doing all this, no success. I wonder whether it is really possible to change it at all?
Any help or suggestions?
Thanks!
Uncomment those lines in the hosts file by removing the #.
They should be:
127.0.0.1 systemhost
::1 systemhost
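After saving the hosts file, you can check that the new name resolves locally, for example:
ping systemhost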
I'm trying to set up response headers on my separate webservers that output the physical name of the machine that nginx is running on, so that I can tell which servers are serving the responses to our web clients.
Is there a variable that exists to do this already? Or do I just have to hardcode it per-server :(
You're after the $hostname common variable. Common variables are listed in the variable index.
The nginx access log documentation only shows variables that are specific to the access log:
The log format can contain common variables, and variables that exist
only at the time of a log write.
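So, for the original use case, a response header carrying the machine name could look something like this (the header name here is just an example):
add_header X-Served-By $hostname;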
I guess you're looking for the $hostname variable.
At first I thought the answer was to use an environment variable and pull the hostname out of it (https://docs.apitools.com/blog/2014/07/02/using-environment-variables-in-nginx-conf.html), but I couldn't get it to work for some reason.
However, this works like a charm:
perl_set $server_int 'sub { use Sys::Hostname; return hostname; }';
And example usage:
add_header 'Server-Int' "$server_int";
Just make sure your nginx is compiled with --with-http_perl_module (run nginx -V to check), and that you have Sys::Hostname installed.
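For example:
nginx -V 2>&1 | grep -o http_perl_module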
Warning: at first I used backticks (`hostname`) to get the hostname in the Perl script, and while that did return the name, it for some reason aborted the rest of the output. I don't know if it's a bug with perl_set, but you've been warned: using backticks in perl_set may be deadly.
I hope this is the right place to post this.
I have a VM I usually connect from work. To connect from home I was given the following instructions:
Copy and paste ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub from the work machine to the home machine. Also make a config file like:
# Debian VM
Host nacho4d.dev.acme.com
# IdentityFile ~/.ssh_acme/id_rsa
User nacho4d
ProxyCommand ssh ns.dev.acme.com -l nacho4d nc -w 1 %h %p
# Tunnel/springboard server
Host ns.dev.acme.com
# IdentityFile ~/.ssh_acme/id_rsa
User nacho4d
ProxyCommand ssh ts6.in.acme.com -l nacho4d nc -w 1 %h %p
So everything works fine with:
$ ssh nacho4d.dev.acme.com
The problem is that I already have my own (non-work) private keys, and I don't want to replace them with the work .ssh folder every time I need to use ssh. Too tedious.
How can I use a particular key, etc to connect to a specific server only?
I tried putting my files like:
~/.ssh/id_rsa → home private key
~/.ssh/id_rsa.pub → home public key
~/.ssh/config → config file like above but with IdentityFile enabled
~/.ssh_acme/id_rsa → work private key
~/.ssh_acme/id_rsa.pub → work public key
I thought that having a config file with IdentityFile should make ssh use a particular key (in this case pointing to ~/.ssh_acme/id_rsa) for that particular host, but I always get "Permission denied" and the connection is closed by the remote host.
Am I missing something? Perhaps do I need to supply the public key somewhere else too?
I checked the ~/.ssh/authorized_keys file in the VM and I have an ssh-rsa entry for the work computer, not the home computer (which is, I believe, normal, since I am using the keys provided by work).
How come IdentityFile ~/.ssh_acme/id_rsa is not working as expected?
Do I really need to swap my home and work keys every time I need to connect somewhere?
I am almost a beginner in ssh things, but something tells me there must be a clever way of doing this.
Any help is appreciated.
You don't need to specify which key works with which host; just rename the keys and add an IdentityFile line for each key:
IdentityFile ~/.ssh/id_rsa_dev_acme
IdentityFile ~/.ssh/id_rsa_in_acme
It's possible the keys in ~/.ssh_acme/id_rsa aren't being used because the permissions aren't correct on ~/.ssh_acme (0700) or ~/.ssh_acme/id_rsa (0600).
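Alternatively, if you'd rather keep the work key where it is, a per-host entry along these lines (just a sketch, using the paths from your question; keep your existing ProxyCommand lines as they are) tells ssh to use only the work key for the acme hosts and leaves your personal keys alone:
# work hosts only
Host *.dev.acme.com *.in.acme.com
  User nacho4d
  IdentityFile ~/.ssh_acme/id_rsa
  IdentitiesOnly yes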
Finally, this question might be more relevant on http://unix.stackexchange.com