Can anyone tell me how to find out whether a proxy server is HTTP or SOCKS? Is that based on the port number? How do they differ?
Thanks in advance.
Try it as HTTP: curl -x http://x.x.x.x:y check-host.net/ip. If that fails, try it as SOCKS: curl -x socks://x.x.x.x:y check-host.net/ip.
No, the proxy type is not based on port numbers. The ports are assigned by network admins and can be anything they want.
Your only hope is if your network is configured to use some type of proxy auto-config to provide the specific proxy details to clients when needed.
Otherwise, there is no way to query the proxy itself. You have to know ahead of time what type of proxy it is so you know how to communicate with it correctly.
Try this script:
$ cat get_version.py
#!/usr/bin/python3
import struct
import socket
import sys

try:
    server = sys.argv[1]
    port = int(sys.argv[2])
except (IndexError, ValueError):
    sys.exit("Usage: get_version.py server port")

try:
    # SOCKS5 greeting: version 5, one auth method offered, method 0x00 (no auth)
    greeting = struct.pack('BBB', 0x05, 0x01, 0x00)
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((server, port))
    s.sendall(greeting)
    data = s.recv(2)
    s.close()
    version, auth = struct.unpack('BB', data)
    print('server:port is', server, ':', port, '; version:', version)
except Exception as e:
    print(e)
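A complementary check, not part of the script above, is to probe for an HTTP proxy as well: an HTTP proxy answers a CONNECT request with an HTTP status line, while a SOCKS5 server answers the three-byte greeting above with a reply that starts with version byte 0x05. The sketch below assumes this banner behavior; the probe target example.com and the helper names are illustrative.

```python
import socket
import struct

def classify_banner(reply: bytes) -> str:
    """Guess the proxy type from the first bytes the proxy sends back."""
    if reply.startswith(b"HTTP/"):
        return "http"    # HTTP proxies answer CONNECT with a status line
    if len(reply) >= 2 and reply[0] == 0x05:
        return "socks5"  # SOCKS5 echoes version 5 plus the chosen auth method
    if len(reply) >= 2 and reply[0] == 0x00:
        return "socks4"  # SOCKS4 replies start with a 0x00 version byte
    return "unknown"

def probe(host: str, port: int, timeout: float = 5.0) -> str:
    """Try an HTTP CONNECT first, then a SOCKS5 greeting."""
    for payload in (b"CONNECT example.com:80 HTTP/1.1\r\nHost: example.com:80\r\n\r\n",
                    struct.pack("BBB", 0x05, 0x01, 0x00)):
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(payload)
            try:
                reply = s.recv(16)
            except socket.timeout:
                continue
            kind = classify_banner(reply)
            if kind != "unknown":
                return kind
    return "unknown"

print(classify_banner(b"HTTP/1.1 200 Connection established\r\n"))  # http
print(classify_banner(b"\x05\x00"))                                 # socks5
```

Run probe("x.x.x.x", y) against the proxy in question; "unknown" usually means the server speaks neither protocol or wants authentication first.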
I'm running a flask_restful API service that receives traffic forwarded by an nginx proxy. While the client's IP address is forwarded through the proxy via headers, flask_restful doesn't seem to see them, as indicated by its output, which points to 127.0.0.1:
127.0.0.1 - - [25/Oct/2017 21:55:37] "HEAD sne/event/SN2014J/photometry HTTP/1.0" 200 -
While I know I can retrieve the IP address via the request object (nginx forwards X-Forwarded-For and X-Real-IP), I don't know how to make the above output from flask_restful show/use this IP address, which is important if you want to, say, limit the number of API calls from a given IP address with flask_limiter. Any way to make this happen?
You can use (for older versions of werkzeug):
from werkzeug.contrib.fixers import ProxyFix
app.wsgi_app = ProxyFix(app.wsgi_app)
For newer versions of werkzeug (1.0.0+):
from werkzeug.middleware.proxy_fix import ProxyFix
app.wsgi_app = ProxyFix(app.wsgi_app)
This will fix the IP using X-Forwarded-For. If you need an enhanced version, you can use:
class SaferProxyFix(object):
    """This middleware can be applied to add HTTP proxy support to an
    application that was not designed with HTTP proxies in mind.  It
    sets `REMOTE_ADDR`, `HTTP_HOST` from `X-Forwarded` headers.

    If you have more than one proxy server in front of your app, set
    num_proxy_servers accordingly.

    Do not use this middleware in non-proxy setups for security reasons.

    get_remote_addr will raise an exception if it sees a request that
    does not seem to have enough proxy servers behind it, so long as
    detect_misconfiguration is True.

    The original values of `REMOTE_ADDR` and `HTTP_HOST` are stored in
    the WSGI environment as `werkzeug.proxy_fix.orig_remote_addr` and
    `werkzeug.proxy_fix.orig_http_host`.

    :param app: the WSGI application
    """

    def __init__(self, app, num_proxy_servers=1, detect_misconfiguration=False):
        self.app = app
        self.num_proxy_servers = num_proxy_servers
        self.detect_misconfiguration = detect_misconfiguration

    def get_remote_addr(self, forwarded_for):
        """Selects the new remote addr from the given list of ips in
        X-Forwarded-For.  By default the last one is picked.  Specify
        num_proxy_servers=2 to pick the second to last one, and so on.
        """
        if self.detect_misconfiguration and not forwarded_for:
            raise Exception("SaferProxyFix did not detect a proxy server. "
                            "Do not use this fixer if you are not behind a proxy.")
        if self.detect_misconfiguration and len(forwarded_for) < self.num_proxy_servers:
            raise Exception("SaferProxyFix did not detect enough proxy servers. "
                            "Check your num_proxy_servers setting.")
        if forwarded_for and len(forwarded_for) >= self.num_proxy_servers:
            return forwarded_for[-1 * self.num_proxy_servers]

    def __call__(self, environ, start_response):
        getter = environ.get
        forwarded_proto = getter('HTTP_X_FORWARDED_PROTO', '')
        forwarded_for = getter('HTTP_X_FORWARDED_FOR', '').split(',')
        forwarded_host = getter('HTTP_X_FORWARDED_HOST', '')
        environ.update({
            'werkzeug.proxy_fix.orig_wsgi_url_scheme': getter('wsgi.url_scheme'),
            'werkzeug.proxy_fix.orig_remote_addr': getter('REMOTE_ADDR'),
            'werkzeug.proxy_fix.orig_http_host': getter('HTTP_HOST')
        })
        forwarded_for = [x for x in [x.strip() for x in forwarded_for] if x]
        remote_addr = self.get_remote_addr(forwarded_for)
        if remote_addr is not None:
            environ['REMOTE_ADDR'] = remote_addr
        if forwarded_host:
            environ['HTTP_HOST'] = forwarded_host
        if forwarded_proto:
            environ['wsgi.url_scheme'] = forwarded_proto
        return self.app(environ, start_response)
from saferproxyfix import SaferProxyFix
app.wsgi_app = SaferProxyFix(app.wsgi_app)
PS: Code taken from http://esd.io/blog/flask-apps-heroku-real-ip-spoofing.html
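As an aside, the core of what these fixers do can be sketched in a few lines of stdlib-only WSGI middleware. This is not werkzeug's actual implementation, and the sample app and header values below are made up for illustration:

```python
class TinyProxyFix:
    """Rewrite REMOTE_ADDR from X-Forwarded-For before calling the wrapped app."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        forwarded = environ.get("HTTP_X_FORWARDED_FOR", "")
        if forwarded:
            # take the last hop: a single trusted proxy appends the client IP
            environ["REMOTE_ADDR"] = forwarded.split(",")[-1].strip()
        return self.app(environ, start_response)

def app(environ, start_response):
    # toy WSGI app that echoes the remote address it was given
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [environ["REMOTE_ADDR"].encode()]

wrapped = TinyProxyFix(app)

environ = {"REMOTE_ADDR": "127.0.0.1",
           "HTTP_X_FORWARDED_FOR": "203.0.113.7"}
body = wrapped(environ, lambda status, headers: None)
print(body)  # [b'203.0.113.7']
```

Because the rewrite happens before the application runs, anything downstream (flask_restful's logging, flask_limiter's keying) sees the forwarded address as if it were the direct peer.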
I am setting up simple TCP connection routing using HAProxy ACLs. The idea is to route connections depending on request content, which comes in two flavors: read and write requests.
For testing I made a simple TCP client/server setup using Perl. Strangely enough, about 10-40% of the ACLs fail and the connections are sent to the default backend.
The ACLs should find the substring 'read' or 'write' and route accordingly, but this is not always the case.
Sending a read/write request using nc (netcat) has the same effect.
I tested this configuration with mode=http and everything works as expected.
I also tested with reg, sub and bin, to no avail.
The example server setup is as follows:
HAProxy instance, listens on port 8000
Client (creates tcp connection to proxy and sends user input (read/write string) to server through port 8000, after which it closes the connection)
Server1 (write server), listens on port 8001
Server2 (read server), listens on port 8002
Server3 (default server), listens on port 8003
My HAProxy configuration file looks like this:
global
    log /dev/log local0 debug
    #daemon
    maxconn 32

defaults
    log global
    balance roundrobin
    mode tcp
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend tcp-in
    bind *:8000
    tcp-request inspect-delay 3s
    acl read req.payload(0,4) -m sub read
    acl write req.payload(0,5) -m sub write
    use_backend read_servers if read
    use_backend write_server if write
    default_backend testFault

backend write_server
    server server1 127.0.0.1:8001 maxconn 32

backend read_servers
    server server2 127.0.0.1:8002 maxconn 32

backend testFault
    server server3 127.0.0.1:8003 maxconn 32
The client code (in perl):
use IO::Socket::INET;

# auto-flush on socket
#$| = 1;

print "connecting to the server\n";

while (<STDIN>) {
    # create a connecting socket
    my $socket = new IO::Socket::INET (
        PeerHost => 'localhost',
        PeerPort => '8000',
        Proto    => 'tcp',
    );
    die "cannot connect to the server $!\n" unless $socket;

    # data to send to the server
    $req = $_;
    chomp $req;
    $size = $socket->send($req);
    print "sent data of length $size\n";

    # notify the server that the request has been sent
    shutdown($socket, 1);

    # receive a response of up to 1024 characters from the server
    $response = "";
    $socket->recv($response, 1024);
    print "received response: $response\n";

    $socket->close();
}
The server (perl code):
use IO::Socket::INET;

if (!$ARGV[0]) {
    die("Usage: specify a port..");
}

# auto-flush on socket
$| = 1;

# creating a listening socket
my $socket = new IO::Socket::INET (
    LocalHost => '0.0.0.0',
    LocalPort => $ARGV[0],
    Proto     => 'tcp',
    Listen    => 5,
    Reuse     => 0
);
die "cannot create socket $!\n" unless $socket;
print "server waiting for client connection on port $ARGV[0]\n";

while (1) {
    # waiting for a new client connection
    my $client_socket = $socket->accept();

    # get information about the newly connected client
    my $client_address = $client_socket->peerhost();
    my $client_port = $client_socket->peerport();
    print "connection from $client_address:$client_port\n";

    # read up to 1024 characters from the connected client
    my $data = "";
    $client_socket->recv($data, 1024);
    print "received data: $data\n";

    # write response data to the connected client
    $data = "ok";
    $client_socket->send($data);

    # notify the client that the response has been sent
    shutdown($client_socket, 1);
    $client_socket->close();
    print "Connection closed..\n\n";
}

$socket->close();
Binary data in HAProxy is tricky. Probably some bug, but the following worked for me on HAProxy 1.7.9.
I am trying to build a Thrift proxy server which can route to the appropriate backend based on the user_id in the payload.
frontend thriftrouter
    bind *:10090
    mode tcp
    option tcplog
    log global
    log-format "%ci:%cp [%t] %ft %b/%s %Tw/%Tc/%Tt %B %ts %ac/%fc/%bc/%sc/%rc %sq/%bq captured_user:%[capture.req.hdr(0)] req.len:%[capture.req.hdr(1)]"

    tcp-request inspect-delay 100ms
    tcp-request content capture req.payload(52,10) len 10
    tcp-request content capture req.len len 10
    tcp-request content accept if WAIT_END

    acl acl_thrift_call req.payload(2,2) -m bin 0001      # Thrift CALL method
    acl acl_magic_field_id req.payload(30,2) -m bin 270f  # Magic field number 9999

    # Define access control list for each user
    acl acl_user_u1 req.payload(52,10) -m sub |user1|
    acl acl_user_u2 req.payload(52,10) -m sub |user2|

    # Route based on the user. No default backend, so that one always has to be set
    use_backend backend_1 if acl_user_u1 acl_magic_field_id acl_thrift_call
    use_backend backend_2 if acl_user_u2 acl_magic_field_id acl_thrift_call
When matching binary data in an ACL, make sure you're looking at the right number of bytes for the substring match to work properly, or use the hex conversion method and match on hex bytes.
Don't I feel silly. Re-reading the HAProxy documentation, I found the following directive (fetch method) that fixes the issue:
tcp-request content accept if WAIT_END
That solved the unexpected behaviour.
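For reference, this is roughly where the directive would sit in the frontend from the configuration above (a sketch; the rest of the config is unchanged). It forces HAProxy to wait until the inspect-delay expires, or the client shuts down its side, so the payload is actually buffered before the payload ACLs are evaluated:

```
frontend tcp-in
    bind *:8000
    tcp-request inspect-delay 3s
    tcp-request content accept if WAIT_END
    acl read req.payload(0,4) -m sub read
    acl write req.payload(0,5) -m sub write
    use_backend read_servers if read
    use_backend write_server if write
    default_backend testFault
```

Without it, HAProxy may run the ACLs as soon as the connection is accepted, before any payload bytes have arrived, which explains why only some requests were misrouted.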
I've configured xinetd and I want to access the echo service remotely. The point is, when I do:
nmap localhost
it appears
PORT STATE SERVICE
7/tcp open echo
But when I run
nmap [remote IP]
it doesn't appear
PORT STATE SERVICE
21/tcp open ftp
23/tcp open telnet
80/tcp open http
and I don't know why.
Here is the /etc/xinetd.d/echo
service echo
{
    disable     = no
    type        = INTERNAL
    id          = echo-stream
    socket_type = stream
    protocol    = tcp
    user        = root
    wait        = no
}
And here is the /etc/xinetd.conf
# Simple configuration file for xinetd
#
# Some defaults, and include /etc/xinetd.d/
defaults
{
    # Please note that you need a log_type line to be able to use log_on_success
    # and log_on_failure. The default is the following :
    # log_type = SYSLOG daemon info
}
includedir /etc/xinetd.d
Thanks by the way! Cheers!
I found the error: I was using the wrong IP.
Thanks, everyone!
My issue is as follows: I want to implement a listening service using scapy to simulate a honeypot (because the honeypot uses a fake IP, I can't use OS sockets), and scapy is what I chose.
I implemented a very simple TCP handshake procedure; however, one thing frustrated me: one byte of the packet I send with PSH is eaten.
For example, I send "abc" out to a client, but the client's socket, for example netcat or wget, only receives "bc". Another example is "HTTP/1.1 200 OK" becoming "TTP/1.1 200 OK". I captured the packets, and Wireshark can correctly recognize my hand-made packet as HTTP, but the client socket just lacks 1 byte. I don't know why.
The code is as follows:
192.168.1.100 stands for the server's (my) IP address, and 9999 is the port. For example, I run this Python script on 192.168.1.100, then I use "nc 192.168.1.100 9999". I expect to get "abc", but I only get "bc", even though the packet looks fine in Wireshark. It's so strange.
'''
Created on Jun 2, 2012
@author: root
'''
from scapy.layers.inet import IP, TCP
from scapy.packet import Raw
from scapy.sendrecv import sniff, send
import scapy.all

HOSTADDR = "192.168.1.100"
TCPPORT = 9999  # port to listen on
SEQ_NUM = 100
ADD_TEST = "abc"

def tcp_monitor_callback(pkt):
    global SEQ_NUM
    global TCPPORT
    if pkt.payload.payload.flags == 2:
        # A SYN situation, 2 for SYN
        print("tcp incoming connection")
        ACK = TCP(sport=TCPPORT, dport=pkt.payload.payload.sport, flags="SA",
                  ack=pkt.payload.payload.seq + 1, seq=0)
        send(IP(src=pkt.payload.dst, dst=pkt.payload.src)/ACK)
    if pkt.payload.payload.flags & 8 != 0:
        # accept push from client, 8 for PSH flag
        print("tcp push connection")
        pushLen = len(pkt.payload.payload.load)
        httpPart = TCP(sport=TCPPORT, dport=pkt.payload.payload.sport, flags="PA",
                       ack=pkt.payload.payload.seq + pushLen)/Raw(load=ADD_TEST)
        # PROBLEM HERE!!!! If I send out abc, the client socket only receives bc,
        # one byte disappears!!! But the packet received by the client is CORRECT
        send(IP(src=pkt.payload.dst, dst=pkt.payload.src)/httpPart)
    if pkt.payload.payload.flags & 1 != 0:
        # accept fin from client
        print("tcp fin connection")
        FIN = TCP(sport=TCPPORT, dport=pkt.payload.payload.sport, flags="FA",
                  ack=pkt.payload.payload.seq + 1, seq=pkt.payload.payload.ack)
        send(IP(src=pkt.payload.dst, dst=pkt.payload.src)/FIN)

def dispatcher_callback(pkt):
    print("packet incoming")
    global HOSTADDR
    global TCPPORT
    if pkt.haslayer(TCP) and pkt.payload.dst == HOSTADDR and pkt.payload.dport == TCPPORT:
        tcp_monitor_callback(pkt)

if __name__ == '__main__':
    print("HoneyPot listen Module Test")
    scapy.all.conf.iface = "eth0"
    sniff(filter=("(tcp dst port %s) and dst host %s") % (TCPPORT, HOSTADDR),
          prn=dispatcher_callback)
Some suggestions:
sniff may append some padding to the end of the packet, so len(pkt.payload.payload.load) may not be the real payload length. You can use pkt[IP].len - 40 (40 is the common header length of IP+TCP). You may also subtract len(pkt[IP].options) and len(pkt[TCP].options) for more accurate results.
Usually the application layer above TCP uses line breaks ("\r\n") to separate commands, so you'd better change ADD_TEST to "abc\r\n".
If none of the above methods works, you may upgrade to the latest netcat and try again.
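The length arithmetic in the first suggestion can be sketched without scapy: parse the IHL field from the IP header and the data offset from the TCP header, then subtract both header lengths from the IP total length. The packet bytes below are hand-built for illustration:

```python
import struct

def tcp_payload_len(ip_packet: bytes) -> int:
    """Return the TCP payload length of a raw IPv4 packet."""
    ihl = (ip_packet[0] & 0x0F) * 4                      # IP header length in bytes
    total_len = struct.unpack("!H", ip_packet[2:4])[0]   # IP total-length field
    data_offset = (ip_packet[ihl + 12] >> 4) * 4         # TCP header length in bytes
    return total_len - ihl - data_offset

# 20-byte IP header (IHL=5, total length 43) + 20-byte TCP header + 3 payload bytes
ip_hdr = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0, 43, 0, 0, 64, 6, 0, bytes(4), bytes(4))
tcp_hdr = struct.pack("!HHIIBBHHH", 12345, 9999, 100, 200, 5 << 4, 0x18, 8192, 0, 0)
packet = ip_hdr + tcp_hdr + b"abc"
print(tcp_payload_len(packet))  # 3
```

With options present, IHL and the data offset grow accordingly, which is why this is more robust than assuming a fixed 40 bytes.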
I tested your code; you are missing the proper TCP sequence number:
httpPart=TCP(sport=TCPPORT, dport=pkt.payload.payload.sport, flags="PA", ack=pkt.payload.payload.seq + pushLen, seq=pkt.payload.payload.ack)/Raw(load=ADD_TEST)
should fix the issue. You may have other packet-length issues, but the eaten byte is caused by the missing TCP sequence number.
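The off-by-one can be seen with a toy model of the receiver's reassembly (just arithmetic, not scapy). The SYN-ACK above was sent with seq=0, and the SYN-ACK itself consumes one sequence number, so the first data byte must carry seq=1; sending data at the default seq=0 makes the receiver treat the first byte as old, already-acknowledged data and trim it:

```python
def deliverable(data: bytes, seg_seq: int, rcv_nxt: int) -> bytes:
    """Return the part of a segment a receiver would hand to the application.

    rcv_nxt is the next sequence number the receiver expects; any bytes of
    the segment that fall before it are discarded as duplicates.
    """
    if seg_seq >= rcv_nxt:
        return data           # nothing to trim (gaps are ignored in this toy model)
    overlap = rcv_nxt - seg_seq
    return data[overlap:]     # drop the bytes the receiver thinks it already has

# the server's SYN-ACK used seq=0, so after the handshake the client expects seq=1
RCV_NXT = 1
print(deliverable(b"abc", 0, RCV_NXT))  # b'bc'  <- the eaten byte
print(deliverable(b"abc", 1, RCV_NXT))  # b'abc' <- with seq set correctly
```

That is why Wireshark shows a well-formed packet while the socket delivers one byte less: the trimming happens inside the client's TCP stack, not on the wire.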
Does anyone know how to use python to ping a local host to see if it is active or not? We (my team and I) have already tried using
os.system("ping 192.168.1.*")
But the response for "destination unreachable" is the same as the response when the host is up.
Thanks for your help.
Use this ...
import os

hostname = "localhost"  # example
response = os.system("ping -n 1 " + hostname)

# and then check the response...
if response == 0:
    print(hostname, 'is up!')
else:
    print(hostname, 'is down!')
If using this script on Unix/Linux, replace the -n switch with -c!
That's all :)
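The -n/-c difference can also be handled automatically by checking the platform at runtime. A minimal sketch (the helper name is made up; -n and -c are the standard Windows and Unix ping count flags):

```python
import platform

def ping_command(hostname: str, count: int = 1) -> str:
    """Build a ping command line using -n on Windows and -c elsewhere."""
    flag = "-n" if platform.system() == "Windows" else "-c"
    return "ping {} {} {}".format(flag, count, hostname)

# os.system(ping_command("localhost")) would run it; 0 means the command succeeded
print(ping_command("localhost"))
```

This keeps the rest of the script identical across platforms; only the flag selection changes.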
I've found that using os.system(...) leads to false positives (as the OP said, 'destination host unreachable' still yields exit code 0).
As stated before, using subprocess.Popen works. For simplicity I recommend doing that followed by parsing the results. You can easily do this like:
if 'unreachable' in output:
    print("Offline")
Just check for the various outputs you want to handle in the ping results, and make a 'this' in 'that' check for each.
Example:
import subprocess

hostname = "10.20.16.30"
# decode() so the bytes Popen returns can be searched as a string on Python 3
output = subprocess.Popen(["ping.exe", hostname], stdout=subprocess.PIPE).communicate()[0].decode()
print(output)
if 'unreachable' in output:
    print("Offline")
The best way I could find to do this on Windows, if you don't want to be parsing the output is to use Popen like this:
from subprocess import Popen, PIPE

num = 1
host = "192.168.0.2"
wait = 1000

ping = Popen("ping -n {} -w {} {}".format(num, wait, host),
             stdout=PIPE, stderr=PIPE)  # pipe the output if you don't want it printed
exit_code = ping.wait()

if exit_code != 0:
    print("Host offline.")
else:
    print("Host online.")
This works as expected. The exit code gives no false positives. I've tested it in Python 2.7 and 3.4 on Windows 7 and Windows 10.
I've coded a little program a while back. It might not be the exact thing you are looking for, but you can always run a program on the host OS that opens up a socket on startup. Here is the ping program itself:
# Run this on the PC that wants to check if the other PC is online.
from socket import *

def pingit():                         # defining function for later use
    s = socket(AF_INET, SOCK_STREAM)  # Creates socket
    host = 'localhost'                # Enter the IP of the workstation here
    port = 80                         # Select port which should be pinged
    try:
        s.connect((host, port))       # tries to connect to the host
    except ConnectionRefusedError:    # if failed to connect
        print("Server offline")       # it prints that server is offline
        s.close()                     # closes socket, so it can be re-used
        pingit()                      # restarts whole process
    while True:                       # If connected to host
        print("Connected!")           # prints message
        s.close()                     # closes socket just in case
        exit()                        # exits program

pingit()                              # Starts off whole process
And here you have the program that can receive the ping request:
# this runs on remote pc that is going to be checked
# this runs on the remote PC that is going to be checked
from socket import *

HOST = 'localhost'
PORT = 80
BUFSIZ = 1024
ADDR = (HOST, PORT)

serversock = socket(AF_INET, SOCK_STREAM)
serversock.bind(ADDR)
serversock.listen(2)

while 1:
    clientsock, addr = serversock.accept()
    serversock.close()
    exit()
To run a program without actually showing it, just save the file as .pyw instead of .py.
That makes it invisible until the user checks the running processes.
Hope it helped you
For simplicity, I use a self-made function based on socket.

import socket

def checkHostPort(HOSTNAME, PORT):
    """
    check if host:port is reachable
    """
    result = False
    try:
        destIp = socket.gethostbyname(HOSTNAME)
    except socket.gaierror:
        return result
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(15)
    try:
        s.connect((destIp, PORT))
        result = True
    except socket.error:
        pass
    finally:
        s.close()
    return result
If IP:port is reachable, it returns True.
If you want to simulate ping, you may refer to ping.py.
Try this:
import os

ret = os.system("ping -o -c 3 -W 3000 192.168.1.10")
if ret != 0:
    print("Host is not up")

-o exits after receiving one reply packet (a BSD/macOS ping option)
-W 3000 gives it only 3000 ms to reply to the packet
-c 3 lets it try a few times so that your ping doesn't run forever
Use this and parse the string output:
import subprocess
output = subprocess.Popen(["ping.exe", "192.168.1.1"], stdout=subprocess.PIPE).communicate()[0].decode()
How about the requests module?
import requests

def ping_server(address):
    try:
        requests.get(address, timeout=1)
    except requests.exceptions.ConnectTimeout:
        return False
    return True
No need to split URLs to remove ports or test ports, and no localhost false positives.
The timeout amount doesn't really matter, since it only hits the timeout when there is no server, in which case performance no longer mattered for me. Otherwise, this returns at the speed of a request, which is plenty fast.
The timeout waits for the first byte, not the total time, in case that matters.
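If you'd rather avoid the third-party dependency, the same idea can be sketched with the standard library's urllib; the URL below is illustrative, and an HTTP error response still counts as "up" since something answered:

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def ping_server(address: str, timeout: float = 1.0) -> bool:
    """Return True if an HTTP server answers at the given URL."""
    try:
        urlopen(address, timeout=timeout)
    except HTTPError:
        return True   # got an HTTP response (e.g. 404), so something is listening
    except URLError:
        return False  # connection refused, DNS failure, or timeout
    except Exception:
        return False  # anything else, treat as unreachable
    return True

print(ping_server("http://127.0.0.1:1"))  # False: nothing listens on port 1
```

The behavior mirrors the requests version: only a failure to reach any server at all returns False.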