Can we monitor Windows network information in real time using minifilters?

I am trying to write a minifilter that more or less captures everything that happens in the kernel, and I was wondering if I could also capture "URLs"/network information. I stumbled upon WinDivert, which seems to use a .sys driver, and also another thread which says we cannot get URLs in driver mode, which leaves me a bit confused: if that is true, how does WinDivert do it?
I understand there is something called a network redirect under minifilters on learn.microsoft.com, which uses a DLL and a .sys file (the same as WinDivert), but I could not find any resources to help me build one.
Is there a better way to capture all visited URLs in real time?
Thanks in advance for any help or directions.

You're looking for the Windows Filtering Platform and Filtering Platform Callout Drivers, which are what WinDivert uses. They give you the data that goes out over the wire, so for plain old HTTP over port 80 you can parse the requests to obtain the URL. This won't work for HTTPS, since all you see on the wire is encrypted data; you'd have to implement some kind of MITM interception technique to handle that.
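To make the plain-HTTP case concrete, here is a minimal sketch, assuming you already have the raw bytes of a captured request (e.g. handed up from your callout driver or WinDivert); the helper class is hypothetical:

```csharp
using System;
using System.Text;

// Minimal sketch: rebuild the requested URL from a raw plaintext HTTP
// request captured off the wire (e.g. handed up from a WFP callout or
// WinDivert). Encrypted HTTPS payloads cannot be parsed this way.
static class HttpUrlExtractor
{
    public static string TryExtractUrl(byte[] payload)
    {
        string text = Encoding.ASCII.GetString(payload);
        string[] lines = text.Split(new[] { "\r\n" }, StringSplitOptions.None);

        // Request line looks like: "GET /index.html HTTP/1.1"
        string[] parts = lines[0].Split(' ');
        if (parts.Length < 3 || !parts[2].StartsWith("HTTP/"))
            return null; // not an HTTP request

        // The Host header supplies the authority part of the URL.
        foreach (string line in lines)
            if (line.StartsWith("Host:", StringComparison.OrdinalIgnoreCase))
                return "http://" + line.Substring(5).Trim() + parts[1];

        return null;
    }
}
```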


How to detect proxy requests?

I know this is a popular question, and I have read all the topics about it; I want to settle it for myself.
Goal: detect whether the user is using a proxy.
Reason: if the user is behind a proxy, we don't show geo-targeted advertising. I just need a boolean result.
Candidate approaches:
1. Use a database of proxy IPs (e.g. MaxMind);
2. Check the Connection: keep-alive header, since cheap proxies don't use persistent connections (but all modern browsers do);
3. Check other popular headers;
4. Use JS to detect a web proxy by comparing the browser's host with the real host.
Questions:
1. Can you recommend a database? I have read about MaxMind, but some people write that it is not effective.
2. Is checking the Connection header OK?
3. Have I missed anything?
Option 1 is the best choice; proxy detection done any other way can be time-consuming and complicated.
Since you mention MaxMind and your concern about its effectiveness: there are other APIs available, like GetIPIntel. It's free and very simple to use, and it goes beyond simple blacklists, using machine learning and probability algorithms to compute a probability value, which makes it quite accurate.
Option 2 doesn't hurt to implement unless you get a lot of false positives. Options 3 and 4 should not be used alone, because they are very easy to get around: all browser actions can be automated, and just because someone is using a proxy does not mean they are not using a real browser.
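As an illustration of the API-lookup approach, a minimal sketch follows; the endpoint and response format are placeholders (each real service, GetIPIntel or otherwise, has its own URL shape and parameters):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Sketch of the API-lookup approach. The endpoint and parameters are
// placeholders; real services (GetIPIntel, proxycheck.io, ...) each have
// their own URL shape and response format -- check the provider's docs.
class ProxyScoreClient
{
    static readonly HttpClient Http = new HttpClient();

    public static async Task<bool> IsLikelyProxyAsync(string ip)
    {
        // Hypothetical endpoint returning a probability in [0, 1] as text.
        string body = await Http.GetStringAsync(
            $"https://ip-reputation.example.com/check?ip={ip}");
        return double.TryParse(body, out var score) && score > 0.95;
    }
}
```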
The best way is definitely to use an API. You could use the database from MaxMind, but then you need to keep downloading it and trust that the data is kept up to date. And, as you said, there are questions about the accuracy of MaxMind's data.
Personally, I would recommend you try https://proxycheck.io which, full disclosure, is my own site. You get full access to everything for free: premium proxy detection and blocking, with 1,000 daily queries.
You can evaluate the IP2Proxy database, which is updated daily. It detects open proxies, web proxies, Tor and VPNs. https://www.ip2location.com/database/px2-ip-proxytype-country
Checking the Connection header is inaccurate for proxy types such as VPNs.
Header checks in general are easily defeated: each new generation of proxies works around the previous generation of detection methods.
In our experience, the best method of proxy detection is an accurate blacklist.
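For reference, this is roughly what the header-check idea (options 2-3 in the question) looks like in code. The helper is hypothetical; the header names are ones cheap or transparent proxies commonly add, and, as noted above, a determined proxy will simply omit them:

```csharp
using System;
using System.Collections.Generic;

// Sketch of the header-check heuristic. These are headers transparent or
// cheap proxies commonly add; their absence proves nothing, so treat a
// hit as a hint rather than a verdict.
static class ProxyHeuristics
{
    static readonly string[] SuspectHeaders =
    {
        "Via", "X-Forwarded-For", "Forwarded", "Proxy-Connection"
    };

    public static bool LooksLikeProxy(IDictionary<string, string> requestHeaders)
    {
        foreach (string name in SuspectHeaders)
            if (requestHeaders.ContainsKey(name))
                return true;

        // Cheap proxies may downgrade to non-persistent connections.
        return requestHeaders.TryGetValue("Connection", out var conn)
            && conn.Equals("close", StringComparison.OrdinalIgnoreCase);
    }
}
```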

What is the best method to send data from a device to a server

I am currently developing a website for an energy-monitoring company. We are trying to send high volumes of data from the recording devices to a server, where it can be processed into a database. The developer writing the firmware thinks the best way to send the data is to produce CSV files and send them via FTP; a program on the server then has to watch for files arriving via FTP and run a PHP script to process them. I, however, feel that the best way to send the data is via HTTP POST.
We had HTTP POST working, and then I began trying to work with the CSVs, which became a pain: reliably monitoring the files received via FTP meant editing the ProFTPD configuration file (which I found to be a near-impossible task) and installing a package called mod_exec (which comes with security risks) so that ProFTPD could run a PHP script. These issues, plus the fact that I am unfamiliar with the Linux console I would need to use extensively to set all this up, make the CSV method very difficult. HTTP POST seems like a more direct way of sending the data, with no files to worry about and no reliance on ProFTPD. It would also let us attach identifiers that give the data meaning, as opposed to a string of values whose meaning is not immediately apparent. In addition, the POST body could be URL-encoded (form-encoded) to pass a multidimensional array, which suits the type of data being sent.
Nevertheless, just because the HTTP POST method would be easier doesn't mean the CSV method has no advantages. Furthermore, the firmware developer has far more experience with computers than I do, so I trust his opinion.
Can you please help me understand his point of view on the advantages of the CSV method, and explain which method is best?
You're right. FTP has major issues with firewalls, and it especially doesn't work well over mobile (NATted) IPv4. HTTP POST works far, far better under such circumstances, if only because nobody accepts an "internet" connection that breaks HTTP.
Furthermore, HTTP is a lot easier on the device as well. It's just a single-socket protocol, with trivial read/write semantics on that socket.
Some more benefits? HTTP has almost-native support for compression (gzip). HTTP transmission can start before the input is complete. HTTP is easier to secure (HTTPS)...
No, there really is little reason to use FTP.
The 'CSV method' (I'd call it the 'FTP method', though) has the advantage of being familiar to the embedded developer. The receiving side will have to build some way of checking whether a new file has arrived, though, and that adds complexity.
The 'HTTP method' has several advantages:
HTTP is easy to implement on the sending side
No need to create a file-checker
You can reply to the embedded device if everything went OK
I actually just implemented a system like that (not that much data, but still) and used HTTP POST to send the data; I implemented the HTTP POST side myself. A minimal sketch of what the device side might look like follows.
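To make that concrete, here is a minimal sketch of the device-side POST, assuming a hypothetical ingest endpoint and field names (real firmware would do the equivalent in C, but the shape is the same):

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

// Sketch of the device-side HTTP POST. The endpoint URL and field names
// are hypothetical; the point is that each value travels with an
// identifier, unlike an anonymous CSV row.
class ReadingUploader
{
    static readonly HttpClient Client = new HttpClient();

    public static async Task UploadAsync(string deviceId, double kilowattHours)
    {
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["device_id"] = deviceId,
            ["kwh"] = kilowattHours.ToString("F3"),
            ["timestamp"] = DateTimeOffset.UtcNow.ToUnixTimeSeconds().ToString()
        });

        // The server-side PHP script reads these as $_POST['device_id'], etc.
        var response = await Client.PostAsync("https://example.com/ingest.php", form);
        response.EnsureSuccessStatusCode(); // the device learns immediately if it failed
    }
}
```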

Can uploads be too fast?

I'm not sure if this is the right place to ask this, but I'll do it anyway.
I have developed an uploader for the Umbraco CMS that lets people upload a queue of files in one go. It uses a simple Flash app that calls a .NET ashx handler to upload the files one at a time; when one is done, the next one starts.
Recently a user has hit a problem where one or two uploads go up fine, but the rest then fail. This happens both for him and for a client of his. After some debugging he thinks he has found the cause, but it seems strange, so I was wondering whether anyone else has seen this.
Both he and his client are on fibre broadband connections, so they have very fast upload speeds. When it was tested on a slower broadband connection, all the files uploaded without a problem. According to one of his developer friends, they had come across this before and had to put a slight delay in the upload script to make it work.
Does this sound possible? Had anyone else hit this problem? Is there a known workaround to prevent the uploads from failing?
I have not struck this precise problem before, but I have done a lot of DSL and broadband troubleshooting, so I will do my best to answer.
There are two possible causes for this particular symptom, both generally outside your control (I would have thought).
1) Packet loss
Where a link carries a very high volume of traffic, it can choose to simply drop data (e.g. anything beyond the link's configured maximum), but TCP should be handling that, and it is designed to cope with occasional drops, so this seems the less likely cause.
2) The receiving server
There may be an HTTP bottleneck in front of the server, or the receiving server itself (CPU, RAM, etc.) may be at capacity.
From a troubleshooting perspective, even if these symptoms shouldn't (in theory) exist, the fact is that they do, and you have a specific, reproducible case to investigate.
If you really need to understand what is happening, the next step might be to use a packet sniffer (like Wireshark) to work out at the packet level exactly what is going on.
You could also program directly against TCP sockets, so that you are working at the lower network layers and can see the responses, timeouts, etc. yourself.
If you control the receiving server, you can do the same from that end, or at least review its error logs to see what is being reported as a problem.
A really basic check is to run pathping against the receiving server (e.g. pathping your.server.example.com), if that is possible; it may highlight slow nodes on the way to the server, or packet loss between the local machine and the server.
The upshot? Put a throttle in the upload code, and that should at least make it work (a sketch is below).
Get in touch if you need any help analysing the Wireshark output.
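For what it's worth, here is a hedged sketch of that slow-down on the queue side; the 250 ms pause is a placeholder to tune, and uploadOne stands in for whatever actually sends a single file:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Sketch of the suggested slow-down: pace the upload queue so each file
// is followed by a short pause, giving the link/server time to drain.
static class PacedUploader
{
    public static async Task UploadQueueAsync(
        IEnumerable<string> files, Func<string, Task> uploadOne)
    {
        foreach (string file in files)
        {
            await uploadOne(file); // send one file (Flash/ashx, HttpClient, ...)
            await Task.Delay(250); // placeholder delay; tune empirically
        }
    }
}
```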
I have run into a similar problem with an MVC2 website using a Flash uploader and Firefox. The servers were load-balanced behind a Big-IP load balancer. What we found while debugging is that Flash, in Firefox, did not send the session ID on continuation requests, so the load balancer would send those continuation requests off to another server. Because the user had no session on the new server, the request failed.
If a file could be sent in one chunk, it uploaded fine; if it required a second chunk, it failed. Because of this, the whole upload would fail after an unpredictable number of files.
To fix it, I wrote a Silverlight uploader.

MonoTouch, test if remote device is found on local network

I am using MonoTouch to develop an app which connects to remote devices on a network. These devices hold data which can be accessed through HTTP queries.
If I provide a valid IP address for a controller, the app works perfectly; however, it hangs for a long time if the controller is not on the network. For this reason I thought it would be good to use the Reachability.cs class, which can be found here:
https://github.com/xamarin/monotouch-samples/blob/master/ReachabilitySample/reachability.cs
Instead of using google.com as the host, I am using the IP address of the controller. I have read that there is a bug in this class which makes it dislike having "http" at the beginning of the URL. Having now tried numerous things to get this working, I am out of ideas.
Does anyone have any suggestions? Perhaps I am reinventing the wheel here.
Having now tried numerous things to get this working, I am out of ideas.
From your question it's not clear what issue you're having with the Reachability class. Maybe you could edit it and add more details? E.g. what you have tried so far and how it fails: never works, throws/crashes, gives inconsistent results...
Does anyone have any suggestions?
If your main issue is that the check blocks your application's UI, then you could (and should anyway) do the connection and data transfer asynchronously (or on a separate thread) and, once they complete, update your UI from the main thread.
E.g. using WebClient.DownloadDataAsync, sketched below.
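A minimal sketch of that, using MonoTouch-era APIs and a hypothetical controller address:

```csharp
using System;
using System.Net;
using MonoTouch.UIKit; // classic MonoTouch namespace of this era

// Sketch: fetch the controller's data without blocking the UI. The
// controller URL is hypothetical; the error branch stands in for your
// "device not on the network" handling.
public class ControllerClient
{
    public void FetchStatus(UIViewController view)
    {
        var client = new WebClient();

        client.DownloadDataCompleted += (sender, e) =>
        {
            // The callback may not arrive on the UI thread; hop back first.
            view.InvokeOnMainThread(() =>
            {
                if (e.Error != null)
                    Console.WriteLine("Controller unreachable: {0}", e.Error.Message);
                else
                    Console.WriteLine("Got {0} bytes", e.Result.Length);
            });
        };

        // Hypothetical device address; replace with the controller's real IP.
        client.DownloadDataAsync(new Uri("http://192.168.1.50/status"));
    }
}
```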

Sending broadcast with Chrome Extensions

I'm coding an extension for a customer. One of the requirements is that the extension also works offline, because internet service there is not that reliable; my customer's business can't stop, but it can deal with "stale" data, which is a nice trade-off, I guess.
Therefore, I want to build some kind of distributed cache into the extension to synchronize local data among the N nodes running the same application, and in turn synchronize with the real database hosted on the internet.
To achieve that, I imagined I would need to send a network broadcast and listen for incoming broadcasts; every node that starts running my application would broadcast its IP address and become available as a new node in the distributed cache. Failover is very important here.
I googled the possibilities I initially thought of, but I don't think any of them will work. The first was to do it just with HTTP; the second was to use Google Native Client to write C++ code that could run networking code and do the broadcast, but it has limitations. Right now I'm considering Java applets, but I don't really know whether they have networking limitations, or whether Chrome extensions have any restrictions on Java applets.
Any ideas on how to do this, either with one of the approaches I suggested or with another one?
You could create an NPAPI plugin, which would not be restricted by Chrome's sandbox at all.
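Whatever host technology you end up with (an NPAPI plugin in C++, for instance), the announce/listen idea itself is small. Here is a hedged sketch in C# with a placeholder port, just to show the shape:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

// Sketch of the announce/listen discovery idea. An NPAPI plugin would do
// the same with native sockets; the port and message are placeholders.
static class NodeDiscovery
{
    const int Port = 45678; // arbitrary free UDP port

    public static void Announce()
    {
        using (var udp = new UdpClient { EnableBroadcast = true })
        {
            byte[] msg = Encoding.UTF8.GetBytes("CACHE-NODE-HELLO");
            udp.Send(msg, msg.Length, new IPEndPoint(IPAddress.Broadcast, Port));
        }
    }

    public static void Listen()
    {
        using (var udp = new UdpClient(Port))
        {
            while (true)
            {
                var remote = new IPEndPoint(IPAddress.Any, 0);
                byte[] data = udp.Receive(ref remote);
                // remote.Address is the announcing peer: add it to the node list.
                Console.WriteLine("Node announced: {0} ({1})",
                    remote.Address, Encoding.UTF8.GetString(data));
            }
        }
    }
}
```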
