Passing a variable from the registry into HTTP headers in IE

Forgive me if this is a naive question
Some of the software we use relies on the IP address of the client device to set its location. It currently grabs this from REMOTE_ADDR, passed through the web page. This is fine on physical devices, but I'm struggling to make it work across the organisation on VDI without using user-assigned desktops (which I really don't want to do).
Is there a way to insert a custom header (e.g. HTTP-VDI-PHYSICAL) into IE emulation mode in Edge that would take its value from HKCU\Volatile Environment\ViewClient_IP_Address?
I have looked at add-ins such as Modify HTTP Header and applications like Fiddler, but neither really offers what I need.
Thank you.
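For what it's worth, reading that value back out of the registry is the easy part; getting it into a header is what still needs a proxy or similar. A minimal Python sketch of the registry read, assuming the value name given in the question:
```python
# Minimal sketch: read the per-session client IP that the VDI broker publishes
# under HKCU\Volatile Environment (value name taken from the question above;
# injecting it into a request header would still need a local proxy or similar).
import winreg

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Volatile Environment") as key:
    client_ip, _ = winreg.QueryValueEx(key, "ViewClient_IP_Address")

print(client_ip)  # the physical endpoint's address, not the VDI guest's
```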

Related

Can we monitor Windows network information in real time using minifilters?

I am trying to write a minifilter that more or less captures everything that happens in the kernel, and I was wondering if I could also capture "URLs"/network information. I stumbled upon WinDivert, which seems to use a .sys driver, and also another thread which says we cannot get URLs in driver mode, which leaves me a bit confused. If that is true, how does WinDivert do it?
I understand there is something called a network redirector under minifilters on learn.microsoft.com which uses a DLL and a .sys file (same as WinDivert), but I could not find any resources that would help me build one.
Is there a better way to capture all visited URLs in real time?
Thanks in advance for any help or directions.
You're looking for the Windows Filtering Platform and Filtering Platform Callout Drivers, which are what WinDivert uses. These give you the data that goes out over the wire, so for plain old HTTP over port 80 you can parse the requests to obtain the URL. This won't work for HTTPS, since you're getting encrypted data over the wire; you'd have to implement some kind of MITM interception technique to handle that.
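To illustrate the plain-HTTP case: once something like a callout driver hands you the raw request bytes, reconstructing the URL is simple string work. A rough Python sketch (the capture plumbing itself is out of scope here):
```python
# Rebuild a URL from a raw, unencrypted HTTP/1.x request: take the path from
# the request line and the host from the Host header.
def url_from_request(raw: bytes) -> str:
    head = raw.split(b"\r\n\r\n", 1)[0].decode("latin-1")
    lines = head.split("\r\n")
    _method, path, _version = lines[0].split(" ", 2)
    host = next((line.split(":", 1)[1].strip()
                 for line in lines[1:] if line.lower().startswith("host:")), "")
    return f"http://{host}{path}"

print(url_from_request(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"))
# -> http://example.com/index.html
```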

Chrome/Firefox extension to kill network communication?

So I'm building a web app, and I want to emulate a network failure in the browser to see if the client-side JavaScript handles it gracefully. I know I can just disconnect my network connection, but that also disconnects my email, Pandora, Skype: all things that are marginally vital to my non-productivity. Is there an easy way to kill network communication for just one tab in either of these browsers? Or (I'm on Linux) can I block a single PID from network communication while still allowing the rest (even if it's the same program) through?
Edit: Shoot, I just realized that I'm working on localhost, and that may not apply to what I'm asking for.
Does File -> Work Offline work for you? It should be in the Firefox menu.
You could always use invalid proxy settings! I recall some plugins that let you easily switch proxy profiles, so you could even have a profile for "dead proxy" and enable it whenever you want no internet.
Turns out there are more sophisticated options: a dedicated site blocker for Chrome. That way you could still use other sites that help your non-productivity while still blocking the desired one!

Simulating a remote website locally for testing

I am developing a browser extension. The extension works on external websites we have no control over.
I would like to be able to test the extension. One of the major problems I'm facing is displaying a website 'as-is' locally.
Is it possible to display a website 'as-is' locally?
I want to be able to serve the website exactly as-is locally for testing. This means I want to simulate the exact same HTTP data, including iframe ads, etc.
Is there an easy way to do this?
More info:
I'd like my system to act as closely to the remote website as possible. I'd like to run a fetch command, for example, which would then let me go to the site in my browser (with the internet off) and get exactly what I would get otherwise (including content that does not come from a single domain: Google ads, etc.).
I don't mind using a virtual machine if this helps.
I figured this would be quite a useful thing for testing, especially when I have a bug I need to reliably reproduce on sites that have many random factors (which ads show, etc.).
As was already mentioned, caching proxies should do the trick for you (BTW, this is the simplest solution). There are quite a lot of different implementations, so you just need to spend some time selecting a proper one (in my experience, Squid is a good choice). Anyway, I would like to highlight two other interesting options:
Option 1: Betamax
Betamax is a tool for mocking external HTTP resources such as web services and REST APIs in your tests. The project was inspired by the VCR library for Ruby. Betamax works by intercepting HTTP connections initiated by your application and replaying previously recorded responses.
Betamax comes in two flavors. The first is an HTTP and HTTPS proxy that can intercept traffic made in any way that respects Java’s http.proxyHost and http.proxyPort system properties. The second is a simple wrapper for Apache HttpClient.
BTW, Betamax has a very interesting feature for you:
Betamax is a testing tool and not a spec-compliant HTTP proxy. It ignores any and all headers that would normally be used to prevent a proxy caching or storing HTTP traffic.
Option 2: Wireshark and replay proxy
Grab all the traffic you are interested in using Wireshark and replay it. I would say it is not that hard to implement the required replay tool yourself, but you can also use an available solution called replayproxy, which:
parses HTTP streams from .pcap files,
opens a TCP socket on port 3128 and listens as an HTTP proxy, using the extracted HTTP responses as a cache while refusing all requests for unknown URLs.
Such an approach provides you with full control and a bit-for-bit precise simulation.
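If you wanted to prototype the replay side yourself, the core idea fits in a few lines. A toy Python sketch, assuming the {url: response} cache has already been extracted from the .pcap (the real replayproxy handles far more, e.g. persistent connections):
```python
# Toy replay proxy: answer only from a pre-recorded cache, refuse everything
# else. Browsers configured to use an HTTP proxy send the absolute URL in the
# request line, which is what we key on.
import socket

recorded = {  # hypothetical cache extracted from a capture
    "http://example.com/": b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nhi",
}

srv = socket.socket()
srv.bind(("127.0.0.1", 3128))
srv.listen(5)
while True:
    conn, _addr = srv.accept()
    with conn:
        request = conn.recv(65536).decode("latin-1")
        parts = request.split(" ", 2)
        url = parts[1] if len(parts) > 1 else ""
        conn.sendall(recorded.get(url, b"HTTP/1.1 404 Not Found\r\n\r\n"))
```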
I don't know if there is an easy way, but there is a way.
You can set up a local webserver, something like IIS, Apache, or minihttpd.
Then you can grab the website contents using wget (it has an option for mirroring), and many browsers have a "save whole web page" option that will grab everything, like images.
Ads will most likely come from remote sites, so you may have to manually edit those lines in the HTML to either not reference the actual ad servers or set up a mock ad yourself (like a banner image).
Then you can navigate your browser to http://localhost to visit your local website, assuming port 80, which is the default.
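Putting those steps together, a minimal Python stand-in for the local webserver (the mirror directory name is an assumption; something like `wget --mirror --page-requisites example.com` would produce it):
```python
# Serve a wget-mirrored site from localhost. Port 80 needs elevated rights;
# use 8080 and browse to http://localhost:8080 if that's a problem.
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

handler = partial(SimpleHTTPRequestHandler, directory="example.com")  # assumed mirror dir
HTTPServer(("127.0.0.1", 80), handler).serve_forever()
```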
Hope this helps!
I assume you want to serve a remote site that's not under your control. In that case you can use a proxy server and have that server cache every response aggressively. However, this has its limits: first, you will have to visit every page you intend to use through this proxy (with a browser, for example); second, you will not be able to emulate form processing.
Alternatively you could use a spider to download all content of a certain website. Depending on the spider software, it may even be able to download JavaScript-built links. You then can use a webserver to serve that content.
The service http://www.json-gen.com provides mocks for HTML, JSON and XML via REST. That way, you can test your frontend separately from the backend.

MonoTouch, test if remote device is found on local network

I am using MonoTouch to develop an app which will connect to remote devices on a network. These devices have data which can be accessed through HTTP queries.
If I provide a valid IP address for a controller, the app works perfectly; however, it hangs for a long time if the controller is not on the network. For this reason I thought it would be good to use the Reachability.cs class, which can be found here:
https://github.com/xamarin/monotouch-samples/blob/master/ReachabilitySample/reachability.cs
Instead of using google.com as the host, I am using the IP address of the controller. I have read that there is a bug in this class which causes it to dislike having "http" at the beginning of the URL. Having now tried numerous things to get this working, I am out of ideas.
Does anyone have any suggestions? Perhaps I am reinventing the wheel here.
Having now tried numerous things to get this working, I am out of ideas.
From your question it's not clear what issue you're having with the Reachability class. Maybe you could edit it and add more details? E.g. what you have tried so far and how it fails: never works, throws/crashes, gives inconsistent results...
Does anyone have any suggestions?
If your main issue is blocking the UI of your application, then you could (and should anyway) do your connection and data transfer asynchronously (or on a separate thread) and, once completed, update your UI (from the main thread).
E.g. using WebClient.DownloadDataAsync
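The shape of that pattern, sketched in Python purely for illustration (in MonoTouch you would do the same with WebClient.DownloadDataAsync or a background thread, marshalling back to the UI via InvokeOnMainThread; the address below is hypothetical):
```python
# Probe the device off the UI thread with a short timeout so an absent
# controller fails fast instead of hanging the app.
import socket
import threading

def probe(host: str, port: int = 80, timeout: float = 2.0) -> None:
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        reachable = True
    except OSError:
        reachable = False
    # a UI app would hand this result back to the main thread here
    print("reachable" if reachable else "unreachable")

t = threading.Thread(target=probe, args=("192.168.1.50",))  # hypothetical controller IP
t.start()
t.join()  # only so this demo script waits; a UI app would not block like this
```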

Identity Information over HTTP?

If a person clears their cookies and changes their IP address, is there ANY way for a website to identify that computer as one that has "been here before"? In other words, no identifying information like the MAC address can ever be known over HTTP, right? (I've looked through the list of headers and only see cookies and the user agent.)
Also, the same goes for a mobile device. If the mobile device clears cookies, is there any way to identify it as a repeat visitor?
Thanks!
Chad
If you look at a site such as browserspy, you will see that a website can find out quite a bit more from a browser than the stuff you see just by looking at your request headers. Security researchers have investigated the idea of uniquely identifying a browser based on those characteristics (e.g. what plugins you have installed, what fonts you have installed, etc.). But nothing like this is truly reliable (for one thing, much of it changes simply by switching to a different browser on the same computer). There is certainly no "official" unique identifier such as a MAC address.
Not at the application level. As you correctly determined, the user can change everything that is sent in an HTTP request.
As for the MAC address: it is used at the link layer and is not transmitted across multiple hops when making any sort of internet communication, so unless you are one hop away from the client, you cannot use this information either.
Bottom line, can't really be done. If someone really wants to be forgotten, then they will be forgotten.
There are other ways to identify individual users without cookies -- based on a variety of information leaked by the browser and associated plugins. Check out Panopticlick for an example. It's probably not as effective with mobile browsers because (as far as I know) they don't have plugins like desktop browsers.
As others have said, no, there's nothing you can do for normal browser access.
For mobile devices (at least via WAP) there is an extra CGI parameter (the name of which escapes me) which the gateway is supposed to populate with an identifier unique to that mobile device's phone number; however, implementations vary.
C.
If there were a (toggle-able) program available that would intercept requests for the font list at the OS level and return a bogus list resembling a machine's list right after the OS has been installed (perhaps modified slightly each time by including or excluding some randomly chosen font not from the basic list), then a huge percentage of the identifying bits could be removed from your browser's "fingerprint", and you would no longer be uniquely identified but would blend in better with the herd or the flock.
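As a toy illustration of that idea (the font names and probabilities here are made up):
```python
# Report a near-stock font list, randomly adding or dropping one entry per
# query so the reported list carries fewer stable identifying bits.
import random

STOCK_FONTS = ["Arial", "Calibri", "Courier New", "Segoe UI", "Times New Roman"]
EXTRAS = ["Georgia", "Tahoma", "Verdana"]

def reported_fonts() -> list:
    fonts = list(STOCK_FONTS)
    if random.random() < 0.5:
        fonts.append(random.choice(EXTRAS))
    else:
        fonts.remove(random.choice(fonts))
    return sorted(fonts)

print(reported_fonts())
```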
