Modifying First and Last Name in Hotmail via script - http

I would like to write a script that logs into hotmail and changes the first and last name. I am thinking of perl for this, although I imagine that any language would do.
How do I find out what requests to make? I'm guessing that I can do the change requests manually, determine what http requests are being sent to/from the server, determine what pattern is being sent, and then attempt to generate a similar pattern (substituting in the changes I want) in my script.
I took a look at ethereal, but I get a message "the installer is corrupted". I'm going to look at "Advanced http packet sniffer" next - anyone have experience with this tool?

Yes, you can use Perl (or any language with modules that support HTTP(S)). You can also use libraries people have already written, e.g. WWW::Hotmail or Hotwayd. Search CPAN for more modules like these.
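The capture-and-replay approach described in the question is workable: perform the name change once by hand, record the form POST the browser sends, then reproduce it from a script with the captured session cookie. A sketch in Python (the endpoint, field names, and cookie value are placeholders, not real Hotmail values):

```python
# Replay a captured form POST. URL, field names and cookie are hypothetical
# placeholders - substitute the values you sniffed from the manual request.
from urllib.parse import urlencode
from urllib.request import Request

fields = {"FirstName": "Jane", "LastName": "Doe"}
body = urlencode(fields).encode("ascii")

req = Request(
    "https://example.invalid/profile/update",  # placeholder endpoint
    data=body,
    headers={
        "Content-Type": "application/x-www-form-urlencoded",
        # replay the session cookie captured from the manual request
        "Cookie": "session=captured-cookie-value",
    },
)
# urlopen(req) would actually submit it; supplying data= makes it a POST
print(req.get_method())  # "POST"
```

In practice the hard part is the login step itself (cookies, redirects, hidden form tokens), which is exactly what modules like WWW::Hotmail wrap for you.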

Related

LDAP Proxy with inspection/modification of requests and responses

I need to build an LDAP proxy that I can program to inspect and modify the LDAP requests and responses - some of the LDAP requests/responses will simply be passed through, but for others I might want to send two different requests to the server and then combine the results (that's just one example - there will be other use cases).
I've looked at the proxying options documented for OpenLDAP's slapd, and I see that it has quite flexible configuration and 'overlays', but no capability to insert custom code.
So I think that's not a solution, unless slapd's source code is easy to modify, to insert my own modules plus hooks to/from the existing code (?)
An alternative would be to start with a friendly TCP/IP framework library (or even a complete TCP/IP proxy). Then I can link to an ASN.1 decoding/encoding library, and write the rest myself.
I'd prefer to avoid having to write (& learn) all the TCP/IP connection/message handling and event loop myself.
So I'm looking for the most complete starting point that does the hard work and gives me the flexibility to write what I need. Typical lazy/greedy approach :-)
Must be open source, ideally in C or C++, and I'll probably be targeting RHEL/CentOS 8 in a container.
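For anyone going the roll-your-own route on top of a TCP framework, the one protocol-specific piece needed before any ASN.1 decoding is message framing: every LDAPMessage is a BER SEQUENCE, so the proxy must buffer bytes until one complete message has arrived before decoding or forwarding it. A sketch of that length calculation (Python for brevity; the logic translates directly to C/C++):

```python
# Compute the total length of the first LDAPMessage in a byte buffer.
# An LDAPMessage is a BER SEQUENCE: tag 0x30, then a short- or long-form
# length, then the body.
def ldap_message_length(buf: bytes):
    """Return total length of the first LDAPMessage in buf,
    or None if more bytes are needed to decide."""
    if len(buf) < 2:
        return None
    if buf[0] != 0x30:           # LDAPMessage must be a BER SEQUENCE
        raise ValueError("not an LDAPMessage")
    first = buf[1]
    if first < 0x80:             # short-form length: one byte
        return 2 + first
    n = first & 0x7F             # long-form: next n bytes hold the length
    if len(buf) < 2 + n:
        return None
    body = int.from_bytes(buf[2:2 + n], "big")
    return 2 + n + body
```

Once a complete message is buffered, it can be handed to an ASN.1 library for inspection or written unchanged to the other socket for the pass-through cases.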

Calculate the number of visits based on downloaded GB

I have a website hosted on Firebase that totally went viral for a day. Since I wasn't expecting that, I didn't install any analytics tool. However, I would like to know the number of visits or downloads. The only metric I have available is the GB downloaded: 686.8 GB. But I am confused, because if I open the website with the Chrome console I get two different metrics about the size of the page: 319 KB transferred and 1.2 MB resources. Furthermore, not all of those things are transferred from Firebase; some come from other CDNs, as you can see in the screenshots. What is the proper way of calculating the visits I had?
Transferred metric is how much bandwidth was used after compression was applied.
Resources metric is how much disk space those resources use before they are compressed (for transfer).
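Given only those two numbers, a rough estimate is possible, with the caveat that assets served from other CDNs don't count against the Firebase total, so the real per-visit figure billed by Firebase may be below 319 KB (which would push the visit count higher). A back-of-the-envelope sketch:

```python
# Rough visit estimate from bandwidth alone. "Transferred" (319 KB) is the
# comparable number, since GB-downloaded is also measured after compression;
# "resources" (1.2 MB) is the size before compression and would undercount.
total_bytes = 686.8e9   # 686.8 GB downloaded (decimal units assumed)
per_visit = 319e3       # ~319 KB transferred per page load
visits = total_bytes / per_visit
print(f"~{visits / 1e6:.2f} million visits")  # on the order of 2 million
```

This assumes every hit was a cold-cache full page load; repeat visitors with cached assets transfer far less, so treat the result as an order-of-magnitude figure, not a count.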
True analytics requires an understanding of what is on the web. There are three classifications:
Humans, composed of flesh and blood, who overwhelmingly (though not exclusively) use web browsers.
Spiders (or search engines) that request pages with the notion that they obey robots.txt and will list your website in their search results for relevant queries.
Rejects (basically spammers and the unknowns) which include (though are far from limited to) content/email scrapers, brute-force password guessers, vulnerability scanners and POST spammers.
With this clarification in place what you're asking in effect is, "How many human visitors am I receiving?" The easiest way to obtain that information is to:
Determine what user agent requests are human (not easy, behavior based).
Determine the length of time a single visit from a human should count as.
Assign human visitors a session.
I presume you understand what a cookie is and how it differs from a session cookie. Obviously when you sign in to a website you are assigned a session. If that session cookie is not sent to the server on a page request you will in effect be signed out. You can make session cookies last for a long time and it will come down to factors such as convenience for the visitor and if you directly count those sessions or use it in conjunction with something else.
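Once you have settled on an idle cutoff, folding raw hits into visits is mechanical. A sketch (Python, with a 30-minute gap picked arbitrarily; the visitor id would come from your session cookie):

```python
# Fold raw (visitor, timestamp) hits into visits: a new visit starts when
# the same visitor has been idle for longer than the cutoff.
SESSION_GAP = 30 * 60  # 30 minutes, a common but arbitrary choice

def count_visits(hits):
    """hits: iterable of (visitor_id, unix_timestamp), in any order."""
    last_seen, visits = {}, 0
    for visitor, ts in sorted(hits, key=lambda h: h[1]):
        if visitor not in last_seen or ts - last_seen[visitor] > SESSION_GAP:
            visits += 1          # first hit, or returned after the gap
        last_seen[visitor] = ts
    return visits
```

For example, three hits from one visitor with a long gap in the middle plus one hit from another visitor count as three visits, not four hits.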
Now your next thought is likely, "But how do I count downloads?" Since you mention PHP on your website, I can give you some code that should make sense to you. If you just link directly to the file, the best you can do is count clicks via a click event on the anchor element, and a download that gets canceled (because it was a mistake, or for some other reason) still counts, which makes it even more subjective than my suggestion. Granted, my suggestion can still be subjective (e.g. they decide they don't actually want the file and cancel before completion), and whether they use the download is another aspect to consider. That said, if you want the server to give you a download count, you'd want to do the following:
You may want to use Apache's rewrite module (or the equivalent in other HTTP servers) so that PHP handles the download.
You may need to ensure Apache has the proper handling for PHP (e.g. AddType application/x-httpd-php5 .exe .msi .dmg) so your server knows to let PHP run on the requested file.
You'll want to use PHP's file_exists() with an absolute file path on the server for the sake of security.
You'll want to ensure that you set the correct mime for the file via PHP's header() as you should expect browsers to be horrible at guessing.
You absolutely need to use die() or exit() to avoid Gecko (Firefox) bugs: if your script leaks even whitespace after the file contents, the browser would interpret it as part of the file, likely causing corruption.
Here is the code for PHP itself:
// $path_absolute (absolute directory on disk) and $mime are assumed set earlier.
$file = basename(rawurldecode($_SERVER['REQUEST_URI'])); // basename() strips any path components
if (!file_exists($path_absolute.$file)) {
    header('HTTP/1.1 404 Not Found');
    exit();
}
header('HTTP/1.1 200 OK');
header('Content-Type: '.$mime);
header('Content-Length: '.filesize($path_absolute.$file));
readfile($path_absolute.$file); // streams the file instead of loading it all into memory
exit();
For counting downloads if you want to get a little fancy you could create a couple of database tables. One for the files (download_files) and the second table for requests (download_requests). Throw in basic SQL queries and you're collecting data. Record IPv6 (Storing IPv6 Addresses in MySQL) and you'll be able to discern from a query how many unique downloads you have.
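As a sketch of that two-table idea (SQLite here so it runs self-contained; the column names are my own guesses, not a prescribed schema, and the same queries work in MySQL):

```python
# One row per file, one row per request; unique downloads per file then
# fall out of a COUNT(DISTINCT ip).
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE download_files (
  id   INTEGER PRIMARY KEY,
  name TEXT NOT NULL
);
CREATE TABLE download_requests (
  id        INTEGER PRIMARY KEY,
  file_id   INTEGER NOT NULL REFERENCES download_files(id),
  ip        TEXT NOT NULL,        -- IPv6 as text here; MySQL can pack it binary
  requested INTEGER NOT NULL      -- unix timestamp
);
""")
db.execute("INSERT INTO download_files (id, name) VALUES (1, 'app.msi')")
db.executemany(
    "INSERT INTO download_requests (file_id, ip, requested) VALUES (1, ?, ?)",
    [("2001:db8::1", 100), ("2001:db8::1", 200), ("2001:db8::2", 300)],
)
unique = db.execute(
    "SELECT COUNT(DISTINCT ip) FROM download_requests WHERE file_id = 1"
).fetchone()[0]
print(unique)  # 2 unique downloaders despite 3 requests
```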
Back to human visitors: it takes a very thorough study to understand the differences between humans and bots. Things like Captcha are garbage and are utterly annoying. You can get a rough start by requiring a cookie to be sent back on requests though not all bots are ludicrously stupid. I hope this at least gets you on the right path.

Serving static content programmatically from a Servlet - does the spec have anything available or should I roll a custom one?

I have a DB with original file names, locations of the files on disk, and metadata such as the user that owns each file. The files on disk have scrambled names. When a user requests a file, the servlet will check whether he's authorized, then send the file under its original name.
While researching the subject I've found several cases that cover the issue, but nothing specific to mine.
Essentially there are 2 solutions:
A custom servlet that handles headers and other stuff the Default Servlet containers don't: http://balusc.omnifaces.org/2009/02/fileservlet-supporting-resume-and.html
Then there is the quick and easy one of just using the Default Servlet and doing some path remapping. For example, in Undertow you configure the Undertow subsystem and add file handlers in standalone.xml that map http://example.com/content/ to /some/path/on/disk/with/files .
So I am leaning towards solution 1, since solution 2 is a straight path remap and I need to change file names on the fly.
I don't want to reinvent the wheel, and both solutions are non-standard, so if I decide to migrate to an app server other than WildFly, it will be problematic. Is there a better way? How would you approach this problem?
While your problem is a fairly common one there isn't necessarily a standards based solution for every possible design challenge.
I don't think the #2 solution will be sufficient - what if two threads try to manipulate the file at the same time? If someone got the link to the file could they share it?
I've implemented something very similar to your #1 solution - the key there is that even if the link to the file got out no one could reuse the link as it requires security. You would just "return" a 401 or 403 for the resource.
Another possibility depends on how you're hosted. Amazon S3 allows you to generate a signed URL that has a limited time to live. In this way your server isn't sending the file directly. It is either sending a redirect or a URL to the front end to use. Keep the lifetime at like 15 seconds (depending on your needs) and then the URL is no longer valid.
I believe that the other cloud providers have a similar capability too.
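When you are not hosted somewhere with that feature, the same expiring-link idea can be rolled by hand with an HMAC. A minimal sketch; the secret, the query parameter names, and the 15-second default are all illustrative:

```python
# Expiring download links: sign "path:expiry" with a server-side secret,
# and verify both the signature and the expiry on each request.
import hashlib, hmac, time

SECRET = b"server-side secret"   # illustrative; keep this out of source control

def sign_url(path, ttl=15, now=None):
    expires = int(now if now is not None else time.time()) + ttl
    msg = f"{path}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&sig={sig}"

def verify(path, expires, sig, now=None):
    now = int(now if now is not None else time.time())
    msg = f"{path}:{expires}".encode()
    good = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    # constant-time compare, then reject anything past its expiry
    return hmac.compare_digest(good, sig) and now <= int(expires)
```

A leaked URL is then useless after the TTL, and tampering with either the path or the expiry invalidates the signature.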

Is it insecure to execute code via an HTTP URL?

I'm suspicious of the installation mechanism of Bioconductor. It looks like it is just executing (via source()) the R script from an HTTP URL. Isn't this an insecure approach vulnerable to a man-in-the-middle attack? I would think that they should be using HTTPS. If not, can someone explain why the current approach is acceptable?
Yes, you are correct.
Loading executable code over a cleartext connection is vulnerable to a MITM.
Unless the code is loaded over HTTPS, where SSL/TLS can encrypt and authenticate the connection, or unless it has been signed and verified at the client, a MITM attacker could alter the input stream and cause arbitrary code to be executed on your system.
Allowing code to execute via an HTTP GET request essentially means you're allowing user input to be directly processed by the application, thus directly influencing its behavior. While this is often what the developer wants (say, to query specific information from a database), it may be exploited in the ways you have already mentioned (e.g. MITM). This is often (though I'm not referring to Bioconductor specifically) a bad idea, as it opens the system to possible XSS/(B)SQLi attacks, amongst others.
However, the URL - http://bioconductor.org/biocLite.R - is essentially just a file placed on the web server, and from what it seems, source() is being used to directly download it. There does not appear to be any user input anywhere in this example, so no, I wouldn't mark it as unsafe; however, your reasoning is indeed correct.
Note: This is simply referring to GET requests - E.g: http://example.com/artists/artist.php?id=1. Such insecurities could be exploited in many HTTP requests such as Host Header attacks, however the general concept is the same. No user-input should ever be directly processed by the application in any way.
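One client-side mitigation along the "signed and verified" line is to refuse to execute fetched code unless its digest matches one obtained out-of-band. A minimal sketch (Python rather than R; in this example the digest is computed inline purely to keep it self-contained, which of course provides no real protection):

```python
# Verify a fetched script's SHA-256 digest before executing it. In real use
# the expected digest must come over a trusted channel (e.g. HTTPS page or
# signed release notes), not alongside the script itself.
import hashlib

def run_if_trusted(script_bytes, expected_sha256):
    actual = hashlib.sha256(script_bytes).hexdigest()
    if actual != expected_sha256:
        raise RuntimeError("script digest mismatch; refusing to execute")
    exec(script_bytes.decode())  # only reached when the digest matches

script = b"x = 1 + 1"
digest = hashlib.sha256(script).hexdigest()  # in reality, shipped separately
run_if_trusted(script, digest)
```

A MITM who alters the stream then changes the digest, and the script is rejected instead of executed.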

appropriate user-agent header value

I'm using HttpBuilder (a Groovy HTTP library built on top of Apache's HttpClient) to send requests to the Last.fm API. The docs for this API say you should set the user-agent header to "something appropriate" in order to reduce your chances of getting blocked.
Any idea what kind of values would be deemed appropriate?
The name of your application including a version number?
I work for Last.fm. "Appropriate" means something which will identify your app in a helpful way to us when we're looking at our logs. Examples of when we use this information:
investigating bugs or odd behaviour; for example if you've found an edge case we don't handle, or are accidentally causing unusual load on a system
investigating behaviour that we think is inappropriate; we might want to get in touch to help your application work better with our services
we might use this information to judge which API methods are used, how often, and by whom, in order to do capacity planning or to get general statistics on the API ecosystem.
A helpful (appropriate) User-Agent:
tells us the name of your application (preferably something unique and easy to find on Google!)
tells us the specific version of your application
might also contain a URL at which to find out more, e.g. your application's homepage
Examples of unhelpful (inappropriate) User-Agents:
the same as any of the popular web browsers
the default user-agent for your HTTP Client library (e.g. curl/7.10.6 or PEAR HTTP_Request)
We're aware that it's not possible to change the User-Agent sent when your application is browser-based (e.g. Javascript or Flash) and don't expect you to do so. (That shouldn't be a problem in your case.)
If you're using a 3rd party Last.fm API library, such as one of the ones listed at http://www.last.fm/api/downloads , then we would prefer it if you added extra information to the User-Agent to identify your application, but left the library name and version in there as well. This is immensely useful when tracking down bugs (in either our service or in the client libraries).
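Putting that advice together, a header along these lines would qualify; the application name, version, and URL are invented, and the snippet uses Python's urllib rather than HttpBuilder, but the header itself is the same either way:

```python
# Set a descriptive User-Agent: app name, version, and a URL for contact.
from urllib.request import Request

UA = "MyScrobbler/1.4 (+https://example.com/myscrobbler)"
req = Request(
    "https://ws.audioscrobbler.com/2.0/",
    headers={"User-Agent": UA},
)
# urllib normalizes header names to "Xxxx-xxxx" capitalization
print(req.get_header("User-agent"))
```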
