I did some research but couldn't find any good literature on how to consolidate multiple instances of WordPress Multisite into one instance. To be clear, I have three multisites and want to merge them into one; is there a good/easy way to do this other than exporting/importing all of the sites manually?
Here's one answer that I found. It's not fast, but it definitely works.
I will refer to the first MS as the multisite that you want to remove a site from, and the second MS as the multisite that you want to add that site to.
Step 1
In phpMyAdmin (or similar) for the first MS, export the tables relevant to the site that you're taking. They'll probably look something like:
wp_7_commentmeta
wp_7_comments
wp_7_links
wp_7_options
wp_7_postmeta
wp_7_posts
wp_7_terms
wp_7_term_relationships
I'm going to call the middle number in these table names the site number. In this case, the site number is 7.
Step 2
Go to phpMyAdmin for the second MS and see what the largest site number in its DB is.
Step 3
In a code editor, do a find/replace on the exported .sql file: replace your current site number with a site number one larger than the biggest site number in the second MS DB. For instance, if the largest site number in the second MS was 460, then using the example above I would find "wp_7_" and replace it with "wp_461_".
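If you'd rather script the replacement than do it in an editor, here's a minimal PHP sketch; the file names and site numbers are just the ones from the example, and it assumes the dump is small enough to hold in memory:

<?php
// Rewrite the table prefix in the dump exported from the first MS
// so it won't collide with existing tables in the second MS.
$dump = file_get_contents('site-export.sql');
$dump = str_replace('wp_7_', 'wp_461_', $dump);
file_put_contents('site-import.sql', $dump);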
Step 4
Once you've done this, import the updated .sql file into the second MS DB.
Step 5
Enter the options table for the site you just imported (now wp_461_options) and make sure that the "siteurl" and "home" options correspond to the new site.
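If you'd rather script this step than click through phpMyAdmin, a rough sketch; the credentials and URL are placeholders for your own values:

<?php
// Point the imported site's "siteurl" and "home" options at the new site.
$db = new mysqli('localhost', 'db_user', 'db_pass', 'second_ms_db');
$stmt = $db->prepare(
    "UPDATE wp_461_options SET option_value = ? WHERE option_name IN ('siteurl', 'home')"
);
$url = 'https://example.com/mysite';
$stmt->bind_param('s', $url);
$stmt->execute();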
Step 6
Go to the wp_blogs table in the DB and insert new information for the site you just added. Be sure that the "blog_id" corresponds to the site number you just created (in my example I would make sure that "blog_id" was 461).
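Again as a sketch only; the domain and path are placeholders, and site_id is usually 1 on a single-network install:

<?php
// Register the imported site in wp_blogs so the network knows about it.
$db = new mysqli('localhost', 'db_user', 'db_pass', 'second_ms_db');
$db->query(
    "INSERT INTO wp_blogs (blog_id, site_id, domain, path, registered, last_updated)
     VALUES (461, 1, 'example.com', '/mysite/', NOW(), NOW())"
);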
Step 7
You should now be able to safely point your domain's nameservers at the second MS, and the site should work.
If anyone knows of a way to automate this, or to make it easier to do in bulk, that would be phenomenal. Otherwise, this way will work, albeit slowly.
There are a number of third-party tools which can make the process a whole lot simpler. BackupBuddy, for example, can deal with exporting Multisite sites and importing them into existing installs.
I've used this successfully to move a few sites around, but success rates can often depend on how many user accounts are involved, as Multisite maps all accounts onto a single users table.
IMHO, unless there's a very good reason to consolidate them, it can often be a whole lot easier to keep the installations separate.
Let's say you have a small project. The team has estimated all the tasks as 300 days of effort.
I have 5 developers on the team, and I want MS Project to tell me when the project will complete, considering the vacations and working schedules of my team members.
In order to do that:
I'm creating a task "Development" with work of "300d" and task type "Fixed Work".
Then I create 5 resources and specify a 2-week vacation for one of the developers somewhere in the middle of the schedule.
Then I assign my 5 development resources to this task.
The problem is that the 300d is distributed evenly across all 5 resources (60d each). If one of them has a two-week vacation in the middle, the work finishes 2 weeks later because of that one resource, while the other 4 sit doing nothing for those 2 weeks. The total duration is 70 days.
(screenshot: what I get)
What I want is for the work to be distributed unevenly across all 5 resources in a way that finishes the whole task as early as possible, making the most of the usable time from all developers.
That's how I would expect it to work. In that particular case I was distributing hours manually.
(screenshot: what I would expect)
Is there a possibility in MS Project to do something like this? Or am I doing something wrong?
There are a couple of issues with how you are approaching the problem.
1. Rather than planning the estimated manpower hours for the entire project on a single line item, you should plan out the tasks that will need to be done to accomplish "Small Project"
If you discretely plan out the tasks needed to satisfy the scope of "Small Project", you can establish dependency (predecessor/successor) relationships between your tasks and figure out which tasks need to be done before you can move on to others. This will give you a good idea of the total duration of the project, and it will likely be more accurate than relying on the manpower-hours estimate your developers give you. Find out what tasks they actually need to do, not just how many hours they think the whole project will take. This also lets you plan the utilization of your resources better, because you can assign specific resources to specific tasks; not all of your resources need to be on every task.
2. In general I would avoid using the Task Usage form.
I noticed you are altering resources in the Task Usage form. Unless you are really experienced with Microsoft Project, I would avoid touching that form, as it's easy to set the period of performance of the resources assigned to a task to be different from the period of performance of the task itself. This will cause MS Project to behave unusually, and it can be hard for an inexperienced user to understand why, which usually leads to pain and frustration. That brings me to my next bit of advice:
3. If you really want to specify a resource's vacation time, it's better to adjust the calendar associated with the resource to exclude those dates as working dates.
In your situation, with only 5 resources on your project, this can be fairly easy to do. You can accomplish it in two different ways (I'll start with the easiest option):
1. You can add resource-specific exclusion dates to the default calendar in your project
You can accomplish this by opening the Resource Sheet table, then clicking the Project tab and then Change Working Times. If you have the Resource Sheet open instead of the Gantt chart, you can specify the resource that is going to be affected by the exceptions:
In this example, I would be excluding (removing) 8/23/21 through 9/3/21 as working days for the SW Engineer resource, without needing to change the calendar used by the resource completely.
2. You can completely change the calendar used by particular resources to be different from the default calendar set for the project.
You can accomplish this by going into the Resource Sheet and opening the Base Calendar column:
From here you can assign any calendar that exists in the project to the resource. Of course this means you would need to create the calendars and assign exclusion dates to them.
To create a calendar, click the Project tab then click Change Working Times. Click Create New Calendar on the form that opens up and give it a name:
From there you can add exclusion dates as needed.
Note: In a larger project with many resources, I would recommend not messing with the resource calendars at all; it just gets hard to manage when there are a lot of resources.
Rookie S3 user here, looking to troubleshoot a problem I ran into while helping some friends with their business. They sell courses through WooCommerce and attach the course files through WordPress. Each course centers on a live video call: the WooCommerce product initially holds the details for the upcoming call, and afterward the audio and transcripts are added to the product for sale. The problem is that people who bought the course before the call don't receive these files unless permission is granted manually, which is redundant and troublesome.
My thought was to change the purchase to instead hand out a link into an Amazon S3 bucket labeled "courses", giving the buyer access to a specific folder within it. Ideally this link would let them see new files as they go live, and it would also limit the amount of data stored on the website's dedicated server (saving some $$$ on hosting fees; two birds, one stone).
The problem is that I'm a complete novice at this style of coding, so I'm unsure how to do it, although I think it's possible, given that an answer may already be out there or I can bull and jam my way through a section of code. The reason I want courses as folders inside one bucket, rather than individual buckets, is that the site is nearing 200 courses; converting each to its own bucket would put me well over the 100-bucket limit, in addition to being an exercise in repetition. Any advice or help would be greatly appreciated, thanks!
If I understand you correctly, you want to host content on S3, but want to achieve some degree of access control on that content.
The most straightforward way to do this, the one that involves minimal S3 integration, is to pre-sign an S3 URL for the user. The pre-signed URL would be good for a limited time and could be generated by your WordPress site (which holds the AWS access credentials) directly before redirecting the user to that URL.
https://docs.amazonaws.cn/zh_cn/aws-sdk-php/guide/latest/service/s3-presigned-url.html explains more about this from a PHP perspective, which I'm guessing is the right lens for you.
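For illustration, a minimal sketch using version 3 of the AWS SDK for PHP; the bucket name comes from your question, but the key, region, and expiry are assumptions:

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'region'  => 'us-east-1',
    'version' => 'latest',
]);

// Build a GetObject request for one file in the course's folder.
// The key is a made-up example path.
$cmd = $s3->getCommand('GetObject', [
    'Bucket' => 'courses',
    'Key'    => 'course-42/transcript.pdf',
]);

// Sign it so the link only works for the next 15 minutes.
$url = (string) $s3->createPresignedRequest($cmd, '+15 minutes')->getUri();

// Redirect the logged-in customer to the short-lived URL.
header('Location: ' . $url);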
Pre-signing allows some modicum of access control (the users can still share the document after they've accessed it, but at least it's not just public).
If you don't need access control, you can make the S3 object public and omit the signing altogether.
This question relates to WordPress's wp-cron function but is general enough to apply to any DB-intensive calculation.
I'm creating a site theme that needs to calculate a time-decaying rating for all content in the system at regular intervals. This rating determines the order of posts on the homepage, which is paged to allow visitors to potentially view all content. This rating value needs to be calculated frequently to make sure the site has fresh content listed in the proper order.
The rating calculation itself is not heavy, but it needs to run for potentially thousands of items, and doing that hourly via wp-cron will start to cause problems for sites with lots of content. Ignoring the impact on page load (wp-cron fires on page loads once a certain interval has elapsed), at some point the script will hit a time limit. Setting the site up to use plain ol' cron will solve the page-loading issue but not the timeout.
Assuming that I have no control over the sites that this will run on, what's the best way to handle this rating calculation on a regular basis? A few things that came to mind:
Only calculate the rating for the most recent 1,000 posts, assuming that the rest won't be seen much. I don't like the idea of ignoring all old content, though.
Calculate the first, say, 100 or so, then only calculate the rating for older groups if those pages are loaded. This might be hard to get right, though, and lead to incorrect listing and ratings (which isn't a huge problem for older content but something I'd like to avoid)
Batch process 100 or so at regular intervals, keeping track of the last one processed. This would cycle through the whole body of content eventually.
Any other ideas? Thanks in advance!
Depending on the host, you're in for a potentially sticky situation. Let me outline a couple of ideal cases and you can pick/choose where you need to.
Option 1
Mirror the database first and use a secondary app (WordPress or otherwise) to do the calculations asynchronously against that DB mirror. When they're done, they can update a static file in the project root, write data to a shared Memcached instance, trigger a POST to WordPress' admin_post endpoint to write some internal state, whatever.
The idea here is that you're removing your active site from the equation. The last thing you want to do is have a costly cron job lock the live site's database or cause queries to slow down as it does its indexing.
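As a sketch of the hand-off, assuming the mirror-side job publishes its results to a shared Memcached instance (the server address and key name are made up):

<?php
// After computing ratings against the DB mirror, publish the results
// somewhere the live site can read them cheaply.
$cache = new Memcached();
$cache->addServer('cache.internal', 11211);

// $ratings would be the post_id => score map the job just computed.
$ratings = [123 => 0.87, 456 => 0.42];
$cache->set('decayed_post_ratings', $ratings, 3600); // refreshed hourly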
Option 2
Offload the calculation entirely to a separate application. Tracking ratings in real time with WordPress is a poor idea as it bypasses page caching and triggers an uncachable request every time a new rating comes in. Pushing this off to a second server means your WordPress site is super fast, and it also means you can have the second server do the calculations for you in the first place.
If you're already using something like Elasticsearch on the site, you can add the rating as an additional indexing facet. Then just update posts as ratings change, and use the ES API to query the most popular posts later.
Alternatively, you can use a hosted service like Keen IO to record and aggregate ratings.
Option 3
Still use cron, but don't schedule it as a cron job in WordPress. Instead, write a WP-CLI routine that does the reindexing for you, then schedule a real cron job to run it.
This has the advantage of using PHP's command line version, which can be configured to skip the timeouts and memory limits imposed on the FPM/CGI/whatever version used to serve the site. It also means you don't have to wait for site traffic to trigger the job - and a long-running job won't block other cron events within WordPress from firing.
If using this approach, I would set the job to run hourly and, each hour, process a batch of 1/24th of the total posts in the database. You can keep track of offsets, or even of processed post IDs, in the database; the point is just that you're silently re-indexing posts throughout the day.
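A minimal sketch of such a command; the command name, option key, and scoring function are illustrative, not part of any real plugin:

<?php
// Registers `wp ratings reindex`, which re-scores one batch per run
// and remembers its offset between runs.
if (defined('WP_CLI') && WP_CLI) {
    WP_CLI::add_command('ratings reindex', function () {
        $per_batch = (int) ceil(wp_count_posts()->publish / 24);
        $offset    = (int) get_option('ratings_reindex_offset', 0);

        $ids = get_posts([
            'post_type'   => 'post',
            'numberposts' => $per_batch,
            'offset'      => $offset,
            'fields'      => 'ids',
        ]);

        foreach ($ids as $post_id) {
            // recalculate_decayed_rating() stands in for your own scoring logic.
            update_post_meta($post_id, 'decayed_rating', recalculate_decayed_rating($post_id));
        }

        // Advance the offset, wrapping around once we run out of posts.
        update_option('ratings_reindex_offset', $ids ? $offset + $per_batch : 0);
        WP_CLI::success(sprintf('Re-scored %d posts from offset %d.', count($ids), $offset));
    });
}

The matching crontab entry would then be something like: 0 * * * * wp ratings reindex --path=/var/www/site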
I have a function which web-scrapes all the latest news from a website (approximately 10 items; the exact number is up to that website). Note that the news items are in chronological order.
For example, yesterday I got 10 items and stored them in the database. Today I get 10 items, but 3 of them were not there yesterday (7 stayed the same, 3 are new).
My current approach is to extract each item until I find an old one (the first of the 7 unchanged items), then stop extracting, update the "lastUpdateDate" field of the old news, and add the new items to the database. I think this approach is somewhat complicated and slow.
In fact I'm getting news from 20 websites with the same content structure (Moodle), so each request lasts about 2 minutes, which my free host doesn't support.
Would it be better to delete all the news and then extract everything from scratch (which, among other things, inflates the auto-increment IDs in the database)?
First, check to see if the website has a published API. If it has one, use it.
Second, check the website's terms of service, which may specifically and explicitly disallow scraping the website.
Third, look at a module in your programming language of choice that handles both the fetching of the pages and the extraction of the content from the pages. In Perl, you would start with WWW::Mechanize or Web::Scraper.
Whatever you do, don't fall into the trap that so many who post to Stack Overflow fall into: fetching the web page and then trying to parse the content themselves, most often with regular expressions, which are an inadequate tool for the job. Browse the SO tag html-parsing for tales of sorrow from those who have tried to roll their own HTML parsing instead of using existing tools.
It depends on your requirements: whether you want to show old news to users or not.
For scraping, you can create a custom local script, run as a cron job, which grabs the data from those news websites and stores it in the database.
You can also check by subject whether an item already exists or not.
Finally, make a custom news block which shows the feed from the database.
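To flesh out the existence check, a minimal sketch assuming a MySQL news table with a UNIQUE key on the url column; the table, columns, and credentials are placeholders, and $scrapedItems stands in for whatever your scraper produced:

<?php
// Insert each scraped item if it's new; otherwise just bump lastUpdateDate.
// This avoids deleting and re-inserting everything on each run.
$db = new mysqli('localhost', 'db_user', 'db_pass', 'news_db');
$stmt = $db->prepare(
    "INSERT INTO news (url, subject, body, lastUpdateDate)
     VALUES (?, ?, ?, NOW())
     ON DUPLICATE KEY UPDATE lastUpdateDate = NOW()"
);
foreach ($scrapedItems as $item) {
    $stmt->bind_param('sss', $item['url'], $item['subject'], $item['body']);
    $stmt->execute();
}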
I have created a small "Dynamic Data Web Site" using the Entity Framework. I have no real experience with this, but it looks very interesting. Anyway, I have a single table being displayed on a single web page. The table contains over 21,000 rows, and the page limits me to 10 records per page, which is all fine.
My problem is that the page is incredibly slow. I'm guessing that maybe every row in the table is being loaded whenever I try to navigate, but I can't be sure this is the cause.
How can I increase the performance of the page? I want to be able to click through pages of results quickly and easily. It currently takes more than 60 seconds to click to the next set of results.
This is usually caused by filters on a table where the filter column has MANY rows. You could fix this using the Autocomplete filter, which pre-filters the data based on what the user types in.
You can get this filter and others from my NuGet package, Dynamic Data Custom Filters.
Also try having a look at it using Ayende's EFProf. It is a commercial product, but it has a free 30-day trial. It can sometimes point out silly things you are doing and suggest ways to optimise your data access.