I am trying to optimize my site's database using phpMyAdmin.
I want to know: is it safe to delete "post_meta", given that it occupies more than 1 GB of my database?
Update: I removed all the post revisions, spam comments etc., along with all the plugin data, using https://wordpress.org/plugins/plugins-garbage-collector/. However, I still see that my postmeta table contains 1 GB. When I look inside, I can still see a lot of old plugin entries. I followed this guide: https://crunchify.com/better-optimize-wordpress-database.../ to remove some of them. However, I am not sure whether this is the right and shortest way to fix the issue, or whether there is another way to clean the old plugin data from phpMyAdmin.
First, you should know that MySQL usually expands the size of its files on disk as needed, but doesn't shrink them. Deleting old data won't reclaim that disk space automatically. You can supposedly use the SQL command OPTIMIZE TABLE <table_name>; to do that manually, but you'll have to do it per table. I say "supposedly" because I've seen differing reports on whether this works to reclaim all the disk space from deleted rows, or if it only reclaims disk space from outdated indexes.
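For example, to rebuild the postmeta table (a sketch assuming WordPress's default "wp_" table prefix; on InnoDB this recreates the table, so it temporarily needs enough free disk space for a copy, and the space is only returned to the OS if innodb_file_per_table is enabled):

    -- Rebuild the table to reclaim space left behind by deleted rows
    OPTIMIZE TABLE wp_postmeta;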
Next, I would not delete an entire table. It's almost certain that WordPress will stop functioning properly if it can't access the post_meta table. Even removing individual entries risks corrupting your data in strange ways: one post may link to another, and if the target is missing, WordPress will show an error because of the missing post, things like that.
Finally, to specifically address your last question, "[I]s [there] any other way to clean the old plugin data from phpmyadmin?": I highly suggest using WordPress-aware tools for this job. You can delete data from your database directly in phpMyAdmin (by either structuring a query properly or just going through and using the delete icon to remove rows you don't want), but not in a way that is aware of how WordPress processes the data and the relationships between tables. There are probably tools meant specifically for that job that would make things easier for you.
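For example, if you know the meta_key prefix an old plugin used, you could preview its leftovers and then delete them (a sketch; the "wp_" prefix and the '_oldplugin_%' key pattern are assumptions, so take a backup first):

    -- Preview what would be removed before deleting anything
    SELECT meta_key, COUNT(*) AS rows_found
    FROM wp_postmeta
    WHERE meta_key LIKE '_oldplugin_%'
    GROUP BY meta_key;

    -- Then remove those rows
    DELETE FROM wp_postmeta
    WHERE meta_key LIKE '_oldplugin_%';

    -- A related cleanup: meta rows whose parent post no longer exists
    DELETE pm
    FROM wp_postmeta pm
    LEFT JOIN wp_posts p ON p.ID = pm.post_id
    WHERE p.ID IS NULL;

Just be aware that queries like these bypass WordPress entirely, which is exactly why a WordPress-aware cleanup tool is the safer route.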
What are the practical advantages of Doctrine Migrations over just running a schema update?
Safety?
The orm:schema-tool:update command (doctrine:schema:update in Symfony) warns
This operation should not be executed in a production environment.
but why is this? Sure, it can delete data but so can a migration.
Flexibility?
I thought I could tailor my migrations to add stuff like column defaults but this often doesn't work as Doctrine will notice the discrepancy between the schema and the code on the next diff and stomp over your changes.
When you are using the schema-tool, no history of database modification is kept, and in a production/staging environment this is a big downside.
Let's assume you have a complicated database structure in a live project. And in the next changeset you have to alter the database somehow. For example, your users' contact phones need to be stored in a different format, not a VARCHAR, but three SMALLINT columns for country code, area code and the phone number.
Well, it's not so hard to figure out a query that would fetch the current data, split it into three values and insert them back. That's when migrations come into play: you can create your new fields, then do the transforms, and finally drop the field that was holding the data before.
And even more! You can even describe the backwards process (the down migration), when you need to undo the changes introduced in your migration. Let's assume that someone somewhere relied heavily on the format of the VARCHAR field, and now that you've changed the structure, his piece of code is not working as expected. So, you run migration:down, and everything gets reverted. In this specific case you'd just bring back the old VARCHAR column and concatenate the values back, and then drop the fields.
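To make that concrete, here is roughly what the up and down steps could look like in raw SQL (a sketch only; the users table, the column names and the assumed 'CC-AAA-NNNNNNN' phone format are all made up for illustration):

    -- up: add the new columns, transform the data, drop the old column
    ALTER TABLE users
        ADD COLUMN country_code SMALLINT,
        ADD COLUMN area_code    SMALLINT,
        ADD COLUMN phone_number INT;   -- INT, since 7 digits don't fit in a SMALLINT

    UPDATE users SET
        country_code = SUBSTRING_INDEX(phone, '-', 1),
        area_code    = SUBSTRING_INDEX(SUBSTRING_INDEX(phone, '-', 2), '-', -1),
        phone_number = SUBSTRING_INDEX(phone, '-', -1);

    ALTER TABLE users DROP COLUMN phone;

    -- down: bring the VARCHAR back, concatenate the values, drop the new columns
    ALTER TABLE users ADD COLUMN phone VARCHAR(32);

    UPDATE users SET phone = CONCAT(country_code, '-', area_code, '-', phone_number);

    ALTER TABLE users
        DROP COLUMN country_code,
        DROP COLUMN area_code,
        DROP COLUMN phone_number;

The schema diff can generate the ALTER statements for you; the two UPDATEs are the data-handling part you write by hand.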
Doctrine's migration tool basically does most of the work for you. When you diff your schema, it generates all the necessary ups and downs, so the only thing you'll have to do is handle the data that could be damaged when the migration is applied.
Also, migrations are what tells other developers on your team that it's time to update their schemas. With just the schema-tool, your teammates would have to run doctrine:schema:update each and every time they pull, because they wouldn't know whether the schema has actually changed.
When using migrations, you always see that there are some updates in the migrations folder, which means you need to update your schema.
I think that you indeed nailed it on safety. With migrations you can go back to another state of the table (just like you can in Git version control). With the schema update command you can only UPDATE the tables; there is no log kept for going back in case of a failure, with data already saved in those tables. I don't know exactly, but doesn't a migration also save the data of the corresponding table that's being updated? That would be essential in my opinion; otherwise there is no big reason to use them.
So yes, I personally think that the main reason for using migrations in a production environment is safety and maybe a bit of flexibility. Safety would be the winner here I think :)
Hope this helps.
edit: Here is another answer with references to the Symfony docs: Is it safe to use doctrine2 migrations in production environment with symfony2 and php
You also can't perform large updates with a plain Doctrine migration. Try, for example, updating an index on a database of 30 million users: it will take a long time, during which your app will not be accessible.
"On a Youtube-like website, would it make sense to use individual sqlite files to store video comments? (One sqlite file per video.)"
I'm curious to hear what anyone thinks.
The reopening of so many files and file handles at the OS level may cause a performance hit. I would let the database do what databases do best and just have a FK to the video ID for each comment.
As you've stated in the comments that you already have a MySQL database for videos, there's little sense in using SQLite for commenting. A comments table with a video_id column is going to be a lot more flexible.
SQLite also doesn't hold up too well in situations with concurrent writes, which is exactly what commenting involves.
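For illustration, a minimal version of such a table (a sketch; the column names and the videos(id) reference are assumptions):

    CREATE TABLE comments (
        id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        video_id   BIGINT UNSIGNED NOT NULL,
        author     VARCHAR(64)     NOT NULL,
        body       TEXT            NOT NULL,
        created_at DATETIME        NOT NULL,
        PRIMARY KEY (id),
        KEY idx_video_created (video_id, created_at),
        CONSTRAINT fk_comments_video
            FOREIGN KEY (video_id) REFERENCES videos (id)
    ) ENGINE=InnoDB;

With the (video_id, created_at) index, "comments for this video, newest first" becomes a single index range scan.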
It would make it impossible (or very difficult) to efficiently search all comments.
I'm currently looking at a terrible legacy ColdFusion app written with very few stored procedures and lots of nasty inline SQL statements (it has a similarly bad database too).
Does anyone know of any app which could be used to search the files of the app picking out any SQL statements and listing the tables/stored procedures which are referenced?
Dreamweaver will allow you to search the code of the entire site. If the site is set up properly, including the RDS password, and you provide a data source, it can tell you a lot of information. I've only set it up once, so I can't remember exactly what information it gives you; I think maybe just the DB structure (Application window > Databases). Even if it isn't set up properly, just searching for "cfquery" will quickly find all your queries.
You could also write a CF script using CFDirectory/CFFile to loop over the .cfm files and parse everything between the cfquery and /cfquery tags.
CFBuilder may have some features like that, but I'm not too familiar with it yet.
edit: I've heard that CFBuilder can't natively find all your cfqueries that don't have cfqueryparam, but you can use CF to extend CFB to do so. I imagine you could find/write something for CFB to help you with your problem.
another edit
I know it isn't indexing the contents of the query, but you can use a regex search in the editor as well. Searching for <cfquery.+(select|insert|update|delete) with the regex box checked should find the queries that aren't using cfstoredproc (be sure to uncheck the match-case option if there is one). I know Dreamweaver and Eclipse can both search by regex.
HTH
As mentioned above, I would try a case-insensitive grep with a regex looking for
"</?cfquery" and "</?cfstoredproc"
to catch both the opening and closing tags.
In addition, if you have tests with good code coverage, or even just feel like the app is fully exercised in production, you could try turning on "Log Database Calls" in Admin -> Datasources, or maybe even at the JDBC driver level; just monitor performance to make sure it does not slow the site down unacceptably.
In short: no. You'd have to do a lot of tricky parsing to make sure you get all the SQL. And because SQL can be glued together from lots of strings, you'll almost always miss some of it.
The best you're likely to do is a case-insensitive grep for "SELECT|INSERT|UPDATE|DELETE" and then manually pulling out the table names.
Depending on how the code is structured, you might be able to get the table names by regexing the SQL FROM clause. But that's not foolproof. A lot of people use string concatenation to build SQL statements. This is bad because it can introduce SQL injection attacks, and it also makes this particular problem harder.
I'm trying to start using plain text files to store data on a server, rather than storing them all in a big MySQL database. The problem is that I would likely be generating thousands of folders and hundreds of thousands of files (if I ever have to scale).
What are the problems with doing this? Does it get really slow? Is it about the same performance as using a Database?
What I mean:
Instead of having a database that stores a blog table, then has a row that contains "author", "message" and "date" I would instead have:
A folder for the specific post, then *.txt files inside that folder that have "author", "message" and "date" stored in them.
This would be immensely slower to read than a database (file writes all happen at about the same speed; you can't keep a write in memory).
Databases are optimized and meant to handle such large amounts of structured data. File systems are not. It would be a mistake to try to replicate a database with a file system. After all, you can index your database columns, but it's tough to index the file system without another tool.
Databases are built for rapid data access and retrieval. File systems are built for data storage. Use the right tool for the job. In this case, it's absolutely a database.
That being said, if you want to create HTML files for the posts and then store their locations in a DB so that you can easily get to them, then that's definitely a good solution (a la Movable Type).
But if you store these things on a file system, how can you find out your latest post? Most prolific author? Most controversial author? All of those things are trivial with a database, and very hard with a file system. Stick with the database, you'll be glad you did.
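To make that concrete, each of those is a one-liner against a hypothetical posts table with author, message and created_at columns:

    -- latest post
    SELECT * FROM posts ORDER BY created_at DESC LIMIT 1;

    -- most prolific author
    SELECT author, COUNT(*) AS posts_written
    FROM posts
    GROUP BY author
    ORDER BY posts_written DESC
    LIMIT 1;

With one file per field, each of these means walking every folder on disk.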
It really depends:
What is the file size?
What durability requirements do you have?
How many updates do you perform?
What is the file system?
It is not obvious that MySQL would be faster:
I once did such a comparison for small objects, in order to use the file system as session storage for CppCMS, with one index (key only) and two indexes (primary key and secondary timeout).
File System              XFS        ext3
----------------------------------------------
Writes/s                 322        20,000

Database \ Indexes       Key Only   Key+Timeout
-----------------------------------------------
Berkeley DB              34,400     1,450
Sqlite No Sync           4,600      3,400
Sqlite Delayed Commit    20,800     11,700
As you can see, the simple ext3 file system was faster than, or as fast as, Sqlite3 for storing data, because it does not give you the (D)urability of ACID.
On the other hand... DB gives you many, many important features you probably need, so
I would not recommend using files as storage unless you really need it.
Remember, the DB is not always the bottleneck of the system.
Forget about long-winded answers; here are the simplest reasons why storing data in plain text files is a bad idea:
It's near-impossible to query. How would you sort blog posts by date? You'd have to read all the files and compare their dates, or maintain your own index file (basically, write your own database system).
It's a nightmare to back up. tar cjf won't cut it, and if you try you may end up with an inconsistent snapshot.
There are probably a dozen other good reasons not to use files: it's hard to monitor performance, very hard to debug, near-impossible to recover from errors, there are no tools to handle them, etc.
I think the key here is that there will be NO indexing on your data. So retrieving anything in, say, a search would be ridiculously slow compared to an indexed database. Also, IO operations are expensive, while a database can be (partially) in memory, which makes the data available much faster.
You don't really say why you won't use a database yourself... but in the scenario you are describing I would definitely use a DB over folders any day, for a couple of reasons. First of all, the blog scenario seems very simple, but it is very easy to imagine that you would someday like to expand it with more functionality, such as search, more post details, categories etc.
I think that growing the model would be harder to do in a folder structure than in a DB.
Also, databases are usually MUCH faster than file access due to indexing and memory caching.
IIRC FUDforum used file storage for speed reasons; it can be a lot faster to grab a file than to search a DB index, retrieve the data from the DB and send it to the user. You're trading the filesystem interface for the DB and DB-library interfaces.
However, that doesn't mean it will be faster or slower. I think you'll find writing is quicker on the filesystem, but reading is faster from the DB for general queries. If, like FUDforum, you have relatively immutable data that you want to show as several posts in one go, then a file-based approach may be a lot faster: e.g. they don't have to search for every related post; they stick it all in one text file and display it once. If you can employ that kind of optimisation, then your file-based approach will work.
Also, mail servers take the file-based approach: the Maildir format stores each email message as a file in a directory, not in a database.
One thing I would say, though: you'll be better off storing everything in one file, not three. The filesystem is better at reading (and caching) a single file than multiple ones. So if you want to store each message as three parts, save them all in a single file, read it to get any of the parts, and just display the one you want to show.
...and then you want to search all posts by an author, and you get to read a million files instead of running a simple SQL query...
Databases are NOT faster. Think about it: in the end they store the data in the filesystem as well. So whether a database is faster depends strongly on the access path.
If you have only one access path, which correlates with your file structure, the file system might be way faster than a database. Just make sure you have some caching available for the filesystem.
Of course you do lose all the nice things of a database:
- transactions
- flexible ways to index data, and therefore access it in a flexible way reasonably fast (see the sketch after this list).
- flexible (though ugly) query language
- high recoverability.
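On the indexing point: with a database, a new access path is one statement away, whereas a file layout hard-codes exactly one (a sketch; table and column names are assumed):

    -- a new access path: all posts by one author, newest first
    CREATE INDEX idx_posts_author_date ON posts (author, created_at);

    SELECT * FROM posts
    WHERE author = 'alice'
    ORDER BY created_at DESC;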
The scaling really depends on the filesystem used. AFAIK most file systems have some kind of upper limit on the number of files (in total or per directory), though on newer ones this is often very high. For hundreds of thousands of files, with some directory structure to keep directories at a reasonable size, it should be possible to find a well-performing file system.
@Eric's comment:
It depends on what you need. If you only need the content of exactly one file per query, and you can determine the location and name of the file in a deterministic way, then direct access is faster than what a database does, which is roughly:
access a bunch of index entries, in order to
access a bunch of table rows (rdbms typically read blocks that contain multiple rows), in order to
pick a single row from the block.
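In MySQL you can watch which of these paths a query takes with EXPLAIN (a sketch with assumed names; the output format varies by version):

    -- shows whether the lookup uses the primary key, a secondary index,
    -- or a full scan of the table
    EXPLAIN SELECT * FROM posts WHERE id = 42;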
If you look at it that way: you have indexes and additional rows in memory, which make your caching inefficient, so where is the speedup of a DB supposed to come from?
Databases are great for the general case. But if you have a special case, there is almost always a special solution that is better in some sense.
If you prefer to move away from an RDBMS, why don't you try one of the other open-source key-value or document DBs (non-relational DBs)?
From your posting I understand that you are not going to rely on the ACID properties of a relational DB. It would be better to adopt one of the other key-value or document DBs (MongoDB, CouchDB or Hypertable) instead of your own file system implementation; it will give better performance than the existing approaches.
Note: I am not an expert in this either; I just started working with MongoDB and found it useful in similar scenarios. I just wanted to share these approaches in case you are not aware of them.