I know this is an age-old question, and usually you can't get a simple answer.
However, I'm in a situation where I might need 20 GB of storage for pictures on a collaborative web app I'm creating using ASP.NET MVC, but my web host doesn't give me more than 4 GB of storage. On the other hand, I have unlimited space on my MySQL DB, so I'm seriously considering using a longblob column or something in the MySQL DB as storage. Can anyone give me a couple of reasons not to go this way? The alternative would be a very expensive host, or a possibly equally expensive solution with cloud storage (I'm thinking Amazon S3 or something).
Thanks!
Not smart: unless your web host is very dumb, they will notice and tell you to stop being silly.
"Unlimited" very rarely actually means "unlimited".
Neither approach takes many lines of code, so try both and check which solution is better for your problem.
If you really do have unlimited DB storage, and the bandwidth between the database server and your runtime environment doesn't matter to you, then this solution might well be the better one for you.
It's not really an age old question with no simple answer - it's simply inappropriate to store image data in a database.
Using it as a workaround for limited disk space is simply going to cause other issues, such as the fact that it'll be a lot slower to load and it's likely the hosting company will pull the plug once they realise what's going on. (If they're the kind of company that limits you to 4GB of local disk space, then I also have to wonder whether their MySQL set up will cope with serving up image content.)
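That said, if you do decide to go the longblob route despite the warnings above, serving a picture out of MySQL from an ASP.NET MVC controller is only a few lines. Here is a minimal sketch; the Pictures table, its Data/ContentType columns, and the connection string are assumptions for illustration, not an existing schema:

    using System.Web.Mvc;
    using MySql.Data.MySqlClient;

    public class ImagesController : Controller
    {
        public ActionResult Show(int id)
        {
            using (var conn = new MySqlConnection("...connection string..."))
            using (var cmd = new MySqlCommand(
                "SELECT Data, ContentType FROM Pictures WHERE Id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", id);
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    if (!reader.Read())
                        return HttpNotFound();

                    // Data is the longblob column; ContentType is e.g. "image/jpeg".
                    var data = (byte[])reader["Data"];
                    var contentType = reader.GetString(reader.GetOrdinal("ContentType"));
                    return File(data, contentType);
                }
            }
        }
    }

Note that every image request hits the database this way, so you'd at least want output caching in front of it - which is part of why the filesystem or S3 is usually the better home for this data.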
I am wondering if it is feasible to deploy wordpress as a series of lambda functions on AWS API gateway. Any pointers on the feasibility/gotchas would be greatly appreciated!
Thanks in advance,
PKK
You'll have a lot of things to consider around persistence, and even before that, Lambda doesn't support PHP. I'd probably look at Microsoft Azure Functions instead, which does support PHP and does have persistent storage.
While other languages (such as Go, Rust, Swift etc.) can be "wrapped" to run in AWS Lambda with relative ease, compiling PHP targeting the same platform and running it is a bit different (and certainly more painstaking). Think about all the various PHP modules you'd need for starters. Moreover, I can't imagine performance will be as good as something like a Go binary.
If you can do something clever with the Phalcon framework and come up with an easy build and deploy process, then maayyyybee.
Though, you'd probably need to really overhaul something like WordPress which was not designed for this at all. It still uses some pretty old conventions due to the age of the project and while that is all well and good for your typical PHP server, it's a different ball game in the sense of this "portable" PHP installation.
Keep in mind that WordPress relies on PHP sessions as well, so you're going to need to move those elsewhere due to the lack of persistence with AWS Lambda. You can probably find some sort of plugin for WordPress that works with Redis? I have to imagine something like that has been built by now... But there will be many complications.
I would seriously consider using Azure Functions to begin with OR using Docker and forgoing the pricing model that cloud functions offers. You can still find some pretty cheap and scalable hosting out there.
What I've done previously was use AWS ECS (Docker) with EFS (network storage) for persistence and RDS for the database. While this doesn't carry the same pricing model as Lambda, it is still cost efficient. You can set up your ECS Service to autoscale up and down. So that way you're running the bare minimum until you need more.
I've written a more in-depth article about it here: https://serifandsemaphore.io/how-to-host-wordpress-like-a-boss-b5993fcfbd8e#.n6fbnf8ii ... but it's basically just the idea of running WordPress in Docker and using EFS to offload the persistent storage issues. You can swap many of the pieces of the puzzle out if you like: use a database hosted in some other Docker service, or Compose, or wherever. That part need not be RDS, for example. Even your storage could be handled in a different way, though EFS worked pretty well! The only major thing to note about EFS is the write speed. Most WordPress sites are read-heavy, though. Your mileage will vary depending on your needs.
Is it possible? Yes, anything is possible with enough time and effort. Is it worth it? That is a question best to ask yourself.
PHP can be run on Lambda as per the documentation located here: https://aws.amazon.com/blogs/compute/scripting-languages-for-aws-lambda-running-php-ruby-and-go/ .
The bigger initial problem, as stated in other comments, is a persistent file system. S3 for media storage is doable via a WordPress plugin (again from the comments), but any other persistent storage for the request/script execution is the biggest initial hurdle. Tackle one problem at a time till you get to the end!
I have a bit of an issue here. Actually, it might not be an issue, I just don't really know how to handle it.
I need to copy an image from a remote server to my local server every n seconds, IF any users are on my webpage.
If no users are on, it doesn't matter. If multiple users are on, it should only run the copy once (every n seconds).
I think I have heard somewhere that you can run background tasks on your ASP.NET website, but I have absolutely no knowledge of this. Some people also talk about threads; is that perhaps the same solution?
So, I'm hoping for some experienced people to guide me towards a solution here. What possibilities do I have, which would you recommend, and are there perhaps some articles where I can read up on how to do it?
Given your answer in the comments I would suggest you need to use a cache that supports time-based expiry.
See http://www.codeproject.com/Articles/51409/Exploring-Caching-Using-Caching-Application-Enterp#heading0020 for a good article on using Enterprise Library caching block.
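To make that concrete, here is a rough sketch of the same cache-expiry pattern using the built-in HttpRuntime.Cache instead of the Enterprise Library block (the cache key, the URL, and the CopyRemoteImage helper are all invented for the example). Because the check runs on each request, the copy only happens while users are actually on the site, and at most once per interval:

    using System;
    using System.Net;
    using System.Web;
    using System.Web.Caching;

    public static class ImageRefresher
    {
        private const string CacheKey = "remote-image-refreshed";
        private static readonly object Sync = new object();

        // Call this from the controller action (or Application_BeginRequest)
        // that serves your page.
        public static void EnsureFresh(TimeSpan interval)
        {
            if (HttpRuntime.Cache[CacheKey] != null)
                return; // copied recently, nothing to do

            lock (Sync) // make sure only one concurrent request does the copy
            {
                if (HttpRuntime.Cache[CacheKey] != null)
                    return;

                CopyRemoteImage();

                // The marker expires after the interval, so the next request
                // that arrives afterwards triggers a fresh copy.
                HttpRuntime.Cache.Insert(CacheKey, DateTime.UtcNow, null,
                    DateTime.UtcNow.Add(interval), Cache.NoSlidingExpiration);
            }
        }

        private static void CopyRemoteImage()
        {
            using (var client = new WebClient())
            {
                client.DownloadFile("http://remote.example/image.jpg",
                    HttpContext.Current.Server.MapPath("~/App_Data/image.jpg"));
            }
        }
    }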
I need to store the application settings somewhere, but can't find a satisfying solution. Read-only settings are pretty easy to store in web.config, but what about settings for application administration that should be editable through a web page? Writing to web.config doesn't seem like a good idea. I have considered storing the settings in a custom XML file, but if there is sensitive information involved in the settings, that seems to be a problem; also, if multiple users modify the settings at the same time, some kind of file locking has to be involved. Now I am inclined to store the app settings in the MS-SQL database; it seems like a secure and scalable solution, however it feels wrong to have a table that stores just one row - the settings. What's your opinion? How would you design this?
Are there any ready-to-go .NET solutions for storing dynamic web app settings?
Your question is so subjective that I don't even know why I am answering it instead of voting to close. But anyway, a database is a good place. And if you are bored and tired of relational data there are great NoSQL databases out there such as MongoDB and RavenDB that will make this very easy. And if you want a very fast database Redis could be worth checking out.
Storing things in files in a web application is far more difficult than it might look at first. If it is read-only, then web.config could indeed be a good place. But once you start writing, you will have to take into account that a web application is a multithreaded environment in which you will have to synchronize access to this file. What looked at first like an easy solution can quickly turn into a nightmare if you want to design it properly. That's why I think a database is a good solution: it gives you concurrency, security, atomicity, data integrity, ...
I absolutely think that storing settings of a dynamic nature in the database is the right way. Don't feel bad about having one simple table; this table can save you a lot of headaches. If you code it smartly you can really benefit from it (but that depends on the type of values you want to store). The only problem with the DB is that someone might modify values directly in the database, but that is easily solved. For example, I have a "configuration values" class that I populate from the database on start-up and put in the cache with some timeout. After a while I lazily reload it, which catches situations like the one I mentioned above. I hope that makes sense.
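As a sketch of that pattern (the AppSettings table, the five-minute timeout, and the connection string are all assumptions for illustration, not an existing API):

    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Runtime.Caching;

    public static class AppConfig
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;

        public static string Get(string key)
        {
            var settings = (Dictionary<string, string>)Cache.Get("app-settings");
            if (settings == null)
            {
                settings = LoadFromDatabase();
                // Re-read from the DB after 5 minutes, so values edited
                // directly in the database are eventually picked up.
                Cache.Set("app-settings", settings,
                    DateTimeOffset.UtcNow.AddMinutes(5));
            }
            string value;
            return settings.TryGetValue(key, out value) ? value : null;
        }

        private static Dictionary<string, string> LoadFromDatabase()
        {
            var result = new Dictionary<string, string>();
            using (var conn = new SqlConnection("...connection string..."))
            using (var cmd = new SqlCommand(
                "SELECT [Key], [Value] FROM AppSettings", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        result[reader.GetString(0)] = reader.GetString(1);
            }
            return result;
        }
    }

A simple key/value table like this also sidesteps the "one wide row" design that felt wrong in the question: each setting is its own row.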
I'm about to build a new personal blog/portfolio site (which will be written in ASP.NET), and I'm going to run it against a SQLite database. There are a few reasons for this:
1. The site will not be getting a lot of traffic, and from what I've read, SQLite is able to support quite a lot of concurrent users for reading anyway.
2. I can back up all the content easily, just by downloading the db over FTP.
3. I don't have to pay my hosting company every month for a huge SQL2008 database that I'm hardly using.
So, should I go for it, or is this a crazy idea?
I'm not so sure about #2 (what happens if SQLite makes changes to the file while the FTP program is reading it?) but other than that, there is no reason to prefer one DB over the other (unless one of those DBs just can't do what you need).
[EDIT] Use an online backup to create the file for FTP download. That will make sure the file content is intact.
Even better, add a page (with password) to your site which creates the file at the press of a button, so your browser can download it.
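If you end up using System.Data.SQLite from .NET, its online backup API can produce that consistent copy for you. A minimal sketch, with the file paths as placeholders:

    using System.Data.SQLite;

    public static class DbBackup
    {
        public static void CreateSnapshot(string sourcePath, string backupPath)
        {
            using (var source = new SQLiteConnection("Data Source=" + sourcePath))
            using (var dest = new SQLiteConnection("Data Source=" + backupPath))
            {
                source.Open();
                dest.Open();

                // Copies all pages (-1) from the source "main" database to the
                // destination "main" while the live DB stays usable; the
                // resulting file is always a consistent snapshot.
                source.BackupDatabase(dest, "main", "main", -1, null, 0);
            }
        }
    }

Wire that up behind your password-protected page, and the file you download over FTP is guaranteed intact.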
It's just fine for a low-traffic site as long as it's mostly read traffic. If it were me, I'd use SQL Server Compact Edition instead (same benefits as SQLite: single file, no server), just because I'm a LINQ-head and the LINQ providers are "in the box" for it, but SQLite has a decent LINQ library and managed support as well. Make sure your hosting company allows unmanaged code, or that you use the managed port of SQLite (I don't know its current stability, though).
SQLite can handle this easily - go for it.
You should check, but I think that the Express version of SQL 2008 is free of charge.
Anyway, I've been working with SQLite from a .NET environment, and it works quite well (though I haven't done any load testing).
And if you're still undecided, you can use a LINQ provider that will let you switch from one database to another later without rewriting your SQL code (I'm thinking of DbLinq, for example).
If you plan to back up your database, you must first ensure that it is not in use at that moment.
SQLite answers this for you:
http://sqlite.org/whentouse.html
low-medium volume = okay,
high volume = don't use it.
In your case it's A-OK to use SQLite.
Generally, yes.
But you should be aware of the fact that SQLite does not support everything that you might be used to from a "real" DBMS. E.g. foreign key constraints are not enforced by default (enforcement has to be switched on per connection), ALTER TABLE support is limited, and AFAIK some (more advanced) datatypes are not available.
You should check for the various limitations here and here. If you can get along with that there's no reason to not use SQLite.
A rule of thumb is that if the site can run on one server, then SQLite is sufficient. That is what the creator of SQLite, D. Richard Hipp, said at approximately 13 min 30 sec into episode 26 of the FLOSS Weekly podcast.
Direct audio link (MP3 file, 24 MB, 51 min 15 sec).
I'd say no. First off, I don't know who you are using for a provider, but with my provider (GoDaddy) it's pretty cheap, at $2.99 a month or so. I get 1 SQL Server DB and 10 MySQL DBs.
I don't know how much cheaper this can get.
Secondly, why risk it? Most provider plans include at least one MySQL database. You can hook up with that.
In general, SQLite isn't meant for a high-traffic website, but it can do quite well on websites getting 100,000 hits/day or less. The SQLite org website gets more than 500,000 hits/day, and generates 2 million or more DB interactions/day ... all handled by SQLite.
Here are some things that will dramatically speed up SQLite's performance:
Index your tables
Use transactions for multiple commands instead of executing them one at a time
Learn about write-ahead logging
Do a Google search on each of the above with SQLite ... your DB performance will improve dramatically.
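As a hedged sketch, here is what those three tips look like with System.Data.SQLite (the table, column, and index names are invented for the example):

    using System.Data.SQLite;

    class SqliteTuningDemo
    {
        static void Main()
        {
            using (var conn = new SQLiteConnection("Data Source=site.db"))
            {
                conn.Open();

                using (var cmd = conn.CreateCommand())
                {
                    // Write-ahead logging: readers no longer block on writers.
                    cmd.CommandText = "PRAGMA journal_mode=WAL;";
                    cmd.ExecuteNonQuery();

                    cmd.CommandText = "CREATE TABLE IF NOT EXISTS posts " +
                                      "(id INTEGER PRIMARY KEY, created_at INTEGER);";
                    cmd.ExecuteNonQuery();

                    // Index the columns you filter or sort on.
                    cmd.CommandText = "CREATE INDEX IF NOT EXISTS idx_posts_created " +
                                      "ON posts(created_at);";
                    cmd.ExecuteNonQuery();
                }

                // Batch the writes in one transaction: one journal commit
                // instead of a thousand separate ones.
                using (var tx = conn.BeginTransaction())
                using (var cmd = conn.CreateCommand())
                {
                    cmd.CommandText =
                        "INSERT INTO posts(created_at) VALUES (strftime('%s','now'));";
                    for (int i = 0; i < 1000; i++)
                        cmd.ExecuteNonQuery();
                    tx.Commit();
                }
            }
        }
    }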
An SQLite DB can actually be faster than a MySQL, PostgreSQL, MS SQL Server, or other hosted server-based DB, for two reasons:
1.) SQLite is usually stored on the same machine as the website, rather than on a separate server machine, eliminating round-trip network latency.
2.) For smaller read/write requests, the SQLite engine executes far less code, which can also be faster.
For a smaller website, a smaller DB engine like SQLite could actually be faster and more efficient.
Are you using any SQL functionality (SUM, AVG, ORDER BY, etc.)? If yes, go use SQLite. If not, just use plain text files to store your data. Also make sure that the database file is outside the httpdocs folder so that it is not web-accessible.
I recently read this question about SQLite vs MySQL, and the answer pointed out that SQLite doesn't scale well, which the official website sort-of confirms.
So: how scalable is SQLite, and what are its uppermost limits?
Yesterday I released a small site* to track your rep that used a shared SQLite database for all visitors. Unfortunately, even with the modest load that it put on my host, it ran quite slowly. This is because the entire database was locked every time someone viewed the page, because every view involved updates/inserts.

I soon switched to MySQL, and while I haven't had much time to test it out, it seems much more scalable than SQLite. I just remember slow page loads and occasionally getting a "database locked" error when trying to execute queries from the shell in sqlite.

That said, I am running another site from SQLite just fine. The difference is that the site is static (i.e. I'm the only one who can change the database), so it works just fine for concurrent reads. Moral of the story: only use SQLite for websites where updates to the database happen rarely (less often than once per page load).
edit: I just realized that I may not have been fair to SQLite - I didn't index any columns in the SQLite database when I was serving it from a web page. This partially caused the slowdown I was experiencing. However, the observation of database-locking stands - if you have particularly onerous updates, SQLite performance won't match MySQL or Postgres.
another edit: Since I posted this almost 3 months ago I've had the opportunity to closely examine the scalability of SQLite, and with a few tricks it can be quite scalable. As I mentioned in my first edit, database indexes dramatically reduce query time, but this is more of a general observation about databases than it is about SQLite. However, there is another trick you can use to speed up SQLite: transactions. Whenever you have to do multiple database writes, put them inside a transaction. Instead of writing to (and locking) the file each and every time a write query is issued, the write will only happen once when the transaction completes.
The site that I mention I released in the first paragraph has been switched back to SQLite, and it's running quite smoothly once I tuned my code in a few places.
* the site is no longer available
SQLite is scalable in single-user terms: I have a multi-gigabyte database that performs very well, and I haven't had many problems with it.
But it is single-user, so it depends on what kind of scaling you're talking about.
In response to the comments: note that there is nothing that prevents using an SQLite database in a multi-user environment, but every transaction (in effect, every SQL statement that modifies the database) takes a lock on the file, which will prevent other users from accessing the database at all.
So if you have lots of modifications being made to the database, you're essentially going to hit scaling problems very quickly. If, on the other hand, you have lots of read access compared to write access, it might not be so bad.
SQLite will of course function in a multi-user environment; it just won't perform well.
SQLite drives the sqlite.org website and others that have lots of traffic. They suggest that if you have fewer than 100K hits per day, SQLite should work fine. And that was written before they delivered the "Write-Ahead Logging" feature.
If you want to speed things up with SQLite, do the following:
Upgrade to SQLite 3.7.x
Enable write-ahead logging
Run the following pragma: "PRAGMA cache_size = Number-of-pages;" The default size (Number-of-pages) is 2000 pages, but if you raise that number, you raise the amount of data that is served straight out of memory.
You may want to take a look at my video on YouTube called "Improve SQLite Performance With Writeahead Logging" which shows how to use write-ahead logging and demonstrates a 5x speed improvement for writes.
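Item 3 as code, assuming System.Data.SQLite and an already-open connection (the 10,000-page figure is only an example; the default is 2000 pages):

    using System.Data.SQLite;

    static class SqliteCacheDemo
    {
        static void Tune(SQLiteConnection conn)
        {
            using (var cmd = conn.CreateCommand())
            {
                // Raise the page cache. The value is in pages, not bytes,
                // and must be re-issued on every new connection.
                cmd.CommandText = "PRAGMA cache_size = 10000;";
                cmd.ExecuteNonQuery();
            }
        }
    }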
SQLite is a desktop or in-process database. SQL Server, MySQL, Oracle, and their brethren are servers.
Desktop databases are by their nature not good choices for any application that needs to support concurrent write access to the data store. That includes, at some level, most websites ever created: if users even have to log in for anything, you probably need write access to the DB.
Have you read the SQLite docs - http://www.sqlite.org/whentouse.html ?
SQLite usually will work great as the database engine for low to medium traffic websites (which is to say, 99.9% of all websites). The amount of web traffic that SQLite can handle depends, of course, on how heavily the website uses its database. Generally speaking, any site that gets fewer than 100K hits/day should work fine with SQLite. The 100K hits/day figure is a conservative estimate, not a hard upper bound. SQLite has been demonstrated to work with 10 times that amount of traffic.
SQLite scalability will depend highly on the data used and its format. I've had some tough experience with extra-long tables (GPS records, one record per second). Experience showed that SQLite would slow down in stages, partly due to the constant rebalancing of the growing B-trees holding the indexes (and with time-stamped indexes, you just know those trees are going to get rebalanced a lot, yet they are vital to your searches). So in the end, at about 1 GB (very ballpark, I know), queries became sluggish in my case. Your mileage will vary.
One thing to remember, despite all the bragging, SQLite is NOT made for data warehousing. There are various uses not recommended for SQLite. The fine people behind SQLite say it themselves:
Another way to look at SQLite is this: SQLite is not designed to replace Oracle. It is designed to replace fopen().
And this leads to the main argument (not quantitative, sorry, but qualitative), SQLite is not for all uses, whereas MySQL can cover many varied uses, even if not ideally. For example, you could have MySQL store Firefox cookies (instead of SQLite), but you'd need that service running all the time. On the other hand, you could have a transactional website running on SQLite (as many people do) instead of MySQL, but expect a lot of downtime.
I think that one (in numbers: 1) web server serving hundreds of clients appears on the backend as a single connection to the database, doesn't it?
So there is no concurrent access to the database, and therefore we can say that the database is working in "single-user mode". It makes no sense to discuss multi-user access in such a circumstance, and so SQLite works as well as any other server-based database.
Think of it this way: SQLite locks the whole database every time someone writes to it (reads don't take that lock). So if you're serving up a web page or an application that has multiple concurrent users, only one of them can write at a time with SQLite, and right there is a scaling issue. If it's a one-person application, say a music library where you hold hundreds of titles, ratings, information, usage, playing and play time, then SQLite will scale beautifully, holding thousands if not millions of records (hard drive willing).
MySQL, on the other hand, works well for server apps where people all over will be using it concurrently. It doesn't take a whole-database lock for every write, and it scales to quite a large size. So for your music library MySQL would be overkill, as only one person would see it - UNLESS this is a shared music library where thousands add to or update it. Then MySQL would be the one to use.
So in theory MySQL scales better than SQLite because it can handle multiple users, but it is overkill for a single-user app.
SQLite's website (the part that you referenced) indicates that it can be used for a variety of multi-user situations.
I would say that it can handle quite a bit. In my experience it has always been very fast. Of course, you need to index your tables, and when coding against it, you need to make sure you use parameterized queries and the like. Basically the same stuff you would do with any database to improve performance.
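For instance, a parameterized query with System.Data.SQLite looks like this (the posts table and author column are invented for the example):

    using System.Data.SQLite;

    static class QueryExample
    {
        static long CountPostsByAuthor(SQLiteConnection conn, string author)
        {
            using (var cmd = conn.CreateCommand())
            {
                // The placeholder keeps user input out of the SQL text,
                // preventing injection and letting SQLite reuse the
                // compiled statement.
                cmd.CommandText =
                    "SELECT COUNT(*) FROM posts WHERE author = @author;";
                cmd.Parameters.AddWithValue("@author", author);
                return (long)cmd.ExecuteScalar();
            }
        }
    }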