Updating SQLite DB with MSI?

We have a product with 3 main components
1) a client application
2) a network server
3) Datasets, mostly containing (read-only - from the customer's perspective) documents stored in BLOB fields within a SQLite DB
The client application can access datasets stored directly on that machine (many users are on non-networked laptops) or via the network server.
The data needs to be updated from time to time - a whole dataset can be several GB, so for updates we wish to send out only those documents that are new or have been revised. A patch, in a sense. Our customers tend to like MSIs they can incorporate into their own distribution strategies. Some are adamant about accepting nothing else.
How feasible is it to update a SQLite DB via MSI (without a complete overwrite of the DB file)?
I have 2 strategies in mind but both have drawbacks
1) The MSI installs some files on to the customer machine (workstation or server), and when the client or server software detects them it runs some sort of DB merge (see the sketch after this list).
2) The MSI accesses functions in a custom DLL (I'm not an MSI expert, so I don't even know if this is possible) to merge new content into the DB. I suspect custom DLLs rather defeat the point of the repackageability of MSIs.
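To make strategy 1) concrete, I imagine the merge looking roughly like this - a minimal Python sketch, assuming a hypothetical documents table keyed by doc_id in both the main dataset DB and the shipped patch file (the real schema and file names will differ):

    import sqlite3

    def apply_patch(main_db_path, patch_db_path):
        """Merge new/revised documents from a shipped patch DB into the main dataset DB.
        Assumes a hypothetical 'documents' table keyed by 'doc_id' in both files."""
        conn = sqlite3.connect(main_db_path)
        try:
            conn.execute("ATTACH DATABASE ? AS patch", (patch_db_path,))
            with conn:  # one transaction for the whole merge
                conn.execute(
                    "INSERT OR REPLACE INTO main.documents (doc_id, revision, content) "
                    "SELECT doc_id, revision, content FROM patch.documents"
                )
            conn.execute("DETACH DATABASE patch")
        finally:
            conn.close()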
This is far from my area of expertise, so can anyone suggest potential solutions?
Thanks for your time

Related

Pros/Cons of Excel VBA vs SQLite

I have recently been working with a relational database program written in Excel VBA. Excel VBA was chosen because it is a default application on the computers where I work, so everyone would be able to use the database.
As part of the database's development, the need has arisen to add some more tables that will only interact programmatically with the current database. To consider all my options, I am contemplating separating this new data into either an SQLite database or a second Excel file.
I keep changing my mind as to what would be the best route and would appreciate information from those who work with the programs.
The new database would need to perform normal database functions quickly and efficiently. Given this context, what are the advantages/disadvantages of using SQLite compared with Excel?
Excel is not a database! If you want to use the Office applications, use Access; it is designed for that.
If you want to use SQLite, bear in mind that its concurrency is very limited: the whole file is locked while a write is in progress. So if PC1 writes to the database and PC2 wants to do the same at the same time, the file is locked by PC1 and you will get an error on PC2.
My recommendation:
You can also install SQL Server Express, which is free, with a few limits (most notably a 10 GB maximum database size). Then store your data in SQL Server and use Excel as an interface to collect data from SQL Server in your Excel/VBA applications. This is a lot more scalable than Access, let alone SQLite.
Or, if your organization doesn't allow installation of software for whatever reason, go for Access.
You can use SQLite if you have a single application on a single device - mobile apps, for example, use SQLite. If you have two applications on a single device, SQLite's concurrency problem comes into play: it is possible that both apps want to write at the same time, which will give you an error.
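To make the locking behaviour concrete, here is a small Python sketch (file and table names are arbitrary): the second connection waits for its busy timeout and then fails with "database is locked" while the first connection holds the write lock.

    import sqlite3

    # "PC1": open the database and take the write lock explicitly.
    writer = sqlite3.connect("shared.db", isolation_level=None)  # autocommit; we manage the transaction
    writer.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, val TEXT)")
    writer.execute("BEGIN IMMEDIATE")                            # grabs the write lock
    writer.execute("INSERT INTO t (val) VALUES ('from PC1')")

    # "PC2": tries to write while PC1 holds the lock; waits up to 5 seconds,
    # then raises sqlite3.OperationalError: database is locked.
    other = sqlite3.connect("shared.db", timeout=5.0)
    try:
        other.execute("INSERT INTO t (val) VALUES ('from PC2')")
        other.commit()
    except sqlite3.OperationalError as exc:
        print("second writer blocked:", exc)

    writer.execute("COMMIT")  # once PC1 commits, PC2 can write again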

How to sync data of multiple Azure Web Site instances

I have an Azure website running on several instances in Basic compute mode. I want to synchronize local data (just a few small numbers like the number of online users and total app users, not whole tables or files) between the instances. How can I achieve this? I've seen Refreshing Data on all Azure Instances but it didn't help much. My current naive approach is writing these values to my database and, in my application code, keeping local changes and syncing the actual data from the database every few minutes, which is OK for my application. But I'm looking for a more elegant way of achieving sync between instances. Is this possible? If yes, how?
There is not an API that will support syncing state between instances. However, here are a few ideas you might consider.
Local File System
The web site instances share a common file system. So, you could just write data to a .txt file in your App_Data folder and all the instances will use that same file. This would at least eliminate the dependency on a database for such simple data.
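As a rough illustration of that idea (the path, file name, and counters below are made up, and a real site would also need to cope with two instances writing at once, e.g. by retrying):

    import json
    import os

    # Hypothetical shared location; on Azure Web Sites all instances see the
    # same files under the site's App_Data folder.
    DATA_FILE = os.path.join("App_Data", "stats.json")

    def write_stats(stats):
        # Write to a temp file and rename, so readers never see a half-written file.
        tmp = DATA_FILE + ".tmp"
        with open(tmp, "w") as f:
            json.dump(stats, f)
        os.replace(tmp, DATA_FILE)

    def read_stats():
        try:
            with open(DATA_FILE) as f:
                return json.load(f)
        except FileNotFoundError:
            return {"online_users": 0, "total_app_users": 0}

    write_stats({"online_users": 12, "total_app_users": 3400})
    print(read_stats())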
Caching
Another option is to write the data to a cache. Your options for this would be to use the Managed Cache service or the new Redis Cache (in preview).
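For the cache route, the same handful of counters could live in the cache itself; a sketch using the redis-py client, with a placeholder host name, key names, and access key (Azure Redis Cache endpoints listen on the SSL port 6380):

    import redis

    # Placeholder endpoint and credentials for an Azure Redis Cache instance.
    r = redis.Redis(host="myapp.redis.cache.windows.net", port=6380,
                    password="<access-key>", ssl=True)

    # Atomic increments are safe across all web site instances.
    r.incr("online_users")
    r.set("total_app_users", 3400)

    print(int(r.get("online_users") or 0))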
Storage Tables
This may not be any better than your current solution. But, it is cheap and very efficient.

Multiple deployers single Content Delivery database (Broker DB)

In our publishing scenario, we have multiple deployers pushing content to both the file system and the database (Broker). Pages and binaries are put on the file system, everything else in the Broker. We have one of the deployers putting the content into the database. Is this the recommended best practice?
If the storage configurations in all deployers also put the content into the database, how does Tridion handle this? Could this cause duplicate entries, locking failures, etc.?
I'm afraid at the time of writing I don't have access to an environment to test how this would work.
SDL best practice is to have a one-to-one relationship between a deployer and a publication; as long as two deployers do not publish the same content (from the same publication), they will not collide, provided that, on a file system, there is separation between the deployed sites, e.g. www/pub1 and www/pub2.
Your explanation of your scenario needs some additional information to make it complete but it sounds most likely that there are multiple broker databases (albeit hosted on a single database server). This is the most common setup when dealing with multiple file systems on webservers, combined with a single database server.
I personally do not like this setup, as I think it would be better to host file system content in a shared location and share a single DB. Or, better still, deploy everything to the database and use something like DD4T/CWA.
I have seen (and even recommended based on customer limitations) similar configurations where you have multiple deployers configured as destinations of a given target.
Only one of the deployers can write to the database for the same transaction, otherwise you'll have concurrency issues. So one deployer writes to the database, while all others write to the file system.
All brokers/web applications are configured to read from the database.
This solves the issue of deploying to multiple servers and/or data centers where using a shared file system (the preferred approach) is not feasible, be it for cost or any other reason.
In short - not a best practice, but it is known to work.
Julian's and Nuno's approaches cover most of the common scenarios. Indeed a single database is a single point of failure, but in many installations, you are expected to run multiple schemas on the same database server, so you still have a single point of failure even if you have multiple "Broker DBs".
Another alternative to consider is totally independent delivery nodes. This might even mean running a database server on your presentation box. These days it's all virtual anyway so you could run separate small database servers. (Licensing costs would be an important constraint)
Each delivery server has its own database and file system. Depending on how many you want, you might not want to set up multiple destinations/deployers, so you deploy to one and use file system replication and database log shipping to mirror the content to the rest.
Of course, you could configure two deployment systems (or three) for redundancy, assuming you can manage all the clustering etc.
OK - to come clean - I've never built one like this, but I'm fairly sure elements of this kind of design will become more common as virtualisation increases and licensing models evolve to support it. (Maybe we have to wait for Tridion to support an open source database!)

Can I install WAMP on Microsoft Azure (Bizspark account)?

I have got a BizSpark account from Microsoft, and they are providing a basic Azure account. I have been told that it can run PHP; however, I would like to use a more tested solution like WAMP. On top of that, I want to place a quite heavy WordPress / BuddyPress installation on it (that I hope will bring a lot of traffic :)
Has anyone done something similar to this? If so, what is your experience / pitfalls etc.?
Thanks
Stelios
Yes, you can do this. At the end of the day you are just using Windows Server, so anything that installs there will install in the cloud as well. I have done this myself for hosting WordPress in Windows Azure.
However, there are some pitfalls here, mostly around the M (MySQL). To set up MySQL in Windows Azure is not really that hard, but you have several considerations on how to make sure it is always available. You can:
1) Set up a single instance of MySQL in a role and store the DB on local disk (this is a bad idea).
2) Set up a single instance of MySQL in a role and store the DB on a drive (blob-backed storage).
3) Set up 2 instances of MySQL that each point to a shared drive (hot failover). Only one drive will be able to mount, so you have reliability and failover, but only a single instance working for you at a time.
4) Set up 1 MySQL writer on a drive, and multiple readers on snapshots of that drive. Put in some logic via connection strings to make sure writes go only to the writer and reads go to the readers (sketched below). Snapshot every X minutes to update the readers.
5) Set up multiple instances of MySQL and use the native replication features (each storing to local disk), and rely on that if you lose an instance.
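Option 4) hinges on splitting reads and writes at the connection-string level. A rough Python sketch of that routing, using PyMySQL with placeholder host names and credentials (the real app would be PHP, but the idea is the same):

    import itertools
    import pymysql

    # Placeholder endpoints: one writable MySQL instance, several read-only
    # replicas built from drive snapshots.
    WRITER = {"host": "mysql-writer.internal", "user": "app",
              "password": "...", "database": "wordpress"}
    READERS = [
        {"host": "mysql-reader-1.internal", "user": "app",
         "password": "...", "database": "wordpress"},
        {"host": "mysql-reader-2.internal", "user": "app",
         "password": "...", "database": "wordpress"},
    ]
    _reader_cycle = itertools.cycle(READERS)

    def get_connection(for_write=False):
        """Writes always go to the single writer; reads round-robin over the readers."""
        cfg = WRITER if for_write else next(_reader_cycle)
        return pymysql.connect(**cfg)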
There are probably more permutations, but the gist of the problem is how you scale out MySQL to be available and reliable. In Windows Azure, you don't get to rely on the fact that the local disk will always be around or that you will always have the same instance. In fact, you can guarantee that your instances will be down for some period of time each month and eventually, given enough time, you will lose the local disk.
Overall, with multiple instances however, you can guarantee they won't be down simultaneously (to the service SLA level at least). So, you need to make sure MySQL works with multiple instances (or live with single instance downtime) and that your data is backed by blob storage to guarantee it is persisted.
Or you can scrap all that crap and just use SQL Azure, which solves all those problems. So it becomes WASP. SQL Azure can also be more economical for smaller DBs.
Ditto.
Installing MySQL on an Azure role is not a good idea for plenty of reasons, most notably (lack of) scalability and reliability. (That's just for deploying on Azure; MySQL itself is great.)
To set it up remotely in a reliable way, you're going to need a dedicated instance, which will run you at least $40 a month; going with SQL Azure is $10/GB, or free if you get an introductory offer or BizSpark.
If you're just looking to play around with a single-instance app, I'd suggest you rather use SQLite or some other in-memory DB; it'll be a lot less painful.

How to scale a document storage system?

I maintain a web application (ASP.NET/IIS7/SQL2K8/Win2K8) that needs to access documents - actually hundreds of thousands of documents, and growing. Currently, they are all on a Windows 2K8 Server file share, accessed by UNC path (SMB). The files are in a single flat directory and I'm trying to plan how best to improve this solution. I don't want to use the SQL FILESTREAM attribute, as it would be significant effort to migrate everything into it and would really lock us in to SQL Server. I also need to find a way to replicate the data for disaster recovery, so perhaps a solution can help with that too.
Options could be:
1) Segment files into multiple directories - the application would add metadata for which directory a file is in, or segment by other means (see the sketch below)
2) Segment files into separate servers (virtualize) - backup becomes more complicated, and the application would add metadata for which server a file is on
3) NAS storage
4) SAN storage
5) Put a service (WCF) in front of the files and have the app talk to the service - bonus of being reusable across many applications
Assuming I'm going to store on the filesystem and not in the database (I've read those discussions here), which would be a more scalable solution?
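For option 1), one common trick is to derive each subdirectory from a hash of the file name, so the flat directory fans out evenly and the application can recompute the location instead of storing it. A rough Python sketch with a placeholder UNC root:

    import hashlib
    import os
    import shutil

    ROOT = r"\\fileserver\documents"   # placeholder UNC root

    def shard_path(filename):
        """Map a document name to a stable two-hex-digit subdirectory (256 buckets)."""
        bucket = hashlib.md5(filename.encode("utf-8")).hexdigest()[:2]
        return os.path.join(ROOT, bucket, filename)

    def migrate(filename):
        """Move one file out of the flat directory into its shard."""
        target = shard_path(filename)
        os.makedirs(os.path.dirname(target), exist_ok=True)
        shutil.move(os.path.join(ROOT, filename), target)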
You've got a couple of issues:
- managing a large volume of (static?) files
- preparing for backups and disaster recovery of said files
I'll throw this out there, even though I'm not a fan of the answer: you might poke around with the free SharePoint 2010 Foundation that's included with Server 2K8. If you're having issues with finding the documents you need (either by search, or by taxonomy via tagging or other metadata), or with document expiration, and you don't want to buy a full-blown document management system, this might be a solution. Of course it introduces new problems...
If your only desire is to have these files available to spit out on the web, then a file store like you're using now really is the simplest solution. For DR/redundancy purposes, I'd look at a) running them on a RAID/SAN of some sort and b) auto-syncing them with the cloud (either Azure or Amazon). For b) you can get apps that make the cloud appear as a mapped drive and then use rsync-type software to keep the cloud copy up to date.
If you want to build something new and cool, you might think about moving the entire file archive into the cloud and just keeping a table in a DB to manage the file name, old location, new cloud location, plus redirector code that can provide the access tokens to requestors.
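To sketch that last idea: the redirector boils down to a lookup (which would live in the DB table) plus a call that hands out short-lived URLs. The example below assumes Amazon S3 and boto3 purely for illustration; the bucket name and table layout are made up.

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "myapp-documents"   # placeholder bucket name

    # In the real system this mapping lives in the DB table described above:
    # file name -> old UNC location, new cloud location.
    file_index = {
        "contract-1234.pdf": {
            "old_path": r"\\fileserver\docs\contract-1234.pdf",
            "cloud_key": "docs/contract-1234.pdf",
        },
    }

    def get_download_url(file_name, expires_seconds=300):
        """Return a short-lived URL the web app can redirect the requestor to."""
        entry = file_index[file_name]
        return s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": BUCKET, "Key": entry["cloud_key"]},
            ExpiresIn=expires_seconds,
        )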
3 different approaches... your choice.
