Accessing 1 Berkeley DB environment from 2 different applications - berkeley-db

Can two or more different applications access a single Berkeley DB environment at the same time?

Yes, but you have to set it up properly.
An environment may be shared by any number of processes, as well as by any number of threads within
those processes. It is possible for an environment to include resources from other directories on the
system, and applications often choose to distribute resources to other directories or disks for
performance or other reasons. However, by default, the databases, shared regions (the locking,
logging, memory pool, and transaction shared memory areas) and log files will be stored in a single
directory hierarchy.
http://docs.oracle.com/cd/E17275_01/html/programmer_reference/env.html
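To make "set it up properly" concrete, here is a minimal sketch, assuming the Python bsddb3 bindings to Berkeley DB and a hypothetical environment home of /var/myapp/env (both are assumptions, not part of the question). The key point from the documentation is that every process joins the same environment directory and requests the same subsystems:

```python
# Minimal sketch: each cooperating process runs this same code against the
# same environment home. Assumes the bsddb3 bindings and the hypothetical
# directory /var/myapp/env, which must already exist.
from bsddb3 import db

ENV_HOME = "/var/myapp/env"  # hypothetical shared environment directory

# Every process must request the same subsystems so the shared regions
# (locking, logging, memory pool, transactions) are created consistently.
FLAGS = (db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOCK |
         db.DB_INIT_LOG | db.DB_INIT_TXN)

env = db.DBEnv()
env.open(ENV_HOME, FLAGS)

# Open (or create) a database file inside the shared environment.
database = db.DB(env)
database.open("shared.db", dbtype=db.DB_BTREE,
              flags=db.DB_CREATE | db.DB_AUTO_COMMIT)

# Writes are wrapped in a transaction; the environment's lock and log
# regions coordinate concurrent processes so they do not corrupt each other.
txn = env.txn_begin()
database.put(b"key", b"value", txn=txn)
txn.commit()

database.close()
env.close()
```

Two different applications can run this code at the same time; Berkeley DB uses the shared regions in the environment directory to coordinate locking and logging between them.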

Related

Virtual Memory and pages in RAM

In a system with virtual memory, when a process's pages are swapped between disk and RAM, is it true that all of its pages are always put in the swap area, or only the pages that do not fit in RAM?
Which one of these two situations happens?
Swapping is the memory management technique where the entire process is stored on disk. Swapping was common in the days of 64 KB address spaces, when it did not take many disk I/O operations to store an entire process. Swapped processes are stored in a swap file.
Paging is the memory management technique where individual pages are stored on disk. Pages are stored in a page file.
Some systems use both swapping and paging; Windows, for example, has recently reintroduced a form of swapping. On a modern demand-paged system, only pages that are evicted from RAM (and cannot simply be discarded) are written out, so the second situation is what normally happens.

IIS 7.0 Application Pool Shared Between Sites - Is Cache Shared between sites?

I have two sites that share the same data, except that one of them is the content management system. Within the content management system, when an item is saved, it expires the cache for that particular object.
On the other site, I would like to use the cache so I don't have to keep making round trips to the database. If I'm using the same cache keys/objects between these sites, which share the same app pool, will the site that isn't the CMS see that its cache has expired and retrieve the new object?
The two applications run in the same application pool, but they do not share the same memory space: each runs in its own AppDomain within the worker process. You can think of the two applications as having their own distinct sets of objects, so one does not affect the other. You can't access another application's variables, and expiring a cache entry in one application has no effect on the other, even if both entries refer to the same underlying data.
No, because the cache is specific to the AppDomain, not the AppPool. See this question about sharing cache across applications in the same app pool: Shared variable across applications within the same AppPool?
Your answer is summed up in just two words - "Application Cache". It applies to individual applications and doesn't care about the AppPool, so it won't work the way you're expecting.
Perhaps you can look into session sharing between the 2 apps.

Multiple deployers single Content Delivery database (Broker DB)

In the publishing scenario I have, we have multiple deployers pushing content to both file system and database (broker). Pages and Binaries are put on the file system, everything else in the Broker. We have one of the deployers putting the content into the database. Is this the recommended best practice?
If the storage configurations in all deployers also put the content into the database, how does Tridion handle this? Could this cause duplicate entries, locking failures etc?
I'm afraid at the time of writing I don't have access to an environment to test how this would work.
SDL best practice is to have a one-to-one relationship between a deployer and a publication; that means that as long as two deployers do not publish the same content (from the same publication), they will not collide, provided that, when deploying to a file system, there is separation between the deployed sites, e.g. www/pub1 and www/pub2.
Your explanation of your scenario needs some additional information to make it complete, but it sounds most likely that there are multiple broker databases (albeit hosted on a single database server). This is the most common setup when dealing with multiple file systems on web servers combined with a single database server.
I personally do not like this setup, as I think it would be better to host file system content in a shared location and share a single DB. Or better still, deploy everything to the database and use something like DD4T/CWA.
I have seen (and even recommended based on customer limitations) similar configurations where you have multiple deployers configured as destinations of a given target.
Only one of the deployers can write to the database for the same transaction, otherwise you'll have concurrency issues. So one deployer writes to the database, while all others write to the file system.
All brokers/web applications are configured to read from the database.
This solves the issue of deploying to multiple servers and/or data centers where using a shared file system (the preferred approach) is not feasible, be it for cost or any other reason.
In short - not a best practice, but it is known to work.
Julian's and Nuno's approaches cover most of the common scenarios. Indeed a single database is a single point of failure, but in many installations, you are expected to run multiple schemas on the same database server, so you still have a single point of failure even if you have multiple "Broker DBs".
Another alternative to consider is totally independent delivery nodes. This might even mean running a database server on your presentation box. These days it's all virtual anyway so you could run separate small database servers. (Licensing costs would be an important constraint)
Each delivery server has its own database and file system. Depending on how many you want, you might not want to set up multiple destinations/deployers, so you deploy to one, and use file system replication and database log shipping to mirror the content to the rest.
Of course, you could configure two deployment systems (or three) for redundancy, assuming you can manage all the clustering etc.
OK - to come clean - I've never built one like this, but I'm fairly sure elements of this kind of design will become more common as virtualisation increases, and licensing models which support it. (Maybe we have to wait for Tridion to support an open source database!)

How to scale a document storage system?

I maintain a web application (ASP.NET/IIS7/SQL2K8/Win2K8) that needs to access documents, actually hundreds of thousands of documents, and growing. Currently, they are all on a Windows 2K8 Server file share, accessed by UNC path (SMB). The files are in a single flat directory and I'm trying to plan how best to improve this solution. I don't want to use the SQL FILESTREAM feature, as it would be a significant effort to migrate everything into it, and it would really lock us in to SQL Server. I also need to find a way to replicate the data for disaster recovery, so perhaps a solution can help with that too.
Options could be:
- Segment files into multiple directories? (see the sketch below)
  - Application would add metadata for which directory it's on (or segment by other means)
- Segment files into separate servers? (virtualize)
  - Backup becomes more complicated.
  - Application would add metadata for which server it's on
- NAS storage
- SAN storage
- Put a service (WCF) in front of the files and have the app talk to the service
  - Bonus of being reusable across many applications
Assuming I'm going to store on the file system and not in the database (I've read those discussions here), which would be a more scalable solution?
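To make the first option concrete, here is a minimal sketch of hash-based directory segmentation. It is written in Python for brevity rather than the application's ASP.NET stack, and the share root, the two-level fan-out, and the function names are all assumptions:

```python
# Sketch of hash-based directory segmentation: derive a stable subdirectory
# from the document's name so the single flat folder fans out into many
# smaller ones. The root path and two-level fan-out are assumptions.
import hashlib
import os
import shutil

STORE_ROOT = r"\\fileserver\documents"  # hypothetical UNC share root


def segmented_path(filename: str) -> str:
    """Map a filename to <root>/ab/cd/<filename> using an MD5-based prefix."""
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    return os.path.join(STORE_ROOT, digest[:2], digest[2:4], filename)


def store(local_path: str) -> str:
    """Copy a local file into its segmented location and return that path."""
    target = segmented_path(os.path.basename(local_path))
    os.makedirs(os.path.dirname(target), exist_ok=True)
    shutil.copy2(local_path, target)
    return target


# The application only needs the filename to find a document again, because
# the directory is recomputed from the same hash, e.g.:
# segmented_path("invoice-0042.pdf") -> \\fileserver\documents\<xx>\<yy>\invoice-0042.pdf
```

With two hex characters per level this gives up to 65,536 leaf directories, so each one stays small even with hundreds of thousands of documents, and no extra lookup metadata has to be stored to locate a file.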
You've got a couple of issues:
- managing a large volume of (static?) files
- preparing for backups and disaster recovery of said files
I'll throw this out there, even though I'm not a fan of the answer, but you might poke around with the free SharePoint 2010 Foundation that's included with server 2k8. If you're having issues with finding the documents you need (either by search, taxonomy via tagging or other metadata) as well as document expiration and you don't want to buy a full blown document management system, this might be a solution. Of course it introduces new problems...
If your only desire is to have these files available to spit out on the web, then the file store like you're using now really is the simplest solution. For DR/redundancy purposes, I'd look at a) running them on a RAID/SAN of some sort and b) auto-syncing them with the cloud (either Azure or Amazon). For b) you can get apps that make the cloud appear as a mapped drive and then use rsync-type software to keep the cloud up to date.
If you want to build something new and cool, you might think about moving the entire file archive into the cloud and just write a table in a db to manage the file name, old location, new cloud location and a redirector code that can provide the access tokens to requestors.
3 different approaches... your choice.
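For the last of those approaches (the cloud archive plus lookup table), here is a minimal sketch of the table behind such a redirector, assuming a hypothetical SQLite schema and leaving the access-token step as a placeholder:

```python
# Sketch of the metadata table plus redirector lookup: a small table maps
# each file name to its old share location and its new cloud location.
# Table layout, paths, and the token step are assumptions.
import sqlite3
from typing import Optional

conn = sqlite3.connect("document_locations.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS documents (
           file_name      TEXT PRIMARY KEY,
           old_location   TEXT,   -- original UNC path on the file share
           cloud_location TEXT    -- object/blob URL in the cloud store
       )"""
)


def register(file_name: str, old_location: str, cloud_location: str) -> None:
    """Record where a document used to live and where it lives now."""
    conn.execute(
        "INSERT OR REPLACE INTO documents VALUES (?, ?, ?)",
        (file_name, old_location, cloud_location),
    )
    conn.commit()


def redirect_target(file_name: str) -> Optional[str]:
    """Return the cloud URL for a document, or None if it has not moved yet.

    A real redirector would attach a short-lived access token (e.g. a signed
    URL from the cloud provider's SDK) before handing this to the requestor.
    """
    row = conn.execute(
        "SELECT cloud_location FROM documents WHERE file_name = ?",
        (file_name,),
    ).fetchone()
    return row[0] if row else None
```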

What are the advantages and/or disadvantages of running each website in its own AppPool?

We have a couple of sites which have some memory problems, crash fairly often, and take a lot of other sites down with them. So I'm planning to at least put the troublesome sites in their own individual AppPools. I just wanted to make sure that this is a safe move and to ask whether there is anything I should know about separating AppPools.
There is a memory overhead associated with each app pool (a couple of hundred megabytes, if memory serves), so if the problem is a lack of overall memory, this might not help.
Having said that, we run all of our websites in their own app pools by default, but we tend to have a maximum of a handful on a single server, so I have no experience of large numbers of app pools at once.
We also find that being able to recycle a single app pool without affecting the other sites is a big advantage.
Finally, as well as isolation (as Guffa mentions), you can tune the settings of each app pool to restrict memory use etc., as well as its identity and permissions. (We tend to run different websites under different accounts that only have permissions to their own databases, for example.)
There are a few questions about this on Server Fault too:
https://serverfault.com/questions/2106/why-add-additional-application-pools-in-iis
Advantages:
- Separate credentials per app pool/web site
- Better isolation, recycling, etc.
- Separate configuration (timeouts, memory, etc.)
Disadvantages:
- Memory usage increases (not so much an issue with 64-bit these days)
Generally, there is no downside. Unless you have a massive number of sites on one web server, whether to have separate app pools or not is a minor issue.
The advantage is that you isolate each site in its own process.
The disadvantage is that you get more processes running on the server.
So, if you do this for just a few sites, there should be no problem at all.
