Usage of `deferred_segment_creation` - oracle11g

I heard that deferred_segment_creation was introduced in Oracle 11g. I have gone through the documentation. Do we need to set the value of deferred_segment_creation for each table we create? Could someone help me understand the usage of deferred_segment_creation?

deferred_segment_creation is normally set at the database level, though it can also be set at the session level. You can specify SEGMENT CREATION DEFERRED when you create an individual table, but that is rare.
Generally, deferred_segment_creation is helpful when you are installing large packaged applications that create thousands of tables, many if not most of which will never be used in a particular installation. It avoids wasting space on tables that will never hold any data. If you're building your own application, you're probably not creating a ton of tables that will never have data, so this is much less useful.
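As a sketch of the moving parts (the table names here are made up for illustration): the parameter can be set instance-wide or per session, overridden per table, and the SEGMENT_CREATED column of USER_TABLES shows whether a segment has been allocated yet.

```sql
-- Set the default for the whole instance (or use ALTER SESSION for one session).
ALTER SYSTEM SET deferred_segment_creation = TRUE;

-- No segment is allocated until the first row is inserted.
CREATE TABLE demo_deferred (id NUMBER) SEGMENT CREATION DEFERRED;

-- Force immediate allocation for a specific table, overriding the parameter.
CREATE TABLE demo_immediate (id NUMBER) SEGMENT CREATION IMMEDIATE;

-- Check which tables actually have segments.
SELECT table_name, segment_created FROM user_tables;
```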


What cache strategy do I need in this case?

I have what I consider to be a fairly simple application. A service returns some data based on another piece of data. A simple example, given a state name, the service returns the capital city.
All the data resides in a SQL Server 2008 database. The majority of this "static" data will rarely change. It will occasionally need to be updated and, when it does, I have no problem restarting the application to refresh the cache, if implemented.
Some data, which is more "dynamic", will be kept in the same database. This data includes contacts, statistics, etc. and will change more frequently (anywhere from hourly to daily to weekly). This data will be linked to the static data above via foreign keys (just like a SQL JOIN).
My question is, what exactly am I trying to implement here, and how do I get started? I know the static data will be cached, but I don't know where to start with that. I tried searching but came up with so much stuff that I'm not sure where to begin. Recommendations for tutorials would also be appreciated.
You don't need to cache anything until you have a performance problem. Only once you have a noticeable problem, and have measured your application tiers to determine that the database is in fact the bottleneck (which it rarely is), should you start looking into caching data. Caching is always a tradeoff: memory vs. CPU vs. real-time data availability. There is no reason to make your application more complicated than it needs to be.
An extremely simple 'win' here (I assume you're using WCF here) would be to use the declarative attribute-based caching mechanism built into the framework. It's easy to set up and manage, but you need to analyze your usage scenarios to make sure it's applied at the right locations to really benefit from it. This article is a good starting point.
Beyond that, I'd recommend looking into one of the many WCF books that deal with higher-level concepts like caching and try to figure out if their implementation patterns are applicable to your design.

Big database and how to proceed

I'm working on a website running ASP.NET with an MSSQL database. I realized that the number of rows in several tables can be very high (possibly something like a hundred million rows). I think this would make the website run very slowly. Am I right?
How should I proceed? Should I base it on a multi-database system, so that users are separated into different databases and each database is smaller? Or is there a different, more effective and easier approach?
Thank you for your help.
Oded's comment is a good starting point and you may be able to just stop there.
Start by indexing properly and only returning relevant result sets. Consider archiving unused (or rarely accessed) data.
However, if that isn't enough, partitioning or sharding is your next step. This is better than a "multidatabase" solution because your logical entities remain intact.
Finally, if that doesn't work, you could introduce caching. Jesper Mortensen gives a nice summary of the options that are out there for SQL Server:
Sharedcache -- open source, mature.
Appfabric -- from Microsoft, quite mature despite being "late beta".
NCache -- commercial, I don't know much about it.
StateServer and family -- commercial, mature.
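To make the partitioning suggestion concrete, here is a minimal T-SQL sketch; the function, scheme, table, and column names are all hypothetical, and the boundary values are examples:

```sql
-- Split rows into partitions by year of OrderDate.
CREATE PARTITION FUNCTION pf_order_year (datetime)
AS RANGE RIGHT FOR VALUES ('2011-01-01', '2012-01-01');

-- Map every partition to the PRIMARY filegroup for simplicity;
-- in practice you would spread partitions across filegroups.
CREATE PARTITION SCHEME ps_order_year
AS PARTITION pf_order_year ALL TO ([PRIMARY]);

-- Create the large table on the partition scheme.
CREATE TABLE dbo.Orders (
    OrderId   INT      NOT NULL,
    OrderDate DATETIME NOT NULL
) ON ps_order_year (OrderDate);
```

Queries that filter on OrderDate can then touch only the relevant partitions (partition elimination) instead of scanning the whole table.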
Try partitioning the data. This should make each query faster, and the website shouldn't be as slow.
I don't know what kind of data you'll be displaying, but try to give users the option to filter it. As someone already commented, partitioned data will make everything faster.

Creating Data Access Layer for Small website

I am creating my application in ASP.NET 3.5. I have to build my data access layer, in which I am using the traditional method of fetching/updating the data: SqlConnection, then SqlCommand, then SqlDataAdapter.
Is there any other way I can create my DAL easily?
Specification:
My website is small, approx. 7-10 pages.
The database has around 80 tables.
What I know:
Linq to SQL - I don't want to use it because I am not fully familiar with LINQ statements and I need to develop the application really fast [3 days :-( ]. Also, there is a 100% chance that the table structure will be altered in the future.
Enterprise Library - it would take too much time for me to integrate into my application.
Any other suggestions for creating my data layer, quick ... fast ... and "NOT" dirty?
Thanks in advance.
How about using Codesmith (free version 2.6) to generate a simple set of data access objects off your database? Given the small number of DB objects that you need to model I think this would be a quick and easy way of achieving your goal given the time constraints.
I would have recommended using LINQ to SQL. But since that is a no from you, the only other option I would suggest is strongly typed DataSets and TableAdapters generated by Visual Studio. They are old, but decent enough to work in any modern application.
They are fast to create. They provide type safety. They are quite flexible for configuration and customization. Since they are generated by Visual Studio, any changes made to the database can be reflected quickly.
Being a LINQ beginner myself, I would recommend taking the plunge and going with LINQ to SQL or Entity Framework. I can't say for certain without knowing your requirements, but there's a good chance that taking the time to learn basic LINQ for this project would speed up development overall.
You may also want to consider SubSonic. It's relatively easy to implement and is fairly intuitive to use. Used it for the first time recently on a small project, and despite some initial configuration problems getting it to work with MySQL, it handled data access pretty well.

Is there a standard practice for storing default application data?

Our application includes a default set of data. The default data includes coefficients and other factors that are unlikely to ever change but still need to be updatable by the user.
Currently, the original default data is stored as a populated class within the application. Data updates are stored to an external XML file. This design allows us to include a "reset" feature to restore the original default data. Our rationale for not storing the defaults externally (e.g. in an XML file) was to minimize the risk of their being altered. The overall volume of data doesn't warrant a database.
Is there a standard practice for storing "default" application data?
Suppose I were to answer: "Yes, there is a standard. 79% of systems worldwide externalise their defaults to a database." Would you now feel motivated to adopt a database? Surely not! Your particular requirements don't merit that overhead.
We're talking trade-offs here. Do the defaults need to change frequently? How much effort is it to change them using your current approach? Do you need to release different versions of the application with different defaults? Do the defaults change as you move from UAT to Production?
If you explore your requirements, an engineering solution should emerge. In all likelihood you will then make a better choice than the current common practice ("standard") that most folks have adopted, which all too often is to use whatever technique they used on their previous project.
For what it's worth, my personal "standard" is to externalise everything. Even when I don't expect things to change, sometime, somewhere, they do. Once I've decided to externalise then XML or property files doesn't make much difference to me.
Properties files sound OK to me. You can also include them inside the jar so that you don't have to carry them around separately.
Edit: the "reset" function goes into your application code, though.
Having these defaults in an external file could make updating them easier; you could always keep a copy in the download/on CD, etc.

Is there a stand-alone database for Adobe AIR that supports large amounts of data?

I have considered SQLite, but from what I've read, it is very unstable at sizes bigger than 2 GB. I need a database that in theory can grow up to 10 GB.
It would be best if it was stand-alone since it is easier to implement for non-techie users, instead of having the extra step of installing something like MySQL which most likely will require assistance.
Any recommendations?
SQLite should handle your file sizes just fine. The only caveat worth mentioning is that SQLite is not suited to highly concurrent environments, since the entire database file is locked exclusively during writes.
So if you are writing an application that needs to handle several users concurrently, a better choice would be PostgreSQL.
I believe SQLite will actually work fine for you with large databases, especially if you index them appropriately. Considering SQLite's popularity it seems unlikely that it would have fundamental bugs.
I would suggest that you revisit the decision to rule out SQLite, and you might try to compensate for the selection bias of negative reports. That is, people tend to publicize bug reports, not non-bug reports, and if SQLite were the most popular embedded database then you might expect to see more negative experiences than with less popular packages even if it were superior.
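As a sketch of the kind of setup that helps SQLite at this scale (the table and index names are made up for illustration): pick the page size before creating any tables, index the columns your queries filter on, and run ANALYZE so the query planner has statistics to work with.

```sql
-- page_size must be set before the first table is created
-- (or the database must be vacuumed for it to take effect).
PRAGMA page_size = 4096;

CREATE TABLE readings (
    id          INTEGER PRIMARY KEY,
    sensor_id   INTEGER NOT NULL,
    recorded_at TEXT    NOT NULL,
    value       REAL    NOT NULL
);

-- Index the columns your queries filter and sort on.
CREATE INDEX idx_readings_sensor_time ON readings (sensor_id, recorded_at);

-- Gather statistics for the query planner.
ANALYZE;
```

With appropriate indexes, lookup cost grows with the depth of the index b-tree rather than the size of the file, which is why multi-gigabyte SQLite databases remain workable.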