Oracle ALL_TABLES.LOGGING clarification - oracle11g

I queried the USER_TABLES view (the per-user counterpart of SYS.ALL_TABLES) and saw a column called LOGGING which is set to either YES or NO. This is an Oracle 11g database, and I am not too familiar with the specifics of Oracle databases.
I just want to find out what that parameter does. What kind of logging are we talking about?
I am interested in finding out whether there is any connection between this parameter and the CREATED and LAST_MODIFIED fields usually available in Oracle-based applications.
Does this logging parameter also enable logging of data changes (INSERT, UPDATE, DELETE), including the old and new values of the changed fields?
Appreciate your help folks!

Sort of. The documentation describes the column thusly:
Indicates whether or not changes to the table are logged; NULL for partitioned tables
This relates to the LOGGING clause in the CREATE TABLE statement:
Specify whether the creation of the table and of any indexes required
because of constraints, partition, or LOB storage characteristics will
be logged in the redo log file (LOGGING) or not (NOLOGGING).
This is documented separately, along with a lot more information. Simply put, it indicates whether changes made to the table are written to the redo log so that they can be recovered in the event of an instance failure. It is not there so you can inspect the changes themselves; you'll have to use triggers or a materialized view for that.
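As a quick illustration (the table name here is made up), the attribute is set per table and can be toggled:

    -- Create a table with NOLOGGING: creation and certain bulk
    -- operations generate minimal redo
    CREATE TABLE t_staging (id NUMBER, payload VARCHAR2(100)) NOLOGGING;

    -- Inspect the attribute this question is asking about
    SELECT table_name, logging
    FROM   user_tables
    WHERE  table_name = 'T_STAGING';   -- returns LOGGING = 'NO'

    -- Restore the default full redo logging
    ALTER TABLE t_staging LOGGING;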

Related

Is there a way to enforce a schema constraint on an AWS DynamoDB table?

I'm an MSSQL developer who was recently tasked with building a new application on DynamoDB, since we use AWS and wanted a highly scalable database service.
My biggest concern is data integrity. For example, I have a table for all my users where every row needs to have a username, email, and name field, all strings, plus a verified field that's an int. Is there any way to require all entries in that table to have those fields, each of that particular type?
Since the application is in PHP, I'm using Kettle as my ORM, which should prevent me from messing up the data integrity, but another developer voiced a concern about what happens if we ever add another application or someone manually changes some types via the console.
https://github.com/inouet/kettle
Currently, no: you are responsible for maintaining the integrity of your items with respect to the existence of attributes that are not keys on the base table. However, you can use LSIs and GSIs to enforce the data types of attributes (notwithstanding my qualm that this is not a recommended pattern, as it can cause partition heat, especially for attributes whose range of values is small). For example, verified seems like it might take only 0 or 1 as a value, so if you create a GSI with PK=verified, where verified is declared as a Number, DynamoDB will reject writes in which verified is not a Number; the trade-off is that writes to the base table may get throttled by the verified GSI.

SOA Composite not pulling data from query, dealing with latency

We have an Oracle SOA composite deployed on WebLogic 11g. A trigger in a MySQL database kicks off the composite. When it fires for a new entry, the account name is not being populated, so I added an additional query for the account name. I have included a screenshot of the check I use to query the account name.
It appears the corresponding table is not getting updated as fast as the table that the trigger is on. I tried putting a wait in the composite and that didn't work. I also tried a wait with a while loop, which hung the composite. Does anyone have any suggestions on how to handle a situation like this?
Thanks,
Tom
This actually turned out to be an issue within the composite: the assignments were incorrect, so the query did not return any data.

Is it possible to create a database/table/view alias?

Let's say there is a database owned by someone else called theirdb with a very slow view named slowview. I have an app that queries this view regularly, but, because it takes too long, I want to materialize it to a table within a database that I own (mydb.materializedview).
Is there a way in Teradata to create an alias database object so that I can run select * from theirdb.slowview but actually be selecting from mydb.materializedview?
I need to do some rigorous testing against their view, but it's so slow that I hardly have time to test anything. The other option is to edit the code so that it reads from mydb.materializedview, but that is, unfortunately, not an option in this particular case.
Teradata does not allow you to create aliases or symbolic links between objects.
If the object is fully qualified by database name and view name in the application, your options are a little more restricted. You would have to create a backup of their view definition and then place your materialized table in the same database. This would obviously be best done during a planned application outage.
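A sketch of that swap, using the names from the question (REPLACE VIEW here redirects their view name at your copy, which has the same effect as physically relocating the table; it assumes you have the right to replace views in theirdb):

    -- Save the original definition first; this is your backup
    SHOW VIEW theirdb.slowview;

    -- During the planned outage, point the old name at the materialized copy
    REPLACE VIEW theirdb.slowview AS
      SELECT * FROM mydb.materializedview;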
If the object is not fully qualified by database name and view name in the application and relies on a default database setting or application variable, you have a little more flexibility. If all the work is done at the view level, you can duplicate the environment in another database where you plan to keep a materialized version of their slowview. Then, by changing the user's default database or the application variable, you can point the application at the duplicate environment to complete your testing.
Additionally, you can try to cover (partially or fully) the query that makes up slowview with a join index. This lets you leave the application's codebase as it is; for queries that can be satisfied by the join index, the optimizer will use it. Keep in mind that a join index does incur a cost, as it is in essence a materialized version of the SQL used to construct it, which means additional I/O and change-management issues have to be taken into account.
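A minimal join index sketch, assuming (purely for illustration) that slowview joins two tables on acct_id:

    -- Materialize the join the view performs; queries that the join
    -- index covers can then be satisfied without touching the base tables
    CREATE JOIN INDEX mydb.slowview_ji AS
      SELECT a.acct_id, a.acct_name, b.balance
      FROM   theirdb.accounts a, theirdb.balances b
      WHERE  a.acct_id = b.acct_id
    PRIMARY INDEX (acct_id);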
Lastly, you could try creating additional secondary or hash indexes on the objects within slowview to improve its performance.
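For instance (column name invented):

    -- A non-unique secondary index on a commonly filtered column
    CREATE INDEX (acct_name) ON theirdb.accounts;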

semaphore for a datarow

I am writing a web application that allows the user basic CRUD operations against a database. The tables being updated have fewer than 200 records, and since multiple users may be using the application, there is a need for some sort of locking mechanism to keep two users from overwriting each other's changes.
I have looked into semaphores, but they seem to only limit the number of users executing the same code. In my data layer I have a class file for each table, so I can certainly employ a semaphore in a specific table's class file, but can I somehow limit the locking to the key fields?
Assuming that you are using a proper SQL implementation along with ASP.NET, why don't you use transactions to achieve this?
Additionally, you can read up on optimistic concurrency to see if that is what you need. Basically, before saving a value, the application checks whether the value currently in a particular field is the same as it was when the user first read it. If the values match, it is assumed that no one else has overwritten it, and the new value is saved to the DB; if they don't match, a warning message is returned instead.
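A common way to express that check in SQL is a version column, sketched here with invented table and parameter names:

    -- The row carries a version number; the UPDATE succeeds only if the
    -- version is still the one this user originally read.
    -- (@... are the command parameters supplied by the application.)
    UPDATE orders
    SET    status  = @new_status,
           version = version + 1
    WHERE  order_id = @order_id
      AND  version  = @version_read_at_load;

    -- If @@ROWCOUNT is 0, someone else changed the row first:
    -- reload the record and show the warning instead of saving.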

How do I check which values in my Form have changed before saving?

The situation is like this: we have a form with a large number of fields (over 30, spread across several tabs), and what I want to do is find which values have changed before saving, with minimal impact on performance. What happens right now is that, for editing, single records are queried from several databases. The values are passed over to the client side as value objects; at the moment they are not bound to any fields in the form.
My initial idea was to have a boolean flag for each field, set to true whenever that field is changed. At save time the program would run through the list of flags to see which fields had changed. This seems more than a bit clunky to me, so I was thinking maybe it could be done on the server side. But then I don't want to go through the fields one by one, checking which ones don't match the DB records.
Any ideas on what to do here?
This is a very common problem for Flex applications, and because it comes up so often there are a number of commercial data-management implementations. Query results are stored in entities and those entities are bound to a form on the client side; whenever a field is updated, the framework automatically performs the steps to persist the changes to the DB and to roll them back when requested.
Adobe LCDS Data Management - If you are dealing with a Java environment
WebOrb - If you are dealing with a .NET, PHP, Java, or Rails environment
Of course, you can re-invent the wheel and roll your own: set up PropertyChangeEvent listeners on each field, listen for the change events they dispatch, and write handlers for each one.
This sounds exactly like what we're doing with one of the projects I'm working on for a client.
What we do is dupe the value objects once they come back to the UI. Then, when calling the update service, I send both the original object and the new object. In the service, I do a field-by-field compare on the server to determine which values should be sent to the database.
If you need to update every field/property conditionally based on whether or not it changed, then I don't see a way to avoid checking every field/property. Even if you implement your boolean idea and flip the flag in the UI whenever anything changes, you're still going to have to check those boolean values when creating your query to determine what should be updated.
In my situation, three different databases are queried to create the value object that gets sent back to the UI. Field updates are saved in one of those databases and given first order of preference when doing the select. So, we have an explicit field-by-field comparison happening inside a stored procedure.
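A rough sketch of that preference select (a SQL Server style backend, and all database, table, and column names, are assumed purely for illustration):

    -- Prefer the field from the updates database when one was saved,
    -- otherwise fall back to the original source value.
    SELECT COALESCE(u.name,  s.name)  AS name,
           COALESCE(u.email, s.email) AS email
    FROM   source_db.dbo.customer AS s
    LEFT JOIN updates_db.dbo.customer_updates AS u
           ON u.customer_id = s.customer_id
    WHERE  s.customer_id = @customer_id;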
If you don't need field-by-field comparisons, but rather record-by-record comparisons, then the boolean approach to letting you know the record/value object has changed is going to save you some time and coding.