How do we make sure the record is locked? - plsql

In Oracle EBS, when we do data conversions and interfaces, loading data into Oracle from another system, how do we make sure the record is locked? How do we make sure no other person is updating our records?

Oracle EBS seeded APIs take care of locking. We don't insert data into EBS base tables directly:
we validate the data and insert it into interface tables, then run Oracle's standard programs to import the interface data into the base tables.
These standard programs use the seeded APIs to insert data into multiple base tables.
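Under the hood, the locking those APIs rely on is ordinary Oracle row locking. A minimal PL/SQL sketch of the pattern (the staging table and column names here are invented for illustration):

    DECLARE
      l_status xx_customer_stg.status%TYPE;
    BEGIN
      -- Lock the row first; NOWAIT raises ORA-00054 immediately
      -- if another session already holds the lock
      SELECT status
        INTO l_status
        FROM xx_customer_stg
       WHERE record_id = 101
         FOR UPDATE NOWAIT;

      UPDATE xx_customer_stg
         SET status = 'PROCESSED'
       WHERE record_id = 101;

      COMMIT;  -- releases the lock
    EXCEPTION
      WHEN OTHERS THEN
        ROLLBACK;  -- release any lock taken, then report the error
        RAISE;
    END;
    /

Until the COMMIT or ROLLBACK, any other session trying to lock or update that row either waits or, with NOWAIT, fails immediately.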
How do we make sure no other person is updating our records?
Developers use their own custom staging tables to import data into EBS.
When data moves from the staging tables to the interface tables, each interface keeps its own data source, so one team does not normally update another interface's records. We can't track updates made through back-end tools like SQL Developer or TOAD, but if records are updated through the applications we can trace the transaction via the LAST_UPDATED_BY column.
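Those WHO columns exist on every EBS table. A hedged example of tracing who last updated interface rows (using the AR customer interface table as an example; adapt the table name to your interface):

    -- Standard WHO columns joined to FND_USER to get a readable name
    SELECT t.last_update_date,
           u.user_name
      FROM ra_customers_interface_all t,
           fnd_user u
     WHERE t.last_updated_by = u.user_id;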
If you have any specific issue related to locking, let us know.

Related

Where are hash tables implemented, in the database or in server code?

I'm reading about hash tables and data structures, and one question comes to mind: where is a hash table implemented? Is it in server code or in the database?
The resources I've read seem to implement them in server code, but isn't storing data the job of the database? PS: I haven't gotten to non-SQL databases yet; maybe that's where my knowledge is lacking.
Many applications need to store some data internally, even if they're also using or updating data in a database at times. Often they'll even retrieve related data from a remote (across the network) database and have it available in RAM on the local machine for the application to access quickly.
Other times, an application may use a data structure such as a hash table to support some application behaviours that are not part of the business data model, and therefore don't belong in the database. For example, a GUI application might keep help strings to display when the mouse hovers over a widget/button/whatever - they might be stored in a hash table keyed on some GUI object identifier, screen region or whatever the GUI library finds useful to help it display the tooltips at the right time. Another application might keep a table of usernames and activity statistics that it generated by scraping some website - it might display them to the user on demand, or aggregate them or something, without ever saving them down to a database (historic data may be of no value, and it can scrape the website again).
In summary - non-trivial programs tend to use hash tables to provide quick access to the data they consult or manipulate, whether the programs are themselves databases, applications that do also use databases, or applications that run without any database support.
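Both halves of the question are true, in fact: database engines implement hash tables internally as well. In Postgres, for example, you can explicitly request a hash-based index; a small sketch (table and index names invented):

    -- A hash index is a hash table maintained by the database itself
    CREATE TABLE users (
        id       serial PRIMARY KEY,
        username text   NOT NULL
    );
    CREATE INDEX users_username_hash ON users USING hash (username);

    -- Equality lookups like this one are exactly what a hash index serves
    SELECT id FROM users WHERE username = 'alice';

So the answer is "both": application code uses hash tables for its in-memory data, and database engines use them under the covers for indexes and hash joins.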

Is a Schema in Oracle equivalent to a Database in Microsoft SQL Server?

I am new to Oracle database and I wanted to create a database in Oracle. I followed this link to create a database:
http://www.fehily.com/books/createdb/createdb_oracle_11g_2.html
In Microsoft SQL Server, when we create a database we use the create database command, and the database creation is instantaneous [within a fraction of a second], but the database tool described in the link above took a couple of minutes to create the database. Is database creation in Oracle really this much slower?
Searching more about it, I have a feeling that the database created by the tool above is not equivalent to the database we create in SQL Server. Rather, the schema/user in Oracle appears to be the equivalent of a database in SQL Server. Is that true?
So, if I want multiple databases in Oracle, do I create a single database and then multiple schemas inside that single database? And are those schemas then my "databases"?
I am very much confused about all this. Can someone please refer me to a nice article/book that explains these things in oracle in detail?
For most purposes, yes you would indeed map a SQL server database to an Oracle schema (=user).
The term "database" in Oracle does not mean the same as in SQL Server. An Oracle "database" (from a technical point of view) is more like a SQL Server instance/installation, rather than a "database" in SQL Server.
SQL Server has two levels of namespace: database and schema. Oracle has only a single level of namespace: the schema (which has a 1:1 relation to a user).
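To make that mapping concrete, a hedged sketch (names invented): the everyday CREATE DATABASE of SQL Server corresponds roughly to creating a user/schema in Oracle, since the Oracle database itself was already built at install time.

    -- SQL Server: a new "database" is cheap and quick
    CREATE DATABASE sales_app;

    -- Oracle: the rough equivalent is a new schema, i.e. a new user
    CREATE USER sales_app IDENTIFIED BY some_password
      DEFAULT TABLESPACE users
      QUOTA UNLIMITED ON users;
    GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW TO sales_app;

That is also why the Oracle tool took minutes: it was building the equivalent of a whole SQL Server instance, not just a new namespace.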
SQL Server and Oracle both support schemas.
A schema is like a new database, but it is not actually a new database.
Maybe you are confused by MySQL: MySQL doesn't support schemas as a separate level, while SQL Server offers full support for them.
In MySQL your database is a schema; the only difference is that MySQL doesn't support multiple schemas inside one database.
As for creating multiple databases versus a single database with multiple schemas, it all depends on your specific situation: you should test things like performance and consider how much money you want to spend, since a multi-database approach can be very expensive compared to a multi-schema approach.
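On the MySQL point: there, CREATE SCHEMA is literally a synonym for CREATE DATABASE, which is why the two terms blur together.

    -- In MySQL these two statements do exactly the same thing
    CREATE DATABASE app_db;
    CREATE SCHEMA  app_db2;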

Synchronize Postgres server database to SQLite client database

I am trying to create an app that receives a SQLite database from a server, for offline use with cloud synchronization. The server has a Postgres database with information from many clients.
1) Is it better to delete the SQLite database and create a new one from a query, or to try to synchronize and update the existing separate SQLite files (or is there a better solution)? The refreshes will happen a few times a day per client.
2) If it is the latter, could you give me any leads to resources on how I could do this?
I am pretty new to database applications so please excuse my ignorance and let me know if there is any way I could clarify.
There is no one size fits all approach here. You need to carefully consider exactly what needs to be done, what you are replicating, how much data is involved, and what your write models are, all before you build a solution. Along the way you have to decide how to handle write conflicts and more.
In general the one thing I would say is that such synchronization works best with append-only write models (i.e. inserts, no deletes, no updates), and one way to do it is to log changes that need to be made and replicate those changes.
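As a sketch of that change-log idea (the schema here is invented for illustration), the Postgres side might append every change to a log table and let each client pull only the rows it hasn't seen yet:

    -- Hypothetical append-only change log on the server
    CREATE TABLE change_log (
        change_id   bigserial   PRIMARY KEY,
        client_id   integer     NOT NULL,
        table_name  text        NOT NULL,
        op          text        NOT NULL CHECK (op IN ('insert','update','delete')),
        payload     jsonb       NOT NULL,  -- the affected row as of this change
        changed_at  timestamptz NOT NULL DEFAULT now()
    );

    -- A client that last applied change 1234 fetches only what is new to it
    SELECT change_id, table_name, op, payload
      FROM change_log
     WHERE client_id = 42
       AND change_id > 1234
     ORDER BY change_id;

The client applies those rows to its local SQLite file in order and remembers the last change_id it processed.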
However, master-master replication is difficult on the best of days and with the best of tools available. Jumping between databases with very different capabilities will introduce a number of additional problems. You are in for a big job.
Here's an open source product that claims to solve this for many database types including Postgres. I have no affiliation or commercial interest in this company.
https://github.com/sqlite-sync/SQLite-sync.com
http://sqlite-sync.com/
If you're able and willing to step outside relational databases to use an object store, you might want to have a look at CouchDB and perhaps PouchDB, which use an MVCC-based replication protocol designed to support multi-master replication, including conflict resolution. Under the covers, PouchDB uses adapters for SQLite, IndexedDB, local storage or a remote CouchDB instance to persist client-side data. It auto-selects the best client-side storage option for the given desktop or mobile browser. The SQLite engine can be either WebSQL or a Cordova SQLite plugin.
http://couchdb.apache.org/
https://pouchdb.com/

Maintain user data integrity across multiple databases for ASP.NET

I have 2 questions.
I am developing an ASP.NET web application that uses the standard ASP.NET membership. We intend to have the membership tables in 1 database. We have 2 other databases that store data for 2 different applications.
Shared - Membership info
DB1 - Application1
DB2 - Application2
Both applications use the membership info in the "Shared" database.
The Shared database has a table called UserDetails that stores additional user info such as name, phone and job title.
However, DB1 also has a table called Employees that stores the same fields: name, phone and job title. Each employee may be a user.
Also, for each table in DB1 and DB2 we keep an audit trail, i.e. which user updated the tables in the database. Hence, we need to store the UserID in the tables of DB1 and DB2.
We thought of having a Users table added in DB1 and DB2, so that every time a new user is created in Shared, the same user is created in the Users table in DB1 and DB2.
Our questions are:
What is the best way to maintain database integrity given the above setup? E.g. each employee is also a user; if any fields in DB1 such as username, name or phone are updated, then the same fields in the Shared DB should be updated, and vice versa.
Is it advisable to keep the membership data in a separate database in our case? What is the best solution, given that almost all the tables in DB1 and DB2 reference the UserID in the Shared database?
1.
The technology you are looking for is Merge Replication (http://bit.ly/KUtkPl). Essentially, you would create a common Users table on both databases, create a Merge Replication publisher on one application database, and then create a Merge Replication subscriber on the other application database. You could also set this up to synchronize the schema as well (which also means you only need to create the table once on the publishing database: it will push the table, schema with data, to the subscriber).
But if you are looking for more of a manual approach, I would not denormalize the user data into the employee(s) table; instead, create a supplemental table and a view on each application database. It's kind of like inheritance in OOP: any data common to the Employee table and Users table stays on the shared user table; any columns unique to the Employee go in the supplemental table only, stored on each database. The view merges the supplemental table and the shared table. (http://bit.ly/9KPxt0)
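A hedged T-SQL sketch of that supplemental-table-plus-view design (all names invented; this assumes Shared and DB1 live on the same SQL Server instance, otherwise use a linked server or synonyms):

    -- In DB1: only the employee-specific extras; common fields stay in Shared
    CREATE TABLE dbo.EmployeeSupplement (
        UserID     int          NOT NULL PRIMARY KEY,
        Department nvarchar(50) NULL,
        HireDate   date         NULL
    );
    GO

    -- The view stitches the shared user data and the local supplement together
    CREATE VIEW dbo.EmployeeView
    AS
    SELECT u.UserID, u.Name, u.Phone, u.JobTitle,
           s.Department, s.HireDate
      FROM Shared.dbo.UserDetails AS u   -- cross-database reference
      JOIN dbo.EmployeeSupplement AS s
        ON s.UserID = u.UserID;
    GO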
Even if you do use Replication Services, I would still use this view design with the synchronized table.
You COULD update through the view, but I would not recommend that. It has been done before successfully in production, but there are too many constraints that could blow up (http://bit.ly/LJCJev). Instead update the table directly that holds the data.
Absolutely avoid "triggers that synchronize". Too risky (they can cause an infinite loop on your SQL Server) and too much maintenance overhead.
2.
I would go with Merge Replication; it is just less for you to worry about and maintain once it is configured correctly. But your approach is OK if you want something more manual, or if you are not familiar with Replication Services in SQL Server... just use the view noted above and you'll be set.
Easy way:
You can create a linked server to these databases.
Then create synonyms for easy access to the tables of each database.
Create triggers to update data whenever data is updated in each table (a sketch of the linked server and synonym setup follows).
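A hedged T-SQL sketch of the linked server and synonym part (server and object names invented), keeping in mind the warning above about synchronizing with triggers:

    -- Register the remote server once (a SQL Server instance named SHAREDSRV)
    EXEC sp_addlinkedserver
         @server     = N'SHAREDSRV',
         @srvproduct = N'SQL Server';

    -- A synonym lets local code use a short name instead of the four-part name
    CREATE SYNONYM dbo.UserDetails
        FOR SHAREDSRV.Shared.dbo.UserDetails;

    -- Queries then read naturally
    SELECT UserID, Name, Phone FROM dbo.UserDetails;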

Best way to create a default Database setup via an .aspx page?

We are going to be selling a service that will be hosted by us, and each client we host will have their own database, but there will be one centralized website. I currently have a blank database with the few things that a new client will need. What is the best way to copy this database so I can setup another client? I want to be able to do this from an .aspx page. Thanks in advance!
Update:
By .aspx page, I just meant that I need to be able to kick off the process from an .aspx page.
Update2:
We're running SQL Server 2008.
Update 3: Referencing Cade Roux's answer... Thanks for a great answer, but...
What is the reason for merging all of the databases into one and then distinguishing clients by an identifier in each table? Wouldn't this greatly complicate the architecture of the entire product? I would need to add these ClientID columns to practically every table, and the DAL would need to know which client's data it's looking for. With my current setup, I just swap out the connection string in the DAL depending on which user is accessing the site. That way, once the connection string is set, I never need to worry about finding client-specific data! How do these approaches compare (and should I add this as a separate question)?
You have a few different options:
You can detach your empty database, then when a user signs up, copy that database, mount it under a unique name for them, and map it to their account in your master database (a backup/restore sketch of this template-copy approach follows these options).
You can create a database from scratch using scripts and populate any base data either from an online template database or scripting the base data and map it to their account in your master database.
You should seriously consider going to a multi-tenant architecture where all users are in the same database (with most tables having CustomerID columns to segregate the data) if you are going to have more than a few dozen customers.
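For the template-copy route in the first two options, a hedged T-SQL sketch (database names, paths and logical file names are all invented; a stored procedure wrapping the RESTORE is what the .aspx page would call):

    -- One-time: back up the blank template database
    BACKUP DATABASE TemplateClient
        TO DISK = N'C:\Backups\TemplateClient.bak';

    -- Per new client: restore under a new name, relocating the files
    RESTORE DATABASE Client_Acme
        FROM DISK = N'C:\Backups\TemplateClient.bak'
        WITH MOVE 'TemplateClient'     TO N'C:\Data\Client_Acme.mdf',
             MOVE 'TemplateClient_log' TO N'C:\Data\Client_Acme_log.ldf';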
Regarding your notes about option 3 - it depends on your application. Multi-tenant can be difficult to retrofit. On the other hand, managing and upgrading hundreds of individual customer databases can be difficult in the long haul.
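To make the multi-tenant option concrete, a hedged sketch (hypothetical table): every business table carries the tenant key, and every query in the DAL filters on it:

    CREATE TABLE dbo.Orders (
        OrderID    int      IDENTITY(1,1) NOT NULL PRIMARY KEY,
        CustomerID int      NOT NULL,   -- the tenant key
        OrderDate  datetime NOT NULL,
        Total      money    NOT NULL
    );

    -- @CurrentCustomerID is supplied by the DAL for the logged-in client
    SELECT OrderID, OrderDate, Total
      FROM dbo.Orders
     WHERE CustomerID = @CurrentCustomerID;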
There are previous Stack Overflow questions regarding this:
What are the advantages of using a single database for EACH client?
One database or many?
I think I'll see about re-tagging them with multi-tenant-db or something. Anyhow, the fact that this comes up as a consideration secondary to your question about a particular tactic shows the importance of including details about your overall goals and strategy in every question on Stack Overflow.
Depending on what database you're using, there are several approaches. The simplest is to ask your database software to generate SQL code for creating the database and include that with your software. Another would be to just script out in C#/VB the steps needed to recreate your empty database.
Why the need for .aspx page?
You don't say what DB version you're using, but in SQL Server 2005-2008 you have the ability to "Script Database As" and then "CREATE To", and have it output the SQL to a query window. You could then work with that to create a stored procedure that can be called from your .aspx page.
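A hedged sketch of that approach (procedure name invented): wrap the generated script in a stored procedure the page can call. The database name has to go through dynamic SQL, since CREATE DATABASE cannot take a parameter directly:

    CREATE PROCEDURE dbo.CreateClientDatabase
        @ClientName sysname
    AS
    BEGIN
        DECLARE @sql nvarchar(max) =
            N'CREATE DATABASE ' + QUOTENAME(@ClientName) + N';';
        EXEC (@sql);
        -- ...then run the scripted CREATE TABLE statements in the new database
    END;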
SQL Server has a system database called 'model'. Any database objects (tables, views, stored procedures) that exist in the model are added to any new database created.
You could create your 'client database' schema as model, and any new database would have all the same tables...
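A hedged illustration of that behaviour (table name invented):

    -- Objects added to 'model' appear in every database created afterwards
    USE model;
    CREATE TABLE dbo.ClientSettings (
        SettingName  nvarchar(100) NOT NULL PRIMARY KEY,
        SettingValue nvarchar(400) NULL
    );
    GO

    -- The new database is born with dbo.ClientSettings already in it
    CREATE DATABASE NewClient;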
But, if you need to change your database schema later, your best option is to write change scripts which are part of your code-behind file. Since changes to the 'model' database are not propagated to existing databases, the application needs to detect and upgrade the database schema as necessary.
Disadvantage to this approach: If you want a database which isn't a 'client database' then you would need to create the database, and then delete the 'client database' tables.
