Global indexes when renaming partitions - Oracle 11g

I have an existing table with some indexes on it. I am going to partition that table using DBMS_REDEFINITION, and I also have to rename the partitions every 24 hours.
Will renaming the partitions cause any problems with the global indexes?
Also, is it mandatory to have a primary key to perform interval partitioning?
I am using Oracle 11g.

Renaming partitions doesn't affect index status, global or otherwise. They stay valid if they were valid before the rename.
You don't need a primary key for interval partitioning. The constraints are the same as for range partitioning, with some restrictions. See Interval Partitioning in the concepts guide:
You can only specify one partitioning key column, and it must be of NUMBER or DATE type.
Interval partitioning is not supported for index-organized tables.
You cannot create a domain index on an interval-partitioned table.
Note that the names for the partitions created automatically on an interval-partitioned table are system-generated. You can rename them after they've been created, but you cannot, in 11gR2, have them created with a name of your choice.
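A minimal sketch of the rename, assuming a hypothetical SALES table; the system-generated partition name (here SYS_P123) must first be looked up in the data dictionary:

```sql
-- Find the system-generated partition names (SALES is a placeholder).
SELECT partition_name, high_value
  FROM user_tab_partitions
 WHERE table_name = 'SALES';

-- Rename one of them; this is a metadata-only change.
ALTER TABLE sales RENAME PARTITION SYS_P123 TO sales_2012_01;

-- Global indexes are unaffected; verify their status stays VALID:
SELECT index_name, status FROM user_indexes WHERE table_name = 'SALES';
```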


Can I avoid a Scan operation when trying to retrieve all items in a specific date range in DynamoDB?

I have a simple table with one unique partition key, id, and a bunch of other attributes, including a date attribute.
I now want to get all records in a specific time range; however, as far as I understand, the only way to do this is with a scan.
I tried using a GSI on date, but then I cannot use BETWEEN in the KeyConditionExpression.
Is there any other option?
Q: Are you providing one-and-only-one Partition Key value?
A: If YES, then you can query. If NO, it's a scan.
You are currently in scan territory, because you need to search over multiple ids.
To get to the promised land of queries, consider DynamoDB's design pattern for time series data. One implementation would be to add a GSI with a compound Primary Key representing the date. Split the date between a PK and SK. Your PK could be YYYY-MM, for instance, depending on your query patterns. The SK would get the leftover bits of the date (e.g. DD). Covering a date range would mean executing one or several queries on the GSI.
This pattern has many variants. If scale is a challenge and you are mostly querying a known subset of recent days, for instance, you could consider replicating records to a separate reporting table configured with the above keys and a TTL field to expire old records. As always, the set of "good" DynamoDB solutions is determined by your query patterns and scale.
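A small sketch of the key-splitting idea, assuming the YYYY-MM / DD split described above (the granularity is an example; pick one that matches your query patterns):

```python
from datetime import date, timedelta

def split_date_key(d):
    """Split a date into a GSI partition key (YYYY-MM) and sort key (DD)."""
    return d.strftime("%Y-%m"), d.strftime("%d")

def month_partitions(start, end):
    """Yield the GSI partition keys (months) a date range spans.

    A range query becomes one Query per month: PK = YYYY-MM, with the
    sort key bounded by BETWEEN at the edges of the range.
    """
    current = date(start.year, start.month, 1)
    while current <= end:
        yield current.strftime("%Y-%m")
        # advance to the first day of the next month
        current = (current.replace(day=28) + timedelta(days=4)).replace(day=1)

# A range spanning three months needs three Query calls on the GSI.
pk, sk = split_date_key(date(2023, 1, 15))  # ("2023-01", "15")
months = list(month_partitions(date(2023, 1, 15), date(2023, 3, 2)))
# months == ["2023-01", "2023-02", "2023-03"]
```

Each yielded month becomes one Query against the GSI, with a BETWEEN condition on the day sort key for the first and last months of the range.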

What is the difference between an AWS DynamoDB local vs. global secondary index?

From the DynamoDB documentation:
Global secondary index — an index with a partition key and a sort key that can be different from those on the base table. A global secondary index is considered "global" because queries on the index can span all of the data in the base table, across all partitions.
Local secondary index — an index that has the same partition key as the base table, but a different sort key. A local secondary index is "local" in the sense that every partition of a local secondary index is scoped to a base table partition that has the same partition key value.
This just isn't making sense to me, and no amount of searching has been able to explain it aptly.
Could someone help me understand this?
When you insert data into DynamoDB, it internally partitions the data and stores it on different storage nodes. This is based on the partition key.
Let's say you want to query an item based on a non-key attribute (neither the partition nor the sort key). You need to use a Scan, which is expensive, since it checks every item in the table.
This is where GSIs and LSIs come in. Let's take the example of a Student table with StudentId as the sort key and SchoolId as the partition key.
An LSI (for example, one with Grade as its sort key) is useful if your application has queries like getting all the grade 5 students of a given school.
If you need to query all grade 5 students across all schools (across all school partitions), you will need a GSI.
Local secondary index (LSI)
can only be created when creating the table
shares capacity units with the table
the index's partition key has to be the same as the table's partition key
a table can have up to 5 LSIs
Global secondary index (GSI)
can be created at any time, but takes time to set up (the original table's items are copied into the index, which consumes the table's read capacity units)
has its own separate set of capacity units
any top-level scalar attribute can be the partition key
a table can have up to 20 GSIs by default (the limit was formerly 5)
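The contrast between the two index types shows up directly in the CreateTable request. Below is a sketch of such a request for the Student example; with boto3 you would pass this dict to dynamodb.create_table(**table_spec). The index and attribute names are illustrative:

```python
# Sketch of a DynamoDB CreateTable request (names are illustrative).
table_spec = {
    "TableName": "Students",
    "AttributeDefinitions": [
        {"AttributeName": "SchoolId", "AttributeType": "S"},
        {"AttributeName": "StudentId", "AttributeType": "S"},
        {"AttributeName": "Grade", "AttributeType": "N"},
    ],
    "KeySchema": [  # base table: SchoolId partition key, StudentId sort key
        {"AttributeName": "SchoolId", "KeyType": "HASH"},
        {"AttributeName": "StudentId", "KeyType": "RANGE"},
    ],
    # LSI: same partition key as the table, different sort key.
    # It must be declared here, at table-creation time.
    "LocalSecondaryIndexes": [{
        "IndexName": "SchoolGradeIndex",
        "KeySchema": [
            {"AttributeName": "SchoolId", "KeyType": "HASH"},
            {"AttributeName": "Grade", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    # GSI: any top-level scalar attribute can be its partition key,
    # and unlike an LSI it can also be added to an existing table.
    "GlobalSecondaryIndexes": [{
        "IndexName": "GradeIndex",
        "KeySchema": [{"AttributeName": "Grade", "KeyType": "HASH"}],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
    }],
    "BillingMode": "PAY_PER_REQUEST",
}

# The LSI reuses the table's partition key; the GSI does not.
lsi_pk = table_spec["LocalSecondaryIndexes"][0]["KeySchema"][0]["AttributeName"]
gsi_pk = table_spec["GlobalSecondaryIndexes"][0]["KeySchema"][0]["AttributeName"]
```

Querying GradeIndex with Grade = 5 then returns grade 5 students from every school, while SchoolGradeIndex can only answer that question for one SchoolId at a time.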

Autonomous Partitioning of Table in Oracle?

Is it possible to do autonomous partitioning of a table in Oracle?
I want a new partition to be created automatically when the range or limit assigned to the current one is exceeded. Suppose I partition a table by year; as soon as the new year begins, the new partition should be created automatically, meaning I don't have to create it manually.
It depends on the Oracle version you're using. You've tagged this for both 10g and 11g and the answer is different between the two.
In 11g, you can use interval partitioning to have Oracle automatically create new partitions when new data is inserted. Prior to that, you'd need to explicitly create the partitions you need. You could always, of course, allow new rows to be inserted into one partition with a very large MAXVALUE and then split the partition later on but I assume that's not exactly what you're looking for.
If you're partitioning on a date, you could also create a scheduled job that creates partitions just before they are needed. If you really wanted to, you could also create hundreds of partitions in advance.
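A minimal sketch of the 11g interval-partitioning approach; the table and column names are placeholders:

```sql
-- Yearly interval partitioning (11g+): Oracle automatically creates
-- a new partition the first time a row for a new year is inserted.
CREATE TABLE orders (
  order_id    NUMBER,
  order_date  DATE
)
PARTITION BY RANGE (order_date)
INTERVAL (NUMTOYMINTERVAL(1, 'YEAR'))
(
  PARTITION p_before_2012 VALUES LESS THAN (DATE '2012-01-01')
);
```

Only the initial partition is declared; everything above its upper bound is handled by the interval clause.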

Creating a Roster Database in SQLite

I am dealing with a roster with 15,000 unique employees. Depending on their 'Designation' they either impact performance or do not. The issue is, these employees could change their designation any day. The roster is as simple as this:
AgentID
AgentDesignation
Date
I feel like I would be violating some Normalization rules if I just have duplicate values (the agent has the same designation from the previous day, for example). Would I really want to create a new row for each date even if the Designation is the same? I want to always be able to get the agent's correct designation on a particular date.
All calculations are done in Excel, probably with VLOOKUP. Does anyone have some tips?
The table structure you propose would not be a violation of normalization -- it contains a PRIMARY KEY (AgentID, Date) and a single attribute that is dependent on all elements of the key (AgentDesignation). Furthermore, it's easy (using the PRIMARY KEY constraint) to ensure that there is one-and-only-one designation per agent per day. The fact that many PRIMARY KEY values will yield the same dependent value does not mean the database is not correctly normalized.
An alternative approach using date ranges would likely result in fewer rows but guaranteeing integrity would be harder and searches for a particular value would be costlier.
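A minimal sketch of the proposed schema in SQLite (via Python's sqlite3), with the composite primary key doing the integrity work described above; the sample data is made up:

```python
import sqlite3

# One row per agent per day; the composite primary key guarantees
# one-and-only-one designation per agent per date.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE Roster (
        AgentID          INTEGER NOT NULL,
        Date             TEXT    NOT NULL,  -- ISO-8601 'YYYY-MM-DD'
        AgentDesignation TEXT    NOT NULL,
        PRIMARY KEY (AgentID, Date)
    )
""")
con.executemany(
    "INSERT INTO Roster VALUES (?, ?, ?)",
    [(1, "2023-01-01", "Senior"),
     (1, "2023-01-02", "Senior"),  # repeating the designation is fine
     (1, "2023-01-03", "Lead")],
)

# Look up an agent's designation on a particular date.
row = con.execute(
    "SELECT AgentDesignation FROM Roster WHERE AgentID = ? AND Date = ?",
    (1, "2023-01-03"),
).fetchone()
# row[0] == 'Lead'

# The primary key rejects a second designation for the same agent/day.
try:
    con.execute("INSERT INTO Roster VALUES (1, '2023-01-03', 'Senior')")
except sqlite3.IntegrityError:
    pass  # constraint violation, as intended
```

Storing dates as ISO-8601 text keeps them sortable and makes range queries (`WHERE Date BETWEEN ? AND ?`) straightforward.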

SQLITE and autoindexing

I recently began exploring indexing in sqlite. I'm able to successfully create an index with my desired columns.
After doing this, I took a look at the database to see whether the index was created successfully, only to find that SQLite had already auto-created indexes for each of my tables:
"sqlite_autoindex_tablename_1"
These auto-generated indices each use two columns of the table: the two columns that make up my composite primary key. Is this just a normal thing for SQLite to do when using composite primary keys?
Since I'll be doing most of my queries based on these two columns, does it make sense to manually create indices that are the exact same thing?
New to indices so really appreciate any support/feedback/tips, etc -- thank you!
SQLite requires an index to enforce the PRIMARY KEY constraint -- without an index, enforcing the constraint would slow dramatically as the table grows in size. Constraints and indexes are not identical, but I don't know of any relational database that does not automatically create an index to enforce primary keys. So yes, this is normal behavior for any relational database.
If the purpose of creating an index is to optimize searches where you have an indexable search term that involves the first column in the index then there's no reason to create an additional index on the column(s) -- SQLite will use the automatically created one.
If your searches will involve the second column in the index without including an indexable term for the first column, you will need to create your own index. Neither SQLite nor any other relational database I know of can use a composite index to optimize filtering when the head columns of the index are not specified in the search.
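A small demonstration using Python's sqlite3 module (the table and column names are illustrative): the automatic index appears as soon as a composite primary key is declared, and an explicit index is only needed for searches led by the second column:

```python
import sqlite3

# A table with a composite primary key gets an automatic index.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE readings (
        sensor_id  TEXT,
        taken_at   TEXT,
        value      REAL,
        PRIMARY KEY (sensor_id, taken_at)
    )
""")

# sqlite_autoindex_readings_1 already exists; no need to duplicate it
# with an explicit index on (sensor_id, taken_at).
names = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index'")]
# names == ['sqlite_autoindex_readings_1']

# A search on taken_at alone cannot use that composite index, so an
# extra single-column index is justified there:
con.execute("CREATE INDEX idx_readings_taken_at ON readings(taken_at)")
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM readings WHERE taken_at = '2023-01-01'"
).fetchall()
# the plan reports a SEARCH using idx_readings_taken_at
```

EXPLAIN QUERY PLAN is a convenient way to confirm which index (if any) a given query actually uses before deciding whether to add one.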