DELETE a table in Riak TS

I am attempting to drop an entire table in Riak TS, but nothing seems to work. I have tried issuing "DROP TABLE" in a standard query, like so (using Python):
from riak import RiakClient
client = RiakClient(host = '127.0.0.1')
client.ts_query('ticks', 'DROP TABLE ticks')
but this gives me an error that DROP is not understood. An alternative would be to delete everything in the table using client.ts_delete('ticks', ["rows"]), but this seems to require me to specify the row keys. Is there a wildcard option for row keys, and if not, how do I get all the row keys given the subquery size limits?

As of Riak TS 1.4.0, DROP TABLE is not supported, and there is no other means of deleting a table.
Range deletes (deleting more than one row at a time) are also not yet supported; however, you can batch individual delete statements.
ALTER, DROP, and range deletes are all features on the Riak TS roadmap for future releases.
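In the meantime, a possible stopgap is to page through the table with bounded SELECTs and issue one delete per row. Below is a minimal sketch with the Python client; it assumes a hypothetical (family, series, time) primary key on ticks, so adjust the key columns, filters, and time window to your actual schema:
from riak import RiakClient

client = RiakClient(host='127.0.0.1')

# Riak TS requires SELECTs to be bounded on the quantized time column,
# so walk the data one time window at a time.
query = ("SELECT family, series, time FROM ticks "
         "WHERE family = 'f' AND series = 's' "
         "AND time >= 1420113600000 AND time < 1420113900000")
result = client.ts_query('ticks', query)

for row in result.rows:
    client.ts_delete('ticks', list(row))  # one delete per primary key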

Related

How to introduce indexing to sqlite query in android?

In my Android application, I use Cursor c = db.rawQuery(query, null); to query data from a local sqlite database, and one of the query strings looks like the following:
SELECT t1.* FROM table t1
WHERE NOT EXISTS (
SELECT 1 FROM table t2
WHERE t2.start_time = t1.start_time AND t2.stop_time > t1.stop_time
)
However, the issue is that the query gets very slow when the database gets huge. I have been trying to introduce indexing to speed up the query, but so far without much success, so it would be great to have some help here; it's also hard to find examples of this for Android applications.
You can create a composite index for the columns start_time and stop_time:
CREATE INDEX idx_name ON table_name(start_time, stop_time);
You can read in The SQLite Query Optimizer Overview:
The ON and USING clauses of an inner join are converted into
additional terms of the WHERE clause prior to WHERE clause analysis
...
and:
If an index is created using a statement like this:
CREATE INDEX idx_ex1 ON ex1(a,b,c,d,e,...,y,z);
Then the index might be used if the initial columns of the index
(columns a, b, and so forth) appear in WHERE clause terms. The initial
columns of the index must be used with the = or IN or IS operators.
The right-most column that is used can employ inequalities.
You may have to uninstall the app from the device so that the db is deleted and then rerun to recreate it, or increase the version number of the db so that you can create the index in the onUpgrade() method.
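If it helps, here is a standalone sketch of the same idea outside Android (plain Python sqlite3, with hypothetical table and column names), including an EXPLAIN QUERY PLAN check to confirm the index is actually picked up:
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (start_time INTEGER, stop_time INTEGER)")
conn.execute("CREATE INDEX idx_events ON events(start_time, stop_time)")

# The correlated subquery can now resolve start_time/stop_time from the index.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT t1.* FROM events t1 "
    "WHERE NOT EXISTS ("
    "  SELECT 1 FROM events t2 "
    "  WHERE t2.start_time = t1.start_time AND t2.stop_time > t1.stop_time"
    ")").fetchall()
for step in plan:
    print(step)  # expect a step USING the idx_events index for the subquery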

truncate not always working: why?

I have defined this mapper method:
#Delete("truncate table MY_TABLE")
public void wipeAllData();
and it usually works... but sometimes it doesn't. Is there any particular reason/known bug for that?
I'm using MyBatis 3.3.0 with Oracle 11g as the DBMS.
EDIT
Since you added the oracle11g tag, my previous answer is no longer valid, at least not as the reason why it would not be working, so I have edited it.
There are a few reasons I'm aware of why TRUNCATE sometimes does not work in Oracle. According to the Oracle docs:
You cannot individually truncate a table that is part of a cluster. You must either truncate the cluster, delete all rows from the table, or drop and re-create the table.
You cannot truncate the parent table of an enabled foreign key constraint. You must disable the constraint before truncating the table. An exception is that you can truncate the table if the integrity constraint is self-referential.
You cannot truncate the parent table of a reference-partitioned table. You must first drop the reference-partitioned child table.
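For instance, a minimal sketch of the workaround for the second point, with hypothetical table, constraint, and connection names (the cx_Oracle driver is used purely for illustration):
import cx_Oracle

conn = cx_Oracle.connect("scott", "tiger", "localhost/orclpdb")
cur = conn.cursor()

# Disable the child's foreign key, truncate the parent, then re-enable.
# Note that DDL statements commit implicitly in Oracle.
cur.execute("ALTER TABLE child_table DISABLE CONSTRAINT fk_child_parent")
cur.execute("TRUNCATE TABLE my_table")
cur.execute("ALTER TABLE child_table ENABLE CONSTRAINT fk_child_parent")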
But you should be aware that using a TRUNCATE command is not ideal in an application scope; it should be an operation executed on the database only. The reason lies in another indication in the docs:
If table is not empty, then the database marks UNUSABLE all nonpartitioned indexes and all partitions of global partitioned indexes on the table. However, when the table is truncated, the index is also truncated, and a new high water mark is calculated for the index segment. This operation is equivalent to creating a new segment for the index. Therefore, at the end of the truncate operation, the indexes are once again USABLE.
So it can be a painfully long operation depending on indexes and the size of the table.
Also, for tables that have constraints, the truncate operation will not drop and re-create the table; it will delete the rows one by one, provided your constraints use ON DELETE CASCADE. If they do not, an error will be thrown. This is still true for an Oracle database.
Another thing you should be aware of is:
Removing rows with the TRUNCATE TABLE statement can be faster than removing all rows with the DELETE statement, especially if the table has numerous triggers, indexes, and other dependencies.
So if you have any triggers on that table, they will not fire during the truncate.
The original doc for the TRUNCATE command is here:
TRUNCATE TABLE

Oracle 11g data pump 10 column limit

I am using Oracle Data Pump to do a schema "rename". There is a primary key column on all (2000) tables. For example, I need to run this on all tables:
update mytable set mykey='foo2' where mykey='foo';
I would use the remap_data option of expdp to do this. The problem is that there are some tables where I would need to do the rename on 10+ columns. Has anyone had a problem like this and found a way to handle it?
Previously, I had tried using "Create Table As." The problem would be having to recreate the schema structure for all of the tables (views/triggers/grants/indexes/constraints). I am aware of the DBMS_METADATA.GET_DDL package. Offhand, doing a diff of the database schema before and after and recreating the diffs seems ugly.
I have also tried doing inserts on the table without any constraints or indexes, so I would only have to re-enable constraints and recreate the indexes, but I would like to try something faster.
I am using Oracle 11.2.0.3.0.
If I understand correctly, your real problem (or goal) is to 'RENAME' a schema.
You chose to export / import (using a different name to achieve the rename) using Oracle Data Pump.
Then DROP the old schema (if you feel it is redundant).
If this is correct, here are the steps you can take to achieve your goal. I did it successfully on my DEV environment; all objects (including PKs and FKs) were imported successfully.
-- Export RMCORE_QA
expdp DIRECTORY=DMPDIR DUMPFILE=RMCORE_QA.dmp SCHEMAS='RMCORE_QA' LOGFILE=RMCORE_QA_EXP_DP.lst
-- Import using RMCORE_QA3
impdp DIRECTORY=DMPDIR DUMPFILE=RMCORE_QA.dmp REMAP_SCHEMA='RMCORE_QA:RMCORE_QA3' SCHEMAS='RMCORE_QA' LOGFILE=RMCORE_QA_IMP_DP.lst TRANSFORM=OID:N
You can also compare the objects between the two schemas with:
SELECT OBJECT_NAME, STATUS, object_type FROM dba_objects WHERE owner LIKE 'RMCORE_QA'
MINUS
select OBJECT_NAME, STATUS, object_type from dba_objects where owner like 'RMCORE_QA3';
HTH. Let me know if I did not get your problem...

Indexing SQLite database: Empty Index?

I have a .sqlite db which contains only one table. That table contains three columns and I am interested in indexing one column ONLY.
The problem is that when I perform the indexing, I get an empty index table!
I am using the SQLite Manager add-on for Firefox. This is the syntax that appears before I confirm the indexing:
CREATE INDEX "main"."tableIndex" ON "table" ("column1" ASC)
I don't know what the problem is here. I tried this tool a long time ago with another database and it worked fine.
Any suggestions?
You cannot "see" the contents of a database index. No table or table-like structure is created that corresponds to the index. So there is nothing to look at that could be empty.
If the CREATE INDEX command ran without error, you can be confident that the index was created and will continue to be maintained by SQLite as you add, remove, and update data.
As per the comments below, @iturki is actually trying to index for full-text search. SQLite supports several extensions for full-text search, but they are not implemented through the standard CREATE INDEX command. See this reference.
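For illustration, here is a minimal full-text search sketch (hypothetical names, shown with Python's sqlite3 module; it requires an SQLite build with the FTS5 extension compiled in):
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(body)")
conn.execute("INSERT INTO docs (body) VALUES ('full text search in SQLite')")
print(conn.execute("SELECT body FROM docs WHERE docs MATCH 'search'").fetchall())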
Try the VACUUM command. It will completely rebuild the SQLite database file, rebuilding all indices and resetting all ROWIDs, etc.
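For example, from Python (the file name is hypothetical):
import sqlite3

sqlite3.connect("mydb.sqlite").execute("VACUUM")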

Sqlite3: Disabling primary key index while inserting?

I have an SQLite3 database with a table and a primary key consisting of two integers, and I'm trying to insert lots of data into it (i.e. around 1 GB or so).
The issue I'm having is that creating the primary key also implicitly creates an index, which in my case bogs inserts down to a crawl after a few commits (and that would be because the database file is on NFS... sigh).
So, I'd like to somehow temporarily disable that index. My best plan so far involved dropping the primary key's automatic index; however, it seems that SQLite doesn't like that and throws an error if I attempt it.
My second-best plan would involve the application making transparent copies of the database on the network drive, making the modifications, and then merging them back. Note that, as opposed to most SQLite/NFS questions, I don't need access concurrency.
What would be a correct way to do something like that?
UPDATE:
I forgot to specify the flags I'm already using:
PRAGMA synchronous = OFF
PRAGMA journal_mode = OFF
PRAGMA locking_mode = EXCLUSIVE
PRAGMA temp_store = MEMORY
UPDATE 2:
I'm in fact inserting items in batches; however, each successive batch is slower to commit than the previous one (I'm assuming this has to do with the size of the index). I tried doing batches of between 10k and 50k tuples, each one being two integers and a float.
You can't remove the implicit index, since it's the only address of a row.
Merge your two integer keys into a single long key = (key1 << 32) + key2, and make that the INTEGER PRIMARY KEY in your schema (in that case you will have only one index).
Set the page size for the new DB to at least 4096.
Remove ANY additional index except the primary one.
Fill in the data in SORTED order, so that the primary key is growing.
Reuse prepared statements; don't re-create them from strings each time.
Set the page cache size to as much memory as you have left (remember that the cache size is in number of pages, not number of bytes).
Commit every 50000 items.
If you have additional indexes, create them only AFTER ALL the data is in the table.
If you are able to merge the keys (I think you're using 32-bit keys, while SQLite uses 64-bit, so it's possible) and fill the data in sorted order, I bet you will fill in your first GB with the same performance as the second, and both will be fast enough. A sketch combining these points follows below.
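Here is a minimal sketch of those suggestions combined (Python's sqlite3, with illustrative names, data, and batch size): merge the two 32-bit keys into one 64-bit INTEGER PRIMARY KEY, insert in sorted order, and commit in batches:
import sqlite3

conn = sqlite3.connect("bulk.db")
conn.execute("PRAGMA page_size = 4096")    # only takes effect on a fresh DB file
conn.execute("PRAGMA cache_size = 100000") # measured in pages, not bytes
conn.execute("CREATE TABLE IF NOT EXISTS t (key INTEGER PRIMARY KEY, val REAL)")

# Hypothetical data source; replace with your real (key1, key2, value) stream.
data = [(k1, k2, float(k1 + k2)) for k1 in range(100) for k2 in range(100)]

# Merge the two 32-bit keys and sort so the b-tree only ever appends.
rows = sorted(((k1 << 32) + k2, v) for (k1, k2, v) in data)

BATCH = 50000
for i in range(0, len(rows), BATCH):
    with conn:  # one transaction per batch; executemany reuses the statement
        conn.executemany("INSERT INTO t VALUES (?, ?)", rows[i:i + BATCH])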
Are you doing the INSERT of each new row as an individual transaction?
If you use BEGIN TRANSACTION and INSERT rows in batches, then I think the index only gets rebuilt at the end of each transaction.
See faster-bulk-inserts-in-sqlite3.
