How can I cancel DynamoDB Restore - amazon-dynamodb

I started a restore of a DynamoDB table from a backup. The table has millions of records and the restore is taking a long time. I would like to cancel the restore, but I cannot find an option for it in the AWS Console. I tried to delete the table using the AWS CLI and was greeted with a "resource is being used" error. Is there a way to cancel a restore? Thank you!
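For context, this is roughly how the situation looks from the SDK side, as a minimal hedged sketch (boto3 with credentials/region assumed configured, and a hypothetical table name MyRestoredTable): while the restore is running, DescribeTable reports a non-ACTIVE TableStatus and DeleteTable is rejected.

# Sketch only: hypothetical table name, boto3 credentials/region assumed configured.
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

# While the restore is in progress the table is not yet ACTIVE.
status = dynamodb.describe_table(TableName="MyRestoredTable")["Table"]["TableStatus"]
print("TableStatus:", status)

try:
    # Deleting a table that is still being created/restored is rejected.
    dynamodb.delete_table(TableName="MyRestoredTable")
except ClientError as err:
    # Typically a ResourceInUseException while the restore is still running.
    print(err.response["Error"]["Code"], err.response["Error"]["Message"])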

Related

sqlite .backup command fails when another process writes to the database (Error: database is locked)

The goal is to complete an online backup while other processes write to the database.
I connect to the sqlite database via the command line, and run
.backup mydatabase.db
During the backup, another process writes to the database and I immediately receive the message
Error: database is locked
and the backup disappears (reverts to a size of 0).
During the backup process there is a journal file, although it never gets very large. I checked that the journal_size_limit pragma is set to -1, which I believe means it's unlimited. My understanding is that writes to the database should go to the journal during the backup process, but maybe I'm wrong. I'm new to sqlite and databases in general.
Am I going about this the wrong way?
If the sqlite3 backup writes "Error: database is locked", then you should use
sqlite3 source.db ".timeout 10000" ".backup backup.db"
See also Increase the lock timeout with sqlite, and what is the default values? for the default timeouts (spoiler: it's zero). With the backup sorted out, you can also switch SQLite to WAL mode, so readers and the writer no longer block each other.
//writing this as an answer so it would be easier to google this, thanks guys!
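If you drive the backup from code rather than the sqlite3 shell, the same two ideas (a busy timeout, plus optionally WAL mode) look roughly like this; a sketch using Python's sqlite3 module, with the same source.db / backup.db names as above:

# Sketch: online backup with a busy timeout, using Python's sqlite3 module.
import sqlite3

# timeout=10.0 is the equivalent of ".timeout 10000" (10 seconds).
src = sqlite3.connect("source.db", timeout=10.0)

# Optional: switch the source database to WAL mode so readers (and the backup)
# no longer block the writer. There is still only one writer at a time.
src.execute("PRAGMA journal_mode=WAL")

dst = sqlite3.connect("backup.db")
with dst:
    # Online backup API: copies pages while other connections keep working.
    src.backup(dst)
dst.close()
src.close()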

Identify deleted records in PDMlink by querying tables

PDMLink records are hard deleted from the Windchill backend tables.
Users have permission to delete the objects they create, so I need a way to identify which records were deleted.
Is there any table in the PDMLink database that gives this information?
Regards
Maha
There is a table called AUDITRECORD in the database which holds the details of all events, including Delete events. However, that information is available only if you have configured Audit Event Recording.
If you have configured it, you can get the details with the query below.
select * from auditrecord where eventlabel='Delete';
Refer to the Windchill Help Center for the steps to configure audit event recording.
If you have not set up that configuration and still want to find the deleted objects, you can fall back on the Apache web server logs: pull the part object identifiers out of the logs and run a negation query against a backup of the database (or the current one) to see which identifiers no longer remain. It is a lot more work, but not impossible.
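A heavily hedged sketch of that negation idea follows. Everything in it is a placeholder (the log path, the regex, the oracledb connection, and the WTPART-style table/column names), since the exact identifiers depend on your Windchill installation:

# Sketch only: pull object identifiers out of Apache access logs and report
# the ones that no longer exist in the database. All names are placeholders.
import re
import oracledb  # assumption: python-oracledb against the Windchill schema

# 1. Collect candidate object identifiers seen in the Apache logs.
seen_ids = set()
pattern = re.compile(r"oid=([\w.%:-]+)")  # hypothetical URL parameter
with open("/var/log/apache2/access.log") as log:
    for line in log:
        match = pattern.search(line)
        if match:
            seen_ids.add(match.group(1))

# 2. Ask the database which of those identifiers still exist.
conn = oracledb.connect(user="wcadmin", password="***", dsn="dbhost/windchill")
cursor = conn.cursor()
still_present = set()
for oid in seen_ids:
    cursor.execute("SELECT 1 FROM WTPART WHERE IDA2A2 = :oid", oid=oid)  # hypothetical table/column
    if cursor.fetchone():
        still_present.add(oid)

# 3. The negation: seen in the logs but gone from the database.
print(sorted(seen_ids - still_present))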

aws codedeploy - running sql scripts

I run SQL scripts that insert data into the DB as part of my CodeDeploy lifecycle event on an Auto Scaling group. The Auto Scaling group has 2 instances; the SQL scripts run fine on the 1st instance and the deployment succeeds there.
On the 2nd instance, since the data has already been inserted into the DB, the SQL script fails with the error message below:
[stderr]ERROR 1062 (23000) at line 32: Duplicate entry
Any workaround or solution will be of great help.
Thanks
This suggests that the DB already contains the entry you are trying to insert, hence the error. You may want to check first whether the DB has that entry or not.
To identify which part of the script is giving you this error, try running subsets of the script to isolate the actual cause.
This is typically the issue when some record(s) already exist and the DB / table / schema does not allow duplicate entries.
Assuming your deployment group uses the OneAtATime deployment configuration, your lifecycle hook should check for the entry before running the insert.
That way, only the first deployed instance applies the change; the other instances will detect the existing entry and skip the insert phase, as sketched below.
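As a sketch, the hook can be made idempotent in two ways: rewrite the statement itself (for MySQL, INSERT IGNORE or INSERT ... ON DUPLICATE KEY UPDATE), or test before inserting, as below. The app_settings table, the row values, and the pymysql connection are all hypothetical:

# Sketch of a check-before-insert step for the lifecycle hook. Assumes MySQL
# (error 1062 above is MySQL) and a hypothetical app_settings table.
import pymysql

conn = pymysql.connect(host="db-host", user="deploy", password="***", database="appdb")
try:
    with conn.cursor() as cur:
        # Only the first instance in the deployment finds no row and inserts.
        cur.execute("SELECT 1 FROM app_settings WHERE name = %s", ("seed_v1",))
        if cur.fetchone() is None:
            cur.execute(
                "INSERT INTO app_settings (name, value) VALUES (%s, %s)",
                ("seed_v1", "enabled"),
            )
            conn.commit()
        else:
            print("Seed data already present; skipping insert.")
finally:
    conn.close()

With OneAtATime the instances deploy serially, so the check-then-insert is enough; for an AllAtOnce deployment you would want the single-statement idempotent form instead.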

QSqlError("5", "Unable to fetch row", "database is locked")

I am getting this error "QSqlError("5", "Unable to fetch row", "database is locked")"
I have done my research and I think the problem arises from the fact that I am executing an INSERT query while the SELECT query is still active, which locks the database. People must run into this often, since it is common to write to a database based on the output of a SELECT query, so I wanted to ask: what is the best way to solve this? Would I be able to fetch the query results (using query.next()) after closing it with query.finish() to unlock the database? Or should I store the results in a temporary container, close the query, and then iterate over the temporary container?
Thank you very much in advance
Do you have a database viewer open when you run this? I had a similar issue that only occurred when I had DB Browser for SQLite running. Make sure you don't have any other software with your database file open. I don't always hit this issue when using DB Browser for SQLite, but when I do, closing the program fixes it.
In addition, I tend to run query.finish() after each query completes, to ensure there is no interaction.
I hope this helps you out!
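A minimal sketch of the "store in a temporary container, finish the query, then write" approach from the question, using PySide6 against an SQLite file; the table and column names are made up:

# Sketch: read the SELECT results into a list, release the query with finish(),
# then run the INSERTs. Table/column names are hypothetical.
from PySide6.QtSql import QSqlDatabase, QSqlQuery

db = QSqlDatabase.addDatabase("QSQLITE")
db.setDatabaseName("app.db")
if not db.open():
    raise RuntimeError(db.lastError().text())

query = QSqlQuery(db)
query.exec("SELECT id FROM items WHERE processed = 0")

# Copy rows into a temporary container so the SELECT no longer holds its lock.
pending_ids = []
while query.next():
    pending_ids.append(query.value(0))
query.finish()  # explicitly releases the result set

# Now it is safe to write.
for item_id in pending_ids:
    insert = QSqlQuery(db)
    insert.prepare("INSERT INTO audit_log (item_id) VALUES (?)")
    insert.addBindValue(item_id)
    if not insert.exec():
        print(insert.lastError().text())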

Can I solve this using oracle db listener?

I'll try to be clear; I am out of ideas on this problem, even though it sounds like a classic one.
My application runs on the WebLogic 10.3.3 application server, and for the database I use Oracle Database 11g. There is a table in the DB, let's say "user", with a column, let's say "columnA". This table is updated by some module of the application.
What I want is: when the value of the column is "abc", I have to show an alert on a console (identified by IP). The IP can be retrieved from the DB, as it is configured there; it belongs to a Linux system other than the machine where the Oracle database is installed. The table is updated continuously by the application module. Please tell me where I should start and what I should read; I cannot work out what the approach should be. Any help is much appreciated.
Can you provide me a beginner's link about the Oracle DB listener?
You probably want to look at setting up a Trigger in the database
http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28370/triggers.htm
An alternative to a trigger would be to log update queries against the table (to a log file) and have a process monitor the log, sending out alerts when something happens.
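As a sketch of that monitoring alternative (not the trigger): a small process that polls the table and pushes an alert to the console IP configured in the DB. The table and column names come from the question; the driver, the alert_config table, port 9999, the polling interval, and the alert format are all assumptions:

# Sketch of a polling monitor: watch "user".columnA for the value 'abc' and
# send a plain-text alert to the console IP stored in the database.
import socket
import time

import oracledb  # assumption: python-oracledb; any Oracle driver would do

conn = oracledb.connect(user="appuser", password="***", dsn="dbhost/orcl")
cursor = conn.cursor()

# The console IP is configured in the DB per the question; table name is made up.
cursor.execute("SELECT console_ip FROM alert_config WHERE ROWNUM = 1")
console_ip = cursor.fetchone()[0]

while True:
    cursor.execute('SELECT COUNT(*) FROM "user" WHERE columnA = :v', v="abc")
    (hits,) = cursor.fetchone()
    if hits:
        # Deliver the alert however the console expects it; here a raw TCP line.
        with socket.create_connection((console_ip, 9999), timeout=5) as sock:
            sock.sendall(b"columnA reached 'abc'\n")
    time.sleep(30)  # polling interval, adjust as needed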
