There are 66 instances in my OpenStack Havana deployment. I think these instances are zombies: the dashboard displays a "Terminate Success" message when I click Terminate Instance, but the instance still exists on the dashboard and its status remains Running. I have already killed all qemu-kvm processes on the server.
In MySQL, the nova database still contains a lot of data for these instances. I don't know where to start deleting it. Could someone give me some advice? Thanks a lot.
I did this in the Icehouse release of OpenStack, maybe you can map this to the Havana release:
log into the database (you should see the mysql> prompt in your console)
select the nova database:
use nova;
mark the rows in table instances as deleted (that's a "soft-delete"):
update instances set deleted_at = updated_at, deleted = id, power_state = 0, vm_state = "deleted", terminated_at = updated_at, root_device_name = NULL, task_state = NULL where deleted = 0;
Note: that statement 'deletes' ALL your instances! Use show columns from instances; if you want to choose other column(s) for your WHERE clause.
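For example, to soft-delete just one instance instead of all of them, the uuid column can be used in the WHERE clause (a sketch; '<instance-uuid>' is a placeholder and the column list matches the statement above):
update instances set deleted_at = updated_at, deleted = id, power_state = 0, vm_state = "deleted", terminated_at = updated_at, root_device_name = NULL, task_state = NULL where deleted = 0 and uuid = '<instance-uuid>';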
update the cache in table instance_info_caches appropriately:
update instance_info_caches set deleted_at = updated_at, deleted = id where deleted = 0;
update the fixed_ips table:
update fixed_ips set instance_id = NULL, allocated = 0, virtual_interface_id = NULL where deleted = 0;
Note: if the column deleted contains a value not equal to zero, that seems to be how a row is marked as deleted. When I delete an instance via the API, OpenStack seems to use the row's id as the value for deleted.
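To double-check which rows are still considered live after these updates, a quick query such as this can help (standard nova columns assumed):
select id, uuid, vm_state, deleted from instances where deleted = 0;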
Source: http://www.databaseskill.com/4605135/
You can manually reset the VM state and delete the instance using the following commands:
$ nova reset-state c6bbbf26-b40a-47e7-8d5c-eb17bf65c485
$ nova delete c6bbbf26-b40a-47e7-8d5c-eb17bf65c485
http://docs.openstack.org/admin-guide-cloud/content//reset-state.html
How do I revert the soft-delete?
I have used the command below:
update instances set deleted_at = updated_at, deleted = id, power_state = 0, vm_state = "deleted", terminated_at = updated_at, root_device_name = NULL, task_state = NULL where deleted = 0;
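Reverting is essentially the inverse operation. A minimal sketch for one instance, assuming it was previously active and that the matching rows in instance_info_caches and fixed_ips get the same treatment ('<instance-uuid>' is a placeholder):
update instances set deleted = 0, deleted_at = NULL, terminated_at = NULL, vm_state = "active" where uuid = '<instance-uuid>';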
In Cosmos DB, I need to insert a large amount of data into a new container.
create_table_sql = f"""
CREATE TABLE IF NOT EXISTS cosmosCatalog.`{cosmosDatabaseName}`.{cosmosContainerName}
USING cosmos.oltp
OPTIONS(spark.cosmos.database = '{cosmosDatabaseName}')
TBLPROPERTIES(partitionKeyPath = '/id', manualThroughput = '10000', indexingPolicy = 'AllProperties', defaultTtlInSeconds = '-1');
"""
spark.sql(create_table_sql)
# Read data with spark
data = (
spark.read.format("csv")
.options(header="True", inferSchema="True", delimiter=";")
.load(spark_file_path)
)
cfg = {
"spark.cosmos.accountEndpoint": "https://XXXXXXXXXX.documents.azure.com:443/",
"spark.cosmos.accountKey": "XXXXXXXXXXXXXXXXXXXXXX",
"spark.cosmos.database": cosmosDatabaseName,
"spark.cosmos.container": cosmosContainerName,
}
data.write.format("cosmos.oltp").options(**cfg).mode("APPEND").save()
Then, after this insert, I would like to change the manual throughput of this container.
alter_table = f"""
ALTER TABLE cosmosCatalog.`{cosmosDatabaseName}`.{cosmosContainerName}
SET TBLPROPERTIES( manualThroughput = '400');
"""
spark.sql(alter_table)
Py4JJavaError: An error occurred while calling o342.sql.
: java.lang.UnsupportedOperationException
I can find no documentation online on how to change TBLPROPERTIES for a Cosmos DB table in Spark SQL. I know I can edit it in the Azure Portal and with the Azure CLI, but I would like to keep it in Spark SQL.
This is not supported by the Spark connector for the NoSQL API; you might need to track the issue here. For now you will need to do it through a CLI command, the portal, or an SDK (Java).
FYI: a Cosmos NoSQL API container is not the same as a table in SQL, so ALTER commands will not work.
There are two tables. The 'master' (tblFile) holds record details of files that have been processed by some Java code; the PK is the file name. A column of interest in this table is the 'status' column (VALID or INVALID).
The subordinate table (tblAnomaly) holds many records containing the anomalies from processing each file. This table has an FK on the file name from tblFile, and along with other columns of relevant data there is a boolean-style column which acts as an acceptance flag for the anomaly: NULL means accepted, '#' means not accepted.
The user manually works their way through the list of anomalies presented in a Swing ListPane and checks off the anomalies as they address the issues in the source file. When all the anomalies have been dealt with, I need the status of the file in tblFile to change to VALID so that it can be imported into a database.
Here is the trigger I have settled on, having designed the statements individually in an SQL editor. However, I do not know how to validate/debug the trigger statement after it is loaded into the database, so I cannot work out why it does not work: no action and no feedback! (A query for reading the stored trigger back is shown after the code below.)
CREATE TRIGGER
updateFileStatus
AFTER UPDATE ON tblAnomaly
WHEN 0 = (SELECT COUNT(*) FROM tblAnomaly WHERE tblAnomaly.file_name = tblFile.file_name AND tblAnomaly.accept = '#')
BEGIN
UPDATE tblFile
SET tblFile.file_status = 'VALID'
WHERE tblFile.file_name = tblAnomaly.file_name;
END;
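A quick sanity check is to read the stored trigger definition back out of sqlite_master (an empty result means the trigger was never created):
SELECT name, sql FROM sqlite_master WHERE type = 'trigger' AND name = 'updateFileStatus';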
So I worked it out! Here is the solution that works.
CREATE TRIGGER
updateFileStatus
AFTER UPDATE ON tblAnomaly
WHEN 0 = (SELECT COUNT(*)
FROM tblAnomaly
WHERE file_name = old.file_name
AND accept = '#')
BEGIN
UPDATE tblFile
SET file_status = 'VALID'
WHERE file_name = old.file_name;
END;
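For reference, a hypothetical update that exercises it (the rowid value is a placeholder): clearing the accept flag on the last outstanding '#' row for a file makes the COUNT(*) in the WHEN clause zero, and the trigger then flips that file's status to VALID.
UPDATE tblAnomaly SET accept = NULL WHERE rowid = 42;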
I am building a Shiny application which will allow CRUD operations by a user on a table in an SQLite3 database. I am using the input$table_rows_selected value from DT to get the indices of the rows selected by the user. I am then trying to delete the rows (using an action button deleteRows) from the database which have a matching timestamp (the epoch time stored as the primary key). The following code runs without any error but does not delete the selected rows.
observeEvent(input$deleteRows, {
if(!is.null(input$responsesTable_rows_selected)){
s=input$responsesTable_rows_selected
conn <- poolCheckout(pool)
lapply(length(s), function(i){
timestamp = rvsTL$data[s[i],8]
query <- glue::glue_sql("DELETE FROM TonnageListChartering
WHERE TonnageListChartering.timestamp = {timestamp}
", .con = conn)
dbExecute(conn, sqlInterpolate(ANSI(), query))
})
poolReturn(conn)
# Show a modal when the button is pressed
shinyalert("Success!", "The selected rows have been deleted. Refresh
the table by pressing F5", type = "success")
}
})
pool is a handler at the global level for connecting to the database.
pool <- pool::dbPool(drv = RSQLite::SQLite(),
dbname="data/compfleet.db")
Why does this not work? And if it did, is there any way of refreshing the datatable output without having to reload the application?
As pointed out by @RomanLustrik, there was definitely something 'funky' going on with timestamp. I am not well versed with SQLite, but running PRAGMA table_info(TonnageListChartering); revealed this:
0|vesselName||0||0
1|empStatus||0||0
2|openPort||0||0
3|openDate||0||0
4|source||0||0
5|comments||0||0
6|updatedBy||0||0
7|timestamp||0||1
8|VesselDetails||0||0
9|Name||0||0
10|VslType||0||0
11|Cubic||0||0
12|DWT||0||0
13|IceClass||0||0
14|IMO||0||0
15|Built||0||0
16|Owner||0||0
I guess none of the columns have a data type defined, and I am not sure whether that can be fixed now (a sketch for doing so is at the end of this answer). Anyway, I changed the query to ensure that the timestamp is in quotes.
query <- glue::glue_sql("DELETE FROM TonnageListChartering
WHERE TonnageListChartering.timestamp = '{timestamp}'
", .con = conn)
This deletes the user selected rows.
However, when I am left with only one row, I am unable to delete it. No idea why. Maybe because of a primary key that I have defined while creating the table?
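On the earlier point about column types: SQLite cannot change the declared type of an existing column, so the usual approach is to rebuild the table and copy the data across. A minimal sketch, with an abbreviated and assumed column list:
BEGIN TRANSACTION;
CREATE TABLE TonnageListChartering_new (
    vesselName TEXT,
    empStatus  TEXT,
    timestamp  INTEGER PRIMARY KEY
    -- ... remaining columns, each with a declared type
);
INSERT INTO TonnageListChartering_new (vesselName, empStatus, timestamp)
    SELECT vesselName, empStatus, timestamp FROM TonnageListChartering;
DROP TABLE TonnageListChartering;
ALTER TABLE TonnageListChartering_new RENAME TO TonnageListChartering;
COMMIT;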
I'm using an Oracle 11g database. I have a table named phonenumbers_tbl and I performed a DROP command on that table, but it returns the error resource busy and acquire with NOWAIT specified or timeout expired. After that I altered the session with alter session set ddl_lock_timeout = 600 and tried to drop the table again, but the error still persists.
Execute the query below first and check whether another session, or your own, holds a lock on that table. If your session holds the lock, commit or roll back. If someone else holds it, ask them to release it or, if you have the rights, kill their session. Then drop the table.
select session_id "sid",SERIAL# "Serial",
substr(object_name,1,20) "Object",
substr(os_user_name,1,10) "Terminal",
substr(oracle_username,1,10) "Locker",
nvl(lockwait,'active') "Wait",
decode(locked_mode,
2, 'row share',
3, 'row exclusive',
4, 'share',
5, 'share row exclusive',
6, 'exclusive', 'unknown') "Lockmode",
OBJECT_TYPE "Type"
FROM
SYS.V_$LOCKED_OBJECT A,
SYS.ALL_OBJECTS B,
SYS.V_$SESSION c
WHERE
A.OBJECT_ID = B.OBJECT_ID AND
C.SID = A.SESSION_ID
ORDER BY 1 ASC, 5 Desc
Yes! I finally found a solution: I moved the table phonenumbers_tbl to another tablespace (SYSTEM) and then dropped the table.
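The statements below are an assumed reconstruction of that description (note that moving application tables into the SYSTEM tablespace is normally discouraged):
ALTER TABLE phonenumbers_tbl MOVE TABLESPACE system;
DROP TABLE phonenumbers_tbl;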
First find the session holding the table lock, then kill that session:
SELECT a.sid,a.serial#, a.username,c.os_user_name,a.terminal,
b.object_id,substr(b.object_name,1,40) object_name
from v$session a, dba_objects b, v$locked_object c
where a.sid = c.session_id
and b.object_id = c.object_id;
ALTER SYSTEM KILL SESSION 'sid,serial#' ;
I want to insert a record and then update it according to the SCOPE_IDENTITY of the inserted record.
I'm doing this, but when I try to update the record I encounter an error.
WorkshopDataContext Dac = new WorkshopDataContext();
Dac.Connection.ConnectionString = "Data Source=dpsxxx-xxx;Initial Catalog=kar;User ID=sa;Password=xxxx";
Tbl_workshop Workshop = new Tbl_workshop();
Workshop.StateCode = Bodu.BduStateCode;
Workshop.CityCode = Bodu.BduCityCode;
Workshop.Co_workshop=12222;
Dac.Tbl_workshop.InsertOnSubmit(Workshop);
Dac.SubmitChanges();
Int64 Scope = Workshop.id;
var query = from record in Dac.Tbl_workshop where record.id == Scope select record;
query.First().co_Workshop = Scope;
Dac.SubmitChanges();
and this is the error:
Value of member 'co_Workshop' of an object of type 'Tbl_Workshop' changed.
A member defining the identity of the object cannot be changed.
Consider adding a new object with new identity and deleting the existing one instead.
If you have properly configured your Linq-to-SQL model to reflect the IDENTITY column in your table, you should have the new value available right after .SubmitChanges():
Tbl_workshop Workshop = new Tbl_workshop();
Workshop.StateCode = Bodu.BduStateCode;
Workshop.CityCode = Bodu.BduCityCode;
Workshop.Co_workshop=12222;
Dac.Tbl_workshop.InsertOnSubmit(Workshop);
Dac.SubmitChanges();
Int64 workshopID = Workshop.Id; // you should get new ID value here - automatically!!
You don't need to read that new value back from SQL Server yourself - Linq-to-SQL should automagically update your Workshop object with the proper value.
Update: to update your co_workshop value to the value given by the IDENTITY ID, do this (just set the value of co_workshop and save again - that's really all there is):
Dac.Tbl_workshop.InsertOnSubmit(Workshop);
Dac.SubmitChanges();
Int64 workshopID = Workshop.Id; // you should get new ID value here - automatically!!
Workshop.Co_workshop = workshopID;
Dac.SubmitChanges();
As the error says, you can't change co_Workshop because it is an identity (auto-increment) value. To freely edit it, you need to edit the database/model and remove this setting.
What is probably happening is that both id and co_Workshop are set as identity. Just disable the identity checkbox for co_Workshop.