Unable to create additional database manually in 11g

When I try to create a database manually in Oracle 11g, I'm facing the error below.
SQL> create database test
Datafile '/opt/oradata/test/system01.dbf' size 10M
Sysaux datafile '/opt/oradata/test/sysaux01.dbf' size 10M
Logfile '/opt/oradata/test/redo01.log' size 10M,
'/opt/oradata/test/redo02.log' size 10M
Undo tablespace undotbs1
Datafile '/opt/oradata/test/undo01.dbf' size 10M
Default temporary tablespace temp
Tempfile '/opt/oradata/test/temp01.dbf' size 10M;
Error:
SQL> /
create database test
*
ERROR at line 1:
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-01501: CREATE DATABASE failed
ORA-01519: error while processing file '?/rdbms/admin/doptim.bsq' near line 15
ORA-00604: error occurred at recursive SQL level 1
ORA-01658: unable to create INITIAL extent for segment in tablespace SYSTEM
Process ID: 4562
Session ID: 1 Serial number: 3
Kindly help.

"ORA-01658: unable to create INITIAL extent for segment in tablespace SYSTEM"
If you look at this error, it clearly says Oracle could not create the initial extent because the extent required is larger than the free space in the file. So that leaves you with two options:
1) Create a dictionary-managed (DMT) tablespace.
2) Increase the file sizes (100M rather than 10M), as sketched below.
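For example, the statement from the question should go through once the files can hold the initial extents of the data dictionary objects (a sketch of option 2; the 100M sizes are illustrative, and SYSTEM in particular may need even more):
create database test
Datafile '/opt/oradata/test/system01.dbf' size 100M
Sysaux datafile '/opt/oradata/test/sysaux01.dbf' size 100M
Logfile '/opt/oradata/test/redo01.log' size 10M,
'/opt/oradata/test/redo02.log' size 10M
Undo tablespace undotbs1
Datafile '/opt/oradata/test/undo01.dbf' size 100M
Default temporary tablespace temp
Tempfile '/opt/oradata/test/temp01.dbf' size 100M;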
Thank You,
Sid

Related

Datastage job failed: Netezza to Greenplum data load using ODBC Greenplum Wire Protocol driver

Greenplum_Connector_0,0: The following SQL statement failed: INSERT INTO GPCC_TT_20211121154035261_15420_0_XXXXX_TABLE_NAME (COLUMN1,COLUMN2,...) SELECT COLUMN1,COLUMN2,... FROM GPCC_ET_20211121154035417_15420_0. The statement reported the following reason: [SQLCODE=HY000][Native=3,484,948] [IBM (DataDirect OEM)][ODBC Greenplum Wire Protocol driver][Greenplum]ERROR: missing data for column "xyz_id" (seg2 slice1 192.168.0.0:00 pid=30826)(Where External table gpcc_et_20211121154035417_15420_0, line 91 of gpfdist://ABCD:123/DDCETLMIG_15420_gpw_3_3_20211121154035261: "AG?199645?ABCD EFGH. - HELLOU - JSF RT ADF?MMM?+1?A?DAD. SDA?0082323209?N?N..."; File copy.c; Line 5211; Routine NextCopyFromX; )
The trick here is to read the error message carefully. Somehow your job has managed not to provide a value for column xyz_id. Check your job design thoroughly.

Failed to add new value partitions to database

My cluster is composed of 3 linux_x64 servers. It contains 1 controller node and 3 data nodes. The server version of DolphinDB is v2.0.1.1.
dfsReplicationFactor=2
dataSync=1
The schema of the database is:
2021.09.09 08:42:57.180: execution was completed [34ms]
partitionSchema->[2021.09.06,2021.09.05,2021.09.04,2021.09.03,2021.09.02,2021.09.01,2021.08.31,2021.08.30,...]
databaseDir->dfs://dwd
engineType->OLAP
partitionSites->
partitionTypeName->VALUE
partitionType->1
When I insert data into the database “dfs://dwd”, I get an error:
Failed to add new value partitions to database dfs://dwd.Please manaually add new partitions [2021.09.07].
Then I use the following script to manually add partitions:
db=database("dfs://dwd")
addValuePartitions(db,2021.09.07..2021.09.09)
The error is:
<ChunkInRecovery>openChunks failed on '/dwd/domain', chunk cf57375e-b4b3-dc87-9b41-667a5e91a757 is in RECOVERING state
The repair method is shown as follows:
Step 1: use getClusterChunksStatus to get the chunk ID of '/dwd/domain' at the controller node. The sample code is shown below:
select * from rpc(getControllerAlias(), getClusterChunksStatus) where  file like "%/domain%" and state != 'COMPLETE'
Step 2: use getAllChunks to get the partition information for that chunk ID at the data node. In the code below, the chunk ID "4503a64f-4f5f-eea4-4247-a0d0fc3941a1" is the one obtained in step 1.
select * from pnodeRun(getAllChunks)  where chunkId="4503a64f-4f5f-eea4-4247-a0d0fc3941a1"
Step 3: use copyReplicas to copy the replica. Assuming the result of step 2 shows that the replica resides on datanode3, copy it to datanode1:
rpc(getControllerAlias(), copyReplicas{`datanode3, `datanode1, "4503a64f-4f5f-eea4-4247-a0d0fc3941a1"})
Step 4: use getClusterChunksStatus to check if the status is COMPLETE. If it is, then the repair is successful.
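For step 4, the query from step 1 can simply be rerun (a minimal check; an empty result means no replica of '/dwd/domain' is left in a non-COMPLETE state and the repair succeeded):
// same status query as in step 1; expect zero rows once the copy finishes
select * from rpc(getControllerAlias(), getClusterChunksStatus) where file like "%/domain%" and state != 'COMPLETE'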

Lattice Diamond Clock Constraint, cannot properly identify signal, port, pin, net

The design has an internal oscillator at 2.08 MHz. The 2.08 MHz logic has no timing errors reported once compiled, placed, and routed. An async clock input, at a 100 MHz rate, has timing errors. I am trying to use the constraints to set the clock rate, but I cannot seem to properly identify the net, pin, or port to set the constraint on. It failed with the warnings below.
The line in the constraint.lcd file is: create_clock -period 50.000000 -name clk1 [get_nets pin22_c]
--------------------------it failed using the constraint with this message -----------------------------------------
WARNING - No object with type NET found matching pin22_c;
WARNING - Ignoring Constraint: create_clock -period 50.000000 -name clk1 [get_nets pin22_c].
------------------------the report before using the constraint file ----------------------------------------------
Constraint: create_clock -period 5.000000 -name clk1 [get_nets pin22_c]
119 items scored, 21 timing errors detected.
Error: The following path violates requirements by 2.088ns
Logical Details: Cell type Pin type Cell name (clock net +/-)
Source: FD1P3IX CK \so/sireaddone_30 (from pin22_c +)
Destination: FD1S3AX D \so/shiftreg_i4 (to pin22_c +)
Delay: 6.928ns (27.8% logic, 72.2% route), 4 logic levels.
Constraint Details:
6.928ns data_path \so/sireaddone_30 to \so/shiftreg_i4 violates
5.000ns delay constraint less
0.160ns L_S requirement (totaling 4.840ns) by 2.088ns
An all-day struggle with Lattice Diamond trying to set the clock constraint... finally figured out what needs doing:
process the design so that you have a netlist
I used an empty constraint file abc.lcd so that it appeared in the file list
right-click this file in the file list tab and open it with the LCD editor
now double-click the Source box, choose Clock Port, and select pin22
double-click the other boxes and enter the desired values
then, under File, click Save to save the file
rerun the process and all is well; the clock is set to 20 MHz
be happy, but disappointed with LATTICE!
then look at the file to find the syntax is not even close to what the manuals state! Notice that the generated line uses get_ports on the top-level port pin22 rather than get_nets on the internal net pin22_c, which is why the earlier constraint was ignored:
#
create_clock -period 50.000000 -name asyncclk [ get_ports { pin22 } ]
#

MariaDB 10.2.10 writing double binlog entries in mixed format

I am using MariaDB 10.2.10 under Debian 9 in master/slave replication. I am experiencing problems with replication because the slave refuses to replicate due to 1062 duplicate key errors.
After a long time of investigation I found that the binlog of the master contains the same INSERT statement twice: it is written in statement AND row-based format. binlog_format is set to MIXED.
I had a look at the general log - the INSERT statement was only committed once.
Here's the output of mysqlbinlog:
# at 11481089
#171205 10:22:37 server id 126 end_log_pos 11481132 CRC32 0x73b0f77c
Write_rows: table id 22683990 flags: STMT_END_F
### INSERT INTO `mydb`.`document_reference`
### SET
### #1=30561
### #2=6
### #3=0
# at 11481132
#171205 10:22:37 server id 126 end_log_pos 11481387 CRC32 0x599e2b04
Query thread_id=3282752 exec_time=0 error_code=0
SET TIMESTAMP=1512465757/*!*/;
INSERT INTO document_reference
(document_reference_document_id, document_reference_type, document_reference_value)
VALUES (30561, "single", 0)
/*!*/;
# at 11481387
#171205 10:22:37 server id 126 end_log_pos 11481418 CRC32 0x73fe1166 Xid = 248234294
COMMIT/*!*/;
Does anyone have an idea why this statement is written twice to the binlog?
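One thing worth checking (a diagnostic sketch, not a confirmed cause): whether the session that issues the INSERT really runs with binlog_format=MIXED, since a session can override the global value and a mismatch could explain unexpected logging:
SHOW GLOBAL VARIABLES LIKE 'binlog_format';
-- run this one from the connection that performs the INSERT
SHOW SESSION VARIABLES LIKE 'binlog_format';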

RODBC Teradata Copy Table

I am using RODBC with R to connect to Teradata.
I am trying to copy a large table EXAMPLE (25 GB) from the READ_ONLY database to the WORK database. The two databases are on the same DB system, so I only need one connection.
I have tried the sqlQuery, sqlCopy and sqlCopyTable functions but do not succeed.
sqlQuery
EDIT: syntax error corrected as suggested by @dnoeth.
CREATE TABLE WORK.EXAMPLE AS (SELECT * FROM READ_ONLY.EXAMPLE) WITH DATA;
OR
CREATE TABLE WORK.EXAMPLE AS (SELECT * FROM READ_ONLY.EXAMPLE) WITH NO DATA;
INSERT INTO WORK.EXAMPLE SELECT * FROM READ_ONLY.EXAMPLE;
I let the latter method run for 15h but it did not complete the copy.
sqlCopy
sqlCopy(ch,
query='SELECT * FROM READ_ONLY.EXAMPLE',
destination = 'WORK.EXAMPLE')
Error: cannot allocate vector of size 155.0 Mb
Does sqlCopy try to first copy the data into R's memory before creating the new table? If so, how can I bypass this step and work exclusively on the Teradata server? Also, the error persists even if I use the option fast=F.
In case R's memory was the issue, I tried creating a smaller table of 1000 rows:
sqlCopy(ch,
query='SELECT * FROM READ_ONLY.EXAMPLE SAMPLE 1000',
destination = 'WORK.EXAMPLE')
Error in sqlSave(destchannel, dataset, destination, verbose = verbose, :
[RODBC] Failed exec in Update
22018 0 [Teradata][ODBC Teradata Driver] Data is not a numeric-literal.
In addition: Warning message:
In odbcUpdate(channel, query, mydata, coldata[m, ], test = test, :
character data '2017-03-20 12:08:25' truncated to 15 bytes in column 'ExtractionTS'
With this command a table was actually created but it only includes the column names without any rows.
sqlCopyTable
sqlCopyTable(ch,
srctable = 'READ_ONLY.EXAMPLE',
desttable = 'WORK.EXAMPLE')
Error in if (as.character(keys[[4L]]) == colnames[i]) create <- paste(create, :
argument is of length zero
The syntax in your sqlQuery is not correct; the WITH DATA option is missing:
CREATE TABLE WORK.EXAMPLE AS (SELECT * FROM READ_ONLY.EXAMPLE) WITH DATA;
Caution: this will lose all NOT NULL & CHECK constraints and all indexes, resulting in the first column becoming a Non-Unique Primary Index.
Either add a PI manually or switch to
CREATE TABLE WORK.EXAMPLE AS READ_ONLY.EXAMPLE WITH DATA;
if READ_ONLY.EXAMPLE is a table and you actually want an exact copy.
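Regarding the memory question: since this CREATE TABLE runs entirely on the Teradata server, it can simply be sent through sqlQuery on the existing channel, and no rows ever enter R's memory (a minimal sketch; ch is the RODBC connection from the question):
library(RODBC)
# the CTAS executes server-side; only the statement text crosses the ODBC link
sqlQuery(ch, "CREATE TABLE WORK.EXAMPLE AS READ_ONLY.EXAMPLE WITH DATA")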
