I executed:
CREATE BIGFILE TABLESPACE tspvr010
datafile 'tspvr010.dbf'
size 120g;
but while that was running, my PC crashed (unexpected shutdown).
When the PC started again, the tablespace tspvr010 had not been created, and the USERS tablespace had grown by 30 GB.
I tried to drop tspvr010, but it does not exist.
Now I can't recreate it because I don't have enough free space.
Is it possible to roll this back?
Run:
SQL> select name from v$datafile;
If tspvr010.dbf does not appear in the output, the datafile was never registered in the controlfile.
In that case, if you find the file on the Oracle server's file system, you can safely remove it at the OS level.
Follow these steps:
Change to your datafile directory and check whether tspvr010.dbf exists:
$ ls -l tspvr010.dbf
If the file is there, remove it:
$ rm -f tspvr010.dbf
Then restart the instance and check the space used by your datafiles:
SQL> startup force
SQL> select sum(bytes)/1024/1024 from dba_data_files;
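To double-check before touching anything at the OS level, you can also confirm that nothing named tspvr010 made it into the data dictionary. A minimal sketch, assuming you are connected as SYSDBA (object names taken from the question):

-- Both queries should return no rows if the crashed CREATE never registered anything
SELECT tablespace_name FROM dba_tablespaces WHERE tablespace_name = 'TSPVR010';
SELECT file_name FROM dba_data_files WHERE LOWER(file_name) LIKE '%tspvr010%';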
I am trying to create a sealed command for my build pipeline which inserts data and quits.
So far I have created my data files,
things-to-import-001.sql, 002, etc., which contain all the INSERT statements I'd like to run, with one file per table.
I have created a command file to run them
-- import-all.sql
.read ./things-to-import-001.sql
.read ./things-to-import-002.sql
.quit
However, when I run my command
sqlite3 -init ./import-all.sql ./database.sqlite
...the data is inserted, but the program remains running and shows the sqlite> prompt, despite the .quit command. I have also tried using .exit 0.
From the sqlite3 --help output:
-init FILENAME read/process named file
Docs: https://www.sqlite.org/cli.html#reading_sql_from_a_file
How can I tell sqlite to exit once my inserts have finished?
I have managed to find a dirty workaround for this issue.
I have updated my import file to include a bad command, and executed using -bail to quit on first error.
-- import-all.sql
.read ./things-to-import-001.sql
.read ./things-to-import-002.sql
.fakeErrorToQuitWithBail
Then you can execute it with
sqlite3 -init import-all.sql -bail ./database.sqlite
and it should quit with
Error: unknown command or invalid arguments: "fakeErrorToQuitWithBail". Enter ".help" for help
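A related sketch that avoids the fake error entirely (this relies on standard sqlite3 behavior and is not part of the workaround above): skip -init and feed the script on standard input, and the shell exits as soon as the input is exhausted.

# Paths as in the question; -bail still aborts on the first real error
sqlite3 -bail ./database.sqlite < import-all.sql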
Try using ".exit" in place of ".quit". For some reason these commands are not well documented by SQLite.
https://www.tutorialspoint.com/sqlite/sqlite_commands.htm
This issue has been discussed on a number of threads, but none of the proposals seem to apply to my case.
I have a very large SQLite database (4 TB). I am trying to import CSV files from the terminal:
sqlite3 -csv -separator " " /data/mydb.db ".import '|cat *.csv' mytable"
I intermittently receive "database or disk is full" errors from SQLite3. Re-running the command after an error usually succeeds.
Some notes:
/data has 3.2 TB free.
/tmp has 1.8 TB free.
*.csv takes up approximately 802 GB.
Both /tmp and /data use ext4, which has a maximum file size of 16 TB.
The only process accessing the database is the one mentioned above.
PRAGMA integrity_check returns ok.
Tested on both of these sqlite3 --version builds:
3.38.1 2022-03-12 13:37:29 38c210fdd258658321c85ec9c01a072fda3ada94540e3239d29b34dc547a8cbc
3.31.1 2020-01-27 19:55:54 3bfa9cc97da10598521b342961df8f5f68c7388fa117345eeb516eaa837balt1
OS - Ubuntu 20.04
Any thoughts on what could be happening?
(Unless there is an informed reason why I am exceeding SQLite's limits, I would prefer to avoid suggestions that I move to a client/server RDBMS.)
I didn't figure it out myself, but someone else did. I'm pretty sure this will "fix it" until you reach roughly 8 TB:
sqlite3 ... "PRAGMA main.max_page_count=2147483647; .import '|cat *.csv' mytable"
However, the invocation
sqlite3 ... "PRAGMA main.journal_mode=DELETE; PRAGMA main.max_page_count; PRAGMA main.max_page_count=2147483647; PRAGMA main.page_size=65536; VACUUM; .import '|cat *.csv' mytable"
should allow the db to grow to ~200 TB, but that VACUUM command, which is needed to apply the new page_size, requires a lot of free space to run and will probably take a long time.
The good news is that you only need to run that once and it should be a permanent change to your db; your next invocation only needs sqlite3 ... ".import '|cat *.csv' mytable"
Notably, this will probably break again around ~200 TB.
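As a rough sanity check of where those ceilings come from (my arithmetic, not part of the answer above): the maximum database size is page_size multiplied by max_page_count, and both values can be inspected directly.

-- Inspect the limits discussed above
PRAGMA main.page_size;       -- e.g. 4096 bytes per page by default
PRAGMA main.max_page_count;  -- e.g. 2147483647 pages after the first invocation
-- 4096  * 2147483647 ~ 8.8 TB (the "8TB-ish" ceiling)
-- 65536 * 2147483647 ~ 140 TB (same order of magnitude as the ~200 TB figure)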
I have roughly 14 million records that I am attempting to export from a Teradata table to file using a fast export connection object.
There is no size limit for fast export files on our Linux system, and there is 1.2 TB of available space in the target directory.
The session fails, and gives the following errors:
READER_2_1_1 FEXP_87011 Process [16022] exited with status [12]
SDKS_38200 Partition-level [SOURCE_TABLE_NAME]: Plug-in #305400 failed in deinit()
I googled the error message, and found this post:
Here
I followed the recommendations in the post: delete the .out file in the temp directory, delete the files that were partially written in the target directory, drop the error table, and delete the log file. This did not fix the issue, and the session still fails with the same error messages.
Try using the TPT Export plug-in instead. You can also try executing this FastExport using scripts run directly in your Unix environment.
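If you go the script route, a minimal standalone FastExport (fexp) sketch might look like the following. Every object name, path, and credential here is a placeholder, and the session count and output format would need to match your actual job:

/* Hypothetical FastExport script, run with: fexp < export_job.fx */
.LOGTABLE mydb.fexp_restart_log;            /* placeholder restart log table */
.LOGON tdpid/username,password;             /* placeholder credentials       */
.BEGIN EXPORT SESSIONS 8;
.EXPORT OUTFILE /target/dir/source_table.out MODE RECORD FORMAT TEXT;
SELECT *
FROM   mydb.source_table;                   /* placeholder source table      */
.END EXPORT;
.LOGOFF;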
Is there a way (or a module) to write log entries to disk instead of to the database? I really don't want my database getting fatter just because of log lines.
Yes, there is. It's part of Drupal core. It's called Syslog. However, it logs to the system log file by default.
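If you use Drush (an assumption on my part; the module's machine name, syslog, is from the answer above), enabling it is a one-liner:

# Enable the core Syslog module
drush en syslog -y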
I hope you have some fast disks... you could easily create a bottleneck by doing so. Instead, I would regularly dump the log tables to a file, say using a cron job.
You could add this to a file called drupal_logs.sh:
#!/bin/bash
# Dump the Drupal log tables to a timestamped SQL file
NOW=$(date +"%Y%m%d")_$(date +"%H%M.%S")
mysqldump --user=username -p dbname tableName1 tableName2 > /path/to/drupal_$NOW.sql
And schedule it to run every 15 minutes by adding the following cron job:
*/15 * * * * /path/to/drupal_logs.sh > /dev/null
And if you're worried about the log tables in the database getting too large, you can follow the mysqldump command in drupal_logs.sh with a TRUNCATE of the exported tables, as sketched below.
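A minimal sketch of that follow-up step, reusing the placeholder names from the script above:

# Empty the log tables once they have been exported (only run after a successful dump)
mysql --user=username -p dbname -e "TRUNCATE TABLE tableName1; TRUNCATE TABLE tableName2;"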
This is making me kind of crazy: I did a mysqldump of a partitioned table on one server, moved the resulting SQL dump to another server, and attempted to run the insert. It fails, but I'm having difficulty figuring out why. Google and the MySQL forums and docs have not been much help.
The failing query looks like this (truncated for brevity and clarity, names changed to protect the innocent):
CREATE TABLE `my_precious_table` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`somedata` varchar(20) NOT NULL,
`aTimeStamp` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (`id`,`aTimeStamp`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 DATA DIRECTORY='/opt/data/data2/data_foo/' INDEX DIRECTORY='/opt/data/data2/idx_foo/'
/*!50100 PARTITION BY RANGE (year(aTimeStamp)) SUBPARTITION BY HASH ( TO_DAYS(aTimeStamp))
(PARTITION p0 VALUES LESS THAN (2007) (SUBPARTITION foo0 DATA DIRECTORY = '/opt/data/data2/data_foo' INDEX DIRECTORY = '/opt/data/data2/idx_foo' ENGINE = MyISAM),
PARTITION p1 VALUES LESS THAN (2008) (SUBPARTITION foo1 DATA DIRECTORY = '/opt/data/data2/data_foo' INDEX DIRECTORY = '/opt/data/data2/idx_foo' ENGINE = MyISAM),
PARTITION p2 VALUES LESS THAN (2009) (SUBPARTITION foo2 DATA DIRECTORY = '/opt/data/data2/data_foo' INDEX DIRECTORY = '/opt/data/data2/idx_foo' ENGINE = MyISAM),
PARTITION p3 VALUES LESS THAN MAXVALUE (SUBPARTITION foo3 DATA DIRECTORY = '/opt/data/data2/data_foo' INDEX DIRECTORY = '/opt/data/data2/idx_foo' ENGINE = MyISAM)) */;
The error is:
ERROR 1 (HY000): Can't create/write to file '/opt/data/data2/idx_foo/my_precious_table#P#p0#SP#foo0.MYI' (Errcode: 13)
"Can't create/write to file" looked like a permissions issue to me, but permissions on the targeted folders look thus:
drwxrwxrwx 2 mysql mysql 4096 Dec 1 16:24 data_foo
drwxrwxrwx 2 mysql mysql 4096 Dec 1 16:25 idx_foo
For kicks, I've tried chowning to root:root and myself. This did not fix the issue.
Source MySQL server is version 5.1.22-rc-log. Destination server is 5.1.29-rc-community. Both are running on recent CentOS installations.
Edit: A little more research shows that Errcode 13 is, in fact, a permissions error. But how can I get that on rwxrwxrwx?
Edit: Bill Karwin's excellent suggestion didn't pan out. I'm working as the root user, and have all privilege flags set.
Edit: Creating the table WITHOUT specifying data directories for the individual partitions works - but I need to put these partitions on a larger disk than the one on which this MySQL instance puts tables by default. And I can't just specify the DATA/INDEX DIRECTORY at the table level - that's not legit in the version of MySQL I'm using (5.1.29-rc-community).
Edit: Finally came across the answer, thanks to the MySQL mailing list and internal IT staff. See below.
On Ubuntu, look into the AppArmor settings for MySQL:
vi /etc/apparmor.d/usr.sbin.mysqld
This should solve the permission issues. For a quick test, you can even try
/etc/init.d/apparmor stop
But don't forget to restart the service.
This took me some time to figure out. After reading the "SELinux" answer, it was clear that I had forgotten about this kind of protection on Ubuntu.
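For completeness, a sketch of the kind of profile entries involved, using the directory names from the question (the exact rules are an assumption, not a verified configuration):

  /opt/data/data2/data_foo/ r,
  /opt/data/data2/data_foo/** rwk,
  /opt/data/data2/idx_foo/ r,
  /opt/data/data2/idx_foo/** rwk,

After editing, reload the profile with apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld (or restart AppArmor).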
http://bugs.mysql.com/bug.php?id=19557
You will also receive this error message if the MySQL user ID running the query does not have the FILE privilege, which allows the user ID to write to the file system.
In other words, it can be a permission problem with respect to SQL privileges, not operating system file permissions.
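A minimal way to check and fix that, sketched here with a placeholder account name:

-- Check the privileges of the account you are using
SHOW GRANTS FOR CURRENT_USER();
-- FILE is a global privilege, so it must be granted on *.*
GRANT FILE ON *.* TO 'someuser'@'localhost';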
It turned out to be an SELinux issue: all my filesystem permissions were fine, but there was a higher-level policy set against MySQL accessing that disk partition.
Lesson: when you have a permissions issue but ownership and filesystem permissions are obviously correct, look to SELinux.
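For anyone who lands on the same problem, a minimal sketch of the usual SELinux-side fix for a non-default MySQL directory (path taken from the question; assumes the policycoreutils tools are installed):

# Label the custom data/index location so mysqld is allowed to read and write it,
# then apply the new context recursively.
semanage fcontext -a -t mysqld_db_t "/opt/data/data2(/.*)?"
restorecon -Rv /opt/data/data2

# For a quick temporary test only, SELinux can be switched to permissive mode:
setenforce 0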