DMP File Compression - Oracle EXPDP - oracle11g

I am using Oracle 11g and exporting the database using EXPDP. The dump file will be around 50 GB, so I am running out of space on the production server. I tried COMPRESSION=ALL in my EXPDP command, but while running it I get an error saying something like "Not Enabled".
Here is the EXPDP command:
for /f "tokens=2,3,4 delims=/ " %%a in ('date /t') do set fdate=%%c%%a%%b
EXPDP username/password@sid COMPRESSION=ALL DIRECTORY=EXPDP_CUSTOM_DIR TABLESPACES=USER DUMPFILE=user.dmp
Do I need to change anything in this?

You need to have licensed the Advanced Compression Option to use this feature. Otherwise, options for export compression are pretty slim with Data Pump. With the older exp utility you could pipe the output through a compression program, but I don't think that's possible here.
You might consider specifying a maximum file size (1 GB, say) and including a substitution variable in the dump file name so you produce a bunch of smaller files, then have a cron job watch for them and compress each one as soon as the export process releases it; a sketch follows.
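A minimal sketch of that approach, assuming the EXPDP_CUSTOM_DIR directory object from the question maps to /u01/expdp on the server (a placeholder path):

# Split the dump into 1 GB pieces; %U expands to 01, 02, ... per file.
expdp username/password@sid DIRECTORY=EXPDP_CUSTOM_DIR TABLESPACES=USER DUMPFILE=user_%U.dmp FILESIZE=1G

# From a cron job: compress each piece once nothing holds it open
# (fuser -s exits non-zero when no process is accessing the file).
for f in /u01/expdp/user_*.dmp; do fuser -s "$f" || gzip "$f"; done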

Related

SQLite3 database or disk is full on csv imports

This issue has been discussed on a number of threads, but none of the proposals seem to apply to my case.
I have a very large sqlite database (4 TB). I am trying to import csv files from the terminal:
sqlite3 -csv -separator " " /data/mydb.db ".import '|cat *.csv' mytable"
I intermittently receive "database or disk is full" errors from sqlite3. Re-running the command after an error usually succeeds.
Some notes:
/data has 3.2 TB free.
/tmp has 1.8 TB free.
*.csv takes up approximately 802 GB.
Both /tmp and /data use ext4, which has a maximum file size of 16 TB.
The only process accessing the database is the one mentioned above.
PRAGMA integrity_check returns ok.
Tested on both of these builds (sqlite3 --version):
3.38.1 2022-03-12 13:37:29 38c210fdd258658321c85ec9c01a072fda3ada94540e3239d29b34dc547a8cbc
3.31.1 2020-01-27 19:55:54 3bfa9cc97da10598521b342961df8f5f68c7388fa117345eeb516eaa837balt1
OS - Ubuntu 20.04
Any thoughts on what could be happening?
(Unless there is an informed reason why I am exceeding SQLite's limits, I would prefer to avoid suggestions that I move to a client/server RDBMS.)
I didn't figure it out, but someone else did. I am pretty sure this will "fix it" until you reach roughly 8 TB:
sqlite3 ... "PRAGMA main.max_page_count=2147483647; .import '|cat *.csv' mytable"
However, the invocation
sqlite3 ... "PRAGMA main.journal_mode=DELETE; PRAGMA main.max_page_count=2147483647; PRAGMA main.page_size=65536; VACUUM; .import '|cat *.csv' mytable"
should allow the db to grow to roughly 140 TB, but that VACUUM command, which is needed to apply the new page_size, requires a lot of free space to run and will probably take a long time.
The good news is that you only need to run it once, and the page_size change is permanent for your db; your next invocation only needs sqlite3 ... ".import '|cat *.csv' mytable"
Notably, this will probably break again around that ~140 TB mark.
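These ceilings come straight from SQLite's size cap of page_size * max_page_count. A quick way to check where a given database stands, using the path from the question:

# Show the current page size and page-count cap; their product is the
# maximum database size in bytes.
sqlite3 /data/mydb.db "PRAGMA page_size; PRAGMA max_page_count;"
# 4096  * 2147483647 ≈ 8.8 TB  (the "8TB-ish" ceiling above)
# 65536 * 2147483647 ≈ 140 TB  (after the page_size change + VACUUM)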

What is the filepath that a "Read CSV" operator needs to read a file from RapidMiner Server?

I have a RM Server running on a VM (Ubuntu) on top of my Win10 machine.
I have a process that reads a .csv file and writes its contents to a MySQL database; the MySQL Server also runs on the same VM.
The problem is that the Read CSV operator does not seem to be able to find the file.
Scenario 1:
When I use ../data/myFile.csv as the file location in the Read CSV operator and run the process on Server, I get: Failed to execute initialization process: Error executing process /apps/myApp/process/task_read_csv_to_db: The file 'java.io.FileNotFoundException: /root/../data/myFile.csv (No such file or directory)' does not exist.
Scenario 2:
When I use /apps/myApp/data/myFile.csv as the file location in the Read CSV operator and run the process on Server, I get: Failed to execute initialization process: Error executing process /apps/myApp/process/task_read_csv_to_db: The file 'java.io.FileNotFoundException: /apps/myApp/data/myFile.csv (No such file or directory)' does not exist.
What is the right filepath that I should give to the Read CSV operator?
Just to update with the answer: following David's suggestion, I ended up storing the .csv file outside of /rapidminer-server-home/data/repository, since every remote repository seems to be represented by an integer instead of its original name, which makes the actual full path of the file unusable.
I would say the issue is that the relative path may vary depending on the location of the Job Agent that is executing your process.
Is /apps/myApp/data/myFile.csv the correct path to the file? If not, I would suggest using the absolute path to the file (a quick check is sketched below). Hope this helps.
Best,
David
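One way to sanity-check David's suggestion, assuming the file lives at a hypothetical absolute path on the Ubuntu VM that runs the Job Agent:

# On the VM, confirm the Job Agent's OS user can actually see the file:
ls -l /home/rapidminer/data/myFile.csv
# If this works, use that same absolute path in the Read CSV operator
# rather than a repository-relative one.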

fast export unexplained failure

I have roughly 14 million records that I am attempting to export from a Teradata table to a file using a FastExport connection object.
There is no size limit for fast export files on our Linux system, and there is 1.2 TB of available space in the target directory.
The session fails, and gives the following errors:
READER_2_1_1 FEXP_87011 Process [16022] exited with status [12]
SDKS_38200 Partition-level [SOURCE_TABLE_NAME]: Plug-in #305400 failed in deinit()
I googled the error message and found a post with some recommendations. I followed them: delete the .out file in the temp directory, delete the partially filled files in the target directory, drop the error table, and delete the log file. This did not fix the issue, and the session still fails with the same error messages.
Try using the TPT Export plug-in instead. You can also try executing this FastExport directly in your Unix environment with a BTEQ script; a sketch follows.
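A minimal BTEQ sketch of that fallback, driven from the shell; the tdpid, credentials, database/table name, and output path are placeholders for your environment:

# BTEQ runs a single session, so it is slower than FastExport/TPT for
# 14 million rows, but it is useful for isolating the failure.
# DATA mode writes Teradata binary records; use .EXPORT REPORT for text.
bteq <<'EOF'
.LOGON tdpid/username,password;
.EXPORT DATA FILE = /target/dir/source_table.dat;
SELECT * FROM mydb.source_table;
.EXPORT RESET;
.LOGOFF;
.QUIT;
EOF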

Creating a Dump File for Oracle table

I intend to export one table from my database as a dmp file. This is what I am doing:
expdp SYSTEM/manager@UATDB FILE=F:\LLT.dmp log=F:\llt.log tables=TBAADM.LLT
The error I am getting is:
LRM-00101: unknown parameter name 'FILE'
What is my mistake? Please help.
OK guys, I found the answer. You first need to create a directory object for the location where the dmp file is going to be written (see the setup sketch after the command), then export like this:
expdp USERNAME@server_ip/SERVICE_NAME DIRECTORY=DIR_NAME DUMPFILE=FILE.dmp TABLES=SCHEMA.TABLE_NAME
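The one-time directory setup mentioned above looks like this, run as a privileged user; shown here from a Unix shell via SQL*Plus, and on Windows you can run the same two statements at the SQL> prompt with a path such as 'F:\dpdump'. Names and paths are placeholders:

# One-time setup: create the directory object and grant access to the
# schema owner (directory name, path, and grantee are placeholders).
sqlplus system/manager@UATDB <<'EOF'
CREATE OR REPLACE DIRECTORY dir_name AS '/u01/app/oracle/dpdump';
GRANT READ, WRITE ON DIRECTORY dir_name TO tbaadm;
EOF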
You can also use the command below. It is a shell command, so execute it in the shell. Once the data export completes, look in the dpdump folder under the Oracle installation directory, where you will find your exported file along with the log:
expdp userid/pwd schemas=dbschema dumpfile=file.dmp logfile=file.log

SQLite short file names 8.3

I am attempting to compile SQLite for an operating system that does not support long file names. The max file name is 8 chars long with an extension of 3 chars (8.3).
Currently a "-journal" file is created while using SQLite; this breaks the file name rule and stops SQLite with a "Disk I/O Error".
I have tried to disable the journal with PRAGMA journal_mode=OFF, but it appears that the file still gets created and then destroyed.
Is there any way (compile flag, PRAGMA, etc.) to force SQLite to use 8.3 file names?
Is there any way to disable the journal from being created?
Not Windows, not Unix, not OS/2; another OS.
Option 1: Since you need to create a VFS for your "not Windows, not Unix, not OS/2" OS anyway, you could have its xOpen function translate "name.sdb-journal" into "name.jnl".
Option 2: Modify sqlite3PagerOpen to build the journal name by a different mechanism, such as changing the file extension.
