How to delete all files of a database in sqlite3?
I tried to delete the database file I created, but there are some strange files left.
If your sqlite3 database filename is "/xxx/test.db", you need to try to delete the following four files:
"/xxx/test.db"
"/xxx/test.db-shm"
"/xxx/test.db-wal"
"/xxx/test.db-journal"
These four files may or may not exist, so you should ignore any "file does not exist" error from the unlinkat/unlink syscall.
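For example, a minimal shell sketch using the paths above (rm -f already suppresses the error for files that do not exist):
rm -f /xxx/test.db /xxx/test.db-shm /xxx/test.db-wal /xxx/test.db-journal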
From https://www.sqlite.org/tempfiles.html:
SQLite currently uses nine distinct types of temporary files:
Rollback journals
Master journals
Write-ahead Log (WAL) files
Shared-memory files
Statement journals
TEMP databases
Materializations of views and subqueries
Transient indices
Transient databases used by VACUUM
The following are always written in the same directory as the database file:
rollback journal (with the same name as the database file but with the 8 characters "-journal" appended)
WAL ("-wal" appended)
shared-memory file ("-shm" appended)
Master Journal File (randomized suffix)
Other temporary files can be located in the same directory, depending on a variety of factors.
See the above link for further details.
By request
If DBNAME is a pathname of the SQLite database, you might like to consider these options for removing all the related files in the directory in which the database file lives:
rm -i ${DBNAME} ${DBNAME}-*
or:
rm -i ${DBNAME}*
or if you're quite sure, either of the above but without -i
Related
I need to load hive partitions from staging folders. Currently we copy and delete. Can I use mv?
I am told that I cannot use mv if the folders are EAR (Encryption At Rest). How can I tell whether a folder is EAR'ed?
I'm assuming the feature you are using for encryption at rest is HDFS transparent encryption (see the Cloudera 5.14 docs).
There is a command to list all the zones configured for encryption, listZones, but that command requires admin privileges. However, if you just need to check one file at a time, you should be able to run getFileEncryptionInfo without those privileges.
For example
hdfs crypto -getFileEncryptionInfo -path /path/to/my/file
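For reference, the zone listing mentioned above (which needs admin privileges) would be:
hdfs crypto -listZones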
As for whether you can move files, it looks like the answer to that is no. From the "Rename and Trash considerations" section of the transparent encryption documentation:
HDFS restricts file and directory renames across encryption zone boundaries. This includes renaming an encrypted file / directory into an unencrypted directory (e.g., hdfs dfs mv /zone/encryptedFile /home/bob), renaming an unencrypted file or directory into an encryption zone (e.g., hdfs dfs mv /home/bob/unEncryptedFile /zone), and renaming between two different encryption zones (e.g., hdfs dfs mv /home/alice/zone1/foo /home/alice/zone2).
and
A rename is only allowed if the source and destination paths are in the same encryption zone, or both paths are unencrypted (not in any encryption zone).
So it looks like using cp and rm is your best bet.
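For example, a minimal sketch of the copy-then-delete approach; /zone1/file and /zone2/file are hypothetical paths in two different encryption zones:
hdfs dfs -cp /zone1/file /zone2/file
hdfs dfs -rm /zone1/file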
I am using this to extract data from an sqlite database file; commands.txt is where I put my sqlite query:
sqlite3 base1.db < commands.txt > "base1.csv"
This works OK for one file; now I need to apply those commands to multiple .db files at once. These .db files are stored inside subfolders (they all have the same structure, so the sqlite command works OK for all of them).
I need to collect data from all .db files, based on the sqlite query in commands.txt, into one .csv file, if possible.
When I execute this, I don't get results, only empty files.
cd /D "C:\sqlite-tools-win32-x86"
for /R %%G in (*.db) do (
sqlite3 < commands.txt > "%%~dpnG.csv"
)
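One likely cause, assuming the intent is to run commands.txt against each database found: the loop never passes %%G to sqlite3, so the commands run against an empty in-memory database. A sketch of the loop with the database argument added:
cd /D "C:\sqlite-tools-win32-x86"
for /R %%G in (*.db) do (
    sqlite3 "%%G" < commands.txt > "%%~dpnG.csv"
)
To collect everything into a single file instead, redirect each iteration with >> to one shared .csv.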
I am in the process of doing a remote-to-local copy using rsync, and the file list is picked up from a txt file which looks like the one below:
#FILE_PATH FILENAME
/a/b/c test1.txt
/a/x/y test2.txt
/a/v/w test1.txt
The FILE_PATH is the same on the remote and local servers. The problem is, I need to copy the files to a staging area on the local server first and then move them to the FILE_PATH, to ensure integrity.
If I simply copy all the files to the staging area, test1.txt will get overwritten. So I thought I could combine the FILE_PATH and FILENAME so each name becomes unique, but I cannot create a file named /a/b/c/test1.txt directly in my staging area.
So I thought of replacing / with special characters that Unix supports.
I tried -, _, :, and ., and got conflicts with all of them, e.g.:
-a-b-c-test1.txt
How can I copy files into the same staging directory when the file names are the same but are meant to end up in different directories?
Your thoughts, please.
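One possible sketch, assuming none of the original paths ever contain the literal text %2F: build the staging name by URL-style escaping the slashes, which round-trips unambiguously back to the real destination path:
# flatten /a/b/c + test1.txt into one unique staging filename
staged=$(printf '%s/%s' "$FILE_PATH" "$FILENAME" | sed 's|/|%2F|g')
# later, recover the original destination path from the staging name
dest=$(printf '%s' "$staged" | sed 's|%2F|/|g')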
rsync creates a temporary hidden file during a transfer, but the file is renamed when the transfer is complete. I would like to rsync files without creating a hidden file.
alvits is correct: --inplace will fix this for you. I found this when I had issues syncing music to my phone (mounted on Ubuntu with jmtpfs). I would get a string of errors renaming the temporary files to replace existing files.
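For example, a minimal sketch with hypothetical paths, updating files in place instead of writing a temporary file and renaming it:
rsync -av --inplace ~/Music/ /media/phone/Music/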
The following command works great for me for a single file:
scp your_username@remotehost.edu:foobar.txt /some/local/directory
What I want to do is do it recursively (i.e. for all subdirectories/files of a given path on the server), merge folders and overwrite files that already exist locally, and finally download only those files on the server that are smaller than a certain size (e.g. 10 MB).
How could I do that?
Use rsync.
Your command is likely to look like this:
rsync -az --max-size=10m your_username@remotehost.edu:foobar.txt /some/local/directory
-a (archive mode - the sync is recursive, transfers ownership, attributes, symlinks among other things)
-z (compresses transfer)
--max-size (only copies files up to a certain size)
There are many more flags which may be suitable. Check out the docs for more details - http://linux.die.net/man/1/rsync
First option: use rsync.
Second option, and it's not going to be a one-liner, but it can be done in three or four lines:
Create a tar archive on the remote system using ssh.
Copy the tar from remote system with scp.
Untar the archive locally.
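A minimal sketch of those three steps; the archive name and the remote path are placeholders (the first two steps could also be combined by piping tar through ssh):
ssh your_username@remotehost.edu 'tar czf /tmp/foobar.tar.gz -C /path/on/server .'
scp your_username@remotehost.edu:/tmp/foobar.tar.gz /some/local/directory/
tar xzf /some/local/directory/foobar.tar.gz -C /some/local/directory/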
If the creation of the archive gets a bit complicated and involves using find and/or tar with several options, it is quite practical to create a script locally which does that, upload it to the server with scp, and only then execute it remotely with ssh.