We have an NFS share where folders on a Unix machine are mounted on a Windows server over NFS.
Even after setting the permissions to 775 on the Unix machine for a folder,
the same permissions are not reflected on files created in that folder by a Java process.
So we have a folder like /nobackup/stream on the Unix machine, mounted on the NFS server.
Permissions on the Unix machine:
ls -ald /nobackup/stream
rwxrwxr-x owner group
We have an automation process writing result logs and sub-directories to the stream folder.
For some weird reason the files are getting created with permission
rwxr-xr-x owner group
i.e. write access for the group is not present.
This is causing our automation to fail when, at certain places, a process running with group privileges tries to update files created with the above permissions.
Initially the suspect was umask, so we set umask to 0002 in the Perl process which starts the automation. That did not help.
Files.mkdir is being used to create the files; here the POSIX permissions and the umask are correct, yet the new files are still not getting created with the correct permissions.
Also note that the automation runs under a Cygwin shell, in case that is causing the trouble.
How can I ensure that the file permissions are always set correctly?
The problem is that the automation is running in Cygwin. Those files are still written by the Windows NFS client, which has no clue how to interpret the permissions set in Cygwin.
You need to set the default permissions in the Windows NFS client. You can do this from the command line with nfsadmin. Something like:
nfsadmin client [ComputerName] config fileaccess=664
Source: https://technet.microsoft.com/en-us/library/cc754304(v=ws.11).aspx
I have a fairly simple flask app which runs fine on my local machine. The app uses sqlite3. I am trying to deploy to a CentOS machine running nginx and uwsgi. The app starts but when I try to access the site through chrome, it raises an exception:
sqlite3.OperationalError: unable to open database file
I believe I have all the permissions correct: the user starting the app has ownership of the database file, all the directories have 777 permissions, and the database has 665 permissions. nginx is started using sudo.
I have combed through all the existing posts about this kind of thing. People talk about permissions, but I am pretty sure I have those correct. The name of the file is correct.
DATABASE = 'sqlite:////home/.../firstDB.db'
I get the same error if the database points to a nonexistent file. What else could be going wrong?
So it turns out that the sqlite:/// file-name prefix is incorrect here. I don't understand this, as it worked before. I just used the plain file name and it works now.
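For reference, the sqlite:///... form is a SQLAlchemy-style connection URL; the standard library sqlite3 module expects a plain file path. A minimal sketch of the difference (the path below is a placeholder, not the actual file):

import sqlite3

# What sqlite3.connect() expects: a plain filesystem path
DATABASE = '/home/user/firstDB.db'   # placeholder path
conn = sqlite3.connect(DATABASE)
conn.execute('SELECT 1')
conn.close()

# A URL like 'sqlite:////home/user/firstDB.db' is only understood by
# SQLAlchemy's create_engine(); handed to sqlite3.connect() it is treated
# as a literal file name, which typically fails with
# "unable to open database file".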
I have a small Perl-based CGI application, which I am running in the project web space provided for a SourceForge project. This application stores data in a SQLite (v. 3) database file.
When I run test scripts from a shell, I can read from and write to this SQLite file. However, when the CGI code is executed by Apache, it has read-only access. Write operations result in a log file error:
error.log.web-2:[Wed Oct 27 14:40:22 2010] [error] [client 127.0.0.1] DBD::SQLite::db do failed: unable to open database file
For testing purposes, I have cranked the permissions for that SQLite file all the way up to 777. No difference.
However, there are some funny caveats to SourceForge's project web space, and I wonder if I'm being tripped up by that. Generally, the main web server filesystem is read-only to Apache. If you have files that need to be writable at runtime, you're supposed to store them in a special "persistent" directory elsewhere... and create symlinks from your web space to the actual files under that directory.
I have done this, and the permissions are set to 777 for both the symlink and the actual SQLite file under the "persistence" location. I know this mechanism works in general, because I'm doing the same thing with cache and log files and it works there.
I'm wondering if there's anything funky about SQLite itself, along the lines of it not wanting to open a symlink (rather than a raw file) for writing.
I believe the answer to this question is that it can't be done. Further research into SQLite tells me that the driver must get a lock on the database file before it can do any write operations. This type of lock cannot be obtained when the actual file is on a different machine with its filesystem cross-mounted.
I believe this is the case with SourceForge project web space hosting. It looks like the (writable) "persistent" directory is actually on a totally separate machine from the read-only web server filesystem.
In short, if you stumble across this question because you're having the same issue... either look for different web space hosting, or else it may be time to re-work your app and step up to MySQL or some other DB (SourceForge gives you free MySQL hosting anyway).
Another issue is if you have permissions for the specific db file but you don't have permission to make the temporary files in the directory. (Mixed permissions, or too restrictive permissions)
https://www.sqlite.org/tempfiles.html
If you can't write the temporary files then you can't do any writes on an SQLite database file. Switching to a :memory: database could get you by, or maybe you could use the pragma mentioned by @bob.faist, PRAGMA temp_store = MEMORY, but really you should diagnose and fix the permissions problem if possible.
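For illustration, a minimal Python sketch of that pragma, assuming the standard sqlite3 module and a placeholder app.db path:

import sqlite3

conn = sqlite3.connect('app.db')   # placeholder database file

# Keep SQLite's temporary files (temp tables/indices, materializations, ...)
# in memory instead of on disk. Note the rollback/WAL journal is still
# created next to the database file, so that directory normally needs to be
# writable anyway.
conn.execute('PRAGMA temp_store = MEMORY')

conn.execute('CREATE TABLE IF NOT EXISTS t (x INTEGER)')
conn.execute('INSERT INTO t VALUES (1)')
conn.commit()
conn.close()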
Use these commands to see if you have permission to write in those file locations.
ls -l app.db
getfacl app.db
ls -l -d . # check the directory to see if you can write the temp files there
getfacl .
Use chmod or setfacl -m to fix the files or folders to let you write to them.
Also check your disk space.
df -k
If it shows that the partition where the database file is located, or where SQLite is trying to write its temporary files, is full, you could also get these kinds of issues.
Hope that helps.
I have an uninstall issue with an app which is using sqlite: during installation a blank sqlite db is created in [CommonAppData]\MyApp\mydb.sqlite, e.g. C:\Documents and Settings\All Users\Application Data\MyApp\mydb.sqlite. When I uninstall my app it can't delete the sqlite db, despite removing the applications that are connecting to it. Using Process Explorer I can see that it's explorer.exe that has a lock on the MyApp folder (not on the sqlite file).
I've not seen this sort of thing before. Is it possible this is caused by the app not correctly closing/disposing of connections? I understand that at some level windows manages the fact that several threads & processes are accessing my db file, and it handles the locking. Is it possible that if my app isn't closing connections etc correctly then windows gets confused about whether the file is locked or not?
Or is that not possible and it must simply be something wrong with my MSI?
thanks for any suggestions!
UPDATE: not only can I not delete the folder or file, if I create a new file in that folder (e.g. a new txt doc) I can then not delete that file! So it's some wacky lock on the folder....
UPDATE: actually... it might just be permissions on that folder! In my MSI I was setting permissions on that folder, and I think I didn't grant delete rights, so when I uninstalled I didn't have permission to delete it :-/
Use handle.exe from the SysInternals collection to find out what has the remaining handle to the file.
It could also be your MSI, so check you are doing things in the right order by running the uninstall with verbose logging:
msiexec /x mymsi.msi /l*v mylog.txt
How can I change an SQLite database from read-only to read-write?
When I executed the update statement, I always got:
SQL error: attempt to write a readonly database
The SQLite file is a writeable file on the filesystem.
There can be several reasons for this error message:
Several processes have the database open at the same time (see the FAQ).
A plugin to compress and encrypt the database is in use; it does not allow modifying the DB.
Lastly, another FAQ says: "Make sure that the directory containing the database file is also writable to the user executing the CGI script." I think this is because the engine needs to create more files in the directory (a quick way to check this is sketched below).
The whole filesystem might be read only, for example after a crash.
On Unix systems, another process can replace the whole file.
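As mentioned above, here is a small Python sketch (the path is a placeholder) that checks write access on both the database file and its containing directory:

import os

db_path = '/var/www/data/app.db'            # placeholder path
db_dir = os.path.dirname(db_path) or '.'

# SQLite needs write access to the directory too, because journal/WAL
# files are created alongside the database file.
print('file writable:     ', os.access(db_path, os.W_OK))
print('directory writable:', os.access(db_dir, os.W_OK | os.X_OK))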
I solved this by changing the owner from "root" to my own user on all files in the database's folder.
Just do ls -l on that folder, and if any of the files are owned by root, change them to your user, like:
# For each file do:
sudo chown "$USER":"$USER" /path/to/my-folder/file.txt
# Or "R"ecursive.
sudo chown -R "$USER":"$USER" /path/to/my-folder
(this error message is typically misleading, and is usually a general permissions error)
On Windows
If you're issuing SQL directly against the database, make sure whatever application you're using to run the SQL is running as administrator
If an application is attempting the update, the account that it uses to access the database may need permissions on the folder containing your database file. For example, if IIS is accessing the database, the IUSR and IIS_IUSRS may both need appropriate permissions (you can try this by temporarily giving these accounts full control over the folder, checking if this works, then tying down the permissions as appropriate)
This error usually happens when your database is accessed by one application already, and you're trying to access it with another application.
To share the personal experience I had with this error and what eventually fixed it: it might not be related to your issue, but this error is so generic that it can be attributed to a gazillion things.
Database instance open in another application. My DB appeared to have been in a "locked" state, so it transitioned to read-only mode. I was able to track it down by stopping the second instance of the application that was sharing the DB.
Directory tree permissions - be sure the user account has permission not just at the file level but at every directory level above it, all the way up to /.
Thanks
If using Android.
Make sure you have added the permission to write to your EXTERNAL_STORAGE to your AndroidManifest.xml.
Add this line to your AndroidManifest.xml file above and outside your <application> tag.
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
This will allow your application to write to the SD card. This will help if EXTERNAL_STORAGE is where you have stored your database on the device.
On Windows 10, after a system crash, DB Browser would only open the DB as read-only.
Simply deleting the journal file fixed it.
In a Linux command shell, I did:
chmod 777 <db_folder>
where <db_folder> is the directory that contains the database file.
It works. Now I can access my database and run insert queries.
On Windows:
tl;dr: Try opening the file again.
Our system was suffering this problem, and it definitely wasn't a permissions issue, since the program itself would be able to open the database as writable from many threads most of the time, but occasionally (only on Windows, not on OSX), a thread would get these errors even though all the other threads in the program were having no difficulties.
We eventually discovered that the threads that were failing were only those that were trying to open the database immediately after another thread had closed it (within 3 ms). We speculated that the problem was due to the fact that Windows (or the sqlite implementation under Windows) doesn't always immediately clean up file resources upon closing of a file. We got around this by running a test write query against the db upon opening (e.g., creating then dropping a table with a silly name). If the create/drop failed, we waited for 50 ms and tried again, repeating until we succeeded or 5 seconds elapsed.
It worked; apparently there just needed to be enough time for the resources to flush out to disk.
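The actual code isn't shown here, but a rough Python sketch of the same retry idea (the probe table name and timings are illustrative only):

import sqlite3
import time

def open_db_with_retry(path, timeout_s=5.0, delay_s=0.05):
    # Open the database and prove it is really writable, retrying briefly to
    # ride out the window right after another thread/process closed the file.
    deadline = time.monotonic() + timeout_s
    while True:
        conn = None
        try:
            conn = sqlite3.connect(path)
            # Throwaway write: create and drop a table with a silly name.
            conn.execute('CREATE TABLE IF NOT EXISTS _open_probe (x INTEGER)')
            conn.execute('DROP TABLE _open_probe')
            conn.commit()
            return conn
        except sqlite3.OperationalError:
            if conn is not None:
                conn.close()
            if time.monotonic() >= deadline:
                raise
            time.sleep(delay_s)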
On Ubuntu, change the group to the Apache group (www-data) and grant the right permissions (no, it's not 777):
sudo chgrp www-data <path to db.sqlite3>
sudo chmod 664 <path to db.sqlite3>
Update
You can set the owner and group in one go as well.
sudo chown www-data:www-data <path to db.sqlite3>
If a <db_name>.sqlite-journal file exists in the same folder as the DB file, it means your DB is currently open and in the middle of some changes (or it was at the moment the DB folder was copied). If you try to open the DB at that moment, the error attempt to write a readonly database (or similar) can appear.
As a solution, wait until <db_name>.sqlite-journal disappears, or remove it (not recommended on a working system).
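A tiny sketch of the "wait for the journal to go away" option, assuming Python and a placeholder database name:

import os
import time

db_path = 'mydb.sqlite'                # placeholder name
journal = db_path + '-journal'

# Wait up to ~30 seconds for the pending transaction to finish
# before opening the database.
for _ in range(300):
    if not os.path.exists(journal):
        break
    time.sleep(0.1)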
I had this problem today, too.
It was caused by ActiveSync on Windows Mobile - the folder I was working in was synced, so the ActiveSync process grabbed the DB file from time to time, causing this error.
On Linux, give read/write permissions to the entire folder containing the database file.
Also, SELinux might be blocking the write. You need to set the correct permissions.
In my SELinux Management GUI (on Fedora 19), I checked the box on the line labelled httpd_unified (Unify HTTPD handling of all content files), and I was good to go.
I'm using SQLite on an ESP32 and all the answers here seem "very strange" to me....
When I look at the data on the flash of the ESP I notice there is only one file for the whole DB (there is also a temp file).
In this DB file we have of course the user tables but also the system tables, "sqlite_master" for example, which contains the definition of the tables.
So it seems hard to believe this can be a "chmod" problem, because if the file were read-only, even creating a table would be impossible, as SQLite would be unable to write the "sqlite_master" data...
So I think our friend user143482 is trying to access a "read-only" table. In the SQLite source code we can see a function named tabIsReadOnly with this comment:
/* Return true if table pTab is read-only.
**
** A table is read-only if any of the following are true:
**
** 1) It is a virtual table and no implementation of the xUpdate method
** has been provided
**
** 2) It is a system table (i.e. sqlite_master), this call is not
** part of a nested parse and writable_schema pragma has not
** been specified
**
** 3) The table is a shadow table, the database connection is in
** defensive mode, and the current sqlite3_prepare()
** is for a top-level SQL statement.
*/
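To illustrate case 2 with a quick Python sketch (the exact error text can vary between SQLite versions):

import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (x INTEGER)')

try:
    # sqlite_master is a system table, so without the writable_schema
    # pragma it is treated as read-only (case 2 in the comment above).
    conn.execute("UPDATE sqlite_master SET name = 'renamed' WHERE name = 't'")
except sqlite3.OperationalError as e:
    print(e)
conn.close()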