BigQuery authentication - R

I have an R script which calls the BigQuery API and then executes some queries. It works fine if I start the script using a batch file. However, when I try to start the script as SYSTEM, it looks like it can't log into BigQuery. Maybe that is because the BQ authentication file (.httr-oauth) is valid for my user, not SYSTEM.
The info in the .httr-oauth file is hashed, so I can't change the user (if there is info about the user in there). Is there some way to make another .httr-oauth file for SYSTEM? Or is it another error I have bumped into?

Related

Making sqlite3_open() fail if the file already exists

I'm developing an application that uses SQLite for its data files. I'm just linking in the SQLite amalgamation source, using it directly.
If the user chooses to create a new file, I check to see if the file already exists, ask the user if they want to overwrite the file, and delete it if they say yes. Then I call sqlite3_open_v2() with flags set to SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE to create and open the new data file.
Which is fine, except: what happens if a malicious user recreates the file I'm trying to open between the file being deleted and SQLite opening it? As far as I'm aware, SQLite will just open the existing file.
My program doesn't involve passwords or any kind of security function whatsoever. It's a pretty simple app, all things considered. However, I've read plenty of stories where someone uses a simple app with an obscure bug in it to bypass the security of some system.
So, bottom line, is there a way to make sqlite3_open() fail if the file already exists?
You might be able to patch in support for the O_EXCL flag of open(2), if you are using SQLite on a platform that supports it.
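Alternatively, here is a minimal sketch of the same O_EXCL idea without patching SQLite (the helper name is made up, and this assumes an Apple platform where the SQLite3 module is available): create the file yourself with O_CREAT|O_EXCL, which fails atomically if the file already exists, then hand the now-existing empty file to sqlite3_open_v2(). SQLite treats an empty file as a valid empty database.

import Darwin
import SQLite3

// Hypothetical helper: create the database file exclusively, then let
// SQLite open it. open(2) with O_CREAT|O_EXCL fails with EEXIST if the
// file already exists, so a file snuck in between the delete and the
// open is detected rather than silently opened.
func createDatabaseExclusively(_ path: String) -> OpaquePointer? {
    let fd = open(path, O_CREAT | O_EXCL | O_RDWR, 0o644)
    guard fd >= 0 else { return nil }   // file already exists (or other error)
    close(fd)                           // SQLite reopens the file itself
    var db: OpaquePointer? = nil
    let rc = sqlite3_open_v2(path, &db,
                             SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, nil)
    return rc == SQLITE_OK ? db : nil
}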

Using MagicalRecord to prepopulate a database (Swift): -wal journal created, however, data not copied to database

(Forgive me, since I am not used to posting here. I will do my best.)
I am working on an iOS application that will be using a pre-populated database. In my workspace, I have a project for the iOS app and a separate command line tool project to populate the database. I have dragged the database file from the "/Library/Application Support" folder into my iOS project (without checking "Copy Items If Needed"). So when the data changes or additional data is required, I can just run the command line tool to pre-populate the data. After that I remove the app from the simulator and do a clean, and when I run the app, I would think everything would be OK.
It was driving me crazy for the longest time, but sometimes I don't see the changes reflected after I remove the app, clean the project, and then run. It seems the only way I can get this to work is, after I run the command line tool to pre-populate the database, to open the database file using Base, SQLiteStudio, or Firefox's SQLite add-on. Once I do that, it seems to work.
When I look in Finder, I do see the .sqlite, .sqlite-shm, and .sqlite-wal files. Before opening the database file, I see that the -wal file is the biggest (for now it's 2 MB). Once I open the file using Base, for example, and then close it, the .sqlite file is now 2 MB.
When the command line tool is about to finish, I have tried running PRAGMA statements on the file (vacuum, wal_checkpoint), but those did not work. What am I missing here? I also tried using NSManagedObjectContext.MR_defaultContext.saveToPersistentStoreAndWait.
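A rough sketch of what such a checkpoint attempt looks like (assuming the store name from the code below; wal_checkpoint(TRUNCATE) moves the -wal contents into the main file and empties the log, but only when no other connection, such as a still-open Core Data stack, holds the database):

import SQLite3

// Sketch: force the -wal contents into the main database file before
// shipping it. This only fully truncates the log if no other connection
// still has the database open.
var db: OpaquePointer? = nil
if sqlite3_open("callithome.sqlite", &db) == SQLITE_OK {
    sqlite3_exec(db, "PRAGMA wal_checkpoint(TRUNCATE);", nil, nil, nil)
    sqlite3_close(db)
}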
I am using the following code to set up and save.
// Set up the MagicalRecord Core Data stack backed by callithome.sqlite
MagicalRecord.setupCoreDataStackWithAutoMigratingSqliteStoreNamed("callithome.sqlite")
// Save synchronously using the current thread's context
MagicalRecord.saveUsingCurrentThreadContextWithBlockAndWait({ (context) -> Void in
    println("Data saved, I hope")
})
// Tear down the stack when done
MagicalRecord.cleanUp()
Any help would be appreciated
You need to enable the DELETE journal mode for your SQLite store. This will not create the -wal and -shm files, which makes it easier to have a single file to use as your pre-populated data store. You will need MagicalRecord 3.0, with some of its extra abilities to add custom options to your stores, if you'd like to go that route. However, you can still add a store to a coordinator and have that store configured with the proper pragma option as well; that is, MagicalRecord is not necessary if you use plain Core Data.
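Going the plain Core Data route, the store options would look roughly like this (a sketch; the model name and store URL are placeholders, and NSSQLitePragmasOption is Core Data's standard hook for passing SQLite pragmas):

import CoreData

// Sketch: add the SQLite store with journal_mode=DELETE so Core Data
// never creates the -wal/-shm files. "Model" and the store path are
// placeholders for your own names.
let modelURL = Bundle.main.url(forResource: "Model", withExtension: "momd")!
let model = NSManagedObjectModel(contentsOf: modelURL)!
let coordinator = NSPersistentStoreCoordinator(managedObjectModel: model)
let storeURL = URL(fileURLWithPath: "callithome.sqlite")
do {
    try coordinator.addPersistentStore(
        ofType: NSSQLiteStoreType,
        configurationName: nil,
        at: storeURL,
        options: [NSSQLitePragmasOption: ["journal_mode": "DELETE"]])
} catch {
    print("Failed to add store: \(error)")
}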

Single download file for multiple applications

I have a website on which I have published several of my applications.
Right now I have to update it each time one of the applications is updated.
The applications themselves check for updates so the user only visits the website if they don't have a previous version installed.
I would like to make this easier for myself by creating a single executable that, when downloaded and executed, checks with the database which version is the most recent, then downloads that one and runs its setup.
Now I could make a downloader for each application, but I would rather make something more universal, with a parameter or argument as the only difference.
For the downloader to 'know' which database to check for the most recent version, I need to pass that data to it.
My first thought was putting that in an XML file, so I would only have to generate a different XML file for each application, but then it wouldn't be a single executable anymore.
My second thought was using command-line arguments, like: downloader.exe databasename
But how would I do that when the file is downloaded?
Would a link like: "https://my.website.com/downloader.exe databasename" work?
How could I best do this?
rg.
Eric
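For what it's worth, a download link cannot carry command-line arguments; they only come into play when the file is actually launched. Reading the argument once the downloader starts is the easy part, as in this minimal sketch (Swift here for consistency with the other examples on this page; the idea is the same in any language):

import Foundation

// Sketch of "downloader.exe databasename": read the database name from
// the first command-line argument at startup.
guard CommandLine.arguments.count > 1 else {
    print("usage: downloader <databasename>")
    exit(1)
}
let databaseName = CommandLine.arguments[1]
print("Checking \(databaseName) for the most recent version...")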

Write-Ahead Logging and Read-Only mode compatible in SQLite3?

Open read-only
I have a sqlite3 file on a filesystem that belongs to a different user than is running the reading process. I want the reading process to be able to read the file in read-only mode, so I'm passing SQLITE_OPEN_READONLY. I would expect that to work. Surely the idea is that read-only mode works on files that we don't want to write to?
When I prepare my first statement I get
unable to open database file
Similarly, if I run the sqlite3 command-line tool I get the same result unless I sudo, which seems to confirm that the issue is writability rather than anything else.
Journal files
The answer to this question seems to suggest that if there are journal files around then read-only access isn't possible.
Why are there journal files? Because another process is writing to the file while my user process is trying to open it read-only. To allow this I am using Write-Ahead Logging, which produces two journal files, -shm and -wal. True enough: if I stop the writing process and remove the journal files, my user process can open the database in read-only mode.
Incompatibility?
So I have two situations:
If the file belongs to both the writing process and the read-only process, write-ahead logging lets process A write while process B reads.
If the file belongs to the writing process but not to the read-only process, the read-only process is blocked from opening it read-only.
How do I achieve both of these? To spell it out, I want:
Writing process owns database
Read-only process does not own database
Read-only process cannot write to database
Write-ahead logging is enabled on database
Seems like a simple set of requirements, but I can't see an obvious solution.
**EDIT:** Going by this documentation, it looks like this isn't possible. Can you suggest any alternative ways to achieve the above?
Yes: WAL-journaled databases cannot be opened read-only, whether explicitly or otherwise (i.e., in the case where the database file is read-only to the process).
If you require that the read-only process absolutely not be allowed to modify the database file, then the only thing that comes to mind is for the writing process to maintain an additional, non-WAL-journaled copy of the database.
Bottom line: to the best of my knowledge, WAL and read-only can't be done.
I think what the documentation is saying is that the WAL database itself may not be present on read-only media, which does not necessarily mean you cannot use SQLITE_OPEN_READONLY. In fact, I have successfully opened two connections on a WAL SQLite database, one read-write and one with SQLITE_OPEN_READONLY, and both work just fine. I tested an INSERT query on the read-only connection, and the statement correctly returned an error that the database is read-only.
Just make sure that the database is stored on media with write access, as a -shm file needs to be created and maintained, so even a 'read-only' connection may actually physically write something to disk - which doesn't necessarily mean that it can modify data using SQL.
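A sketch of that experiment (the file and table names are placeholders): open the WAL database with SQLITE_OPEN_READONLY, then confirm that SQL writes are rejected:

import SQLite3

// Sketch: a read-only connection on a WAL database. The open succeeds as
// long as the -shm file can be created and mapped; SQL writes still fail.
// Assumes a table t already exists in shared.sqlite.
var db: OpaquePointer? = nil
if sqlite3_open_v2("shared.sqlite", &db, SQLITE_OPEN_READONLY, nil) == SQLITE_OK {
    let rc = sqlite3_exec(db, "INSERT INTO t VALUES (1);", nil, nil, nil)
    print(rc == SQLITE_READONLY ? "write rejected, as expected" : "rc = \(rc)")
}
sqlite3_close(db)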

Problem with workflow on SharePoint email enabled document library

So, here is the scenario: I have a workflow on a document library that copies a file to a Windows directory. The workflow is set to start when a new item is added to the document library. Everything works fine when you manually upload files to the doc library, but the problem occurs when we use emails to populate the doc library instead of uploading files manually.
When an email is received, the workflow starts successfully and runs properly (I have kept workflow history entries to check whether each section of code is being executed), but it stops when it reaches the section where the file is copied to the Windows folder.
I basically think this is a permissions or access issue, because when we upload the file manually (i.e., from doc library > Upload) everything works fine. But maybe some other permission set is used when an email is received by the doc library. I have tried assigning permissions to "Everyone" on the Windows folder, but no luck.
Can someone let me know which Windows user account is used when an email is received by a document library? (I think it's the IIS default account, but isn't that included in Everyone?)
One solution I can think of is to use temporary impersonation for the specific code segment that writes the doc library file to the Windows folder, but any suggestions are welcome.
P.S. I don't have access to the server right now, so I can only devise approaches in my mind and can't test them yet. So it would be good to have all the suggestions you have, so that once I get access I can try everything. :D
This is a well-known situation. The system does not know who sent the email, so it cannot impersonate a user it has no knowledge of.
Depending on which version of SharePoint you are running, the workflow may not start at all or it may start under the account that published the workflow.
For details see this Microsoft Support Article.
