SQLite reader.Read() always returning false when executed from Task Scheduler

I have a C# program that uses a SQLite database to read/write data. The program requires UAC elevation, and I need it to be running at all times. When I run it manually, which I have to do via 'Run as administrator', the SQLite database functions normally and I can read/write data in the database file. My issue is when I try to have the program start automatically when the computer boots.
As I mentioned, I need this program to be running at all times. I have put a couple of things in place that re-execute the program in the event it crashes (which works great), but I also need it to start when the computer restarts. Normally this wouldn't be a problem, but the program requires UAC and I will rarely be around to click Yes on the UAC dialog, so from reading around it seems the only way to do this is to set up a task in Task Scheduler. So I have set up a task to run the program at startup. Upon testing, the program does execute but does not function correctly. On further debugging, I've found that every time my code reaches a SQLiteDataReader.Read() call, it returns false even though I know there are records there, and this only happens when the program is run through Task Scheduler. No errors seem to be coming from SQLite. I suspect file permissions are the issue, but I don't know how to resolve it.
A couple of things to note about what I've tried already:
1) In the Task Scheduler, I've set this up to execute using the same user account as I've been using to run it manually, which is also a Domain Admin, Admin, and a local Admin account.
2) The task is set to "run with highest privileges"
3) I've changed the security permissions to Full Control for just about every account I can think of (Admins, Domain Admins, Users, Everyone, etc.) on both the root folder of the program AND the SQLite database file.
4) I've even tried moving the entire application outside of the Program Files folder in case there was some sort of restricted access involved there as well.
I'm at my wits' end trying to figure this out. Any ideas on what to try next? Or other solutions to get this to run correctly at startup without user interaction?

I'm a bit late reporting back on this issue. Stupid on my part... The task simply needed the application's folder set as the 'Start in' path in Task Scheduler. Without it, the program wasn't finding the database file, since my code references it with a relative path. I personally don't understand why this doesn't just default to the app's folder, but you live and you learn and bang your head on everything in between.
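As an alternative (or in addition) to setting 'Start in', the program can resolve the database path against its own install folder instead of the process working directory, which Task Scheduler typically leaves pointing at C:\Windows\System32 when 'Start in' is blank. A minimal sketch using System.Data.SQLite, where app.db is a placeholder file name:

using System;
using System.Data.SQLite;
using System.IO;

class DbPathCheck
{
    static void Main()
    {
        // Build the database path from the executable's folder rather than the
        // current working directory. "app.db" is a placeholder file name.
        string dbPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "app.db");

        using (var connection = new SQLiteConnection("Data Source=" + dbPath + ";Version=3;"))
        using (var command = new SQLiteCommand("SELECT name FROM sqlite_master WHERE type = 'table'", connection))
        {
            connection.Open();
            using (SQLiteDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));
            }
        }
    }
}

Built this way, the path no longer depends on whatever working directory the scheduler happens to use.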

Related

Running a companion application at install

I have two WPF applications in the same solution. One is a configuration helper for the other and needs to be run before the 'big' app is run. In the VS Setup project I have included the Primary Output from both applications.
I want to run the configuration helper during the Commit phase of setup, so I added a Custom Action consisting of the Primary Output of the configuration helper and set InstallerClass to false.
When I run the resulting msi, both applications are installed in the same folder as desired, but I then get an error that 'a program run as part of the setup did not finish as expected.' The msi then uninstalls.
I was hoping the configuration helper would be kicked off as the msi exits, but would also be happy with the installer hanging open until the configuration helper exits.
What am I missing?
The program you ran as a custom action has failed, probably crashed. It may need some extra error checking or tracing to see what's going on. Programs that run as custom actions are not in the same environment as programs launched from the interactive user's desktop: the working directory is probably not what you expect (so file paths must be specified in full), and the custom action is probably running under the system account, because that's how 'Everyone' (per-machine) installs work. Any assumptions about user locations (the interactive user's desktop, user folders, access to the network, access to databases, the ability to show forms) will be wrong and are likely failure points. It's better to run configuration tools like this when the app first starts, because then you are running in a normal user environment.
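If the helper really does have to run as a custom action, one common workaround for the working-directory problem is to resolve every file it touches relative to its own location rather than the current directory. A minimal sketch; the helper and config file names in the usage comment are placeholders:

using System.IO;
using System.Reflection;

static class InstallPaths
{
    // Resolve a file that sits next to the installed executable, instead of
    // relying on the working directory, which is not the install folder when
    // the EXE runs as an MSI custom action.
    public static string NextToExe(string fileName)
    {
        string exeDir = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
        return Path.Combine(exeDir, fileName);
    }
}

// Example usage: string settingsPath = InstallPaths.NextToExe("ConfigHelper.exe.config");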

Updating code on production server when using Go

When I develop with PHP and update files on the production server, I just copy the files over on the fly and everything keeps working without interrupting the server.
But to update the code of a Go server application, I would need to kill the server, copy the source files to the server, run go install, and then start the server again. That interrupts the service, and if I do it often it is going to look very bad to the users of the service.
How can I update files without downtime when using Go and Go's http server?
PHP is an interpreted language, which means you provide your code in source format and the PHP interpreter will read it and execute it (it may create a more compact binary form so that it doesn't have to analyze the source again when needed).
Go is a compiled language; it compiles into a native executable binary. Going further, that binary is statically linked, which means all the code and libraries your app refers to are compiled and linked in when the executable is created. This implies you can't just "drop in" new Go code into a running application.
You have to stop your running application and start the new version. You can however minimize the downtime: only stop the running application when the new version of the executable has already been created and is ready to run. You may choose to compile it on a remote machine and upload the binary to the server, or upload the source and compile it on the server; it doesn't matter.
With this you can decrease the downtime to a few seconds at most, which your users won't notice. Also, you shouldn't be updating every hour; you can't really produce a significant update in just an hour of coding. You could schedule updates daily (or even less frequently), at hours when your traffic is low.
If even a few seconds of downtime is not acceptable to you, then you should look for platforms which handle this for you automatically, without any downtime. Check out Google App Engine - Go for example.
The grace library will allow you to do graceful restarts without annoying your users: https://github.com/facebookgo/grace
That said, in my experience restarting a Go application is so quick that, unless you run a high-traffic website, it won't cause any trouble.
First of all, don't do it in that order. Copy and install first, then stop the old process and start the new one.
If you run multiple instances of your app, then you can do a rolling update, so that when you bounce one server, the other ones are still serving. A similar approach is to do blue-green deployments, which has the advantage that the code your active cluster is running is always homogeneous (whereas during a rolling deploy, you'll have a mixture until they've all rolled), and you can also do a blue-green deployment where you normally have only one instance of your app (whereas rolling requires more than one). It does however require you to have double the instances during the blue-green switch.
One thing you'll want to take into consideration is in-flight requests -- you may want to make sure they continue to go to the old-code servers until they're finished.
You can also look into Platform-as-a-Service solutions, that can automate a lot of this stuff for you, plus a whole lot more. That way you're not ssh'ing into production servers and copying files around manually. The 12 Factor App principles are always a good place to start when thinking about ops.

Quartz.NET: in the ASP.NET web application vs a console application

I need to run 4 background jobs for cleaning temp files and processing some files. I have chosen Quartz.NET for the job.
I have an ASP.NET website, which accepts uploaded files that will be processed by the Quartz jobs at night.
First I thought about making a console application for the Quartz jobs, keeping the website and the jobs totally decoupled.
But then I saw that I will need some config values (connection string and paths to files) that are in the ASP.NET web.config. So a question came to my mind:
Should I run the jobs through the ASP.NET instance, or should I do this in a console application?
Furthermore, I want the website to show a special page (like "We are processing the files...") while the Quartz jobs are running.
What I care about most is performance: I don't want the website to be affected by the Quartz jobs, nor the jobs' performance affected by the website.
So, what should I do? Have you done something like this and can you give me some advice?
Should I run the jobs through the ASP.NET instance, or should I do this in a console application?
If you want to have to manually trigger them each night, sure. But a console application launched by the host system's task scheduler seems like a more automated solution. A web application is more of a request/response system; it's not really suited to periodic or long-running actions. Scheduling some sort of background operation on the host, such as a scheduled console application or a Windows service, would serve that purpose better.
Note that if it truly needs to be unattended and run even when nobody is logged in to the server console, a Windows service may be a better fit than a console application.
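For the console (or service) route, a minimal Quartz.NET host can be as small as the sketch below. This uses the Quartz.NET 2.x-style synchronous API; the CleanupJob class, group names, and cron expression are placeholders:

using System;
using Quartz;
using Quartz.Impl;

// Placeholder job: replace the body with the real temp-file cleanup / file processing.
public class CleanupJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        Console.WriteLine("Running cleanup at {0}", DateTime.Now);
    }
}

class Program
{
    static void Main()
    {
        IScheduler scheduler = new StdSchedulerFactory().GetScheduler();
        scheduler.Start();

        IJobDetail job = JobBuilder.Create<CleanupJob>()
            .WithIdentity("cleanup", "maintenance")
            .Build();

        // Fire every night at 02:00.
        ITrigger trigger = TriggerBuilder.Create()
            .WithIdentity("nightly", "maintenance")
            .WithCronSchedule("0 0 2 * * ?")
            .Build();

        scheduler.ScheduleJob(job, trigger);

        Console.WriteLine("Scheduler started. Press Enter to exit.");
        Console.ReadLine();
        scheduler.Shutdown(true); // wait for running jobs to complete
    }
}

If you instead let Windows Task Scheduler fire the work each night, you could skip the Quartz trigger entirely and just do the work in Main; Quartz earns its keep when the process stays resident (for example as a Windows service) and manages its own schedule.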
I've seen that I will need some config values (connection string and paths to files) that are in the ASP.NET web.config
Console applications have an App.config file, which serves the same purpose. You can use that.
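A minimal sketch of reading those values in the console app; the "MainDb" and "UploadPath" names are placeholders. Copy the relevant entries from web.config into the console project's App.config and reference System.Configuration:

using System.Configuration;

class ConfigExample
{
    static void Main()
    {
        // App.config uses the same schema as web.config:
        // <connectionStrings>
        //   <add name="MainDb" connectionString="..." providerName="System.Data.SqlClient" />
        // </connectionStrings>
        // <appSettings>
        //   <add key="UploadPath" value="..." />
        // </appSettings>
        string connectionString =
            ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString;
        string uploadPath = ConfigurationManager.AppSettings["UploadPath"];

        System.Console.WriteLine("{0} / {1}", connectionString, uploadPath);
    }
}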
I want the website to show a special page while the Quartz jobs are running
You definitely want to keep the two de-coupled. But you may be able to accomplish this easily enough. Maybe have some sort of status flag in the database which indicates if any particular record is "currently being processed". The website can simply look for any records with that flag when a page loads and display that message.
There are likely a couple of different ways to synchronize status here; it doesn't really matter which you choose. What does matter is that the systems remain decoupled and that any status which is persisted is handled somewhat carefully, so an errant process can't leave an incorrect status behind. (For example, a background task sets a status of "processing" and then fails in some way; the website would forever indicate that it's processing.)
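One possible shape for that flag handling, with hypothetical table and column names (UploadedFile, Status); the point is that a failure flips the row to a terminal state instead of leaving it stuck at "Processing":

using System;
using System.Collections.Generic;
using System.Data;

public class FileProcessor
{
    private readonly IDbConnection _connection;

    public FileProcessor(IDbConnection connection)
    {
        _connection = connection;
    }

    public void ProcessPendingFiles(IEnumerable<int> pendingFileIds)
    {
        foreach (int fileId in pendingFileIds)
        {
            SetStatus(fileId, "Processing");
            try
            {
                // ... actual work on the uploaded file goes here ...
                SetStatus(fileId, "Done");
            }
            catch (Exception)
            {
                // Don't leave the row stuck at "Processing" if the job blows up.
                SetStatus(fileId, "Failed");
                throw;
            }
        }
    }

    private void SetStatus(int fileId, string status)
    {
        using (IDbCommand command = _connection.CreateCommand())
        {
            command.CommandText = "UPDATE UploadedFile SET Status = @status WHERE Id = @id";
            AddParameter(command, "@status", status);
            AddParameter(command, "@id", fileId);
            command.ExecuteNonQuery();
        }
    }

    private static void AddParameter(IDbCommand command, string name, object value)
    {
        IDbDataParameter parameter = command.CreateParameter();
        parameter.ParameterName = name;
        parameter.Value = value;
        command.Parameters.Add(parameter);
    }
}

The website then just checks for any rows with Status = 'Processing' when a page loads and shows the "We are processing the files..." banner if it finds one.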

Error 0x800401F3: one process works, one doesn't, same website

I'm stuck on this one. I hope someone here has some experience with this. Here is the situation: I have set up a web page that allows users to upload flat files to be loaded into SQL Server 2005 using SSIS. There are two different SSIS processes, depending on the file type. The decision of which SSIS process to use is made by the user on the website.
Once the file is uploaded by the user, the process is started by a .NET Process object. The command line is the normal command line you'd expect for starting dtexec with a specific SSIS package file and setting a couple of variables. For example:
dtexec /f /De /set value
The ASP.NET anonymous user is running as a domain user account. All SSIS package files for both SSIS processes are in the same directory, and the domain user account has full privileges on that directory. The same method in ASP.NET starts either process; the only difference is the WebMethod called by the website, one for each file type. It is in these WebMethods that the unique arguments are assigned to the dtexec command line.
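For reference, that kind of launch looks roughly like the sketch below; the package path, variable name, and /SET property path are illustrative, not the actual values from this setup:

using System;
using System.Diagnostics;

class DtexecRunner
{
    static void RunPackage(string packagePath, string inputFile)
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = "dtexec",
            Arguments = "/F \"" + packagePath + "\" " +
                        "/SET \\Package.Variables[User::InputFile].Value;\"" + inputFile + "\"",
            UseShellExecute = false,
            RedirectStandardOutput = true,
            CreateNoWindow = true
        };

        using (Process process = Process.Start(startInfo))
        {
            // Capture the standard output so validation/runtime errors show up somewhere visible.
            string output = process.StandardOutput.ReadToEnd();
            process.WaitForExit();
            Console.WriteLine(output);
        }
    }
}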
Here is where I have run into the problem. Process "1" runs fine from the website, but process "2" fails with the error mentioned above. When I capture the standard output I receive this:
Microsoft (R) SQL Server Execute Package Utility Version 9.00.4035.00 for 32-bit
Copyright (C) Microsoft Corp 1984-2005. All rights reserved.
Started: 10:34:14 AM
Could not create DTS.Application because of error 0x800401F3
Started: 10:34:14 AM
Finished: 10:34:14 AM
Elapsed: 0.016 seconds
I don't understand how everything can be nearly identical yet only one will run. One final thing: both methods work fine when I test directly from Visual Studio. I figure it must be something with the anonymous user account being used, but I can't figure out why one process works and the other doesn't when they are so similar.
Any help will be greatly appreciated.
Rob
Found the problem. The error code was a phantom. What happened was that a connection component was being fed by a variable holding a path to a folder the new account could not access. Even though during execution it would have been replaced with a good target, it was failing during validation. This is also why there were no logs: I didn't have the logging level high enough to see it, and it acted like a security issue, which it was, in a way of looking at it.

Strategy for handling user input as files

I'm creating a script to process files provided to us by our users. Everything happens within the same UNIX system (running on Solaris 10).
Right now our design is this:
User places file into upload directory
Script placed on cron to run every 10 minutes.
Script looks for files in upload directory, processes them, deletes immediately afterward
For historical/legacy reasons, #1 can't change. Also, deleting the file after processing is a requirement.
My primary concern is concurrency. It is very likely that the analysis script will run while an input file is still being written. In that case data will be lost, and that is (obviously) unacceptable.
Since we have no control over the users' chosen means of placing the input file, we cannot require them to obtain a file lock. As I understand it, file locks on UNIX are advisory only, so a user must choose to adhere to them.
I am looking for advice on best practices for handling this problem. Thanks
Obviously all the best solutions involve the client providing some kind of trigger indicating that it has finished uploading. That could be a second file, an atomic move of the file into a processing directory after writing it to a staging directory, or a REST web service. I will assume you have no control over your clients and are unable or unwilling to change anything about them.
In that case, you still have a few options:
You can use a pretty simple heuristic: check the file size, wait 5 seconds, check the file size again. If it didn't change, it's probably safe to process.
If you have super-user privileges, you can use lsof to determine if anyone has this file open for writing.
If you have access to the thing that handles the upload (HTTP, FTP, a setuid script that copies files?), you can of course put triggers in there.
