For a Shiny app I am making, I have to define some variables in the global environment because they need to be available to many functions here and there. Some of these variables don't exist to start with and are created as the user interacts with the app. The app checks for the existence of these variables and, if they don't exist, it does something. However, after one session of use, the variables come into existence and stay in the global environment. When the user starts the app again, the app sees the variables in the global environment and behaves in a way it is not supposed to. Is there a way I can remove the variables I create just before the user terminates the app? Any help is highly appreciated.
A valid way to solve this would be to use the onStop function, as in:
onStop(function() cat("Session stopped\n"))
The linked documentation suggests using it within the server function.
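A minimal sketch of that approach (the variable names here are assumptions, not from the original app):

```r
library(shiny)

server <- function(input, output, session) {
  # ... app logic that creates globals as the user interacts, e.g.
  # assign("myVar", 42, envir = .GlobalEnv)

  onStop(function() {
    # Remove only the variables this app created, if they exist.
    appVars <- c("myVar", "myOtherVar")  # assumed names
    rm(list = intersect(appVars, ls(envir = .GlobalEnv)), envir = .GlobalEnv)
  })
}
```

When `onStop` is registered inside `server`, the callback runs as each client session ends, so the globals are gone before the next session starts.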
Create a function to cleanup when exiting using on.exit. on.exit records the expression given as its argument as needing to be executed when the current function exits (either naturally or as the result of an error). This is useful for resetting graphical parameters or performing other cleanup actions.
on.exit(rm(list = myListOfThings, envir = .GlobalEnv))
(assuming myListOfThings is a character vector of the variable names to remove)
I am testing my scheduled function via this approach:
firebase functions:shell
firebase> RUN_NAME_OF_THE_FUNCTION()
In this function, I am verifying if an action should be run, and if it should, I am sending emails. The problem - I can't differentiate between the test and prod environment as I do not know how to:
Pass an argument to the scheduled function
Understand the context of me running a local function.
Is there a way for me to somehow identify that the scheduled function was run manually?
Pass an argument to the scheduled function
Scheduled functions don't accept custom arguments, so it doesn't really make sense to pass one. They receive a context, and that's all they should expect.
Understand the context of me running a local function.
You can simply set an environment variable on your development machine prior to executing the function. Check this variable at the time of execution to determine that it's being tested, as opposed to invoked on a schedule.
You can also use environment variables to effectively "pass" data to the function for the purpose of development, if that helps you.
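A sketch of the environment-variable check (the variable name `LOCAL_TEST` and the helper names are assumptions, not Firebase APIs):

```javascript
// Detect a local `firebase functions:shell` run via an env var you set
// yourself on your development machine, e.g. LOCAL_TEST=1.
function isLocalTest() {
  return process.env.LOCAL_TEST === "1";
}

// Inside the scheduled function's handler you might branch on it:
function shouldSendEmails() {
  return !isLocalTest();
}

// Simulating `LOCAL_TEST=1 firebase functions:shell`:
process.env.LOCAL_TEST = "1";
console.log(isLocalTest());      // true
console.log(shouldSendEmails()); // false
```

On the production schedule the variable is simply absent, so `isLocalTest()` returns false and the emails go out.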
Weird case. I've got a number of firebase cloud functions. I've added a new one today. It deploys fine, but doesn't run. It's not possible to even call it for some reason. To isolate potential code errors in the new function, I've dropped another working function into this new one and it doesn't run either. If I replace the contents of an existing function by the new one's contents, then it runs. It's as if Firebase had silently introduced a limit on having new functions or just stopped running any new ones. I've tried it on two different instances so far and the issue persists.
To replicate, take an existing project with some functions. Duplicate one of the functions - say a simple https request - and give it a new name. The new function will be identical to the old one, but it won't run, with the browser saying: "Error: Forbidden. Your client does not have permission to get URL /newFunction from this server."
It's quite a weird behaviour especially as it's possible to get a new function to run only by replacing an older function with the contents of the new one and calling the older function. Then it runs fine and without any complaints from the server.
Anyone know what might be causing this and how to fix this strange behaviour?
Apparently, Firebase introduced a change as of January 15, 2020: new HTTP functions require authentication by default.
You have to specify whether a function allows unauthenticated invocation, at or after deployment. This can be done per function in the dashboard or via the console.
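For example, via the gcloud CLI (the function name and region here are placeholders):

```shell
gcloud functions add-iam-policy-binding newFunction \
  --region=us-central1 \
  --member="allUsers" \
  --role="roles/cloudfunctions.invoker"
```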
It would be good if the error message they provide explained all of that - would have saved me many hours of trouble-shooting.
Is it possible to initiate a command when exiting an R session, similar to the commands in the .Rprofile file, but just on leaving the session?
I know, of course, that a .RData file can be stored automatically, but since I am often switching machines, which might have different storage settings, it would be easier to execute a custom save.image() command per session.
The help for q can give some hints. You can either create a function called .Last or register a finalizer on an environment to run on exit.
> reg.finalizer(.GlobalEnv,function(e){message("Bye Bye")},onexit=TRUE)
> q()
Save workspace image? [y/n/c]: n
Bye Bye
You can register the finalizer in your R startup (e.g. .Rprofile) if you want it to be fairly permanent.
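A sketch of the `.Last` alternative mentioned in the help for `q` (the save path is an assumption):

```r
# Put this in your .Rprofile (or define it in the session).
# It runs when the session ends normally via q(), unless q(runLast = FALSE).
.Last <- function() {
  message("Saving workspace...")
  save.image(file = "~/my_workspace.RData")  # assumed custom path
}
```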
[edit: previously I registered the finalizer on a new environment, but that meant keeping the object around and not removing it, because garbage collection would trigger the finalizer. As I've now written it, the finalizer is hooked onto the global environment, which shouldn't get garbage collected during normal use.]
We have an SSIS package that is run via a SQL Agent job. We are initiating the job (via sp_startjob) from within an ASP.NET web page. The user logged into the UI needs to be recorded by the SSIS package that they initiate - hence we need to pass the userId to the SSIS package. The issue is that we cannot pass parameters to sp_startjob.
Does anyone know how this can be achieved, and/or know of an alternative to the above approach?
It cannot be done through sp_startjob. You can't pass a parameter to a job step so that option is out.
If you have no concern about concurrency (given that you can't have the same job running at the same time), you could probably hack it by changing your job step from type SQL Server Integration Services to something like an OS Command. Have the OS Command call a batch script that the web page creates/modifies. The net result is that you start your package like dtexec.exe /file MyPackage /Set \Package.Variables[User::DomainUser].Properties[Value];\"Domain\MyUser\" At that point, the variable DomainUser in your package would have the value Domain\MyUser.
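A sketch of the batch script the web page would regenerate per request (file names, paths, and the user value are placeholders):

```shell
REM run_package.cmd -- rewritten by the web page before calling sp_startjob
dtexec.exe /file "C:\SSIS\MyPackage.dtsx" ^
  /Set "\Package.Variables[User::DomainUser].Properties[Value];Domain\MyUser"
```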
I don't know your requirements, so perhaps you can just call into the .NET framework and start your package from the web page. Although you'd probably want to make that call asynchronous; otherwise, unless your SSIS package is very fast, the users might try to navigate away, spam refresh, etc. while waiting for the page to "work".
All of this by the way is simply pushing a value into an SSIS package. In this case, a user name. It doesn't pass along their credentials so calls to things like SYSTEM_USER would report the SQL Agent user account (or the operator of the job step).
I have a package with global variables for an open file (*os.File) and its associated logger.
Separately, I'm going to build several commands that use that package, and I don't want to open the file and set it as the logger every time I run a command.
So the first program to run would set the global variables, and here is my question: can the next programs that use the package access those global variables without problems? A command could be created with one flag to initialize those values before they are used by other programs, and another flag to tear them down (unset the global variables in the package).
If that is not possible, what is the best option to avoid this I/O overhead? A server on Unix sockets?
Assuming by 'program' you actually mean 'process', the answer is no.
If you want to share (perhaps customized) logging functionality between processes, then I would consider a daemon-like process/server (Go doesn't yet, AFAIK, support writing true daemons) and any kind of IPC you see fit.