Using DB Browser, how do I Write Changes when running a query? - sqlite

When using DB Browser, changes are not written to file right away. In order to perform a write, the "Write Changes" button on the GUI needs to be used.
However, I have queries that I want to set up to write to file without my needing to click this button.
How do I write a query to Write Changes as part of the query?

I haven't found a separate "Write Changes" command, but you can run the SQLite-specific vacuum command to prompt for an immediate write.
This will bring up a write confirmation dialog, which can be accepted by simply pressing the space bar to perform the write immediately.
Example:
delete from "my_table";
vacuum;

Related

Automatically Reinstantiate a Template Instance in Tosca

I am working with a Test Case that takes input from an Excel file in Tosca. I'm using a Template Instance so the data from the Excel file gets loaded and I can use it in my test case. I know I can reinstantiate the Instance by clicking the Reinstantiate button, but I'm running the Test Case from an external source, so I can't go into Tosca and click the button every time I need to update the input (the Excel input data is different every time I run it).
Is there a way to make Tosca automatically Reinstantiate the Template Instance every time I run it?
No, unfortunately there is NO way to do it automatically. Even (re)instantiating several templates at once is not possible without great effort.

Symfony Command that keeps listening for stream input and updates screen information

I have a plan of writing a small command line tool that does the following (without getting into details):
Listen for input stream (a tail of some files)
Parse incoming data and update screen information in real time (like the top command does, for example)
Until the application is quit (CTRL + C), it keeps updating (not appending!) the information on the screen.
I prefer to work with the Symfony console.
Since it offers, for example, a progress bar that does update the screen, I expect this to be doable. (However, I don't need an actual progress bar.)
What I am not sure about is whether continuously listening for an input stream and updating the screen as information comes in is possible in this manner.
I can't find enough information on how to do this. Does anyone know if this is possible and what components I would need to:
Listen for input stream and trigger an event when information comes in
Update screen information
Any help would be appreciated.
Update:
For now I built the tool without using any framework. I wrote it myself using this "Listening for incoming streams" example and this "setting cursor position" example (and of course this referenced overview of commands).
However, I would still like to know whether and how this would be possible using Symfony's console components.
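For reference, here is a minimal sketch of what the Symfony route might look like, assuming Symfony Console 4.1+ (whose ConsoleSectionOutput can overwrite a screen region in place) and data piped in on STDIN; the parsing step is just a stub:

<?php
// Sketch only: assumes Symfony Console >= 4.1 and input piped to STDIN,
// e.g.  tail -f some.log | php tool.php
require __DIR__ . '/vendor/autoload.php';

use Symfony\Component\Console\Output\ConsoleOutput;

$output  = new ConsoleOutput();
$section = $output->section();       // a screen region we can overwrite in place

stream_set_blocking(STDIN, false);   // don't block while waiting for new data
$count = 0;

while (true) {                       // quit with CTRL + C
    $read = [STDIN]; $write = []; $except = [];
    if (stream_select($read, $write, $except, 1) > 0) {
        while (($line = fgets(STDIN)) !== false) {
            $count++;
            // parse $line here; this stub only shows a counter and the last line
            $section->overwrite(sprintf("lines: %d\nlast: %s", $count, trim($line)));
        }
    }
}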

Run function on VB.NET, leave page, then have it email result to user?

I have a rarely used admin function built in VB.NET. It prepares a very long and complicated document and takes a very long time to process after the user has hit the "Export PDF" button. What I would like is for the user to be able to hit the Export button, leave the page, and have the process keep going in the background; when it's done, it should email the user to let them know the report is ready, along with the URL of the generated file.
I'm unsure if this is possible. It's my understanding that if you leave a page before a process is finished, it will interrupt/cancel the process. I can't reprogram my function in another language because it's just too complex for me to attempt to do that, so I need to stick with VB.NET.
I realize that it's not good practice to have a function that takes a long time on the server, but as I said, this is rarely used by a select group of users, and I'd like to make it more convenient for them.
Anyone know the best method/if any to get this accomplished?
Thanks!
You can do one of these (a sketch of the first option follows the list):
Create a child thread (spawn) and let that thread create and email the PDF
Create a service and call that service to create and email the PDF
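A rough sketch of the first option, assuming ASP.NET Web Forms on .NET 4.5+; GetCurrentUserEmail, BuildVeryLongReport and SendMail are hypothetical stand-ins for your own code:

' Sketch only - the helper names are placeholders, not your actual functions.
Imports System.Threading.Tasks

Protected Sub ExportPdf_Click(sender As Object, e As EventArgs)
    ' Capture anything you need from the request before the page goes away.
    Dim userEmail As String = GetCurrentUserEmail()

    Task.Run(Sub()
                 Dim url As String = BuildVeryLongReport() ' the long-running export
                 SendMail(userEmail, "Your report is ready: " & url)
             End Sub)

    ' The page returns immediately; the task keeps running in the worker process.
    ' Caveat: an app-pool recycle can kill the task, which is why the separate
    ' service option is the safer of the two.
End Sub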

SQL Server database hangs on trigger execution

We have implemented 6-7 triggers on a table, 4 of which are update triggers. All 4 of these triggers require long processing because of data manipulation and conditions. Whenever a trigger executes, all the pages on the website stop responding and hang for every other user, from different systems as well. Even when we execute an update statement in SQL Server Management Studio on the table holding the triggers, it also hangs. Can we resolve this hanging issue by moving the trigger code into stored procedures and calling those stored procedures after the table's update statement?
I think a trigger blocks table access for other users while it executes. If not, can anyone suggest a solution?
Triggers are dangerous - they get fired whenever things happen, and you have no control over when and how often they fire.
You should definitely NOT do any time-consuming processing in a trigger! A trigger should be super fast, and lean.
If you need processing - let the trigger record the info needed into a separate "command" table, and have another process (e.g. a scheduled SQL Agent job) that checks that table for commands to be executed, and then executes those commands - separately, independently of the main application, in a separate execution path (a sketch of this pattern follows below).
Don't block your main app by doing excessive data processing / manipulation in a trigger! That's the wrong place to do this!
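For illustration, a minimal sketch of that pattern; the table and column names (MyTable, RowId, CommandQueue) are hypothetical:

-- Queue table the trigger writes to; a scheduled job drains it later.
CREATE TABLE dbo.CommandQueue (
    QueueId     int IDENTITY(1,1) PRIMARY KEY,
    RowId       int       NOT NULL,              -- key of the changed row
    QueuedAt    datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
    ProcessedAt datetime2 NULL
);
GO

-- The trigger only records WHAT changed - fast, set-based, no heavy work.
CREATE TRIGGER trg_MyTable_Update ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.CommandQueue (RowId)
    SELECT i.RowId FROM inserted AS i;
END;
GO

The SQL Agent job then reads unprocessed rows from dbo.CommandQueue, does the heavy lifting, and stamps ProcessedAt when done.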
Can we resolve this hanging issue by moving the trigger code into stored procedures and calling those stored procedures after the table's update statement?
You have a box that weighs a ton. Does it get lighter when you put it into some nice packaging?
A trigger is already compiled. Putting it into a stored procedure is just dressing it up differently.
Your problem is that you abuse triggers to do heavy processing - something they should not do by design. Change the design.
I think a trigger blocks table access for other users while it executes.
Well, triggers do NO SUCH THING - so you think wrong.
A trigger does what it is told to do, and an empty trigger sets zero locks (the locks come from whatever statement fires it). If you do set up a table-wide lock - fire whoever did that and redesign.
Triggers should be fast, light and be over fast. NO heavy processing in them.
Without actually seeing the triggers it's impossible to diagnose this confidently but here goes...
The TRIGGER won't set up a lock as such, but if it sets off other UPDATE statements, those will require locks; and if those UPDATE statements fire other triggers, you could have a chain reaction that produces the kind of grief you seem to be experiencing.
If that sounds like what might be happening, then removing the triggers and doing the processing explicitly by running a stored procedure at the end may fix it. If the stored procedure is rubbish you'll still have problems, but at least they'll be easier to fix. Try to ensure that the stored procedure only updates the records that need updating.
The main problem with shifting the functionality to a stored procedure that you run after the update is ensuring that it is in fact run every time.
If your ASP.NET skills are stronger than your T-SQL skills, then this should be a far easier problem to solve than untangling a web of SQL triggers.
The other issue is that between the update completing and the stored procedure completing, the records will be in an intermediate state, showing the initial change but not the remaining ones. This may or may not be a problem in your case (the sketch below shows one way around it).
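If that intermediate state does matter, one option is to wrap the update and the follow-up procedure in a single transaction; the table, column and procedure names here are hypothetical:

DECLARE @RowId int = 42, @NewValue int = 1;   -- placeholder values

BEGIN TRANSACTION;
    UPDATE dbo.MyTable
    SET    SomeColumn = @NewValue
    WHERE  RowId = @RowId;

    -- hypothetical procedure holding the logic pulled out of the triggers
    EXEC dbo.ProcessMyTableChanges @RowId = @RowId;
COMMIT TRANSACTION;

Other readers then see either no change or the fully processed change - at the cost of holding the locks for the duration of the processing, so keep the procedure lean.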

To run batch jobs one after the other

I am submitting jobs to the batch process one after the other.
How do I ensure that the second batch job runs only when the first one is finished?
Right now both jobs execute simultaneously, which I don't want to happen.
There are two options: you can do this through code, or just via manual setup. The manual method is fairly easy: go to (Basic>Inquiries>Batch Job), create a new batch job and save it. Then click "View Tasks" and create a new task; this will be your first batch task. Choose your class, description, batch group, etc., then save. Click "Parameters" to set up the parameters.
After that, you can set up your dependent task. Make sure both tasks have descriptions. Add your second batch task and save. Then, in the lower left corner, click on the task that you want to have a condition, add a row there, and set up your conditions so that one task won't run until the other has completed.
Via X++ code, you would create a BatchHeader where you set up basically the same thing we just did manually. You use .addDependency to make one task dependent on the completion of the other. This walkthrough will get you started with a job to create the batch header, and you'll just have to play around to get the dependency working (a rough sketch follows below).
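A rough X++ sketch of that dependency setup for AX 2012; FirstTask and SecondTask are hypothetical RunBaseBatch classes standing in for your two jobs:

static void ScheduleDependentJobs(Args _args)
{
    BatchHeader batchHeader = BatchHeader::construct();
    FirstTask   firstTask   = FirstTask::construct();    // hypothetical batch job
    SecondTask  secondTask  = SecondTask::construct();   // hypothetical batch job

    batchHeader.parmCaption("Two dependent batch tasks");
    batchHeader.addTask(firstTask);
    batchHeader.addTask(secondTask);

    // secondTask will not start until firstTask has finished successfully
    batchHeader.addDependency(secondTask, firstTask, BatchDependencyStatus::Finished);

    batchHeader.save();
}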
