Loading and compiling templates in Ratpack: blocking or not?

I'm adding Pebble template support to my Ratpack application, and there is one matter that bothers me: should my RendererSupport instance use Ratpack's Blocking.get() or not? As PebbleEngine has its own cache, I can't tell whether it will be loading the template source from disk, so it is (possibly) an IO operation.
Looking at the Handlebars templating implementation, I can't see any special treatment of the IO operation.
So my question is: is it the rule of thumb to use Blocking for all potentially IO-bound operations (e.g. filesystem or database access), or is there some more nuanced rule?

If Pebble's cache is indefinite (Handlebars' is), then I would say you can do the same as Ratpack's Handlebars integration does: depend on the cache and run the code that can potentially load the template from disk on a compute thread. You will pay a performance penalty every time a template is loaded for the first time (because you will run blocking code on a compute thread), but it will go away as your cache coverage increases.
Note that there is an issue in the tracker that aims at removing that performance penalty for the Handlebars integration by precompiling templates and thus populating the cache on startup.
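
If you would rather avoid that first-load penalty on a compute thread, you can wrap the lookup in Blocking.get() instead. Below is a minimal sketch of what that could look like - the renderer and PebbleTemplateModel class names are made up for illustration, and the exact Pebble package names depend on the Pebble version you use:

    import java.io.StringWriter;
    import java.util.Map;

    import com.mitchellbosecke.pebble.PebbleEngine;
    import com.mitchellbosecke.pebble.template.PebbleTemplate;

    import ratpack.exec.Blocking;
    import ratpack.handling.Context;
    import ratpack.render.RendererSupport;

    // Hypothetical value object describing what to render: a template name plus its model.
    public class PebbleTemplateModel {
      private final String name;
      private final Map<String, Object> model;

      public PebbleTemplateModel(String name, Map<String, Object> model) {
        this.name = name;
        this.model = model;
      }

      public String getName() { return name; }
      public Map<String, Object> getModel() { return model; }
    }

    class PebbleTemplateRenderer extends RendererSupport<PebbleTemplateModel> {
      private final PebbleEngine engine;

      PebbleTemplateRenderer(PebbleEngine engine) {
        this.engine = engine;
      }

      @Override
      public void render(Context ctx, PebbleTemplateModel spec) throws Exception {
        // Template lookup/compilation may hit the filesystem on a cache miss,
        // so hand it to the blocking executor and resume on a compute thread.
        Blocking.get(() -> {
          PebbleTemplate template = engine.getTemplate(spec.getName());
          StringWriter out = new StringWriter();
          template.evaluate(out, spec.getModel());
          return out.toString();
        }).then(html -> ctx.getResponse().send("text/html", html));
      }
    }

Either way the cost is only paid on cache misses; once the cache is warm the render is pure in-memory work.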

Related

SQLite: how to totally clear the shared-cache?

I'm experimenting with enabling the shared cache in a SQLite implementation I'm working on. In the actual app everything works fine, but my unit tests are now failing with "disk I/O error"s. I'm assuming this is because the shared cache is making assumptions about the file that are no longer valid once it's been deleted.
How can I clear out this shared cache data? I've tried running sqlite3_shutdown() followed by sqlite3_initialize() but the problems persist.
I've actually discovered that some of my tests weren't closing connections properly, and that was the source of my problem - shared-cache was just highlighting it (though I'm not 100% sure why).
That said, in my journey I did manage to find a way to control where SQLite puts its temporary files: the sqlite3_temp_directory global variable lets you define it. By default it's blank and defers to the OS, I think.
If you set that directory you can manually clear out any files whenever you wish.
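
For reference, here is a minimal C sketch of setting that variable before any connections are opened (the directory path is just an example; SQLite can free the string itself, so it should come from SQLite's own allocator):

    #include <sqlite3.h>

    /* Call once at startup, before any sqlite3_open() call. */
    void use_private_temp_dir(void) {
        /* SQLite may free this pointer later, so allocate it with sqlite3_mprintf(). */
        sqlite3_temp_directory = sqlite3_mprintf("%s", "/tmp/sqlite-test-tmp");
    }

With the temp files landing in a directory the test harness owns, clearing them between test runs is just a matter of deleting that directory's contents.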

Updating code on production server when using Go

When I develop and update files on a production server with PHP, I just copy the files on the fly and everything seems to work without interrupting the server.
But to update the code of a Go application I would need to kill the server, copy the src files to the server, run go install, and then start the server again. This would interrupt the service, and if I do it quite often it is going to look very bad to the users of the service.
How can I update files without downtime when using Go with Go's http server?
PHP is an interpreted language, which means you provide your code in source format and the PHP interpreter will read it and execute it (it may create a more compact binary form so that it doesn't have to analyze the source again when needed).
Go is a compiled language: it compiles into a native executable binary. Going further, that binary is statically linked, which means all the code and libraries your app refers to are compiled and linked when the executable is created. This implies you can't just "drop in" new Go code into a running application.
You have to stop your running application and start the new version. You can however minimize the downtime: only stop the running application when the new version of the executable is already created and ready to be run. You may choose to compile it on a remote machine and upload the binary to the server, or upload the source and compile it on the server, it doesn't matter.
With this you could decrease the downtime to a maximum of a few seconds, which your users won't notice. Also, you shouldn't update every hour; you can't really achieve significant updates in just an hour of coding. You could schedule updates daily (or even less frequently), and you could schedule them for hours when your traffic is low.
If even a few seconds of downtime is not acceptable to you, then you should look for platforms which handle this for you automatically without any downtime. Check out Google App Engine - Go, for example.
The grace library will allow you to do graceful restarts without annoyance for your users: https://github.com/facebookgo/grace
That said, in my experience restarting Go applications is so quick that, unless you have a high-traffic website, it won't cause any trouble.
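
For completeness, here is a minimal sketch of using gracehttp from that library (port and handler are placeholders). After copying the new binary over the old path, sending SIGUSR2 to the running process makes it start the new executable, hand over the listening socket, and let in-flight requests finish before the old process exits:

    package main

    import (
        "fmt"
        "log"
        "net/http"

        "github.com/facebookgo/grace/gracehttp"
    )

    func main() {
        // gracehttp.Serve blocks until the server is shut down or replaced.
        err := gracehttp.Serve(&http.Server{
            Addr: ":8080",
            Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                fmt.Fprintln(w, "hello")
            }),
        })
        if err != nil {
            log.Fatal(err)
        }
    }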
First of all, don't do it in that order. Copy and install first. Then you could stop the old process and run the new one.
If you run multiple instances of your app, then you can do a rolling update, so that when you bounce one server, the other ones are still serving. A similar approach is to do blue-green deployments, which has the advantage that the code your active cluster is running is always homogeneous (whereas during a rolling deploy, you'll have a mixture until they've all rolled), and you can also do a blue-green deployment where you normally have only one instance of your app (whereas rolling requires more than one). It does however require you to have double the instances during the blue-green switch.
One thing you'll want to take into consideration is in-flight requests -- you may want to make sure that in-flight requests continue to go to old-code servers until they're finished.
You can also look into Platform-as-a-Service solutions that can automate a lot of this stuff for you, plus a whole lot more. That way you're not ssh'ing into production servers and copying files around manually. The 12 Factor App principles are always a good place to start when thinking about ops.

How should I implement a runonce script for Symfony 2?

I use Scrum methodology and deploy functionality in builds every sprint.
There is a need to perform various changes to the stored data (I mean data in the database and on the filesystem). I'd like to implement these as PHP scripts invoked from the console. But they should be executed only once, during deployment.
Is there any way to implement it through app/console without listing it in the list of registered Console commands? Or is there any other way to implement runonce scripts?
DoctrineMigrations covers part of my requirements, but it's hard to implement complex changes to the model with it, and it does not cover changes to files on the filesystem.
I don't think Symfony has a facility for that, and besides, hiding the command is not the same as securing the command.
Instead, I would make the script determine if it has been run already (could be as simple as adding a version number to a file and checking that number before running) and stop if it detects it has already run before.
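
A rough sketch of that idea as a plain PHP script run from the console during deployment (the marker file path and version number are arbitrary, and the directory for the marker file is assumed to exist):

    <?php
    // runonce.php - executed manually as part of the deployment.
    $version    = 42;                                  // bump this for each deployment that needs the script
    $markerFile = __DIR__ . '/app/data/runonce.version';

    $lastRun = is_file($markerFile) ? (int) file_get_contents($markerFile) : 0;
    if ($lastRun >= $version) {
        echo "Already run for version $version, skipping.\n";
        exit(0);
    }

    // ... perform the one-off database/filesystem changes here ...

    file_put_contents($markerFile, (string) $version);
    echo "Run-once tasks for version $version completed.\n";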

Keeping data in memory and persisting on Application_Disposed

I'm building a website (for personal use, low load) and instead of using an Access or MySQL database for data storage I'm thinking of having one XML file that I load and parse on Application_Start and then keep in memory (in static objects). The website then does reads and writes against these in-memory objects, and I will finally persist all data to the XML file on Application_Disposed.
I'm aware that I'll need to make reading/writing thread-safe, but besides that, does anyone see any problem using this approach?
Yes, I see a big problem: there are a number of reasons why the whole application might die without you knowing about it, and without your data being saved to that XML file.
You'll find Application_Disposed can fire multiple times (so it might not be the best place to dispose your DI containers, etc.), whereas Application_End will only fire once (you can prove this by adding logging).
https://bytes.com/topic/asp-net/answers/561768-event-sequence
https://learn.microsoft.com/en-us/previous-versions/ms178473(v=vs.140)?redirectedfrom=MSDN
It seems VS2019's IIS Express doesn't call Application_End as it should when you stop debugging, but full IIS will.

What would be a good approach to use Thread or ThreadPool in ASP.NET?

I am developing a component to create bespoke BulkImport functionality in ASP.NET. Under the hood this component will use the SqlBulkCopy class. There will be different file formats. The file is imported into an intermediate table and is then transformed into the required tables. The uploaded file can be big and might take a couple of minutes to process. I would like to use a Thread or the ThreadPool to do the asynchronous processing. Can you please suggest a good approach to handle this problem?
Note: this is an internal application which would be used by at most 2-5 people at any given time.
The main problem with firing up additional threads in ASP.NET is that the framework can rip the AppDomain out from under you (for example, if someone edits the web.config or IIS decides to recycle the worker process). If that happens, your worker thread is also terminated and you can't really control it.
If you don't think that'll be a problem, then it doesn't really matter, but I would suggest that a better solution would probably be to fire up the work in a separate process that you can then monitor from your web application.
That way, if someone edits the web config, or IIS recycles the worker process, the import process is running independently and you don't have to worry.
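
A minimal sketch of that idea (the worker executable path is hypothetical): the web application only launches a separate console process that does the SqlBulkCopy work, so the import survives AppDomain unloads and app-pool recycles:

    using System.Diagnostics;

    public static class ImportLauncher
    {
        public static int Start(string uploadedFilePath)
        {
            // Run the long import outside the ASP.NET worker process.
            var process = Process.Start(new ProcessStartInfo
            {
                FileName = @"C:\Tools\BulkImport.exe", // hypothetical console app doing the import
                Arguments = "\"" + uploadedFilePath + "\"",
                UseShellExecute = false,
                CreateNoWindow = true
            });

            return process.Id; // store this if the web app needs to monitor the process
        }
    }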
Here is my approach:
Ask the user to paste in the UNC path to the file. Save this path into a table in SQL.
Write a Windows service to check for new entries in the path table. When it finds a new entry, start processing the file. Update the table periodically with the progress and check the flags (below).
Have an AJAX callback in the browser that checks the table for progress, returning a percentage to the client. Allow the client to stop the process by adding some flags to the table.
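
A rough sketch of the polling loop inside such a Windows service (the table and column names are invented; the actual SqlBulkCopy work is elided):

    using System;
    using System.Data.SqlClient;
    using System.Threading;

    public class ImportPoller
    {
        private readonly string connectionString;

        public ImportPoller(string connectionString)
        {
            this.connectionString = connectionString;
        }

        // Called from the service's worker thread.
        public void Poll(CancellationToken token)
        {
            while (!token.IsCancellationRequested)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    var cmd = new SqlCommand(
                        "SELECT TOP 1 Id, UncPath FROM ImportQueue WHERE Status = 'New'", conn);
                    using (var reader = cmd.ExecuteReader())
                    {
                        if (reader.Read())
                        {
                            int id = reader.GetInt32(0);
                            string path = reader.GetString(1);
                            // Import the file with SqlBulkCopy here, updating
                            // ImportQueue.Progress and checking a Cancelled flag as you go.
                        }
                    }
                }
                Thread.Sleep(TimeSpan.FromSeconds(5)); // poll every few seconds
            }
        }
    }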
