Symfony3 Translation System Cache to Memcache

So far I have found a bundle that uses memcache as a translation source, but I haven't found anything on how to move the translation cache from disk storage to a service or directly to memcache.
I have also looked at the framework's configuration options, but I haven't found anything useful there (or I'm too stupid to use Google ^^).
I need to move the cache files to memcache for deployment reasons.
I'm running multiple application servers.
Storing the translation cache and similar data on disk is slow and painful when I deploy software (the PHP processes on the production app servers need to be restarted). It would make my life easier if that data were stored in memcache, because I could simply flush memcache to reset the translations.
Did anyone ever try this?

What first comes to mind is to write a console command that uses a Loader (for example, \Symfony\Component\Translation\Loader\XliffFileLoader) together with a Dumper (something implementing \Symfony\Component\Translation\Dumper\DumperInterface, like a MemcacheDumper from that bundle).
In your command you would load translations from one source with the loader (in the form of a \Symfony\Component\Translation\MessageCatalogue) and then dump them into the other.
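A minimal sketch of that command body, assuming Memcached as the backend (the key layout is an invention for illustration; a MemcacheDumper would wrap the same few lines behind DumperInterface):

use Symfony\Component\Translation\Loader\XliffFileLoader;

$loader = new XliffFileLoader();
// load() returns a \Symfony\Component\Translation\MessageCatalogue
$catalogue = $loader->load('app/Resources/translations/messages.de.xlf', 'de');

$memcached = new \Memcached();
$memcached->addServer('127.0.0.1', 11211);

// one key per message domain, e.g. "translations.de.messages"
foreach ($catalogue->getDomains() as $domain) {
    $key = sprintf('translations.%s.%s', $catalogue->getLocale(), $domain);
    $memcached->set($key, serialize($catalogue->all($domain)));
}

A matching loader (or the bundle you found) would then read those keys back into the translator at runtime, so flushing memcache resets the translations.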

Related

JxBrowser: (why) can I (not) use URI path for cache directories?

I evaluated JxBrowser a short while ago. The following questions came to mind:
Can I use Java URIs to "reroute" all temporary files from the underlying Chromium engine through a custom FileSystemProvider like encFs4J?
The reason I want to do that is to comply with data privacy laws. Since browsers cannot be forced by a web application to clear their cache or to store temporary files in a safe manner, I thought I could use JxBrowser for this. If I can handle all files myself, I can do some crypto magic so that (almost) no one has access to the data besides my application.
There is an API to define the directories via BrowserContextParams.
However, only absolute paths are allowed. URIs are not accepted.
Instead of doing
BrowserContext context = new BrowserContext(new BrowserContextParams("C:\\Chromium\\Data"));
Browser browser1 = new Browser(context);
I would like to do something like
BrowserContext context = new BrowserContext(new BrowserContextParams(new URI("enc+file:///C:/Chromium/Data")));
Browser browser1 = new Browser(context);
Does anyone know of a way to tap into the file handling routines of a process like JxBrowser? Can I somehow add this functionality like a wrapper around it?
I considered using something like VeraCrypt for this, but it is no good in terms of usability, since you have to install virtual hard drive drivers. That is overkill for a rather simple issue.
The underlying Chromium engine in JxBrowser does not use the Java IO API to access files. Only a path string to the data directory is passed to the Chromium engine, and it decides by itself how to handle all IO operations.
There is a mode in Chromium called incognito. In that mode all the files, including cookies, cache, and history, are stored in memory; nothing is stored on the hard drive, so once you close the application, all the data is cleared automatically. If this meets your requirements, we could investigate how to enable incognito mode in JxBrowser.
I will accept Artem's answer to the original question. Incognito / private browser sessions, as long as they do not store anything on the hard disk, would be a perfect and simple solution.
Furthermore, I want to share my research on this topic. The following answer is not related to JxBrowser specifically but to any third-party applications and libraries which do not support URI paths or which require additional safeguarding of (temporary) files.
Option 1: RamDisk
needed: kernel mode driver for ram disk
privileges: admin once (to install the driver)
usability: might be seamless, if the application can handle the RAM disk in code (not researched)
Install a RAM disk which can "catch" the files. If the RAM disk only persists while the application is running, it is already cleaned up automatically. (not researched for feasibility)
With a custom RAM disk implementation one could perform additional steps.
Option 2: Virtual File System, e.g. VeraCrypt
needed: VeraCrypt, kernel mode driver
privileges: admin once (to install the driver)
usability: user has to mount container manually before using the application
Due to usability issues this was not further researched.
Option 3: embedded SMB server with local share
needed: SMB server implementation (e.g. JVLAN for Java), creating a server and share in code
privileges: user (a port like 1445 can be used under Linux, etc.)
usability: seamless for the user, but quite a complicated solution for a simple issue
Steps: start an SMB server in code, add a share and user authentication (optional), mount the share to a local drive (Windows) or mount point (Linux), and use an absolute path to access the files on the locally mounted share. If the application crashes, the volatile / in-memory key for the "real" file encryption of the SMB server is lost and the files are safe from other eyes.
This option also has more potential, like clearing files once they have been read, controlling access by third-party apps, and many more, even freakier, ideas.

Move the Symfony2 Cache directory off the filesystem?

I'm looking to remove the dependency on the filesystem for my Symfony2 cache directory (app/cache/*), and I don't see a clear path to doing that by modifying services or anything of that nature.
Ultimately, I'm attempting to use Symfony on a filesystem that cannot be altered. In the past I've been able to re-map twig templates and intl files to the database, and in theory this should be possible with the cache (and it already is with the logs).
Is there a way I can make the framework use another service like Redis, a database, or even just keep it in RAM for the life of the script?
I've also attempted to run the cache:warmup command; however, writing to the filesystem still happens even with the cache pre-generated.
The parameter used by everything is defined in the framework as kernel.cache_dir.
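If relocating the directory is enough, the usual approach is to override getCacheDir() (and getLogDir()) in your kernel. Here is a minimal sketch pointing them at a memory-backed tmpfs path (the /dev/shm path is an example for Linux; adjust to your system):

// app/AppKernel.php (sketch)
use Symfony\Component\HttpKernel\Kernel;
use Symfony\Component\Config\Loader\LoaderInterface;

class AppKernel extends Kernel
{
    public function getCacheDir()
    {
        // /dev/shm is RAM-backed on most Linux systems, so the cache
        // never touches the regular filesystem
        return '/dev/shm/myapp/cache/'.$this->environment;
    }

    public function getLogDir()
    {
        return '/dev/shm/myapp/logs/'.$this->environment;
    }

    public function registerBundles()
    {
        return array(/* your bundles */);
    }

    public function registerContainerConfiguration(LoaderInterface $loader)
    {
        $loader->load(__DIR__.'/config/config_'.$this->environment.'.yml');
    }
}

This keeps the stock cache code but satisfies the "RAM for the life of the script" case; swapping in Redis or a database would mean replacing the cache warmers and writers themselves.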

Synchronizing Plone 4 sites

I'm using Plone 4 for my sites, and I was wondering if there is a way to synchronize two Plone sites, i.e. to be able to synchronize my development site with my production site.
I have looked at the ZSyncer product, and it appears it is no longer maintained. Besides, the last version is not compatible with Plone 4.
I am thinking of writing a custom script that will handle exporting the Data.fs files and the src files as explained in these two articles:
Copying a remote site database
Copying a Plone site
Is there a better way of synchronizing two plone sites as described by my use case above?
For keeping the code synchronized, you want collective.hostout
For the database, use collective.recipe.backup - you could probably also use hostout to import the backups
Not sure if this solution will fit all your needs, but I use DemoStorage, which has been built into ZODB since version 3.9 (which Plone 4 uses).
You set up DemoStorage on the development instance and use the Data.fs from production. All changes are stored in memory or in a separate file (depending on how you configure it), so changes made in dev will not be visible on production. If you have both instances on the same server, you can use the Data.fs directly (without copying it), so it will always be synchronized.
To configure it you have to modify your buildout. See: https://pypi.python.org/pypi/plone.recipe.zope2instance#advanced-options
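For example, a sketch of the relevant buildout part, assuming the demo-storage and file-storage options documented on that page (the instance name and paths are placeholders):

[instance-dev]
recipe = plone.recipe.zope2instance
user = admin:admin
http-address = 8081
# read the production Data.fs directly; all changes stay in memory
file-storage = /path/to/production/var/filestorage/Data.fs
demo-storage = on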
When transactions on prod and on dev change the same objects (it happens occasionally), DemoStorage can raise errors. Then you just have to reboot the dev instance (if you keep changes in memory) or remove the file with the changes and then reboot.

Where to put a new ASP.NET website?

Where's the best place for a production ASP.NET application? I mean a place that needs the least permission manipulation on folders, and that is probably the experts' choice.
Under C:\inetpub\wwwroot, C:\inetpub, or elsewhere?
In the development/test phases I usually put it under C:\inetpub\wwwroot and create a new web application without setting bindings. But for the production version, with bindings, I'm not sure where the right place is.
You can put it anywhere you like; the key thing is to ensure that the app pool it is running under is set to run as a low-privileged user (like NT AUTHORITY\NETWORK SERVICE), then ensure that user has Read (and possibly Browse, if you want it) permissions on the folder you put your web app in. Very seldom (if ever) will the user need Write or Modify permissions on the folder.
And on a new system I had a lot of problems modifying batch files and setting permissions.
Setting permissions should not be a problem; you should set the same basic permissions I mentioned above for the user you want to run the app pool as. You can use PowerShell or WMI for this, and you should use the same permissions no matter what folder you install into.
You could always wrap all this up into an installer, then it can be as simple as hitting Next.. Next... Finish... in an installer wizard to set up your website on any machine. Doing this in an installer also gives you some certainty that nothing has been missed.
Personally, I have a 'Development' folder on my D: drive, which is then subdivided into different categories depending on the work. I generally don't use the inetpub directory, and any permission issues I come across I just fix directly on the relevant folder within my own development structure.
On the production environments I've used in the past, we've generally done the same thing. Mainly to help backup scenarios, really, but also because there's no strict need to use the default IIS directories; you're free to structure things how you like.
Personally, I always create a new folder (in the root of a drive) called WebSites. I then make sure it has the appropriate permissions for the website process(es) (aka App Pools).
eg.
C:\
 |_ WebSites
     |_ www.Foo.com
     |_ www.Bah.com
It also makes it easier to manage because you don't have to hunt through the folder structures to find any/all websites.
But technically, it can be (more or less) anywhere - just needs to have the correct permissions set.
Bonus Answer
I also remove the Default Website from IIS, which in effect means I can also delete c:\inetpub\wwwroot.
You can put the website anywhere on the server's hard disk. Just make sure it is a secure folder; I also recommend not putting it on the same drive as the OS, in case that drive fails and you need to format it.
C:\inetpub\wwwroot and C:\inetpub are just the default places, nothing more.
It really depends on how the production server is configured and how the operations team likes to run things over there. Typically we set up a second "data" drive on servers for a few reasons:
a) Back in the old days, there were a lot of canonicalization attacks where the attacker would try to navigate from c:\inetpub to c:\winnt\cmd.exe. Putting things on a different drive prevented this sort of thing.
b) Recovery: if the OS gets hosed, you can pretty easily reinstall/reimage or move the data disks to another box and get things stood up fast.
c) It's typically a lot easier to do things like swap the non-OS disk if you need more disk space or faster disks or whatever.
Basically, staying off the OS drive is a good idea, though virtualization and modern deployment tools make a lot of this matter less.

SCM for ASP.net

As part of my overall development practices review I'm looking at how best to streamline and automate our ASP.net web development practices.
At the moment, our process goes something like this:
Designer builds frontend as static HTML/CSS on a network share. This gets tweaked until signed off. (e.g. http://myserver/acmesite_design)
Once signed off, developer takes over and copies over frontend HTML/CSS to a new directory on the same server (e.g. http://myserver/acmesite_development)
Multiple developers work on local copies until the project is complete.
Developer publishes code to an external publicly accessible server for a client to review/signoff.
Edits made locally based on feedback.
Republish to external server.
Signoff
Developer publishes to live public server
What goes wrong? Lots of things!
Version Control — this is obviously a must and is being introduced
Configuration errors — many, many times there are environment-specific paths and variables (such as DB names, image upload directories, web server paths, etc.) which incorrectly get copied from local to staging to live, with very embarrassing results.
I'm pretty confident I've got no. 1 under control. What about configuration management? Does anyone have advice on how best to manage an application's structure within ASP.NET apps to minimize these kinds of problems?
I found that using SVN, NAnt, and NUnit with CruiseControl.NET solves a lot of the issues you describe. I think it works well for small groups, and it's all free. You just need to learn how to use them.
CruiseControl.net helps you put together builds and continuous integration.
Use NAnt or MSBuild to do different environment builds (DEV, TEST, PROD, etc).
http://confluence.public.thoughtworks.org/display/CCNET/Welcome+to+CruiseControl.NET
You got the most important part right. Use version control. Subversion is a good choice.
I usually store configuration along with the site; i.e. when coding a PHP-based site I have a file named config.php-dist. If you want the site to work at all, you'll have to copy it and edit in all the required parameters (this avoids storing passwords in version control). The -dist file should have reasonable defaults.
Upload directories should be relative if possible; actually, all directories should be relative. I'm not experienced in ASP.NET, but if it's anything like PHP, the current directory is always the directory of the file being requested. If you channel all requests through a single file (i.e. index.asp), then this can even be found programmatically. Or you could find it programmatically by using the equivalent of dirname(__FILE__) in your configuration file.
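A minimal sketch of such a -dist file in PHP (the parameter names and defaults are illustrative):

<?php
// config.php-dist -- copy to config.php and fill in real values.
// The real config.php stays out of version control, so no passwords
// are committed; __DIR__ keeps paths relative to the project itself.

return array(
    'db_host'    => 'localhost',
    'db_name'    => 'acmesite',
    'db_user'    => 'CHANGE_ME',
    'db_pass'    => 'CHANGE_ME',
    'upload_dir' => __DIR__ . '/uploads', // relative to this file
);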
I also recommend installing IIS (or whatever web server you are using) on all development workstations (including the designers'). It makes life easier, as no one can step on anyone else's toes. All one has to do is add test hosts to the hosts file (\windows\system32\drivers\etc\hosts, IIRC) in addition to adding a site to the local IIS. This plays well with version control (check out, add the site to IIS and the hosts file, edit edit edit, commit).
One thing that really helps is making sure you keep your paths relative where you can and centralize them where you can't. When I've been working with ASP.NET, I have tended to use web.config to store any configuration and path-related data that can't be found programmatically. It is quite possible to find information like your current application path programmatically through the Request object; it's worth looking in some detail at what the environment makes available to you.
One way to make sure you don't end up with something that depends on the path name is to have a continuous integration server execute your test suite against your application. Each time this happens, you create a random file path. As soon as someone introduces a dependency on the file path, the build will fail.
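A sketch of that idea as a PHP test bootstrap (the directory layout and the bootstrap hook are assumptions about your suite):

<?php
// tests/bootstrap.php (sketch) -- copy the application into a freshly
// generated directory before the tests run, so any hard-coded absolute
// path fails the build instead of surfacing in production.

$source = realpath(__DIR__ . '/..');
$target = sys_get_temp_dir() . '/build-' . bin2hex(random_bytes(8));
mkdir($target, 0755, true);

$items = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($source, FilesystemIterator::SKIP_DOTS),
    RecursiveIteratorIterator::SELF_FIRST
);
foreach ($items as $item) {
    $dest = $target . '/' . substr($item->getPathname(), strlen($source) + 1);
    $item->isDir() ? mkdir($dest, 0755, true) : copy($item->getPathname(), $dest);
}

chdir($target); // run the suite from the randomized location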
