symfony cache dir /dev/shm in production?

You can define a custom cache dir in Kernel.php and use shared memory via /dev/shm:
public function getCacheDir(): string
{
    return '/dev/shm/sf-cache/'.$this->environment;
}
This is sometimes recommended when developing in virtual machines on non-Linux hosts (e.g. Vagrant or Docker on macOS/Windows), as it significantly speeds things up there.
Could you also use that in production as a faster filesystem alternative?
If not, why? What are potential downsides?
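One way to hedge this in code, as a minimal sketch assuming the stock Kernel layout: keep /dev/shm for non-production environments only, since /dev/shm is an in-memory tmpfs whose contents vanish on every reboot and are local to a single machine:

public function getCacheDir(): string
{
    // /dev/shm is tmpfs: fast, but cleared on every reboot and not
    // shared between machines, which matters for the questions above.
    if ('prod' !== $this->environment) {
        return '/dev/shm/sf-cache/'.$this->environment;
    }

    // fall back to Symfony's default var/cache/<env> in production
    return parent::getCacheDir();
}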

Related

Pointing SLS files to salt directory

I'm a little confused about using file_roots. Currently we set up our salt directory in the following way.
/srv/salt/<folder-connected-to-git>: contains all the folders we want to use for our salt build, like win (repo / repo-ng), scripts, states, etc. But it doesn't have our binaries folder (which holds the installers for programs).
The master config file uses the following:
file_roots:
  base:
    - /srv/salt/<folder-connected-to-git>
So when setting up SLS package installers we would use salt:// to point to the base folder. Since the binaries folder is outside that path (in /srv/salt), I gave the absolute path (i.e. /srv/salt/binaries). It seems that when running it, salt doesn't recognize this path as an absolute path (maybe it's looking for it on the minion instead).
Is there a way to point to a directory outside of base? If not, I could change my file_roots to:
file_roots:
  base:
    - /srv/salt/
  prod:
    - /srv/salt/<git-folder>
But then, would salt look for the repo (to cache to the minion) inside /srv/salt/ instead of /srv/salt/<git-folder>? Could I change what salt:// points to without changing file_roots?
There is the built-in fileserver that works together with salt '*' cp.get_file or salt '*' cp.get_dir. To use this with file_roots, you might want to create a separate environment for the binaries.
I am not sure if it is intended to be used like that, especially the file_roots environments. But I recently learned that environments are made as flexible as possible, to enable you to use them for whatever you might need.
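A minimal sketch of that idea in the master config, assuming the binaries stay in /srv/salt/binaries (the environment name and paths are placeholders):

file_roots:
  base:
    - /srv/salt/<folder-connected-to-git>
  binaries:
    - /srv/salt/binaries

A minion can then fetch an installer from that environment explicitly, for example:

salt '*' cp.get_file salt://installer.exe 'C:\temp\installer.exe' saltenv=binaries

(Older Salt releases spell the keyword env instead of saltenv.)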
You might want to have a look at gitfs, which allows you to mount git repositories into your state tree. This would make the extra environment unnecessary. We use this approach for formulas.
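A hedged sketch of the gitfs side of the master config (the remote URL is a placeholder, and gitfs additionally needs a supported Python git library such as GitPython or pygit2 on the master):

fileserver_backend:
  - roots
  - gitfs   # older Salt releases call this backend 'git'

gitfs_remotes:
  - https://example.com/your/formula-repo.git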
We currently solve this with a private network and a webserver that makes larger files available to all minions inside this network. This works quite well, as all our minions are connected to this private network. Running such a network forces you to keep an eye on securing the minions' and the master's communication inside it. We use local firewalls on all connected minions to achieve this.
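The file server itself can be almost anything; as a minimal sketch with nginx (the answer above does not name specific software, so the listen address and root are placeholders):

server {
    listen 10.0.0.1:80;   # bind to the private network only
    root /srv/binaries;   # the large files served to the minions
    autoindex on;         # a plain directory listing is enough here
}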

Symfony3 Translation System Cache to Memcache

So far I have found a bundle that uses memcache as a translation source, but I haven't found anything on how to move the translation cache from disk storage to a service or directly to memcache.
I have also looked at the options for the framework, but I haven't found anything useful there (or I'm too stupid to use Google ^^).
I need to move the cache files to memcache for deployment reasons.
I'm running multiple application servers.
Storing the translation cache etc. on disk is slow and painful when I deploy software (the PHP processes on the production app servers need to be restarted). It would make my life easier if that stuff were stored in memcache, as I could simply flush memcache to reset the translations.
Did anyone ever try this?
What first comes to mind is to make a console command that would use one loader (for example, \Symfony\Component\Translation\Loader\XliffFileLoader) and then another dumper (something implementing \Symfony\Component\Translation\Dumper\DumperInterface from that bundle, like a MemcacheDumper).
In your command you would load translations from one source via the loader (in the form of a \Symfony\Component\Translation\MessageCatalogue) and then dump them into the other.
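A rough sketch of that command's core, assuming a hypothetical MemcacheDumper provided by such a bundle (XliffFileLoader, MessageCatalogue and DumperInterface are real Translation component classes; the file path and client wiring are placeholders):

use Symfony\Component\Translation\Loader\XliffFileLoader;

// Load one XLIFF resource into a MessageCatalogue for the 'en' locale.
$loader = new XliffFileLoader();
$catalogue = $loader->load('app/Resources/translations/messages.en.xlf', 'en', 'messages');

// Dump it to memcache. MemcacheDumper is an assumed class implementing
// \Symfony\Component\Translation\Dumper\DumperInterface.
$dumper = new MemcacheDumper($memcacheClient);
$dumper->dump($catalogue);

Repeat per locale and domain, then point the translator at the memcache-backed loader instead of the filesystem cache.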

JxBrowser: (why) can I (not) use URI path for cache directories?

I evaluated JxBrowser a short while ago. The following questions came to mind:
Can I use Java URIs to "reroute" all temporary files from the underlying Chromium engine through a custom FileSystemProvider like encFs4J?
The reason I want to do that is to comply with data privacy laws. Since browsers cannot be forced by a web application to clear their cache or to store temporary files in a safe manner, I thought I could use JxBrowser for this. If I can handle all files myself, I can do some crypto magic so that (almost) no one has access to the data besides my application.
There is an API to define the directories via BrowserContextParams.
However, only absolute paths are allowed. URIs are not accepted.
Instead of doing
BrowserContext context = new BrowserContext(new BrowserContextParams("C:\\Chromium\\Data"));
Browser browser1 = new Browser(context);
I would like to do something like
BrowserContext context = new BrowserContext(new BrowserContextParams(new URI("enc+file:///C:/Chromium/Data")));
Browser browser1 = new Browser(context);
Does anyone know of a way to tap into the file handling routines of a process like JxBrowser? Can I somehow add this functionality like a wrapper around it?
I considered using something like VeraCrypt for this. But it is no good in terms of usability, since you have to install virtual hard drive drivers. That is overkill for a rather simple issue.
The underlying Chromium engine in JxBrowser does not use the Java IO API to access files. Only a path string to the data directory is passed to the Chromium engine, and it decides by itself how to handle all IO operations.
There is a mode in Chromium called incognito. In that mode all files, including cookies, cache, and history, are stored in memory; nothing is stored on the hard drive, so once you close the application, all data is cleared automatically. If this meets your requirements we could investigate how to enable incognito mode in JxBrowser.
I will accept Artem's answer to the original question. Incognito / private browser sessions, as long as they do not store anything on the hard disk, would be a perfect and simple solution.
Furthermore, I want to share my research on this topic. The following answer is not related to JxBrowser but to any 3rd party applications and libraries which do not support URI paths or require additional safeguarding of (temporary) files.
Option 1: RAM disk
needed: kernel mode driver for the RAM disk
privileges: admin once (to install the driver)
usability: might be seamless if the application can handle the RAM disk by code (not researched)
Install a RAM disk which can "catch" the files. If the RAM disk only persists while the application is running, it is already cleaned up automatically. (not researched for feasibility)
With one's own RAM disk implementation one could perform additional steps.
Option 2: Virtual File System, e.g. VeraCrypt
needed: VeraCrypt, kernel mode driver
privileges: admin once (to install the driver)
usability: user has to mount container manually before using the application
Due to usability issues this was not further researched.
Option 3: embedded SMB server with local share
needed: SMB server implementation (e.g. JVLAN for Java), creating a server and share in code
privileges: user (port 1445 can be used under Linux etc.)
usability: seamless for the user, but quite a complicated solution for a simple issue
Steps: start an SMB server by code, add a share and user authentication (optional), mount the share to a local drive (Windows) or mount point (Linux), and use an absolute path to access the files on the locally mounted share. If the application crashes, the volatile / in-memory key for the "real" file encryption of the SMB server is lost and the files are safe from other eyes.
This option also has more potential, like clearing files once they have been read, controlling access by third-party apps and many more, even freakier, ideas.

Shared Folders in Xen Hypervisor

I recently started using the Xen hypervisor, migrating from VirtualBox. My host system is Ubuntu 15.04 and the guest is Windows 7. I wanted to know if there is any way to use shared folders, similar to VirtualBox?
Thanks
To share files, you need a shared filesystem. There are two main classes of these:
network filesystems: NFS, Samba, 9p, etc.
clustered filesystems: GFS, OCFS2, CXFS, etc. They're designed for SAN systems where several hosts access the same storage box. In the VM case, if you create a single partition accessible from several VMs you get exactly the same situation (a shared block device) and need the same solution.
What definitely won't work is to use a 'normal' filesystem (ext3/4, XFS, ReiserFS, FAT, HPFS, NTFS, etc.) on a shared partition, just like it won't work on a shared block device. Since every filesystem aggressively caches metadata to avoid rereading the disk on every access, a VM won't be 'notified' if another one modifies a directory, so it won't 'notice' any change. Worse, since the cached metadata is then no longer consistent with the content of the disk, any write will result in a heavily corrupted filesystem.
PS:
There was a project called XenFS which looked promising, but never reached a stable release.
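Since the guest in this question is Windows 7, the network-filesystem route in practice means running Samba on the Ubuntu host. A minimal sketch, where the share name, path, and the guest-visible host address are placeholders:

# /etc/samba/smb.conf on the Ubuntu host
[xenshare]
    path = /home/user/shared
    read only = no
    guest ok = yes

On the Windows 7 guest, the share can then be mapped over the VM network:

net use Z: \\192.168.1.10\xenshare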

Drupal & NFS Directory

I have 2 parallel Drupal web servers running (serving one Drupal instance together) and now I need to set up NFS. My experience with multiple Drupal servers is that each Drupal instance (server) uses its own aggregated JS + CSS files (stored in the sites/default/files/js and sites/default/files/css folders), which cannot be shared. (The files cannot be the same for both servers; each uses its own.)
Based on these issues, my questions are:
How does NFS actually work between multiple Drupal servers?
Which directories will need to be shared?
What will happen to the aggregated files?
What will happen to web users' upload paths and files? (Is any configuration needed in Drupal?)
Can anyone share their knowledge/experience, please?
You can definitely work with NFS and Drupal.
I do not understand why you do not want to share the files directory between both.
In fact you have two solutions:
1) Share the whole source tree, starting at the web directory root, or even higher if you have external directories for private files.
2) Share only the moving directories and keep the code base synchronised before and after any upgrade via some rsync commands. In this case you need to share between servers:
the files directories (project/www/site/default/files, project/www/site//files)
the private files directories (project/private) <-- it's an example
the PHP temporary upload path (project/tmp for example); check that both servers use the right folder (it's a PHP setting) and that this folder is shared.
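A minimal sketch of the corresponding NFS setup, assuming one box exports the shared directories and every web server mounts them (all paths and the network range are placeholders):

# /etc/exports on the NFS server
/project/www/site/default/files  192.168.0.0/24(rw,sync,no_subtree_check)
/project/private                 192.168.0.0/24(rw,sync,no_subtree_check)
/project/tmp                     192.168.0.0/24(rw,sync,no_subtree_check)

# on each web server, e.g. in /etc/fstab or manually:
mount -t nfs nfs-server:/project/www/site/default/files /project/www/site/default/files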
Before Drupal 7 I would have used solution 1; now the number of internal filesystem tree traversals that Drupal launches on many occasions makes it very slow on a slow filesystem (and NFS is usually quite slow). Using APC with all filesystem checks disabled (apc.stat, apc.stat_ctime, etc.) does not prevent Drupal from trying to access every file on your filesystem on many occasions. So solution 2 is to be preferred.
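For reference, a sketch of the php.ini switches meant above (apc.stat and apc.stat_ctime are the relevant APC directives; disabling them skips the per-request stat calls):

apc.stat = 0
apc.stat_ctime = 0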
I did not experience any problems with file aggregation with such installations.
